Last updated on
13 July 2021
This is a list of the available thesis topics, within the scope of my research interests, that may be undertaken by students about to complete an MSc in Computer Engineering, Computer Science, or a similar programme (e.g., the MSc in Embedded Computing Systems or the MSc in Computer Science and Networking jointly offered by the University of Pisa and Scuola Sant'Anna) who are interested in developing their MSc thesis project at the Real-Time Systems Laboratory (ReTiS) of Scuola Superiore Sant'Anna in Pisa.
If you are interested in one of the available topics, please send me an e-mail.
For a list of completed thesis projects, please refer to the dedicated page.
|Automata-based run-time verification of code in the Linux kernel|
Description: Linux is gaining popularity as an operating system in a number of time-critical and safety-critical domains, such as automotive and railways. However, one of the critical elements still obstructing its use in these scenarios is the complexity of its kernel, with millions of lines of code, which makes it quite difficult to obtain the necessary certifications.
This complexity may be tackled through formal methods; an increasingly promising area is run-time verification, where automata-based models of various excerpts of the code base can be composed and analyzed, verifying that the run-time behavior complies with those models.
This thesis proposal deals with realizing an open-source tool for describing automata and their composition, and integrating it with a framework for run-time verification of code that is being actively developed by Red Hat for the Linux kernel.
Requirements: The student should be fluent in the C/C++ programming languages. Some knowledge of, and experience with, Qt or other GUI toolkits is desirable. Students of an MSc degree in computer engineering or computer science are well suited to undertake this thesis project.
Benefits: The student will take a deep dive into a hot topic in the development of time-critical and safety-critical software, and have the chance to develop a key tool helping to improve an automata-based run-time verification toolchain for the Linux kernel.
Collaborations: The student will have the opportunity to be involved in state-of-the-art research activities carried out in the context of an ongoing international collaboration between Scuola Sant'Anna and Red Hat.
|Adaptive high-performance networking|
Description: High-performance networking primitives based on kernel bypass, such as DPDK, are attracting increasing interest among industry practitioners and academics, thanks to their ability to achieve higher throughput and lower latencies than traditional socket-based primitives, which require OS intervention for the transmission of each packet or batch.
However, the achievable performance strictly depends on how many CPUs on the platform are dedicated to the switching logic among the multiple entities that need to communicate. This logic becomes a critical part of the system, constituting a potential bottleneck for techniques of this kind. The resulting computational requirements, and the associated power consumption, may turn out to be excessive during periods in which the hosted services exhibit moderate workloads.
This thesis proposal deals with realizing an adaptive high-performance networking switch for DPDK, capable of dynamically switching among a number of operating modes, including the ability to instantiate additional packet-switching threads and remove them as needed, based on the instantaneous conditions of the system.
Requirements: The student should be fluent with socket-based networking primitives and the C programming language. Some knowledge of and experience with parallel programming is desirable. Computer engineering, computer science, and telecommunication engineering are all excellent backgrounds for undertaking an MSc thesis project on the proposed topics.
Benefits: The student will take a deep dive into efficient software engineering for high-performance networking switches, gaining practical, hands-on experience with some of the key and hottest technologies for the development of future data-intensive distributed software in the cloud and distributed computing industry.
Collaborations: The student will have the opportunity to be involved in state-of-the-art research activities carried out in the context of an international collaboration tackling some of the most important challenges in realizing high-performance networking services.
|Model-Driven Engineering with multi-core, GPU or FPGA acceleration|
Description: Model-Driven Engineering and Model-Based Design are gaining momentum in various embedded industry fields such as automotive, railroad, and aerospace. These techniques involve a number of tools that help system designers and software engineers carry out the whole software life-cycle of a component or application: from requirements specification to high-level architecture design, down to low-level component specification and the final implementation phases. The use of MDE/MBD techniques, also enriched by automated code-generation tools, promises to reduce the potential gap between the features and properties of the implemented system and those stated in the initial high-level specifications, including critical non-functional requirements concerning the performance and timeliness of the realized components.
However, the computational requirements of modern cyber-physical systems have grown enormously in the last decade, with growing interest in deploying complex robot control algorithms requiring on-line optimization, sophisticated computer vision algorithms for object recognition, trajectory detection, and forecasting, and machine learning and artificial intelligence techniques for data analysis and forecasting as required in predictive maintenance, towards the full potential of the so-called Industry 4.0 revolution. All of these algorithms need expensive vector and matrix operations that are conveniently accelerated through multi- and many-core general-purpose computing platforms, GP-GPU acceleration, or even FPGA acceleration. However, writing software capable of running on such a wide heterogeneity of hardware elements is quite cumbersome nowadays.
The AMPERE European Project is tackling these challenges, with a consortium featuring key industrial players in the field of high-performance software for automotive and railroad use-cases, like BOSCH and THALES, and renowned international research centers in the fields of high-performance computing, real-time, and energy-efficient systems, like the Barcelona Supercomputing Center, the RETIS of Scuola Superiore Sant'Anna in Pisa, ETH Zurich, and the ISEP engineering institute in Porto.
This thesis proposal deals with extending the open-source APP4MC plugin for Eclipse, which supports the AMALTHEA MDE methodology, for the specification of Runnables with either: a) multi-core acceleration via OpenMP; b) GPU acceleration via OpenCL; or c) FPGA acceleration via the FRED framework realized at the RETIS.
Requirements: The student should be familiar with modeling languages and frameworks such as UML or AUTOSAR, and fluent in programming in Java and C/C++. Some knowledge of and experience with parallel and real-time software programming is desirable. Computer engineering, computer science, and electronic engineering are all excellent backgrounds for undertaking an MSc thesis project on the proposed topics.
Benefits: The student will take a deep dive into efficient software engineering for parallel and heterogeneous hardware boards, gaining practical, hands-on experience with some of the key technologies for the development of future software components in the embedded industry.
Industrial collaborations: This thesis proposal is framed in the context of a long-standing industrial collaboration with Ericsson, Stockholm (Sweden).
|Artificial Intelligence techniques to support monitoring of infrastructures for Network Function Virtualization (NFV)|
The world of network operators is shifting away from the traditional paradigm of physical appliances to the novel one of software components deployed as virtual machines or containers, managed in an elastic way according to a private cloud paradigm.
This thesis project proposes to build tools based on Artificial Intelligence, in order to support such tasks as performance monitoring and troubleshooting, and capacity monitoring and planning, needed to operate an infrastructure for Network Function Virtualization, in the context of a Telecom operator.
Requirements: Strong programming skills in C/C++ and Python, fluency with Linux command-line tools and Bash shell scripting, and good knowledge of operating systems and virtualization.
Benefits: The student will have a good opportunity to refine his/her skills in the above fields while working on real data, and gain a unique experience with developing innovative technologies that promise to disrupt the world of cloud and NFV operations.
Industrial collaborations: This thesis proposal is framed in the context of an on-going industrial collaboration with Vodafone, involving the RETIS, INRETE, PERCRO and ICT-COISP labs of Scuola Sant'Anna, as well as multiple international sites of the virtual infrastructure capacity management team of Vodafone.
Reference thesis mentors: Tommaso Cucinotta, Marco Vannucci, Luca Valcarenghi.
|Mechanisms for efficient communications among containers in Cloud Computing and NFV|
More and more software components and services are nowadays deployed over shared infrastructures, either as available at a public cloud provider or in-house within private cloud data centres. In this context, OS-level virtualization mechanisms, such as Linux Containers (LXC), Docker, or others, are growing in demand and popularity as deployment and isolation mechanisms, thanks to their increased efficiency in resource usage when compared with traditional machine virtualization techniques. Containers are becoming a fundamental brick in novel architectures for distributed fault-tolerant components, which are increasingly based on micro-services: a development trend where monolithic software is split into a multitude of smaller services, which can be independently designed, developed, deployed, and scaled out as collections of containers, enhancing the reliability of the overall solution and adding a higher degree of flexibility in the management of the underlying physical resources needed at run-time.
Current middleware solutions for communications among containers involve an extensive use of networking protocols, often based on TCP/IP, HTTP, XML-RPC, JSON-RPC, SOAP, or others, to let different container environments communicate with each other, often leading to an excess of overheads. The purpose of this thesis proposal is to investigate more efficient mechanisms, particularly for services that end up co-located on the same physical host, with a use-case focused on either distributed multimedia processing or virtualized network functions in an NFV infrastructure.
Requirements: Strong programming skills in C/C++ and Python, solid knowledge of concurrent programming and OS primitives for inter-process communications (IPC) and synchronization.
Benefits: The student will have a good opportunity to refine his/her skills in the above fields, and gain a unique experience with developing distributed services over shared physical infrastructures, building a practical experience on advanced OS concepts, which are fundamental in the ICT (Information and Communications Technology) industry.
Industrial collaborations: This thesis proposal is framed in the context of a long-standing industrial collaboration with Ericsson, Stockholm (Sweden).
|Fault-tolerant replication log with real-time performance and high reliability|
NoSQL database services are gaining momentum in cloud and distributed computing as a key technology enabling scalable and real-time applications to store and retrieve data according to precise timing, consistency, and availability requirements (which can be formalized in a Service-Level Agreement, SLA).
A key component of such a system is the replication log, guaranteeing a consistent view of the sequence of operations to perform on each data object. Realizing a fault-tolerant replication log with high availability and consistency, yet predictable performance, presents a variety of technical challenges spanning software engineering, concurrent programming, operating systems, and kernel internals, including CPU and disk scheduling.
In this thesis, we propose the design and realization of a fault-tolerant, real-time replication log with minimum functionality.
Requirements: The student shall have strong programming skills in C/C++ and/or Java, experience with concurrent/multi-threaded programming, solid knowledge and understanding of computer architectures and their performance implications, operating systems internals and Linux, and be familiar with developing distributed software.
Benefits: The student will have a good opportunity to refine his/her skills in the above fields, and gain a unique experience with developing distributed, fault-tolerant, real-time software components, which are fundamental in the ICT (Information and Communications Technology) industry.
|Improvements to the SCHED_DEADLINE Linux process scheduler for real-time multimedia|
The Linux kernel has recently been enriched with SCHED_DEADLINE, an EDF-based process scheduler that is particularly promising for real-time and multimedia workloads. The scheduler exhibits a minimum set of features, but several extensions are possible for various use-cases. In this project, the student will design and realize extensions suitable to support a specific multimedia-oriented use-case (e.g., using the JACK architecture for low-latency audio, or the new API for low-latency audio processing on Android), and will adapt user-space application components to take advantage of the enriched scheduler.
Requirements: The student shall have strong programming skills in C/C++, experience with concurrent/multi-threaded programming, solid knowledge and understanding of computer architectures and their performance implications, operating systems internals and Linux, and be familiar with developing kernel-level software.
Benefits: The student will have a good opportunity to refine his/her skills in the above fields, and gain a unique experience with developing real-time multimedia-oriented systems.
Industrial collaborations: In this area, we have long-standing industrial collaborations with Arm (Cambridge, UK) and Red Hat.
|Real-time spectrum analyzer for audio signals empowered by Artificial Intelligence|
The project consists in realizing a spectrum analyzer for audio signals that applies neural networks to recognize common sound patterns. The project may take various directions depending on the interests and skills of the candidate. For example, the software might recognize the tones of notes played by an instrument (realizing a real-time sound-to-MIDI component), it might recognize different sound types or sound patterns, or it might even venture into the land of voice recognition. The project might be realized as a Qt or GNOME desktop application, using the JACK framework for low-latency audio or the Advanced Linux Sound Architecture (ALSA) sound library on Linux, or it might be realized as an Android application for smartphones and tablets using the new API for low-latency audio processing on Android. For the recognition of sounds and/or sound patterns, the project might rely on machine learning, neural networks, and/or traditional signal-processing techniques.
Requirements: The student shall be fluent in C/C++ and/or Java programming and be familiar with the development of applications with a Graphical User Interface (GUI).
Benefits: The student will gain insightful knowledge about how to build real-time audio-processing applications, enhanced with a GUI, either on desktop or Android systems.
|Temporal predictability of distributed, virtualized real-time applications|
Over the last few years, virtualization technologies have established themselves as an effective solution for providing even complex software services to distributed applications. These technologies abstract away the physical machine on which computations take place, creating a set of virtual machines (VMs) and thus allowing more than one operating system (with its applications) to run on the same physical machine. Unfortunately, however, currently existing virtualization technologies are often inadequate for supporting applications with timing constraints, and cannot stably guarantee predefined levels of quality of service to the end user. Nowadays, many distributed applications require bounded and predictable response times in order to deliver their services correctly: for example, applications for virtual reality, telepresence, or on-line collaboration in general, which need to acquire, process, and display data with predictable timing.
The problem of guaranteeing a sufficient amount of resources, at the right temporal granularity, to this kind of application becomes even thornier due to the interference that can arise among VMs engaging different resources, typically computation and networking. For example, a VM with heavy I/O traffic can negatively affect the computing performance of other VMs.
This thesis proposes to investigate the issues that prevent virtualized software components from achieving real-time, predictable performance, and to experiment with some state-of-the-art temporal-isolation mechanisms from the world of soft real-time systems.
Requirements: Excellent knowledge of the C language, of the TCP/IP stack, and of the so-called "servers" of the real-time scheduling literature. Good familiarity with the Linux operating system, and an interest in experimenting with non-standard kernel features.
Benefits: The student will have the opportunity to concretely apply aspects of real-time systems theory in the extremely challenging context of distributed, virtualized real-time applications, using temporal-isolation mechanisms that will form the foundations of Quality-of-Service support in tomorrow's operating systems. Moreover, the student will become familiar with virtualization tools such as KVM, which underpin state-of-the-art network infrastructures.
|Simulation of Cloud Computing infrastructures|
|Operating systems and scheduling for scalable multicore systems|
Multicore systems are taking hold at a relentless pace. In the near future, the world of computing will be dominated by mobile devices acting as access points to fully distributed applications made available remotely by suitable providers. Tomorrow's cloud computing applications will make extensive use of massively parallel systems, the so-called many-core systems, for which today's operating systems are inadequate for an optimal management of resources.
In this context, we propose to investigate scalability issues at the level of the operating-system kernel. The possibilities for thesis work in this area are manifold.
Requirements: In general, all theses in this area require an excellent knowledge of operating systems and computer architectures. Furthermore, each specific thesis proposal may require additional individual knowledge and skills.
Benefits: The student will have the opportunity to acquire skills and experience in parallel computing, concurrent and distributed programming, and operating-system-level support for massively parallel systems, with particular reference to the design of scalable and efficient scheduling algorithms and synchronization primitives.