Predictable and Trustworthy AI
The objective of this research is to enable the use of AI algorithms and deep neural networks in safety-critical cyber-physical systems, such as autonomous vehicles, advanced robots, spacecraft, and medical systems. To be safely deployed, such systems must be certified and must react within the timing constraints imposed by the environment. Unfortunately, current deep learning frameworks are not designed for safety-critical systems and cannot guarantee predictable response times. To address this problem, the following research activities are carried out at the RETIS Lab:
Safe and secure architectures for AI-powered cyber-physical systems
Federico Nesti, Alessandro Biondi, Giorgio Buttazzo
This work leverages hypervisor technology to integrate a high-performance computing domain (hosting replicas of neural controllers) with a safe, certifiable computing domain (hosting safety-critical components). The safe domain includes a backup controller, a voter, and a monitoring look-ahead module that switches to the safe controller whenever the results produced by the neural one are judged unreliable.
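As an illustration only, the following minimal Python sketch shows how the voting-and-switching logic could look; the function names, the agreement threshold, and the use of the median are hypothetical choices, not taken from the actual implementation.

    # Hypothetical sketch of the voter / switching logic described above.
    import statistics

    AGREEMENT_THRESHOLD = 0.05  # assumed maximum admissible spread among replicas

    def select_command(replica_outputs, safe_output, lookahead_ok):
        """Return the control command to actuate.

        replica_outputs: control values produced by the neural replicas
        safe_output:     control value produced by the certified backup controller
        lookahead_ok:    True if the monitoring module judges the neural command safe
        """
        spread = max(replica_outputs) - min(replica_outputs)
        if lookahead_ok and spread <= AGREEMENT_THRESHOLD:
            # Replicas agree and the look-ahead check passed: use their median.
            return statistics.median(replica_outputs)
        # Otherwise fall back to the certified safe controller.
        return safe_output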
Defense perturbations to detect adversarial examples
Federico Nesti, Alessandro Biondi, Giorgio Buttazzo
This work proposes a method for detecting adversarial examples that is designed to detect even robust adversarial examples, i.e., those whose fooling power cannot easily be disrupted by input transformations. The method is based on the optimization of a special defense perturbation that, once applied to the input image, removes the robustness of robust adversarial examples.
This work also extensively explores the detection capabilities of several input transformations, as well as the effect of introducing redundant neural networks followed by a majority-voting mechanism to tolerate possible faults in a single network.
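The sketch below illustrates the detection idea under stated assumptions: the defense perturbation delta is assumed to have been optimized off-line, and each element of models is any callable returning class logits. It is a conceptual example, not the code of this work.

    # Hypothetical sketch: detection via a pre-computed defense perturbation,
    # combined with redundant networks and majority voting.
    import numpy as np

    def is_adversarial(x, delta, models):
        flags = []
        for model in models:
            clean_pred = int(np.argmax(model(x)))
            # Apply the defense perturbation and classify again.
            defended_pred = int(np.argmax(model(np.clip(x + delta, 0.0, 1.0))))
            # A robust adversarial example is expected to change label here,
            # while a benign input keeps its prediction.
            flags.append(clean_pred != defended_pred)
        # Majority voting over the redundant networks tolerates a faulty one.
        return sum(flags) > len(models) // 2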
Coverage analysis for increasing trustworthiness of deep neural networks
Giulio Rossolini, Alessandro Biondi, Giorgio Buttazzo
This work aims at detecting unsafe neural network inputs by applying coverage analysis methods at inference time. Given a set of trusted inputs for which a neural network produces a correct output, an off-line processing phase computes a “signature” that encodes all the activation patterns corresponding to normal behavior. Then, at inference time, the activation pattern stimulated by a new input is compared with those stored in the signature, and its deviation from the trusted behavior is quantified with a confidence metric, which is used to judge the trustworthiness of the network prediction.
The technique is currently implemented as a lightweight monitoring architecture for the Caffe framework, in which different coverage analysis methods have been integrated and tested to compare multiple detection logics.
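A minimal sketch of the signature idea is reported below, assuming a hypothetical activations(x) helper that returns the neuron activations produced by an input; the binarization threshold and the Hamming-distance metric are illustrative choices.

    # Sketch of signature-based coverage monitoring (hypothetical API).
    import numpy as np

    def pattern(acts, threshold=0.0):
        """Binary activation pattern: which neurons fire for this input."""
        return np.asarray(acts) > threshold

    def build_signature(trusted_inputs, activations):
        """Off-line phase: store the patterns of correctly classified inputs."""
        return np.stack([pattern(activations(x)) for x in trusted_inputs])

    def confidence(x, signature, activations):
        """On-line phase: 1.0 means the pattern matches a trusted one exactly;
        lower values quantify the deviation from the trusted behavior."""
        p = pattern(activations(x))
        hamming = np.count_nonzero(signature != p, axis=1)
        return 1.0 - hamming.min() / p.size

    def is_trustworthy(x, signature, activations, min_confidence=0.9):
        return confidence(x, signature, activations) >= min_confidence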
Predictable support for concurrent deep neural networks on GPU platforms
Alessandro Biondi
This work aims at designing and implementing an inference engine for deep neural networks developed with NVIDIA’s TensorRT framework. The engine allows a set of deep neural networks with different timing requirements to be executed with time-predictable multitasking on an embedded GPU (NVIDIA TX2 and Xavier platforms).
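Purely as a conceptual sketch, and not the actual TensorRT-based engine, the fragment below shows one classical policy (earliest deadline first) for deciding which pending network inference to dispatch next; the task names and timing parameters are made up.

    # Conceptual sketch: earliest-deadline-first dispatching of network jobs.
    from dataclasses import dataclass

    @dataclass
    class NetworkTask:
        name: str
        period_ms: float         # activation period of the network
        next_deadline_ms: float  # absolute deadline of its pending inference job

    def pick_next(ready_tasks):
        """Dispatch the pending inference job with the earliest deadline."""
        return min(ready_tasks, key=lambda t: t.next_deadline_ms)

    tasks = [NetworkTask("object_detector", period_ms=100.0, next_deadline_ms=100.0),
             NetworkTask("lane_network", period_ms=50.0, next_deadline_ms=50.0)]
    print(pick_next(tasks).name)  # -> "lane_network"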
Predictable FPGA acceleration of deep neural networks
Francesco Restuccia, Marco Pagani, Biruk Seyoum, Alessandro Biondi, Tommaso Cucinotta
With respect to GPUs, FPGAs can accelerate computations with a more predictable timing behavior and much lower energy consumption. A major problem of FPGAs, however, is that they involve complex design flows and require considerable expertise to properly manage the available resources. This work exploits dynamic partial reconfiguration and virtualization techniques to create a larger virtual FPGA, where neural networks can be partitioned into multiple subnetworks executed in time-sharing on the physical FPGA fabric, and provides an automated framework to synthesize optimized neural accelerators under given timing and resource constraints.
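The partitioning step can be illustrated with the toy sketch below, which greedily groups consecutive layers so that each group fits the resource budget of a reconfigurable slot; the layer names and resource costs are invented for the example.

    # Toy sketch: split a network into subnetworks that fit the FPGA budget
    # and are then executed in time-sharing via dynamic partial reconfiguration.

    def partition_layers(layer_costs, fpga_budget):
        """Greedily group consecutive layers so that each group's total
        resource cost does not exceed the budget of the reconfigurable slot."""
        partitions, current, used = [], [], 0.0
        for layer, cost in layer_costs:
            if cost > fpga_budget:
                raise ValueError(f"layer {layer} alone exceeds the budget")
            if used + cost > fpga_budget:
                partitions.append(current)
                current, used = [], 0.0
            current.append(layer)
            used += cost
        if current:
            partitions.append(current)
        return partitions

    # Costs expressed as a fraction of the resources of the reconfigurable slot.
    layers = [("conv1", 0.4), ("conv2", 0.5), ("conv3", 0.3), ("fc", 0.2)]
    print(partition_layers(layers, fpga_budget=0.8))
    # -> [['conv1'], ['conv2', 'conv3'], ['fc']]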
Lidar odometry and localization through deep learning
Gianluca D’Amico, Mauro Marinoni, Giorgio Buttazzo
The objective of this work is vehicle localization through lidar odometry, where lidar frames are processed by a neural architecture that combines a convolutional neural network (CNN) with a recurrent neural network (RNN). Integrating the two networks combines the spatial feature-extraction capability of CNNs with the memory of RNNs, which allows past frames to be taken into account. Among other aspects, this work investigates an effective encoding of the 3D data frames, required to train the neural networks.
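A conceptual PyTorch sketch of such a CNN+RNN pipeline is reported below; the layer sizes and the range-image encoding of the lidar frames are assumptions made for illustration, not the architecture actually used in this work.

    # Conceptual CNN + RNN odometry network (illustrative sizes only).
    import torch
    import torch.nn as nn

    class LidarOdometryNet(nn.Module):
        def __init__(self, hidden=128):
            super().__init__()
            # CNN: extracts spatial features from each encoded lidar frame
            # (here assumed to be a 1 x 64 x 720 range image).
            self.cnn = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            # RNN: accumulates information over past frames.
            self.rnn = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
            # Regress the 6-DoF pose increment between consecutive frames.
            self.head = nn.Linear(hidden, 6)

        def forward(self, frames):            # frames: (batch, time, 1, H, W)
            b, t = frames.shape[:2]
            feats = self.cnn(frames.flatten(0, 1)).reshape(b, t, -1)
            out, _ = self.rnn(feats)
            return self.head(out)             # (batch, time, 6) pose increments

    poses = LidarOdometryNet()(torch.randn(2, 5, 1, 64, 720))
    print(poses.shape)  # torch.Size([2, 5, 6])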
Explainability of deep neural networks
Marco Pacini, Federico Nesti, Giorgio Buttazzo, Alessandro Biondi
The high performance of deep neural networks comes at a price: these models are highly complex, and their outputs cannot easily be interpreted, hence trusted, by humans. This difficulty in providing a clear explanation of their behavior makes AI inapplicable in areas where explanations are required for legal, safety, or security reasons. This work investigates different methodologies for building a clear graphical explanation of the results generated by a deep neural network. This research also investigates how to exploit the generated explanations to automatically detect possible biases in the training set and possible unsafe inputs, such as adversarial examples or out-of-distribution samples.
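As an example of one widely used technique for graphical explanations (a plain gradient saliency map, not necessarily the method adopted in this work), consider the sketch below, where model is any differentiable PyTorch classifier.

    # Minimal gradient saliency map (illustrative example).
    import torch

    def saliency_map(model, image, target_class):
        """Highlight the pixels that most influence the score of target_class."""
        image = image.clone().detach().requires_grad_(True)
        score = model(image.unsqueeze(0))[0, target_class]
        score.backward()
        # Max over channels gives a single-channel heat map to overlay on the input.
        return image.grad.abs().max(dim=0).values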
Verification of deep neural networks
Fabio Brau, Alessandro Biondi
Adversarial examples play a central role in the trustworthiness of deep learning models: they can be thought of as malicious inputs crafted to mislead an AI model. This work investigates formal verification methods that provide robustness guarantees, thereby enhancing the trustworthiness of deep neural networks.
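As an illustration of the kind of reasoning involved, the sketch below applies interval bound propagation, one of the basic techniques used in neural-network verification, to certify the robustness of a single linear layer; it is a didactic example, not the method developed in this work.

    # Didactic sketch: interval bound propagation for a linear classifier.
    import numpy as np

    def ibp_linear(lower, upper, W, b):
        """Propagate the input box [lower, upper] through y = W x + b."""
        center, radius = (lower + upper) / 2.0, (upper - lower) / 2.0
        y_center = W @ center + b
        y_radius = np.abs(W) @ radius
        return y_center - y_radius, y_center + y_radius

    def certify_robust(x, eps, W, b, true_class):
        """Sufficient (conservative) check: the true class's worst-case score
        must exceed the best-case score of every other class in the eps-ball."""
        lo, up = ibp_linear(x - eps, x + eps, W, b)
        others = [up[i] for i in range(len(up)) if i != true_class]
        return bool(lo[true_class] > max(others))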
Accident prediction in autonomous driving
Saasha Nair, Alessandro Biondi
Despite the recent success stories of self-driving cars, fully autonomous vehicles are not yet commercially available. One of the major impediments to the large-scale adoption of autonomous vehicles is related to safety concerns, and safety monitoring can potentially be a solution. Safety monitors observe the inputs and outputs of the driving pipeline, analyze them to detect anomalous behaviors, and apply the appropriate intervention when needed. However, existing frameworks rely on hard-coded and hand-picked safety limits, which may not be optimal.
The objective of this work is to build a safety monitor that predicts accidents or crashes using a deep neural network architecture, called Crash Prediction Networks (CPN). The idea is to create an envelope around the driving module that observes the action decisions it delivers and determines, given the sensory information about the state, whether they are likely to lead to a crash. The CPN is an ensemble of neural networks, where each network focuses on a different subset of the sensory data; the networks then work in unison to determine whether the current vehicle trajectory is likely to cause a crash. The current focus of the work is on designing and evaluating the architecture of this deep neural network ensemble.
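A minimal sketch of the ensemble idea is given below; the data structures and the averaging rule are illustrative assumptions, not the actual CPN design.

    # Illustrative sketch: each ensemble member looks at a different subset
    # of the sensory state and estimates the crash probability.

    def crash_likely(state, action, members, threshold=0.5):
        """members: list of (sensor_keys, predictor) pairs, where predictor
        returns the estimated crash probability for its slice of the state."""
        probs = [predict({k: state[k] for k in keys}, action)
                 for keys, predict in members]
        # The networks work in unison: average (or vote on) their estimates.
        return sum(probs) / len(probs) > threshold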
Enhancing predictability in inference engines
Daniel Casini, Alessandro Biondi, Giorgio Buttazzo
The native scheduler used by popular inference engines to run deep neural networks on multicore platforms, e.g., the one employed by TensorFlow, does not take timing issues into account, since it has been designed to optimize the average-case rather than the worst-case performance. Therefore, it can introduce long and unpredictable delays, making it unsuitable for safety-critical applications. This work aims at enhancing predictability by acting on the node scheduler to introduce mechanisms designed to handle neural-network-specific workloads.
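The sketch below conveys the idea of replacing a first-come-first-served ready queue with a priority-ordered one for dispatching the nodes of the neural-network graphs; the node structure and the priorities are hypothetical and do not correspond to TensorFlow’s internal API.

    # Illustrative sketch: priority-based dispatching of ready graph nodes.
    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class ReadyNode:
        priority: int                        # e.g., derived from the network's deadline
        op_name: str = field(compare=False)

    class PriorityNodeScheduler:
        def __init__(self):
            self._queue = []

        def push(self, node):
            heapq.heappush(self._queue, node)

        def next_to_run(self):
            # Always dispatch the highest-priority (lowest value) ready node,
            # bounding the interference suffered by time-critical networks.
            return heapq.heappop(self._queue)

    sched = PriorityNodeScheduler()
    sched.push(ReadyNode(2, "best_effort/conv1"))
    sched.push(ReadyNode(1, "critical/conv1"))
    print(sched.next_to_run().op_name)  # -> "critical/conv1"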
AI for cloud computing and network function virtualization (NFV) infrastructures
Tommaso Cucinotta
Cyber-physical systems are becoming increasingly interconnected, and low-latency, high-reliability connectivity is a hot topic in networking, for example with reference to 5G scenarios. In this context, adaptive AI-based techniques are becoming more and more important to support communications in distributed cyber-physical systems. This work investigates artificial intelligence and machine learning techniques to analyze the massive amount of data coming from the monitoring system of a cloud/NFV infrastructure, for purposes related to supporting operations, performance troubleshooting, root-cause analysis, workload prediction, and capacity planning.
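As a toy example of one of the listed tasks (workload prediction), the sketch below extrapolates the next monitoring sample from a sliding window of past CPU-load samples; the model choice and window size are illustrative only.

    # Toy workload predictor: linear extrapolation over a sliding window.
    import numpy as np

    def predict_next(load_history, window=12):
        """Least-squares linear fit over the last `window` samples,
        extrapolated one step ahead."""
        y = np.asarray(load_history[-window:], dtype=float)
        t = np.arange(len(y))
        slope, intercept = np.polyfit(t, y, deg=1)
        return slope * len(y) + intercept

    print(round(predict_next([0.30, 0.32, 0.35, 0.37, 0.40, 0.42], window=6), 2))
    # -> 0.45: the upward trend in the monitored load continues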
Improving predictability, safety, and security in the Apollo autonomous driving framework
Alessandro Biondi, Daniel Casini, Giorgiomaria Cicero, Francesco Restuccia
Modern frameworks for autonomous driving include several functionalities that need to run in a predictable, safe, and secure manner. The Apollo open-source framework for autonomous driving consists of multiple modules, each taking care of a specific task, e.g., control, planning, and perception. Since Apollo requires interacting with sensors and devices (such as GPUs) whose drivers and software stacks may not be available on a real-time operating system, it runs on Linux, a feature-rich operating system that, however, is vulnerable to safety threats and cyber-attacks and is therefore not suitable for the certification of the most safety-critical components, e.g., control and actuation. This work aims at improving Apollo’s safety and security by using a hypervisor to create two virtual machines that share the same physical platform: a Linux-based virtual machine (Linux-VM) and a virtual machine running a real-time operating system (RTOS-VM). In this way, the Linux-VM runs the perception-related components requiring a tight interaction with sensors and hardware accelerators, while the RTOS-VM is in charge of handling the most safety-related activities. Furthermore, a more predictable acceleration of Apollo’s deep neural networks is provided by using FPGAs in place of GPUs.