I am Federico Nesti, a Ph.D. student at the ReTiS Lab (TeCIP Institute, Scuola Superiore Sant'Anna), with a grant funded by the Department of Excellence in Robotics & AI. My Ph.D. focuses on understanding the limits of trustworthiness of machine learning models from an adversarial perspective. I also work as a consultant for Scuola Superiore Sant'Anna in the area of navigation and localization for railway systems.

Education and Experience

I received my BSc in Electronics Engineering from the University of Pisa in 2015, and my MSc in Robotics and Automation Engineering in 2018.

From 2015 until its completion in 2017, I was part of the U-PHOS project, selected by SNSB, DLR, and ESA for the REXUS/BEXUS program.

In 2017, I worked as an intern at Fermilab (Illinois, USA) as part of the Fermilab Summer School organized by the University of Pisa.

In 2018, I developed an eye-tracking device at TU Delft for my MSc thesis, under the supervision of Prof. Michel Verhaegen.

I worked as an R&D robotics engineer at Fabrica Machinale srl - Roboticom for eight months, then joined the ReTiS Lab with a scholarship before enrolling in the Ph.D. program in 2019.

From March 2020 to February 2022, I collaborated with Hitachi STS on an industrial project on realistic train simulation and navigation.

Since February 2022, I have been a Visiting Researcher at the University of Alicante, under the supervision of Prof. Miguel Cazorla.

News

January 2022: Our pre-print “On the Real-World Adversarial Robustness of Real-time Semantic Segmentation models for Autonomous Driving” is online.

October 2021: Our paper “Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks” was accepted as a conference paper at WACV 2022!

August 2021: Our pre-print “Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks” is online.

August 2021: Our paper “Detecting Adversarial Examples by Input Transformations, Defense Perturbations, and Voting” was accepted for publication in IEEE Transactions on Neural Networks and Learning Systems.

January 2021: Our pre-print “Detecting Adversarial Examples by Input Transformations, Defense Perturbations, and Voting” is online.

November 2020: Our team HyperPendulum won the second prize at the Huawei University Challenge on the safe, secure, and predictable use of neural networks in safety-critical systems.

September 2020: Our paper “A Safe, Secure, and Predictable Software Architecture for Deep Learning in Safety-Critical Systems” was published!

August 2020: I was selected for the Deep Learning + Reinforcement Learning Summer School, originally planned to take place in Montréal but held virtually.