Title | Evaluating adversarial attacks on and defences of autonomous driving models |
Author | Siddharth, Thoviti |
Call Number | AIT RSPR no.IM-22-02 |
Subject(s) | Machine learning--Security measures; Automated vehicles--Safety measures--Evaluation |
Note | A research study project report submitted in partial fulfillment of the requirements for the degree of Master of Engineering in Information Management |
Publisher | Asian Institute of Technology |
Abstract | Machine learning models are actively used in many fields for prediction and classification tasks. Unfortunately, several state-of-the-art deep learning systems are vulnerable to attacks by adversarial examples that lead to misclassification. Adversarial examples are generated by adding a perturbation to a clean input, producing a dirty input that is indistinguishable from clean data to the human eye and yet is misclassified by the machine learning model. This raises concerns about the usability of vulnerable models in real-world applications such as autonomous vehicles. This work evaluates four adversarial attacks and two defense methods on NVIDIA DAVE-2, Epoch, and VGG16, each trained on the Udacity self-driving car dataset. The results show that the regression models are vulnerable to adversarial attacks, that the attacks do not transfer successfully in a black-box setting, and that the two defenses are effective against gradient-based attacks but fail against more sophisticated attacks. |
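
As an illustration of the perturbation mechanism the abstract describes, below is a minimal sketch of one common gradient-based attack, the Fast Gradient Sign Method (FGSM). This is not necessarily one of the four attacks evaluated in the report; the PyTorch model, loss function, and epsilon value are hypothetical placeholders.

    # Minimal FGSM sketch (assumes PyTorch). The model, loss_fn, and
    # epsilon below are illustrative placeholders, not the report's setup.
    import torch

    def fgsm_perturb(model, loss_fn, x, y, epsilon=0.01):
        # Clone the clean input and track gradients with respect to it.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)  # e.g. MSE for a steering-angle regressor
        loss.backward()
        # Step in the direction that increases the loss; clamp to a valid pixel range.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

The perturbed output differs from the clean input by at most epsilon per pixel, which is why such examples can remain visually indistinguishable from clean data while still changing the model's prediction.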
Year | 2022 |
Type | Research Study Project Report (RSPR) |
School | School of Engineering and Technology |
Department | Department of Information and Communications Technologies (DICT) |
Academic Program/FoS | Information Management (IM) |
Chairperson(s) | Dailey, Matthew N. |
Examination Committee(s) | Chaklam Silpasuwanchai; Chutiporn Anutariya |
Degree | Research study project report (M.Eng.) - Asian Institute of Technology, 2022 |