AIT Asian Institute of Technology

Evaluating adversarial attacks on and defences of autonomous driving models

Author: Siddharth, Thoviti
Call Number: AIT RSPR no.IM-22-02
Subject(s): Machine learning--Security measures
Automated vehicles--Safety measures--Evaluation
Note: A research submitted in partial fulfillment of the requirements for the degree of Master of Engineering in Information Management
Publisher: Asian Institute of Technology
Abstract: Machine learning models are actively being used in various fields for prediction and classification tasks. Unfortunately, several state-of-the-art deep learning systems are vulnerable to attacks by adversarial examples that lead to misclassification. Adversarial examples are generated by adding a perturbation to a clean input, resulting in a dirty input that is indistinguishable from clean data to the human eye and yet is misclassified by the machine learning model. This raises concerns regarding the usability of vulnerable models in real-world applications such as autonomous vehicles. This work evaluates four adversarial attacks and two defense methods for NVIDIA DAVE-2, Epoch, and VGG16, each trained on the Udacity self-driving car dataset. The results show that regression models are vulnerable to adversarial attacks, that transfer attacks in a black-box environment are unsuccessful, and that the two defenses are effective against gradient-based attacks but fail against more sophisticated attacks.
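The additive-perturbation scheme the abstract describes is commonly instantiated as the fast gradient sign method (FGSM); the report itself does not list its four attacks here, so the following is only a minimal numpy sketch of the general idea, with a hypothetical linear scorer standing in for the trained steering-angle models (DAVE-2, Epoch, VGG16).

```python
import numpy as np

# Hypothetical toy setup: a linear scorer loss(x) = w . x stands in for the
# driving model's steering-angle regressor used in the actual study.
rng = np.random.default_rng(0)
w = rng.normal(size=(8,))           # stand-in model weights
x = rng.uniform(size=(8,))          # "clean" input, scaled to [0, 1]

def loss_grad(x, w):
    # For loss(x) = w . x, the gradient with respect to the input is w.
    return w

eps = 0.05                          # perturbation budget (L-infinity norm)

# FGSM step: move each input dimension by eps in the direction that
# increases the loss, then clip back into the valid input range.
x_adv = np.clip(x + eps * np.sign(loss_grad(x, w)), 0.0, 1.0)

# The dirty input stays within eps of the clean one per dimension,
# which is why it can remain imperceptible while shifting the output.
print(float(np.max(np.abs(x_adv - x))))
```

Attacks evaluated in work like this typically vary the norm constraint and the number of gradient steps; defenses are then judged by how much of the clean-input accuracy they recover under each attack.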
Year: 2022
Type: Research Study Project Report (RSPR)
School: School of Engineering and Technology
Department: Department of Information and Communications Technologies (DICT)
Academic Program/FoS: Information Management (IM)
Chairperson(s): Dailey, Matthew N.
Examination Committee(s): Chaklam Silpasuwanchai; Chutiporn Anutariya
Degree: Research studies project report (M.Eng.) - Asian Institute of Technology, 2022
