Boosting Adversarial Robustness via Neural Architecture Search and Design
Access status: Open Access
Type: Thesis
Thesis type: Doctor of Philosophy
Author/s: Dong, Minjing
Abstract:
Adversarial robustness in Deep Neural Networks (DNNs) is a critical and emerging field of research that addresses the vulnerability of DNNs to subtle, intentionally crafted perturbations of their input data. These perturbations, often imperceptible to the human eye, can cause significant errors in a network's predictions, and they can be easily derived via adversarial attacks across various data formats, such as image, text, and audio. This susceptibility raises serious security and trustworthiness concerns in real-world applications such as autonomous driving, healthcare diagnostics, and cybersecurity. To enhance the trustworthiness of DNNs, considerable research effort has been devoted to techniques that improve DNNs' ability to defend against such adversarial attacks, ensuring that reliable results can be delivered in real-world scenarios. Mainstream work on adversarial robustness centers on adversarial training strategies and regularization; however, less attention has been paid to the DNN itself, and little is known about the influence of different neural network architectures or designs on adversarial robustness. To fill this knowledge gap, this thesis proposes to advance adversarial robustness by investigating neural architecture search and design.
Date: 2023
Rights statement: The author retains copyright of this thesis. It may only be used for the purposes of research and study. It must not be used for any other purposes and may not be transmitted or shared with others without prior permission.
Faculty/School: Faculty of Engineering, School of Computer Science
Awarding institution: The University of Sydney