Show simple item record

Field: Value (Language)

dc.contributor.author: Karunanayake, Naveen Harshitha
dc.date.accessioned: 2025-04-09T01:22:34Z
dc.date.available: 2025-04-09T01:22:34Z
dc.date.issued: 2025 (en_AU)
dc.identifier.uri: https://hdl.handle.net/2123/33805
dc.description.abstract: Deep neural networks (DNNs) have achieved significant success across a broad spectrum of applications, such as autonomous driving, medical imaging, and natural language processing. However, their susceptibility to distributional shifts, including adversarial examples and out-of-distribution (OOD) data, poses serious challenges to their robustness and reliability in safety-critical applications. This thesis examines adversarial and OOD perspectives on DNN robustness, proposing novel approaches to identify adversarial vulnerabilities and strengthen OOD robustness, ultimately improving the reliability of DNNs in real-world applications. We begin by surveying the intersection of adversarial robustness and OOD detection, establishing a taxonomy centred around distributional shifts. Next, we investigate the adversarial vulnerability of individual inputs, proposing a novel metric based on the clipped gradients of the loss with respect to the input. This metric reveals that some inputs are inherently more susceptible to adversarial perturbations, a property that can be leveraged to improve black-box attack pipelines. To improve OOD detection, we propose two algorithms that utilise the class rank information implicitly learned by DNNs during standard cross-entropy training. These methods are motivated by the observation that class ranking patterns in in-distribution (ID) data are more consistent and deterministic than the stochastic patterns seen in OOD data. Accordingly, we first introduce ExCeL, a post-hoc detector that integrates extreme and collective information from the output layer of DNNs. By combining the maximum logit (i.e., extreme information) with a novel rank-based score (i.e., collective information), ExCeL achieves consistent and competitive performance across diverse OOD scenarios. Finally, we present CRAFT, a fine-tuning approach that further strengthens OOD robustness by optimising DNNs based on the implicit class ranking information learned during pre-training. (en_AU; see the illustrative code sketch following this record)
dc.language.iso: en (en_AU)
dc.subject: Adversarial attacks (en_AU)
dc.subject: Out-of-Distribution detection (en_AU)
dc.subject: Robustness (en_AU)
dc.subject: Deep Neural Networks (en_AU)
dc.title: Adversarial and Out-of-Distribution Perspectives on Deep Neural Network Robustness (en_AU)
dc.type: Thesis
dc.type.thesis: Doctor of Philosophy (en_AU)
dc.rights.other: The author retains copyright of this thesis. It may only be used for the purposes of research and study. It must not be used for any other purposes and may not be transmitted or shared with others without prior permission. (en_AU)
usyd.faculty: SeS faculties schools::Faculty of Engineering::School of Computer Science (en_AU)
usyd.degree: Doctor of Philosophy Ph.D. (en_AU)
usyd.awardinginst: The University of Sydney (en_AU)
usyd.advisor: Seneviratne, Suranga
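
Illustrative code sketch. The following is a minimal sketch of the two ideas summarised in the abstract above, assuming a standard PyTorch classifier. It is not the thesis's implementation: the clipping range and L2 aggregation in clipped_gradient_vulnerability, and the id_rank_template reference ranking, top_k, weight, and overlap aggregation in excel_style_score, are hypothetical choices introduced here for illustration only.

import torch
import torch.nn.functional as F

def clipped_gradient_vulnerability(model, x, y, clip=1.0):
    # Hypothetical sketch: score each input by the norm of its clipped
    # loss gradient. The abstract states only that the metric is based on
    # clipped gradients of the loss w.r.t. the input; the clip range and
    # the L2 aggregation here are illustrative assumptions.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (grad,) = torch.autograd.grad(loss, x)
    # Higher scores are read as inputs more susceptible to perturbation.
    return grad.clamp(-clip, clip).flatten(1).norm(dim=1)

def excel_style_score(logits, id_rank_template, top_k=5, weight=1.0):
    # Hypothetical ExCeL-style OOD score: maximum logit (extreme
    # information) plus a rank-based term (collective information).
    # id_rank_template[c] is assumed to hold the runner-up classes, in
    # their typical ID ranking order, for predicted class c; the overlap
    # aggregation below is an assumption, not the thesis's scoring rule.
    max_logit = logits.max(dim=1).values
    pred = logits.argmax(dim=1)
    sample_ranks = logits.argsort(dim=1, descending=True)[:, 1:top_k + 1]
    ref_ranks = id_rank_template[pred][:, :top_k]
    # Fraction of a sample's top-k runner-up classes that also appear in
    # the ID reference ranking; ID inputs should match more consistently
    # than OOD inputs, whose rankings are more stochastic.
    overlap = (sample_ranks.unsqueeze(2) == ref_ranks.unsqueeze(1)).any(2).float().mean(1)
    return max_logit + weight * overlap

Under these assumptions, id_rank_template would be estimated from ID training data (e.g., the most frequent runner-up classes per predicted class), and excel_style_score(model(x), template) would be thresholded to flag OOD inputs.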


