Show simple item record

Field | Value | Language
dc.contributor.author | Guo, Jianyuan |
dc.date.accessioned | 2025-04-07T01:57:18Z |
dc.date.available | 2025-04-07T01:57:18Z |
dc.date.issued | 2025 | en_AU
dc.identifier.uri | https://hdl.handle.net/2123/33792 |
dc.description.abstract | In recent years, intelligent systems have evolved significantly, transforming daily life. To maximize their impact, efficient deployment on edge devices (smartphones, smartwatches, robots, and autonomous vehicles) is essential. Deep neural networks, foundational in computer vision, offer powerful feature encoding but demand substantial computational resources, leading to high energy consumption and a large carbon footprint. This thesis focuses on developing compact yet high-precision deep learning models that balance performance and efficiency. It explores efficient vision backbones and compression techniques to support various tasks while ensuring deployability. We propose a hybrid architecture integrating transformers for global dependencies and CNNs for local feature extraction, replacing traditional components with fully connected layers to enhance efficiency. This design reduces complexity while maintaining accuracy. We further investigate training a unified model for multiple vision tasks through a data-efficient strategy, enabling the model to handle both high- and low-level tasks. Extending to multi-modal learning, we introduce an efficient fusion framework to enhance AI perception in real-world applications. Additionally, we refine knowledge distillation for compact models, reassessing existing methods to improve real-world applicability. Specifically, for object detection, we highlight the overlooked role of background information and propose a decoupled distillation method that enhances performance. This thesis presents practical solutions for lightweight neural networks, enabling AI deployment in resource-constrained environments. By optimizing deep learning models for efficiency, it contributes to the accessibility and sustainability of AI across various domains. | en_AU
dc.language.iso | en | en_AU
dc.subject | machine perception | en_AU
dc.subject | efficiency | en_AU
dc.subject | vision foundation model | en_AU
dc.subject | deep learning | en_AU
dc.subject | model compression | en_AU
dc.title | Neural Architecture Design and Compression for Efficient Vision Perception | en_AU
dc.type | Thesis |
dc.type.thesis | Doctor of Philosophy | en_AU
dc.rights.other | The author retains copyright of this thesis. It may only be used for the purposes of research and study. It must not be used for any other purposes and may not be transmitted or shared with others without prior permission. | en_AU
usyd.faculty | SeS faculties schools::Faculty of Engineering::School of Computer Science | en_AU
usyd.degree | Doctor of Philosophy Ph.D. | en_AU
usyd.awardinginst | The University of Sydney | en_AU
usyd.advisor | Xu, Chang |


There are no previous versions of the item available.