Show simple item record

Field: Value [Language]

dc.contributor.author: Zhao, Zhen
dc.date.accessioned: 2024-01-07T23:44:34Z
dc.date.available: 2024-01-07T23:44:34Z
dc.date.issued: 2023 [en_AU]
dc.identifier.uri: https://hdl.handle.net/2123/32049
dc.description: Includes publication
dc.description.abstract: The rapid development of deep learning has revolutionized various vision tasks, but this success relies heavily on supervised training with large-scale labeled datasets, which can be costly and laborious to acquire. In this context, semi-supervised learning (SSL) has emerged as a promising approach to facilitating deep visual learning with less labeled data. Despite numerous research endeavours in SSL, some technical issues, e.g., low utilization of unlabeled data and instance discrimination, have not been well studied. This thesis emphasizes the importance of these issues and proposes new methods for semi-supervised classification (SSC) and semi-supervised semantic segmentation (SSS). In SSC, recent studies are limited by the exclusion of samples with low-confidence predictions and the underutilization of label information. Hence, we propose a Label-guided Self-training approach to SSL, which exploits label information through a class-aware contrastive loss and a buffer-aided label propagation algorithm to fully utilize all unlabeled data. Furthermore, most SSC methods assume that the labeled and unlabeled datasets share an identical class distribution, an assumption that is hard to meet in practice. The distribution mismatch between the two sets causes severe bias and performance degradation. We thus propose Distribution Consistency SSL to address the mismatch from a distribution perspective. In SSS, most studies treat all unlabeled data equally and barely consider the different training difficulties among unlabeled instances. We highlight these instance differences and propose instance-specific and model-adaptive supervision for SSS. We also study semi-supervised medical image segmentation, where labeled data is scarce. Unlike current, increasingly complicated methods, we propose a simple yet effective approach that applies data perturbation and model stabilization strategies to boost performance. Extensive experiments and ablation studies are conducted to verify the superiority of the proposed methods on SSC and SSS benchmarks. [en_AU]
dc.language.iso: en [en_AU]
dc.subject: semi-supervised learning [en_AU]
dc.subject: self-supervised learning [en_AU]
dc.subject: semi-supervised semantic segmentation [en_AU]
dc.subject: label-efficient learning [en_AU]
dc.subject: less labeled data [en_AU]
dc.title: Deep Visual Learning with Less Labeled Data [en_AU]
dc.type: Thesis
dc.type.thesis: Doctor of Philosophy [en_AU]
dc.rights.other: The author retains copyright of this thesis. It may only be used for the purposes of research and study. It must not be used for any other purposes and may not be transmitted or shared with others without prior permission. [en_AU]
usyd.faculty: SeS faculties schools::Faculty of Engineering::School of Electrical and Information Engineering [en_AU]
usyd.degree: Doctor of Philosophy Ph.D. [en_AU]
usyd.awardinginst: The University of Sydney [en_AU]
usyd.advisor: Zhou, Luping
usyd.include.pub: Yes [en_AU]
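
The abstract above refers to a class-aware contrastive loss within the Label-guided Self-training approach. As a rough illustration only, the following is a minimal PyTorch-style sketch of such a loss, in which samples that share a (ground-truth or pseudo-) label are pulled together and all others are pushed apart. Every function name, tensor shape, and hyper-parameter here is an assumption made for the sketch; it is not the implementation from the thesis.

```python
# Illustrative sketch only: a class-aware contrastive loss in the spirit of the
# abstract's Label-guided Self-training. Names, shapes, and hyper-parameters
# are assumptions, not the thesis implementation.
import torch
import torch.nn.functional as F


def class_aware_contrastive_loss(features, labels, temperature=0.1):
    """Pull together embeddings that share a (pseudo-)label, push apart the rest.

    features: (N, D) embeddings from labeled and pseudo-labeled samples
    labels:   (N,)   class indices (ground-truth or pseudo-labels)
    """
    features = F.normalize(features, dim=1)          # work in cosine-similarity space
    sim = features @ features.t() / temperature      # (N, N) similarity logits

    # Exclude self-comparisons on the diagonal.
    n = features.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(eye, float("-inf"))

    # Positive pairs: distinct samples carrying the same (pseudo-)label.
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~eye

    # Row-wise log-softmax, then average the log-likelihood over each anchor's positives.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    per_anchor = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1) / pos_count

    # Skip anchors that have no positive in the batch.
    has_pos = pos_mask.any(dim=1)
    return per_anchor[has_pos].mean() if has_pos.any() else sim.new_zeros(())


# Toy usage with random embeddings and pseudo-labels.
if __name__ == "__main__":
    feats = torch.randn(8, 128)
    pseudo = torch.randint(0, 3, (8,))
    print(class_aware_contrastive_loss(feats, pseudo).item())
```

The per-anchor term averages the log-likelihood over all of that anchor's positives, as in supervised contrastive learning; anchors without any positive in the batch are simply skipped.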

