Deep Visual Learning with Less Labeled Data
Field | Value | Language |
dc.contributor.author | Zhao, Zhen | |
dc.date.accessioned | 2024-01-07T23:44:34Z | |
dc.date.available | 2024-01-07T23:44:34Z | |
dc.date.issued | 2023 | en_AU |
dc.identifier.uri | https://hdl.handle.net/2123/32049 | |
dc.description | Includes publication | |
dc.description.abstract | The rapid development of deep learning has revolutionized various vision tasks, but this success relies heavily on supervised training with large-scale labeled datasets, which can be costly and laborious to acquire. In this context, semi-supervised learning (SSL) has emerged as a promising approach to facilitating deep visual learning with less labeled data. Despite numerous research endeavours in SSL, several technical issues, e.g., the low utilization of unlabeled data and the reliance on instance discrimination, have not been well studied. This thesis emphasizes the importance of these issues and proposes new methods for semi-supervised classification (SSC) and semi-supervised semantic segmentation (SSS). In SSC, recent studies are limited by discarding samples with low-confidence predictions and by underutilizing label information. Hence, we propose a Label-guided Self-training approach to SSL, which exploits label information through a class-aware contrastive loss and a buffer-aided label propagation algorithm to fully utilize all unlabeled data. Furthermore, most SSC methods assume that the labeled and unlabeled datasets share an identical class distribution, an assumption that is hard to meet in practice. The distribution mismatch between the two sets causes severe bias and performance degradation. We thus propose Distribution Consistency SSL to address the mismatch from a distribution perspective. In SSS, most studies treat all unlabeled data equally and rarely consider the different training difficulties among unlabeled instances. We highlight instance differences and propose instance-specific and model-adaptive supervision for SSS. We also study semi-supervised medical image segmentation, where labeled data is scarce. Unlike current, increasingly complicated methods, we propose a simple yet effective approach that applies data-perturbation and model-stabilization strategies to boost performance. Extensive experiments and ablation studies are conducted to verify the superiority of the proposed methods on SSC and SSS benchmarks. | en_AU |
dc.language.iso | en | en_AU |
dc.subject | semi-supervised learning | en_AU |
dc.subject | self-supervised learning | en_AU |
dc.subject | semi-supervised semantic segmentation | en_AU |
dc.subject | label-efficient learning | en_AU |
dc.subject | less labeled data | en_AU |
dc.title | Deep Visual Learning with Less Labeled Data | en_AU |
dc.type | Thesis | |
dc.type.thesis | Doctor of Philosophy | en_AU |
dc.rights.other | The author retains copyright of this thesis. It may only be used for the purposes of research and study. It must not be used for any other purposes and may not be transmitted or shared with others without prior permission. | en_AU |
usyd.faculty | SeS faculties schools::Faculty of Engineering::School of Electrical and Information Engineering | en_AU |
usyd.degree | Doctor of Philosophy Ph.D. | en_AU |
usyd.awardinginst | The University of Sydney | en_AU |
usyd.advisor | Zhou, Luping | |
usyd.include.pub | Yes | en_AU |
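As background for the abstract above: the "low unlabeled utilization" issue refers to standard confidence-thresholded pseudo-labeling, which discards unlabeled samples whose predictions fall below a fixed confidence threshold. The sketch below is a minimal illustration of that common baseline, not of the thesis's own method; the function name, threshold value, and toy data are assumptions made here for illustration only.

```python
# Illustrative sketch of confidence-thresholded pseudo-labeling (the baseline
# the abstract critiques for low utilization of unlabeled data).
# All names and the threshold are hypothetical, not taken from the thesis.
import numpy as np

def select_pseudo_labels(probs: np.ndarray, threshold: float = 0.95):
    """Keep unlabeled samples whose maximum predicted probability exceeds the
    threshold; return their indices, hard pseudo-labels, and the utilization ratio."""
    confidence = probs.max(axis=1)        # per-sample confidence
    pseudo_labels = probs.argmax(axis=1)  # per-sample hard pseudo-label
    keep = confidence >= threshold        # mask of retained samples
    utilization = keep.mean()             # fraction of unlabeled data actually used
    return np.flatnonzero(keep), pseudo_labels[keep], utilization

# Toy usage: 4 unlabeled samples over 3 classes.
probs = np.array([[0.97, 0.02, 0.01],
                  [0.40, 0.35, 0.25],
                  [0.10, 0.88, 0.02],
                  [0.33, 0.33, 0.34]])
idx, labels, util = select_pseudo_labels(probs)
print(idx, labels, util)  # only the first sample passes -> utilization 0.25
```

With a strict threshold, most unlabeled samples contribute no training signal (25% utilization in this toy case), which is the gap the thesis's Label-guided Self-training approach aims to close by exploiting all unlabeled data.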