Show simple item record

Field | Value | Language
dc.contributor.author | Yu, Qianbi | -
dc.date.accessioned | 2023-09-15T04:59:13Z | -
dc.date.available | 2023-09-15T04:59:13Z | -
dc.date.issued | 2023 | en_AU
dc.identifier.uri | https://hdl.handle.net/2123/31670 | -
dc.description.abstract | Deep learning-based segmentation methods have been widely employed for medical image analysis, especially for automatic disease diagnosis and prognosis. However, existing deep learning models rely on large amounts of annotated data, which brings substantial data acquisition and annotation costs. In practice, privacy, security, and storage concerns often limit the availability of medical images for model training. On the other hand, most deep learning models suffer performance drops when validated on unseen datasets with distribution shifts. Unsupervised domain adaptation (UDA) has been developed to address this issue by transferring knowledge from labeled source data to unlabeled target data. To further improve the data efficiency of cross-domain segmentation methods, this work explores UDA medical image segmentation with only a few labeled source samples and in a multi-source data-free setting. For UDA image segmentation with few labeled source samples, we first create a searching-based multi-style invariant mechanism that expands the data distribution with style diversity. A prototype consistency mechanism for the foreground objects is then developed to align the features of each tissue type across image styles. Segmentation performance on the target images is further improved by a cross-style self-supervised learning strategy. For the multi-source data-free UDA problem, a single-student, multi-teacher network is first established to distill knowledge from several pre-trained source models. The pre-trained models are then reweighted to reduce domain biases from the various source domains using a weighted transfer learning module. A cross-domain averaging module further preserves overall consistency by averaging model parameters. Our methods outperform several state-of-the-art UDA segmentation methods on both retinal fundus and MRI prostate segmentation tasks. | en_AU
dc.language.iso | en | en_AU
dc.subject | Computer Vision | en_AU
dc.subject | Machine Learning | en_AU
dc.subject | Medical Image Processing | en_AU
dc.subject | Retinal Fundus Segmentation | en_AU
dc.subject | MRI Prostate Segmentation | en_AU
dc.title | Data-efficient Cross-domain Medical Image Segmentation | en_AU
dc.type | Thesis | -
dc.type.thesis | Masters by Research | en_AU
dc.rights.other | The author retains copyright of this thesis. It may only be used for the purposes of research and study. It must not be used for any other purposes and may not be transmitted or shared with others without prior permission. | en_AU
usyd.faculty | SeS faculties schools::Faculty of Engineering::School of Computer Science | en_AU
usyd.degree | Master of Philosophy M.Phil | en_AU
usyd.awardinginst | The University of Sydney | en_AU
usyd.advisor | Cai, Weidong | -
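As a loose illustration of the cross-domain averaging idea mentioned in the abstract, the sketch below combines the parameters of several pre-trained source ("teacher") models into one student model via a normalised weighted average. The function name, the toy four-parameter "models", and the example weights are all hypothetical and are not taken from the thesis itself.

```python
# Hypothetical sketch of cross-domain parameter averaging: combine the
# parameters of several pre-trained source ("teacher") models into one
# student model using normalised per-domain weights. The weights and the
# toy 4-parameter "models" below are illustrative only.

def average_parameters(teacher_params, weights):
    """Weighted average of per-teacher parameter lists (weights are normalised)."""
    total = sum(weights)
    norm = [w / total for w in weights]
    student = [0.0] * len(teacher_params[0])
    for params, w in zip(teacher_params, norm):
        for i, p in enumerate(params):
            student[i] += w * p
    return student

# Three teachers, each reduced to a flat list of 4 parameters.
teachers = [
    [1.0, 2.0, 3.0, 4.0],
    [2.0, 2.0, 2.0, 2.0],
    [0.0, 4.0, 1.0, 6.0],
]
weights = [0.5, 0.3, 0.2]  # e.g. derived from per-domain transfer scores
student = average_parameters(teachers, weights)
```

In the multi-teacher setting described by the abstract, such weights could come from the weighted transfer learning module, so that less relevant source domains contribute less to the averaged student.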

