Learning Disentangled Representations
Field | Value | Language
dc.contributor.author | Zhu, Xinqi | |
dc.date.accessioned | 2023-01-09T04:33:16Z | |
dc.date.available | 2023-01-09T04:33:16Z | |
dc.date.issued | 2022 | en_AU |
dc.identifier.uri | https://hdl.handle.net/2123/29858 | |
dc.description.abstract | Artificial intelligence systems seek to learn better representations, and one of the most desirable properties of these representations is disentanglement. Disentangled representations offer interpretability and generalizability: through them, the world around us can be decomposed into explanatory factors of variation and thus be more easily understood, not only by machines but also by humans. Disentanglement is akin to reverse engineering a video game, where, by exploring its open world, we must figure out which underlying controllable factors actually render and generate its dynamics. This thesis discusses how such "reverse engineering" can be achieved with deep learning techniques in the computer vision domain. Although many works have tackled this challenging problem, this thesis shows that a highly effective yet largely neglected ingredient is the modeling of visual variation. We show from various perspectives that by integrating the modeling of visual variation into generative models, we can achieve unsupervised disentanglement performance superior to that of prior work. Specifically, this thesis covers novel methods built on technical insights such as variation consistency, variation predictability, perceptual simplicity, spatial constriction, Lie group decomposition, and the contrastive nature of semantic changes. Beyond the proposed methods, this thesis also touches on variational autoencoders, generative adversarial networks, latent space examination, unsupervised disentanglement metrics, and neural network architectures. We hope the observations, analyses, and methods presented in this thesis inspire and contribute to future work in disentanglement learning and related machine learning fields. | en_AU
dc.language.iso | en | en_AU |
dc.subject | disentangled representation | en_AU |
dc.subject | generative models | en_AU |
dc.subject | variation consistency | en_AU |
dc.subject | Lie Group VAE | en_AU |
dc.subject | interpretable representation | en_AU |
dc.subject | disentanglement learning | en_AU |
dc.title | Learning Disentangled Representations | en_AU |
dc.type | Thesis | |
dc.type.thesis | Doctor of Philosophy | en_AU |
dc.rights.other | The author retains copyright of this thesis. It may only be used for the purposes of research and study. It must not be used for any other purposes and may not be transmitted or shared with others without prior permission. | en_AU |
usyd.faculty | SeS faculties schools::Faculty of Engineering | en_AU |
usyd.faculty | SeS faculties schools::Faculty of Engineering::School of Computer Science | en_AU |
usyd.degree | Doctor of Philosophy Ph.D. | en_AU |
usyd.awardinginst | The University of Sydney | en_AU |
usyd.advisor | Tao, Dacheng | |