Show simple item record

Field	Value	Language

dc.contributor.author	Tiao, Chi-Chun
dc.date.accessioned	2024-07-17T00:47:07Z
dc.date.available	2024-07-17T00:47:07Z
dc.date.issued	2024	en_AU
dc.identifier.uri	https://hdl.handle.net/2123/32803
dc.description	Includes publication
dc.description.abstract	Advances in artificial intelligence (AI) are rapidly transforming our world, with systems now surpassing human capabilities in numerous domains. Much of this progress traces back to machine learning (ML), particularly deep learning, and its ability to uncover meaningful patterns in data. However, true intelligence in AI demands more than raw predictive power; it requires a principled approach to making decisions under uncertainty. Probabilistic ML offers a framework for reasoning about the unknown in ML models through probability theory and Bayesian inference. Gaussian processes (GPs) are a quintessential probabilistic model known for their flexibility, data efficiency, and well-calibrated uncertainty estimates. GPs are integral to sequential decision-making algorithms like Bayesian optimisation (BO), which optimises expensive black-box objective functions. Despite efforts to improve GP scalability, performance gaps persist compared to neural networks (NNs) due to their lack of representation learning capabilities. This thesis aims to integrate deep learning with probabilistic methods and lend probabilistic perspectives to deep learning. Key contributions include: (1) Extending orthogonally-decoupled sparse GP approximations to incorporate nonlinear NN activations as inter-domain features, bringing predictive performance closer to NNs. (2) Framing cycle-consistent adversarial networks (CycleGANs) for unpaired image-to-image translation as variational inference (VI) in an implicit latent variable model, providing a Bayesian perspective on these deep generative models. (3) Introducing a model-agnostic reformulation of BO based on binary classification, enabling the integration of powerful modelling approaches like deep learning for complex optimisation tasks.
By enriching the interplay between deep learning and probabilistic ML, this thesis advances the foundations of AI, facilitating the development of more capable and dependable automated decision-making systems.	en_AU
dc.language.iso	en	en_AU
dc.subject	Bayesian optimisation	en_AU
dc.subject	Gaussian processes	en_AU
dc.subject	variational inference	en_AU
dc.subject	deep learning	en_AU
dc.subject	machine learning	en_AU
dc.subject	artificial intelligence	en_AU
dc.title	Probabilistic Machine Learning in the Age of Deep Learning: New Perspectives for Gaussian Processes, Bayesian Optimisation and Beyond	en_AU
dc.type	Thesis
dc.type.thesis	Doctor of Philosophy	en_AU
dc.rights.other	The author retains copyright of this thesis. It may only be used for the purposes of research and study. It must not be used for any other purposes and may not be transmitted or shared with others without prior permission.	en_AU
usyd.faculty	SeS faculties schools::Faculty of Engineering::School of Computer Science	en_AU
usyd.degree	Doctor of Philosophy Ph.D.	en_AU
usyd.awardinginst	The University of Sydney	en_AU
usyd.advisor	Ramos, Fabio
usyd.include.pub	Yes	en_AU


There are no previous versions of the item available.