Show simple item record

Field | Value | Language
dc.contributor.author | Zhang, Kaining |
dc.date.accessioned | 2024-05-21T03:08:35Z |
dc.date.available | 2024-05-21T03:08:35Z |
dc.date.issued | 2023 | en_AU
dc.identifier.uri | https://hdl.handle.net/2123/32569 |
dc.description | Includes publication |
dc.description.abstract | Recent advancements in machine learning have revolutionised research across various fields. Despite their success, conventional learning techniques are hindered by significant computational resource and energy requirements. Prompted by recent experimental breakthroughs in quantum computing, variational quantum machine learning (QML), machine learning integrated with variational quantum circuits (VQCs), has emerged as a promising alternative. Nonetheless, the theoretical framework underpinning the advantages of variational QML is still rudimentary. Specifically, the training of VQCs faces several challenges, such as the barren plateau problem, where the gradient diminishes exponentially as the number of qubits increases. A related issue arises in variational QML training, where the convergence rate can be exponentially small. In this thesis, we present theoretically guaranteed solutions to these challenges. First, we construct innovative circuit architectures to address the vanishing-gradient problem in deep VQCs. We propose quantum controlled-layer and quantum ResNet structures, and show that the lower bound on the expected gradient norm is unaffected by increases in the number of qubits and the circuit depth. Next, we introduce an initialisation strategy to mitigate the vanishing-gradient issue in general deep quantum circuits. We prove that Gaussian-initialised parameters ensure that the gradient norm decays no faster than inverse-polynomially as the number of qubits and the circuit depth grow. Finally, we propose a novel and effective theory for analysing the training of quantum neural networks of moderate depth. We prove that, under certain randomness conditions on the circuits and datasets, training converges linearly at a rate inversely proportional to the dataset size. Our approach surpasses previous results, achieving exponentially larger convergence rates at modest depth or, conversely, requiring exponentially less depth for equivalent rates. | en_AU
dc.language.iso | en | en_AU
dc.subject | quantum machine learning | en_AU
dc.subject | quantum algorithm | en_AU
dc.title | Training Theory of Variational Quantum Machine Learning | en_AU
dc.type | Thesis |
dc.type.thesis | Doctor of Philosophy | en_AU
dc.rights.other | The author retains copyright of this thesis. It may only be used for the purposes of research and study. It must not be used for any other purposes and may not be transmitted or shared with others without prior permission. | en_AU
usyd.faculty | SeS faculties schools::Faculty of Engineering::School of Computer Science | en_AU
usyd.degree | Doctor of Philosophy Ph.D. | en_AU
usyd.awardinginst | The University of Sydney | en_AU
usyd.advisor | Tao, Dacheng |
usyd.include.pub | Yes | en_AU
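
The abstract above rests on two quantitative claims: randomly initialised deep circuits suffer barren plateaus, with gradient magnitudes shrinking exponentially in the qubit count, while Gaussian initialisation with a depth-dependent variance keeps the decay inverse-polynomial. The following minimal sketch, written for this record rather than taken from the thesis, simulates a hardware-efficient RY/CZ ansatz in plain NumPy and compares the mean squared gradient of the first parameter under uniform and Gaussian initialisation. The ansatz layout, the observable Z on qubit 0, and the variance choice 1/(4L) are illustrative assumptions, not the thesis's exact constructions.

# Illustrative sketch only: a NumPy statevector simulation of a random
# RY/CZ ansatz, comparing gradient magnitudes under uniform and Gaussian
# parameter initialisation. The variance 1/(4*L) is an assumed,
# depth-dependent choice in the spirit of the thesis, not its exact value.
import numpy as np

rng = np.random.default_rng(0)


def apply_ry(state, theta, qubit, n):
    # RY(t) = [[cos(t/2), -sin(t/2)], [sin(t/2), cos(t/2)]] applied to `qubit`.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    psi = state.reshape([2] * n)
    a = np.take(psi, 0, axis=qubit)  # amplitudes with `qubit` in |0>
    b = np.take(psi, 1, axis=qubit)  # amplitudes with `qubit` in |1>
    return np.stack([c * a - s * b, s * a + c * b], axis=qubit).reshape(-1)


def apply_cz(state, q1, q2, n):
    # CZ flips the sign of amplitudes where both qubits are |1>.
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    psi[tuple(idx)] *= -1.0
    return psi.reshape(-1)


def energy(params, n, L):
    # <Z_0> after L layers of per-qubit RY rotations plus a CZ entangling ring.
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0
    angles = params.reshape(L, n)
    for layer in range(L):
        for q in range(n):
            state = apply_ry(state, angles[layer, q], q, n)
        for q in range(n):
            state = apply_cz(state, q, (q + 1) % n, n)
    psi = state.reshape([2] * n)
    p0 = np.sum(np.abs(np.take(psi, 0, axis=0)) ** 2)
    return 2.0 * p0 - 1.0  # <Z_0> = P(qubit 0 in |0>) - P(qubit 0 in |1>)


def grad_first(params, n, L):
    # Exact parameter-shift rule for the first rotation angle.
    shift = np.zeros_like(params)
    shift[0] = np.pi / 2
    return 0.5 * (energy(params + shift, n, L) - energy(params - shift, n, L))


L, samples = 20, 50
for n in (2, 4, 6, 8):
    uni = [grad_first(rng.uniform(0, 2 * np.pi, n * L), n, L)
           for _ in range(samples)]
    sigma = np.sqrt(1.0 / (4 * L))  # assumed depth-dependent variance
    gau = [grad_first(rng.normal(0.0, sigma, n * L), n, L)
           for _ in range(samples)]
    print(f"n={n}:  E[g^2] uniform={np.mean(np.square(uni)):.3e}  "
          f"gaussian={np.mean(np.square(gau)):.3e}")

This runs in seconds on a laptop. Under these assumptions the uniform column typically shrinks rapidly as n grows, while the Gaussian column stays comparatively flat, mirroring the exponential-versus-inverse-polynomial contrast described in the abstract.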

