The Power of Quantum Neural Networks in the Noisy Intermediate-Scale Quantum Era
Access status:
USyd Access
Type
Thesis
Thesis type
Doctor of Philosophy
Author/s
Du, Yuxuan
Abstract
Machine learning (ML) has revolutionized the world in recent years. Despite this success, the huge computational overhead required by ML models pushes them toward the limits of Moore's law. Quantum machine learning (QML), bolstered by Google's demonstration of quantum computational supremacy, is a promising way to overcome this issue. Another cornerstone of QML is the experimental validation that quantum neural networks (QNNs) implemented on noisy intermediate-scale quantum (NISQ) chips can accomplish classification and image generation tasks. Despite this experimental progress, little is known about the theoretical advantages of QNNs. In this thesis, we explore the power of QNNs to fill this knowledge gap. First, we consider the potential advantages of QNNs in generative learning. We demonstrate that QNNs possess stronger expressive power than classical neural networks, as measured by computational complexity and entanglement entropy. Moreover, we employ QNNs to tackle synthetic generation tasks with state-of-the-art performance. Next, we propose a Grover-search-based quantum classifier that tackles specific classification tasks with a quadratic runtime speedup. Furthermore, we show that the proposed scheme supports batch gradient descent optimization, unlike previous studies; this property is crucial for training on large-scale datasets. Then, we study the capabilities and limitations of QNNs from the perspectives of optimization theory and learning theory. The results imply that large system noise can destroy the trainability of QNNs. Meanwhile, we show that QNNs can tackle parity learning and juntas learning with provable advantages. Last, we devise a quantum auto-ML scheme to enhance the trainability of QNNs in the NISQ setting. The results indicate that our proposal effectively mitigates system noise and alleviates barren plateaus for both conventional machine learning and quantum chemistry tasks.
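As an aside on the barren plateau phenomenon mentioned in the abstract, the following is a minimal, self-contained sketch (not taken from the thesis) of how gradients of a randomly initialized parameterized circuit concentrate around zero as the qubit count grows. It assumes a generic hardware-efficient ansatz of RY rotations with a ring of CZ gates, simulates the statevector with NumPy, and estimates the gradient of the expectation of Z on qubit 0 via the parameter-shift rule; the actual circuits, noise models, and learning tasks studied in the thesis may differ.

import numpy as np

def ry(theta):
    # Single-qubit rotation RY(theta) = exp(-i * theta * Y / 2).
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_single(state, gate, qubit, n):
    # Apply a 2x2 gate to one qubit of an n-qubit statevector.
    state = state.reshape([2] * n)
    state = np.tensordot(gate, state, axes=([1], [qubit]))
    state = np.moveaxis(state, 0, qubit)
    return state.reshape(-1)

def apply_cz(state, q0, q1, n):
    # Diagonal CZ: flip the sign of amplitudes where both qubits are |1>.
    state = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q0], idx[q1] = 1, 1
    state[tuple(idx)] *= -1
    return state.reshape(-1)

def circuit_expectation(params, n, layers):
    # <Z_0> after alternating layers of RY rotations and a CZ ring.
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0
    k = 0
    for _ in range(layers):
        for q in range(n):
            state = apply_single(state, ry(params[k]), q, n)
            k += 1
        for q in range(n):
            state = apply_cz(state, q, (q + 1) % n, n)
    probs = np.abs(state.reshape(2, -1)) ** 2  # qubit 0 is the leading axis
    return probs[0].sum() - probs[1].sum()

def grad_first_param(params, n, layers):
    # Parameter-shift rule: dE/dtheta = [E(theta + pi/2) - E(theta - pi/2)] / 2.
    shift = np.zeros_like(params)
    shift[0] = np.pi / 2
    return 0.5 * (circuit_expectation(params + shift, n, layers)
                  - circuit_expectation(params - shift, n, layers))

rng = np.random.default_rng(0)
for n in (2, 4, 6, 8):
    layers = 2 * n
    grads = [grad_first_param(rng.uniform(0, 2 * np.pi, n * layers), n, layers)
             for _ in range(200)]
    print(f"{n} qubits: Var[dE/dtheta_0] ~ {np.var(grads):.2e}")

Running this typically shows the gradient variance shrinking roughly exponentially with the number of qubits, which is the signature of a barren plateau and motivates the trainability-enhancing schemes summarized above.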
Date
2021
Rights statement
The author retains copyright of this thesis. It may only be used for the purposes of research and study. It must not be used for any other purposes and may not be transmitted or shared with others without prior permission.
Faculty/School
Faculty of Engineering, School of Computer Science
Awarding institution
The University of Sydney