Geometric Signal Processing with Graph Neural Networks
Access status
Open Access
Type
Thesis
Thesis type
Doctor of Philosophy
Author/s
Zhou, Bingxin
Abstract
Deep learning is among the most prominent techniques behind the phenomenal success of many modern applications. The demand for massive data analysis in image recognition, speech processing, and text understanding has spawned remarkable advances in deep learning across diverse research areas. One such advance is graph neural networks (GNNs), an emerging class of deep neural networks that encodes the internal structural relationships of its inputs. The mainstream of GNN research seeks an adequate numerical representation of graphs, which is vital to the predictive performance of machine learning models. Graph representation learning has many real-world applications, such as drug repurposing, protein classification, epidemic spread control, and social network analysis. The rapid development of GNNs over the last five years has exposed several design flaws, including over-smoothing, vulnerability to perturbation, limited expressivity, and missing explainability. Meanwhile, persistent enthusiasm in this research area has accumulated experience in solving complicated problems, such as compressing size-variant graphs and capturing time-variant graph dynamics. The ambition of this thesis is to shed mathematical light on these issues. The permutation-invariant design of graph compression is supported by manifold learning, robust graph smoothing relies on the principles of convex optimization, and efficient dynamic graph embedding leverages global spectral transforms and power-method singular value decomposition. The author believes that the effectiveness of a deep learning design should not be judged solely by its performance on particular datasets, and that modifications to a black-box model should go beyond fine-tuning tricks. Reliable deep learning calls for models designed with rigorous mathematics, so that 'computer science' becomes actual science one day.
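The abstract names power-method singular value decomposition as one ingredient of the efficient dynamic graph embedding. As an illustrative sketch only, not the thesis's actual algorithm, the snippet below shows the standard power iteration for the leading singular triplet of a matrix; the function name power_method_svd and its tolerance and iteration parameters are assumptions made for this example.

```python
import numpy as np

def power_method_svd(A, num_iters=100, tol=1e-8):
    """Approximate the leading singular triplet (u, s, v) of A
    via power iteration on A^T A. Illustrative sketch only; the
    name and parameters are hypothetical, not from the thesis."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(A.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(num_iters):
        # One power-iteration step: v <- A^T A v, then renormalize.
        w = A.T @ (A @ v)
        norm_w = np.linalg.norm(w)
        if norm_w == 0.0:
            break
        w /= norm_w
        # Eigenvalues of A^T A are nonnegative, so no sign flips:
        # a small step between iterates means convergence.
        if np.linalg.norm(w - v) < tol:
            v = w
            break
        v = w
    s = np.linalg.norm(A @ v)   # leading singular value
    u = (A @ v) / s             # corresponding left singular vector
    return u, s, v

# Usage: compare against numpy's full SVD on a random matrix.
A = np.random.default_rng(1).standard_normal((50, 20))
u, s, v = power_method_svd(A)
print(s, np.linalg.svd(A, compute_uv=False)[0])  # values should agree closely
```

In a dynamic-graph setting, one would plausibly warm-start v from the previous snapshot's singular vector, which is how power iteration typically amortizes its cost over a sequence of evolving graphs.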
Date
2022
Rights statement
The author retains copyright of this thesis. It may only be used for the purposes of research and study. It must not be used for any other purposes and may not be transmitted or shared with others without prior permission.
Faculty/School
The University of Sydney Business School, Discipline of Business Analytics
Awarding institution
The University of Sydney