Show simple item record

Field: Value [Language]

dc.contributor.author: Le, Long Tan
dc.date.accessioned: 2025-05-07T05:56:33Z
dc.date.available: 2025-05-07T05:56:33Z
dc.date.issued: 2025 [en_AU]
dc.identifier.uri: https://hdl.handle.net/2123/33875
dc.description.abstract: The rapid integration of Internet of Things (IoT) technologies and smart devices at the network edge has driven a shift from centralized cloud architectures to edge computing to meet growing demands for efficient, low-latency, and secure data processing. Traditional centralized methods are increasingly unsustainable due to concerns over latency, bandwidth, privacy, and cost, prompting the rise of decentralized learning paradigms. Among them, Federated Edge Learning (FEL) has emerged as a key solution for highly distributed, resource-constrained edge environments. It enables multiple devices to collaboratively train a shared model without transmitting local data, enhancing privacy, scalability, and efficiency. This thesis proposes a series of FEL frameworks addressing key challenges in edge intelligence. Firstly, FeDEQ integrates consensus optimization with deep equilibrium learning via the alternating direction method of multipliers, significantly reducing communication and memory usage while maintaining strong performance, validated both theoretically and experimentally. Secondly, WAFL employs Wasserstein distributionally robust optimization to enhance model generalization under adversarial and non-i.i.d. data conditions, consistently outperforming baseline FL methods. Thirdly, FedKO combines Koopman operator theory and Reservoir Computing to process multivariate IoT time series data, enabling efficient, privacy-preserving anomaly detection through a bi-level optimization formulation. Finally, iREPO introduces an edge-friendly framework for aligning large language models via empirical preference optimization and implicit reward pairwise difference regression, achieving strong improvements in alignment benchmarks. Collectively, these contributions advance FEL by offering scalable, robust, and resource-efficient solutions, setting new directions for deploying machine learning in dynamic, heterogeneous, and resource-limited environments. [en_AU]
dc.language.iso: en [en_AU]
dc.subject: Machine Learning [en_AU]
dc.subject: Federated Learning [en_AU]
dc.subject: Distributed Optimization [en_AU]
dc.subject: Edge Networks [en_AU]
dc.title: Optimised Resource-Constrained and Heterogeneity-Aware Federated Edge Learning [en_AU]
dc.type: Thesis
dc.type.thesis: Doctor of Philosophy [en_AU]
dc.rights.other: The author retains copyright of this thesis. It may only be used for the purposes of research and study. It must not be used for any other purposes and may not be transmitted or shared with others without prior permission. [en_AU]
usyd.faculty: SeS faculties schools::Faculty of Engineering::School of Computer Science [en_AU]
usyd.degree: Doctor of Philosophy Ph.D. [en_AU]
usyd.awardinginst: The University of Sydney [en_AU]
usyd.advisor: Tran, Nguyen





There are no previous versions of the item available.