Optimised Resource-Constrained and Heterogeneity-Aware Federated Edge Learning
Access status:
USyd Access
Type
Thesis
Thesis type
Doctor of Philosophy
Author/s
Le, Long Tan
Abstract
The rapid integration of Internet of Things (IoT) technologies and smart devices at the network edge has driven a shift from centralized cloud architectures to edge computing to meet growing demands for efficient, low-latency, and secure data processing. Traditional centralized methods are increasingly unsustainable due to concerns over latency, bandwidth, privacy, and cost, prompting the rise of decentralized learning paradigms. Among them, Federated Edge Learning (FEL) has emerged as a key solution for highly distributed, resource-constrained edge environments. It enables multiple devices to collaboratively train a shared model without transmitting local data, enhancing privacy, scalability, and efficiency. This thesis proposes a series of FEL frameworks addressing key challenges in edge intelligence. Firstly, FeDEQ integrates consensus optimization with deep equilibrium learning via the alternating direction method of multipliers, significantly reducing communication and memory usage while maintaining strong performance, validated both theoretically and experimentally. Secondly, WAFL employs Wasserstein distributionally robust optimization to enhance model generalization under adversarial and non-i.i.d. data conditions, consistently outperforming baseline FL methods. Thirdly, FedKO combines Koopman operator theory and Reservoir Computing to process multivariate IoT time series data, enabling efficient, privacy-preserving anomaly detection through a bi-level optimization formulation. Finally, iREPO introduces an edge-friendly framework for aligning large language models via empirical preference optimization and implicit reward pairwise difference regression, achieving strong improvements in alignment benchmarks. Collectively, these contributions advance FEL by offering scalable, robust, and resource-efficient solutions, setting new directions for deploying machine learning in dynamic, heterogeneous, and resource-limited environments.
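The abstract's core premise, that clients collaboratively train a shared model without transmitting local data, can be illustrated with a minimal federated-averaging-style sketch. This toy one-parameter regression is illustrative only and not code from the thesis; all names, data, and hyperparameters are hypothetical.

```python
import random

def local_update(w, data, lr=0.1, epochs=5):
    """One client's local training: SGD steps on a private 1-D
    least-squares objective (w * x ~ y). Raw data never leaves the client."""
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x
    return w

def fedavg_round(global_w, client_datasets):
    """One communication round: each client trains locally and sends back
    only its updated weight; the server averages, weighted by data size."""
    local_ws = [local_update(global_w, d) for d in client_datasets]
    sizes = [len(d) for d in client_datasets]
    return sum(w * n for w, n in zip(local_ws, sizes)) / sum(sizes)

random.seed(0)
# Non-i.i.d. split: each client holds noisy samples of y = 3x over
# a different region of the input space.
clients = [
    [(x, 3 * x + random.gauss(0, 0.1)) for x in (0.5, 1.0, 1.5)],
    [(x, 3 * x + random.gauss(0, 0.1)) for x in (2.0, 2.5)],
]
w = 0.0
for _ in range(20):
    w = fedavg_round(w, clients)
# w now approximates the shared slope 3 without any client sharing its samples.
```

The server only ever sees the scalar weights, which is the privacy property the abstract attributes to FEL; the frameworks in the thesis build on this basic pattern with consensus optimization, robustness, and alignment objectives.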
Date
2025
Rights statement
The author retains copyright of this thesis. It may only be used for the purposes of research and study. It must not be used for any other purposes and may not be transmitted or shared with others without prior permission.
Faculty/School
Faculty of Engineering, School of Computer Science
Awarding institution
The University of Sydney