
Field | Value | Language
dc.contributor.author | Zhang, Shaojun |
dc.date.accessioned | 2022-01-10T04:23:36Z |
dc.date.available | 2022-01-10T04:23:36Z |
dc.date.issued | 2021 | en_AU
dc.identifier.uri | https://hdl.handle.net/2123/27303 |
dc.description.abstract | The recent surge of deep learning challenges scheduling and resource management in clusters. Deep learning workloads comprise large numbers of parallel tasks with complex intra-job dependencies, and the cluster scheduler must maintain efficient resource utilization to sustain high service quality. However, traditional heuristics can hardly capture the diversity of workload patterns, and the underlying optimization problem is NP-hard. Recent work therefore introduces deep reinforcement learning (DRL) as a potential alternative for scheduling. DRL tackles the optimization problem effectively but has its own limitations. First, the deep neural network is hard to interpret, so the policy operates as a black box. Moreover, the model is vulnerable to input perturbations, which can destabilize the system. In this research, we solve key problems of scheduling deep learning workloads in clusters with both heuristic and DRL-based schedulers. First, we propose a scheduling system for deep learning inference that incorporates fine-grained batching and fair scheduling; it collaborates with existing deep learning frameworks to deliver high throughput and low latency. Second, we develop a multi-level explanation framework for the DRL-based policy, which uses interpretable features, simple machine learning models, and heuristics to approximate and explain the policy. Third, we propose job perturbation to investigate the robustness of the DRL-based scheduler, showing that a user can craft gradient-guided perturbations to job features or structures to obtain more computational resources and have her tasks dispatched sooner. Finally, we propose an adversarial training framework to improve the robustness of the DRL-based scheduler: by learning from deliberate perturbations during training, the scheduler lowers both the success rate of perturbations and the benefit gained by the perturbed job at test time. | en_AU
dc.language.iso | en | en_AU
dc.subject | Scheduling | en_AU
dc.subject | deep learning | en_AU
dc.subject | robustness | en_AU
dc.subject | deep reinforcement learning | en_AU
dc.subject | cluster | en_AU
dc.subject | cloud | en_AU
dc.title | Deep Learning and Job Scheduling in Clusters | en_AU
dc.type | Thesis |
dc.type.thesis | Doctor of Philosophy | en_AU
dc.rights.other | The author retains copyright of this thesis. It may only be used for the purposes of research and study. It must not be used for any other purposes and may not be transmitted or shared with others without prior permission. | en_AU
usyd.faculty | SeS faculties schools::Faculty of Engineering::School of Computer Science | en_AU
usyd.degree | Doctor of Philosophy Ph.D. | en_AU
usyd.awardinginst | The University of Sydney | en_AU
usyd.advisor | Zomaya, Albert |

