Show simple item record

Field | Value | Language
dc.contributor.author | Zhang, Yinmin |
dc.date.accessioned | 2024-09-09T04:23:58Z |
dc.date.available | 2024-09-09T04:23:58Z |
dc.date.issued | 2024 | en_AU
dc.identifier.uri | https://hdl.handle.net/2123/33059 |
dc.description.abstract | Inspired by the successful application of large models in natural language processing and computer vision, both the research community and industry have increasingly focused on leveraging extensive datasets to enhance decision-making capabilities. As a prominent area of research, Offline Reinforcement Learning (RL) aims to enable agents to learn effective strategies from a static dataset without environmental interaction. This approach has the potential to utilize vast amounts of data accumulated from the real world, significantly mitigating the constraints and costs associated with online learning while reducing the risk of suboptimal decisions during the learning phase. This thesis addresses several challenges within the context of offline RL, aiming to enhance decision-making processes with a focus on improving robustness, effectiveness, and generalizability. We introduce novel methodologies that adaptively adjust the level of conservatism in policy learning, extend the capabilities of offline RL to multi-agent systems, and smooth the transition from offline to online learning. Through a combination of theoretical insights and empirical validations, this work contributes to both the understanding and practice of offline RL in complex decision-making scenarios. In conclusion, this thesis systematically explores innovative methods to overcome inherent challenges in offline RL and extends offline RL to the contexts of multi-agent systems and online continual learning. This work suggests new avenues for future research in adaptive, multi-agent, and online RL paradigms, highlighting potential directions for the offline RL research community. | en_AU
dc.language.iso | en | en_AU
dc.subject | Offline Reinforcement Learning | en_AU
dc.subject | Reinforcement Learning | en_AU
dc.subject | Deep Reinforcement Learning | en_AU
dc.subject | Multi-Agent Reinforcement Learning | en_AU
dc.subject | Decision-Making | en_AU
dc.title | Enhancing Decision-Making in Offline Reinforcement Learning: Adaptive, Multi-Agent, and Online Perspectives | en_AU
dc.type | Thesis |
dc.type.thesis | Doctor of Philosophy | en_AU
dc.rights.other | The author retains copyright of this thesis. It may only be used for the purposes of research and study. It must not be used for any other purposes and may not be transmitted or shared with others without prior permission. | en_AU
usyd.faculty | SeS faculties schools::Faculty of Engineering::School of Electrical and Information Engineering | en_AU
usyd.degree | Doctor of Philosophy Ph.D. | en_AU
usyd.awardinginst | The University of Sydney | en_AU
usyd.advisor | Yuan, Dong |
usyd.include.pub | No | en_AU
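
The abstract above refers to adaptively adjusting the level of conservatism in offline policy learning. As a minimal, hedged illustration of what "conservatism" means in this setting (this is not the method proposed in the thesis), the Python sketch below learns Q-values from a static dataset on a toy tabular MDP and applies a CQL-style penalty to actions the dataset never takes. The toy MDP, the dataset, and the fixed penalty weight alpha are assumptions made purely for the example; an adaptive scheme of the kind the abstract describes would instead tune that weight from data coverage.

# Illustrative sketch only: conservative offline Q-learning on a toy tabular
# MDP. Out-of-dataset actions are pushed down so the greedy policy stays
# close to the behaviour data. The MDP, dataset, and fixed `alpha` are
# assumptions for this example, not the thesis's method.
import numpy as np

n_states, n_actions, gamma, alpha, lr = 4, 2, 0.95, 1.0, 0.1

# A static offline dataset of (state, action, reward, next_state) transitions.
dataset = [
    (0, 1, 0.0, 1),
    (1, 1, 0.0, 2),
    (2, 1, 1.0, 3),
    (3, 0, 0.0, 3),
]

Q = np.zeros((n_states, n_actions))
counts = np.zeros((n_states, n_actions))  # how often the data takes (s, a)
for s, a, _, _ in dataset:
    counts[s, a] += 1

for _ in range(500):
    for s, a, r, s_next in dataset:
        # Standard Bellman target computed only from logged transitions.
        target = r + gamma * Q[s_next].max()
        Q[s, a] += lr * (target - Q[s, a])
    # Conservative regulariser: lower the Q-values of (state, action) pairs
    # never observed in the dataset, scaled by the penalty weight alpha.
    Q -= lr * alpha * (counts == 0)

policy = Q.argmax(axis=1)  # greedy policy w.r.t. the conservative Q-values
print("Q-values:\n", Q)
print("Greedy policy:", policy)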

