Show simple item record

Field | Value | Language
dc.contributor.author | Chen, Chen |
dc.date.accessioned | 2025-07-08T05:36:34Z |
dc.date.available | 2025-07-08T05:36:34Z |
dc.date.issued | 2025 | en_AU
dc.identifier.uri | https://hdl.handle.net/2123/34082 |
dc.description.abstract | Text-based games (TBGs) are complex environments in which users or computer agents interact through text to achieve game goals. A text-based gameplay episode resembles the task episode of an intelligent computing agent: understanding textual input and reaching a target that satisfies the user. Designing text-based game agents that autonomously learn from gameplay and reach game goals is challenging. To understand this challenge and motivate our approach, we first systematically classify and analyze methods related to text-based game agents. Based on this encoder analysis and our observations, we propose a new model that improves a text-based game agent's trajectory learning, allowing the agent to learn from its gameplay experience and improve its performance. In this thesis, we first review the literature on text-based games and analyze the associated methods. We then implement a standardized agent and perform ablation tests on selected encoder models, which allows us to choose the best-performing encoder architecture. Next, with a Transformer-based model as our base, we propose a model that replaces the widely adopted LSTM trajectory encoder with a GPT encoder to improve trajectory learning. To capture the state relationships of the text-based game environment, we extend the relative positional embedding used in traditional NLP tasks to state trajectory learning under the reinforcement learning framework, and we analyze the critical differences between relative positional embedding in traditional NLP tasks and in reinforcement learning state observation trajectory learning. We propose an exploration trajectory embedding, which is, to our knowledge, the first relative positional embedding used in this field. Our experiments show that our model, together with the newly introduced relative positional embedding, brings substantial improvements to text-based game trajectory learning. | en_AU
dc.language.iso | en | en_AU
dc.subject | Reinforcement Learning | en_AU
dc.subject | Text-Based Games | en_AU
dc.subject | Deep Learning | en_AU
dc.subject | Natural Language Processing | en_AU
dc.title | Leveraging Relative Position for Trajectory Learning in Text-based Games | en_AU
dc.type | Thesis |
dc.type.thesis | Masters by Research | en_AU
dc.rights.other | The author retains copyright of this thesis. It may only be used for the purposes of research and study. It must not be used for any other purposes and may not be transmitted or shared with others without prior permission. | en_AU
usyd.faculty | SeS faculties schools::Faculty of Engineering::School of Computer Science | en_AU
usyd.degree | Master of Philosophy (M.Phil) | en_AU
usyd.awardinginst | The University of Sydney | en_AU
usyd.advisor | Poon, Josiah |
usyd.include.pub | No | en_AU
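
The abstract above proposes replacing an LSTM trajectory encoder with a GPT-style Transformer and extending relative positional embeddings from NLP sequence modelling to reinforcement-learning state trajectories. As a rough illustration of the underlying mechanism, the sketch below adds a learned relative-position bias (in the spirit of Shaw et al., 2018) to single-head self-attention over a trajectory of observation embeddings. All names and dimensions here are illustrative assumptions; the thesis's actual exploration trajectory embedding is not specified in this record and is not reproduced here.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class RelativePositionSelfAttention(nn.Module):
        """Single-head self-attention with an additive relative-position bias.
        A generic sketch only: it shows the mechanism that relative positional
        embeddings add to attention, not the thesis's specific embedding."""

        def __init__(self, d_model: int, max_rel_dist: int = 32):
            super().__init__()
            self.d_model = d_model
            self.max_rel_dist = max_rel_dist
            self.q_proj = nn.Linear(d_model, d_model)
            self.k_proj = nn.Linear(d_model, d_model)
            self.v_proj = nn.Linear(d_model, d_model)
            # One learned scalar bias per clipped relative distance in
            # [-max_rel_dist, +max_rel_dist].
            self.rel_bias = nn.Embedding(2 * max_rel_dist + 1, 1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, steps, d_model), one row per encoded game observation.
            B, T, _ = x.shape
            q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
            scores = q @ k.transpose(-2, -1) / self.d_model ** 0.5  # (B, T, T)
            # Relative offsets i - j between trajectory steps, clipped and
            # shifted into valid embedding indices.
            pos = torch.arange(T, device=x.device)
            rel = (pos[:, None] - pos[None, :]).clamp(
                -self.max_rel_dist, self.max_rel_dist
            )
            scores = scores + self.rel_bias(rel + self.max_rel_dist).squeeze(-1)
            return F.softmax(scores, dim=-1) @ v

    # Example: encode a batch of 2 trajectories, each 10 observations long.
    layer = RelativePositionSelfAttention(d_model=64)
    out = layer(torch.randn(2, 10, 64))
    print(out.shape)  # torch.Size([2, 10, 64])

The key design point the abstract hinges on is visible in the bias lookup: attention scores depend on the offset between trajectory steps rather than their absolute indices, which is what lets the same learned relationships apply anywhere along a gameplay trajectory.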


