Towards Multi-modal Interpretation and Explanation
Field | Value | Language |
dc.contributor.author | Luo, Siwen | |
dc.date.accessioned | 2024-01-08T01:01:47Z | |
dc.date.available | 2024-01-08T01:01:47Z | |
dc.date.issued | 2023 | en_AU |
dc.identifier.uri | https://hdl.handle.net/2123/32057 | |
dc.description | Includes publication | |
dc.description.abstract | A multimodal task processes different modalities simultaneously. Visual Question Answering, as a type of multimodal task, aims to answer a natural language question based on a given image. To understand and process the image, many models for the visual question answering task encode object regions through convolutional neural network based backbones. Such an image processing method captures the visual features of the object regions in the image. However, the relations between objects are also important for comprehensively understanding an image when answering complex questions, and whether such relational information is captured by the visual features of the object regions remains opaque. To explicitly extract such relational information in images for visual question answering tasks, this research explores an interpretable and structured graph representation to encode the relations between objects. This research works on three variants of the Visual Question Answering task with different types of images, including photo-realistic images, daily scene pictures and document pages. Different task-specific relational graphs are used and proposed to explicitly capture and encode these relations for use by the proposed models. Such a relational graph provides an interpretable representation of the model inputs and proves effective in improving model performance in output prediction. In addition, to improve the interpretation of the model’s predictions, this research also explores suitable local interpretation methods to apply to VQA models. | en_AU |
dc.language.iso | en | en_AU |
dc.subject | Explainable AI | en_AU |
dc.subject | Interpretable Artificial Intelligence | en_AU |
dc.subject | Multimodal | en_AU |
dc.subject | Visual Question Answering | en_AU |
dc.title | Towards Multi-modal Interpretation and Explanation | en_AU |
dc.type | Thesis | |
dc.type.thesis | Doctor of Philosophy | en_AU |
dc.rights.other | The author retains copyright of this thesis. It may only be used for the purposes of research and study. It must not be used for any other purposes and may not be transmitted or shared with others without prior permission. | en_AU |
usyd.faculty | SeS faculties schools::Faculty of Engineering::School of Computer Science | en_AU |
usyd.degree | Doctor of Philosophy Ph.D. | en_AU |
usyd.awardinginst | The University of Sydney | en_AU |
usyd.advisor | Poon, Josiah | |
usyd.include.pub | Yes | en_AU |