Interpretability and Generalization of Deep Low-Level Vision Models
Field | Value | Language |
--- | --- | --- |
dc.contributor.author | Gu, Jinjin | |
dc.date.accessioned | 2024-03-07T22:59:12Z | |
dc.date.available | 2024-03-07T22:59:12Z | |
dc.date.issued | 2024 | en_AU |
dc.identifier.uri | https://hdl.handle.net/2123/32330 | |
dc.description.abstract | Low-level vision is an important class of tasks in computer vision, encompassing various image restoration problems such as image super-resolution, denoising, and deraining. In recent years, deep learning has become the de facto method for solving low-level vision problems, owing to its excellent performance and ease of use. By training on large amounts of paired data, deep low-level vision models are expected to learn rich semantic knowledge and process images intelligently in real-world applications. However, our understanding of deep learning models and low-level vision tasks remains limited, and we cannot explain the successes and failures of these models. Deep learning models are widely regarded as ``black boxes'' due to their complexity and non-linearity. We do not know what information a model uses when processing an input, or whether it has learned what we intended. When a model fails, we cannot identify the underlying cause of the problem, such as the generalization problem of low-level vision models. This research proposes interpretability analysis of deep low-level vision models to gain deeper insight into deep learning models for low-level vision tasks. I aim to elucidate the mechanisms of the deep learning approach and to discern insights regarding the successes and shortcomings of these methods. This is the first study to perform interpretability analysis on deep low-level vision models. | en_AU |
dc.language.iso | en | en_AU |
dc.subject | Deep learning | en_AU |
dc.subject | Computer Vision | en_AU |
dc.subject | Low-level Vision | en_AU |
dc.subject | Deep learning Interpretability | en_AU |
dc.subject | Generalization Problem | en_AU |
dc.subject | Super-Resolution | en_AU |
dc.title | Interpretability and Generalization of Deep Low-Level Vision Models | en_AU |
dc.type | Thesis | |
dc.type.thesis | Doctor of Philosophy | en_AU |
dc.rights.other | The author retains copyright of this thesis. It may only be used for the purposes of research and study. It must not be used for any other purposes and may not be transmitted or shared with others without prior permission. | en_AU |
usyd.faculty | SeS faculties schools::Faculty of Engineering::School of Electrical and Information Engineering | en_AU |
usyd.degree | Doctor of Philosophy Ph.D. | en_AU |
usyd.awardinginst | The University of Sydney | en_AU |
usyd.advisor | Zhou, Luping | |
usyd.include.pub | No | en_AU |