Adversarial Data Generation for Robust Deep Learning
Access status: USyd Access
Type: Thesis
Thesis type: Doctor of Philosophy
Author/s: Yang, Shuo
Abstract:
The success of deep learning is inseparable from the support of massive data. A vast amount of high-quality data is one of the most crucial prerequisites for training a robust deep learning model. However, raw data collected in the real world usually has defects that prevent its direct use for training. In this thesis, we propose to employ the generative adversarial learning framework to generate high-quality data for training robust deep learning models. We focus on generating two of the most common data types: time series and images.

For time series, we propose an adversarial recurrent imputation model that reconstructs incomplete time series. Specifically, our model modifies the traditional Recurrent Neural Network (RNN) architecture to better capture temporal dependencies and feature correlations, and combines them in a principled way to impute the missing values. In addition, we employ an element-wise generative adversarial learning framework to train the modified recurrent structure to generate more realistic data. Experiments on several real-world time series datasets demonstrate encouraging improvements in both imputation performance and downstream classification accuracy.

For image classification, recent studies have found that deep learning models trained on natural images can be vulnerable to deliberately crafted input perturbations, known as adversarial perturbations. In this work, we improve adversarial example generation and the traditional adversarial training (AT) framework in four respects: adaptive perturbation size, diversified adversarial examples, stabilizing AT with massive contrastive adversaries, and adversarial robustness through representation disentanglement. The proposed models all demonstrate remarkable empirical improvements in the quality of adversarial examples and in model robustness.
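To make the element-wise adversarial imputation idea concrete, the sketch below shows one common way such a framework can be set up in PyTorch: a recurrent generator fills in the missing entries of a masked series, while an element-wise discriminator scores each entry as observed or imputed. This is a minimal illustration of the general approach under stated assumptions, not the thesis model; all class names, shapes, loss terms, and weights are illustrative.

import torch
import torch.nn as nn

class RNNImputer(nn.Module):
    # GRU-based generator: fills the missing entries of a (batch, time, features) series.
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_features * 2, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x, mask):
        # mask is 1 where a value was observed and 0 where it is missing (zero-filled).
        h, _ = self.rnn(torch.cat([x * mask, mask], dim=-1))
        x_hat = self.out(h)                          # estimate at every time step
        return mask * x + (1 - mask) * x_hat         # keep observed values, impute the rest

class ElementwiseDiscriminator(nn.Module):
    # Element-wise critic: probability that each entry is observed rather than imputed.
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):
        h, _ = self.rnn(x)
        return torch.sigmoid(self.out(h))

def train_step(gen, disc, opt_g, opt_d, x, mask):
    bce = nn.BCELoss()
    # 1) Discriminator: observed entries are "real" (target 1), imputed entries "fake" (target 0).
    d_loss = bce(disc(gen(x, mask).detach()), mask)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator: reconstruct the observed entries and fool the critic on the missing ones.
    imputed = gen(x, mask)
    p = disc(imputed)
    recon = (((imputed - x) ** 2) * mask).sum() / mask.sum().clamp(min=1)
    adv = -(torch.log(p + 1e-8) * (1 - mask)).sum() / (1 - mask).sum().clamp(min=1)
    g_loss = recon + 0.1 * adv                       # 0.1 is an arbitrary trade-off weight
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Illustrative usage on random data: 8 series, 20 steps, 5 features, roughly 30% missing.
x = torch.randn(8, 20, 5)
mask = (torch.rand_like(x) > 0.3).float()
gen, disc = RNNImputer(5), ElementwiseDiscriminator(5)
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
print(train_step(gen, disc, opt_g, opt_d, x * mask, mask))

The element-wise discriminator is what distinguishes this setup from a plain sequence-level GAN: because it judges every entry separately, the generator receives feedback specifically on the imputed positions rather than on the series as a whole.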
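The adversarial training (AT) framework mentioned above builds on iteratively generated adversarial examples. The sketch below shows a standard PGD-style generation loop and a single AT update step in PyTorch; it illustrates only the conventional fixed-perturbation baseline, not the adaptive, diversified, contrastive, or disentanglement-based improvements proposed in the thesis, and the hyperparameter values are illustrative assumptions.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8/255, alpha=2/255, steps=10):
    # L_inf projected gradient descent: repeatedly step along the sign of the loss
    # gradient and project back into the epsilon-ball around the clean input.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                             # ascend the loss
            x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)   # project to the ball
            x_adv = x_adv.clamp(0, 1)                                       # stay a valid image
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    # One AT update: generate adversarial examples for the current model,
    # then minimize the loss on those examples instead of the clean batch.
    model.eval()          # one common choice: freeze dropout/BatchNorm statistics during the attack
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Illustrative usage with any image classifier `model` that takes inputs in [0, 1]:
#   x_adv = pgd_attack(model, images, labels)
#   loss = adversarial_training_step(model, optimizer, images, labels)

Thesis contributions such as adaptive perturbation size would replace the fixed epsilon above, and diversified or contrastive adversaries would change how the inner loop explores the perturbation ball; the baseline is shown only to fix the terminology.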
Date: 2021
Rights statement: The author retains copyright of this thesis. It may only be used for the purposes of research and study. It must not be used for any other purposes and may not be transmitted or shared with others without prior permission.
Faculty/School: Faculty of Engineering, School of Computer Science
Awarding institution: The University of Sydney