Anomaly detection, also known as outlier detection, is the typically unsupervised task of discerning unusual samples in data; it is an important problem that has been well studied across diverse research areas and application domains. While there are several classical algorithms for anomaly detection, they require explicit feature engineering and domain knowledge, and they often scale poorly to large, high-dimensional data. Recent years have witnessed a rapid proliferation of deep neural networks, with unprecedented results in tasks as diverse as visual recognition and speech processing. In spite of the great progress made by deep learning methods in these domains, there is a relative dearth of deep learning approaches for outlier detection. This thesis investigates how best to leverage deep neural networks for the task of anomaly detection. First, we propose a robust deep autoencoder, which learns a nonlinear subspace that captures the majority of data points. We show that this technique yields noticeable improvements in anomaly detection performance on complex real-world image data, where a linear projection cannot capture sufficient structure in the data. Second, for group anomaly detection (GAD), the task of detecting anomalous collections of individual data points, we take a generative approach based on two deep generative models: the adversarial autoencoder (AAE) and the variational autoencoder (VAE). Empirical results on real-world datasets demonstrate that our approach is effective and robust in detecting group anomalies. Third, we propose a one-class neural network (OC-NN) model to detect anomalies in complex datasets. OC-NN combines the ability of deep networks to extract a progressively rich representation of data with a one-class objective that creates a tight envelope around normal data in order to separate anomalous points.
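The "tight envelope" that OC-NN learns can be made concrete with one common formulation of a one-class objective, following the $\nu$-parameterised one-class SVM but with a neural feature map; the exact objective in the thesis may differ in details:

\[
\min_{w,\,V,\,r}\;
\frac{1}{2}\|w\|_2^2
+ \frac{1}{2}\|V\|_F^2
+ \frac{1}{\nu}\cdot\frac{1}{N}\sum_{n=1}^{N}
\max\bigl(0,\; r - \langle w,\, g(V x_n)\rangle\bigr)
- r,
\]

where $g(\cdot)$ is the hidden-layer activation, $V$ and $w$ are the network weights learned jointly with the margin $r$, and $\nu \in (0,1]$ upper-bounds the fraction of training points allowed to fall outside the envelope. A point $x_n$ is flagged as anomalous when its score $\langle w, g(V x_n)\rangle$ falls below $r$.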
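The robust autoencoder idea can be illustrated with a minimal sketch: split the data as X = L + S, where L is reconstructed by a low-dimensional projection (here a rank-k SVD stands in for the autoencoder, which is a simplification; the thesis uses a nonlinear network) and S absorbs the sparse residual via l1 shrinkage. Large rows of S then serve as anomaly scores. All names and parameter values below are illustrative, not taken from the thesis.

```python
import numpy as np

def shrink(x, tau):
    # Soft-thresholding: the proximal operator of the l1 penalty on S.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def robust_split(X, lam=1.0, rank=2, n_iter=25):
    # Alternate between (1) reconstructing the "clean" part L of X - S with a
    # rank-`rank` projection (a linear surrogate for the autoencoder) and
    # (2) shrinking the residual X - L into the sparse anomaly part S.
    S = np.zeros_like(X)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        S = shrink(X - L, lam)
    return L, S

# Toy data: a rank-2 matrix with large sparse spikes injected into two rows.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 10))
X[3, 0] += 8.0
X[50, 5] -= 8.0

L, S = robust_split(X)
score = np.abs(S).sum(axis=1)  # per-row anomaly score from the sparse part
```

The spiked rows receive the largest scores because the low-rank part cannot explain them, so they are pushed into S.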
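What distinguishes group anomaly detection from pointwise detection is that the anomaly score attaches to a collection, not to a single sample: a group can be anomalous even when each member is only mildly unusual. A minimal sketch of this aggregation step, assuming some per-point scores (e.g. reconstruction errors under a trained AAE or VAE; the scores below are simulated, not from a real model):

```python
import numpy as np

def group_scores(point_scores, group_ids):
    # Aggregate hypothetical per-point anomaly scores into one score per
    # group by averaging over members; a group is flagged by the collective
    # behaviour of its points rather than by any single extreme point.
    ids = np.asarray(group_ids)
    return {g: float(np.mean(point_scores[ids == g])) for g in np.unique(ids)}

# Toy example: three groups; group "c" has a shifted score distribution, so
# each member looks only mildly unusual but the group as a whole stands out.
rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(1.0, 0.2, 30),   # group a: normal
                         rng.normal(1.0, 0.2, 30),   # group b: normal
                         rng.normal(1.8, 0.2, 30)])  # group c: anomalous
groups = ["a"] * 30 + ["b"] * 30 + ["c"] * 30
gs = group_scores(scores, groups)
```

Mean aggregation is only one choice; any permutation-invariant statistic over the group's members would fit the same scheme.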