In this talk we will address the problem of learning Deep Neural Networks (DNNs) through the use of smart initializations or regularizations. Moreover, we will look at recent applications of DNNs to structured output problems (such as image labeling or facial landmark detection).

- Introduction to supervised learning: why use regularization? why look for sparsity?
- Introduction to the perceptron, the multilayer perceptron, and back-propagation (sketched in code below)
- Deep Neural Networks and the vanishing gradient problem (illustrated numerically below)
- Smart initializations and topologies (stacked autoencoders, convolutional neural networks)
- Regularization (denoising and contractive autoencoders, dropout, multi-objective training; a dropout sketch follows the outline)
- Deep architectures for high-dimensional output or structured output problems
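
To make the back-propagation item concrete, below is a minimal NumPy sketch of a one-hidden-layer perceptron trained on the XOR toy problem. The hidden-layer size, learning rate, step count, and the 1/sqrt(fan-in) initialization scale are illustrative choices, not material from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, the classic task a single perceptron cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A simple "smart" initialization heuristic: scale weights by
# 1/sqrt(fan_in) so pre-activations stay in the sigmoid's sensitive range.
W1 = rng.normal(0.0, 1 / np.sqrt(2), (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1 / np.sqrt(8), (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # network output

    # Backward pass for the squared-error loss 0.5 * sum((out - y)**2);
    # each delta is the loss gradient w.r.t. a layer's pre-activation.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Plain gradient-descent updates.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # typically close to [0, 1, 1, 0]
```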
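
The vanishing gradient problem can be previewed with a line of arithmetic: the sigmoid's derivative never exceeds 0.25, so back-propagating through a stack of L sigmoid layers multiplies the error signal by at most 0.25^L (times the weights). This is one motivation for the smart initializations and alternative topologies listed above.

```python
# The sigmoid derivative s(z) * (1 - s(z)) peaks at 0.25, so a chain of
# L sigmoid layers shrinks the back-propagated signal by up to 0.25**L.
for L in (2, 5, 10, 20):
    print(f"depth {L:2d}: gradient attenuation factor <= {0.25 ** L:.1e}")
```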
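
Finally, a minimal sketch of dropout as a regularizer, assuming the common "inverted dropout" formulation in which surviving activations are rescaled at training time so the network is used unchanged at test time; the drop probability and tensor shape are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(h, p_drop=0.5, train=True):
    """Inverted dropout: at training time, zero each unit with probability
    p_drop and rescale the survivors by 1 / (1 - p_drop); at test time,
    return the activations unchanged."""
    if not train:
        return h
    mask = rng.random(h.shape) >= p_drop   # keep mask
    return h * mask / (1.0 - p_drop)

h = np.ones((2, 4))
print(dropout(h))               # about half the units zeroed, survivors scaled to 2.0
print(dropout(h, train=False))  # identity at test time
```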