An experimental study of the neuron-level mechanisms emerging during backpropagation-based training
However deep and complex they may be, deep neural networks result from the repetition of a very simple building block: the neuron. Our work wagers that this key structural characteristic should be exploited to understand deep learning, and investigates the following research question: do neuron-level mechanisms emerge during global, end-to-end SGD training of deep networks, even though they have never been explicitly programmed?
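To make the two premises concrete, the sketch below illustrates what "repetition of a simple building block" and "global, end-to-end SGD training" mean in practice. It is a minimal illustrative example written in PyTorch, with widths, depth, data, and hyperparameters chosen arbitrarily for illustration; it is not the experimental setup studied in this work.

```python
# Minimal sketch (illustrative only): a deep net built by repeating one elementary
# unit (affine map + nonlinearity), trained end-to-end with SGD and backpropagation.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Each hidden layer consists of many copies of the same elementary neuron:
# a weighted sum of its inputs followed by a ReLU nonlinearity.
layers = []
in_dim, width = 10, 64          # placeholder dimensions
for _ in range(4):              # repeat the same building block four times
    layers += [nn.Linear(in_dim, width), nn.ReLU()]
    in_dim = width
layers += [nn.Linear(in_dim, 1)]  # scalar output head
net = nn.Sequential(*layers)

# Global, end-to-end training: a single scalar loss is backpropagated through
# every neuron; no neuron-level learning rule is programmed explicitly.
x = torch.randn(256, 10)        # placeholder inputs
y = torch.randn(256, 1)         # placeholder targets
opt = torch.optim.SGD(net.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()             # gradients reach each neuron via backpropagation
    opt.step()                  # every weight follows the same global update rule
```

Any mechanism observed at the level of an individual neuron in such a network is therefore an emergent consequence of the global optimization, which is the phenomenon this work sets out to study.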