Information processing in a Deep Neural Network: the content of the image (the handwritten digit 3) is read in. The intermediate layers process the information one after another, distributing it across the artificial neurons. In the final stage, the output layer produces a number that should match the input digit. The artificial neurons specialise: one neuron becomes active when, for example, a three is shown; another becomes active when a nine is shown. Figure: David Ehrlich, CIDBN, University of Göttingen
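
To make the layered processing in the figure concrete, here is a minimal sketch of such a digit classifier in Python, assuming PyTorch and 28×28 inputs in the style of MNIST. The architecture and layer sizes are illustrative choices, not the network analysed in the study.

    import torch
    import torch.nn as nn

    # A small feedforward network for 28x28 digit images (MNIST-style).
    # The layer sizes here are illustrative, not taken from the study.
    model = nn.Sequential(
        nn.Flatten(),         # 28x28 image -> 784-dimensional vector
        nn.Linear(784, 128),  # first intermediate layer
        nn.ReLU(),
        nn.Linear(128, 64),   # second intermediate layer
        nn.ReLU(),
        nn.Linear(64, 10),    # output layer: one unit per digit 0-9
    )

    # Forward pass: the input is handed from layer to layer, distributed
    # across the artificial neurons of each intermediate layer.
    image = torch.rand(1, 28, 28)           # stand-in for a handwritten "3"
    logits = model(image)
    predicted_digit = logits.argmax(dim=1)  # should match the shown digit
    print(predicted_digit.item())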

Looking deep into the network

Göttingen research team explores information processing in Deep Neural Networks

Artificial neural networks are everywhere in research and technology, as well as in everyday applications such as speech recognition. Despite this, researchers still do not fully understand what exactly is going on deep inside these networks. To find out, researchers at the Göttingen Campus Institute for Dynamics of Biological Networks (CIDBN) at the University of Göttingen and the Max Planck Institute for Dynamics and Self-Organisation (MPI-DS), among them MBExC member Viola Priesemann, carried out an information-theoretic analysis of Deep Learning, a special form of machine learning. They found that the further information is processed within the network, the less complex its representation becomes. They also observed a training effect: the more often a network is “trained” on certain data, the fewer “neurons” are needed simultaneously to process the information. The results were published in Transactions on Machine Learning Research.
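
One intuitive way to see what “less complex” can mean for a layer’s representation is to ask how easily a simple read-out can recover the digit label from that layer’s activations. The sketch below, assuming PyTorch and scikit-learn with a toy network and random stand-in data, fits a linear probe per layer; it only illustrates the general idea and is not the information-theoretic measure used in the study.

    import torch
    import torch.nn as nn
    from sklearn.linear_model import LogisticRegression

    # Toy setup: a small untrained network and random stand-in data.
    # Network, data and probe are illustrative assumptions throughout.
    torch.manual_seed(0)
    model = nn.Sequential(
        nn.Flatten(),
        nn.Linear(784, 128), nn.ReLU(),
        nn.Linear(128, 64), nn.ReLU(),
        nn.Linear(64, 10),
    )
    images = torch.rand(500, 28, 28)
    labels = torch.randint(0, 10, (500,))

    # Record each intermediate layer's activations with forward hooks.
    activations = {}
    def save_to(name):
        def hook(module, inputs, output):
            activations[name] = output.detach()
        return hook
    for name, module in model.named_modules():
        if isinstance(module, nn.ReLU):
            module.register_forward_hook(save_to(name))
    with torch.no_grad():
        model(images)

    # A linear probe per layer: the easier a simple classifier can read
    # out the label, the more explicitly that layer represents it.
    for name, act in activations.items():
        probe = LogisticRegression(max_iter=1000)
        probe.fit(act.numpy(), labels.numpy())
        print(f"layer {name}: probe accuracy "
              f"{probe.score(act.numpy(), labels.numpy()):.2f}")

With random labels the probe accuracy stays near chance; on a trained network with real data, tracking such a read-out layer by layer gives a rough sense of how explicitly the digit identity is represented as information moves towards the output.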

Click here to read the full press release.