Deep learning research landscape & roadmap in a nutshell: past, present and future -- Towards deep cortical learning
Abstract
The past, present, and future of deep learning are presented in this work. Given this landscape & roadmap, we predict that deep cortical learning will be the convergence of deep learning & cortical learning, ultimately building an artificial cortical column.
Past: Deep learning inspirations
The deep learning horizon, landscape, and research roadmap are presented in a nutshell in the accompanying figure. The historical development and timeline of deep learning & neural networks are illustrated in a separate figure. The origin of neural nets is thoroughly reviewed in terms of the evolutionary history of deep learning models. Vernon Mountcastle's discovery of cortical columns in the somatosensory cortex was a breakthrough in brain science. The big bang was Hubel & Wiesel's discovery of simple cells and complex cells in the visual cortex, for which they won the Nobel Prize in 1981. Their work was heavily founded on Mountcastle's discovery of cortical columns.
In the 1980s, and perhaps a bit earlier, backpropagation was proposed by multiple people, but the first time it was well explained and applied to learning in neural nets was by Hinton and his colleagues in 1986.
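As a concrete illustration of the algorithm discussed here, the following is a minimal sketch of backpropagation training a two-layer network on XOR in plain numpy; the layer sizes, learning rate, and sigmoid activations are illustrative assumptions, not details taken from the original papers.

# Minimal backpropagation sketch: a 2-layer network learning XOR.
# Hyperparameters (hidden size, learning rate, seed) are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # forward pass
    h = sigmoid(X @ W1)                   # hidden activations
    out = sigmoid(h @ W2)                 # network output
    # backward pass: chain rule from the squared-error loss
    d_out = (out - y) * out * (1 - out)   # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)    # error propagated to the hidden layer
    # gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

print(np.round(out, 2))                   # typically approaches [[0], [1], [1], [0]]

Note that the backward pass reuses the forward weights W2 to route the error signal, a point that becomes relevant when the biological plausibility of backprop is discussed below.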
Present: Deep learning by LeCun, Bengio and Hinton
Convolutional nets were invented by LeCun, leading to the "deep learning conspiracy" started by the three founding fathers of the field: LeCun, Bengio, and Hinton. The main hype in deep learning came in 2012, when the state-of-the-art error rates on ImageNet classification and the TIMIT speech recognition task were dramatically reduced using an end-to-end deep convolutional network and a deep belief net, respectively.
The power of deep learning is its scalability and its ability to learn in an end-to-end fashion. In this sense, deep learning architectures are capable of learning big datasets such as ImageNet directly from the raw inputs all the way to the desired outputs. AlexNet used two GPUs for ImageNet classification, a very big dataset of roughly 1.2 million training images, cropped to 224x224 pixels.
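To make the building blocks that such architectures stack end-to-end concrete, here is a minimal numpy sketch of one convolution followed by a ReLU and 2x2 max pooling; the image size and the hand-written edge filter are illustrative assumptions, not AlexNet's learned parameters.

# Minimal sketch of a convolutional-net building block:
# 2-D convolution (valid padding, stride 1) + ReLU + 2x2 max pooling.
import numpy as np

def conv2d(image, kernel):
    """Slide `kernel` over `image` and return the feature map."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

def max_pool2x2(fmap):
    """Downsample a feature map by taking the max of each 2x2 block."""
    H, W = fmap.shape
    return fmap[:H - H % 2, :W - W % 2].reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

image = np.random.default_rng(0).random((8, 8))      # toy 8x8 "image"
edge_filter = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])               # simple vertical-edge detector
fmap = np.maximum(conv2d(image, edge_filter), 0)     # convolution + ReLU
print(max_pool2x2(fmap).shape)                        # (3, 3)

A deep convolutional net repeats this conv/pool pattern many times and trains all the filters jointly with backpropagation, which is what "end-to-end from raw pixels" means in practice.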
Many application domains have been revolutionized by deep learning architectures, such as image classification, machine translation, speech recognition, and robotics. The 2014 Nobel Prize in Physiology or Medicine was awarded to John O'Keefe, May-Britt Moser, and Edvard I. Moser for their discoveries of cells that constitute a positioning system in the brain. This work in cognitive neuroscience shed light on how the world is represented within the brain.
Future: Brain-plausible deep learning & cortical learning algorithms
The main direction and inclination for the future of deep learning is the ability to bridge the gap between cortical architecture and deep learning architectures, specifically convolutional nets. In this quest, Hinton proposed capsule networks as an effort to get rid of pooling layers and replace them with capsules, which are highly inspired by the cortical mini-columns within cortical columns and layers, and which include the location or pose information of parts; a sketch of the capsule nonlinearity follows below.
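As a concrete illustration, here is a minimal numpy sketch of the "squash" nonlinearity from Sabour, Frosst, and Hinton's capsule-network paper: unlike max pooling, which discards position, a capsule outputs a vector whose orientation preserves pose information and whose length encodes the probability that the detected part exists. The example pose vector is an arbitrary assumption.

# Minimal sketch of the capsule "squash" nonlinearity:
# v = (|s|^2 / (1 + |s|^2)) * s / |s|
import numpy as np

def squash(s, eps=1e-9):
    """Shrink vector s to length in [0, 1) while preserving its direction."""
    norm_sq = np.sum(s ** 2)
    return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)

pose = np.array([3.0, 4.0])         # raw capsule output (pose vector, assumed)
v = squash(pose)
print(np.linalg.norm(v))            # ~0.96: high probability the part is present
print(v / np.linalg.norm(v))        # same direction as the input: pose preserved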
Another important quest in deep learning is understanding the biological roots of learning in our brain, specifically in our cortex. Backpropagation is neither biologically inspired nor biologically plausible, in part because it requires error signals to travel backward through the very same synaptic weights used in the forward pass. Hinton and the other founding fathers of deep learning have been trying to understand how backprop might be biologically feasible in the brain.
Finale: Deep cortical learning as the merger of deep learning and cortical learning
By merging deep learning and cortical learning, a much more focused and detailed architecture, named deep cortical learning, might be created. We might be able to understand and reconstruct the cortical structure with much more accuracy, and have a better idea of what true intelligence is and how artificial general intelligence (AGI) might be reproduced. Deep cortical learning might be the algorithm behind one cortical column in the neocortex.