Deep Neural Networks and Their Role in the Quest for Human-Like Brain Power
doc. RNDr. Iveta Mrázová, CSc.
Department of Theoretical Computer Science and Mathematical Logic, MFF UK Praha
The long-standing interest in the cognitive sciences has been reinforced by several strong impulses to contemporary computer science, in particular by large government-initiated brain research projects. Other developments are shifting the field even further from the traditional von Neumann computing paradigm towards true connectionism implemented in silicon. New imaging technologies make it possible to follow brain activity down to the level of individual neurons. Inexpensive graphics processing units have become a common option for training large-scale deep neural networks, and recently unveiled brain-inspired chip architectures suggest that complex cognitive algorithms mimicking the function of biological brains may be within reach.
Perhaps the first deep artificial neural network to incorporate neurophysiological insights was the Neocognitron. Recent brain-inspired models of artificial neural networks include, in particular, the so-called Deep Belief Networks and Convolutional Neural Networks. Both types of network comprise several layers of functional neurons, and both have proven able to surpass human performance in various 2D image-recognition tasks. Moreover, these models are expected to yield superior results in many other tasks, ranging from language understanding and translation to multimedia data processing.
While most classical image-processing techniques rely on carefully preselected image features, deep neural networks are designed to learn local features autonomously, with little or no pre-processing. The representations formed in their hidden layers resemble a hierarchy that combines simpler features detected at lower layers into more complex features detected at higher layers. Moreover, deep networks can be trained on unlabeled data collected, e.g., from the Internet. The discovered features can then be used as common building blocks for new images when labeled data is scarce.
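The following is a minimal NumPy sketch (not taken from the talk) of the basic operation behind such local feature detection: a convolutional layer slides a small filter over the image and responds only where a matching local pattern occurs. The filter values here are a hand-picked, hypothetical vertical-edge detector; in a real deep network the filters would be learned from data, and higher layers would combine such responses into more complex features.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Plain 2D cross-correlation ('valid' padding), the core op of a CNN layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output value is the filter's response to one local patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Common nonlinearity applied to the feature map."""
    return np.maximum(x, 0.0)

# A toy 6x6 image: dark left half, bright right half -> one vertical edge.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# Hypothetical 3x3 vertical-edge filter (Sobel-like, hand-crafted for illustration).
edge_filter = np.array([[-1.0, 0.0, 1.0],
                        [-2.0, 0.0, 2.0],
                        [-1.0, 0.0, 1.0]])

feature_map = relu(conv2d_valid(image, edge_filter))
print(feature_map)  # responds only in the columns where the edge lies
```

Running the sketch shows the feature map is zero over the uniform regions and positive exactly along the edge, illustrating how each hidden unit acts as a local feature detector.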