On Intelligence: How a New Understanding of the Brain will Lead to the Creation of Truly Intelligent Machines is a book by Jeff Hawkins, inventor of the Palm Pilot, written with New York Times science writer Sandra Blakeslee. The book explains Hawkins' memory-prediction framework theory of the brain and describes some of its consequences. (Times Books, 2004, ISBN 0805074562)
Hawkins outlines the book as follows:
- The book starts with some background on why previous attempts at understanding intelligence and building intelligent machines have failed. I then introduce and develop the core idea of the theory, what I call the memory-prediction framework. In chapter 6 I detail how the physical brain implements the memory-prediction model—in other words, how the brain actually works. I then discuss social and other implications of the theory, which for many readers might be the most thought-provoking section. The book ends with a discussion of intelligent machines—how we can build them and what the future will be like. (p. 5)
A personal history
The first chapter is a brief history of Hawkins' interest in neuroscience, juxtaposed against a history of artificial intelligence research. Hawkins uses the story of his failed application to the Massachusetts Institute of Technology to illustrate a conflict of ideas. Hawkins believed (and ostensibly continues to believe) that creating true artificial intelligence will only be possible with intellectual progress in the discipline of neuroscience. Hawkins writes that the scientific establishment (as symbolized by MIT) has historically rejected the relevance of neuroscience to artificial intelligence. Indeed, some artificial intelligence researchers have "[taken] pride in ignoring neurobiology" (p. 12).
Hawkins is an electrical engineer by training, and a neuroscientist by inclination. He used electrical engineering concepts as well as findings from neuroscience to formulate his framework. In particular, Hawkins treats the propagation of nerve impulses in the nervous system as an encoding problem: the brain acts as a future-predicting state machine, similar in principle to feed-forward error-correcting state machines.
Hawkins' basic idea is that the brain is a mechanism for predicting the future: it matches current input against stored memories and uses them to anticipate what comes next. Perhaps not always far in the future, but far enough to be of real use to an organism. As such, the brain is a feed-forward hierarchical state machine with special properties that enable it to learn.
The hierarchy is a hierarchy of abstraction. That is, higher levels of the state machine predict the future on a longer time scale, or over a wider range of data. Lower levels interpret or control limited domains of experience, or sensory or effector systems. Connections from higher-level states predispose some selected transitions in the lower-level state machines.
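This hierarchical biasing can be illustrated with a toy sketch. The Python snippet below is purely illustrative (the contexts, states, and function names are invented, not taken from the book): a higher-level context selects which transition table the lower-level machine uses, so the same lower-level state yields different predictions under different contexts.

```python
# Minimal sketch of a two-level predictive hierarchy (all names hypothetical).
# The higher level holds a slowly changing context ("melody") that biases
# which transition the lower level ("note") predicts next.

# Lower-level transition tables, one per higher-level context.
TRANSITIONS = {
    "melody_A": {"C": "E", "E": "G", "G": "C"},
    "melody_B": {"C": "D", "D": "F", "F": "C"},
}

def predict_next(context: str, current: str) -> str:
    """Predict the lower level's next state, biased by the higher-level context."""
    return TRANSITIONS[context][current]

def run_sequence(context: str, start: str, steps: int) -> list[str]:
    """Unroll the lower-level machine for a few steps under a fixed context."""
    states = [start]
    for _ in range(steps):
        states.append(predict_next(context, states[-1]))
    return states

# The same lower-level state leads to different predictions
# under different higher-level contexts.
print(run_sequence("melody_A", "C", 4))  # ['C', 'E', 'G', 'C', 'E']
print(run_sequence("melody_B", "C", 4))  # ['C', 'D', 'F', 'C', 'D']
```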
Hebbian learning is part of the framework: learning physically alters neurons and their connections as it takes place.
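A minimal sketch of a Hebb-style weight update ("cells that fire together wire together") is shown below; the array sizes, learning rate, and activity patterns are arbitrary illustrations, not values from the book.

```python
import numpy as np

# Sketch of the classic Hebb rule: each connection strengthens in
# proportion to coincident pre- and post-synaptic activity.
rng = np.random.default_rng(0)
weights = rng.normal(scale=0.01, size=(4, 3))  # 4 postsynaptic x 3 presynaptic

def hebbian_update(weights, pre, post, learning_rate=0.1):
    """Return weights grown where pre and post were active together."""
    return weights + learning_rate * np.outer(post, pre)

pre = np.array([1.0, 0.0, 1.0])        # presynaptic activity (illustrative)
post = np.array([1.0, 0.0, 0.0, 1.0])  # postsynaptic activity (illustrative)
weights = hebbian_update(weights, pre, post)
# Only synapses where both sides were active have strengthened.
```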
Vernon Mountcastle's formulation of a cortical column is a basic element in the framework. Hawkins places particular emphasis on the role of the interconnections between peer columns, and on the activation of columns as a whole. He strongly implies that a column is the cortex's physical representation of a state in a state machine.
Because Hawkins' motivation is to create intelligent machines, he reasons as an engineer: a failure to find some process of his framework occurring naturally does not signal a fault in the memory-prediction framework per se, but merely that the brain has performed the functional decomposition in a different, unexpected way. For example, for the purposes of the framework, nerve impulses can be taken to form a temporal sequence (phase encoding would be one possible implementation of such a sequence; such details are immaterial to the framework).
- Main article: Memory-prediction framework.
Predictions of the memory-prediction framework
- The book's appendix lists 11 testable predictions:
Enhanced neural activity in anticipation of a sensory event
Spatially specific prediction
2. In primary sensory cortex, Hawkins predicts, we should find anticipatory cells, for example in or near V1, tied to precise locations in the visual field (the scene). It has been experimentally determined that, after mapping the angular positions of objects in the visual field, there is a one-to-one correspondence between cortical cells and those angular positions. Hawkins predicts that when the features of a visual scene are known in a memory, anticipatory cells should fire before the corresponding objects actually appear in the scene.
Prediction should stop propagating in the cortical column at layers 2 and 3
3. In layers 2 and 3, predictive activity (neural firing) should stop propagating at specific cells, corresponding to a specific prediction. Hawkins does not rule out anticipatory cells in layers 4 and 5.
"Name cells" at layers 2 and 3 should preferentially connect to layer 6 cells of cortexEdit
4. Learned sequences of firings comprise a representation of temporally constant invariants. Hawkins calls the cells that fire in such a sequence "name cells". He suggests that these name cells are in layer 2, physically adjacent to layer 1, but does not rule out that layer 3 cells with dendrites reaching into layer 1 might also perform as name cells.
"Name cells" should remain ON during a learned sequenceEdit
5. By definition, a temporally constant invariant will be active throughout a learned sequence. Hawkins posits that these cells will remain active for the duration of the learned sequence, even as the remainder of the cortical column shifts state. Since we do not know the encoding of the sequence, we do not yet know the precise definition of ON or active; Hawkins suggests that the ON pattern may be as simple as a simultaneous AND (i.e., the name cells simultaneously "light up") across an array of name cells, as sketched below.
- See Neural ensemble#Encoding for grandmother neurons which perform this type of function.
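Under the simultaneous-AND reading, a sequence's name is ON only while every cell in the name ensemble is active at once. The sketch below is hypothetical; the cell labels and activity snapshots are invented for illustration.

```python
# Hypothetical "simultaneous AND" test over a name-cell ensemble.
name_cells = ["n1", "n2", "n3"]

# One snapshot of active cells per step of a learned sequence.
snapshots = [
    {"n1", "n2", "n3", "c7"},  # lower-level cell c7 active
    {"n1", "n2", "n3", "c9"},  # column state shifts (c7 -> c9) ...
    {"n1", "n2", "n3", "c4"},  # ... but the name ensemble stays ON
]

def name_is_on(active: set, ensemble: list) -> bool:
    """Simultaneous AND across the ensemble: all name cells active at once."""
    return all(cell in active for cell in ensemble)

# The name stays ON at every step even though the column's state changes.
assert all(name_is_on(s, name_cells) for s in snapshots)
```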
"Exception cells" should remain OFF during a learned sequence Edit
6. Hawkins' novel prediction is that certain cells are inhibited during a learned sequence: a class of cells in layers 2 and 3 should NOT fire during a learned sequence, and the axons of these "exception cells" should fire only if a local prediction is failing. This prevents flooding the brain with the usual sensations, leaving only exceptions for post-processing.
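The filtering role ascribed to exception cells can be caricatured as forwarding only mismatches between predicted and actual input. The sketch below illustrates that reading; it is not an implementation from the book, and all names are invented.

```python
# Pass along only inputs that violate the current prediction;
# correctly predicted inputs are suppressed.
def filter_exceptions(predicted, actual):
    """Yield (position, expected, observed) only where prediction fails."""
    for i, (p, a) in enumerate(zip(predicted, actual)):
        if p != a:
            yield i, p, a

predicted = ["C", "E", "G", "C"]
actual    = ["C", "E", "F", "C"]   # the third input violates the prediction
print(list(filter_exceptions(predicted, actual)))  # [(2, 'G', 'F')]
```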
"Exception cells" should propagate unanticipated eventsEdit
7. If an unusual event occurs (the learned sequence fails), the "exception cells" should fire, propagating up the cortical hierarchy to the hippocampus, the repository of new memories.
"Aha! cells" should trigger predictive activity Edit
Pyramidal cells should detect coincidences of synaptic activity on thin dendrites
9. Pyramidal cells should be capable of detecting coincident events on thin dendrites, even for a neuron with thousands of synapses. Hawkins posits a temporal window (presuming time-encoded firing) which is necessary for his theory to remain viable.
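Coincidence detection within a temporal window can be sketched as follows; the window length, spike threshold, and spike times are invented for illustration and are not values proposed by Hawkins.

```python
# A dendritic segment "detects" an event when several of its synapses
# receive spikes within `window_ms` of each other.
def detects_coincidence(spike_times_ms, window_ms=5.0, min_spikes=3):
    """Return True if at least `min_spikes` spikes fall inside one window."""
    times = sorted(spike_times_ms)
    for i in range(len(times)):
        # Count spikes within window_ms of the i-th spike.
        j = i
        while j < len(times) and times[j] - times[i] <= window_ms:
            j += 1
        if j - i >= min_spikes:
            return True
    return False

print(detects_coincidence([10.0, 11.2, 13.5, 40.0]))  # True: 3 spikes in ~3.5 ms
print(detects_coincidence([10.0, 25.0, 40.0]))        # False: too spread out
```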
Learned representations move down the cortical hierarchy, with training
10. Hawkins posits, for example, that if inferotemporal (IT) cortex has learned a sequence, then cells in V4 will eventually learn the sequence as well.
"Name cells" exist in all regions of cortexEdit
11. Hawkins predicts that "name cells" will be found in all regions of cortex.
See also
- Hierarchical Temporal Memory, a technology developed by Hawkins' startup Numenta, Inc. to replicate the properties of the neocortex.
- Memory-prediction framework
External links
- OnIntelligence.org - official website
- Dileep George's vision research page - includes implementations of the framework
- A Hierarchical Bayesian Model of Invariant Pattern Recognition in the Visual Cortex, a conference paper by George & Hawkins
- Machine Intelligence Meets Neuroscience (by Bob Colwell, IEEE Computer, January 2005)
- A review by Franz Dill
- On Intelligence, People and Computers (Arnold Kling, Tech Central Station, 22 November 2004)