The scenario described in the previous post (see October 26, 2013 post) --- a mass of commuter humanity changing trains in a crowded subway station, silently cooperating to avoid colliding as they cross paths --- was intended to introduce the subject of humans reading and understanding the intentions of others as a foundation of human cooperative activity. But another characteristic of the human brain besides mindreading supports this outcome: the brain constantly anticipates; it predicts the future. In this scenario, it predicts (perhaps not perfectly) the future behavior of others (more likely their immediate behavior): where they are directing their motion, where they are turning, whether they are accelerating or slowing down. Jeff Hawkins labels this capacity intelligence, and how the brain predicts behavior and future events is the subject of On Intelligence. Hawkins' interest is in understanding human intelligence in order to build a foundation for improved machine intelligence. The focus of his inquiry is the neocortex, the outermost layers of the human brain, and memory. What Hawkins offers up is the memory-prediction framework of intelligence, which differs from a computational framework.
Hawkins is not out to explain what makes us human (compare September 27, 2009 post). Nor is he out to explain human consciousness (compare April 8, 2011 post). But he does briefly touch on these matters. Previous posts in the blog address human imagination and creativity as a hallmark of what makes us "human," (see November 6, 2011 and May 22, 2011 post), and Hawkins presents a model discussed below about the role of the neocortex in imagination, including imagination by false analogy. What he does not touch on is the role of the brain in generating and controlling emotions, the subject of Jaak Panksepp's research (see May 19, 2013 post), which naturally links to the origins of the moral and social aspects of what makes us human. (See November 21, 2012 post). So while Hawkins does connect the neocortex and thalamus within his memory-prediction framework (see below), he does not elaborate upon the role of the large thalamo-cortical system that resides in the human brain that plays a substantial role in what makes us human and the biological basis of human consciousness. (See April 8, 2011 post).
Prior posts identify the critical role of the hippocampus in memory formation, but ultimately long-term memory is shifted to the cerebral cortex through a process known as consolidation that occurs during sleep. (See September 10, 2013 and November 6, 2011 posts). As a prior post described: "Memories are distributed in the same parts of the brain that encoded the original experience. So sounds are found in the auditory cortex, taste and skin sensory memories are found in the somatosensory cortex, and sight in the visual cortex. But procedural --- 'how to' --- memories are stored outside of the cortex, in the cerebellum and putamen, and fear memories are stored in the amygdala." Hawkins' thesis is that the cortex is critical to the human capacity to predict events because of the linkage to memory storage in the cortex. In focusing on the neocortex, Hawkins is looking at, evolutionarily speaking, the most recent adaptation in the development of animal neurological systems. The neocortex is unique to mammals, and the human neocortex is larger than the neocortex in any other mammal, facts that suggest the human neocortex is critical to understanding what makes us human. This is just the opposite of Jaak Panksepp's focus on the older parts of the brain, the brain stem and the midbrain. (See May 19, 2013 post). It is not as though Hawkins believes these older parts of the brain are irrelevant to human behavior. "First," Hawkins says, "the human mind is created not only by the neocortex but also by the emotional systems of the old brain and by the complexity of the human body. To be human you need all of your biological machinery, not just a cortex." But Hawkins is ultimately interested in the creation of an intelligent machine, and he believes that in the pursuit of that interest he needs to understand what makes humans "intelligent." He finds that understanding in how the neocortex is structured and proposes a model for how it operates to predict future events.
Hawkins' model is based on our current knowledge of the structure of the neocortex. That much is known. And here is a graphical representation of that structure:
Each region of the neocortex is known to consist of four areas, labeled 1, 2, 4 and IT. The graph above represents those four areas, with IT at the top and 4, 2, and 1 below it, for one of the regions of the cortex (visual, auditory, somatosensory, motor). The visual cortex areas are labeled, from bottom to top, V1, V2, V4, and IT; the auditory cortex areas A1, A2, A4, and IT; the somatosensory (touch) cortex areas S1, S2, S4, and IT; and similarly for the motor cortex. The arrows point in both directions, indicating that information moves in both directions between the areas.
Neurons fire in a specific pattern in response to a specific sensory stimulus. For the exact same sensory stimulus, the same neurons will fire in the same pattern within this hierarchy. For a different sensory stimulus, different neurons will fire in a different pattern. The brain's capacity to recognize (predict) these patterns is at the heart of memory.
Recall the discussion in connection with Rodrigo Quiroga's book, Borges and Memory (September 10, 2013 post): "Each neuron in the retina responds to a particular point, and we can infer the outline of a cube starting from the activity of about thirty of them [retinal neurons]. Next the neurons in the primary visual cortex fire in response to oriented lines; fewer neurons are involved and yet the cube is more clearly seen. This information is received in turn by neurons in higher visual areas, which are triggered by more complex patterns --- for example, the angles defined by the crossing of two or three lines. . . As the processing of visual information progresses through different brain areas, the information represented by each neuron becomes more complex, and at the same time fewer neurons are needed to encode a given stimulus." The arrows representing sensory input from the retinal neurons are the arrows pointing to area V1 of the visual cortex. A particular pattern of neurons firing in V1 leads neurons in V2 to fire, and so on all the way up to IT. As just noted, in each higher area "fewer neurons are involved." In V1, the cells are spatially specific, tiny feature-recognition cells that infrequently fire depending on which of the millions of retinal neurons are providing sensory input; in the higher IT, the cells are constantly firing, spatially non-specific, object-recognition cells. One way of thinking about this is that certain neurons in V1 fired in recognition of two ears, a nose, two eyes, and perhaps even more details like the texture of skin, facial hair, the color of hair; neurons in IT fired in recognition of an entire head or face. Cells in IT encode for categories; Hawkins calls them "invariant representations." In philosophy, these invariant representations might be analogous to Plato's forms. It is here one would find neurons firing in response to things --- rocks, platypuses, your house, a song, Jennifer Aniston or Bill Clinton. (See September 10, 2013 post).
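The idea that each level fires on combinations of the patterns recognized one level below, so that higher levels need fewer and more invariant units, can be sketched in code. This is a toy illustration, not Hawkins' actual algorithm; the feature names and the three-level dictionary structure are invented for the example.

```python
# Toy sketch of hierarchical recognition: each level fires on
# combinations of the patterns active one level below, so the top
# level (IT) holds a few invariant object categories.

# Level 1 (V1-like): tiny, spatially specific feature detectors
V2_PATTERNS = {
    "eye":  {"oriented_line", "edge", "color_patch"},
    "nose": {"oriented_line", "edge"},
}

# Top level (IT-like): invariant categories built from level-2 patterns
IT_OBJECTS = {
    "face": {"eye", "nose"},
}

def recognize(active_features):
    """Propagate activity up the hierarchy: a unit fires when all of
    its constituent lower-level patterns are active."""
    active_v2 = {name for name, parts in V2_PATTERNS.items()
                 if parts <= active_features}
    active_it = {name for name, parts in IT_OBJECTS.items()
                 if parts <= active_v2}
    return active_v2, active_it

# Many low-level features active -> few high-level units active
recognize({"oriented_line", "edge", "color_patch"})
```

Note how the representation shrinks as it ascends: three active features in "V1" become two patterns in "V2" and a single invariant unit ("face") at the top.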
Psychologists recognize the same phenomenon, although in different terms. Paul Bloom asserts that humans are "splitters" and "lumpers," but for the most part we are lumpers. Borges' Funes was a splitter. (See September 10, 2013 post). "Our minds have evolved," Bloom says, "to put things into categories and to ignore or downplay what makes these things distinct. Some categories are more obvious than others: all children understand the categories chairs and tigers; only scientists are comfortable with the categories such as ungulates and quarks. What all categories share is that they capture a potential infinity of individuals under a single perspective. They lump." Bloom says, "We lump the world into categories so that we can learn." He adds, "A perfect memory, one that treats each experience as a distinct thing-in-itself, is useless. The whole point of storing the past is to make sense of the present and to plan for the future. Without categories [or concepts], everything is perfectly different from everything else, and nothing can be generalized or learned."
The neocortex consists of six horizontal layers of cells (I-VI), together roughly 2 mm thick (shown below for area V1 of the visual cortex). The cells within each layer are aligned in columns perpendicular to the layers. The layers in each column are connected via axons, making synapses along the way. "Columns do not stand out like neat little pillars," explains Hawkins, "nothing in the cortex is that simple, but their existence can be inferred from several lines of evidence." Vertically aligned cells tend to become active for the same stimulus.
Again, as was the case with the different areas of a region of the cortex, information is moving both up and down the layers of a given area. Inputs move up the columns; memories move down the columns. "When you begin to realize that the cortex's core function is to make predictions, then you have to put feedback into the model; the brain has to send information flowing back toward the region that first receives inputs. Prediction requires a comparison between what is happening and what you expect to happen. What is actually happening flows up, and what you expect to happen flows down."
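This up/down comparison can be made concrete with a minimal sketch: what actually happens flows up, what memory expects flows down, and a mismatch is flagged. The dictionary of expected transitions is an invented stand-in for stored cortical memory, not anything from Hawkins' book.

```python
# Minimal sketch of prediction as comparison: the descending signal
# (what memory expects next) is checked against the ascending signal
# (what actually arrives).

expected_next = {"do": "re", "re": "mi", "mi": "fa"}  # a learned sequence

def process(previous, actual):
    """Compare the ascending input (actual) against the descending
    prediction derived from memory."""
    predicted = expected_next.get(previous)
    if predicted == actual:
        return "confirmed"         # prediction met; nothing to report
    return "prediction error"      # mismatch propagates up the hierarchy

process("do", "re")   # the expected note arrives: "confirmed"
process("re", "fa")   # a wrong note arrives: "prediction error"
```

The point of the sketch is only the comparison itself: prediction requires both streams, so feedback connections are not optional in the model.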
Memories are stored in this hierarchical structure. "The design of the cortex and the method by which it learns naturally discover the hierarchical relationships in the world. You are not born with knowledge of language, houses, or music. The cortex has a clever learning algorithm that naturally finds whatever hierarchical structure exists and captures it. When structure is absent, we are thrown into confusion, even chaos. *** You can only experience a subset of the world at any moment in time. You can only be in one room of your home, looking in one direction. Because of the hierarchy of the cortex, you are able to know that you are at home, in your living room, looking at a window, even though at that moment your eyes happened to be fixated on a window latch. Higher regions of cortex are maintaining a representation of your home, while lower regions are representing rooms, and still lower regions are looking at window. Similarly, the hierarchy allows you to know you are listening to both a song and album of music, even though at any point in time you are hearing only one note, which on its own tells you next to nothing." Critical to this capability is the brain's ability to process sequences and recognize patterns of sequences. "Information flowing in to the brain naturally arrives as a sequence of patterns." When the patterns are repeated through a repeated firing of a particular combination of neurons, the cortical region forms a persistent representation, or memory, for the sequence. In learning sequences, we form invariant representations of objects. When certain input patterns repeat over and over, cortical regions "know that those experiences are caused by a real object in the world."
One of the most important attributes of Hawkins' model is a concept called auto-associative memory. This is what enables the brain to recall an entire memory by sensing only a portion of it. In the case of the brain, that input may even belong to an entirely different category than what is recalled. Auto-associative memory is part of pattern recognition: the cortex does not need to see an entire pattern in order to recognize the larger pattern. A second feature of auto-associative memory, says Hawkins, is that it can be designed to store sequences of patterns, or temporal patterns. He says this is accomplished by adding a time delay to the feedback.
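Pattern completion from a fragment can be sketched very simply. In this toy version, stored patterns are sets of active "neurons," and a partial input recalls whichever stored pattern it overlaps most; a real implementation would be something like a Hopfield network, and the example patterns here are invented.

```python
# Toy auto-associative memory: presenting a fragment of a stored
# pattern recalls the whole pattern (completion by maximum overlap).

stored_patterns = [
    frozenset({"eyes", "nose", "mouth", "ears"}),    # a face
    frozenset({"wheels", "doors", "windshield"}),    # a car
]

def recall(fragment):
    """Return the stored pattern with the greatest overlap with the
    partial input -- the part summons the whole."""
    return max(stored_patterns, key=lambda p: len(p & fragment))

recall({"nose", "mouth"})   # recovers the full face pattern
```

Adding the time delay Hawkins mentions would mean feeding each recalled pattern back as the cue for the next one, so the memory steps through a stored sequence rather than settling on a single pattern.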
The cortex is linked to the thalamus. Hawkins says that one of the six layers of cells (layer 5, second from the bottom in a given cortical area) is wired to the thalamus, which in turn sends information back to layer 1 (the highest layer in a given cortical area), acting as a delayed feedback important to learning sequences and to predicting. The thalamus is selective in what it transmits back to the cortex: the number of neurons going to the thalamus exceeds the number returning to the cortex by a factor of ten. A full account would require an understanding of reentrant activity and recursion, which need not be explained here. But layer 1 (at the top of a given cortical area) also receives information from higher cortical areas (in the case of the visual cortex, layer 1 in V4 from layer 6 in IT; layer 1 in V2 from layer 6 in V4, and so on). So layer 1 has two inputs: one from the thalamus and one from the higher cortical area. Layer 1, Hawkins emphasizes, now carries "much of the information we need to predict when a column should be active. Using these two signals in layer 1, a region of cortex can learn and recall multiple sequences of patterns."
Cortical regions "store" sequences of patterns when synapses are strengthened by repeated firing. "If this occurs often enough, the layer 1 synapses [at the top of the region] become strong enough to make the cells in layers 2, 3, and 5 [below] fire, even when a layer 4 cell hasn't fired --- meaning parts of the column can become active without receiving input from a lower region of the cortex. In this way, cells in layers 2, 3, and 5 learn to 'anticipate' when they should fire based on the pattern in layer 1. Before learning, the column can only become active if driven by a layer 4 cell. After learning, the column can become partially active via memory. When a column becomes active via layer 1 synapses, it is anticipating being driven from below. This is prediction. If the column could speak, it would say, 'When I have been active in the past, this particular set of my layer 1 synapses have been active. So when I see this particular set again, I will fire in anticipation.'" Finally, layer 6 cells can send their output back into the layer 4 cells of their own column. Hawkins says that when they do, our predictions become the input. This is what we do, he adds, when we daydream, think, imagine. It allows us to see the consequences of our own predictions; we do this when we plan the future, rehearse speeches, and worry about future events. In Hawkins' model, this has to be part of what Michael Gazzaniga refers to as our decoupling mechanism. (See May 22, 2011 post).
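The "anticipation" idea above can be caricatured in a few lines: after a transition between two column activations repeats enough times, the second column fires from the delayed feedback signal alone, before its bottom-up input arrives. The threshold and the counting scheme are invented for illustration; they stand in for synaptic strengthening.

```python
# Sketch of learned anticipation: repeated transitions strengthen a
# "synapse" until a column fires in anticipation, without waiting for
# its bottom-up (layer 4) input.

from collections import defaultdict

LEARNING_THRESHOLD = 3                 # invented stand-in for synaptic strength
transition_counts = defaultdict(int)   # (previous, current) -> times observed

def observe(previous, current):
    """Strengthen the link between consecutive column activations."""
    transition_counts[(previous, current)] += 1

def anticipates(previous, current):
    """After enough repetitions, the column becomes active via the
    delayed feedback signal alone -- this is prediction."""
    return transition_counts[(previous, current)] >= LEARNING_THRESHOLD

for _ in range(3):       # the sequence A -> B repeats three times
    observe("A", "B")

anticipates("A", "B")    # True: column B now fires in anticipation
```

Feeding the anticipated activation back in as the next input, as layer 6 does to layer 4 in Hawkins' account, is what turns this machinery from perception into daydreaming and planning.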
This is Hawkins' model of the brain's capacity to predict --- intelligence, if you will. Of course, it is more complex than I have regurgitated here. "If a region of cortex finds it can reliably and predictably move among these input patterns using a series of physical motions (such as saccades of the eyes or fondling with the fingers) and can predict them accurately as they unfold in time (such as the sounds comprising a song or the spoken word), the brain interprets these as having a causal relationship. The odds of numerous input patterns occurring in the same relation over and over again by sheer coincidence are vanishingly small. A predictable sequence of patterns must be part of a larger object that really exists. So reliable predictability is an ironclad way of knowing that different events in the world are physically tied together. Every face has eyes, ears, mouth and nose. If the brain sees an eye, then saccades and sees another eye, then saccades and sees a mouth, it can feel certain it is seeing a face."
This begins at a very early age in our post-natal development. The two basic components of learning, explains Hawkins, are forming the classifications of patterns and building sequences. "The basics of forming sequences is to group patterns together that are part of the same object. One way to do this is by grouping patterns that occur contiguously in time. If a child holds a toy in her hand and slowly moves it, her brain can safely assume that the image on her retina is of the same object moment to moment, and therefore the changing set of patterns can be grouped together. At other times, you need outside instruction to help you decide which patterns belong together. To learn that apples and bananas are fruits, but carrots and celery are not, requires a teacher to guide you to group these items as fruits. Either way, your brain slowly builds sequences of patterns that belong together. But as a region of cortex builds sequences, the input to the next region changes. The input changes from representing mostly individual patterns to representing groups of patterns. The input to a region changes from notes to melodies, from letters to words, from noses to faces, and so on. Where before a region built sequences of letters, it now builds sequences of words. The unexpected result of this learning process is that, during repetitive learning, representations of objects move down the cortical hierarchy. During the early years of your life, your memories of the world first form in higher regions of cortex, but as you learn they are re-formed in lower and lower parts of the cortical hierarchy."
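The first of those two grouping strategies, grouping by temporal contiguity, is easy to sketch: successive patterns seen within a short time window are assumed to belong to the same object, and a larger gap starts a new group. The window size and the example data are invented for the illustration.

```python
# Toy grouping by temporal contiguity: retinal patterns that arrive
# close together in time are assumed to be views of one object.

def group_by_contiguity(timed_patterns, window=1.0):
    """timed_patterns: list of (timestamp, pattern) in time order.
    Patterns separated by less than `window` seconds join the current
    group; a larger gap starts a new group (a new presumed object)."""
    groups, current = [], []
    last_t = None
    for t, pattern in timed_patterns:
        if last_t is not None and t - last_t > window:
            groups.append(current)
            current = []
        current.append(pattern)
        last_t = t
    if current:
        groups.append(current)
    return groups

# A toy slowly turned in a child's hand, then a different object later
views = [(0.0, "toy-front"), (0.4, "toy-side"), (0.8, "toy-back"),
         (5.0, "cup-front")]
group_by_contiguity(views)
# [['toy-front', 'toy-side', 'toy-back'], ['cup-front']]
```

The second strategy in the passage --- a teacher telling you that apples and bananas are fruits --- supplies group labels from outside, which no amount of temporal statistics alone can recover.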
Michael Shermer (see June 12, 2011 post) made the same point in a slightly different way when he referred to "patternicity." According to Shermer, as sensory data flows into the brain, there is a "tendency" for the brain to begin looking for meaningful patterns in both meaningful and meaningless data. He calls this process patternicity. Shermer asserts that patternicity is premised on "association learning," which is "fundamental to all animal behavior from C. elegans (roundworm) to Homo sapiens." Because our survival may depend on split-second decisions in which there is no time to research and discover underlying facts about every threat or opportunity that faces us, evolution set the brain's default mode in the position of assuming that all patterns are real, says Shermer. A cost associated with this behavior is that the brain may lump causal associations (e.g. wind causes plants to rustle) with non-causal associations (e.g. there is an unseen agent in the plants). In this circumstance, superstition --- incorrect causal associations --- is born. "In this sense, patternicities such as superstition and magical thinking are not so much errors in cognition as they are the natural processes of a learning brain." Religion, conspiracy theories and political beliefs fit this model as well.
My surmise is that beliefs and concepts rooted in false analogy become stored in memory in higher cortical areas when they are reinforced over and over through cultural transmission. Something like this may be what Edward O. Wilson means when he refers to epigenetic rules and culture. "Human nature," Wilson says, is the "inherited regularities of mental development common to our species. They are epigenetic rules, which evolved by the interaction of genetic and cultural evolution that occurred over a long period in deep prehistory. These rules are the genetic biases in the way our senses perceive the world, the symbolic coding by which we represent the world, the options we automatically open to ourselves, and the responses we find easiest and most rewarding to make. . ." (See April 8, 2013 post). Storytelling --- the creation of works of fiction --- may be important to making and reinforcing memories. (See August 15, 2011 post). Thus, when a prediction based on a false analogy is violated and one would normally recognize an error, the error message is transmitted back up to the higher cortical areas for a check. But because the belief based in false analogy resides there in the higher areas, the false analogy may never be corrected. The false analogy becomes an invariant representation. Paul Bloom explains in Descartes' Baby just how these concepts and beliefs can be rooted in our brains at a very early age, and as Hawkins describes above, memories formed earlier in life form in the higher regions of the cortex. These false analogies can be difficult to dislodge.
Hawkins has been helpful in providing a model of the cortex as the part of the brain devoted to our capacity to predict. When tied into the models of other parts of the brain relating to consciousness and emotion discussed elsewhere in this blog, we begin to assemble the whole human brain and begin to appreciate what makes us "human." (See September 27, 2009 post discussing Michael Gazzaniga's reference to Jeff Hawkins). While Hawkins' interest lies in the intelligent machine, he does not believe a machine can ever become "human."
And finally, Hawkins confirms why I have held to my instinct that John Searle's Chinese Room argument was intuitively correct. The man in the Chinese Room must have been human.