Memory is a subject that is no stranger to this blog. (See February 26, 2013; December 2, 2011; November 6, 2011; September 20, 2011; August 15, 2011; April 8, 2011; November 27, 2010; and September 9, 2010 posts.) Rodrigo Quian Quiroga's Borges and Memory effectively assembles most of the comments and observations in these earlier posts and weaves in the parallel observations of Argentine writer Jorge Luis Borges about human memory, demonstrating that Borges' astute observations hold close to the empirical research of neuroscience. The artistic tool Quiroga drafts to make his point is a short story Borges wrote in 1942, Funes the Memorious, about a man who could not forget the most trivial facts and data and thus sustained an incredible memory. In Borges' words, Funes had an "infinite vocabulary for the natural series of numbers, a useless mental catalogue of all the images of his memory." He was "almost incapable of ideas in general," which is to say he could not abstract and generalize.

In light of Daniel Schacter's characterization of the human mind's near-universal capacity for forgetting (see September 20, 2011 post), sometimes instantly, sometimes only over considerable periods of time, and the evolutionary benefits of forgetting, the story of Funes is a unique one, far removed from normal experience. But Quiroga documents real cases of persons like Funes, who are simply incapable of forgetting the slightest details and whose memories are clogged with useless data. He also confirms that these individuals are limited in their ability to generalize and abstract. There is the case of Solomon Shereshevskii of Russia, a patient of Alexander Luria, whose capacity for memorizing meaningless sequences of numbers, formulae, words, and syllables, and for retaining them in memory for years, is much like the Funes of Borges' imagination. Shereshevskii's diagnosis included severe synesthesia.
(See November 20, 2011 and October 25, 2011 posts.) Shereshevskii reportedly told Luria that he could not read or study, because having to think about words beyond their literal meaning made him lose track of what he was reading.
And there are others, often referred to as savants and in other cases regarded as autistic: Kim Peek (on whom the film Rain Man was based), who was diagnosed with FG syndrome; Daniel Tammet, whose incredible ability to recall sequences of numbers (including large numbers such as primes) was ascribed to synesthesia; Leslie Lemke, who suffered from cerebral palsy and was blind but was nevertheless a highly skilled pianist with an enormous memory for music after hearing a song only once; and Jill Price, for whom the inability to forget was a curse.
We sometimes refer to a person's "photographic" memory. But that is not the way memory works, particularly memory based on visual perception. Recall Antonio Damasio's account of how the brain retains and retrieves memories. (See April 8, 2011 post.) Large as they are, human brains lack the storage capacity for "large files of recorded images of prior events." To solve this problem, Damasio argues, human brains "borrowed the dispositional strategy" from early evolution, one that allows us to retrieve those memories without, figuratively speaking, having to film and store the images. Here is how Damasio says this works. He refers to ancient dispositional networks, which we can identify with the subcortical systems discussed by Jaak Panksepp in his account of human emotional systems (see May 19, 2013 post); these are unconscious, automatic dispositions that are essentially hardwired into our biology. Recall Panksepp's discussion of the Fear System and the link between emotions and memory: "Learning and memory are automatic and involuntary responses (mediated by unconscious mechanisms of the brain), which in their most lasting forms are commonly tethered to emotional arousal. Emotional arousal is a necessary condition for the creation of fear-learning memories." (May 19, 2013 post.)

The ancient dispositional networks and the more recent (evolutionarily speaking) mapping networks in the brain, says Damasio, are now connected. A current perception triggers the dispositional network, which directs the brain to reassemble aspects of past perceptions from the part of the cerebral cortex that had been activated when an original perception of an object occurred and where the representation or image was mapped. This occurs in what Damasio refers to as convergence-divergence zones, which record "the coincidence of activity in neurons hailing from different brain sites, neurons that had been made active by the mapping of a certain object." One part of the cerebral cortex is devoted to image space, where images of all sensory types occur and map-making takes place; a separate part is devoted to dispositional space, where the tools exist to reactivate and generate images previously experienced, housed in neurons affectionately called "grandmother cells" after our ability to recall our grandmother. The contents of dispositions, Damasio says, are always unconscious; he calls them "encrypted" and "implicit," in contrast to the explicit images in the image space created by current perceptions. The "encrypted" dispositions are not themselves images, but merely implicit formulas for how to reconstruct maps in image space. Our "knowledge base" is, by this hypothesis, part of our unconscious brain, stored in code, waiting to be retrieved from what Damasio refers to as "association cortices" (hence the analogy to Hume's associationism).
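Damasio's scheme, in which a compact, unconscious dispositional code points back at previously mapped features and reassembles an image on demand, can be caricatured in a few lines of code. This is only an illustrative sketch, not a neural model; the "feature map" and "disposition" structures and their contents are invented for the example.

```python
# Toy sketch of Damasio's dispositional strategy: instead of storing a
# full "recorded image," store a compact code (a disposition) that names
# which previously mapped features to reactivate at recall time.

# "Image space": features laid down in sensory cortex at first perception.
feature_maps = {
    "f1": "yellow petals",
    "f2": "green stem",
    "f3": "sweet smell",
}

# "Dispositional space": an encrypted, implicit code -- not an image
# itself, just a recipe naming which features to reassemble.
dispositions = {
    "rose-in-the-garden": ["f1", "f2", "f3"],
}

def recall(memory_key):
    """Rebuild an explicit 'image' from an implicit dispositional code."""
    code = dispositions[memory_key]          # convergence: the compact recipe
    return [feature_maps[f] for f in code]   # divergence: reassemble the parts

print(recall("rose-in-the-garden"))
```

The point of the sketch is the asymmetry: the stored disposition is a short list of pointers, while the reconstructed "image" is assembled fresh from cortical maps each time it is recalled.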
Quiroga concurs in substantial part: "The brain does not reproduce visual stimuli like a digital camera or television, but rather processes their meaning starting from relatively little information and a set of unconscious inferences. Now, if the neurons in the primary visual cortex do not simply copy information detected by the retina, what do they do? This was what David Hubel and Torsten Wiesel at Johns Hopkins University set out to study in the late 1950s. Following a serendipitous event and the spectacular series of experiments that ensued (which earned them the Nobel Prize in Physiology or Medicine in 1981), Hubel and Wiesel discovered neurons in the primary visual cortex that respond to oriented lines. This information eventually reaches the inferior temporal lobe, where "face cells" (i.e., neurons that respond to human or monkey faces but not to other images, for example of hands, fruits, or houses) were found in experiments with monkeys. . . . Each neuron in the retina responds to a particular point, and we can infer the outline of a cube starting from the activity of about thirty of them [retinal neurons]. Next the neurons in the primary visual cortex fire in response to oriented lines; fewer neurons are involved and yet the cube is more clearly seen. This information is received in turn by neurons in higher visual areas, which are triggered by more complex patterns, for example, the angles defined by the crossing of two or three lines. . . . As the processing of visual information progresses through different brain areas, the information represented by each neuron becomes more complex, and at the same time fewer neurons are needed to encode a given stimulus. In the late 1960s, Polish psychologist Jerzy Konorski wondered if at the end of this process there might be individual neurons that represent an object or a person as a whole. Is there, for example, a neuron that represents the concept of 'my home,' another one that responds to 'my dog,' and another for 'my grandmother'?" Quiroga believed the answer lies in or near the hippocampus, because of a case made famous by Brenda Milner: a patient (H.M.) whose hippocampus was surgically removed to treat epilepsy, resulting in an inability to form new explicit memories.
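The funnel Quiroga describes, with many retinal neurons responding to points, fewer cortical neurons responding to oriented lines, and fewer still to complex patterns, can be mimicked with a trivial two-stage sketch. It is purely illustrative; the "retina," the edge detectors, and the final pattern unit are made up for the example, not a model of actual visual cortex.

```python
# Toy funnel: at each stage fewer units represent more complex features.
# A 4x4 "retina" of point-responding pixels -> 4 line detectors -> 1 unit.

retina = [
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]  # 16 units, each responding to a single point

def horizontal_edge_units(image):
    """Stage 2: one unit per row, firing only if the row forms a solid line."""
    return [all(pixel == 1 for pixel in row) for row in image]  # 4 units

def stripe_object_unit(edges):
    """Stage 3: a single unit firing only for the whole 'striped' pattern."""
    return edges == [True, False, True, False]  # 1 unit

edges = horizontal_edge_units(retina)   # 16 inputs -> 4 line detectors
print(stripe_object_unit(edges))        # 4 inputs -> 1 pattern detector
```

Sixteen point responses are compressed into four line responses and finally into a single yes/no about the whole pattern: each stage encodes something more complex with fewer units, which is the shape of the progression Quiroga describes.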
Quiroga became engaged in research with Christof Koch at Caltech that studied recordings of the activity of individual neurons in the medial temporal lobe, neurons that responded not to a generic face but uniquely to the faces of individual humans: actress Jennifer Aniston, soccer star Diego Maradona, Mr. T. Medial temporal lobe structures that are critical for long-term memory include the amygdala and hippocampus, along with the surrounding hippocampal region consisting of the perirhinal, parahippocampal, and entorhinal neocortical regions. The hippocampus is critical for memory formation, and the surrounding medial temporal cortex is currently theorized to be critical for memory storage. The prefrontal and visual cortices are also involved in explicit memory. The firing of neurons begins in the retina (or, in the case of auditory stimuli, in the cochlea), goes through the primary visual cortex (or other primary cortical areas), passes through higher visual areas in the temporal lobe, and ultimately reaches the hippocampus. The many individual neurons in the retina respond to many discrete visual stimuli, none of which represents the image as a whole; at some point along the brain's pathways from the retina to the hippocampus, the brain assembles these varied discrete stimuli. What Quiroga and others found is that the "Jennifer Aniston neuron" did not respond to a single image of Jennifer Aniston, but to multiple, quite different images of her. Further research found that a specific neuron would also respond to multiple Star Wars characters: Luke Skywalker, Yoda, Darth Vader. What this signified is that the unique neuron was not associated with a particular camera image, but with a conceptual image. This "loss of detail" led Quiroga to conclude that the neuron codes for an abstraction of Jennifer Aniston. "We can see Funes," writes Quiroga, "as someone lacking those Jennifer Aniston neurons that encode abstract concepts."
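The finding that one medial temporal lobe neuron fires for many different pictures of the same person amounts to a many-to-one mapping from stimuli to a concept. A minimal sketch of that idea follows; the stimulus keys and the firing rule are invented for the illustration, not drawn from the actual recordings.

```python
# Toy "concept cell": many distinct stimuli (different photos, even a
# written name) all drive the same unit, which encodes the abstract
# concept and discards the pictorial detail.

stimulus_to_concept = {
    "aniston_photo_smiling": "Jennifer Aniston",
    "aniston_photo_profile": "Jennifer Aniston",
    "aniston_written_name":  "Jennifer Aniston",
    "maradona_photo":        "Diego Maradona",
}

def concept_cell_fires(stimulus, concept):
    """The 'Jennifer Aniston neuron' responds to the concept, not the pixels."""
    return stimulus_to_concept.get(stimulus) == concept

# Three very different stimuli, one response: the detail is lost,
# the abstraction remains.
for s in ("aniston_photo_smiling", "aniston_photo_profile",
          "aniston_written_name"):
    print(concept_cell_fires(s, "Jennifer Aniston"))  # True each time
```

Funes, on Quiroga's reading, would be someone with a separate entry for every photograph and no shared concept on the right-hand side of the mapping.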
He repeats, "We do not process images in our brains in the same way a camera does; on the contrary, we extract a meaning and leave aside a multitude of details. . . . Just like perception, memory is a creative process; when we remember something we do not repeat the experience as it was; rather we relive it in another context, create a new representation, and even change its meaning." If we integrate Damasio into Quiroga's thinking, the convergence-divergence zones assemble "a cascade of processes that involve memories and emotions," "extracting particular features that help us recognize these people and objects in very different circumstances." (See November 6, 2011 post.)
Quiroga quotes Francis Crick (The Astonishing Hypothesis): "What you see is not what is really there; it is what your brain believes is there. Seeing is an active, constructive process. Your brain makes the best interpretation it can according to its previous experience and the limited and ambiguous information provided by our eyes." This recalls Michael Gazzaniga's "Interpreter" (see July 25, 2013 and September 27, 2009 posts). A previous post that quotes both Crick and Gazzaniga is likewise worth recalling (see May 22, 2011 post). That post introduces the subject of the brain's capacity to engage in self-deception, to imagine things that don't exist in reality. I think in most cases we do not engage in self-deception; the brain's default mode is to construct a reality as close as it can to the reality that is out there. So what we see, most of the time, is really there, or at least close to it. (See February 4, 2012 post.) Information in our quantum world is probabilistic. It is true that the brain quickly discards much information that is never encoded. As Daniel Schacter has explained, and Quiroga acknowledges, our memories can become fragile if not unreliable, and our "reality" can read more like fiction. (See September 20, 2011 post.) And that is interesting, because the minds of savants like Funes have the ability to recollect and retain incredible detail for considerable periods of time without assigning meaning to it; in those cases the brain does not filter reality from gibberish.