Wednesday, September 25, 2013

Luigi Luca Cavalli-Sforza, Genes, Peoples and Languages (2001)

Information is typically packaged.  The smallest unit of information (something like a bit) (see August 23, 2009 and August 17, 2009 posts) has limited meaning (information value) on its own.  Aggregating, absorbing, connecting, colliding, and communicating with other units of information expands the information value associated with the package of bits.  These packages of information include small subatomic units, electrons, atoms, chemical compounds, photons, waves of sound and light, proteins, genotypes, cells, organs, phenotypes, letters, words, songs, books, and culture. (See November 27, 2010 post).

Information migrates.  (See May 20, 2012 post.)  It is in nearly constant motion.  And when it is in motion, information can be altered and its meaning changed.  (See August 15, 2011 and August 23, 2009 posts.)  Sometimes information is degraded by change; sometimes information is enhanced.  Information moves with its package; the package migrates, and information moves along.  Genes, Peoples and Languages is about the movement of genetic information in the package of a phenotype and the scientific quest to track the movement and transformation of modern human genes over the course of roughly one hundred thousand years.  And along the way, as a result of natural selection, and in some geographic areas, genetic drift, the information in this genetic package was edited and revised from the general population that preceded it:  hair texture and color changed, skin color changed, small genetic changes enabled humans to digest milk and immunized them against diseases such as malaria in certain areas, morphologies changed, and so on.

This research supports the Out of Africa hypothesis:  that modern humans originated on the African continent approximately 100,000 years ago, likely in southern Africa; that intra-continental African migration ensued northward along East Africa in the thousands of years afterward; and that the first migration of Homo sapiens out of Africa occurred roughly 50,000-60,000 years ago to the Arabian peninsula and the Levant, likely along the coast, and ultimately to southern Asia (India), southeast Asia, and Oceania (Australia) about 45,000 years ago.  And about the same time that modern humans were reaching Oceania, migrations out of the Levant were moving northward in the direction of Europe, and later in the direction of central Asia and ultimately to North America roughly 15,000 years ago.  What should not be forgotten in this focus on modern human migration is that a similar migratory path may have been taken over a million years earlier by Homo erectus.

Cavalli-Sforza sees a parallel between genetic evolution and cultural evolution. The units and type of information as well as the means of information transmission in these two circumstances, however, are radically different.  Speech acts (including rituals) and language are the means of transmitting cultural information, and Cavalli-Sforza treats linguistic evolution as a type of cultural evolution.  But genes and culture do not co-evolve. As mentioned in an earlier post, "Language is a social institution, and social institutions and culture evolve, albeit at a different and faster pace than biological evolution."  (August 31, 2009 post). Language changes can occur as a result of migration and conquest of another's territory.  Cavalli-Sforza documents this in a number of cases.  Religion is another attribute of culture that likewise can change as a result of migration and conquest.  (See May 12, 2010 post.)  And ideas can change as a result of migration and conquest.  (See May 20, 2012 post.)  "There is a fundamental difference between biological and cultural mutation," writes Cavalli-Sforza.  "Cultural mutations may result from random events, and thus be very similar to genetic mutations, but cultural changes are more often intentional or directed to a very specific goal, while biological mutations are blind to their potential benefit.  At the level of mutation, cultural evolution can be directed while genetic change cannot."  Later he adds, "We must note a significant difference between biological and linguistic mutation.  A genetic mutant is generally very similar to the original gene, since one gives rise to another with only a small change.  Words vary in more complicated ways.  The same root can change meaning.  One word can have many unrelated senses.  One could try to establish greater similarities between genes and words taking into account all of the peculiarities, but it is not clear that would be useful."
The curious aspect of Cavalli-Sforza's discussion of biological and cultural evolution and transmission is the absence of any discussion of the evolutionary debate about whether evolution operates on genes, phenotypes, or groups, a debate that has run through this subject for several decades now.  (See November 4, 2009, November 30, 2009, September 12, 2012, and September 17, 2012 posts.)  References to Richard Dawkins, memes, and Edward Wilson are not to be found.  Cavalli-Sforza's discussion on this subject is disjointed, and one wonders how he would treat the question of the unit of information on which evolution operates.

Tuesday, September 10, 2013

Rodrigo Q. Quiroga, Borges and Memory (2012)

Memory is a subject that is no stranger to this blog.  (See February 26, 2013, December 2, 2011, November 6, 2011, September 20, 2011, August 15, 2011, April 8, 2011, November 27, 2010, and September 9, 2010 posts.)  Rodrigo Quiroga's Borges and Memory effectively assembles most of the comments and observations in these earlier posts and weaves in the parallel observations of Argentinian writer Jorge Luis Borges about human memory, demonstrating that Borges' astute observations hold close to the empirical research of neuroscience.  The artistic tool Quiroga drafts to make his point is a short story authored by Borges in 1942, Funes the Memorious, about a man who could not forget the most trivial facts and data and sustained an incredible memory.  In Borges' words, Funes had an "infinite vocabulary for the natural series of numbers, a useless mental catalogue of all the images of his memory."  He was "almost incapable of ideas in general," which means he could not abstract and generalize.  In light of Daniel Schacter's characterization of the human mind's near universal capacity for forgetting (see September 20, 2011 post), sometimes instantly, sometimes only over considerable periods of time --- and the evolutionary benefits of forgetting --- the story of Funes is a unique story, one far removed from normal experience.  But Quiroga documents that there are real cases of persons like Funes, who are simply incapable of forgetting the slightest details and whose memories are clogged with useless data.  He also confirms that these individuals are limited in their ability to generalize and abstract.  There is the case of Solomon Shereshevskii of Russia, a patient of Alexander Luria, whose capacity for memorizing meaningless sequences of numbers, formulae, words, and sequences of syllables, and retaining them in memory for years, is much like the Funes of Borges' imagination.  Shereshevskii's diagnosis includes severe synesthesia.
(See November 20, 2011 and October 25, 2011 posts.)  Shereshevskii reportedly told Luria that he could not read or study, because having to think about words beyond their literal meaning made him lose track of what he was reading.

And there are others, often referred to as savants and in other cases regarded as autistic:  Kim Peek (on whom the film Rain Man was based), who was diagnosed with FG Syndrome; Daniel Tammet, whose incredible ability to recall sequences of numbers (including large numbers such as primes) was ascribed to synesthesia; Leslie Lemke, who suffered from cerebral palsy and was blind, but was nevertheless a highly skilled pianist with an enormous memory for music after hearing a song only once; and Jill Price, for whom the inability to forget was a curse.

We sometimes refer to a person's "photographic" memory.  But that is not the way memory works, particularly memory based on visual perception.  Recall Antonio Damasio's account of how the brain retains and retrieves memories.  (See April 8, 2011 post).   Larger human brains lack the storage capacity for "large files of recorded images of prior events." To solve this problem, Damasio argues, human brains "borrowed the dispositional strategy" from early evolution that allows us to be able to retrieve those memories without, figuratively speaking, having to film and store those images. Here is how Damasio says this works. He refers to ancient dispositional networks, which we can identify as the subcortical systems discussed by Jaak Panksepp in his account of  human emotional systems (see May 19, 2013 post), and these are unconscious, automatic dispositions that are essentially hardwired into our biology. Recall Panksepp's discussion of the Fear System and the link between emotions and memory:  "Learning and memory are automatic and involuntary responses (mediated by unconscious mechanisms of the brain), which in their most lasting forms are commonly tethered to emotional arousal. Emotional arousal is a necessary condition for the creation of fear-learning memories."  (May 19, 2013 post). The ancient dispositional networks and the more recent, evolutionarily speaking, mapping networks in the brain, says Damasio, are now connected. A current perception triggers the dispositional network that directs the brain to reassemble aspects of past perceptions from the part of the cerebral cortex that had been previously activated when an original perception of an object occurred and where the representation or image was mapped. This occurs in what Damasio refers to as convergence-divergence zones that record "the coincidence of activity in neurons hailing from different brain sites, neurons that had been made active by the mapping of a certain object." 
A part of the cerebral cortex is devoted to image space, where images of all sensory types occur and map-making takes place; a separate part of the cerebral cortex is devoted to dispositional space, where the tools exist to reactivate and generate images previously experienced --- affectionately called "grandmother cells" after our ability to recall our grandmother. The contents of dispositions, Damasio says, are always unconscious --- he says they are "encrypted" and "implicit" --- in contrast to the explicit images in the image space created by current perceptions. The "encrypted" dispositions are not themselves images, but merely implicit formulas for how to reconstruct maps in image space. Our "knowledge base" is, by this hypothesis, part of our unconscious brain, stored in code, waiting to be retrieved from what Damasio refers to as "association cortices" (and hence the analogy to Hume's associationism).

Quiroga concurs in substantial part.  "The brain does not reproduce visual stimuli like a digital camera or television, but rather processes their meaning starting from relatively little information and a set of unconscious inferences.  Now, if the neurons in the primary visual cortex do not simply copy information detected by the retina, what do they do?  This was what David Hubel and Torsten Wiesel at the Johns Hopkins University set out to study in the late 1950s.  Following a serendipitous event and a spectacular series of experiments that ensued (which earned them the Nobel Prize in Physiology or Medicine in 1981), Hubel and Wiesel discovered neurons in the primary visual cortex that respond to oriented lines.  This information eventually reaches the inferior temporal lobe where "face cells" --- i.e., neurons that respond to human or monkey faces but not to other images, for example of hands, fruits or houses --- were found in experiments with monkeys. . . . Each neuron in the retina responds to a particular point, and we can infer the outline of a cube starting from the activity of about thirty of them [retinal neurons].  Next the neurons in the primary visual cortex fire in response to oriented lines; fewer neurons are involved and yet the cube is more clearly seen.  This information is received in turn by neurons in higher visual areas, which are triggered by more complex patterns --- for example, the angles defined by the crossing of two or three lines. . . . As the processing of visual information progresses through different brain areas, the information represented by each neuron becomes more complex, and at the same time fewer neurons are needed to encode a given stimulus.  In the late 1960s, Polish psychologist Jerzy Konorski wondered if at the end of this process there might be individual neurons that represent an object or a person as a whole.
Is there, for example, a neuron that represents the concept of 'my home,' another one that responds to 'my dog,' and another for 'my grandmother'?"  Quiroga believed the answer lies in or near the hippocampus, because of a case made famous by Brenda Milner involving surgery that removed a patient's hippocampus and resulted in an inability to form explicit, new memories.

Quiroga became engaged in research with Christof Koch at Caltech that studied recordings of activity of individual neurons in the medial temporal lobe that responded not to a generic face, but responded uniquely to the faces of individual humans:  actress Jennifer Aniston, soccer star Diego Maradona, and Mr. T.  Medial temporal lobe structures that are critical for long-term memory include the amygdala and hippocampus, along with the surrounding hippocampal region consisting of the perirhinal, parahippocampal, and entorhinal neocortical regions. The hippocampus is critical for memory formation, and the surrounding medial temporal cortex is currently theorized to be critical for memory storage. The prefrontal and visual cortices are also involved in explicit memory.  The firing of neurons that begins in the retina (or, in the case of auditory stimuli, in the cochlea) goes through the primary visual cortex (or other primary cortical areas) and, passing through higher visual areas in the temporal lobe, ultimately reaches the hippocampus.  The many individual neurons in the retina respond to many discrete visual stimuli that do not represent the image as a whole.  At some point in the brain's pathways from the retina to the hippocampus the brain assembles these varied discrete stimuli.  What Quiroga and others found is that the "Jennifer Aniston neuron" did not respond to a single image of Jennifer Aniston, but to multiple images of Jennifer Aniston.  Further research found that a specific neuron would also respond to a multitude of Star Wars characters --- Luke Skywalker, Yoda, Darth Vader.  What this signified is that the unique neuron did not associate with a particular camera image, but with a conceptual image.  This "loss of detail" led Quiroga to conclude that the neuron codes for an abstraction of Jennifer Aniston.  "We can see Funes," writes Quiroga, "as someone lacking those Jennifer Aniston neurons that encode abstract concepts."
He repeats, "We do not process images in our brains in the same way a camera does; on the contrary, we extract a meaning and leave aside a multitude of details. . . . Just like perception, memory is a creative process; when we remember something we do not repeat the experience as it was; rather we relive it in another context, create a new representation, and even change its meaning."  If we integrate Damasio into Quiroga's thinking, the convergence-divergence zones assemble "a cascade of processes that involve memories and emotions," "extracting particular features that help us recognize these people and objects in very different circumstances."  (See November 6, 2011 post.)

Quiroga quotes Francis Crick (The Astonishing Hypothesis):  "What you see is not what is really there; it is what your brain believes is there.  Seeing is an active, constructive process.  Your brain makes the best interpretation it can according to its previous experience and the limited and ambiguous information provided by our eyes." This recalls Michael Gazzaniga's "Interpreter" (see July 25, 2013 and September 27, 2009 posts).  A previous post that quotes Crick and Gazzaniga is likewise worth recalling (see May 22, 2011 post).  This latter post introduces the subject of the brain's capacity to engage in self-deception, to imagine things that don't exist in reality.  I think in most cases we do not engage in self-deception; the brain's default mode is to construct a reality as close as possible to the reality that is out there.  So what we see, most of the time, is really there, or at least close to it.  (See February 4, 2012 post.)  Information in our quantum world is probabilistic. It is true that the brain quickly discards much information that is never encoded.  As Daniel Schacter has explained --- and Quiroga acknowledges --- our memories can become very fragile if not unreliable, and our "reality" can read more like fiction.  (See September 20, 2011 post.)  And that is interesting, because the minds of savants like Funes can recollect and retain incredible detail for considerable periods of time without assigning meaning to it, and in those cases the brain does not filter reality from gibberish.