I usually discover something new in rereading a book I have not touched in a long time. With the passage of time comes a perspective different from the original one, and that different perspective yields a different insight. In some cases the book loses its magic the second time around; in other cases the book is just as vibrant as it was the first time, but for entirely different reasons.
Decades ago, while a mere high school student reading Shakespeare in the wake of the 1970 Kent State University shootings, I submitted a paper for the Shakespeare course that rewrote Richard III in contemporary terms. I titled it Richard the Third Rate. I wish I could recall how I dealt with the opening lines, "Now is the winter of our discontent made glorious summer by this sun of York; And all the clouds that lour'd upon our house in the deep bosom of the ocean buried." Certainly I modified "[son] of York" in some way to refer to Richard Nixon's "house." And certainly I did not rewrite the entire play, but I do recall the closing: "A chopper, a chopper, my Kingdom for a chopper." That is how Presidents leave their grounds and escape these days: they climb into a helicopter and fly away. In hindsight the line was unexpectedly prescient, because just four years later Richard Nixon climbed into a chopper and fled Washington, DC after resigning the Presidency. He resigned his Kingdom for a chopper and avoided an impeachment trial.
To be sure, the parallels between the two Richards are not strong. By Shakespeare's count, Richard III is directly responsible for the execution of eleven kin, close and distant, as he cleared his path to the English throne. With the commencement of US bombing in Cambodia, Richard Nixon merely set in motion events that indirectly connect him to the deaths of four students at Kent State University. Richard Nixon also suffered a far different fate than Richard Plantagenet of York, Richard III, king of England for just two years (1483-1485). Though shamed, and spared criminal prosecution only by a pardon from his successor, Richard Nixon rebuilt his reputation to a considerable degree and lived for twenty more years after relinquishing his kingdom. Richard III's rule was extinguished when he was slain in battle by his enemies, like many of his Plantagenet kin.
We have much different means and structures for removing someone from power today, although modern polities are certainly not uniform in how they approach the transfer of political power. The manner in which Richard III was removed from power certainly persists in a few nations, and battle to the death, execution, and murder were considerably more common in the 14th and 15th centuries. The interesting storyline about Richard III's demise and removal from power is that it was all in the family.
Clearly, as I read Richard III in 1970, Richard Nixon was part of my mental association. Forty-three years later, kin selection was on my mind as I turned the pages. One definition of kin selection goes like this: kin selection is an evolutionary theory proposing that people are more likely to help blood relatives because doing so increases the odds of transmitting shared genes to future generations. The theory suggests that altruism toward close relatives occurs in order to ensure the continuation of shared genes: the more closely individuals are related, the more likely they are to help one another. That "help" may include sacrificial behavior. (See September 17, 2012, September 12, 2012, October 13, 2010, and November 4, 2009 posts.)
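The "more closely related, more likely to help" claim in that definition is usually formalized as Hamilton's rule, which is not quoted in the definition or the posts cited above but is the standard way the intuition gets written down:

```latex
% Hamilton's rule: an altruistic act is favored by kin selection when
%   r b > c
% r = coefficient of relatedness between actor and recipient
%     (roughly 1/2 for full siblings, 1/8 for first cousins),
% b = reproductive benefit to the recipient,
% c = reproductive cost to the actor.
\[
  r\,b > c
\]
```

On this accounting, sacrificial behavior toward distant kin is favored only when the benefit conferred is very large relative to the cost borne by the altruist.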
The House of Plantagenet obviously did not seriously contemplate increasing the odds of their gene transmission to future generations during their monarchical reign over England in the 14th and 15th centuries. Altruism and self-sacrifice were not in their blood; conspiring against and slaying each other was. "A house divided against itself cannot stand," said Abraham Lincoln during his 1858 campaign against Stephen Douglas, nearly four hundred years after the death of Richard III. Lincoln's remark, which traces to the Gospel of Mark and was later adapted by Thomas Hobbes in his Leviathan, could very well have been written by William Shakespeare for Richard III.
The Plantagenet family tree is worth a look. There are some recognizable names from the British royal line. But look a little closer at some in this dysfunctional family:
Over time the House of Plantagenet came to include two "cadet" branches: the House of Lancaster, whose royal line descended from John of Gaunt, a son of Edward III (the Lancastrian title itself originated with a son of Henry III), and the House of York, established by Edmund of Langley, another son of Edward III. The cadet House of Lancaster captured the English throne with the accession of Henry IV and lost it to the House of York with the accession of Edward IV. Richard III was a member of the House of York, succeeding Edward IV. These two cadet branches represented the divided House of Plantagenet, and their rivalry ultimately led to the Wars of the Roses between the two family subunits.
Edward II: His invasion of Scotland in 1314 to suppress revolt resulted in defeat at Bannockburn. When he fell under the influence of a new favourite, Hugh le Despenser, he was deposed in 1327 by his wife Isabella (1292–1358), daughter of Philip IV of France, and her lover Roger de Mortimer, and murdered in Berkeley Castle, Gloucestershire. He was succeeded by his son, Edward III.
Richard II: Richard was born in Bordeaux. He succeeded his grandfather Edward III when only ten, the government being in the hands of a council of regency. His fondness for favourites resulted in conflicts with Parliament, and in 1388 the baronial party, headed by the Duke of Gloucester, had many of his friends executed. Richard recovered control in 1389, and ruled moderately until 1397, when he had Gloucester [14th child of Edward III] murdered and his other leading opponents executed or banished, and assumed absolute power. In 1399 his cousin Henry Bolingbroke, Duke of Hereford (later Henry IV of the House of Lancaster), returned from exile to lead a revolt; Richard II was deposed by Parliament and imprisoned in Pontefract Castle, where he died probably of starvation.
Henry VI: King of England from 1422, son of Henry V. He assumed royal power 1442 and sided with the party opposed to the continuation of the Hundred Years' War with France. After his marriage 1445, he was dominated by his wife, Margaret of Anjou. He was deposed 1461 in the Wars of the Roses; was captured 1465, temporarily restored 1470, but again imprisoned 1471 and then murdered. The unpopularity of the government, especially after the loss of the English conquests in France, encouraged Richard, Duke of York, to claim the throne, and though York was killed 1460, his son Edward IV proclaimed himself king 1461.
Edward IV (House of York): He was the son of Richard, Duke of York, and succeeded Henry VI in the Wars of the Roses, temporarily losing his throne to Henry when Edward fell out with his adviser Richard Neville, Earl of Warwick. Edward was a fine warrior and intelligent strategist, with victories at Mortimer's Cross and Towton in 1461, Empingham in 1470, and Barnet and Tewkesbury in 1471. He was succeeded by his son Edward V.
Edward V: King of England 1483. Son of Edward IV, he was deposed three months after his accession in favour of his uncle (Richard III), and is traditionally believed to have been murdered (with his brother) in the Tower of London on Richard's orders.
Richard III: King of England from 1483. The son of Richard, Duke of York, he was created Duke of Gloucester by his brother Edward IV, and distinguished himself in the Wars of the Roses. On Edward's death 1483 he became protector to his nephew Edward V, and soon secured the crown for himself on the plea that Edward IV's sons were illegitimate. He proved a capable ruler, but the suspicion that he had murdered Edward V and his brother undermined his popularity. In 1485 Henry, Earl of Richmond (later Henry VII), raised a rebellion, and Richard III was defeated and killed at Bosworth. After Richard's death on the battlefield his rival was crowned King Henry VII and became the first English monarch of the Tudor dynasty which lasted until 1603.
Henry VII: Henry was the son of Edmund Tudor, earl of Richmond, who died before Henry was born, and Margaret Beaufort, a descendant of Edward III through John of Gaunt, Duke of Lancaster. Although the Beaufort line, which was originally illegitimate, had been specifically excluded (1407) from all claim to the throne, the death of the imprisoned Henry VI (1471) made Henry Tudor head of the house of Lancaster. At this point, however, the Yorkist Edward IV had established himself securely on the throne, and Henry, who had been brought up in Wales, fled to Brittany for safety. The death of Edward IV (1483) and accession of Richard III, left Henry the natural leader of the party opposing Richard, whose rule was very unpopular. Henry made an unsuccessful attempt to land in England during the abortive revolt (1483) of Henry Stafford, Duke of Buckingham. Thereafter he bided his time in France until 1485 when, aided by other English refugees, he landed in Wales. At the battle of Bosworth Field, Leicestershire, he defeated the royal forces of Richard, who was killed. Henry advanced to London, was crowned, and in 1486 fulfilled a promise made earlier to Yorkist dissidents to marry Edward IV's daughter, Elizabeth of York. He thus united the houses of York and Lancaster, founding the Tudor royal dynasty. Although Henry's accession marked the end of the Wars of the Roses, the early years of his reign were disturbed by Yorkist attempts to regain the throne.
The Plantagenets are hardly the picture of our altruistic nature. Shakespeare is the chronicler of this blood-stained line of royals (Richard II, Henry IV, Henry V, Henry VI, Richard III), and Richard III brings their chronicle to its conclusion, beginning in the waning months of the life of Edward IV, with the members of the House of York reminding each other just who killed whom over the latter years of the Wars of the Roses. As one source summarizes this strife, there was division not merely between the two cadet Houses of the same family, but within the House of York itself: "The next round of the wars arose out of disputes within the Yorkist ranks. Warwick and his circle were increasingly passed over at Edward’s court; more seriously, Warwick differed with the King on foreign policy. In 1469 civil war was renewed. Warwick and Edward’s rebellious brother George, duke of Clarence, fomented risings in the north; and in July, at Edgecote (near Banbury), defeated Edward’s supporters, afterward holding the King prisoner. By March 1470, however, Edward regained his control, forcing Warwick and Clarence to flee to France, where they allied themselves with the French king Louis XI and their former enemy, Margaret of Anjou. Returning to England (September 1470), they deposed Edward and restored the crown to Henry VI. Edward fled to the Netherlands with his followers and, securing Burgundian aid, returned to England in March 1471. Edward outmaneuvered Warwick, regained the loyalty of Clarence, and decisively defeated Warwick at Barnet on April 14. That very day, Margaret had landed at Weymouth. Hearing the news of Barnet, she marched west, trying to reach the safety of Wales; but Edward won the race to the Severn. At Tewkesbury (May 4) Margaret was captured, her forces destroyed, and her son killed. Shortly afterward, Henry VI was murdered in the Tower of London. Edward’s throne was secure for the rest of his life (he died in 1483)."
Quoth Shakespeare's Henry VII as the curtain closes on Richard III: "England hath long been mad, and scarred herself: The brother blindly shed the brother's blood; The father rashly slaughtered his own son; The son, compelled, been butcher to the sire: All this divided York and Lancaster, Divided in their dire division."
What Richard III never really enjoyed, but Richard Nixon did, was abiding loyalty. John Dean ultimately broke the Nixon clique's conspiracy of silence, but everyone else in the President's inner circle maintained their silence, and Nixon stood by his men. Richard Nixon divided a nation, not his family or followers. Richard III's inner circle peeled away, some refusing to carry out his (if Shakespeare's history is accurate) criminal commands, perhaps out of principle, perhaps out of fear of slaughter, and in the end he had few to stand by him as he cried, "A horse, a horse, my Kingdom for a horse."
Thursday, December 19, 2013
Wednesday, December 11, 2013
Daniel Kelly, Yuck! The Nature and Moral Significance of Disgust (2011)
I have an aversion to lima beans. aver·sion, noun \ə-ˈvər-zhən, -shən\
: a strong feeling of not liking something
2 a : a feeling of repugnance toward something with a desire to avoid or turn from it
b : a settled dislike : antipathy <an aversion to parties>
c : a tendency to extinguish a behavior or to avoid a thing or situation and especially a usually pleasurable one because it is or has been associated with a noxious stimulus
Yuck! But is this aversion a form of disgust? dis·gust noun \di-ˈskəst, dis-ˈgəst also diz-\
: a strong feeling of dislike for something that has a very unpleasant appearance, taste, smell, etc.
: annoyance and anger that you feel toward something because it is not good, fair, appropriate, etc.
: marked aversion aroused by something highly distasteful : repugnance
I find lima beans distasteful, noxious. I avoid eating them. Apparently, I am not the only one. Years ago as a child, I felt the same way toward other food items, but today only the lima bean remains associated with a noxious stimulus of some kind that I cannot define. I know well that others like lima beans and are not harmed by them, but something triggers certain neurons firing in my brain that creates this reaction to the lima bean. Are lima beans disgusting, by which I mean to describe lima beans as an elicitor of disgust? By definition, lima beans are disgusting . . . at least to me. dis·gust·ing, adjective
: so unpleasant to see, smell, taste, consider, etc., that you feel slightly sick
: so bad, unfair, inappropriate, etc., that you feel annoyed and angry
You see in these definitions of disgust and disgusting two distinctive feelings. One centers on an aversion, dislike or repugnance toward something that is distasteful, or suffers from a bad smell or appearance; although the definitions do not elaborate on what is distasteful or smells bad, it is easy to think of something associated with the mouth and ingestion such as rotten food or a poison. The second feeling centers on a dislike for another group or behavior considered annoying or unfair by some standard. In both cases, the consequence of the feeling is likely rejection: rejection of the noxious substance; rejection of the other person or group.
Paul Rozin at the University of Pennsylvania and Jonathan Haidt, formerly of the University of Virginia and now at NYU, have devoted more attention and research to the subject of disgust than perhaps anyone else. In their contribution to the 1999 edition of the Handbook of Cognition and Emotion, entitled "Disgust: The Body and Soul Emotion," they made several key points and arguments:
- Distaste and disgust are different. Other animals, particularly other mammals, react to ingesting a distasteful substance by rejecting it. Disgust, on the other hand, is uniquely human because, in addition to the biological rejection of a distasteful or foul-smelling substance, the feeling has a cognitive content that is not elicited by sensory properties. Like other emotions, disgust links together cognitive and bodily responses, which can be analyzed as an affect program, in which outputs (behaviors, expressions, physiological responses) are triggered by inputs (cognitive appraisals or environmental events); a toy sketch of this input/output framing appears after this list. While the biological outputs that represent disgust have been reasonably stable among human populations over time, there has been an enormous expansion on the cognitive appraisal side, an expansion that varies with history and culture and takes disgust far beyond its animal precursors and well beyond an aversion to lima beans.
- For humans, the elicitors of core disgust are generally of animal origin, although there is research that plants and vegetables can elicit core disgust.
- The rejection response is now harnessed to the offensive idea that humans are animals, and thus disgust is part of affirming our unique humanity by suppressing every characteristic that we feel to be 'animal'. They call this animal nature disgust and distinguish it from core disgust. The cognitive notion here includes associating something deemed disgusting with something labeled impure.
- There is also another form of disgust they call interpersonal disgust, which is rejection of persons outside one's social or cultural group. Hindu caste behavior is a prime example, but there are hundreds of other examples, racial, religious, and the like, that are readily recognized.
- Finally, there is social-moral disgust, where violations of social norms trigger a feeling of disgust. Not all violations of social norms trigger disgust. Bank robbery, they point out, while viewed as "wrong," does not necessarily trigger disgust or rejection.
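To make the affect-program framing in the first point concrete, here is a minimal sketch in code. It is my own illustration, not anything from Rozin and Haidt's chapter or from Kelly's book, and the elicitor lists are hypothetical examples. The point it encodes is simply that the output side is small and biologically stable, while the input side starts with a handful of core elicitors and can be expanded indefinitely by culture.

```python
# A toy illustration of the affect-program framing: fixed outputs, expandable inputs.
# The elicitor lists below are hypothetical examples, not data from Rozin and Haidt.

CORE_ELICITORS = {"spoiled food", "feces", "vomit"}                   # roughly universal inputs
FIXED_OUTPUTS = ("gape face", "nose wrinkle", "nausea", "rejection")  # stable biological outputs

def disgust_program(stimulus, learned_elicitors=frozenset()):
    """Return the (unchanging) outputs if the stimulus is a core or culturally learned elicitor."""
    if stimulus in CORE_ELICITORS or stimulus in learned_elicitors:
        return FIXED_OUTPUTS
    return ()

# Core disgust and social-moral disgust trigger the same output program;
# only the input side differs, and only the learned inputs vary by culture.
print(disgust_program("vomit"))
print(disgust_program("caste transgression", learned_elicitors={"caste transgression"}))
print(disgust_program("lima beans"))  # distaste, perhaps, but not (for most people) disgust
```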
There has been much written in recent years about disgust as a moral emotion and about the potential nexus between disgust and moral judgments and ethical norms. Incest avoidance is one example of "moral" behavior at the heart of this discussion. Avoiding the consumption of lima beans is not, and I am reasonably certain that the avoidance of lima beans is not considered a "moral behavior" in any culture or society. Marc Hauser described disgust as "the most powerful emotion against sin, especially in the domains of food and sex. . . . In the absence of a disgust response, we might well convince ourselves that it is okay to have sex with a younger sibling or eat vomit, act with deleterious consequences for our reproductive success and survival respectively." Incest avoidance is worthy of study -- in contrast to my aversion to lima beans -- because it is virtually universal across cultures, and is therefore almost unique among things humans generally avoid. Aversion to eating pork, for example, is not universal; as with lima beans, a substantial part of the human population likes eating pork and is not harmed by it. In some cultures, though, the avoidance of eating pork is considered a "moral behavior."
Disgust did not begin as a moral or even a social emotion. There is common agreement among those who have studied disgust that the emotion's evolutionary origins lie in the response to ingesting something harmful: spoiled food, toxins. It later evolved as a response to the presence of disease and parasites, which is referred to as core disgust. Daniel Kelly documents this body of research in Yuck! A common feature of this emotion is the automatic facial expression called the gape face. As one article explains, "At its root, disgust is a revulsion response -- "a basic biological motivational system" -- that Darwin associated with the sense of taste. Its function is to reject or discharge offensive-tasting food from the mouth (and/or the stomach), and its fundamental indicator, the "gape" or tongue extension, has been observed in a number of animals, including birds and mammals. In humans, the characteristic facial expressions of disgust that coincide with gaping include nose wrinkling and raising the upper lip, behaviors usually accompanied by a feeling of nausea and a general sense of revulsion. Together these behaviors and sensations facilitate the rejection of food that has been put into the mouth." Evolutionarily, disgust began with distaste, but at some point it adapted in humans to protect against infection by pathogens and parasites. Kelly's thesis is that the two responses became "entangled": "Other previously puzzling features of disgust also fall into place once its role in parasite avoidance becomes clear. Together, eating and sex constitute two of the most basic evolutionary imperatives. Both behaviors are unavoidable ingredients of evolutionary success, but both involve the crossing of bodily perimeters at various points. By virtue of this, both activities leave those engaging in them highly vulnerable to infection. The upshot is that disgust's role in monitoring the boundaries of the entire body (rather than just the mouth) makes much more sense in light of its connection to infectious disease. Moreover, both feeding and procreating are activities that require those boundaries to be breached. They are highly salient to disgust both because they are universal and unavoidable and because they are two of the most potent vectors of disease transmission." It is this entanglement of distaste and core disgust that is unique to humans; there is evidence that each is independently found in other species.
Importantly, disgust appears to be activated in the cortex. Kelly identifies the insula, which is not part of the ancient subcortical system of the forebrain or midbrain that Jaak Panksepp documents as associated with basic emotional systems. (See May 19, 2013 post). The word disgust never appears in Panksepp's book, and it is not even found in his description of the fear system involving the amygdala and the hypothalamus, which, because fear stimulates flight, sounds like it might be related to an emotion like disgust that stimulates avoidance. Kelly also identifies an area of the forebrain, the putamen, as associated with processing disgust, but the putamen is involved in emotional facial recognition, so it may not be something disgusting that activates the putamen but rather the recognition of the gape face. The putamen is primarily associated with the motor system, so its role as an emotional processor is not at all clear. I suspect that the putamen, if it is implicated in disgust, is not part of what Rozin and Haidt refer to as inputs (cognitive appraisals or environmental events), but part of the biological output that results in virtually automatic, nearly uniform facial expressions. Research on patients with Huntington's Disease would seem to confirm this observation. The insular cortex, by contrast, is involved in maintaining the homeostatic condition of the body; it maintains an awareness of various parameters of the body, and it is also believed to process convergent information to produce an emotionally relevant context for sensory experience. More specifically, the anterior insula is related more to olfactory, gustatory, viscero-autonomic, and limbic function, while the posterior insula is related more to auditory-somesthetic-skeletomotor function. The insula has an important role in the experience of pain and of a number of basic emotions, including anger, fear, disgust, happiness and sadness. If disgust is seated in the insular cortex, this would confirm the significance of cognitive appraisal in producing disgust. The insular cortex is a mammalian development, so it evolved later than the ancient emotional systems that Panksepp discusses.
Morality varies across cultures and even within cultures and smaller social groups. Edvard Westermarck advanced the thesis that there is an innate aversion to sexual intercourse between persons who have lived very closely together from early youth, and, as applied to persons who are closely related, this generates a feeling of horror at intercourse with close kin. But is this aversion really innate? Or is it learned? Or is something innate triggered only because, in this case, one must first experience living closely with someone at an early age? Because there is survival value in avoiding inbreeding -- inbreeding reduces the fitness of a given population -- there is arguably an evolutionary imperative behind incest avoidance, which should explain in substantial part why incest avoidance is universal across cultures. Incest avoidance seems to have nothing to do with morals. If it is considered moral behavior, it is only because humans have put that label on a form of behavior that likely predates social norms and morals. If it is considered disgusting, it is only because humans have put that label on it.
Outside the example of the aversion to inbreeding, interpersonal and social-moral disgust is something that is learned. It is not innate. Rozin observes that up to about two years of age, children show no aversion to primary disgust elicitors such as feces. Toilet training may be the learning event that leads to core disgust. Later in childhood development, an event or object that was "previously morally neutral becomes morally loaded." At this point in the learning process, disgust becomes recruited. Disgust "becomes a major, if not the major force for negative socialization in children; a very effective way to internalize culturally prescribed rejections (perhaps starting with feces) is to make them disgusting."
Kelly theorizes that disgust migrated from being an emotional response to toxins, parasites and disease to being a socially shared emotion because the gape face is automatic, and the emotion revealed in that facial gesture was communicated in a way that was empathetically received. As with other facial communications (see July 16, 2010 post), there is a reliable causal link between the production of the emotion and its expression, which acts as a signal to avoid. To the extent that group selection (or, more narrowly, kin selection) is engaged, shared disgust becomes a survival mechanism for the group to avoid toxins, parasites and disease. Kelly posits that the genetic underpinnings of the neural correlates of the emotion and the gape face were recruited by human culture to perform several novel functions. What the emotion qua emotion of disgust has in common with the socially shared emotion is that the object of the emotion's attention is distasteful and/or impure. As a cultural phenomenon, the socially shared emotion becomes connected to social, moral or ethical norms, group identity and the avoidance of others. Culture then labels something to be avoided or averted as disgusting.
Social or ethical norms are not necessarily social or ethical norms because of a common emotional stimulus like disgust, although they could be; common aversions to foods and attitudes toward sexual behaviors within a culture are a few examples that come to mind. Because most social norms are learned and not instinctual, interpersonal and social-moral disgust is little more than a social label for one's attitude toward behavioral transgressions of group rules. Interestingly, the label may not be shared by all within the group. Consider, for example, how societal attitudes toward homosexuality --- a behavior that large segments of many societies consider "disgusting" --- are rapidly changing.
Kelly concludes that disgust is "far from being a reliable source of special, supra-rational information about morality," and that we should be extremely skeptical of claims that disgust deserves any "epistemic credit" as a trustworthy guide to justifiable moral judgments, or of claims that there is deep ethical wisdom in repugnance. Justifying a moral or ethical rule on disgust can easily slide into dehumanization and demonization, which is itself problematic. There is a ready and recent example that highlights Kelly's concern in today's news that the youthful leader of North Korea had his elder uncle, also in the leadership of North Korea's government, executed for treason. In language that recalls Rozin and Haidt's discussion of animal-nature disgust, the uncle was labeled "despicable human scum" and "worse than a dog," and was said to have betrayed his party and leader. In other words --- particularly given the association with animals and scum (certainly there are parasites and disease in scum) --- the uncle was disgusting. His purported "disgustingness" became a post hoc justification for his execution.
This entire discussion implicates the relationship between genes and culture, and Kelly devotes an entire chapter to "gene-culture coevolution," sometimes referred to as dual inheritance theory (see September 12, 2012 and June 17, 2010 posts), and its application to the evolution of disgust. Rozin and Haidt concur: "The interaction between biology and culture is clear, because the output side of disgust remains largely ruled by biological forces that originally shaped it, while the input/appraisal/meaning part has been greatly elaborated, and perhaps transformed in some cases."
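A toy simulation can make the dual-inheritance idea concrete. The sketch below is my own illustration under loose assumptions, not Kelly's model or any published simulation: the response machinery is copied down a genetic (vertical) channel and never changes, while the list of elicitors is copied down a cultural (oblique) channel from the group at large and keeps drifting and expanding.

```python
import random

# Toy sketch of dual inheritance (gene-culture coevolution). Each child copies a
# fixed, genetically specified disgust response from one parent, but assembles its
# list of elicitors by learning from the whole group, occasionally picking up a new
# culturally invented elicitor. All names and rates here are illustrative only.

random.seed(0)

class Individual:
    def __init__(self, genetic_response, learned_elicitors):
        self.genetic_response = genetic_response          # genetic (vertical) channel
        self.learned_elicitors = set(learned_elicitors)   # cultural (oblique) channel

def next_generation(population, novel_rate=0.2):
    pooled = set().union(*(ind.learned_elicitors for ind in population))
    children = []
    for _ in population:
        parent = random.choice(population)
        learned = set(random.sample(sorted(pooled), k=min(3, len(pooled))))
        if random.random() < novel_rate:
            learned.add(f"norm violation {random.randint(1, 99)}")  # hypothetical cultural addition
        children.append(Individual(parent.genetic_response, learned))
    return children

population = [Individual("gape face + nausea + rejection", {"spoiled food", "feces"})
              for _ in range(10)]
for _ in range(5):
    population = next_generation(population)

# After five generations the output side is untouched, while the input side has been
# elaborated culturally -- the asymmetry Rozin and Haidt describe.
print(population[0].genetic_response)
print(sorted(set().union(*(p.learned_elicitors for p in population))))
```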
Wednesday, November 27, 2013
Jose Saramago, The Lives of Things (2013, 1995)
At a very early age, children learn the difference between artifacts and living things: the inanimate and the animate. In fact, one of the things the human mind does quite well from an early age is categorize things into what are often referred to as ontological categories. Children recognize intentionality in animals, and they recognize that artifacts lack intentionality. The animate are characterized by motion and by an ability to communicate in some capacity. Artifacts do not move on their own, nor do they have an ability to communicate. Very early in life humans develop certain expectations about these categories. Yet there are some adults who come to believe that they can talk to artifacts and that the artifacts can listen to them and perhaps respond as if they were living things. Religious icons are an example of such artifacts. When we believe that icons can hear us speak and respond in some way, this violates our expectations of the inanimate. When those violations occur, we have entered the realm of the supernatural, the paranormal. Why does this happen?
In Religion Explained, anthropologist Pascal Boyer poses a number of supernatural notions --- many of which are linked to a religious idea associated with a particular religion (not just the predominant religions), and some others that he just makes up --- and what interests him is whether the listener (or reader) can say that a particular religion has been built up around the idea. They range from "Some people get old and then one day they stop breathing and die and that's that" to "There is one God! He knows everything we do" to "Dead people's souls wander about and sometimes visit people" to "When people die, their souls sometimes come back in another body" to "We worship this woman because she was the only one ever to conceive a child without having sex" to "We pray to this statue because it listens to our prayers and helps us get what we want" to "This mountain over there eats food and digests it. We give it food sacrifices every now and then, to make sure it stays in good health" to "The river over there is our guardian. It will flow upstream if it finds out that people have committed incest." Obviously, the first does not seem like a notion that a religion would build up around because it is part of our everyday experience. But we should recognize a religious affiliation with the others.
Religious representations, says Boyer, are particular combinations of mental representations that satisfy two conditions. "First, the religious concepts violate certain expectations from ontological categories. Second, they preserve other expectations." A very frequent type of counterintuitive concept is produced by assuming that various objects or plants have some mental properties, that they can perceive what happens around them, understand what people say, remember what happened, and have intentions. A familiar example of this would be that of people who pray to statues of gods, saints or heroes. Not just artifacts but also inanimate living things can be "animated" in this sense. Boyer reports, "The pygmies of the Ituri forest for instance say that the forest is a live thing, that it has a soul, that it "looks after" them and is particularly generous to sociable, friendly and honest individuals. These will catch plenty of game because the forest is pleased with their behavior."
What Boyer is getting at is quite consistent with Jeff Hawkins' model of the cortex. (See November 16, 2013 post). Boyer describes these as inference systems. We quickly make inferences about something we experience, derived from higher-level categories --- object vs. animal vs. plant, for example. In Hawkins' model, our memory of these higher-level categories, "concepts," is retained in the higher cortical areas, while the synapses of the hierarchical structure below the higher cortical areas, as described in On Intelligence, contain our memories of narrower categories of more specific objects, animals, and plants and their respective attributes. The hierarchical structure of the cortex described by Hawkins resembles the structure of a taxonomy. Taxonomy is a "powerful logical device that is intuitively used by humans in producing intuitive expectations about living things. People use the specific inference system of intuitive biological knowledge to add to the information given." But why then does the brain persistently retain memories of non-real --- supernatural --- concepts?
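The taxonomy-with-inheritance idea, and Boyer's notion of a violation of ontological expectations, can be sketched in a few lines of code. The categories and attributes below are my own illustrative choices, not Boyer's experimental materials and not Hawkins' actual cortical model; the sketch only shows how lower-level concepts inherit expectations from higher-level categories, and how a concept like a listening statue keeps most of those expectations while cancelling one.

```python
# Minimal sketch of a taxonomic inference system. Lower-level concepts inherit the
# expectations of the ontological categories above them; a "supernatural" concept
# preserves most inherited expectations while violating one or two. The categories
# and attributes are illustrative placeholders.

ONTOLOGY = {
    "thing":    {"parent": None,       "expectations": {"occupies space"}},
    "artifact": {"parent": "thing",    "expectations": {"made by people", "does not act on its own"}},
    "animal":   {"parent": "thing",    "expectations": {"moves itself", "has intentions", "dies"}},
    "person":   {"parent": "animal",   "expectations": {"talks", "listens", "remembers"}},
    "statue":   {"parent": "artifact", "expectations": {"depicts something"}},
}

def inherited_expectations(category):
    """Walk up the tree, accumulating everything a concept in this category inherits."""
    expectations = set()
    while category is not None:
        expectations |= ONTOLOGY[category]["expectations"]
        category = ONTOLOGY[category]["parent"]
    return expectations

def agentive_violations(category, claimed_attributes):
    """Claimed attributes that the concept's own category does not license but an
    intentional agent's category does -- a crude proxy for Boyer's violations."""
    licensed = inherited_expectations(category)
    agent_like = inherited_expectations("person")
    return {a for a in claimed_attributes if a not in licensed and a in agent_like}

# A statue that occupies space and depicts a saint: intuitive, no violation.
print(agentive_violations("statue", {"occupies space", "depicts something"}))
# A statue that listens to prayers and remembers them: exactly the kind of
# minimally counterintuitive concept that, on Boyer's account, is easier to recall.
print(agentive_violations("statue", {"depicts something", "listens", "remembers"}))
```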
Biological inferences are not always valid, Boyer admits. He refers to this as the enrichment of intuitive principles.
In describing his research about supernatural concepts, Boyer writes, "Our reasoning was that the present explanation of supernatural concepts, on the basis of what we know from anthropology, also implied precise psychological predictions. Cultural concepts are selected concepts. They are the ones that survive cycles of acquisition and communication in roughly similar forms. Now one simple condition of such relative preservation is that concepts are recalled. So [we] designed fairly coherent stories in which we inserted various new violations of ontological expectations as well as episodes that were compatible with ontological expectations. The difference in recall between the two kinds of information would give us an idea of the advantage of violations in individual memory. Naturally, we only used stories and concepts that were new to our subjects. If I told you a story about a character with seven-league boots or a talking wolf disguised as a grandmother, or a woman who gave birth to an incarnation of a god after a visit from an angel, you would certainly remember those themes; not just because they were in the story but also because they were familiar to start with. Our studies were supposed to track how memory stores or distorts or discards novel material." Boyer's research showed that "long-term recall (over months) shows that violations [of expectations] were much better preserved [in memory] than any other material." It is not strangeness per se that is preserved; it has to be a violation of an ontological category. So among the expected attributes of the ontological categories of living animals is that they die. Anthropocentric gods who purportedly live forever violate expectations associated with our ontological category of living things; they are therefore more likely to be preserved in memory. Likewise for statues that talk and listen and respond to human speech or thinking. This is true across cultures, only the details on top of the ontological/conceptual violations vary.
So concepts that violate our expectations of reality stick in memory. This does not sound particularly surprising. A previous post, not surprisingly connected to a Jose Saramago publication, emphasized the relationship between storytelling and memory (February 26, 2013 post): "This is not the first time that a post in this blog has connected Saramago's work with the subject of memory. In The Notebook (September 28, 2010 post), the Nobelist created a memory bank in blog form. In the posting on his final novel, Cain (December 20, 2011 post) I remarked, "I also believe storytelling evolved in part to preserve our memories of things past. (See August 15, 2011 post). And storytelling, whether historical or fictional or both, enables the construction of both personal and social/group identity." And Saramago is a master at clutching collective memory --- history we call it --- and creating stories --- fiction we call it --- as in The Year of the Death of Ricardo Reiss (June 28, 2011 post) and Baltasar and Blimunda (January 1, 2013 post)."
This finally brings us round to Saramago's collection of six early short stories, The Lives of Things. These are stories that stick in memory. Three of the short stories are constructed around artifacts --- "things," a centaur, and a chair that topples an oppressive dictator --- that violate our expectations of the ontological category of these objects. A few sentences from the story "Things" triggered my memory of Pascal Boyer's research:
"There was a time when the manufacturing process had reached such a degree of perfection and faults became so rare that the Government (G) decided there was little point in depriving members of the public (especially those in categories A, B, and C) of their civil right and pleasure to lodge complaints: a wise decision which could only benefit the manufacturing industry. So factories were instructed to lower their standards. This decision, however, could not be blamed for the poor quality of the goods which had been flooding the market for the last two months. As someone employed at the Department of Special Requisitions (DSR), he was in a good position to know that the Government had revoked these instructions more than a month ago and imposed new standards to ensure maximum quality. Without achieving any results. As far as he could remember, the incident with the door was certainly the most disturbing. It was not a case of some object or other, or some simple utensil, or even a piece of furniture, such as the settee in the entrance-hall, but of an item of imposing dimensions; although the settee was anything but small. However it formed part of the interior of furnishings, while the door was an integral part of the building, if not the most important part."
The "incident with the door" occurs as the story opens, virtually ordinary in the way Saramago describes it: "As it closed, the tall heavy door caught the back of the civil servant's right hand and left a deep scratch, red by scarcely bleeding." The civil servant decides to have this small wound treated at office infirmary and when explaining to the nurse how he came to be scratched, the nurse responds that this is the third such case that day. This incident presages a wider series of incidents that pits objects against living things. What unfolds is a revolution of objects against living people --- presumably in response to the Government's policy that manufactured goods could be made to lower quality standards; as the title of this volume implies, things (objects) come alive. Ordinary useful everyday objects begin to disappear: a pillar box, a jug, doors, stairs, utensils, clothes, and ultimately entire buildings and blocks of buildings. Nobody sees anyone taking or removing these objects; they seem to disappear of their own volition when no is watching, in the dark. This tale violates our expectations (inferences) associated with the concept of artifacts (objects). Things do not have volition, intentionality. But the tale is likely to be preserved in memory a bit longer than had Saramago not presented these "things" as a metaphor for people whom the government/society's power structure treated as objects that could be made to lower quality standards.
There are those who contend that a god or other supernatural being (omniscient, eternal) must exist and that our beliefs in these supernatural beings (as well as religion in general) are genetically hardwired. (See November 30, 2009 post). In contrast, I suggest that what is genetically hardwired is our brain's disposition not to discard, or to discard only with significant cognitive effort, concepts that violate our expectations of reality. Hence, beliefs in the ability of artifacts and imaginary unseen things to engage in behavior that we associate with living things simply stick, and they are rendered stickier when human cultural institutions reinforce those beliefs or concepts. This begins to sound like gene-culture coevolution, or dual inheritance theory. (See September 12, 2012 and June 17, 2010 posts).
What I have not been able to understand or reconstruct yet is why the sensory inputs with which the lower levels of the cortex are constantly bombarded during our encounters with the physical world do not overwhelm the sticky concepts, stuck in higher cortical areas, that violate intuitive expectations. Boyer describes humans as information hungry, in need of cooperation from other humans, including the provision of information by other humans; this cognitive niche is our milieu, our environment. We have a taste for gossip, information about other humans. Because humans are in need of cooperation, they have developed mechanisms for social exchange resulting in the formation of groups and coalitional dynamics. But the human mind is not constrained to consider and represent only what is currently going on in the immediate environment. The human brain spends a considerable amount of time thinking about "what is not here and now." Fiction --- a Jose Saramago story --- is the most salient illustration, says Boyer. "One of the easiest things for human minds to do is to produce inferences on the basis of false premises." Thus thoughts are "decoupled from their standard inputs and outputs." (See June 28, 2011 post). "Decoupled cognition," writes Boyer, "is crucial to human cognition because we depend so much on information communicated by others and on cooperation with others. To evaluate information provided by others you must build some mental simulation of what they describe. Also, we would not carry out complex hunting expeditions, tool making, food gathering or social exchange without complex planning. The latter requires an evaluation of several different scenarios, each of which is based on nonfactual premises (What if we go down to the valley to gather fruit? What if there is none down there? What if other members of the group decide to go elsewhere? What if my neighbor steals my tools? and so on). Thinking about the past too requires decoupling. As psychologist Endel Tulving points out, episodic memory is a form of mental "time travel" allowing us to re-experience the effects of a particular scene on us. This is used in particular to assess other people's behavior, to reevaluate their character, to provide a new description of our own behavior and its consequences and for similar purposes. . . . The crucial point to remember about decoupled thoughts," says Boyer, "is that they run the inference systems in the same way as if the situation were actual. This is why we can produce coherent and useful inferences on the basis of imagined premises. . . . Hypothetical scenarios suspend one aspect of actual situations but then run all inference systems in the same way as usual."
Thus fantasies --- which include not only stories like those in Saramago's The Lives of Things, but also religious stories that violate our inferred expectations of reality --- succeed to the extent that they activate the same inference systems of the brain that are used in navigating reality. "Religious concepts," concludes Boyer, "constitute salient cognitive artifacts whose successful cultural transmission depends upon the fact that they activate our inference systems in particular ways. The reason religion can become much more serious and important [is that it activates] the inference systems that are of vital importance to us: those that govern our most intense emotions, sharpen our interaction with other people, give us moral feelings, and organize social groups. . . . Religious concepts are supernatural concepts that matter, they are practical." Boyer summarizes: "What is "important" to human beings, because of their evolutionary history, are the conditions of social interaction: who knows what, who is not aware of what, who did what with whom, when and what for. Imagining agents with that information is an illustration of mental processes driven by relevance. Such agents are not really necessary to explain anything, but they are so much easier to represent and so much richer in possible inferences that they enjoy a great advantage in cultural transmission." So the fantastic that violates ontological expectations is not only sticky in individual memory; because of the facility with which it is culturally transmitted, its stickiness is enhanced as a matter of group memory, and it supports the human need for cooperation and social interaction. Will Saramago's stories of the fantastic --- for example, the Iberian peninsula breaking away from Europe and floating out into the Atlantic Ocean in The Stone Raft, a community's near complete loss of sight in Blindness, death taking a holiday in Death With Interruptions --- enjoy a great advantage in cultural transmission? Probably not. These stories are labeled fiction, and we understand them as fiction. They likely exploit the same inference systems in the brain that we rely upon to experience and navigate the real world. They may be memorable, but cultural exchange --- save the temporal book club meeting --- is not likely to be constructed around these stories. This is likely a result of what John Searle refers to when he mentions "the sheer growth of certain, objective, universal knowledge." (See January 21, 2011 post). Cultural transmission of religious concepts began long before there was any volume of certain, objective, universal knowledge. Religious concepts now compete with objective, universal knowledge for our attention, as evidenced by the Dover, Pennsylvania litigation over teaching "creation science" in the schools. (See March 14, 2013 post). And while our brain's enduring capacity to retain and rely upon decoupled concepts is persistent, it might not have emerged in our modern cognitive niche if it had not emerged centuries ago, when we lacked substantial certain, objective, universal knowledge.
In Religion Explained, anthropologist Pascal Boyer poses a number of supernatural notions --- many linked to a religious idea associated with a particular religion (not just the predominant religions), others that he simply makes up --- and asks whether the listener (or reader) could imagine a religion being built up around each idea. They range from "Some people get old and then one day they stop breathing and die and that's that" to "There is one God! He knows everything we do" to "Dead people's souls wander about and sometimes visit people" to "When people die, their souls sometimes come back in another body" to "We worship this woman because she was the only one ever to conceive a child without having sex" to "We pray to this statue because it listens to our prayers and helps us get what we want" to "This mountain over there eats food and digests it. We give it food sacrifices every now and then, to make sure it stays in good health" to "The river over there is our guardian. It will flow upstream if it finds out that people have committed incest." Obviously, the first does not seem like a notion a religion would build up around, because it is part of our everyday experience. But we readily recognize a religious affiliation with the others.
Religious representations, says Boyer, are particular combinations of mental representations that satisfy two conditions. "First, the religious concepts violate certain expectations from ontological categories. Second, they preserve other expectations." A very frequent type of counterintuitive concept is produced by assuming that various objects or plants have some mental properties, that they can perceive what happens around them, understand what people say, remember what happened, and have intentions. A familiar example is that of people who pray to statues of gods, saints or heroes. Not just artifacts but natural living things too can be "animated" in this sense. Boyer reports, "The pygmies of the Ituri forest for instance say that the forest is a live thing, that it has a soul, that it "looks after" them and is particularly generous to sociable, friendly and honest individuals. These will catch plenty of game because the forest is pleased with their behavior."
What Boyer is getting at is quite consistent with Jeff Hawkins' model of the cortex. (See November 16, 2013 post). Boyer describes these as inference systems. We quickly make inferences about something we experience by drawing on higher-level categories --- object versus animal versus plant, for example. In Hawkins' model, our memory of these higher-level categories, "concepts," is retained in the higher cortical areas, while the levels below them, within the hierarchical structure of the cortex described in On Intelligence, hold our memories of narrower categories --- more specific objects, animals, and plants and their respective attributes. The hierarchical structure of the cortex described by Hawkins resembles the structure of a taxonomy. Taxonomy is a "powerful logical device that is intuitively used by humans in producing intuitive expectations about living things. People use the specific inference system of intuitive biological knowledge to add to the information given." But why then does the brain persistently retain memories of non-real --- supernatural --- concepts?
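To make the taxonomy analogy concrete, here is a minimal sketch of my own (not Boyer's or Hawkins' code) of how a hierarchical category tree "adds to the information given": attributes attached to a higher category are inherited by everything classified beneath it, so a single act of classification licenses a cascade of inferences we never directly observed. The categories and attributes are invented for illustration.

    # A toy taxonomy: attributes attach to categories, and anything classified
    # under a category inherits them --- a crude stand-in for the intuitive
    # biological inference system Boyer describes. Names are illustrative only.

    TAXONOMY = {
        "living thing": {"parent": None, "attributes": {"grows", "dies"}},
        "animal":       {"parent": "living thing", "attributes": {"moves", "eats"}},
        "bird":         {"parent": "animal", "attributes": {"lays eggs", "has feathers"}},
        "sparrow":      {"parent": "bird", "attributes": {"is small"}},
    }

    def inferred_attributes(category):
        """Walk up the tree, collecting every attribute inherited from ancestors."""
        attributes = set()
        node = category
        while node is not None:
            attributes |= TAXONOMY[node]["attributes"]
            node = TAXONOMY[node]["parent"]
        return attributes

    # Being told only "a sparrow is a bird" already licenses much more:
    print(inferred_attributes("sparrow"))
    # e.g. {'is small', 'lays eggs', 'has feathers', 'moves', 'eats', 'grows', 'dies'}

A supernatural concept, on Boyer's account, keeps most of this inheritance intact and violates only one or two expectations (a statue that listens, a god that never dies), which is part of why it remains so easy to reason about.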
Biological inferences are not always valid, Boyer admits. He refers to this as the enrichment of intuitive principles.
In describing his research about supernatural concepts, Boyer writes, "Our reasoning was that the present explanation of supernatural concepts, on the basis of what we know from anthropology, also implied precise psychological predictions. Cultural concepts are selected concepts. They are the ones that survive cycles of acquisition and communication in roughly similar forms. Now one simple condition of such relative preservation is that concepts are recalled. So [we] designed fairly coherent stories in which we inserted various new violations of ontological expectations as well as episodes that were compatible with ontological expectations. The difference in recall between the two kinds of information would give us an idea of the advantage of violations in individual memory. Naturally, we only used stories and concepts that were new to our subjects. If I told you a story about a character with seven-league boots or a talking wolf disguised as a grandmother, or a woman who gave birth to an incarnation of a god after a visit from an angel, you would certainly remember those themes; not just because they were in the story but also because they were familiar to start with. Our studies were supposed to track how memory stores or distorts or discards novel material." Boyer's research showed that "long-term recall (over months) shows that violations [of expectations] were much better preserved [in memory] than any other material." It is not strangeness per se that is preserved; it has to be a violation of an ontological category. So among the expected attributes of the ontological categories of living animals is that they die. Anthropomorphic gods who purportedly live forever violate expectations associated with our ontological category of living things; they are therefore more likely to be preserved in memory. Likewise for statues that talk and listen and respond to human speech or thinking. This is true across cultures; only the details layered on top of the ontological/conceptual violations vary.
So concepts that violate our expectations of reality stick in memory. This does not sound particularly surprising. A previous post, not surprisingly connected to a Jose Saramago publication, emphasized the relationship between storytelling and memory (February 26, 2013 post): "This is not the first time that a post in this blog has connected Saramago's work with the subject of memory. In The Notebook (September 28, 2010 post), the Nobelist created a memory bank in blog form. In the posting on his final novel, Cain (December 20, 2011 post) I remarked, "I also believe storytelling evolved in part to preserve our memories of things past. (See August 15, 2011 post). And storytelling, whether historical or fictional or both, enables the construction of both personal and social/group identity." And Saramago is a master at clutching collective memory --- history we call it --- and creating stories --- fiction we call it --- as in The Year of the Death of Ricardo Reis (June 28, 2011 post) and Baltasar and Blimunda (January 1, 2013 post)."
This finally brings us round to Saramago's collection of six early short stories, The Lives of Things. These are stories that stick in memory. Three of the short stories are constructed around things --- household objects, a centaur, and a chair that topples an oppressive dictator --- that violate our expectations of the ontological categories those things belong to. A few sentences from the story "Things" triggered my memory of Pascal Boyer's research:
"There was a time when the manufacturing process had reached such a degree of perfection and faults became so rare that the Government (G) decided there was little point in depriving members of the public (especially those in categories A, B, and C) of their civil right and pleasure to lodge complaints: a wise decision which could only benefit the manufacturing industry. So factories were instructed to lower their standards. This decision, however, could not be blamed for the poor quality of the goods which had been flooding the market for the last two months. As someone employed at the Department of Special Requisitions (DSR), he was in a good position to know that the Government had revoked these instructions more than a month ago and imposed new standards to ensure maximum quality. Without achieving any results. As far as he could remember, the incident with the door was certainly the most disturbing. It was not a case of some object or other, or some simple utensil, or even a piece of furniture, such as the settee in the entrance-hall, but of an item of imposing dimensions; although the settee was anything but small. However it formed part of the interior of furnishings, while the door was an integral part of the building, if not the most important part."
The "incident with the door" occurs as the story opens, virtually ordinary in the way Saramago describes it: "As it closed, the tall heavy door caught the back of the civil servant's right hand and left a deep scratch, red by scarcely bleeding." The civil servant decides to have this small wound treated at office infirmary and when explaining to the nurse how he came to be scratched, the nurse responds that this is the third such case that day. This incident presages a wider series of incidents that pits objects against living things. What unfolds is a revolution of objects against living people --- presumably in response to the Government's policy that manufactured goods could be made to lower quality standards; as the title of this volume implies, things (objects) come alive. Ordinary useful everyday objects begin to disappear: a pillar box, a jug, doors, stairs, utensils, clothes, and ultimately entire buildings and blocks of buildings. Nobody sees anyone taking or removing these objects; they seem to disappear of their own volition when no is watching, in the dark. This tale violates our expectations (inferences) associated with the concept of artifacts (objects). Things do not have volition, intentionality. But the tale is likely to be preserved in memory a bit longer than had Saramago not presented these "things" as a metaphor for people whom the government/society's power structure treated as objects that could be made to lower quality standards.
There are those who contend that a god or other supernatural being (omniscient, eternal) must exist and that our beliefs in these supernatural beings (as well as religion in general) are genetically hardwired. (See November 30, 2009 post). In contrast, I suggest that what is genetically hardwired is our brain's disposition to not discard, or to discard only with significant cognitive effort, concepts that violate our expectations of reality. Hence, beliefs in the ability of artifacts and imaginary unseen things to engage in behavior that we associate with living things simply stick, and they are rendered stickier when human cultural institutions reinforce those beliefs or concepts. This is beginning to sound like gene-culture co-evolution, or dual inheritance theory. (See September 12, 2012 and June 17, 2010 posts).
What I have not yet been able to understand or reconstruct is why the sensory inputs that constantly bombard the lower levels of the cortex during our encounters with the physical world do not overwhelm the sticky, expectation-violating concepts lodged in higher cortical areas. Boyer describes humans as information-hungry and in need of cooperation from other humans, including the information other humans provide; this cognitive niche is our milieu, our environment. We have a taste for gossip, information about other humans. Because humans are in need of cooperation, they have developed mechanisms for social exchange, resulting in the formation of groups and coalitional dynamics. But the human mind is not constrained to consider and represent only what is currently going on in the immediate environment. The human brain spends a considerable amount of time thinking about "what is not here and now." Fiction --- a Jose Saramago story --- is the most salient illustration, says Boyer. "One of the easiest things for human minds to do is to produce inferences on the basis of false premises." Thus "thoughts are decoupled from their standard inputs and outputs." (See June 28, 2011 post). "Decoupled cognition," writes Boyer, "is crucial to human cognition because we depend so much on information communicated by others and on cooperation with others. To evaluate information provided by others you must build some mental simulation of what they describe. Also, we would not carry out complex hunting expeditions, tool making, food gathering or social exchange without complex planning. The latter requires an evaluation of several different scenarios, each of which is based on nonfactual premises (What if we go down to the valley to gather fruit? What if there is none down there? What if other members of the group decide to go elsewhere? What if my neighbor steals my tools? and so on). Thinking about the past too requires decoupling. As psychologist Endel Tulving points out, episodic memory is a form of mental "time travel" allowing us to re-experience the effects of a particular scene on us. This is used in particular to assess other people's behavior, to reevaluate their character, to provide a new description of our own behavior and its consequences, and for similar purposes. . . The crucial point to remember about decoupled thoughts," says Boyer, "is that they run the inference systems in the same way as if the situation were actual. This is why we can produce coherent and useful inferences on the basis of imagined premises. . . Hypothetical scenarios suspend one aspect of actual situations but then run all inference systems in the same way as usual."
Thus fantasy --- which includes not only stories like those in Saramago's The Lives of Things, but also religious stories that violate our inferred expectations of reality --- succeeds to the extent that it activates the same inference systems of the brain that are used in navigating reality. "Religious concepts," concludes Boyer, "constitute salient cognitive artifacts whose successful cultural transmission depends upon the fact that they activate our inference systems in particular ways. The reason religion can become much more serious and important is that it activates the inference systems that are of vital importance to us: those that govern our most intense emotions, sharpen our interaction with other people, give us moral feelings, and organize social groups. . . . Religious concepts are supernatural concepts that matter; they are practical." Boyer summarizes: "What is 'important' to human beings, because of their evolutionary history, are the conditions of social interaction: who knows what, who is not aware of what, who did what with whom, when and what for. Imagining agents with that information is an illustration of mental processes driven by relevance. Such agents are not really necessary to explain anything, but they are so much easier to represent and so much richer in possible inferences that they enjoy a great advantage in cultural transmission." So the fantastic that violates ontological expectations is not just sticky in our memory for that reason; because of the facility with which the fantastic is culturally transmitted, its stickiness is enhanced as a matter of group memory and supports the human need for cooperation and social interaction. Will Saramago's stories of the fantastic --- for example, the Iberian peninsula breaking away from Europe and floating out to the Atlantic Ocean in The Stone Raft, a community's near-complete loss of sight in Blindness, death taking a holiday in Death With Interruptions --- enjoy a great advantage in cultural transmission? Probably not. These stories are labeled fiction, and we understand the stories as fiction. They likely exploit the same inference systems in the brain that we rely upon to experience and navigate the real world. They may be memorable, but cultural exchange --- save the temporal book club meeting --- is not likely to be constructed around these stories. This is likely a result of what John Searle refers to when he mentions "the sheer growth of certain, objective, universal knowledge." (See January 21, 2011 post). Cultural transmission of religious concepts began long before there was any volume of certain, objective, universal knowledge. Religious concepts now compete with objective, universal knowledge for our attention, as evidenced by the Dover, Pennsylvania litigation over teaching "creation science" in the schools. (See March 14, 2013 post). And while our brain's capacity to retain and rely upon decoupled concepts persists, it might not have emerged in our modern cognitive niche had it not emerged centuries ago, when we lacked any substantial body of certain, objective, universal knowledge.
Labels: artifacts, decoupling, inference systems, Jose Saramago, neocortex, Religion
Saturday, November 16, 2013
Jeff Hawkins, On Intelligence (2004)
The scenario described in the previous post (see October 26, 2013 post) of the mass of commuter humanity changing trains in a crowded subway station, silently cooperating to avoid colliding with one another as they cross paths, was intended to introduce the subject of humans reading and understanding the intentions of others as a foundation of human cooperative activity. But there is another characteristic of the human brain besides mindreading that supports this outcome: the human brain constantly anticipates, predicts the future. In this scenario, it predicts (perhaps not perfectly) the future behavior of others (more likely their immediate behavior), where they are directing their motion, where they are turning, whether they are accelerating or slowing down. Jeff Hawkins labels this intelligence: how the brain predicts behavior and future events is the subject of On Intelligence. Hawkins' interest is in understanding human intelligence to build a foundation for improved machine intelligence. The focus of his inquiry is the neocortex, the outermost layers of the human brain, and memory. What Hawkins offers up is the memory-prediction framework of intelligence. This differs from a computational framework.
Hawkins is not out to explain what makes us human (compare September 27, 2009 post). Nor is he out to explain human consciousness (compare April 8, 2011 post). But he does briefly touch on these matters. Previous posts in the blog address human imagination and creativity as a hallmark of what makes us "human" (see November 6, 2011 and May 22, 2011 posts), and Hawkins presents a model, discussed below, about the role of the neocortex in imagination, including imagination by false analogy. What he does not touch on is the role of the brain in generating and controlling emotions, the subject of Jaak Panksepp's research (see May 19, 2013 post), which naturally links to the origins of the moral and social aspects of what makes us human. (See November 21, 2012 post). So while Hawkins does connect the neocortex and thalamus within his memory-prediction framework (see below), he does not elaborate upon the role of the large thalamo-cortical system in the human brain that plays a substantial role in what makes us human and in the biological basis of human consciousness. (See April 8, 2011 post).
Prior posts identify the critical role of the hippocampus in memory formation, but ultimately long-term memory is shifted to the cerebral cortex through a process known as consolidation that occurs during sleep. (See September 10, 2013 and November 6, 2011 posts). As a prior post described: "Memories are distributed in the same parts of the brain that encoded the original experience. So sounds are found in the auditory cortex, taste and skin sensory memories are found in the somatosensory cortex, and sight in the visual cortex. But procedural --- "how to" --- memories are stored outside of the cortex, in the cerebellum and putamen, and fear memories are stored in the amygdala." Hawkins' thesis is that the cortex is critical to the human capacity to predict events because of the linkage to memory storage in the cortex. In focusing on the neocortex, Hawkins is looking at, evolutionarily speaking, the most recent adaptation in the development of animal neurological systems. The neocortex is unique to mammals, and the human neocortex is larger than the neocortex in any other mammal, facts that suggest the human neocortex is critical to understanding what makes us human. This is just the opposite of Jaak Panksepp's focus on the older parts of the brain, the brain stem and the midbrain. (See May 19, 2013 post). It is not as though Hawkins believes these older parts of the brain are irrelevant to human behavior. "First," Hawkins says, "the human mind is created not only by the neocortex but also by the emotional systems of the old brain and by the complexity of the human body. To be human you need all of your biological machinery, not just a cortex." But Hawkins is ultimately interested in the creation of an intelligent machine, and he believes that in the pursuit of that interest he needs to understand what makes humans "intelligent." He finds that understanding in how the neocortex is structured and proposes a model for how it operates to predict future events.
Hawkins' model is based on our current knowledge of the structure of the neocortex. That much is known. And here is a graphical representation of that structure:
[Figure: a hierarchy of four cortical areas for a single region, IT at the top and areas 4, 2, and 1 below, connected by arrows pointing in both directions; sensory input enters at the bottom.]
Each region of the neocortex is known to consist of four areas, labeled 1, 2, 4 and IT. The figure above represents those four areas, with IT at the top and 4, 2, and 1 below it, for one of the regions of the cortex (visual, auditory, somatosensory, motor). The visual cortex areas are labeled, from bottom to top, V1, V2, V4, and IT; the auditory cortex areas are labeled A1, A2, A4 and IT; the somatosensory (touch) cortex areas are labeled S1, S2, S4 and IT; and similarly for the motor cortex. The arrows point in both directions, indicating that information moves in both directions between the areas.
Neurons fire in a specific pattern in response to a specific sensory stimulus. For the exact same sensory stimulus, the same neurons will fire in the same pattern within this hierarchy. For a different sensory stimulus, different neurons will fire in a different pattern. The brain's capacity to recognize (predict) these patterns is at the heart of memory.
Recall the discussion in connection with Rodrigo Quiroga's book, Borges and Memory (September 10, 2013 post): "Each neuron in the retina responds to a particular point, and we can infer the outline of a cube starting from the activity of about thirty of them [retinal neurons]. Next the neurons in the primary visual cortex fire in response to oriented lines; fewer neurons are involved and yet the cube is more clearly seen. This information is received in turn by neurons in higher visual areas, which are triggered by more complex patterns --- for example, the angles defined by the crossing of two or three lines. . . As the processing of visual information progresses through different brain areas, the information represented by each neuron becomes more complex, and at the same time fewer neurons are needed to encode a given stimulus." The arrows representing sensory input from the retinal neurons are the arrows pointing to area V1 of the visual cortex. A particular pattern of neurons firing in V1 leads neurons in V2 to fire, and so on all the way up to IT. As just noted, in each higher layer "fewer neurons are involved." In V1, the cells are spatially specific, tiny feature-recognition cells that fire infrequently depending on which of the millions of retinal neurons are providing sensory input; at the higher IT, the cells are constantly firing, spatially non-specific, object-recognition cells. One way of thinking about this is that certain neurons in V1 fired in recognition of two ears, a nose, two eyes, and perhaps even more details like the texture of skin, facial hair, the color of hair; neurons in IT fired in recognition of an entire head or face. Cells in IT encode categories; Hawkins calls them "invariant representations." In philosophy, these invariant representations might be analogous to Plato's forms. It is here one would find neurons firing in response to things --- rocks, platypuses, your house, a song, Jennifer Aniston or Bill Clinton. (See September 10, 2013 post).
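As a rough illustration of this pooling from many feature-specific cells into a few invariant ones, here is a minimal sketch of my own (not code from On Intelligence or Borges and Memory): each level of a toy hierarchy becomes active only when the lower-level features it pools over are all present, so the representation gets smaller and more abstract as it rises. The feature names are invented.

    # Toy feature hierarchy: many low-level detections are pooled, level by level,
    # into fewer and more abstract units --- loosely mimicking V1 -> V2/V4 -> IT.
    # Feature names are invented for illustration.

    LOW_LEVEL = {"edge@left_eye", "edge@right_eye", "edge@nose", "edge@mouth",
                 "texture@skin", "texture@hair"}

    MID_LEVEL_RULES = {      # a mid-level unit fires when the features it pools over are present
        "left eye":  {"edge@left_eye"},
        "right eye": {"edge@right_eye"},
        "nose":      {"edge@nose"},
        "mouth":     {"edge@mouth"},
    }

    TOP_LEVEL_RULES = {      # the "IT-like" unit fires on a conjunction of parts
        "face": {"left eye", "right eye", "nose", "mouth"},
    }

    def activate(rules, active_below):
        """A unit becomes active when every lower-level unit it pools over is active."""
        return {name for name, parts in rules.items() if parts <= active_below}

    mid_level = activate(MID_LEVEL_RULES, LOW_LEVEL)
    top_level = activate(TOP_LEVEL_RULES, mid_level)
    print(mid_level)   # four part-level units
    print(top_level)   # {'face'} --- fewer, more invariant units at each step up

The point of the sketch is only the direction of the compression: many specific detections below, a single invariant representation at the top.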
Psychologists recognize the same phenomenon, although in different terms. Paul Bloom asserts that humans are "splitters" and "lumpers," but for the most part we are lumpers. Borges' Funes was a splitter. (See September 10, 2013 post). "Our minds have evolved," Bloom says, "to put things into categories and to ignore or downplay what makes these things distinct. Some categories are more obvious than others: all children understand the categories chairs and tigers; only scientists are comfortable with categories such as ungulates and quarks. What all categories share is that they capture a potential infinity of individuals under a single perspective. They lump." Bloom says, "We lump the world into categories so that we can learn." He adds, "A perfect memory, one that treats each experience as a distinct thing-in-itself, is useless. The whole point of storing the past is to make sense of the present and to plan for the future. Without categories [or concepts], everything is perfectly different from everything else, and nothing can be generalized or learned."
The neocortex consists of six horizontal layers of cells (I-VI), together roughly 2 mm thick (shown below for area V1 of the visual cortex). The cells within each layer are aligned in columns perpendicular to the layers. The layers in each column are connected via axons, making synapses along the way. "Columns do not stand out like neat little pillars," explains Hawkins, "nothing in the cortex is that simple, but their existence can be inferred from several lines of evidence." Vertically aligned cells tend to become active for the same stimulus.
Again, as was the case with the different areas of a region of the cortex, information is moving both up and down the layers of a given area. Inputs move up the columns; memories move down the columns. "When you begin to realize that the cortex's core function is to make predictions, then you have to put feedback into the model; the brain has to send information flowing back toward the region that first receives inputs. Prediction requires a comparison between what is happening and what you expect to happen. What is actually happening flows up, and what you expect to happen flows down."
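One crude way to picture this two-way traffic, under my own simplifying assumptions rather than anything in Hawkins' book: the stored expectation flows down, the sensed pattern flows up, and "prediction" amounts to comparing the two and passing only the mismatch upward. The patterns below are invented.

    # Minimal sketch of "what is actually happening flows up, what you expect flows down":
    # a region compares the incoming pattern against its stored expectation and
    # reports only the mismatch upward. Entirely illustrative.

    def compare_to_expectation(expected, incoming):
        """Return the parts of the incoming pattern that violate the expectation."""
        return [item for exp, item in zip(expected, incoming) if exp != item]

    expectation = ["door", "hallway", "stairs"]      # what memory sends down
    observation = ["door", "hallway", "elevator"]    # what the senses send up

    surprise = compare_to_expectation(expectation, observation)
    print(surprise)   # ['elevator'] --- only the unexpected element demands attention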
Memories are stored in this hierarchical structure. "The design of the cortex and the method by which it learns naturally discover the hierarchical relationships in the world. You are not born with knowledge of language, houses, or music. The cortex has a clever learning algorithm that naturally finds whatever hierarchical structure exists and captures it. When structure is absent, we are thrown into confusion, even chaos. *** You can only experience a subset of the world at any moment in time. You can only be in one room of your home, looking in one direction. Because of the hierarchy of the cortex, you are able to know that you are at home, in your living room, looking at a window, even though at that moment your eyes happened to be fixated on a window latch. Higher regions of cortex are maintaining a representation of your home, while lower regions are representing rooms, and still lower regions are looking at a window. Similarly, the hierarchy allows you to know you are listening to both a song and an album of music, even though at any point in time you are hearing only one note, which on its own tells you next to nothing." Critical to this capability is the brain's ability to process sequences and recognize patterns of sequences. "Information flowing into the brain naturally arrives as a sequence of patterns." When the patterns are repeated through a repeated firing of a particular combination of neurons, the cortical region forms a persistent representation, or memory, for the sequence. In learning sequences, we form invariant representations of objects. When certain input patterns repeat over and over, cortical regions "know that those experiences are caused by a real object in the world."
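To illustrate the idea that a repeatedly encountered sequence of patterns eventually becomes a single stored representation, here is a minimal sketch of my own devising (not from the book): a toy region counts how often a sequence recurs and, past an arbitrary threshold, treats it as one named "object."

    # Toy sequence learner: when the same sequence of input patterns recurs often
    # enough, the region assigns it a single name --- a stand-in for forming an
    # invariant representation. The threshold and naming scheme are arbitrary.

    from collections import Counter

    class SequenceRegion:
        def __init__(self, threshold=3):
            self.counts = Counter()
            self.names = {}
            self.threshold = threshold

        def observe(self, sequence):
            """Record one presentation of a sequence; return its name once it is learned."""
            key = tuple(sequence)
            self.counts[key] += 1
            if self.counts[key] >= self.threshold and key not in self.names:
                self.names[key] = f"object_{len(self.names)}"   # new higher-level token
            return self.names.get(key)   # None until the sequence has recurred enough

    region = SequenceRegion()
    for _ in range(3):
        label = region.observe(["nose", "eye", "eye", "mouth"])
    print(label)   # 'object_0' --- the repeated sequence now feeds upward as one pattern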
One of the most important attributes of Hawkins' model is a concept called auto-associative memory. This is what enables the brain to recall something by sensing only a portion of that memory. In the case of the brain, that input may belong to an entirely different category than what is recalled. Auto-associative memory is part of pattern recognition: the cortex does not need to see the entire pattern in order to recognize the larger pattern. The second feature of auto-associative memory, says Hawkins, is that an auto-associative memory can be designed to store sequences of patterns, or temporal patterns. He says this is accomplished by adding a time-delay to feedback.
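Auto-associative recall can be sketched very simply, under assumptions of my own (this is not Hawkins' implementation, and real models such as Hopfield networks work quite differently): stored patterns are compared against a partial cue, and the stored pattern with the greatest overlap is returned whole.

    # Minimal auto-associative recall: compare a partial cue against stored
    # patterns and return the complete pattern that overlaps with it the most.
    # Purely illustrative; the stored patterns are invented.

    STORED_PATTERNS = [
        {"melody_bar_1", "melody_bar_2", "melody_bar_3", "melody_bar_4"},
        {"eyes", "nose", "mouth", "ears"},
        {"title", "first_line", "last_line"},
    ]

    def recall(cue):
        """Return the stored pattern sharing the most elements with the cue."""
        return max(STORED_PATTERNS, key=lambda pattern: len(pattern & cue))

    print(recall({"melody_bar_2"}))
    # prints the full four-bar melody pattern (in arbitrary set order),
    # recovered from a single fragment of it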
The cortex is linked to the thalamus. Hawkins says that one of the six layers of cells (layer 5, second from the bottom in a given cortical area) within the neocortex is wired to the thalamus, which in turn sends information back to layer 1 (the highest layer in a given cortical area), acting as delayed feedback important to learning sequences and to predicting. The thalamus is selective in what it transmits back to the cortex because the number of neurons going to the thalamus exceeds the number of neurons coming back to the cortex by a factor of ten. This requires an understanding of reentrant activity and recursion, which need not be explained here. But layer 1 (at the top of a given cortical area) is also receiving information from higher cortical areas (e.g., in the case of the visual cortex, layer 1 in V4 from layer 6 in IT; layer 1 in V2 from layer 6 in V4, etc.). So layer 1 now has two inputs: from the thalamus and from the higher cortical area. Layer 1, Hawkins emphasizes, now carries "much of the information we need to predict when a column should be active. Using these two signals in layer 1, a region of cortex can learn and recall multiple sequences of patterns."
Cortical regions "store" sequences of patterns when synapses are strengthened by repeated firing. "If this occurs often enough, the layer 1 synapses [at the top of the region] become strong enough to make the cells in layers 2, 3, and 5 [below] fire, even when a layer 4 cell hasn't fired--- meaning parts of the column can become active without receiving input from a lower region of the cortex. In this way, cells in layers 2, 3, and 5 learn to 'anticipate' when they should fire based on the pattern in layer 1. Before learning, the column can only come active if driven by a layer 4 cell. After learning, the column can become partially active via memory. When a column becomes active via layer1 synapses, it is anticipating being driven from below. This is prediction. If the column could speak, it would say, 'When I have been active in the past, this particular set of my layer 1 synapses have been active. So when I see this particular set again, I will fire in anticipation.' Finally, layer 6 cells can send their output back into layer 4 cells of their own column. Hawkins says that when they do, our predictions become the input. This is what we do, he adds, when we daydream, think, imagine. It allows us to see the consequences of our own predictions, noting that we do this when we plan the future, rehearse speeches, and worry about future events. In Hawkins' model, this has to be part of what Michael Gazzaniga refers to as our decoupling mechanism. (See May 22, 2011 post)
This is Hawkins' model of the brain's capacity to predict --- intelligence, if you will. Of course, it is more complex than I have regurgitated here. "If a region of cortex finds it can reliably and predictably move among these input patterns using a series of physical motions (such as saccades of the eyes or fondling with the fingers) and can predict them accurately as they unfold in time (such as the sounds comprising a song or the spoken word), the brain interprets these as having a causal relationship. The odds of numerous input patterns occurring in the same relation over and over again by sheer coincidence are vanishingly small. A predictable sequence of patterns must be part of a larger object that really exists. So reliable predictability is an ironclad way of knowing that different events in the world are physically tied together. Every face has eyes, ears, mouth and nose. If the brain sees an eye, then saccades and sees another eye, then saccades and sees a mouth, it can feel certain it is seeing a face."
This begins at a very early age in our post-natal development. The two basic components of learning, explains Hawkins, are forming the classifications of patterns and building sequences. "The basics of forming sequences is to group patterns together that are part of the same object. One way to do this is by grouping patterns that occur contiguously in time. If a child holds a toy in her hand and slowly moves it, her brain can safely assume that the image on her retina is of the same object moment to moment, and therefore the changing set of patterns can be grouped together. At other times, you need outside instruction to help you decide which patterns belong together. To learn that apples and bananas are fruits, but carrots and celery are not, requires a teacher to guide you to group these items as fruits. Either way, your brain slowly builds sequences of patterns that belong together. But as a region of cortex builds sequences, the input to the next region changes. The input changes from representing mostly individual patterns to representing groups of patterns. The input to a region changes from notes to melodies, from letters to words, from noses to faces, and so on. Where before a region built sequences of letters, it now builds sequences of words. The unexpected result of this learning process is that, during repetitive learning, representations of objects move down the cortical hierarchy. During the early years of your life, your memories of the world first form in higher regions of cortex, but as you learn they are re-formed in lower and lower parts of the cortical hierarchy."
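The "group patterns that occur contiguously in time" idea can be illustrated with a small sketch of my own (not from the book): observations that arrive within the same short time window get lumped together as one object, while a gap in time starts a new group. The stream and the window size are invented.

    # Toy temporal grouping: patterns observed within the same short time window
    # are assumed to belong to the same object and are grouped together.
    # The input stream and window size are invented for illustration.

    # (timestamp in seconds, observed pattern)
    stream = [(0.0, "red blur"), (0.2, "rounded edge"), (0.4, "smooth texture"),
              (5.0, "barking"), (5.3, "fur"), (5.5, "wagging tail")]

    WINDOW = 1.0  # observations closer together than this are treated as one object

    def group_by_time(observations, window=WINDOW):
        groups, current = [], [observations[0]]
        for prev, nxt in zip(observations, observations[1:]):
            if nxt[0] - prev[0] <= window:
                current.append(nxt)
            else:
                groups.append([pattern for _, pattern in current])
                current = [nxt]
        groups.append([pattern for _, pattern in current])
        return groups

    print(group_by_time(stream))
    # [['red blur', 'rounded edge', 'smooth texture'], ['barking', 'fur', 'wagging tail']]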
Michael Shermer (see June 12, 2011 post) made the same point in a slightly different way when he referred to "patternicity." According to Shermer, as sensory data flows into the brain, there is a "tendency" for the brain to begin looking for meaningful patterns in both meaningful and meaningless data. He calls this process patternicity. Shermer asserts that patternicity is premised on "association learning," which is "fundamental to all animal behavior from C. elegans (roundworm) to Homo sapiens." Because our survival may depend on split-second decisions in which there is no time to research and discover underlying facts about every threat or opportunity that faces us, evolution set the brain's default mode in the position of assuming that all patterns are real, says Shermer. A cost associated with this behavior is that the brain may lump causal associations (e.g. wind causes plants to rustle) with non-causal associations (e.g. there is an unseen agent in the plants). In this circumstance, superstition --- incorrect causal associations --- is born. "In this sense, patternicities such as superstition and magical thinking are not so much errors in cognition as they are the natural processes of a learning brain." Religion, conspiracy theories and political beliefs fit this model as well.
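Shermer's point that association learning will happily encode non-causal pairings can be shown with a small sketch of my own construction, using invented events: co-occurrence alone drives the learned association, so a spurious pairing is learned just as readily as a real one.

    # Toy association learner in the "patternicity" spirit: any two events that
    # co-occur often enough become associated, whether or not the link is causal.
    # The event stream and threshold are invented for illustration.

    from collections import Counter
    from itertools import combinations

    episodes = [
        {"wind", "rustling plants"},             # causal pairing
        {"wind", "rustling plants"},
        {"wind", "rustling plants", "shadow"},   # "shadow" just happens to be present
        {"rustling plants", "shadow"},
        {"rustling plants", "shadow"},
    ]

    pair_counts = Counter()
    for episode in episodes:
        for pair in combinations(sorted(episode), 2):
            pair_counts[pair] += 1

    THRESHOLD = 3  # arbitrary: co-occurring this often counts as "a pattern"
    beliefs = [pair for pair, n in pair_counts.items() if n >= THRESHOLD]
    print(beliefs)
    # [('rustling plants', 'wind'), ('rustling plants', 'shadow')]
    # --- the spurious "shadow" association is learned right alongside the causal one.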
My surmise is that beliefs and concepts rooted in false analogy become stored in memory in higher cortical areas when they are reinforced over and over through cultural transmission. Something like this may be what Edward O. Wilson means when he refers to epigenetic rules and culture. "Human nature," Wilson says, is the "inherited regularities of mental development common to our species. They are epigenetic rules, which evolved by the interaction of genetic and cultural evolution that occurred over a long period in deep prehistory. These rules are the genetic biases in the way our senses perceive the world, the symbolic coding by which we represent the world, the options we automatically open to ourselves, and the responses we find easiest and most rewarding to make. . ." (See April 8, 2013 post). Storytelling --- the creation of works of fiction --- may be important to making and reinforcing memories. (See August 15, 2011 post). Thus, when a prediction based on a false analogy is violated and one would normally recognize an error, the error message is transmitted back up to the higher cortical areas for a check. But because the belief based in false analogy resides there in the higher areas, the false analogy may never be corrected. The false analogy becomes an invariant representation. Paul Bloom explains in Descartes' Baby just how these concepts and beliefs can be rooted in our brains at a very early age, and as Hawkins describes above, memories formed earlier in life form in the higher regions of the cortex. These false analogies can be difficult to dislodge.
Hawkins has been helpful in providing a model of the cortex as the part of the brain devoted to our capacity to predict. When tied into the models of other parts of the brain relating to consciousness and emotion discussed elsewhere in this blog, we begin to assemble the whole human brain and to appreciate what makes us "human." (See September 27, 2009 post discussing Michael Gazzaniga's reference to Jeff Hawkins). While Hawkins' interest lies in the intelligent machine, he does not believe a machine can ever become "human."
And finally, Hawkins confirms why I have held to my instinct that John Searle's Chinese Room argument was intuitively correct. The man in the Chinese Room must have been human.
Hawkins is not out to explain what makes us human (compare September 27, 2009 post). Nor is he out to explain human consciousness (compare April 8, 2011 post). But he does briefly touch on these matters. Previous posts in the blog address human imagination and creativity as a hallmark of what makes us "human," (see November 6, 2011 and May 22, 2011 post), and Hawkins presents a model discussed below about the role of the neocortex in imagination, including imagination by false analogy. What he does not touch on is the role of the brain in generating and controlling emotions, the subject of Jaak Panksepp's research (see May 19, 2013 post), which naturally links to the origins of the moral and social aspects of what makes us human. (See November 21, 2012 post). So while Hawkins does connect the neocortex and thalamus within his memory-prediction framework (see below), he does not elaborate upon the role of the large thalamo-cortical system that resides in the human brain that plays a substantial role in what makes us human and the biological basis of human consciousness. (See April 8, 2011 post).
Prior posts identify the critical role of the hippocampus in memory formation, but ultimately long-term memory is shifted to the cerebral cortex through a process known as consolidation that occurs during sleep. (See September 10, 2013 and November 6, 2011 posts). As a prior post described: "Memories are distributed in the same parts of the brain that encoded the original experience. So sounds are found in the auditory cortex, taste and skin sensory memories are found in the somatosensory cortex, and sight in the visual cortex. But procedural -- "how to" --- memories are stored outside of the cortex, in the cerebellum and putamen, and fear memories are stored in the amygdala." Hawkins' thesis is that the cortex is critical to human capacity to predict events because of the linkage to memory storage in the cortex. In focusing on the neocortex, Hawkins is looking at, evolutionarily speaking, the most recent adaptation in the development of animal neurological systems. The neocortex is unique to mammals, and the human neocortex is larger than the neocortex in any other mammal, facts that suggest the human neocortex is critical to understanding what makes us human. This is just the opposite of Jaak Panksepp's focus on the older parts of the brain, the brain stem and the midbrain. (See May 19, 2013 post). It is not as though Hawkins believes these older parts of the brain are irrelevant to human behavior. "First," Hawkins says, "the human mind is created not only by the neocortex but also by the emotional systems of the old brain and by the complexity of the human body. To be human you need all of your biological machinery, not just a cortex." But Hawkins is ultimately interested in the creation of an intelligent machine, and he believes that in the pursuit of that interest he needs to understand what makes humans "intelligent." He finds that understanding in how the neocortex is structured and proposes a model for how it operates to predict future events.
Hawkins' model is based on our current knowledge of the structure of the neocortex. That much is known. And here is a graphical representation of that structure:
SENSORY INPUT
Each region of the neocortex is known to consist of four areas, labeled 1, 2, 4 and IT. The graph above represents those four layers, with IT at the top and 4, 2, and 1 below it for one of the regions of the cortex (visual, auditory, somatosensory, motor). The visual cortex layers are labeled, from bottom to top, V1, V2, V4, and IT; the auditory cortex layers labeled A1, A2, A4 and IT; the somatosensory (touch) cortex layers labeled S1, S2, S4 and IT, and similarly for the motor cortex. The arrows are pointed in both directions, indicating that information moves in both directions between the areas.
Neurons fire in a specific pattern in response to a specific sensory stimulus. For the exact same sensory stimulus, the same neurons will fire in the same pattern within this hierarchy. For a different sensory stimulus, different neurons will fire in a pattern. The brain's capacity to recognize (predict) these patterns is at the heart of memory.
Recall the discussion in connection with Rodrigo Quiroges' book, Borges and Memory (September 10, 2013 post): "Each neuron in the retina responds to a particular point, and we can infer the outline of a cube starting from the activity of about thirty of them [retinal neurons]. Next the neurons in the primary visual cortex fire in response to oriented lines; fewer neurons are involved and yet the cube is more clearly seen. This information is received in turn by neurons in higher visual areas, which are triggered by more complex patterns --- for example, the angles defined by the crossing of two or three lines. . . As the processing of visual information progresses through different brain areas, the information represented by each neuron becomes more complex, and at the same time fewer neurons are needed to encode a given stimulus." The arrows representing sensory input from the retinal neurons are the arrows pointing to area V1 of the visual cortex. A particular pattern of neurons firing in V1 leads neurons in V2 to fire, and all the way up to IT. As just noted, in each higher layer "fewer neurons are involved." In V1, the cells are spatially specific, tiny feature-recognition cells that infrequently fire depending on which of the millions of retinal neurons are providing sensory input; at the higher IT, the cells are constantly firing, spatially non-specific, object recognition cells. One of way of thinking about this is that certain neurons in V1 fired in recognition of two ears, a nose, two eyes, and perhaps even more details like the texture of skin, facial hair, the color of hair; neurons in IT fired in recognition of an entire head or face. Cells in the IT encode for categories; Hawkins calls them "invariant representations." In philosophy, these invariant representations might be analogous to Plato's forms. It is here one would find neurons firing in response to things --- rocks, platypuses, your house, a song, Jennifer Aniston or Bill Clinton. (See September 10, 2013 post).
Psychologists recognize the same phenomenon, although in different terms. Paul Bloom asserts that humans are "splitters" and "lumpers," but for the most part we are lumpers. Borges' Funes was a splitter. (See September 10, 2013 post). "Our minds have evolved," Bloom says, "to put things into categories and to ignore or downplay what makes these things distinct. Some categories are more obvious than others: all children understand the categories chairs and tigers; only scientists are comfortable with the categories such as ungulates and quarks. What all categories share is that they capture a potential infinity of individuals under a single perspective. They lump." Bloom says, "We lump the world into categories so that we can learn." He adds, "A perfect memory, one that treats each experience as a distinct thing-in-itself, is useless. The whole point of storing the past is to make sense of the present and to plan for the future. Without categories [or concepts], everything is perfectly different from everything else, and nothing can be generalized or learned."
The neocortex consists of six horizontal layers of cells (I-VI) each roughly 2mm thick (shown below for area V1 of the visual cortex). The cells within each layer are aligned in columns perpendicular to the layers. The layers in each column are connected via axons, making synapses along the way. "Columns do not stand out like neat little pillars," explains Hawkins, "nothing in the cortex is that simple, but their existence can be inferred from several lines of evidence." Vertically aligned cells tend to become active for the same stimulus.
Again, as was the case with the different areas of a region of the cortex, information is moving both up and down the layers of a given area. Inputs move up the columns; memories move down the columns. "When you begin to realize that the cortex's core function is to make predictions, then you have to put feedback into the model; the brain has to send information flowing back toward the region that first receives inputs. Prediction requires a comparison between what is happening and what you expect to happen. What is actually happening flows up, and what you expect to happen flows down."
Memories are stored in this hierarchical structure. "The design of the cortex and the method by which it learns naturally discover the hierarchical relationships in the world. You are not born with knowledge of language, houses, or music. The cortex has a clever learning algorithm that naturally finds whatever hierarchical structure exists and captures it. When structure is absent, we are thrown into confusion, even chaos. *** You can only experience a subset of the world at any moment in time. You can only be in one room of your home, looking in one direction. Because of the hierarchy of the cortex, you are able to know that you are at home, in your living room, looking at a window, even though at that moment your eyes happened to be fixated on a window latch. Higher regions of cortex are maintaining a representation of your home, while lower regions are representing rooms, and still lower regions are looking at window. Similarly, the hierarchy allows you to know you are listening to both a song and album of music, even though at any point in time you are hearing only one note, which on its own tells you next to nothing." Critical to this capability is the brain's ability to process sequences and recognize patterns of sequences. "Information flowing in to the brain naturally arrives as a sequence of patterns." When the patterns are repeated through a repeated firing of a particular combination of neurons, the cortical region forms a persistent representation, or memory, for the sequence. In learning sequences, we form invariant representations of objects. When certain input patterns repeat over and over, cortical regions "know that those experiences are caused by a real object in the world."
One of the most important attributes of Hawkins' model is a concept called auto-associative memory. This is what enables the brain to recall something by sensing only a portion of that memory. In the case of the brain, that input may belong to an entirely different category than what is recalled. Auto-associative memory is part of pattern recognition: the cortex does not need to see the entire pattern in order to recognize the larger pattern. The second feature of auto-associative memory, says Hawkins, is that an auto-associative memory can be designed to store sequences of patterns, or temporal patterns. He says this is accomplished by adding a time-delay to feedback.
The cortex is linked to the thalamus. Hawkins says that one of the six layers of cells (L5 - second from the bottom in a given cortical area) within the neocortex is wired to the thalamus, which in turn sends information back to Layer I (the highest layer in a given cortical layer), acting as a delayed feedback important to learning sequences and to predicting. The thalamus is selective in what it transmits back to the cortex because the number of neurons going to the thalamus exceeds the number of neurons back to the cortex by a factor of ten. This requires an understanding of reentrant activity and recursion, which need not be explained here. But Layer 1 (at the top of a given cortical area) is also receiving information from higher cortical areas (e.g. in the case of the visual cortex, layer 1 in V4 from layer 6 in IT; layer 1 in V2 from layer 6 in V4, etc.) So layer 1 now has two inputs: from the thalamus and from the higher cortical area. Layer 1, Hawkins emphasizes, now carries "much of the information we need to predict when a column should be active. Using these two signals in layer 1, a region of cortex can learn and recall multiple sequences of patterns."
Cortical regions "store" sequences of patterns when synapses are strengthened by repeated firing. "If this occurs often enough, the layer 1 synapses [at the top of the region] become strong enough to make the cells in layers 2, 3, and 5 [below] fire, even when a layer 4 cell hasn't fired--- meaning parts of the column can become active without receiving input from a lower region of the cortex. In this way, cells in layers 2, 3, and 5 learn to 'anticipate' when they should fire based on the pattern in layer 1. Before learning, the column can only come active if driven by a layer 4 cell. After learning, the column can become partially active via memory. When a column becomes active via layer1 synapses, it is anticipating being driven from below. This is prediction. If the column could speak, it would say, 'When I have been active in the past, this particular set of my layer 1 synapses have been active. So when I see this particular set again, I will fire in anticipation.' Finally, layer 6 cells can send their output back into layer 4 cells of their own column. Hawkins says that when they do, our predictions become the input. This is what we do, he adds, when we daydream, think, imagine. It allows us to see the consequences of our own predictions, noting that we do this when we plan the future, rehearse speeches, and worry about future events. In Hawkins' model, this has to be part of what Michael Gazzaniga refers to as our decoupling mechanism. (See May 22, 2011 post)
This is Hawkins' model of the brain's capacity to predict, intelligence if you will. Of course, it is a more complex than I have regurgitated here. "If a region of cortex finds it can reliably and predictably move among these input patterns using a series of physical motions (such as saccades of the eyes or fondling with the fingers) and can predict them accurately as they unfold in time (such as the sounds comprising a song or the spoken word), the brain interprets these as having a causal relationship. The odds of numerous input patters occurring in the same relation over and over again by sheer coincidence are vanishingly small. A predictable sequence of patterns must be part of a larger object that really exists. So reliable predictability is an ironclad way of knowing that different events in the world are physically tied together. Every face has eyes, ears, mouth and nose. If the brain sees an eye, the saccades and sees another eye, then saccades and sees a mouth, it can feel certain it is seeing a face."
This begins at a very early age in our post-natal development. The two basic components of learning, explains Hawkins, are forming the classifications of patterns and building sequences. "The basics of forming sequences is to group patterns together that are part of the same object. One way to do this is by grouping patterns that occur contiguously in time. If a child holds a toy in her hand and slowly moves it, her brain can safely assume that the image on her retina is of the same object moment to moment, and therefore the changing set of patterns can be grouped together. At other times, you need outside instruction to help you decide which patterns belong together. To learn that apples and bananas are fruits, but carrots and celery are not, requires a teacher to guide you to group these items as fruits. Either way, your brain slowly builds sequences of patterns that belong together. But as a region of cortex builds sequences, the input to the next region changes. The input changes from representing mostly individual patterns to representing groups of patterns. The input to a region changes from notes to melodies, from letters to words, from noses to faces, and so on. Where before a region built sequences of letters, it now builds sequences of words, The unexpected result of this learning process is that, during repetitive learning, representations of objects move down the cortical hierarchy. During the early years of your life, your memories of the world first form in higher regions of cortex, but as you learn they are re-formed in lower and lower parts of the cortical hierarchy."
Michael Shermer (see June 12, 2011 post) made the same point in a slightly different way when he referred to "patternicity." According to Shermer, as sensory data flows into the brain, there is a "tendency" for the brain to begin looking for meaningful patterns in both meaningful and meaningless data. He calls this process patternicity. Shermer asserts that patternicity is premised on "association learning," which is "fundamental to all animal behavior from which is "fundamental to all animal behavior from C. elegans (roundworm) to homo sapiens." Because our survival may depend on split-second decisions in which there is no time to research and discover underlying facts about every threat or opportunity that faces us, evolution set the brain's default mode in the position of assuming that all patterns are real, says Shermer. A cost associated with this behavior is that the brain may lump causal associations (e.g. wind causes plants to rustle) with non-causal associations (e.g. there is an unseen agent in the plants). In this circumstance, superstition --- incorrect causal associations --- is born. "In this sense, patternicities such as superstition and magical thinking are not so much errors in cognition as they are the natural processes of a learning brain." Religion, conspiracy theories and political beliefs fit this model as well.
My surmise is that beliefs and concepts rooted in false analogy become stored in memory in higher cortical areas when they are reinforced over and over through cultural transmission. Something like this may be what Edward O. Wilson means when he refers to epigenetic rules and culture. "Human nature," Wilson says, is the "inherited regularities of mental development common to our species. They are epigenetic rules, which evolved by the interaction of genetic and cultural evolution that occurred over a long period in deep prehistory. These rules are the genetic biases in the way our senses perceive the world, the symbolic coding by which we represent the world, the options we automatically open to ourselves, and the responses we find easiest and most rewarding to make. . ." (See April 8, 2013 post). Storytelling --- the creation of works of fiction --- may be important to making and reinforcing memories. (See August 15, 2011 post). Thus, when a prediction based on a false analogy is violated and one would normally recognize an error, the error message is transmitted back up to the higher cortical areas for a check. But because the belief based in false analogy resides there in the higher areas, the false analogy may never be corrected. The false analogy becomes an invariant representation. Paul Bloom explains in Descartes' Baby just how these concepts and beliefs can be rooted in our brains at a very early age, and as Hawkins describes above, memories formed earlier in life form in the higher regions of the cortex. These false analogies can be difficult to dislodge.
Hawkins has been helpful in providing a model of the cortex as the part of the brain devoted to our capacity to predict. When tied into the models of other parts of the brain relating to consciousness and emotion discussed elsewhere in this blog, we begin to assemble the whole human brain and begin an appreciation of what makes us "human." (See September 27, 2009 post discussing Michael Gazzaniga's reference to Jeff Hawkins). While Hawkins' interest lies in the intelligent machine, he does not believe a machine can ever become "human."
And finally, Hawkins confirms why I have held to my instinct that John Searle's Chinese Room argument was intuitively correct. The man in the Chinese Room must have been human.
Saturday, October 26, 2013
Michael Tomasello, The Origins of Human Communication (2008)
Every work morning I board the subway to travel to my office, and approximately halfway on this journey I change trains. Changing trains entails exiting a train into a crowd of people who are looking to board the train I am leaving, then walking some 50-70 steps to a staircase while passing many people who are, like myself, leaving the train I just left or heading in the opposite direction, toward it. I descend stairs to another platform and then walk roughly 20 steps on the platform to wait for an oncoming train that will take me to my destination. During this change of trains, I probably come within a 5-10 foot radius of 100 or more persons. I do not know these people. Most people are not talking. A few I recognize as having seen previously on this journey, but still I don't know them. I don't talk with them. Some I see only out of my peripheral vision. The amazing part of this brief, everyday journey navigating through a mass of people is that I almost always avoid any physical contact with them, and the same is true for most of them as well. It is easy to think that each individual is merely moving autonomously toward their individual goal, but the reality is that each individual is acting cooperatively with the others to ensure that the others are able to move toward their individual goals by not colliding with (with modest, likely accidental, exceptions) or inhibiting the others as they move. It is like a dance. Occasionally someone crosses diagonally in front of me, but I avoid a collision by slowing down or moving sideways. Avoiding contact with and staying out of the way of other moving persons is a shared intention; in the case of humans, cultural rules have played a key role in enabling the individual to realize their shared intention: e.g., stay on the right of oncoming persons; follow the person ahead of you. But the individuals are not merely blindly following rules; they are watching the faces and body movements of others and reading their minds.
Place a camera high above this subway station and make a video of the masses transitioning between trains. The flow of people almost seems choreographed; it is not as chaotic as one might think it could be. Now watch this video of ants marching. The movement of ants seems just as orderly as my video of humans passing in the subway station. But it is different. Ants communicate differently, relying on chemicals and touch. They are acting automatically, inflexibly. Chemotaxis (see May 19, 2013 post) is at play here. In contrast, humans read the intentions of other humans in their facial expressions, gaze, and motions (see July 16, 2010 and September 18, 2009 posts), even when no words are spoken, and this is nearly unique in the animal kingdom. Apes are known to understand intentions of others, and apes are known to experience empathy. (See November 9, 2010 post). Mirror neurons, which some argue enable us to feel what others are feeling or experiencing, were first discovered in monkeys. (See September 18, 2009 post). What apes do not do --- and humans do, according to Michael Tomasello --- is share intentions and goals with others.
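The contrast can be made concrete with a toy sketch of my own (it is not from Tomasello): each walker predicts where nearby walkers will be a moment from now --- a crude stand-in for reading intentions --- and slows down and steps to its own right when a collision is anticipated, much like the conventions described above:

# Toy sketch, my own construction -- not a model from Tomasello's book.
import math

class Walker:
    """A toy pedestrian that anticipates others' positions and yields to avoid collisions."""

    def __init__(self, x, y, vx, vy):
        self.x, self.y, self.vx, self.vy = x, y, vx, vy

    def predicted(self, dt=1.0):
        # Where this walker will be a moment from now if nothing changes.
        return (self.x + self.vx * dt, self.y + self.vy * dt)

    def step(self, others, dt=1.0, radius=1.0):
        px, py = self.predicted(dt)
        for other in others:
            ox, oy = other.predicted(dt)
            if math.hypot(px - ox, py - oy) < radius:
                # Anticipated collision: slow down and step to this walker's
                # own right (the "stay right of oncoming persons" convention).
                speed = math.hypot(self.vx, self.vy) or 1.0
                self.x += (self.vy / speed) * 0.3
                self.y += (-self.vx / speed) * 0.3
                self.vx *= 0.5
                self.vy *= 0.5
                break
        self.x += self.vx * dt
        self.y += self.vy * dt

# Two walkers on a head-on collision course sidestep and pass without touching.
east = Walker(0.0, 0.0, 1.0, 0.0)
west = Walker(5.0, 0.0, -1.0, 0.0)
for _ in range(5):
    east.step([west])
    west.step([east])
print((round(east.x, 2), round(east.y, 2)), (round(west.x, 2), round(west.y, 2)))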
Mindreading introduces us to theory of mind. (See November 21, 2012 and June 12, 2011 posts). It is not a "theory" so much as it is a state of awareness: our ability to attribute mental states to others --- mindreading. In my little "everyday vignette" just described, our theory of mind is almost unconscious, and I would submit that this ability is close to unique, if not unique, in the animal kingdom. If apes enjoy a theory of mind, it is certainly not as well-developed as that of humans. It plays out in nearly every other human scenario imaginable because we are social animals. (See November 21, 2012 post). Our theory of mind undoubtedly varies among these scenarios: if, for example, our attentiveness to something specific about another person is heightened, there is probably a heightened attentiveness to that person's specific mental state, whereas in my vignette the subway passenger's theory of mind is likely ascribing generic mental states to the masses of other humans around.
Michael Tomasello's Origins of Human Communication is not specifically about verbal speech or language, which is the focus of Christine Kenneally's The First Word. (See August 31, 2009 post). It is about human communication. In Tomasello's view, language is not hardwired genetically into the brains of humans, and while he does not debate whether language is an adaptation or exaptation (see October 25, 2011 post), Tomasello treats language as an emergent property, emerging from antecedent forms of human communication --- specifically gestures such as pointing or pantomiming. And Tomasello is armed with a lot of research data to advance his argument. Given that humans do not generally utter a spoken word of language until they are older than one year (14-18 months), Tomasello finds his antecedents in babies and, with an evolutionarily longer gaze, in apes, the genetically closest animals to homo sapiens sapiens. Pointing and pantomiming are gestures human babies use before they begin to speak (with language essentially becoming a substitute for pantomiming).
Tomasello's thesis is that the "ultimate explanation for how it is that human beings are able to communicate with one another in such complex ways with such simple gestures is that they have unique ways of engaging with one another socially. More specifically, human beings cooperate with one another in species-unique ways involving processes of shared intentionality." By "simple gestures," Tomasello is referring to the acts of pointing and pantomiming. He notes that apes point and respond to pointing, but the key difference here with humans is that when apes point they are making requests --- demanding action by others. "Bring me that food." Apes possess the ability to follow gaze direction. Apes want the other to see something and do something. Humans, by contrast, point not only to direct the other human's attention to something, but to share information with others, request help and cooperation, even when there is no benefit to themselves. (See September 27, 2012, September 12, 2012, and October 13, 2010 posts for discussions of direct and indirect reciprocal altruism). "Pointing," says Tomasello, "is based on humans' natural tendency to follow the gaze direction of others to external targets, and pantomiming is based on humans' natural tendency to interpret the actions of others intentionally. This naturalness makes them good candidates as an intermediate step between ape communication and arbitrary linguistic conventions [of humans]." While there are some primatologists who credit apes with more cooperative, social behavior than Tomasello acknowledges, what differentiates apes and humans, he says, is an underlying psychological infrastructure --- made possible by cultural learning and imitation that allows humans to learn from others and understand their intentions. That leads to shared intentionality --- sometimes referred to as "we" intentionality --- collaborative interactions in which participants share psychological states with one another and have shared goals and shared action plans. This brings us to the research observations about human babies. Tomasello (2007):
"[H]uman adults quite often teach youngsters things by demonstrating what they should do – which the youngsters then respond to by imitating (and internalizing what is learned. Adult chimpanzees do not demonstrate things for youngsters (or at least do this very seldom). Interestingly, when human adults instruct their children in this way (providing communicative cues that they are trying to demonstrate something), 14-month-old infants copy the particular actions the adults used, and they do so much more often than when adults do not explicitly instruct – in which case they just copy the result the adult achieved (Gergely & Csibra, 2006). Furthermore, there is some evidence that 1-year-old infants are beginning to see the collaborative structure of some imitative interactions. Thus, they sometimes observe adult actions directed to them, and then reverse roles and redirect the actions back to the demonstrator, making it clear by looking to the demonstrator’s face that they see this as a joint activity (Carpenter, Tomasello & Striano, 2005). Chimpanzees may on occasion redirect such learned actions back to their partners, but they do not look to their partner’s face in this way (Tomasello & Carpenter, 2005). Thus, chimpanzees’ social learning is actually fairly individualistic, whereas 1-year-old children often respond to instruction and imitate collaboratively, often with the motivation to communicate shared states with others.
"[O]ur proposal," Tomasello writes, " is the relatively uncontroversial one that human collaboration was initially mutualistic --- with this mutualism depending on the first step of more tolerant and food-generous participants [see e.g. March 28, 2013 post]. The more novel part of the proposal is that mutualistic collaboration is the natural home of cooperative communication. Specifically, skills of recursive mindreading arose initially in forming joint goals, and then this led to joint attention on things relevant to the joint goal (top-down) and eventually to other forms of common conceptual ground. Helping motives, already present to some degree in great apes outside of communication, can flourish in mutualistic collaboration in which helping you helps me. And so communication requests for help --- either for actions or for information --- and compliance with these (and perhaps even something in the direction of offering help by informing) were very likely born in mutualistic collaboration. At this point in our quasi-evolutionary tale, then, we have, at a minimum, point to request help and a tendency to grant such requests --- with perhaps some offers of help with useful information --- in the immediate common ground of mutualistic collaborative interactions."
Helping by informing becomes the cornerstone of indirect reciprocity, which Martin Nowak finds makes humans "supercooperators:" the only species that "can summon the full power of indirect reciprocity, thanks to our rich and flexible language." (See September 17, 2012 post).
Place a camera high above this subway station and make a video of the masses transitioning between trains. The flow of people almost seems choreographed; it is not as chaotic as one might think it could be. Now watch this video of ants marching. The movement of ants seems just as orderly as my video of humans passing in the subway station. But it is different. Ants communicate differently, relying on chemicals and touch. They are acting automatically, inflexibly. Chemotaxis (see May 19, 2013 post) is at play here. In contrast, humans read the intentions of other humans in their facial expressions, gaze, and motions (see July 16, 2010 and September 18, 2009 post), even when no words or spoken, and this is relatively unique in the animal kingdom. Apes are known to understand intentions of others, and apes are known to experience empathy. (See November 9, 2010 post). Mirror neurons, which some argue enable us to feel what others are feeling or experiencing, were first discovered in monkeys. (See September 18, 2009 post). What apes do not do --- and humans do, according to Michael Tomasello --- is share intentions and goals with others.
Mindreading introduces us to theory of mind. (See November 21, 2012 and June 12, 2011 posts) It is not a "theory" so much as it is a state of awareness: our ability to attribute mental states to others --- mindreading. In my little "everyday vignette" just described, our theory of mind is almost unconscious, and I would submit that this ability is close to unique, if not unique in the animal kingdom. If apes enjoy a theory of mind, it is certainly not as well-developed as humans. It plays out in nearly every other human scenario imaginable because we are social animals. (See November 21, 2012 post). Our theory of mind undoubtedly varies among these scenarios, for example, if our attentiveness to something specific about another person is heightened, there is probably a heightened attentiveness to another's specific mental state; whereas in my vignette the subway passenger's theory of mind is likely ascribing generic mental states to the masses of other humans around.
Michael Tomasello's Origins of Human Communication is not specifically about verbal speech or language, which is the focus of Christine Kenneally's The First Word. (See August 31, 2009 post). It is about human communication. In Tomasello's view, language is not hardwired genetically into the brains of humans, and while he does not debate whether language is an adaptation or exaptation (see October 25, 2011 post), Tomasello treats language as an emergent property, emerging from antecedent forms of human communication --- specifically gestures such as pointing or pantomiming. And Tomasello is armed with a lot of research data to advance his argument. Given that humans do not generally utter a spoken word of language until they are older than one year (14-18 months), Tomasello finds his antecedents in babies and, with an evolutionary longer gaze, in apes, the genetically closest animal to homo sapiens sapiens. Pointing and pantomiming are gestures human babies use before they begin to speak (with language essentially becoming a substitute for pantomiming).
Tomasello's thesis is that the "ultimate explanation for how it is that human beings are able to communicate with one another in such complex ways with such simple gestures is that they have unique ways of engaging with one another socially. More specifically, human beings cooperate with one another in species-unique ways involving processes of shared intentionality." By "simple gestures," Tomasello is referring to the acts of pointing and pantomiming. He notes that apes point and respond to pointing, but the key difference here with humans is that when apes point they are making requests --- demanding action by others. "Bring me that food." Apes possess the ability to follow gaze direction. Apes want the other to see something and do something. Humans, by contrast, point not only to direct the other human's attention to something, but to share information with others, request help and cooperation, even when there is no benefit to themselves. (See September 27, 2012, September 12, 2012, and October 13, 2010 posts for discussions of direct and indirect reciprocal altruism). "Pointing," says Tomasello, "is based on humans' natural tendency to follow the gaze direction of others to external targets, and pantomiming is based on humans' natural tendency to interpret the actions of others intentionally. This naturalness makes them good candidates as an intermediate step between ape communication and arbitrary linguistic conventions [of humans]." While there are some primatologists who credit apes with more cooperative, social behavior than Tomasello acknowledges, what differentiates apes and humans, he says, is an underlying psychological infrastructure --- made possible by cultural learning and imitation that allows humans to learn from others and understand their intentions. That leads to shared intentionality --- sometimes referred to as "we" intentionality --- collaborative interactions in which participants share psychological states with one another and have shared goals and shared action plans. This brings us to the research observations about human babies. Tomasello (2007):
"[H]uman adults quite often teach youngsters things by demonstrating what they should do – which the youngsters then respond to by imitating (and internalizing what is learned. Adult chimpanzees do not demonstrate things for youngsters (or at least do this very seldom). Interestingly, when human adults instruct their children in this way (providing communicative cues that they are trying to demonstrate something), 14-month-old infants copy the particular actions the adults used, and they do so much more often than when adults do not explicitly instruct – in which case they just copy the result the adult achieved (Gergely & Csibra, 2006). Furthermore, there is some evidence that 1-year-old infants are beginning to see the collaborative structure of some imitative interactions. Thus, they sometimes observe adult actions directed to them, and then reverse roles and redirect the actions back to the demonstrator, making it clear by looking to the demonstrator’s face that they see this as a joint activity (Carpenter, Tomasello & Striano, 2005). Chimpanzees may on occasion redirect such learned actions back to their partners, but they do not look to their partner’s face in this way (Tomasello & Carpenter, 2005). Thus, chimpanzees’ social learning is actually fairly individualistic, whereas 1-year-old children often respond to instruction and imitate collaboratively, often with the motivation to communicate shared states with others.
***
"Human children, on the other hand, often are concerned with sharing psychological states with others by providing them with helpful information, forming shared intentions and attention with them, and learning from demonstrations produced for their benefit. The emergence of these skills and motives for shared intentionality during human evolution did not create totally new cognitive skills. Rather, what it did was to take existing skills of, for example, gaze following, manipulative communication, group action, and social learning, and transform them into their collectively based counterparts of joint attention, cooperative communication, collaborative action, and instructed learning – cornerstones of cultural living. Shared intentionality is a small psychological difference that made a huge difference in human evolution in the way that humans conduct their lives.
"In terms of ontogeny, Tomasello et al (2005) hypothesized that the basic skills and motivations for shared intentionality typically emerge at around the first birthday from the interaction of two developmental trajectories, each representing an evolutionary adaptation from some different point in time. The first trajectory is a general primate (or perhaps great ape) line of development for understanding intentional action and perception, which evolved in the context of primates’ crucially important competitive interactions with one another over food, mates, and other resources (Machiavellian intelligence; Byrne; Whiten, 1988). The second trajectory is a uniquely human line of development for sharing psychological states with others, which seems to be present in nascent form from very early in human ontogeny as infants share emotional states with others in turn-taking sequences (Trevarthen, 1979). The interaction of these two lines of development creates, at around 1 year of age, skills and motivations for sharing psychological states with others in fairly local social interactions, and then later skills and motivations for reacting to and even internalizing various kinds of social norms, collective beliefs, and cultural institutions."
The cooperative homo hunter-gatherer phenomenon is believed to have emerged among homo erectus, hundreds of thousands of years before homo sapiens. (See November 21, 2012 post). Exactly when their social structures emerged is a matter of debate, but as forms of homo cooperation evolved, forms of communication would be expected to emerge as well, and how those forms of communication might have evolved is what Michael Tomasello explores in The Origins of Human Communication. The timing of the emergence of verbal language among homo is also a matter of debate with no consensus, but some place it among homo sapiens 50,000-70,000 years ago, and perhaps as early as 100,000 years ago or even earlier. That timing would correlate with what we believe is the evolutionary origin of modern humans, homo sapiens sapiens, in Africa. Whatever the timing for the origins of human speech, there is a gap of hundreds of thousands of years between the origins of communication and verbal speech. But it is over these hundreds of thousands of years, if not over a million years, of the evolution of cooperation among the species of genus homo that the psychological infrastructure critical to human eusociality cited by Tomasello developed:
"[O]ur proposal," Tomasello writes, " is the relatively uncontroversial one that human collaboration was initially mutualistic --- with this mutualism depending on the first step of more tolerant and food-generous participants [see e.g. March 28, 2013 post]. The more novel part of the proposal is that mutualistic collaboration is the natural home of cooperative communication. Specifically, skills of recursive mindreading arose initially in forming joint goals, and then this led to joint attention on things relevant to the joint goal (top-down) and eventually to other forms of common conceptual ground. Helping motives, already present to some degree in great apes outside of communication, can flourish in mutualistic collaboration in which helping you helps me. And so communication requests for help --- either for actions or for information --- and compliance with these (and perhaps even something in the direction of offering help by informing) were very likely born in mutualistic collaboration. At this point in our quasi-evolutionary tale, then, we have, at a minimum, point to request help and a tendency to grant such requests --- with perhaps some offers of help with useful information --- in the immediate common ground of mutualistic collaborative interactions."
Helping by informing becomes the cornerstone of indirect reciprocity, which Martin Nowak finds makes humans "supercooperators": the only species that "can summon the full power of indirect reciprocity, thanks to our rich and flexible language." (See September 17, 2012 post).
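Indirect reciprocity is often modeled with what Nowak and Sigmund call "image scoring": you help those with a good reputation, and witnesses --- thanks to language --- spread the word about who helped and who refused. The sketch below is a simplified toy in that spirit (the rules and numbers are my own, not Nowak's published model); with these settings the discriminators who help should come out ahead of the defectors who never do:

# Simplified toy in the spirit of image scoring -- not Nowak's published model.
import random

random.seed(1)

N, ROUNDS, BENEFIT, COST = 20, 2000, 3.0, 1.0
reputation = [0] * N          # public image score, spread by gossip/language
payoff = [0.0] * N
# Half the population are discriminators (help only good reputations),
# half are defectors (never help).
discriminator = [i < N // 2 for i in range(N)]

for _ in range(ROUNDS):
    donor, recipient = random.sample(range(N), 2)
    helps = discriminator[donor] and reputation[recipient] >= 0
    if helps:
        payoff[donor] -= COST
        payoff[recipient] += BENEFIT
        reputation[donor] += 1    # witnesses spread the word of the good deed
    else:
        reputation[donor] -= 1    # refusing to help damages one's image

avg = lambda xs: sum(xs) / len(xs)
print("discriminators:", round(avg([p for p, d in zip(payoff, discriminator) if d]), 2))
print("defectors:     ", round(avg([p for p, d in zip(payoff, discriminator) if not d]), 2))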
Missing from Tomasello's discussion of social cognition and human communication is the emotional content, and perhaps the emotional underpinnings, of that communication. Tomasello cites inflexible ape vocalization as "tightly tied to emotion," but as prior posts in this blog point out, there is uniquely human social behavior anchored in emotions arising in the more ancient parts of our brains. (See May 19, 2013 and November 21, 2012 posts). Jaak Panksepp's discussion (May 19, 2013 post) of the care system, the grief system, and the play system offers just a few examples of the emotional underpinnings of the psychological infrastructure that Tomasello relies upon to build his viewpoint. I do not purport to have read everything that Tomasello has researched and written, so maybe this discussion occurs elsewhere. I note that he does comment in his 2005 article that "theorists such as Trevarthen (1979), Braten (2000), and especially Hobson (2002), have elaborated the interpersonal and emotional discussions of early human ontogeny in much more detail than we have here. We mostly agree with their accounts . . ."
There are two final points in this discussion. First, the evidence from babies supports the conclusion that our social cognition and behavior are not innate. Tomasello calls this ontogenetic, which is the nurture (post-natal learning) side of the nature versus nurture debate. (See September 18, 2009 post). With respect to language, for example, V.S. Ramachandran believes that what is innate is our "capacity for acquiring rules of language," not language or verbal speech itself. (See October 25, 2011 and February 15, 2012 posts). Tomasello certainly concurs with the view that "[t]he actual acquisition of language occurs as a result of social interaction." Ramachandran believes that language was enabled by cross linkages in the brain between different motor maps (e.g. the area responsible for manual gestures and the area responsible for orofacial movements). (Id.) Second, Marco Iacoboni's suggestion (September 18, 2009 post) that the individualistic or competitive models of human behavior leave much to be desired is worth invoking. As a prior post (id.) observed, "Self and other are 'inextricably blended,' says Iacoboni. The sense of self follows the sense of 'us,' which is the first 'sense' of awareness an infant has immediately following its birth as a result of mother-infant interactions. We are social animals first." While competition and individualistic behavior are not necessarily vices and might be deemed virtues (see January 30, 2010 post), they are not the endpoint in our understanding and modeling of our human cognitive framework. In the contentious political conversation that now engulfs America, it is not sufficiently recognized that the social and cooperative dimension of the human cognitive framework dominates the competitive dimension, in that framework as well as in our evolutionary history, as a key to human survival. (See November 21, 2012 and August 22, 2012 posts). If we can cooperatively navigate our way successfully through a subway station without thinking too deeply about what we are doing, we ought to be able to collaboratively solve a public policy issue.
Labels: mirror neurons, ontogeny, phylogenetically, pointing