Tuesday, June 28, 2011

Jose Saramago, The Year of the Death of Ricardo Reis (1984)

We write literary fiction that is often based on personal experience, and without hesitation we call it a work of fiction. We write literary fiction that is entirely a fantasy of the mind, and again we call this a work of fiction. We write literary fiction based on recorded history, weaving historical facts that are widely accepted and acknowledged to be actual fact with a story that has no basis in fact, and without hesitation we call this a work of fiction. Similarly, we write literary fiction based on recorded history, and we revise some of these historical facts so they no longer correspond to recorded history, and we still call this a work of fiction. Yet we also write literary fiction that weaves historical facts, and revisions of historical facts, with stories that have no basis in fact, and the work is sometimes presented as gospel fact --- non-fiction. We sometimes call these works "sacred texts." The difference between fiction and non-fiction is blurred.

There is a book that has been at the top of the recent paperback best-seller lists for non-fiction. I have not read it, and so I am unqualified to discuss it. The book is entitled Heaven is for Real, and here is how it is briefly described on the New York Times best-seller list: "A father recounts his 3-year-old son’s encounter with Jesus and the angels during an appendectomy." The "what the heck" question that must come to everyone's mind is how on earth the Times decided to put this book in the category of non-fiction. This is fiction. We know it when we see it. Well, maybe that one sentence did not do it justice. Here is how the full-length article on the book describes it:

"Just two months shy of his fourth birthday, Colton Burpo, the son of an evangelical pastor in Imperial, Neb., was rushed into emergency surgery with a burst appendix. He woke up with an astonishing story: He had died and gone to heaven, where he met his great-grandfather; the biblical figure Samson; John the Baptist; and Jesus, who had eyes that 'were just sort of a sea-blue and they seemed to sparkle,' Colton, now 11 years old, recalled."

Now we know our initial reaction is correct. This is fiction. Well, maybe not. Maybe the non-fiction part is represented by the objective facts that the boy had appendicitis; that he went to the hospital and had surgery; that he had a dream about meeting Samson, John, and Jesus in heaven; and that this 3-year-old boy told his father about the dream. OK. So a non-fiction story that contains a description of a fictional event. But hold it. The title is "Heaven is for Real." That suggests that the fictional part of this story is somehow real. On the totality of the evidence, I vote to move this book to the fiction column.

In The Believing Brain (see June 12, 2011 post), Michael Shermer reports on polls showing that a substantial majority of religious people believe in the afterlife: the eternal survival of the soul, heaven, and hell. The numbers are not as high for Jews, something I will discuss further below. Belief in the afterlife is constructed on what Shermer refers to as patternicity, agenticity, and theory of mind: we imagine ourselves and others as intentional agents continuing indefinitely into the future. As Damasio describes (see April 8, 2011 post), our brain unconsciously monitors everything that is going on in our body, and it is consciously aware of the objective "me" --- our limbs, our skin, the noises we make in the context of the environment that our senses make us aware of --- and when this core self is coupled with the consciousness of a subjective autobiographical self --- "I" --- that autobiographical self becomes an extension of our body schema. The Theory of Mind, and what Michael Gazzaniga refers to as the decoupling device in our brains, enables projection of our unseen essence into the future, and combined with the left hemisphere's storytelling capacity we have the capacity to assign meaning and intentionality to that unseen essence --- what we call our "soul" --- which has the ability to survive our physical death. Shermer uses the phrase "decentering," whereby we "imagine ourselves somewhere else from the Archimedean point beyond our body . . . we envision ourselves in the afterlife as a decentered image removed from this time and space into an empyreal realm." This is the same capacity of the brain that enables us to create works of fiction. (See May 22, 2011 post). Our belief in heaven, hell, and the eternal soul arises from our brain's capacity to write works of fiction and tell stories about things that do not exist.

Fernando Pessoa is an authentic person, a beloved poet of Portugal. He was particularly admired by Jose Saramago, the author of The Year of the Death of Ricardo Reis. Pessoa invented "heteronyms" --- imaginary characters created by a writer to write in different styles --- one of which was Ricardo Reis. In Saramago's novel, Ricardo Reis arrives back in Portugal in mid-December 1935, after sixteen years abroad in Brazil. Fernando Pessoa, the Portuguese poet, has just died on November 30, 1935 (a true historical fact). Reis is drawn to Pessoa's gravesite for a visit. Not long thereafter, Pessoa, who had been dead about a month, makes himself visible to Reis and they talk. "I have about eight months in which to wander around as I please," explains Pessoa. "Why eight months?" asks Reis, and Pessoa explains: "The usual period is nine months, the same length of time we spend in our mother's womb, I believe it's a question of symmetry, before we are born no one can see us but they think about us everyday. After we are dead they cannot see us any longer and every day they go on forgetting us a little more, and apart from exceptional cases it takes about nine months to achieve total oblivion." Perfect. So we have an "afterlife" of nine months that mirrors our "pre-life" of nine months duration. This is not illogical. But it's storytelling, no different than the storytelling in Heaven is for Real, and we do not place The Year of the Death of Ricardo Reis in the category of works of non-fiction.

Alan Segal wrote a very good history of the human belief in an afterlife entitled Life After Death: A History of the Afterlife in Western Religion. Segal cites the same polling data that Michael Shermer refers to about the substantial number of people who believe in an afterlife. Belief in an afterlife is documented to go back as far as we have a written record of human culture --- to Egypt and Mesopotamia. Ironically, the culture that coalesced around a single god --- the founders of monotheism, the Jews of First Temple Judaism --- was devoid of a story of the afterlife in the Pentateuch, the five books of Moses. The afterlife did not creep into the Jewish psyche until a couple of hundred years BCE, around the time of the Book of Daniel or the period of the Maccabees, and it later became a part of the rabbinical tradition. This probably explains why belief in the afterlife is significantly lower among Jews than among Christians and Muslims. Ironically, Spinoza was excommunicated by his Jewish community in Amsterdam in the mid-17th century because, in substantial part, he professed that there was no eternal soul. Segal also singles out adherents of Confucianism for very low belief in an afterlife.

Segal's history of human belief in the afterlife, and even Spinoza's experience, confirms the role of culture and social structure --- religion, government, and family --- in reinforcing beliefs such as belief in the afterlife. Shermer's list of biases does not seem sufficient to explain why belief in the afterlife is so engrained --- although one bias that Shermer only mentions in passing, the authority bias, the tendency to believe the word of persons in positions of authority, would go part of the way toward explaining the strength of this belief. There is something more in the reinforcement of this belief than just the biases that Shermer explains are critical to confirming belief. Storytelling is part of our social nature, and our capacity for storytelling clearly has survival value for us, even when it produces a work of fiction. Belief in the afterlife is part of the social reality that many have constructed for themselves. Its origin lies in adaptive strategies for survival at some point in our past.

Sunday, June 12, 2011

Michael Shermer, The Believing Brain (2011)

This is a better, more thorough book on the subject of how we come to believe things that are not real than Supersense (see June 5, 2011 post). I say that for multiple reasons. Not just because Michael Shermer avoids a gimmicky term like "supersense," but primarily because the structure of the author's presentation and the evidence marshaled in support of his thesis are more persuasive than Hood's more anecdotal presentation. While there is much in common with the subject matter of Supersense, in The Believing Brain Michael Gazzaniga's "left hemisphere" interpreter is presented as a candidate for the brain's storytelling capability, which reconstructs events and weaves those events into a meaningful story --- including stories that are simply works of fiction. As I said in the previous post, this storytelling capability is critical to our narratives of belief. Finally, Shermer treats his subject with seemingly more objectivity than Hood. For Shermer, the study of how we believe something is not limited to the strange or unreal --- ghosts, angels, gods, aliens, and phantom conspiracies --- but extends also to politics, economics, and even hard science.

According to Shermer (and Hood, see previous post), as sensory data flows into the brain, there is a "tendency" for the brain to begin looking for meaningful patterns in both meaningful and meaningless data. He calls this process patternicity. A second tendency of the brain is to "infuse patterns with meaning, intention, and agency." Shermer labels this tendency agenticity. Because of these tendencies, we form beliefs first, and only later do we try to inform our beliefs with facts. The overall process Shermer calls belief-dependent realism (well, maybe there is one gimmicky term). This model purports to explain not only how we form entirely mistaken, fantasy beliefs, but also beliefs that are later embraced under the gospel of science (e.g., hypotheses later substantiated by empirical and repeatable testing).

Harkening back to David Hume (see February 27, 2011 post), Shermer asserts that patternicity is premised on "association learning," which is "fundamental to all animal behavior from C. elegans (roundworm) to Homo sapiens." Because our survival may depend on split-second decisions in which there is no time to research and discover underlying facts about every threat or opportunity that faces us, evolution set the brain's default mode in the position of assuming that all patterns are real, says Shermer. A cost associated with this behavior is that the brain may lump causal associations (e.g., wind causes plants to rustle) together with non-causal associations (e.g., there is an unseen agent in the plants). In this circumstance, superstition --- an incorrect causal association --- is born. "In this sense, patternicities such as superstition and magical thinking are not so much errors in cognition as they are the natural processes of a learning brain." Religion, conspiracy theories, and political beliefs fit this model as well.

The difficulty humans face in applying later-learned facts to earlier-formed beliefs is that beliefs are often difficult to shake even when our factual knowledge is entirely inconsistent with the belief. The difficulty Galileo faced in convincing others that the earth revolved around the sun, against the long-held belief in Aristotle's geocentric view and the words of the Bible that the sun revolved around the earth, is an example. What Shermer does, however, to document this difficulty is to catalog a series of biases --- what he labels cognitive heuristics --- that reinforce beliefs and confirm that they are correct. A heuristic is the brain's capacity to solve problems through intuition, trial and error, rules of thumb, or other informal means when there is no formal means for solving the problem. These biases are part of our documented behavior, including the tendency to seek and find confirmatory evidence in support of an existing belief and to ignore or reinterpret disconfirming evidence (confirmation bias), the tendency to reconstruct the past to fit the present (hindsight bias), the tendency to rationalize decisions after the fact to confirm that our actions were the best we could have done under the circumstances (self-justification bias), the tendency to attribute different causes to our own beliefs and actions than to those of others (attribution bias), the tendency to believe in something because we have already invested so much in that belief (sunk-cost bias), the tendency to opt for whatever we are used to or familiar with (status quo bias), the tendency to value what we own more than what we do not own (endowment effect), the tendency to draw different conclusions based on how data is presented (framing effect), the tendency to rely too heavily on a past reference or on one piece of information (anchoring bias), the tendency to miss something obvious while attending to something special and specific (inattentional blindness bias), and the tendency to recognize the power of cognitive biases in others but to be blind to their influence on ourselves (bias blind spot), among others.

Regrettably, Shermer does not provide a biological explanation for these biases (or "tendencies," as he seems inclined to refer to them). And "bias" may be too charged a term. Bias is a term that arguably reflects a cultural or community predisposition of some sort (whether the community consists of a family, a village, a region, a nation, or a species), which is really within the province of nurture rather than nature. These "tendencies," however, can also be predispositions grounded in our biology, particularly our neurobiological system. I strongly suspect that if we dissected human brains and their networks of connected neurons from a representative sample of humans, we would find a very high level of near identity among brains. There will be some differences due to DNA, and there will be some pathological differences as well, perhaps caused during embryonic development. But I believe that by and large we will find that human brains, neuron by neuron, are organized and folded and layered in substantially identical ways. This is the product of evolution. Some of these neuronal networks are unconsciously and automatically operational as part of what Damasio calls the protoself --- the body monitoring itself (see April 8, 2011 post) --- and are functioning and repeating the transmission of electro-chemical signals to and from the brain even before we are born. Other neural networks are only activated in response to sensory experience with the external physical world as part of what Damasio calls the "core self," a process that begins immediately with our birth. Finally, when the neural networks within the cerebral cortex are activated to trigger autobiographical memory and our storytelling capability, we find our autobiographical self. While much of the neuronal activity in the cerebral cortex is connected to the automated, unconscious neuronal networks tied to monitoring our own body and our external sensory experiences that are part of our core self experience, there is much that is influenced more by nurture than nature --- culture and learning at the social level. At the level of the protoself and the core self, except to the extent that a particular individual may have a variant pathological condition, there is a high level of commonality in the activation of neuronal networks repeatedly sending and receiving electro-chemical impulses. This is reflected in the high level of commonality of similar emotional and feeling experiences humans share in response to the same stimuli --- whether the stimulus is within the body or outside of it. There will be more differences among us at the level of the core self, because we will not always share sensory experiences with the same level of frequency as all other humans. I am now speaking in terms of the brain's capacity for memory and learning, which, as Eric Kandel demonstrated, is influenced by synaptic connections strengthened through repeated experience and learning. In contrast, at the level of the neural networks in the cerebral cortex, there is likely to be greater disparity among us in the precise neural networks activated, because we are going to have significantly different cultural and social learning experiences. It is here where biases may be formed. But at the level of the protoself and core self, important biological predispositions or tendencies have significant influence on the instincts and intuition that are part of the belief-forming process. And the tendencies that Shermer identifies may be the outcome of both biologically hardwired predispositions and culturally learned biases.

The Believing Brain has a discussion entitled "Synaptic States and Believing Neurons" in which Shermer recognizes that neurons communicate information in three ways: (1) firing frequency, (2) firing location, and (3) firing number. This discussion also includes a recognition that certain types of chemicals in the neuronal network act to reinforce learning and belief. This discussion, in my view, supports what I just said. Shermer, on the other hand, does not incorporate this discussion into his discourse on biases, and that is a shortcoming.

Our senses evolved, Shermer observes, for "perceiving objects of middling size." I wrote in an earlier post (see August 31, 2009 post) that in my worldview, there are three very big subjects for human inquiry: 1) the realm of the very large --- the universe (and whether there is more than one, making the word universe a possible misnomer), its origin and history; 2) the realm of the very small --- the smallest molecular (sub)units and their behavior; and 3) the human mind --- how it works, consciousness, intentionality. "We are not equipped to perceive atoms and germs, on the one end of the scale, or galaxies and expanding universes, on the other end," writes Shermer. I disagree, but I understand Shermer's point. There are several cognitive biases, listed above (e.g., the status quo bias, the inattentional blindness bias), that discourage the general population from looking beyond objects of middling size to the realms of the very large and the very small. But frankly, most of us simply lack the education, training, and time to focus on the realms of the very large and the very small. We do not understand them. Shermer seems to have concluded that only "scientists," those trained in the scientific approach to knowing and understanding the world, are capable of neutralizing these various biases. But the substantial variation among the human population in education, training, and the amount of time we have to focus on the large and the small confirms that our biases are significantly tied to culture and nurture. Are we all born with the capacity to become an Einstein or a Steven Weinberg? I am not saying that. But I don't believe Einstein became Einstein entirely on his own.

Finally, I want to return to the concept of agenticity --- the tendency to impart patterns with intention and agency, even patterns we perceive in inanimate objects --- which leads us to believe that unseen agents control the world. Now we enter the world of spirits, gods, souls, devils, angels, aliens, and yes, intelligent designers. This is the domain of teleology, which I have mentioned in previous posts (see March 24, 2010 and May 12, 2010 posts), as well as that part of philosophy that discusses intentionality --- how our brain relates to objects external to ourselves. Shermer's discussion of essentialism flows from the writings of Paul Bloom in Descartes' Baby, a concept elaborated upon by Hood as well. Essentialism refers to the brain's ability to abstract about a physical object, to ascribe to that object an unobservable essence --- beliefs, desires, intentions, and goals --- and to treat these invisible essences as real. In the case of animate objects like other humans or animals, notes Michael Gazzaniga in Human, we believe they must have beliefs, desires, intentions, etc. just like our own. Anthropomorphism and teleological thinking are born.

Two questions emerge from this discussion: what evolutionary value would essentialism provide to humans? And what is the biological process in the brain that explains it? Gazzaniga explains that our ability to reason about unobservable entities or unobservable essences allows humans to predict and explain events --- to predict the behavior of others. In other words, we can predict the behavior of another animal by inferring its psychological state (sometimes referred to as a theory of mind, or ToM), which would have enormous significance for survival. It also appears to relate to our storytelling capability about things that do not exist. Gazzaniga explains it this way in Human:

"Sometimes our predilection for explaining the cause of things or behaviors with teleological thinking runs amuck. One of the reasons is that the agency-detection device [in our brain] is rather zealous. Barrett calls it hyperactive. It likes to ...find animate suspects even when there are none. When you hear a sound in the middle of the night, the question that first comes to mind is Who is that? rather than What is that? When you see a wispy something moving in the dark, Who is that comes to mind because the detective device is not modern and up-to-date. The detective device was forged many thousands of years ago before there were inanimate objects that could move or make noise on their own. To first consider a potential danger as animate is adaptive. It worked most of the time. Those who did it survived and passed their genes to us. *** The hyperactive detective device, combined with our need to explain and teleological thinking, is the basis of creationism. To explain why we exist, the hyperactive detection device says there must be a Who involved. Teleological reasoning says there must be an intentional design. The cause must be the desires and intentions and behavior of the Who. The we were designed by a Who. *** All of this is reminiscent of what the left-brain interpreter would do, which it has been demonstrated to do in other settings."

The research into so-called mirror neurons (see September 18, 2009 post) puts us on the road to explaining how the brain helps us form beliefs about other people's intentions, and Shermer incorporates this research into his discussion of agenticity. So does Paul Ekman's research into facial expressions, as described in Dacher Keltner's book Born to Be Good (see July 16, 2010 post).

I disagree with Shermer and others that we are hardwired to believe in God. Hardwired means that it is in our DNA, and I have rejected the idea that there is a "god gene" in prior posts (see November 30, 2009 post). What we are hardwired for is the disposition to believe, when we do not immediately perceive a causal explanation for some event, that some unseen agent with intentions and desires and beliefs very similar to our own is the causal explanation for that event. That unseen agent may be another person we do not see, another animal we do not see, a ghost we do not see, or the devil we do not see. We are hardwired to search for patterns in what we perceive, and we are hardwired with a storytelling capacity to explain what we perceive, and sometimes that story is based on things we can actually see and touch, and sometimes that story is made up out of whole cloth. But even if that story is made up out of whole cloth, it is a story about an agent that thinks, desires, and behaves just like ourselves. God was made by humans in our image, not the other way around. In this explanation, we also begin to understand our capacity for creativity --- engineering, songwriting, and writing works of fiction.

Sunday, June 5, 2011

Bruce M. Hood, Supersense - Why We Believe in the Unbelievable (2009)

This is another book where I suspect the editor or publisher, not the author, dictated the title, but maybe I am wrong. The term 'supersense' is used liberally throughout this volume. This is a silly term, which suggests that there is some additional sensory modality beyond those we are all familiar with, and suggests that it is "superior" to all other senses. I disagree with the suggestion. [Postscript, October 15, 2011: the paperback version of this book has been renamed -- The Science of Superstition: How the Developing Brain Creates Supernatural Beliefs. A good choice.]

For humans, the most special sensory modality is vision, yet all the sensory modalities, both somatic (temperature, touch, pressure, pain, body position) and the five special senses (smell, hearing, taste, vision, and balance), simultaneously provide important stimuli and convert energy into nervous impulses that are mapped in the brain at about the same time. These different neuronal inputs, arriving initially in different areas of the brain, are subsequently joined in convergence zones in the brain, which integrate information arriving from the different sensory modalities and enable a unified experience of our environment rather than a series of fragmentary experiences. This is sometimes referred to as the "binding problem": how multisensory inputs are integrated in the brain to form a unified experience.

This concept of sensory experience is not what author Bruce Hood has in mind. For Hood, "supersense" is an "inclination that [our experiences, which are not substantiated by a body of reliable evidence] may be real." In this vein, "supersense" is derivative of what we have labeled "extrasensory perception" and is not a "sense" at all. What Hood is getting at is what we refer to as instinct, intuition, and inference, all of which are derivative of the known sensory modalities and the brain's capacity to search out and recognize patterns that explain our experiences. In the course of searching out and trying to find patterns, we initially form beliefs about our experience. This belief-forming capability begins in our infancy, a time when we are not cognitively capable of explaining what we believe or why we believe it. Our minds are designed, writes Hood, "to see the world as organized, [and] we often detect patterns that are not really present."

The following passage highlights that intuition is at the foundation of the so-called "supersense": "There is good evidence that children naturally and spontaneously think about the unseen properties that govern the world. They infer forces to explain events they cannot directly see, understand that living things have life energy, and reason in terms of essence when thinking about the true nature of animals. And, of course, they begin to understand that other people have minds. These processes are not taught to children. They are reasoning, though it is not clear that they can necessarily reflect on why or how they are coming up with their decisions. That is why their reasoning is intuitive. *** Intuition is often called a 'gut feeling.' Sometimes we get a 'vibe' when we sense a physical feeling of knowing . . . The neuroscientist Antonio Damasio calls this the somatic marker: it indicates the way emotions affect reasoning in a rapid and often unconscious way." Hood's point: a lot of our responses to what our sensory modalities experience are automatic and "unlearned." It is in our nature, therefore, that there will likely be a variance between what we believe and what is real.

Hood is not original in this way of thinking, nor in the body of research that supports it. Paul Bloom's book, Descartes' Baby, demonstrated that from a very early age children distinguish between physical states and mental states, making us "natural born dualists," and that we reason differently about the two states. And within the realm of mental states, children discern intentions, purpose, goals, and emotions in other animate objects. Children believe and recognize that other animate objects have "minds," and believe they can recognize intentions, purpose, goals, and emotions in others: we can read minds. In a previous post (see September 18, 2009 post), the role of "mirror neurons" has been identified with our ability to "feel" the actions and perhaps the intentions of others. Numerous researchers, including Michael Gazzaniga (see September 27, 2009 post), have written about the neural mechanisms of nonconscious mimicry and emotional contagion (taking on the mood of others) that begin in early infancy --- particularly in the case of child and mother.

According to Hood, the intuitions of children include the belief that there are no random events or patterns, that events are caused by intention, that complexity is neither random nor spontaneous but a product of design, and that living things differ because of some essential, invisible property inside them. These intuitions "are fertile soil for creationism," concludes Hood. As children, we "infer structures where there may be none," and it is these inferences that give rise to supernatural beliefs. Natural selection is counterintuitive. As children grow older, they tend to categorize their experiences and "generate naive theories that explain the physical world, the living world, and ultimately the psychological world of other people. While children's naive theories are often correct, they can be wrong because the causes and mechanisms they are trying to reason about are invisible... When we misapply the property of one natural kind to another, we are thinking unnaturally. If we continue to believe it is true, then our thinking has become supernatural... This is where the supersense comes from." From this point, what we call "animism" and "anthropomorphism" is a natural next step. We create mythological "gods" in our own image, and then we use our storytelling capability to say that "god" created us in god's image. At this point, culture takes over and reinforces what began as "naive theories." Children's misconceptions are intuitive and not taught, "but they feed into a cultural context to become folklore, the paranormal, and religion. We know that social environments are important in providing frameworks of belief, but they only exist in the first place because of the supersense."

Instinct, intuition, and inference are not culprits in this story. All are essential to human survival, and hence the neurobiological system that has evolved with our modern human mind is an evolutionary outcome that favors taking shortcuts, including adopting beliefs about things that might not be true. "Intuitive reasoning," Hood notes, "has the advantage in the race to influence our decision-making because it is so effortless, covert, and rapid." It consumes less energy, and without it our decision-making would be slower, making it more likely that we humans might be someone else's lunch.

What is missing from Hood's account of how we come to believe that things that are not real are real is the mind's role in storytelling --- explaining beliefs in either fictional or non-fictional terms. This is something that is probably unique to humans, because it requires a language faculty. Hood acknowledges that "humans are compelled to understand the nature of the world around [us] as part of the way our brains try to make sense of our experiences." But the closest Hood comes to discussing this subject is his discussion of free will. We have the experience of making "conscious decisions" to undertake action, but brain research shows that the point in time when we think we have made a choice occurs after the event (action) has occurred, suggesting that the mind constructs a story that fits with a decision after it has been made. In Human (see September 27, 2009 post), Michael Gazzaniga writes that while "other animals and humans use observables to predict, it may be that humans alone try to explain." For Gazzaniga, this is the role of the left side of our brain, which acts as an interpreter of our conscious experience and is "driven to generate explanations and hypotheses regardless of the circumstances." From studies of brain lesions, we know that the interpreter on the left side of the brain is only as good as the information it receives, and the outcome of its efforts to make sense of wacky incoming information can be a lot of "imaginative stories." Gazzaniga adds, "just because you can imagine something does not mean it [is] true. You can imagine a unicorn, a satyr, and a talking mouse. Just because you believe or imagine that the mind and body are separate does not mean they are." So the structure of the human brain has either evolved or adapted to interpret and explain both the environment around it and its own subjective experiences. Language provides the faculty to convert interpretation into a story, which may be either true or false, but which becomes a belief nevertheless.