Tuesday, December 20, 2011

Jose Saramago, Cain (2011)

The story of Cain in Genesis is brief: no more than 30 or so sentences over 25 verses. Cain, the first child of Adam and Eve, is born and grows to become a farmer; Cain and his brother Abel, a shepherd, make an offering to God, and Cain's offering is snubbed by God while Abel's is not; Cain takes this badly and kills his brother; God asks Cain what happened to Abel and Cain lies to God by claiming that he does not know (here Cain utters the memorable phrase, "Am I my brother's keeper?"); God knows that Cain has lied and that he has killed his brother; Cain is told by God that he will wander the earth forever as punishment (a fate worse than death); God places a mark on Cain's forehead so that no one will slay him, ensuring this fate worse than death; Cain wanders to the land of Nod, where he sleeps with a woman who bears him a son named Enoch. That is it. We do not know how the woman Cain slept with came to be born, unless the woman is Eve herself, which would not be totally out of character with other incest stories of the Bible. Adam and Eve had one more son, named Seth, and we are not informed of any daughters. The inference is that there must be other humans out there besides Adam and Eve copulating and multiplying, but the Genesis story sheds no light on their existence. A shortcoming in the tale of the primordial first family.

Leave it to Jose Saramago's imagination to fill in the blanks of the Cain story. And Saramago fills in not only a few blanks of Cain's story, but blanks in the biblical stories of Adam and Eve, Abraham and Isaac, Sodom and Gomorrah, Moses, Joshua, Noah, and Job as well. For Cain is not merely a wanderer but a time traveler: Cain travels back and forth in time and, with the fortune of a Forrest Gump, just happens to appear on the stage of many significant biblical events recorded in Genesis and Exodus. Who knew? And the wisdom Cain arrives at by the end of his journey is that God is not only petty but a bad guy, not much different from the devil. God is neither merciful nor loving, and he certainly has little empathy or compassion for man or his Chosen People. This is a world of fallen human beings (see August 28, 2011 post): humanity characterized by lust, incest, rape, murder, jealousy, deception, and real estate wars.

Saramago is careful to point out that Cain is neither Jewish nor an Israelite. While standing at the walls of Jericho with Joshua's army, Cain is asked whether he is an Israelite. He says he is not. Well then, if you are not an Israelite, what are you? Saramago explains, "When Cain was born, there was no such thing as the Israelites, but neither was he a Hittite, an Amorite, a Perizzite, a Hivite, or a Jebusite." He is a man without an identity, or as Saramago describes him, a "man without definition." But in one sense --- as wanderer --- his wandering is the story of the Israelites, the Jews. And as a time traveler, he is like the "prophets" Jeremiah and Isaiah, whose prophecies were written by biblical authors who, from their vantage point in the future, projected themselves into the past.

The Lord murders innocents at Sodom and Gomorrah; orders Moses to slaughter thousands of Israelites because they turned their attention to an idol; orders Abraham to slay and sacrifice his son; tolerates the devil heaping misery upon Job; and commits genocide of nearly the entire human race when he causes the flood. Cain abhors what he witnesses in the Lord's behavior. But Cain is neither Abraham, nor Noah, nor Moses, nor Job. Cain refuses to be respectful of the Lord. In the end, Cain sabotages the Lord's plan to purify the human race by designating Noah and his family to protect a select few of each species, including the human species, from the Lord's flood and to start life anew. As Noah's ark endures the flood and its passengers wait for the waters to subside, Cain slays Noah and his family, so that when the waters have finally subsided, Cain is the only human left. How now can humans possibly multiply? It sounds like the same procreation riddle that plagues the Genesis story from the birth of Cain. As Cain disembarks the ark, the Lord asks where Noah and his family are. This time Cain tells the truth: they are dead. I killed them. And the Lord and Cain argue over which of the two of them is more loathsome. The lingering question at the close of Saramago's narrative is how all those events that Cain, our wandering time traveler, witnessed far into the future could ever happen, now that he and the Lord have killed off every other human being. There are no women remaining for Cain to procreate with.

Imagination is a wonderful human attribute, and it is certainly one important characteristic that distinguishes our species from its closest relatives. Paul Bloom has contemplated why imagination has survival value: it triggers emotional responses, and it creates pleasure, both of which aid survival. "Imagination is Reality Lite," writes Bloom, "a useful substitute when the real pleasure is inaccessible, too risky, or too much work." I have a difficult time, however, accepting the conclusion that fantasy itself confers significant survival value. Fantasy is at best an exaptation, just as V.S. Ramachandran concludes that our ability for abstraction is an exaptation (see October 25, 2011 post).

Imagination is what we use when we plan for the future, and that capability has significant survival value. (See May 22, 2011 post, indeed another post revolving around a novelist who took liberties with biblical stories.) Being able to view the past in different ways, seeing the strengths and weaknesses of the ways our predecessors carried out their lives, and even fictionalizing the past so as to provide an angle on the past that our predecessors could not see, is part of the art of imagination and planning for the future.

As I wrote previously in this blog, I also believe storytelling evolved in part to preserve our memories of things past. (See August 15, 2011 post). And storytelling, whether historical or fictional or both, enables the construction of both personal and social/group identity. The biblical stories served exactly that purpose as they were merged, edited, and redacted ultimately to serve the national goals of the Jewish kingdom.

I don't think Saramago is out to preserve our memory with his imaginative retelling of the Cain story. And while Saramago is whimsical in his treatment of the Lord in Cain, he is not satisfying a desire for pleasure from fantasy either. But I do think he is out to remind us that our memories are fragile, capable of being rewritten in ways that make us look at things differently, perhaps in a revolutionary way, causing us to pause and plan our future differently than we have been carrying on all these millennia. What Saramago's imagination has succeeded in doing, however, is to show that the biblical stories are no more than products of a fertile imagination, just as Saramago's Cain is the product of a fertile imagination. The God of the Bible and the God of Cain simply cannot be real. If God made man in his image, which is the biblical telling of our creation, we are rotten murderers, mass murderers to the core. If man imagined God in our image, then God cannot be perfect. The God of the Bible and the God of Cain dovetail in these respects: God is a fallen character. But I submit there is a third perspective: that (1) man imagined God as an unseen agent, who is not defective and is all-powerful, to explain what we are unable to explain about the world and universe around us, and (2) as social structures among homo sapiens developed, leaders among men claimed special powers to communicate with God to justify or sustain their relationship of power (authority) over other men. And it is the leaders among men who choose to make of God what they want: at one time loving and empathetic, at another time a warrior and mass murderer, and at other times utterly indifferent. This third perspective is more consistent with the history of God in the Abrahamic religions. By this third perspective, God is wholly imaginary as well, a work of fiction.

Saturday, December 10, 2011

Rebecca Skloot, The Immortal Life of Henrietta Lacks (2010)

The Immortal Life of Henrietta Lacks reports on a significant development in medical research that occurred during the 1950s and has continued to provide benefits for both research and therapy in the decades since. If that were the tag line for this book, it might never have ended up on the best-seller lists. Add to that tag line that the book confronts a thorny question of medical ethics --- what sorts of disclosures must be made by medical researchers who take body tissue (in this case, cancer cells from a dying patient) --- and the fact that those in the medical research supply chain subsequently earned millions of dollars from that body tissue, and perhaps some in the public begin to pay attention. And add to that stew the fact that the donor was a poor African-American woman whose heirs never benefited from her donation; whose descendants suffered anxiety and some level of anger because of misinformation and ignorance when their mother's name and contribution to medical science became public knowledge two decades after she died; whose descendants never sought financial compensation or benefits, only recognition for their mother's contribution to medical science --- and you have a best-selling page-turner. While there are several protagonists in this story --- including the author --- the primary protagonists are the so-called "HeLa" cells taken from the cervix of Henrietta Lacks by a doctor at Johns Hopkins University. The HeLa cells were the first "immortal" human cells ever grown in culture --- cells that reproduced themselves prolifically. They were essential to developing the polio vaccine, and other scientific landmarks since then, including cloning, gene mapping, and in vitro fertilization, have used her cells. The significance of these cells is that, because they created a virtual pipeline of human cells that could be replicated in the billions, they enabled medical research to be undertaken on actual human cells, rather than on humans themselves or some other animal.

So that is the story. But there is a subliminal story that leads me back to the subject of the previous post --- the subject of identity, specifically group identity. Rebecca Skloot, the author of The Immortal Life of Henrietta Lacks, does not explore the subject of cultural identity or racial identity, but she does explore, at least superficially through her narrative, group identity in the context of a family and the family members' genetic links to their past. Genetic identity is linked to various other forms of group or social identity: political identity (the heritability of power), ethnic identity, religious identity, family identity, racial identity, and caste. But as an article in the British Medical Journal notes, "These identities overlap in various ways, and genetic evidence will not affect them all equally . . . confusion looms when genetic markers conflict with other kinds of markers of group membership such as shared culture or historical narrative." I made this point in the previous post, when I observed that Jews do not appear to define Jewish identity strictly in terms of a genetic link to the past. It may be, as genetic research on some Jewish communities suggests, that genetic identity is stronger within some ethnic/religious groups than in others, but is this an attribute that anyone, Jewish or not, seriously wants to promote as defining what it really means to be part of a human subgroup? It smacks of Nazism.

As I read Skloot's story of Henrietta Lacks and her family and thought further about The Finkler Question, I was reminded of a Hebrew word, mizpah, which denotes an emotional bond between people who are separated (either physically or by death). The word might apply, for example, to Jews physically separated from one another by virtue of the diaspora and their mutual recognition of a shared Jewish identity. But it would also apply to Henrietta Lacks and her children, and even the grandchildren she never knew. This bond is a type of identity, and at its core is human emotion. Several books discussed in this blog have highlighted the importance of emotion in building social relations. (See April 11, 2011 post, discussing David Hume and Antonio Damasio.) As Dacher Keltner writes, "Emotions are involuntary commitment devices that bind us to one another in long-term, mutually beneficial relationships." (See July 16, 2010 post.) In the human context, emotional bonding is certainly an attribute of group identity, although its strength may vary from person to person. As Frans de Waal observes, however, this same emotional bonding may exist in some other species as well (see November 9, 2010 post).

Genetic identity is, of course, nearly definitive in the discussion of group identity at the species level, but within a species, different groups are shaped by nurture and culture, and social constructs develop. And while genes may be at the foundation of the physical design that triggers the various emotions and behaviors that make us more or less social, nurture and culture are what reinforce and strengthen some emotional social bonds over others, leading to the formation of social groups and their identity.

The tug of genetic identity is felt by Henrietta Lacks' daughter, Deborah, whose anxiety grows after she learns that her mother's cells have become famous. One of her fears is that the cancer that killed her mother might take her life as well, a fear quelled only when she learns that her mother's cervical cancer was caused not by a genetic disease or predisposition, but by the human papillomavirus. Research indicates that individuals struggle with their genetic identity, apparently because many find it difficult to understand just what it is about genes that is important enough to anchor an identity. More significantly, it is an emotional bond that establishes this family identity --- a bond developed during the early years of nurturing Henrietta's children, and a shared historical narrative that ties mother and children together --- a narrative that Rebecca Skloot has told well.

Friday, December 2, 2011

Howard Jacobson, The Finkler Question (2010)

While I am not uninformed or without experience on the subject of this book --- Jewish identity --- I am probably not qualified to address it, at least not competently. A small vignette from the novel explains my sentiment. One of the novel's characters, Hephzibah, the curator of a museum-in-the-making on Anglo-Jewish culture, asks her non-Jewish lover, Julian Treslove, to take a break from his role as "assistant curator" because he would not have much to contribute until the museum was open. Almost immediately after making the request, she regrets it, because she realizes she has effectively said that a non-Jew is not competent to organize a museum about Jewish culture. "It wasn't fair to him. Jews might have been possessed of a crowded almanac of Jewish events, a Jewish Who's Who extending back to the first man and woman, but Treslove couldn't be expected to know in every instance Who Was and Who Was Not, Who Had Changed His Name, Who Had Married In or Out. What is more he would have no instinct for it. Some things you cannot acquire. You have to be born and brought up a Jew to see the hand of Jews in everything. That or be born and brought up a Nazi." I don't have the instinct to address Jewish identity. Except for the last sentence of this quote, Hephzibah is probably correct, but I ask, does it have to be that way?

At a certain level, it is possible to generalize the subject of identity so that what is true of Jewish identity is true of Russian identity, German identity, Muslim identity, atheist identity, WASP identity, African American identity. But reducing human cultural identity to its lowest common denominators deprives us of the richness of the stories that define the culture and ethnicity of different humans.

The subject of identity, in this case Jewish identity, has to be approached in two ways: personal identity (what does it mean to be Jewish?) and cultural or group identity (what does Jewish mean?). Howard Jacobson zigs and zags between both, as he should, because it is impossible to separate the two. Personal identity owes its existence to group identity: we humans are social animals. The title of Jacobson's novel, The Finkler Question, reveals the inseparability of the two. Finkler is Sam Finkler, Julian Treslove's lifelong friend, and a man with his own personal identity issues. For Treslove, his friend Sam Finkler is emblematic of all Jews. Privately, Treslove refers to Jews as "Finklers." "It took away the stigma, [Treslove] thought. The minute you talked about the Finkler Question, say, or the Finkler conspiracy, you sucked out the toxins. But he was never quite able to get around to explaining this to Finkler himself." Substitute Jewish back for Finkler and one of the themes of this book is immediately revealed by its title: the Jewish Question.

The phrase the "Jewish Question" first appears in 18th century England as part of a debate over the rights of Jews in England --- voting rights, property rights. I did not dwell on it in my discussion of David Liss' The Paper Conspiracy (see November 16, 2011 post), but Liss fairly describes and frequently mentions the legal status of Jews in early 18th century England and their lack of property rights and voting rights. But while it began as a neutral phrase, the "Jewish question" evolved and took on an anti-semitic tone by the 19th century. A discussion of the "Jewish question" ultimately could not avoid a discussion of a "solution" --- whether it be assimilation, deportation and resettlement --- and by 20th century Nazi Germany, the solution became wedded with malice in the so-called "final solution," extermination. The phrase the "Jewish question" is a little bit like Treslove substituting Finkler for Jewish in his lexicon. Here the word "question," particularly in its anti-semitic context, really means "problem." Problems are in need of solutions.

The origins of the "Jewish question" begin long before 18th-century England. They begin with the demonization of the Jews by Christians as early as the founding of the Catholic church, as the former Catholic priest James Carroll has so thoroughly documented in Constantine's Sword. Regrettably, oppressors --- in this case, both Christians and Christian institutions --- almost never confess to their own problems and look at themselves in the same way that they look at others, and so we do not hear simultaneously from them about the "christian question." But there is in this history of demonization a "christian question," just as in other times and places there is a "muslim question," a "hindu question," a "communist question," or a "Hutu question." Jacobson at least implicitly, if not expressly, recognizes this through Finkler, who turns the tables on the "jewish question" by drawing attention to the plight of Palestinians in Israel and Gaza at the hands of their Zionist/Israeli oppressors through his involvement with a group of Anglo-Jewish intellectuals known as "ASHamed Jews." Samuel Finkler, a man so thoroughly cloaked in a Jewish identity, has the capacity to assess critically his own cultural identity in the same way he assesses the identity of others. The "Finkler question" is as much about a Jewish assessment of Jewish identity as it is about a non-Jew's assessment of Jewish identity or a Jew's assessment of non-Jewish identity.

Turning to the question of cultural identity, what does "Jewish" mean? Is the term's meaning found in the story told by Thomas Cahill in The Gifts of the Jews: the first people to provide humanity with a narrative history of near-linear progress and hope that tomorrow will be better than today, which is better than the day before, in contrast to the "circle of life and death" narrative common to other cultures, such as the Mesopotamians or Hindus before the common era? Is the term's meaning found in the first people to embrace monotheism? After all, the first four of the Ten Commandments, purportedly given by God to Moses and the Jews, are about the unity and singularity of one God who is the source of all life. Is the term's meaning found in the covenant story of chapter 17 of Genesis, a story of a real estate bargain for a narrow swath of land east of the Mediterranean, south of Assyria, and north of Egypt, cleaved and sealed with a promise that all of Abraham's male descendants would thereafter be circumcised? A significant part of The Finkler Question's conversation, as well as Treslove's ruminations, is about circumcision and land. The ASHamed Jews' diatribe about Zionists and Israel is as much a diatribe about the illegal occupation of land as it is about the contradictions of Jews who "throw their weight around and then tell you they believe in a compassionate God." Is it found in the larger story read and re-read every year, along with the laws, rules, and customs of the Torah, beginning with Abraham and continuing through the story of Moses and those who wandered out of Egypt with him, found in Genesis, Exodus, Leviticus, Numbers, and Deuteronomy, and perhaps the oral tradition as well? Or is the meaning of Jewish found in the diaspora story, a wandering tribe in exile, yet seemingly able to assimilate in the communities of others until someone comes along and says we need to purify our community?

I did not ask whether the meaning of Jewish is found in connections among people to some mitochondrial DNA that links a genealogical family across generations. The phrase l'dor v'dor suggests that what Jewish means is carried in the traditions passed along between generations. The Nazis certainly thought in terms of a blood connection. But the Jewish treatment of the Marranos, who converted to Christianity in order to avoid persecution in the time of the Inquisition, indicates that those who failed to continue to practice Judaism, even privately, were no longer full Jews. Likewise, conversion to Judaism by a non-Jew brings about full membership in the Jewish community. Blood lineage is not an essential element of what Jewish means. Jacobson does not seem to think so either. Finkler is married to a woman, Tyler Gallagher, who converts to Judaism after they marry, and the story contains enough information to lead one to conclude that Tyler is "more Jewish" than Finkler.

The novel's main character, Julian Treslove, unsure of his own identity, explores creating a Jewish one. Treslove cannot maintain a relationship with any woman for more than an evening, a day, or a week. As the novel opens he seems to adore every woman who walks by him. He is the father of two children conceived during what appear to be little more than one-night stands. In the novel's central act, Treslove is walking home after a dinner with Finkler and their mutual friend, an older Jewish man, Libor. He is assaulted by a woman, injured, and robbed, and she curses at him. At first he thinks he hears his assailant say, "Your jewels." He thinks again: maybe she said, "You're Jules." And the more he thinks about it, the more he thinks she said, "You Jules, you." No, it was more succinct: "You Ju." Treslove concludes that his assailant has mistaken him for Finkler. She thinks Treslove is a Jew. He rejects the idea that the mistaken identity is one of mistaken appearance: he is the wrong size, the wrong temperature, the wrong speed to be perceived as a Jew. It must be something else, he tells Finkler: it is a matter of spirit and essence. Spiritually, I am like the Jews. He is reminded that Finkler once told him, "Ours is not a club you can join." And then he meets Hephzibah --- at a seder-in-November meal hosted by Libor. He begins a year-long relationship based on mutual love and appears to be on the cusp of trying to establish a Jewish identity. Once again, however, he cannot consummate a relationship after a year of trying. Treslove cannot establish his personal identity.

As far as individual identity is concerned, there is no monolithic Jewish personal identity in Jacobson's novel. There are Jews who regularly go to synagogue and read the Torah every year; there are Jews who do not go to synagogue or rarely go to synagogue and do not read the Torah every year. There are Jews who are ardent Zionists and defenders of the modern Jewish state of Israel, and there are Jews who believe that the Zionists and the State of Israel have lost their way. There are Jews who will only marry another Jew. There are Jews who marry non-Jews, and sometimes they raise their children as Jews and sometimes they do not. There are Jews who are converted Jews. There are Jews who will only have marital affairs with non-Jews. There are Jews who consider themselves Orthodox, there are Jews who consider themselves "conservative," and there are Jews who consider themselves "reform." There are European Jews and there are Sephardic Jews.

Personal identity is a matter of autobiographical memory. This is our autobiographical self (see April 8, 2011 post). But our autobiographical memories are shared, and this facilitates social bonding and the building of relationships. It also influences our storytelling and the stories we tell each other, whether represented as fact or fiction. Cultures are built on the sharing of autobiographical memory, yet at the same time personal identity is strongly influenced by the culture that one personally experiences. While at the outset I said that personal identity owes its existence to cultural or group identity, the reverse is true as well: cultural identity ultimately owes its existence to the sharing of many personal identities. Autobiographical memories are merged and revised into a collective memory. But as we have seen in prior posts, memory is fluid, constantly changing and redeveloping in marginal ways. (See November 6, 2011 post.) What "Jewish" means in one era is likely to have a slightly different meaning in another era, and what it means to "be Jewish" in one era is likely to shift in the same way.

Sunday, November 20, 2011

Leonard Mlodinow, Euclid's Window (2001)

This blog is, in part, an effort to connect dots --- particularly among the books that have come off my Bookshelf. A number of dots connected in my mind as I read Leonard Mlodinow's Euclid's Window.

Euclid's Window opens in Greek antiquity in the port of Miletus on the west coast of what is now Turkey. Mlodinow asserts that a "revolution in human thought, a mutiny against superstition and sloppy thinking," occurred here in the 7th century BCE. Around 620 BCE, Thales of Miletus, whom Mlodinow describes as humanity's "first scientist or mathematician," lived here and is purportedly responsible for the systematization of geometry, a methodology that would later be incorporated in Euclid's Elements. This blog first mentioned the scientific contributions of the Milesians in a prior post (see March 28, 2010 post), noting historian David Lindberg's comment, "[I]n the answers offered by these Milesians we find no personification or deification of nature; a conceptual chasm [that] separates their worldview from the mythological world of Homer and Hesiod. The Milesians left the gods out of the story. What they may have thought about the Olympian gods we do not (in most cases) know; but they did not invoke the gods to explain the origin, nature or cause of things." (See March 24, 2010 post.) The inference from Lindberg's observation is that when the human mind frees itself from myth, religion, and superstition --- the types of "beliefs" that Michael Shermer wrote about in The Believing Brain (see June 12, 2011 post) --- scientific progress is unshackled.

While Mlodinow does not make the same observation about the Milesians, he does seem to fall into a trap that Lindberg encourages historians to avoid: blaming Christianity entirely for Europe's failure to maintain the scientific progress that the Greeks initiated before the first millennium A.D. (see March 24, 2010 post). Apparently relying on Edward Gibbon, Mlodinow blames the Christians for burning down the greatest library of its era at Alexandria, Egypt, and all the scientific and philosophical works that were part of it. This claim may not be true, or not entirely true, and the fact that Mlodinow seems to harbor this belief, as revealed in this reference and other statements he makes, strains his credibility as a writer of science history (at least about science and mathematics in the era of the Dark and Middle Ages). It is true that once the institutions of the Dark and Middle Ages lost touch with Greek scientific inquiry and knowledge, institutional biases developed that made it very difficult for that knowledge to be rediscovered, and among those institutions was the Catholic Church. Yet, as Lindberg documents, those institutions also had a small role in the rediscovery of Greek science and thought.

As Mlodinow moves from his portraits of the geometers and Euclid to Descartes to Carl Gauss (and Riemann), the reader senses the impending merger of geometry and physics (or perhaps the impending takeover of geometry by physics) with the development of non-Euclidean geometry. Since the times following Euclid, Euclid's Fifth Postulate (the parallel postulate) had proven troublesome. Euclid stated a proposition that would determine whether two co-planar lines were parallel, converging, or diverging: take two lines and cross them with a third line; if the sum of the two inner angles on the same side of the crossing line is less than two right angles (180 degrees), then the two lines converge (on that side of the crossing line). The postulate seems intuitively correct. The problem is that the fifth postulate could not be proven the way a theorem would be proven. It was assumed as a fact, until non-Euclidean geometry began to address surfaces that are curved and the parallel postulate failed. Geometry began not only to take a hard look at spherical surfaces and the topography of the earth, but to turn its attention to space.
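
For readers who want the postulate in symbols, here is one standard modern rendering (the notation is mine, not Mlodinow's), together with Playfair's equivalent form, in which the failure on curved surfaces is easiest to see:

```latex
% Euclid's fifth postulate: let a transversal t cross lines l and m,
% making interior angles \alpha and \beta on the same side of t.
\[
  \alpha + \beta < 180^{\circ}
  \quad\Longrightarrow\quad
  l \text{ and } m \text{ meet on that side of } t .
\]
% Playfair's equivalent axiom: through a point P not on line l there is
% exactly one line parallel to l. On a sphere there is no such line (all
% great circles meet); on a hyperbolic surface there are infinitely many.
% That is where the postulate fails and non-Euclidean geometry begins.
```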

Enter Albert Einstein, relativity, and the influence of gravity on the shape of space. Even Mlodinow's brief discussions of Euclid, Descartes, and Gauss made me recall Rita Carter's discussion of the posthumous study of Einstein's brain in Mapping the Mind (see November 6, 2011 post). Carter reports that researchers at McMaster University in Canada found that Einstein's brain "was different from most in several ways, the most notable being that two sulci (infolds) in the parietal cortex had merged during development, creating a single enlarged patch of tissue where usually there would be a division. In normal people, one of these areas is primarily involved in spatial awareness, while the other does (among other things) mathematical calculation. The merging of these two areas in Einstein's brain," Carter speculates, "may well account for his unique ability to translate his 'vision' of space-time into the most famous mathematical equation of all time, e=mc2." Here Carter has been discussing synesthesia, the phenomenon where, because of the close proximity of two parts of the brain, there is a merging of sensory phenomena: e.g., hearing a certain word or number is associated with seeing a certain color. Have the brains of certain mathematicians, who can develop mathematical theories or even practical algorithms that describe physical phenomena or physical space, developed in a way that facilitates their mathematical skill and insights, in contrast to the brains of most humans? Mlodinow's survey of the history of geometry certainly makes one wonder.

Reading Euclid's Window also reminded me of a quote from Michael Shermer that I mentioned in the June 12, 2011 post: "We are not equipped to perceive atoms and germs, on the one end of the scale, or galaxies and expanding universes, on the other end." Yet as Mlodinow's portrait of Einstein and later Edward Witten moves from relativity and quantum mechanics to the "standard model" and ultimately to string theory, it is clear that some minds are capable of envisioning not only atoms but even smaller particles, and some minds (sometimes the same minds) are capable of envisioning galaxies and expanding universes. Without this capability, Mlodinow would never have had a story to tell. Mlodinow sums this up as follows: "Through Euclid's window we have discovered many gifts, but he could not have imagined where they would take us. To know the stars, to imagine the atom, and to begin to understand how these pieces of the puzzle fit into the cosmic plan is for our species a special pleasure, perhaps the highest. Today, our knowledge of the universe embraces distances so vast we will never travel them and distances so tiny we will never see them. We contemplate times no clock can measure, dimensions no instrument can detect, and forces no person can feel. We have found that in variety and even in apparent chaos, there is simplicity and order."

This is not a deep book. It is written for a general audience with an interest in mathematics and the history of science. I began by criticizing Mlodinow for his knowledge of the history of the Dark and Middle Ages, but by the end of the book and its discussion of string theory, I found myself wishing I had read it before embarking on other, deeper books about string theory.

Wednesday, November 16, 2011

David Liss, A Conspiracy of Paper (2000)

The year is 1719. The scene is London, England. King George I, recently arrived from Germany, sits on the throne. Unlike the French, the English have not yet created a local or national police force to protect their citizens. The entrepreneurial class fills the official void, establishing themselves as "thief-takers," bounty hunters hired to capture criminals. The most notorious of the thief-takers, Jonathan Wild, exploits his status to run an organized crime gang of thieves who steal property only so that Wild can be hired by the victim, who pays to have the same stolen property "found."

England is suffering financially at this time under the weight of a national debt grown large because of the War of the Spanish Succession. The South Sea Company, a business organized in the early 18th century as a stock company, buys half of the national debt in exchange for its stock, pursuant to a plan to convert that debt to lower-interest debt that would ease the government's financial burden while providing the South Sea Company with steady revenue. The South Sea Company then pursues a program to drive up the price of its stock, and a speculative frenzy ensues. By 1720, the infamous South Sea Bubble --- the first stock market crash --- has burst, leading to bankruptcies and other financial problems across Europe.
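
The mechanics of the conversion are easy to lose in the narrative. Here is a minimal sketch with invented figures --- hypothetical numbers, not Liss' and not the historical record --- of why both the government and the Company liked the deal:

```python
# Invented figures, purely to illustrate the debt-for-stock conversion
# described above -- not Liss' numbers and not the historical record.
national_debt = 1_000_000          # pounds of government annuities absorbed
old_rate, new_rate = 0.06, 0.05    # hypothetical interest rates

# Annuity holders swap government debt for South Sea stock; the Company
# now holds the debt and collects interest from the state each year.
government_saving = national_debt * (old_rate - new_rate)  # lower debt service
company_revenue = national_debt * new_rate                 # steady income stream

# The Company gains further if it can issue stock at inflated prices
# against the debt it absorbs -- the incentive to talk up its shares.
print(f"government saves {government_saving:,.0f} a year; "
      f"Company books {company_revenue:,.0f} in steady revenue")
```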

There is a nascent, unregulated stock market operating out of coffeehouses on and around Exchange Alley, where "stock jobbers" trade in company stocks. Stock jobbers are not held in high repute, apparently for all the reasons that, over 200 years later, the United States of America established a Securities and Exchange Commission to regulate this trade.

All of this is true, and against this background David Liss' fictional story of a competing thief-taker, Benjamin Weaver, begins in earnest. Weaver, a Jew in the predominantly Protestant community of England, has assimilated reasonably well. He has recently become a "thief-taker," retired from his earlier professions of highwayman and competitive boxer. He now competes with Jonathan Wild for clients, but unlike Wild he forswears the unethical practice of stealing only to later "find" the stolen booty for a fee. Weaver is the grandson of Miguel Lienzo, the protagonist of Liss' third novel, The Coffee Trader, a prequel of sorts to A Conspiracy of Paper. Liss' oeuvre, if we want to call it that, is not generic historical fiction, but economic historical fiction. The marketplace is as much a part of his work as the cast of characters and the plot.

A murder has allegedly occurred, and A Conspiracy of Paper is essentially a whodunit. The precise "who" is known early on, but the "official" conclusion is that the death was merely an accident. Others, however, suspect foul play --- a "conspiracy." Suspects abound as to who is really behind it.

This is the period of the English Enlightenment. John Locke is fifteen years in the grave. David Hume (see February 27, 2011 post) is only eight years old. But Isaac Newton is in the golden era of his illustrious life, Bernard Mandeville is editing his Fable of the Bees (see January 30, 2010 post), and George Berkeley is busy trying to undo Locke's view of a materialist world.

None of these Enlightenment philosophers, however, contributes a method for solving the murder mystery. A French mathematician and Catholic philosopher, Blaise Pascal, provides the inspiration. Probability theory is invoked, and so is right-brain/left-brain wisdom: the mind's right-brain capacity to make intuitive hunches, and the left-brain capacity to meditate, analyze, and sort through information. Weaver's friend Elias advises him, "[Pascal's] thinking is precisely what will allow you to resolve this matter, for you must work with probability rather than facts. If you can only go by what is probable, you will sooner or later learn the truth." Weaver responds, "Are you suggesting I conduct this matter by randomly choosing paths of inquiry?" Elias replies, "Not randomly. If you know nothing with certainty, but you guess reasonably, acting upon those guesses offers the maximum chance of learning who did this with the minimal amount of failure. Not acting offers no chance of discovery. The great mathematical minds of the last century --- Boyle, Wilkins, Glanvil, Gassendi --- have set forth the rules by which you are to think if you are to find your murderer. You will not act on what your eyes and ears show you, but on what your mind thinks probable."
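
Elias's counsel amounts, in modern terms, to something like Bayesian updating: hold beliefs proportioned to the evidence, pursue the most probable line of inquiry, and revise as new facts arrive. A minimal sketch, with suspects and numbers invented for illustration (nothing here comes from the novel):

```python
# A toy sketch of Elias's advice: when nothing is certain, act on
# reasonable guesses and update them as evidence arrives. The suspect
# labels and all probabilities below are invented for illustration.

def update_beliefs(priors, likelihoods):
    """One round of Bayes' rule: P(suspect|clue) proportional to P(clue|suspect)*P(suspect)."""
    posteriors = {s: priors[s] * likelihoods.get(s, 1.0) for s in priors}
    total = sum(posteriors.values())
    return {s: p / total for s, p in posteriors.items()}

# Weaver's initial guesses, weighted by plausibility rather than certainty.
beliefs = {"South Sea agent": 0.4, "Bank of England man": 0.3,
           "Jonathan Wild": 0.2, "accident": 0.1}

# A clue arrives: how probable is this clue under each hypothesis?
clue_likelihood = {"South Sea agent": 0.9, "Bank of England man": 0.5,
                   "Jonathan Wild": 0.3, "accident": 0.05}

beliefs = update_beliefs(beliefs, clue_likelihood)

# Pursue the most probable line of inquiry next -- guessing reasonably,
# as Elias counsels, rather than randomly or not at all.
next_inquiry = max(beliefs, key=beliefs.get)
print(next_inquiry, beliefs)
```

The point of the exercise is simply that the best guess changes as evidence accumulates; not acting --- never updating --- leaves the beliefs, and the investigation, where they started.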

The murder victim is Weaver's father --- the son of Miguel Lienzo, who migrated from Amsterdam to London with his brother. Weaver's father is a stock-jobber, and Weaver suspects, as information starts to become available, that his father uncovered a conspiracy to manipulate the value of South Sea Company stock.

Only later in the novel, as Weaver laments how difficult it has become to bring the investigation of his father's murder to a conclusion --- "Your philosophy [referring to Elias] has brought me this far, but I cannot see how it takes me much farther" --- does Elias respond, "If philosophy no longer yields results, perhaps it is not because you have reached your limit to understanding philosophy. I think it is far more probable that philosophy had done what philosophy can do, and you would be wise to trust your instincts as fighter and a thief-taker. . . Trust your instincts." What would Sherlock Holmes say?

In the end the crime is solved, not because of instinct, but because a crucial piece of information suddenly falls into Weaver's lap --- the exposure of a lie that reveals the identity of the murderer. The revelation is not accidental: Jonathan Wild pushed the information in front of Weaver to help him out. The conspiracy behind the murder of his father turns out to be a vastly different conspiracy than the one Weaver initially conceived. Probabilities and beliefs did not solve this murder. Factual information did, much in the way that scientific experimentation during the Enlightenment era was undermining long-held beliefs that were the products of mental reasoning, faith, and bias.

Decisions are made based on probabilities because we have incomplete information --- uncertainty. Hume essentially made the point (see February 27, 2011 post), and he could not have been the first. Some people have access to more information than others and can act on that superior information to their advantage. Wild is such a person, and Wild's version of thief-taking, in which the taker is also the thief, is just an early version of what we now call insider trading. A Conspiracy of Paper was written and published as the 20th century came to a close and the technology stock bubble burst. Stock market manipulation and insider trading have not disappeared either. Liss is an excellent storyteller, and he is very clever at detecting in the annals of economic history, as he did in The Coffee Trader as well, the moments of the human past that reverberate in the modern mind.

Sunday, November 6, 2011

Rita Carter, Mapping the Mind (Rev. 2010)

Five postings in 2011 on books and subjects related to the mind and brain, and I was consciously aware of that fact and of a desire to move on to something new. But as I wrapped up V.S. Ramachandran's The Tell-Tale Brain (see previous post), his remark that as neuroscientists map the brain they are "groping their way toward the periodic table of elements" reminded me that a book on The Bookshelf that I had purchased last year, Rita Carter's Mapping the Mind, was waiting to be read. I found this book at the bookshop at the conclusion of the American Museum of Natural History's exhibit on The Brain. What moved me toward a purchase was the exquisite drawings of the brain, many of which include arrows illustrating the interconnectedness of specific brain regions to explain a specific neuronal process. For those who are not practicing neurologists, a picture can nicely supplement a thousand words.

By the end of Mapping the Mind, I wondered whether this should have been the first book I ever read on the brain. Would I have better appreciated all the other material I have read on this subject if I had already read this book? I can't answer that, but studying Rita Carter's text after I had read these other books, many of which are discussed or mentioned in prior posts, was facilitated by my prior exposure to the subject. Either way, Mapping the Mind is a good overview of and introduction to the brain, and a good review as well.

Ramachandran's analogy that neuroscience's understanding of the brain is moving in the direction of establishing something akin to chemistry's periodic table of elements is best left as a metaphor rather than a suggestion of equivalence (as I think the statement was intended). It is fair to say, as with the periodic table of elements, that the brain is organized, but it is not sequential in the sense that the elements can be organized sequentially according to their atomic weight or related in their properties as part of one of 18 different groups. While thinking about this, it crossed my mind that evolutionary age would be one way of sequentially organizing the parts of the brain. Antonio Damasio did something like this in Self Comes to Mind (see April 8, 2011 post), describing the sequential evolution of the brain stem, the limbic system, and ultimately the cerebral cortex. But the brain is indeed very complex, as Carter notes in her closing paragraph, when she says that "today's mind voyagers are discovering a biological system of awe-inspiring complexity." One could also try to organize the parts of the brain sequentially starting with a particular sensory input, following the connections to other parts of the brain through to the conscious awareness and action (or unconscious action, as the case may be) that ensues. In the end, however, that effort would not be particularly useful given the multi-layer network of sensory inputs that are processed simultaneously, including the variable emotional reactions to a particular sensory input that can stimulate different behavioral outcomes.

Both Carter and Ramachandran caused me to question a statement I made in a prior post (June 12, 2011 post) while discussing Michael Shermer's The Believing Brain. I wrote:

"I strongly suspect that if we dissected human brains and a network of connected neurons from a representative sample of humans, we would find a very high level of near identity among brains. There will be some differences due to DNA, and there will be some pathological differences as well, perhaps caused during embryonic development. But I believe that by and large we will find that human brains, neuron by neuron, are organized and folded and layered in substantially identical ways." There is substantial truth in my remark --- I did not say identical, but I did say "near identity" and "substantially identical." These words came at the risk of maybe overstating the case. Carter writes, "Human brains are constructed along fairly standard lines and so we all tend to see the world in a fairly standard way." And as I noted, there are differences due to DNA, pathological injury, and embryonic development. But I overlooked perhaps the largest exception --- experience (nurture) and its impact on memory -- and what is referred to as synaptic plasticity. And it is long-term memory -- enhanced by repeated experiences in some that others do not share --- that gives rise to our autobiographical self and what makes each one of us unique. So while we may "tend to see the world in a fairly standard way," Carter notes:

"Every brain constructs the world in a slightly different way from any other because every brain is different. The sight of an external object will vary from person to person because no two people have precisely the same number of motion cells, magenta-sensitive cells, or straight line cells. . . . An individual's view is formed both by their genes and by how their brain has been moulded by experience. Musicians, for instance, have brains which are physically different from others and which work differently when they play or hear music. . . . Extraordinary individual ways of seeing things may also arise from strange 'quirks' of brain development. Albert Einstein, for instance, had a very oddly constructed brain which might account for his astonishing insights into the nature of space and time."

Carter's treatment of language pretty much follows that of Ramachandran. And her treatment of memory restates much of what Antonio Damasio (see April 8, 2011 post) and Daniel Schacter (see September 20, 2011 post) discuss, but I still learned something new. For example: "Episodes that are destined for long-term memory are not lodged there right away. The process of laying them down permanently takes up to two years. Until then they are still fragile and may quite easily be wiped out. It is this replay from hippocampus to cortex and back again --- a process known as consolidation --- that slowly turns fleeting impressions into long-term memories. . . Much of the hippocampal replay is thought to happen during sleep. Recordings from hippocampal cells show them engaging in a 'dialogue' with cortical cells, during which they signal one another, back and forth, in a call-and-reply formation. Some of this is known to take place during the 'quiet' phase of sleep, when dreaming, if it occurs at all, is vague and instantly forgotten. Until memories are fully encoded in the cortex they are still fragile and may quite easily be wiped out. And even when they are established, they are not fixed. A memory is not, in fact, a recollection of an experience but the recollection of the last time you recalled the experience. Hence our memories are constantly changing and redeveloping. The process by which a memory changes is more or less the same as the consolidation process that lays it down for the first time. As we will see, each time we recall something, it is changed a little because it becomes mixed up with things that are happening in the present. Reconsolidation is a process by which this slightly altered memory effectively replaces the previous one, writing over it, so to speak, rather like re-recording over a rewritable DVD." I mentioned this phenomenon in the September 20, 2011 post, in Daniel Schacter's discussion of the consistency bias, whereby the brain infers past beliefs from our current state, and in the reference to Joseph LeDoux's discussion of reconsolidation. Carter explains it better.
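
Carter's rewritable-DVD analogy can be given a toy numerical form. The sketch below is my own illustration, not Carter's model: it treats a memory trace as a single number that, on each recall, is blended with the present context and written back, so the trace drifts a little with every act of remembering:

```python
# A toy numerical sketch (mine, not Carter's) of reconsolidation:
# each recall blends the stored trace with the present context and
# writes the blend back, so the memory drifts over repeated recalls.
original_event = 10.0   # arbitrary scalar stand-in for an experience
memory = original_event

def recall(memory, present_context, mixing=0.15):
    """Recalling re-encodes the memory, mixed with what is happening now."""
    return (1 - mixing) * memory + mixing * present_context  # overwrites the old trace

for present in [12.0, 7.0, 14.0, 9.0]:  # contexts at four later recalls
    memory = recall(memory, present)
    print(round(memory, 2))  # the trace drifts away from 10.0
```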

I also learned that not all memories are stored in cortical areas. While long-term memories are initially stored in the hippocampus, as described above, over the course of roughly two years they are transferred to the cortex and the hippocampus is no longer required for their retrieval. These memories are distributed in the same parts of the brain that encoded the original experience. So sounds are found in the auditory cortex, taste and skin sensory memories are found in the somatosensory cortex, and sight in the visual cortex. But procedural --- "how-to" --- memories are stored outside of the cortex, in the cerebellum and putamen, and fear memories are stored in the amygdala.
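
Reduced to a lookup, Carter's map of where consolidated memories end up looks like this --- a summary I compiled from her account above, not a table she provides:

```python
# Where consolidated memories reside, per Carter's account above
# (my own compilation, for quick reference).
memory_stores = {
    "sounds": "auditory cortex",
    "taste and skin sensations": "somatosensory cortex",
    "sight": "visual cortex",
    "procedural (how-to)": "cerebellum and putamen",
    "fear": "amygdala",
}
for kind, region in memory_stores.items():
    print(f"{kind:26s} -> {region}")
```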

Carter also addresses, albeit briefly, the subject of imagination, which I have touched on in several previous postings (see, for instance, July 30, 2011 post and May 22, 2011 post), and describes its connection to memory. "Our ability to conjure up scenarios which have not actually happened is prodigious. Imaginative capacity runs along a spectrum from the mundane skills required to envisage what your supper might taste like if you combined the onion, mushrooms, and left over chicken in the fridge with some curry sauce, through to the awe-inspiring visions of artists, writers, and excitable children. Even the humblest of these skills outranks the abilities --- as far as we can tell --- of every other species. . . . At first sight memory and imagination seem quite distinct: the first is concerned, after all, with what happened already whereas the second is all about what has not. But recent studies show that imagination is wholly dependent on memory, because memories are its building blocks. When we imagine something happening we root around in our memory and come up with experiences which seem likely to recur, then combine them, chop them, shake them and blend them until they come out as something entirely different." In my view, this cannot be unrelated to the process of reconsolidation that I mentioned above and to some of the biases that other writers have described (see September 20, 2011 post and June 12, 2011 post) whereby memories are altered.

Carter inserts a text box by Eleanor Maguire entitled "Remembering the Future," which perceptively quotes Lewis Carroll: "It's a poor memory that only works backwards." Maguire cites an MRI study that found that recalling past experiences and imagining possible future ones activated a network of brain regions in common, including the hippocampus. She notes a new theory of 'scene reconstruction,' an internal rehearsal of events or scenes that underpins the process of creating a simulated event. Maguire writes: "[I]n humans, the use of this scene reconstruction process goes far beyond simply predicting the future, to general evaluation of fitness for purpose and ultimately creativity. For example, a scriptwriter or novelist who is writing a passage in a film or book may play out the whole scene using their construction system, not with the idea of predicting the future, but instead for the purpose of evaluating its aesthetic suitability." This is the sort of discussion I was hoping to find in The Tell-Tale Brain (see previous post). What is missing from her discussion is the identification of the parts of the brain involved in imagination, although the text seems to identify the hippocampus as one candidate for assembling disparate memories. Also missing is the evolutionary basis for this ability. Obviously, the ability to plan for the future has survival value, and our planning capacity is lodged in the frontal cortex, where working memory occurs. The building blocks for understanding creative imagination are before us, and a collateral capacity --- the capacity for deception, including self-deception, by imaginatively rearranging memories and treating them as factual when they are not --- needs to be understood as well. Part of this is reflected in the previous discussions of mental biases. Carter notes that the subject of belief and non-belief is not unrelated to how the brain treats statements it believes to be true and those it believes to be false. Research, she says, shows that truth telling appears to be the default position for the human brain, and that telling lies involves extra cognitive effort. I am not sure this is entirely true, as bias mechanisms appear to be shortcuts for resolving conflicts in our memories.

Tuesday, October 25, 2011

V.S. Ramachandran, The Tell-Tale Brain (2011)

My original expectations for this book never materialized, but still there was much to be learned. From my cursory examination in the bookstore, I was expecting to learn something about the role of the brain in creating art, particularly storytelling, works of fiction and imagination. I noticed a couple of chapters on art and aesthetics, but these turned out to be ruminations and speculation on the role of the brain in appreciating the visual arts. That is not to say these ruminations were without value, only that I was expecting a different treatment of the subject.

Second, I knew that V.S. Ramachandran, one of the world's leading neuroscientists, promoted research findings about mirror neurons, a subject that was covered in the September 18, 2009 post discussing Marco Iacoboni's book, Mirroring People. Ramachandran, however, despite his disclaimer that mirror neurons do not explain everything about the brain, seems to suggest that they explain an awful lot: certainly a lot more than the subtitle of Iacoboni's book suggests --- empathy for others and how we are able to comprehend the actions, and perhaps the intentions, of others from their body movements. Ramachandran suggests that mirror neurons may also be key to understanding our own self-awareness: awareness of what our own mind is thinking. Regrettably, the proof is not yet there to support his surmises in this regard; too often Ramachandran says that this area of the brain is involved in such and such, and that area of the brain is known to be rich in mirror neurons, and with that association it might be the case that mirror neurons explain such and such. That is not to say he is wrong, but he may be more right than wrong when he says that mirror neurons do not explain everything.

Here is an example of what I just said. Ramachandran strongly believes that mirror neurons, while initially identified in monkeys, are an important piece in understanding human evolution. "[O]nly in humans do they seem to have developed to the point of being able to model aspects of others' minds rather than merely their actions. Inevitably this would have required the development of additional connections to allow a more sophisticated deployment of such circuits in complex social situations. . . It is difficult to overstate the importance of understanding mirror neurons and their function. They may well be central to social learning, imitation, and the cultural transmission of skills and attitudes --- perhaps even of the pressed-together sound clusters we call 'words.' By hyperdeveloping the mirror-neuron system, evolution in effect turned culture into the new genome. Armed with culture, humans could adapt to hostile new environments and figure out how to exploit formerly inaccessible or poisonous food sources in just one or two generations --- instead of the hundreds or thousands of generations such adaptations would have taken to accomplish through genetic evolution." This is an astonishing hypothesis, to borrow a phrase from Francis Crick. But Ramachandran strongly believes he is on to something because of the presence of mirror neurons in certain parts of the human brain that are distinctively developed in humans, in contrast to the apes, our nearest relatives. Ramachandran is referring to Wernicke's area in the left temporal lobe (the area of the brain that is responsible for our comprehension of language --- where speech acquires meaning), the frontal lobe (responsible for decision-making and cognition, which includes the motor cortex, where commands are sent to muscles for movement, and Broca's area, which is responsible for speech), and the inferior parietal lobe (IPL). In humans, Ramachandran notes, the IPL evolved into two parts not found in apes --- the supramarginal gyrus (an area responsible for our ability to "visualize" words and action) and the angular gyrus (an area that Ramachandran says is connected to metaphor comprehension and to finding a common denominator between two superficially dissimilar things). The genetic change that created these areas of the brain "rich in mirror neurons," surmises Ramachandran, "freed us from genetics by enhancing our ability to learn from one another" --- "liberat[ing] our brain from its Darwinian shackles, allowing the rapid spread of unique inventions" (tools, new words, constructing shelter, creating communities) that are at the foundation of culture. "Instead, increased sophistication of a single mechanism --- such as imitation and intention reading --- could explain the huge behavior gap between us and apes."

Ramachandran is a neuroscientist with a wealth of clinical research observations about damage to specific areas of the brain and the consequences of that damage. And with that wealth, my appreciation for the role of specific areas of the brain, and their connections to other areas during normal operation, continued to grow. Other books, some of which are discussed in previous posts, have contributed to my appreciation, but Ramachandran's presentation in this book --- including his drawings of the regions of the brain, his glossary, and his specific case studies --- begins to put it all together. From a big-picture point of view, with this book Ramachandran is pursuing the same subjects that Michael Gazzaniga pursued in examining what makes us "human" (see September 27, 2009 post) and that Antonio Damasio pursued in examining what creates our sense of self (see April 8, 2011 post). And Ramachandran contributes his own views on the origin of human language, a subject that was extensively reviewed in Christine Kenneally's The First Word (see August 31, 2009 post).

Let's discuss Ramachandran on the evolution of language, because this is the part of The Tell-Tale Brain that best demonstrates what I have just said about this book. In discussing the evolution of language, Christine Kenneally reviews the views of Noam Chomsky (language is not the outcome of evolution, but is simply a built-in property of the brain), Stephen Jay Gould (language initially evolved as a way to represent the world --- namely, thinking about the world --- and only later became a means of communication), and Steven Pinker (language is an instinct, an adaptation unique to humans that evolved specifically for communication). Ramachandran does likewise: he dismisses Chomsky quickly (his thesis cannot be tested), and then says that there is a grain of truth to both Gould and Pinker, but that they just did not go far enough. For Ramachandran, language did not evolve from some general mechanism for thinking, but neither did it evolve specifically for purposes of communication. What is innate and what evolved, says Ramachandran, is the competence to acquire the rules of language. The actual acquisition of language occurs as a result of social interaction. Ramachandran believes that language was enabled by cross-linkages in the brain between different motor maps (e.g., the area responsible for manual gestures and the area responsible for orofacial movements). Ramachandran calls this synkinesia ("together" "movement"), borrowing from research on synesthesia, a brain phenomenon in which senses are joined by cross-activation of two sensory maps. Thus what humans have is a built-in capacity for translating gestures (movement) into words. The original "words," if you will, may have been grunts or other noises that accompanied a gesture (a proto-language). Synkinesia alone probably does not explain speech and language; the human capacity for mimicry is critical as well, and hence the subject of mirror neurons enters the discussion of speech development, and again we return to the linkages between Broca's area, Wernicke's area, and the supramarginal and angular gyri in the IPL that enable what Ramachandran labels "cross-modal abstraction."

Ramachandran borrows from Gould the latter's concept of exaptation. An adaptation is an evolutionary response to natural selection; an exaptation is a refinement of an adaptation whereby an existing structure is borrowed and put to a different function. Ramachandran believes that the IPL did not evolve for higher forms of abstraction, such as giving names (words) to things, but evolved to provide hominids with refined interaction between vision and muscle-and-joint position while negotiating branches in the treetops --- itself a type of cross-modal abstraction. A subsequent exaptation, says Ramachandran, was the development of these areas of the brain --- and their capacity for abstraction --- to develop tools and subassemblies of tools (e.g., an axehead and a suitably designed handle). He sees "a tantalizing resemblance" between the wielding of a tool made from composite parts and a full-blown sentence including noun phrases and verbs. Speech, including syntax, emerged from the area of the brain that was key to tool manufacture, he speculates, and this became Broca's area. Add to that Wernicke's area and the IPL, and the human brain now has a language acquisition device. By the end of his discourse, Ramachandran concedes that he has been speculating on the evolution of language and thought, and that he has not resolved it, but he believes his account is not inconsistent with what neurologists know about patients with damage to these areas of the brain, or with what we know about the evolution of other body parts. The problem is that language and abstraction are "software," and software cannot be found in the fossil record, which is all we have to understand what happened tens or hundreds of thousands of years ago. Unlike Christine Kenneally, Ramachandran does not discuss the debate over the FOXP2 gene and its possible connection to human language. Ramachandran asks why genes for language competence were selected, but he does not illuminate what those genes might be --- perhaps because he recognizes that language is the outcome of several different modules in the brain, and there can be no single gene that would explain that fact.

There is much more to this book than mirror neurons and the evolution of language: there are discussions of memory, brain plasticity, the self, and free will. Marco Iacoboni believes that, in view of research on mirror neurons and social behavior, the concepts of the self and free will are blurred --- at least our Western sense of the individual is not as real as we sometimes think. (See September 18, 2009 post). Just how "free" and unique are we? he asks. Self and other are inextricably blended, says Iacoboni, and our behavior is subtly influenced by mirror neurons, which produce automatic imitative influences based on what others are doing or saying. Iacoboni believes we have only "limited autonomy." Ramachandran does not touch this subject, but he does profess that humans have a sense of agency --- a desire to act and a belief in our ability to perform that act. Ramachandran says there is evidence that the anterior cingulate in the frontal lobe is involved with wanting and intention; damage to this area leads to apathy. The anterior cingulate, says Ramachandran, receives inputs from the parietal lobe, including the supramarginal gyrus, which, as noted above, is involved with our ability to conjure up images of action (movement). These connections lead Ramachandran to believe there may be a neurological basis for free will, signifying that it is not just a philosophical problem.

Ramachandran summarizes the current state of neuroscience with a comparison to chemistry: neuroscience is now at about the same stage chemistry was in the 19th century, discovering the basic elements, grouping them into categories, and studying their interactions. As neuroscientists "map" the brain, they are "groping their way toward the periodic table" of elements, but are nowhere near atomic theory. As I noted in previous posts (see January 21, 2011 post and April 8, 2011 post), this is not fatal to the proposition that the era of skepticism is over. Ramachandran notes numerous experiments, yet to be undertaken, that would confirm one hypothesis or another.

And a final note. While Ramachandran fails to provide specific insight into how the brain creates art or fiction, on reflection, the elements of that mechanism may very well have been covered. We are, after all, talking about imagination, and the parts of the brain that enable abstraction, combined with the parts that enable language (not just speech, but semantic content as well) and the parts that enable us to assemble things and to mimic what others do, must be involved in the creation of fiction and other forms of art that are the products of imagination.

Tuesday, September 20, 2011

Daniel Schacter, The Seven Sins of Memory (2001)

I started this blog, in small part, because I could not always recall when I read a book and what I got out of it. I thought about making notes in a notebook, as I did in college, and then thought: why not a digital notebook? The subject of memory has continually popped up in my postings, as the previous post recalls. In The Seven Sins of Memory, Harvard psychologist Daniel Schacter explains why I can't recall when I read a particular book, along with other failings of human memory. And it's not all bad. In fact, Schacter says it's OK to forget. It's common. It's natural.

A clever title. Despite the echo of the "seven deadly sins," the title is not an allusion to sinful behavior; it is a reference to our vulnerabilities and imperfections. Seven imperfections of memory: transience (memory that fades with the passage of time); absent-mindedness (rapid forgetting due to attentional lapses); blocking (hopefully a merely temporary phenomenon, like the name on the tip of your tongue that you can't retrieve); misattribution (mistaken identification --- attributing a memory to the wrong source); suggestibility (the tendency to incorporate misleading information from external sources into personal recollections by suggestion); bias (five tendencies by which we generalize to reduce dissonance, reconstruct the past to fit the present, organize and regulate our mental life, and categorize); and persistence (remembering things we really want to forget).

All of us are familiar with, and have experienced, most if not all of these imperfections of memory. Some memories are encoded and "stick." Eric Kandel and others have described much of the biological basis for why certain memories stay around longer than others --- long-term potentiation. But not all memories stick. Some memories disappear immediately (absent-mindedness, because encoding fails due to divided attention); other memories stick but are not retrievable --- sometimes for reasons we still do not fully understand, but sometimes because we have little opportunity ever to recall a personal experience or piece of knowledge again, and we forget (transience). And some memories are simply not accurate --- misattributed, because we cannot specifically recall people or events with which we are only generally familiar; revised, because external sources of information suggest to us that something different occurred; or distorted by biasing influences.

The bias influence is something we are already familiar with from Michael Shermer's book, The Believing Brain. (See June 12, 2011 post). What Shermer called the "confirmatory bias," Schacter refers to as the "consistency bias," whereby we infer past beliefs from our current state. This bias promotes psychological stability, perhaps avoiding cognitive dissonance. Similarly, hindsight bias, Schacter says, is ubiquitous, reconstructing the past to fit the present. What Shermer called the "self-justification" bias, Schacter labels the "egocentric bias," which gives more credence to our own recollection of events --- reflecting the role of the "self" in organizing and regulating our mental life. Studies show that when the mind encodes new information by relating it to the self, subsequent memory improves. Finally, the "stereotypical bias" reflects the mind's tendency to categorize. What Shermer failed to address is the neurobiological basis for these biases. Schacter tries to address this and, not surprisingly, calls on research by Michael Gazzaniga (see September 27, 2009 post) to explain it. The source of bias probably begins in the left hemisphere of the brain, which is adept at coming up with explanations and rationalizations for situations. This is the region where Gazzaniga's "interpreter" resides, the area responsible for language and symbols. The left hemisphere is responsible for inferences and generalizations that relate past and present; it draws on general knowledge and past experience to bring order to our psychological world. In contrast, the right hemisphere is where images and spatial location are prominent. The right hemisphere has a proclivity to respond on a literal basis to our environment, and acts as a check on the generalizations of the left hemisphere. The left hemisphere probably contributes to consistency bias, hindsight bias, and egocentric bias. The left hemisphere is where our storytelling capacity resides. It is also where memories may be strengthened: thinking and talking about our experiences changes the likelihood of subsequent remembering, reducing transience, because elaborate encoding occurs to create memory.

I am reminded of an article by Joseph LeDoux of New York University in The Scientist back in 2009, entitled "Manipulating Memory." LeDoux reports on work that supports a theory of "reconsolidation," whereby memories are updated and influenced by new information. In other words, once a memory is recalled it is in a fragile state, susceptible to disruption by the very act of remembering, so that what is "remembered" is not the same as what was experienced at the time of the event recalled. Several of Schacter's "sins" seem to support this view of human memory as neither fixed nor permanent.

These vulnerabilities of memory are not design flaws. They are likely by-products of adaptive features of memory that have, in the long run, served human memory well. Schacter reminds me of Antonio Damasio's reliance on the concept of homeostasis (see April 8, 2011 post) when he refers to the brain's "trade-off" between reducing the need to access information that is not needed, or has not been needed, and the cost of forgetting. The brain has adapted to an equilibrium condition. Transience occurs because there is little utility in storing unimportant, noncurrent information. Retrieval of too much information (data overload) has negative consequences and sacrifices abstract thinking. The brain does go on autopilot --- repetitive events are handled by automatic processes, freeing up the brain's ability to devote attention to things with consequences; as a result, occasional absent-minded errors occur when the mind is focused on something else, a small cost for the larger benefit of being able to pay attention. Memory sometimes operates because cues trigger recall. Contrary to some popular philosophers, David Hume's associationism is not dead. (See February 27, 2011 post).

The ability to record (encode) the gist of what happens, and not every detail, is a strength of memory, says Schacter. This capacity is fundamental to categorizing and comprehending, and it allows us to generalize across experiences without dwelling on details. Misattribution (false recognition) is probably a price we pay for the benefit of generalization. Generalizing, as discussed above, also leads to biases. Some features of memory are adaptations in evolutionary terms --- larger hippocampi that facilitate finding stored food, and an amygdala that plays a role in emotional conditioning (contributing to the sin of "persistence"). But several features of our memory are unintended by-products of an existing feature or functionality of the brain. Schacter postulates that aspects of memory are what Stephen Jay Gould referred to as "exaptations": features co-opted from an existing function that enhance fitness but were not built by natural selection for their current role. Biases, Schacter concludes, are an incidental by-product of general knowledge and beliefs; blocking, absent-mindedness, misattribution, and suggestibility are exaptations of a memory system that does not routinely preserve all the details required to specify the source of an experience.

There are practical concerns that arise from the lessons of this book. The "sins" of misattribution, suggestibility, and bias all have implications for our justice system --- particularly criminal justice. Eyewitness misidentification, false confessions, and witnesses influenced by suggestive questioning into retrieving a memory that does not match the actual experience are particularly worrisome. Law enforcement has taken notice and is learning from this research.

Recognizing that human memory is vulnerable, and that it is neither fixed nor permanent, informs our understanding of history as well. While there are histories that are literally revisionist --- Schacter cites George Orwell's 1984, in which a totalitarian government deliberately revises the past to suit the present --- our oral and written histories are also known to have been revised to suit a current agenda. Knowing that the gospels are the product of human writers, editors, and redactors, it is difficult to understand why some have difficulty acknowledging that the stories in those gospels suffer from the same vulnerabilities that human memory suffers. There are undoubtedly other psychological states that produce such difficulty.