Saturday, March 22, 2014

Moral Sense Colloquium II, Photos and Feedback

Dr. Diana Reiss
Prof. Julie Hecht

Luke Kluisza, Tyler Perkins, Dr. David Lahti, Dr. Gregory F. Tague

Dr. Kristy Biolsi, Jeannette Raymond, Lorianna Colon, Andrew Salzillo

Dr. Kevin Woo

Snacks and Relaxation at the End of the Day

Full Program and Abstracts HERE

Some Student Responses and Feedback HERE

Thursday, March 20, 2014

Peter Swirski on Biterary Studies (review)

Peter Swirski. From Literature to Biterature: Lem, Turing, Darwin, and Explorations in Computer Literature, Philosophy of Mind, and Cultural Evolution. Montreal: McGill-Queen’s UP, 2013. Hardcover, 252 pgs. 19.99. ISBN: 9780773542952.

In a nutshell, Swirski’s book is fascinating. Erudite and sophisticated, elegantly written and witty, the book offers insight into the history and future of artificial intelligence. The book’s packed subtitle does not promise more than Swirski can deliver, and so eventually the reader is treated to an array of compelling information covering all subjects. As for biterary studies, the book will elucidate for the uninformed, for those hard-core traditionalists, and for any remaining post-modernists that not only is human culture a product of evolution but that literary arts might soon flow not only from an author’s pen but from an adaptable computer chip. Essentially, Swirski’s book is about creativity and patterns in nature, in human nature, and in computing and artificial intelligence. Although Swirski pushes the envelope with a challenging discussion about the question of whether or not computers can think, such daring discourse activates a high level of neuroplasticity in his readers who heretofore might have been brain-dead to such a concept.

Peter Swirski is professor of American literature and culture at the University of Missouri-St. Louis and the author of twelve previous books (and, according to his UMSL page, two more are forthcoming). Literature to Biterature is a handsome, well-constructed volume with numerous black-and-white photographs, Notes, Bibliography, and Index. The book consists of three parts and eleven chapters divided evenly for ease of reading, all organized in a satisfyingly cumulative way.

The thrust of the book argues that computers will eventually be able to turn out high-quality stories. Biologists Marion Lamb and Eva Jablonka have written about evolution in different dimensions, Brian Boyd has written about the origin of story, and now Swirski suggests that computer intelligence is evolving so that there will be biterature, “a species of literature” (7). Dare this reviewer offer the nomenclature bitic selection? Some readers no doubt will be put off by Swirski’s argument when he asks, is “thinking in computers different from that in humans?” (8). But he is not joking, and although he is playful at times, he seems quite serious. While Swirski has no crystal ball to peer into the future, and in fact he cites some examples of people who had made artificial intelligence prognostications (Hans Moravec and Ray Kurzweil) only later to scale back, he seems certain that some dimension of evolution will impact the design and function of computers. Swirski sees, in fact, an evolution of thought regarding natural selection itself, where theory about “self-organization and autocatalysis” (10) will enable a computer to integrate and develop naturally its own software and hardware. Critics will object to any analogy between that which is organic and that which is plastic; but already we see evidence of computer algorithms functioning, reacting, and increasing independently.

From this platform Swirski delves into Stanislaw Lem and computorship, attempting to define a basis for authorship. If there’s a person behind a program, how can a computer be said to create a piece of writing? But in the early 1950s the Manchester Mark 1 computer, under Alan Turing, was able to write short love notes, which of course caused a controversy about creativity. (Lem and Turing are clearly the heroes of Swirski’s book.) Computers are quite able to manipulate human code, viz. Google Translate, but that is not a sufficiently creative act. Swirski points out that in the early 1980s Bill Chamberlain and Thomas Etter supposedly programmed a computer with a word synthesizer so that the machine could author original works, but there was more of the human hand involved in the process than the computer’s (29). The problem, Swirski suggests, is that from a human evolutionary perspective we will certainly attempt to interpret any ambiguous scrap of information that is put in front of us, even if it is written by a computer. So the machine might not exactly be creative, yet. And there are writerly programs that operate from enormous data chunks fed into the computer (e.g., Hemingway’s oeuvre), but these too are not real creativity, which is, after all, “spontaneous” (34). Perhaps Swirski is being ironic: a real person might create spontaneously, but when she is in the process of creating she will call forth, consciously or not, all the literary data she has read. Even with spontaneity there is still a question of worth. Will most people over time find what she has written worth reading again and again? Will a computer read and interpret differently? On 18 March 2014 the BBC reported that a computer generated a story for the L.A. Times about a California earthquake minutes after the occurrence. This reviewer read the story, and it merely states facts in a dry manner, making it worthy of the recycling bin once perused.

That’s where bitic selection now stands in terms of computorship.

Contrary to what some have said concerning the inability of computers to surprise us with anything original, Swirski notes that we run programs precisely because they can tell us what we don’t already know. He cites instances of computer-generated artworks and musical scores in galleries and concert halls well attended and appreciated by human beings. The question is: “How do criteria of originality affect the criteria of originator?” (41). Indeed, there are, as there have been for quite some time, hack writers who compose poorly, use unimaginative words and phrases, and simply create driveling prose from a formula that repeats itself. Yet we can accept such hacks as agents of creation but not a computer that works similarly, since we believe computers cannot learn and certainly cannot think. Surely the Holy Grail in writing a novel is, even more than the literary style, the creation of sympathetic and enduring characters. How can a computer do that? Swirski hovers around the answer to the question without quite landing.

In the 1970s and early 1980s, Swirski recalls, there were programs (AM and then EURISKO) that attempted, with a little success, to give a computer the ability to learn, which can be an instinctual response or a marking-over of inherent information. Learning is the capacity “to evaluate one’s own cognitive, conative, and affective states both at the ground level and at the meta level . . .” (47). Here Swirski is imagining a machine equipped with homeostasis, the means of adjusting physiology to maintain equilibrium. This is different from the early twentieth-century Vorticist movement, beyond mere machine dynamism. Impressive as such systems appear, Amazon and Netflix, Swirski says, do not think: based on data we input (e.g., book selections) the program learns what we like and so generates more suggestions. The accumulation of data is not equivalent to thinking, and he bemoans the fact that in spite of decades and billions of dollars of research we have not yet developed a computer capable of thinking. We have processors that only style data.

However, Swirski seems confident that at some point computers will be able “to reprogram and redesign themselves . . .” so that their own hardware will be subject to self-analysis and updating (51). Such would be a machine that could evolve, adapt, think, and create. With these capacities, so-called computer authors will be distinct literary writers and not, as now, mere helpmates to scholars. The drawback: “With so much art on tap, tomorrow’s axiological criticism will become as obsolescent as a carrier pigeon is today” (66). We are not there yet, and the type of computer art deliberately imagined would be superior to a watercolor by a chimpanzee. What are the implications for biterature and biterary studies that a book such as Swirski’s exists, that intelligent, informed people are having this conversation? We evaluate and judge works in the context of others similarly placed. Swirski hints that with computorship human understanding will be quashed, made obsolete. In other words, who are we to say what is good or bad biterature? In his typically amusing, but not condescending or commonplace way, Swirski notes that we have plenty of self-inflated literary garbage already.

There is an inherent human resistance to anything artificially created, since many people still cling to the notion of special creation and the notion of a soul. For any machine to think or create on its own, says Swirski, is in the eyes of most people an act of “godless audacity” (80), pretty much the accusation hurled at Darwin. At the same time, human thought has generated, just as one example, stories about statues coming to life. Perhaps this is why those in Darwinian studies would appreciate this book, for indeed Swirski tries to break forms, and succeeds, adhering to no hidebound codes, rules, or norms. He tries to do for the computer what Darwin did for the human: eradicate any mind/body duality. There will be no grand moment in computer literary creation, says Swirski, since it’s an evolutionary process. And this reviewer can see Swirski pushing the envelope as far as he can to the end of the table when he begins to talk about laws, legislation, and rights for computorship while we still fall far short of having established universal animal rights.

Much of Swirski’s book hinges on Alan Turing and his seminal question of whether or not a machine can be said to think. The basic Turing test follows this plan: an interrogator converses with a fellow human being and with a computer, all three separated and anonymous; if the interrogator cannot distinguish the person’s answers from the machine’s, then the machine has passed the thinking test. A consequence of what Turing suggests is to have artificial intelligence engineers fashion computers as “social beings” in advance of subjecting them to the test, since in large part the elements of the test are about sociality (98).

Thinking is part of consciousness, a most difficult area for neuroscientists (e.g., Antonio Damasio), cognitive psychologists (e.g., Joshua Greene), and philosophers of mind (e.g., John Searle). Swirski says the standard objection by Searle, whom he calls “shrill” (111), to the Turing test is the absence of consciousness. Playfully, tossing away the social brain hypothesis, Swirski posits that we don’t know whether or not other people are conscious anyway, but it helps us to believe so (101). Then there is the disability objection, which simply states that computers are not functioning persons, but to counter, Swirski reminds us that we all know people who are not functioning in any number of ways – are not friendly, cannot learn, or do not know how to use language properly. Concerning Searle’s strenuous objections to a thinking machine, no single part of a human brain thinks, per se, and yet the brain is constituted of so many of these parts. Thinking and understanding reside in a totality of neurons and synapses, in an “emergent property” (116).

Any complex system will exhibit an emergent state, which is to say such a system is self-organizing. Such behavior, though, is unpredictable (121), as it would need to be in order to adapt. In 1952 Turing refused to define thinking, since a machine could think and yet fail a Turing test (124). So even assuming that a computer can think, we have the added problem of mind reading or theory of mind, guessing intentions. Nevertheless, Swirski feels confident that, in time, computers will be integrated culturally and so will have the context to guess intentions. Right now, like Amazon and Netflix, Facebook makes (sometimes woefully) inadequate estimates about pages one might be curious to investigate. In human beings, of course, theory of mind is flawed and often inaccurate, though we utilize it continuously. Theory of mind is not only cultural in context but bodily, dependent on the expression and reading of emotions. How could a machine possess such biology? This is a difficult, and perhaps unfair, question that can be answered only with another question: “What will make . . . [a computer] want to want?” (145). Any answer has something to do with unpredictability, or what Darwin would call variation in competition that gets inherited.

At any rate, artificial intelligence is now moving to the study of behavioral patterns with so-called zoobotics, machines made to test evolutionary theories about adaptation (162). Moving well beyond robotic nursing and therapy by computer (not now uncommon), MIT created a chip that “simulates how brain synapses adapt in response to new information” (163). The same year (2011) IBM unveiled a ten-hertz chip, operating at the same slow speed as the human brain, as part of a processor that would include over 250,000 “programmed synapses” and over 60,000 “learning synapses,” a stunning effort to reverse-engineer a brain (166).

Toward the end of his book Swirski explores robotic wars, specially made DNA bombs to target an individual, bacteria that could biocompute, “microbiotic armies” of “autonomous learning agents” (181), and micro-bots in the form of dust that can evaluate any given environment (182). In spite of such research and development, search engine giant Google cannot pass the Turing test since it does not know what the searcher wants and simply dumps out loads of data (199). But Swirski’s conclusion is that “the future belongs to artificial intelligence” (204), thinking and brilliantly creative computers, although there have been and will continue to be evolutionary blips and glitches along the way. As Darwin says in chapter six of On the Origin of Species, Natura non facit saltum.

- Gregory F. Tague

Copyright © – All Rights Reserved

(Reprinted here courtesy editor of Consciousness, Literature and the Arts)

Saturday, March 8, 2014

Joshua Greene on Moral Tribes (review)

Joshua Greene, Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. NY: Penguin, 2013. $29.95U.S. ISBN: 9781594202605

How does one serve cognitive and evolutionary theories about morality, from experimental psychology no less, to the American public, half of whom don’t believe in evolution or are anti-intellectual (viz. Jerry Coyne, Why Evolution Is True)? Such ideas need to be made palatable in a casual tone riddled with references, mostly political, to contemporary American life. Moral Tribes takes the difficult and abstract philosophical ideas of Joshua Greene’s research and serves them up in an easy-to-read fashion. To see some of Greene’s academic papers, visit his site. On the one hand, Greene’s book is admirable since it will reach and educate a much wider audience than his papers, but, on the other hand, some academics might find the commercialized packaging of such ideas disheartening. For instance, at the end of chapter 8, we find: “Readers, be warned: The next two chapters are a heavy lift . . . . If you’re satisfied that utilitarianism is a good metamorality . . . you can skip the next two chapters . . .” (208). As it turns out, chapter 9, which covers the experimental research concerning intuitive and cognitive moral reactions to runaway trolley scenarios, is worth the price of the book. Why would the editors or publisher ask Greene to prompt his non-academic readers to skip it?

Without question, Moral Tribes is not only an important but a valuable book, and it will go a long way in helping the general reader understand the important neural mechanisms and biology that underlie emotional response and decision making. However, Greene grounds his entire thesis around establishing a “metamorality” housed in Utilitarianism, difficult for the average audience he seems to be targeting. But maybe that’s the point, and Greene certainly deserves credit for bringing this philosophy into the public arena.

The book opens with a tribal fable to which the author returns repeatedly, and he wastes no time shifting gears to Obamacare and the controversy over the current state (circa 2012-13) of the American economy. While politics and the economy are important issues, this reader would prefer an academic text, even one written for a general audience, to eschew reams of references that will grow stale quickly. Christopher Boehm, for example, who does field research, can reference real tribes; but one who works in an experimental lab, perhaps, needs to reference politics. Nevertheless, Greene’s points about the tribal mentality, “our interests and values versus theirs” (14), come across clearly. Throughout the book Greene nobly strives to establish via Utilitarianism an over-arching morality that will attend to and care for all. But if our core biology and history are tribal, and if tribes have very different values, how is that going to change? Parts of the book are distracting – there is a long section on abortion, and some parts seem more about Greene than anything else. The simplest solution to many of the world’s problems is not to philosophize and seek a meta-morality that will cover all the tribes like a warm blanket but, rather, find a tribe that can act as a go-between for the dangerously competing tribes, a meta-tribe, which we have already in the United Nations and NATO.

There are twenty-five pages of notes in small, hard-to-read print. One suspects that some of Greene’s finer points of research (the papers on his site) have been relegated to these pages. This reviewer guesses that this shift of emphasis from scholarly to popular was an editorial decision (and not necessarily Greene’s). For instance, to mention a few: in a note (to page 76) that runs about one page of fine print, Greene delineates scholarly prejudice; another, on axioms (to page 194), runs about two pages of fine print; and a note on Rawls (on his A Theory of Justice, to page 333) runs over two pages of fine print. While notes might be the bread-and-butter of academics, one wonders why such important information has been literally and figuratively reduced and minimized.

Although natural selection equates to self-interest, morality (Greene’s word) is tied to cooperation, since through cooperation even selfish individuals can gain an advantage. The key point here is that cooperation evolved as an advantageous contrivance only among certain people in a group, a tribe. According to Greene, our brains “did not evolve for cooperation between groups . . .” (23). Of course some groups who learned to compete advantageously with other groups had the upper hand, so that competition underlies cooperation. Those who cooperate can be more successful, and Greene suggests he does not endorse group selection (24), but with his overriding concern for the greater good and his sweeping generalizations about people this seems doubtful. In other words, what we call morality is really in-group cooperation, which evolved to help one group overcome another. At this point Greene expresses his optimism, a positive attitude that encourages him to call for what he terms a metamorality.

The early parts of the book offer an excellent primer on, for example, the prisoner’s dilemma (competing individual and group interests); the dictator’s game (anonymous control); the ultimatum game (fair division); and tit-for-tat (reciprocity). Such competitive strategies rely more on cognition than on feeling. However, extensive research and experimentation show that we tend to be concerned about others, even in some cases strangers, to the effect that we exhibit sympathetic visceral responses to their misfortunes or misery. While we can and at times do help others, we’ll do so if the cost to us is not great: these findings are groundwork for later parts of the book where Greene will attempt to convince us that the Utilitarian perspective is the best approach since by helping others we create an overall better environment.

Greene recapitulates research (such as Paul Bloom’s on the so-called moral life of babies) to show how infants are capable of evaluating behavior and favoring cooperation and ignoring non-cooperators. In terms of tribalism and more so parochial altruism (individual sacrifice to help one and to harm another group), research demonstrates, Greene notes, that we use accents and other speech cues to make judgments about our willingness to engage the trust of others (50). However, simply because we can be tribal does not ultimately mean, Greene stresses, that we are “hardwired for tribalism” (55). There are a number of factors on the personal, parental, peer, group, and social levels that can influence the neuroplasticity of tendencies to adhere to a group.

In other words, only the human brain has evolved what we label morality as a means to permit group-to-group cooperation. We have, on the one hand, emotions that motivate us to care for those close to us and yet, on the other hand, emotions that dispose us to avoid and even punish others, especially those we perceive as uncooperative. Nevertheless, Greene says, we’ve also adapted feelings that permit us, for strategic reasons of cooperation, to forgive transgressors. Even as infants we tend to judge according to loyalty and reputation, how someone behaves nicely or not to another, which implies that we, too, know our reputations are at stake. Such self-consciousness and ability to be embarrassed tie in with how we fit into a group or not. We are concerned about our own status and are free to punish those who do not reciprocate. So there are many traits that have evolved on an emotional and not necessarily on a cognitive level to help otherwise selfish creatures cooperate.

Tribalism, or cooperative partnership, is a given, evident from research and studies on human infants and monkeys. That is, hostility towards outsiders is part of the evolution of cooperation. At this point Greene seems to suggest, beyond tribalism and groups, cultural differences, noting how, whether right or wrong, true or false, members of a group will adopt the group culture. He also seems to suggest that severe global issues such as extreme poverty, local violence, and climate change could escalate into inter-country problems. The reason we have conflicts between tribes, Greene says, is that some societies favor individual rights over the group, some have an obscure honor code, and some value religious beliefs more than others. In spite of, or perhaps because of, our inherently selfish tendencies, our moral problems tend to escalate into an attitude of us versus them.

At this point Greene gets into a very detailed and rich chapter (the one he or his editor suggests readers skip) on the variations of the runaway trolley scenario. For readers unfamiliar with this moral problem, see Wikipedia; one can also search for trolley at the Stanford Encyclopedia page. Essentially, the trolley problem involves the question of switching the track of an oncoming train to kill one rather than five, or pushing a man onto the tracks to stop the train and so save five. Bottom line: “our intuitions tell us that the action . . . is wrong” (117). Emotionally, in what Greene calls our automatic mode, we know that we should not harm someone else; but on another level that involves higher cortical regions, in what he calls our manual mode, we understand that harming one for the greater good is not only necessary but morally justifiable. There is a difference between hitting a switch to kill one and save five as opposed to pushing one off a footbridge to stop a train to save five: our moral intuitions are such that we are very reluctant to engage in physical force on a personal level to help others, but we will. With Utilitarian decisions there almost seems to be ventromedial prefrontal damage in that there is only cognition without feeling (118). The crux of the book, then, is on what Greene calls our dual-process brain, for if we acted on instincts alone we’d not be able to think through alternative situations or scenarios (132).

In the control or manual mode we are able to look in all directions, consider, and weigh options, as opposed to the automatic or instinctive mode which simply reacts quickly. This is not to privilege, Greene goes on, the manual mode over the automatic – we have evolved instinctual responses and have retained them since they serve important survival and social functions. Moreover, different parts of the brain balance decisions, so that while one might strive to act for the greater good, visceral emotions might interfere.

So in terms of brain evolution and function as related to moral decisions, we can handily manage cooperation in a group, Greene says, but not between groups. Certainly because of kin altruism (see, e.g., Hamilton and Trivers) this observation is not startling. There is a conflict between what our viscera makes us feel and what our rational mind makes us see (148). Darwin works on an individual level, but for the Utilitarian the collectivist v. individualistic thinking is supposed to, in spite of radical differences, have the same result – doing what is best for all concerned (150). While Greene is very accomplished at explaining Utilitarianism with long asides on Bentham, Mill, and happiness, he should not expect, as he does, readers to become Utilitarian. But what could we expect from a thinker who operates on the group level, whose entire discussion is filled with generalities about the group to the exclusion of the individual?

Certainly there is a distinction to be made between the individual acting for the greater happiness of many and the more ancient notion of excellence (aretē) where the individual strives to sharpen her own wits, intelligence, strength, or moral virtue and so be happy. This concept of excellence is essentially driven by individual character, not the group. Not everyone is equal or wants exactly the same things, ideas, or types and quantities of, to use Greene’s Utilitarian word, happiness. Additionally, from a non-teleological Darwinian perspective, there is no progress or goal to happiness, since all is a diurnal combination of variation, competition, and inheritance for the individual. Greene optimistically wants “to encourage people to behave in ways that maximize happiness” (163), but such thinking is a recipe for disaster considering our inherent self-interest and competitiveness. Whose happiness? Greene says the Utilitarian ideal is impartiality (166) and “avoiding bad consequences” (168), but this reminds us of Adam Smith who paradoxically pits sympathetic caring (Moral Sentiments) against self-aggrandizement (Wealth of Nations).

Greene goes on to say that happiness is the bottom line and should apply across the board (170). But it appears Greene’s equation for happiness is in high moralistic terms – saving some lives at the expense of one – and not in the more diurnal, routine, basic functions or character issues. Granted Greene is not writing a self-help book, but yet in his last chapter he indeed directs readers to embrace certain practices.

Greene says objections to Utilitarianism come from automatic settings (194). Surely, since we are first and foremost emotional beings. We do not act with reason but first react, and then later (quickly or slowly) employ other brain areas related to reason. Schopenhauer, cutting across Western philosophy, famously saw human beings not as rational but as irrational creatures, and so we are. We have moral emotions (Haidt) and a moral sense (Hume) which, depending on whom one reads, is either a faculty (Hutcheson) or not. Our default mode is selfishness (excepting kin), but we can be sympathetic. Yet even in deliberation we might think away (rationalize) the concerns of others, not favor them or their so-called need for happiness. Who cares? While Utilitarianism might make sense on some high, idealized plane, it is not working in reality, in spite of what Pinker calls (borrowing from Lincoln) our better angels. There might be less overt statistical violence, but that does not preclude aggressive or violent urges, to say nothing of an entire entertainment industry that thrives off our visceral desire to consume violence virtually through various media.

The Utilitarian says, “no one is objectively special” (204). If Francis or Clare of Assisi were on the side-track, should we hit the switch to kill either one of them to save the gang of thugs congregating down below and getting ready to attack Albert Einstein? Utilitarianism seems too clinical, without gut, appears superficial, minimizing a basic truth about being human, that not only does each one of us think he or she is special, but we most likely also hold some others as special. If according to Utilitarianism everyone is equal across the board, then why not kill the five and save the one? Where is it written that quantity trumps quality?

However, Greene’s findings, including work of others outlined in chapter 9, are compelling in terms of killing the one (the pushing scenario) to save five: thirty-one percent approved. And yet if we have pretty much the same scenario but have the man fall through a trap door activated by a switch (reminiscent of the original trolley scenario), sixty-three percent approve. The killing of the one is collateral damage in saving the five, since the one, even when physically pushed, is in the way (so to speak) of the person trying to save the five. Eighty-one percent approve of killing one to save five if they see the death of one not as a means involving the use of personal force but as a side effect (219). In this way, we are “emotionally blind” but not “cognitively blind” since from our deep past our ancestors premeditated actions, including violence (225). Reciprocity is also in our distant past, dictating an emotional reluctance to engage in close, physical contact of harm since any hurt might return to us. We are nonetheless blind to “foreseen” side effects of violence (an action that does not fully account for consequences) and so can push a man off a footbridge in order to save five (228).

We sense harm to another as a means (pushing) but not so much as a side effect. We tend to be Utilitarian in more cognitively complex cases, where side effects are not necessarily visualized clearly, such as flipping a switch, as opposed to simpler and more straightforward causality, such as pushing a man onto the tracks. Greene refers to the emotional response in these cases as an alarm call, but as a Utilitarian he fails to consider the wide differences in amygdala reactivity, temperament, or sensitivity (see Jerome Kagan and Elaine Aron). In the psych lab these results are solid, but even Greene admits that a lab is not reality. Brain scan machines in a controlled environment indicate which regions get hot, but this does not measure outcomes. At least Greene says although we might push the man off the footbridge to save five we feel that such an action is wrong. We sympathize with and act on helping tendencies for people we can see or know, what we might call an identifiable victim.

Toward the end of the book, there are more approvals about Utilitarianism, how it asks us “to be morally better” (284). Such an assertion, though, seems empty. Does this mean that any philosophy that urges one or a group to act more morally (can morality be quantified?) is Utilitarian? No other moral system, biological or religious, stimulates one to be good? Does Greene mean purely good without any self-interest, if that is at all possible? But Greene’s point is not lost, for he says that the more we think about a moral problem the more we tend to gravitate toward our core, tribal, biased beliefs (296). That is, we tend to rationalize our behavior, and such self-interested psychological posturing is not precisely moral. In this way tribal differences can increase since so-called rights are established and asserted at all costs. Greene, then, moves into a discussion about abortion, finally, since he has already covered slavery, rape, and genocide. By his own admission, Greene cribs much of the abortion section from Pinker, and as with other sections of the book, the focus on contemporary, hot-button issues infused with references to ephemeral political trends (and even some politicians) is distracting. Perhaps that is why chapter 9 seems so inviting, even though we have all read about the trolley problem before. Writing about politics in this context takes academic issues down to a journalistic level.

Moral Tribes is not about the evolutionary roots of moral tribes (morality or tribalism) but more about how, in Greene’s opinion, Utilitarianism can solve many societal and worldly dilemmas. For instance, here, in a sentence, is the upshot of the sixteen pages on abortion: while we know that abortion is morally wrong, it serves an important societal function. If Utilitarianism is such a moral pot of gold, why then do we have so many political problems, so much social strife, and so much world misery? Surely we can be and are cooperative on a grand level at times, but from an evolutionary perspective, Utilitarianism seems to have been selected against in favor of the milder benevolence and, certainly, self-interest.

This book could easily, and perhaps more accurately, have been called Tribal Politics, given its emphasis on how various groups “have different moral intuitions” (335). Interestingly, this book does not mention or reference (as far as this reviewer could see) the former Harvard psychologist Marc Hauser. Greene is the John and Ruth Hazel Associate Professor of the Social Sciences and the director of the Moral Cognition Lab in Harvard’s Psychology department. If you go to the site of the College of the Holy Cross, you will find a conference where both Greene (talking about trolleys) and Hauser share a panel that covers the sources of moral reasoning. Hauser no doubt would disagree with some of Greene’s presentation in this book, i.e., Greene’s emphasis more on cognition and less on sensation. Greene says, simply, that while we experience such sensations, we will override (rationalize, ignore) them to accommodate our group beliefs, and here he differs, too, in terms of what is emphasized, from Jonathan Haidt.

Notwithstanding any such uninformed quibbles made here, Greene’s book is timely and important, and will go quite far in helping not only general readers but graduate students and academics in multiple disciplines understand the complex cognitive and neural workings of, and differences between, moral emotions and moral reasoning.

- Gregory F. Tague

Copyright © – All Rights Reserved

Saturday, March 1, 2014

Moral Sense Colloquium II - Program

Moral Sense Colloquium, II. 7 March 2014, Noon to 6pm. St. Francis College.
Presentations and Panels, Founders Hall.
Breaks, and Reception, the Callahan Center.

12:00   Sign-in/coffee, Callahan
12:30   Welcome and Opening Remarks by Dr. Allen Burdowski, Dean of Academic Program Development, and Gregory F. Tague. Introductions of Dr. Diana Reiss and Julie Hecht by Dr. Kristy L. Biolsi
12:45   Dr. Diana Reiss [30 minutes]
1:15     Q/A regarding Dr. Reiss’s presentation [15-20 minutes]
1:35     Julie Hecht [30 minutes]
2:05     Q/A regarding Julie Hecht’s presentation [15-20 minutes]
2:30     Snack Break
3:00     Panel – Moral Sensations. Dr. Tague and Dr. David Lahti. Students Luke Kluisza and Tyler Perkins.
4:00     Panel – Evolved Ethics. Dr. Biolsi (Evolved Ethics Introduction). Students Jeannette Raymond (Neural Philosophy of Moral Behavior), Lorianna Colon (The Neural Mechanisms of Morality), and Andrew Salzillo (Innate Morality: The Code of Ethics Concerning the Human Captivity of Other Species). Dr. Kathleen Nolan (Evolved Ethics Conclusion).
5:00     Presentation: Dr. Kevin Woo. Cowardly Punks Travel in Packs: Social Responsibility in an Urban Environment. [15 minutes followed by Q&A]

5:30     Reception

Full Program, Bios, and Abstracts available HERE

Thursday, December 26, 2013

The Neuroscience of Aesthetic Experience

G. Gabrielle Starr. Feeling Beauty: The Neuroscience of Aesthetic Experience. Cambridge, MA: MIT Press, 2013. 978-0262019316. Hardcover. 272 pgs. $25.00US

Feeling Beauty by G. Gabrielle Starr is an elegantly written (lucid and even literary) examination of the neurobiology of aesthetic experience crossing poetry, visual art, and music. In part drawing from laboratory work Starr conducted with neuroscientists at New York University (as well as deftly culling from and expanding on her previous thought and writing in cognitive studies), this small but potent book promises to become a classic among texts addressing the pressing questions about the relation of emotions to aesthetic experience and how such experience differentiates individuals. In clear and compelling prose (with vivid examples), Starr offers a bold and convincing analysis that charts, in neuroscientific terms, the very definition and parameters of beauty. Starr is comfortable and competent in explaining the works of the ancients (from Aristotle to Ovid), the eighteenth century (from Addison to Burke), and current neuroscientists and aestheticians (Scarry). Fundamental to Starr’s argument is the brain’s default mode network – the self (inward) and others (outward) – which is geared to a process of emotional movement (pleasure and reward) related to aesthetic experience. Feeling Beauty holds immense value for anyone on any level studying or teaching the arts and is indispensable in light of the indisputable importance of cognitive cultural studies.

The physical properties of the book are good. There is an Introduction followed by three chapters. There are abundant Notes, a full Bibliography, and a concise Index. Two key features of the book are the Figures (nineteen of them, some in color) and an Appendix. The Appendix is an extract from a study by Starr and two neuroscientists, Edward A. Vessel and Nava Rubin – difficult to read since it is laced with technical jargon; but its presence in the book serves to demonstrate how a skilled writer and interpreter such as Starr can make such findings accessible to the common reader. G. Gabrielle Starr is (reading from the dust jacket of the book) Seryl Kushner Dean of the College of Arts and Science and Professor of English at New York University.

In large part the feeling of beauty means to be moved – emotionally on a neural level. Brain matter, even on the synaptic level, is particulate and moves and connects with other synapses (indeed, creates synaptic connections) when stimulated. Following Elaine Scarry, Starr argues that the “aesthetic value” of literary works (indeed, of other arts) stems from “images of motion” (8) which activate neurons (albeit differently across individuals). By implication Starr raises crucial questions (timely and pertinent) about the nature and role of the arts in education. (This reviewer refers readers to, for instance, Learning, Arts, and the Brain, a report by the Dana Foundation, 2008.) The arts enable one to negotiate (in an attempt at coherence) the onslaught of visual and aural stimuli. As Starr puts it, the arts help shape perception (14). Nevertheless, invoking the eighteenth-century philosopher Francis Hutcheson, Starr demonstrates how aesthetics is less about externals and more about personal value judgments (16), what Shaftesbury (before Hutcheson) would call one’s feeling of approval or disapproval (and which he relates to moral sensations) – the brain’s default mode network, which tends toward introspection.

There is nothing stable or static about aesthetic experience, though. One of Starr’s key points concerns the almost organic, flowing process of how one sees, feels, and contemplates art, what Martha Nussbaum (paraphrased by Starr) calls a paradigm shift (20). For example, the default mode network is implicated in memory, theory of mind, fantasizing, and creativity. On a related note, another recent book, on the neurobiology of reading by Paul B. Armstrong (How Literature Plays with the Brain), explores how literary works are a form of pretense and play in the brain, and Semir Zeki (whom Starr cites) has written about how our brains are not averse to embracing, so as to tackle and accommodate, ambiguity. In other words, Starr claims, an aesthetic experience gives rise to our valuing something (or some occurrence) over something else (21). Using the word twice within a span of six pages, Starr says that when the brain encounters (and is rewarded by) an aesthetic experience, one learns how to qualify likenesses with what at first sight appears “incommensurable” (21, 27).

Although Starr spends time talking about consciousness (and cognition), there is no acknowledgment of the adapted mind, and she delineates what she calls the inadequacy of various evolutionary psychologists in addressing individual differences (27). While there is truth in some of the broad-ranging assertions of early writing in evolutionary psychology, at the same time evolutionary psychologists would of course rely on a Darwinian model, and as such variation (along with competition and inheritance) is essential. Human brain processes such as cognition, consciousness, and reason are evolved mechanisms advanced from variation (as well as competition and inheritance). One of the leading authorities on consciousness, Christof Koch, asserts that the physicality of subjective feelings has provided an evolutionary advantage (Consciousness, 2012, pg. 31). Starr also says that with such a deep view of prehistory evolutionary psychologists are short-sighted in terms of historical cultures and nations (27). But Darwinists deliberately look at the evolution of culture (before the rise of nations). Perhaps this line of thought explains why there is no mention of (to name only one) Ellen Dissanayake (who has written extensively on the origins and prehistory of art). Rather than waging a teapot tempest here, we must agree that there is prehistory (as per Stephen Mithen and Richard Klein, e.g.), and then the important neural leap and modular mind (circa 50,000 years ago) that led to the cultural and cognitive flourishing after which Starr and others proceed.

Starr’s point (not evolutionary) is that the arts have the ability to alter human perception and emotion (28), but she does not admit that art (culture and humanistic ideas) is a human creation and provides an adaptive function; otherwise, natural selection would have eliminated it many thousands of years ago. In South America, Darwin marveled at what he saw, writing, “It creates a feeling of wonder that so much beauty should be apparently created for such little purpose” (quoted in Janet Browne, Charles Darwin: Voyaging, pg. 216). Remarkably, Darwin, at this early time, says the beauty is created, and only later does he come to realize that the forms, spectacles, sounds, movements, and colors are all a matter of natural and especially sexual selection. As others, including this reviewer, like to put it: a humanist will ask, What is art? while an evolutionist will ask, Why make art?

Starr’s argument is well taken, for she and others in exploring (indeed, in measuring) subjective aesthetic experiences are on a new frontier in helping us understand what art is (and the complex emotional responses to art) by considering how it is differently evaluated across individuals (ch. 1). Koch asserts that consciousness is exclusively physical (neuronal connections across brain areas) and surely evolutionary (echoing Zeki and Armstrong’s views above). Besides, consciousness is not all it is cracked up to be. Walter J. Freeman has described consciousness as a hurricane, and Zeki has characterized consciousness as disunity. In spite of how well the human mind has evolved, there are very old rudiments to the complex networks that give rise to consciousness (or what Darwin called descent with modification). In other words, individual variables in aesthetic experience could perhaps be correlated to other temperamental differences.

Starr’s thrust (broadly speaking) seems to be that while the brain can generate the ability for consciousness, how such consciousness manifests itself and changes is determined by the individual experiencing something aesthetically. Aesthetic experience seems to matter to our neurobiology in our willingness to be absorbed by art and abstracted out of the world, Starr says (59, 63). Note, though, that in terms of learning, recent studies demonstrate that academic accomplishment (flowering from one’s entire personality) is strongly influenced by genetics. (See, for example, Shakeshaft et al., “Strong Genetic Influence,” PLOS One, 8.12, 2013). Starr suggests that brain reward response to some visual (aesthetic) stimuli need not be evolutionary (survival and reproduction), and she is probably correct based on what we are increasingly learning about epigenetics (i.e., how the epigenome is in effect nuclear DNA in the environment).

In her readings of Keats and Ovid, Starr is particularly brilliant when analyzing and explaining movement and motion through imagery (since the visual as imaginative, via philosopher Alva Noë, involves movement) (81). Central to her thesis of the potential of aesthetic experience to energize a revaluation of ideas is the metaphor of motion. Mirror neurons, of course, are implicated not only in the motor imagery of the arts but also in sympathy (83-84). Again drawing from Scarry (and others), Starr notes how some critics have gone as far as suggesting that when one observes visual (or is engaged in literary) art there is a sensation of the artist’s creative movements. Some art that moves us triggers mirror neurons and therefore “offers a promising route for modeling . . . aesthetic pleasures . . .” (101).

In ch. 3 Starr expands on the notion of pleasure (cognitive reward) by saying that in perception there is a competition among “ideas, emotions, and sensations” (113). Starr takes these notions further by explaining that by its nature beauty “is necessarily about comparison, contrast, integration, and competition . . .” (117) in how it surprises us (and supersedes another thought or image). There is, then, motion involved here, too; hence variant readings of a text (even by the same person over time). Such movement is especially evident in music, and Starr provides a compelling analysis of music (Bluegrass and Beethoven) in this respect. In a final example about movement and reappraisal, Starr shows how “beauty is always necessarily a momentary event . . .” (139) as she examines work (painted over) by Van Gogh.

Feeling Beauty by G. Gabrielle Starr is highly recommended as its author masterfully touches on all of the important issues (problems, questions, controversies, findings, and directions) melded into the experience and teaching of the arts. In view of the recent barrage of news stories bemoaning the death of the humanities, Starr’s work provides a much-needed and refreshing salve, and we look forward to her future work.

Gregory F. Tague


(Reprinted here with permission, Daniel Meyer-Dinkgräfe, editor CLA journal)

Tuesday, November 12, 2013

The Biology of Aesthetic Experience

Paul B. Armstrong. How Literature Plays with the Brain: The Neuroscience of Reading and Art. Baltimore, MD: Johns Hopkins UP, 2013. 978-1421410029. Hardcover. 240 pgs. $49.95US

Paul B. Armstrong’s How Literature Plays with the Brain is a neurobiological account of brain processes that, on the one hand, look for patterns and yet, on the other hand, invite ambiguity. From an evolutionary perspective the ability to manage ambiguity is adaptive since it affords the possibility of establishing new patterns. The human response to the arts is not consistently constant since the brain itself, to use Armstrong’s word, is “decentered” (x). That is, for the brain, engagement with the world and especially with arts and ideas is an important form of play where creative flexibility against rigidity has been the means for the survival and increasing sophistication of the human race. Armstrong’s book is timely since he makes some keen distinctions between neurobiology (brain structures and functions including mirror neurons and canonical neurons) and cognitive literary studies (psychological processes including theory of mind and simulation theory). Armstrong ably addresses the neurobiological play of reading by employing hermeneutics (part/whole) and phenomenology (being in the world) in a challenging but vital work in the neuroscientific turn in literary studies. Literary scholars will find the book of immense worth since it treats the neuroscience of reading from a biological, cognitive, and evolutionary perspective.

The physical properties of the book are excellent. In addition to a Preface and an Epilogue, there are five main chapters, Notes, and an Index. There are also a number of useful illustrations in the book as well. Reading from the biography on the back of the work, Paul B. Armstrong is a professor of English at Brown University and the author of, for example, Conflicting Readings: Variety and Validity in Interpretation; and Play and the Politics of Reading: The Social Uses of Modernist Form.

While he draws on a number of important sources, Armstrong’s thesis of our neurobiological wiring that accepts both constancy and flexibility relies on Stanislas Dehaene’s notion of how the brain recycles functions for “object recognition,” Semir Zeki’s idea of the neurobiology of ambiguity, Antonio Damasio’s theory of the as-if body feedback loop, and Giacomo Rizzolatti’s research on mirror neurons. Critical of cognitive cultural studies that emphasize psychology over neurobiology, Armstrong nevertheless calls on phenomenologist critics (e.g., Wolfgang Iser) and philosophers (e.g., Edmund Husserl, Martin Heidegger, and Maurice Merleau-Ponty) quite often since they seem to validate the weight he places on the biology of the aesthetic experience. Attempting the consilience and congruence we often hear about, Armstrong thus engages in a dialogue with scientists and humanists. The scientist must explain “the conflict of interpretations that is characteristic of humanistic inquiry,” since our species not only enjoys but also values creative works that are ambiguous. But at the same time explain is not quite the most accurate verb since our best and most current fMRI technology, Armstrong notes, cannot render a fully accurate account of what happens in the brain during the process of reading, much less decipher how consciousness derives from brain cells and chemicals (5). We at least know there is no “art neuron” and that aesthetic experience (in the presence of visual art or in the process of reading a literary work) is spread out across the brain among various functions (though there are special areas for visual word form, color, and facial recognition) (12).

So while Jean-Pierre Changeux and Stanislas Dehaene say that harmony is the mark of an aesthetic experience, Armstrong says that V.S. Ramachandran suggests, on the contrary (and closer to his own thesis and Zeki’s), that art can appeal to us with its “distortions” (13). Part of the answer to this curiosity for the discontinuous is that we experience art emotionally, not as in real life, but as if it were real (17). The complex incongruities of art are reflected in the complex mapping (response) in the human brain. There is no dichotomy between harmony and distortion; rather, both are parts of a whole neurobiological process of challenge, test, play, and tentative evaluation.

The brain is a complex organ of multifaceted parts, areas, and patterns, separate and yet connected, and not (according to Alva Noë, neuroscientist and philosopher) a teleological agent (25). Likewise, the brain evolved over a very long period in circumstances different from the past six thousand years (or so) in which writing developed, so our brains have jerry-rigged other functions to help us read. For instance, there is a visual word form area in the brain’s visual cortex important for reading, which is primarily employed in identifying “visual forms” and lies near brain regions implicated in object and facial recognition. More precisely, this visual word form area becomes active when lettering of any kind in any language is introduced, suggesting that the brain’s neurons in this small spot have accommodated themselves to cultural and not only evolutionary forces (28).

Armstrong spends a good deal of time covering neuronal change through use, disuse, and plasticity. Though controversial, research suggests neurogenesis in some brain regions via history and repetition, and these patterns of use/disuse reflect (and are reflected in) the give-and-take movements of reading. Some neuroscientists (Zeki) speak of reward systems in the brain while others (Irving Biederman and Edward Vessel) similarly speak of pleasure systems. Thus there is a playful exchange between that which is informational and that which is pleasurable, evident in dissonant music and complex, indefinite works of art and novels. Armstrong’s point is that this playful interchange is supported by neurobiology itself since the brain’s organizational structure is “decentered” (52). Of course there would be evolutionary advantages in the brain’s openness to being challenged and stimulated to stay alert.

Reading is forward-looking, expectant, hermeneutic: one comprehends the whole over time through parts, each of which comes at different times. Does one’s interpretation merely exhibit a projection or reinforcement of what one believes? More likely reading is a test of beliefs and abilities. Can ambiguity result in such inner conflict that there is no meaning? Since there are no terminal points in our consciousness (since it is always active), that seems unlikely. In fact, Armstrong’s idea seems to be that our brain is wired to be tested and so to negotiate many variables. While Semir Zeki says our visual cortex strives for constancy (e.g., color) “to create the useful fiction of stability . . .” Ellen Spolsky insists that the human brain is open in terms of content, and such content is often unfinished and not finely tuned (64). Such acceptable incongruities permit the brain to admit conflicting data, and hence the notion of play. We see this tendency to congruity/incongruity in our ability to build metaphors, and that capacity reflects the human brain’s willingness to embrace oppositions, to attempt meaning through creation and destruction. A difficult text or complex image reveals various meanings among individuals by virtue of each brain’s history (Damasio’s somatic markers) and plasticity. Surely this flexible neurobiology is adaptive, allowing the brain not simply to receive data but to render decisions about such input.

Armstrong relies on phenomenology to bolster his points. Calling on Heidegger, he says that there is always a “gap” between being in the world and our neurobiological reaction (101). This gap, however, motivates reflection and a crisscross between a reader and a text; play results in back-tracking in order to move forward interpretively. All this is a prelude to a long discussion of axons, action potentials, cell polarization, refractory periods, synchronous neuronal activity, excitation/relaxation, delta/theta/alpha/beta/gamma waves to demonstrate that what we might ultimately see and feel as meaning is the product of bunched but dissimilar neuronal patterns (110-111). Harmony implies disruption, the latter of which produces action potentials that increase sensitivity to learning. Homeostasis might be the tendency of the body, but stasis is shunned by an active mind.

Furthermore, citing Husserl, Damasio, Shaun Gallagher, Dan Zahavi, Noë, William James, and Merleau-Ponty, Armstrong says (paraphrasing Noë) that what we consider the self is not to be equated with our neurobiology; instead, our neurobiology is part of the self (126). That is, the self is not merely a brain state but part of temporal reality (and hence his reliance on phenomenology). Continuing with his theme of de-centeredness, Armstrong goes on to demonstrate (or argue, depending on the reader’s perspective) that the brain is “a society” (127) of multiple but interlocking “processes” and not necessarily an “individual” (128). Is this too theoretical for a phenomenologist?

José Ortega y Gasset famously said, “I am myself and my circumstance, and if I cannot save it, I cannot save myself.” History is a system, and each of us creates (from his or her inner, genetically inspired character) a history that simultaneously interacts with the world (and so adds to the history, as a sculptor adds clay to a statue). Therefore it is not precisely clear what Armstrong means here, as he can tend to be theoretically abstract. The bottom line is that (contrary to what he suggests) my neurobiology is my own since it dies with me; if there is ultimately no individual, then I am not responsible. Perhaps this reviewer is too much of a staunch materialist.

Nevertheless, Armstrong’s general topic is well taken, for certainly consciousness (as even William James knew) is messy and continuous, and character (as Kant and Schopenhauer knew, in spite of their differences) is multi-dimensional and flexible. Truly, personhood and personality are complex organic forms, and from an evolutionary perspective the somewhat amorphous quality of personality falls in line with variation. But Armstrong fails to address who is responsible for the circumstances; the discussion should not simply be about the neurobiology of brain processes but about why those processes eventuate in different outcomes among different individuals.

The discussion shifts to theory of mind, simulation theory, and mirror neurons, which help us negotiate personal and social emotions. Simulation can occur immediately after birth, whereas theory of mind emerges around age four (since only by then can one understand that others do not share the same beliefs). Critics of simulation theory say the problem is that we are supposedly simulating something we already know; but the upshot is that we surely have simulation and theory of mind capacities, and the bridge to both might lie in mirror neurons (132). There are skeptics who do not place such high importance on mirror neurons, but the fact is that discussion of them (along with theory of mind and simulation) re-centers any debate about the value of the arts around the social brain hypothesis. Armstrong suggests that what is key here is the notion of the alter ego – the paradox of knowing oneself through another – and this doubling capacity (135) is clearly part of our psychology and neurobiology (whether theory of mind, simulation theory, or mirror neurons) and what accounts for play in reading.

Since mirror neurons are located in a motor cortical area, and since the thrust of Armstrong’s thesis has been on the metaphorical motion and movement of play in reading, he understandably spends quite a bit of time exploring mirror neurons. In fact, Armstrong goes as far as saying that mirror neurons (perhaps more than theory of mind or simulation) are responsible for speculating about another person’s intentions (139). Armstrong values theory of mind and simulation (as he seems to be more a synthesizer than a destroyer of approaches), but he keeps hovering over and finally landing on mirror neurons and their motor reflection. For instance, he talks about so-called canonical neurons, which are stimulated not simply by action but by objects that can (or have the potential to) act (149). This revolutionary finding means that a property of some mirror neurons is in control of our response to cultural artifacts (150).

The mirror neuron area is also involved in language (since vocalization is rooted – still apparent – in gesture), and so narrative and reading are akin to the doubling, as-if feedback we see in simulation theory and especially in mirror neurons. Language is a neurobiological social connector because it literally “reanimates” (158) us via visual and motor areas in the cortex. Armstrong’s book is a testament to the value of the arts and the humanities since their processes and productions generate ideas that are literally the physical (neurobiological) stuff of which we are made.

- Gregory F. Tague


Tuesday, August 27, 2013

Yesterday and Today: Jared Diamond on Traditional Societies

Jared Diamond. The World until Yesterday: What Can We Learn from Traditional Societies? New York: Viking, 2012. Hardcover $36 U.S. ISBN: 978-0670024810

There is a long history, dating at least back to Tacitus’ Germania, of authors examining more traditional societies and detailing laudable traits from them that their own more technologically advanced societies should emulate. As its title suggests, Jared Diamond’s The World until Yesterday: What Can We Learn from Traditional Societies? fits squarely within this tradition. It highlights differences between traditional and modern societies in areas ranging from conflict resolution and what Diamond terms “constructive paranoia” to child rearing and nutrition. In the process, it details—with varying levels of success—aspects of traditional societies that people living in the industrialized world should incorporate into our own lives and suggests ways that society as a whole should change.

Diamond, the winner of the 1998 Pulitzer Prize for his earlier book Guns, Germs, and Steel: The Fates of Human Societies, is well placed to discuss traditional societies. Although he is currently a professor of geography at UCLA, his original training and PhD are in physiology, and he has also conducted extensive ornithological research. He indeed refers to himself as an “evolutionary biologist” in the book. As in his previous works, Diamond calls upon his wide-ranging knowledge in the natural and social sciences in writing The World until Yesterday.

Over the past fifty years, Diamond’s ornithological research has frequently brought him to New Guinea, an island containing a large percentage of the world’s remaining traditional societies. Many of the book’s insights and anecdotes are gleaned from Diamond’s personal interactions with these groups. In fact, it occasionally reads like a memoir of his most memorable experiences in New Guinea. The book also examines a large number of traditional societies with which Diamond has no first-hand experience, such as the North Slope Inuit and Great Basin Shoshone in North America and the !Kung and Pygmies of Africa.

The World until Yesterday is not the first time that Diamond has compared traditional and modern societies. In Guns, Germs, and Steel he argued that environmental factors explain why some human groups have evolved into more complex state societies while others have not. Developments such as political centralization were the result of increased population density, which was in turn caused by the intensification of food production due to the domestication of various crops and animals. In order for this process to occur, humans needed plants and animals suitable for domestication, but such species are concentrated in only a few places around the world. Human groups living in areas with these species developed larger, more complex societies. Those who did not continued to live in societies virtually unchanged from those in which their ancestors had lived for countless millennia.

Diamond continues to discuss environmental factors in The World until Yesterday. Indeed, he convincingly argues that the environment plays an important, albeit not exclusive, role in differences between traditional societies. For example, a group of people living in an environment that forces them to constantly be on the move in order to feed themselves is much more likely to euthanize its elderly than a group that leads a more settled existence. The amount of language diversity in an area is also primarily caused by environmental factors such as climate and the productivity of the land in which various groups live. But in his new book Diamond’s emphasis has changed from the evolution of societies to a study of those societies whose environment kept them from developing into more complex state societies, and what people living in modern societies can learn from them.

According to Diamond, the answer is a lot. People have, after all, lived in traditional societies until “yesterday” in the overall lifespan of the human race. As a result, studying traditional societies both helps us understand our past and elucidates what elements from these societies remain with us still. Studying traditional societies also emphasizes the diversity of human nature and moves researchers away from basing their findings just on the “narrow and atypical slice of human diversity” of modern industrialized societies (8). Diamond seems rightly disturbed that 96% of the subjects of psychological research conducted in 2008 came from such societies. (Around 80% were from an even smaller grouping: college undergraduates enrolled in psychology courses!) Finally, he believes that both individuals and modern society as a whole could benefit from adopting certain traits found in many traditional groups. This final lesson is by far the most emphasized in The World until Yesterday. In almost every section of the book, Diamond’s focus is on how we can better our lives by adopting aspects of traditional societies into them.

Diamond’s emphasis on what his readers can learn from traditional societies does not mean that he idolizes them. He recognizes that people living in traditional societies usually adopt the trappings of modern ones when given the opportunity—and for good reason. As he puts it, “Many traditional practices are ones that we can consider ourselves blessed to have discarded—such as infanticide, abandoning or killing elderly people, facing periodic risk of starvation, being at heightened risk from environmental dangers and infectious diseases, often seeing one’s children die, and living in constant fear of being attacked” (9). Diamond’s emphasis on the violence present in traditional societies has even led him to be attacked by some supporters of traditional peoples for supposedly portraying them as savages (The Observer, 2/2/13)—an accusation that is not supported by the contents of the book. Diamond, however, argues that even traditional groups’ negative traits can teach us the important lesson of appreciating elements of our own society that we might otherwise take for granted.

Diamond’s writing is on the whole engaging, and his definitions and explanations are easy to follow. His clear prose is sometimes marred, however, by the overly complex and often unnecessary tables that he includes. Tables listing examples of gluttony in traditional societies when food is abundant, providing sixteen scholarly definitions of religion, and describing in excruciating detail the objects traded by a large number of traditional societies bog the reader down rather than assisting him or her. The book includes an excellent array of relevant photographs, divided into separate sections of color and black-and-white plates. But these, too, are marred by poor organization. For example, why did Diamond and the editors at Viking choose to make an image of Ishi, the last Yahi Indian, the first black-and-white plate when he is not mentioned until page 398?

The World until Yesterday examines the differences between modern and traditional societies in eight different areas: peaceful dispute resolution, war, raising children, treatment of the elderly, “constructive paranoia,” religion, multilingualism, and diet. Diamond admits that he has left out a large number of topics that have been studied by social scientists, but he argues that his goal is not to paint a comprehensive portrait of all aspects of human society. That is his right, of course, although one wonders how he chose to include the above topics while leaving out equally important ones such as gender relations. Each section usually begins with an anecdote relevant to the subject, often drawn from Diamond’s experiences in New Guinea, then gives an overview of various traditional societies’ norms in this area, and concludes with the lessons that can be gleaned from traditional practices.

The first two topics that Diamond covers are peaceful conflict resolution and war, which in traditional societies are the two ways that individuals handle disputes. Unlike in modern societies, where disputes are usually between two or more strangers and the government’s overarching goal is to maintain social stability, the goal of peaceful dispute resolution in traditional, small-scale societies is to restore relationships between two individuals who either know each other or at least know of each other. Diamond is careful not to overemphasize the potential advantages of this traditional system of conflict resolution, as failed efforts at reconciliation frequently deteriorate into cycles of violence and war, something that does not typically happen in state societies. Indeed, studies show that traditional societies’ frequent conflicts result in an average death rate from war that is three times higher than that of even the most war-torn countries of the twentieth century. But Diamond does believe that modern societies can learn a few lessons from traditional groups’ emphasis on restoring relationships. One suggested change is to provide more mediation in conflicts where the two sides do know each other, such as divorce and inheritance disputes. Diamond argues that even strangers should be given the option to choose mediation to resolve disputes.

Diamond next discusses how traditional societies raise children and treat the elderly. While traditional societies’ behavior towards the elderly varies, Diamond argues that they are remarkably similar when it comes to the basic elements of raising children. For example, the average age of weaning in traditional societies is three, and many hunter-gatherer groups practice continual nursing, in which an infant nurses in brief spurts every fifteen minutes or so, a practice that they share with our closest primate relatives. Diamond huffs that “modern human mothers have acquired the suckling habits of rabbits, while retaining the lactational physiology of chimpanzees and monkeys” (183). In climates that allow it, most hunter-gatherers also maintain constant skin-to-skin contact with their babies, and every traditional society surveyed engages in co-sleeping. Most traditional societies also deal with crying children immediately, give their children more autonomy, encourage creative play rather than bombarding them with toys, and practice allo-parenting, in which individuals beyond the family assist in raising a child. Diamond believes that parents in modern societies should consider adopting all these practices, observing that “other Westerners and I are struck by the emotional security, self-confidence, curiosity, and autonomy of members of small-scale societies, not only as adults but already as children” (208). While traditional societies’ treatment of the elderly varies greatly, Diamond argues that many rely on the elderly for historical memory and tasks such as childcare—areas in which modern societies should utilize their aged population more as well.

The most important lesson that Diamond learned from his time among traditional groups in New Guinea is “constructive paranoia,” an oxymoron that reflects the importance of being aware of one’s environment and the potential dangers within it. Diamond believes that one close corollary of this lesson for his readers is to think more clearly about the dangers we face in state societies. We should not focus our fears on risks like genetic modification, which has an extremely low chance of killing us, and should focus instead on driving safely and wearing a helmet while biking, both of which would save many lives every day.

Diamond’s interesting discussion of religion does not really fit with the rest of the book, as he makes little attempt to describe what his readers can learn from traditional religions. Diamond instead offers a learned exposition about how religion possibly originated among humans in order to explain the world around them and make predictions about it. He also explains how the functions of religious belief differ between traditional and modern societies. For instance, religion’s role in defusing anxiety was greater in traditional societies, where the threat of violence and other dangers was much higher than in modern societies. On the other hand, religion’s function in larger states of providing people with codes of behavior when interacting with strangers was much less necessary in smaller traditional societies where everyone knew everyone else.

The section on multilingualism begins by making an impassioned plea for the preservation of traditional languages, sadly noting that a language disappears every nine days. Diamond believes that this trend is tragic as “each language is the vehicle for a unique way of thinking and talking, a unique literature, and a unique view of the world. Hence looming over us today is the tragedy of the impending loss of most of our cultural heritage” (370). Diamond then notes that multilingualism is widespread among small-scale societies that frequently come into contact with groups speaking a language different from their own. The section ends with Diamond forcefully arguing that people living in mostly monolingual societies such as the United States need to strive to learn other languages. Besides its cross-cultural benefits, studies show that learning a different language results in a more flexible mind and can even stave off the effects of Alzheimer’s for a time.

The book’s last section details how the study of traditional societies provides guidelines to reduce hypertension and diabetes in today’s industrialized societies. In it, Diamond points out that the rates of non-communicable diseases are extremely low in traditional societies and correctly argues that many of these diseases can usually be staved off by lifestyle changes. The section ends with Diamond’s prescription for leading a healthy lifestyle.

As it details what people living in modern states can learn from traditional societies, The World until Yesterday often reads like some sort of weird self-help book filled with insights that range from the useful and interesting to the unoriginal and humdrum. The conclusions that Diamond draws from traditional societies about how to lead a healthy lifestyle definitely fall into the latter category. After giving the standard advice about limiting one’s intake of calories, exercising more, not smoking, and eating more fruits and vegetables, Diamond admits: “This advice is so banally familiar that it’s embarrassing to repeat it.” Although he then goes on to justify his conclusions by stating that “it’s worth repeating the truth,” this reviewer at least was left thinking: “Yes, your prescriptions in this area are quite banal, aren’t they?” (451).

One wonders how the average reader of Diamond’s book could implement some of his other most worthwhile suggestions. Many readers will agree that bilingualism is important, but immersing children in multiple languages early in life is extremely difficult in countries with one dominant language unless a family has the money to hire caregivers who speak a foreign language and/or send their children to a special school. Much of Diamond’s advice for childrearing is equally difficult to follow. Although a large percentage of his readership could presumably implement allo-parenting to some extent, few harried parents are in a situation where they can engage in continual nursing or have constant skin-to-skin contact with their child. Diamond himself acknowledges that at least one of the lessons taught by traditional societies, the methods that many of them use to resolve conflict peacefully, is a change that should be adopted at the societal rather than the individual level.

The World until Yesterday does a good job of providing an overview of differences between traditional and state societies in the areas that Diamond chooses to highlight. But the lessons that he argues modern individuals and societies should glean from traditional groups are often either trite or too difficult for the average person to implement.

- Eric Platt

Copyright – All Rights Reserved