Thursday, April 11, 2013

Neurophilosophy Challenges the Strategic Use of Moral Reasoning: A Review of Churchland's Braintrust


Patricia Churchland. Braintrust: What Neuroscience Tells Us About Morality. Princeton: Princeton University Press, 2011. 288 pgs. $24.95 US Hardcover.  ISBN: 978-0691156347.
 
Questions at issue: 1. Where do moral sentiments come from?  2. Are the biological origins of moral sentiments relevant in evaluating moral norms and the motivated reasoning of moral authorities?
“We need a critique of moral values, the value of these values themselves must first be called in question—and for that there is needed a knowledge of the conditions and circumstances in which they grew, under which they evolved and changed.” Nietzsche, On the Genealogy of Morals, Preface, §6.
Critical investigation into the disturbingly non-transcendent origins of morality is not new. Evolutionary and neurological investigations have been trickling out of the academy and into the popular press for a couple of decades. However, these have so far produced more reaction than consideration, both in the general public and among academics. If anything, prevailing beliefs about the origins of morality have been wrapped in anti-scientific rhetorical defenses, most of which deny out of hand that science could make any contribution to the formulation of personal ethics or public policy.
No stranger to the bulwarks constructed to shield the humanities from empiricism, neurophilosophy pioneer and academic blockade-runner Patricia Churchland offers perhaps the strongest and most concise defense of the interdisciplinary study of human morality. Churchland’s 2011 book Braintrust: What Neuroscience Tells Us About Morality focuses on the deceptively simple question of where values come from. Though the question is not significantly different from that posed by Nietzsche, its 21st-century incarnation cannot be answered by speculative aphorisms. To refine the question and establish a methodology for answering it, Churchland constructs two mutually reinforcing arguments, one scientific and the other philosophical. In the scientific argument, Churchland proposes that our feelings about social responsibility, self-restraint, and the like may have emerged from the neurochemical reward system that ensures parent-child bonding in all mammals. The philosophical argument, equally important and skillfully interwoven with the scientific one, is that rhetorical attempts to exorcise science from the discussion of moral norms and public policy are logically indefensible.
         
Neuro-Morality
The second, third, and fourth chapters of Braintrust contain the groundwork for a hypothesis of brain-based pro-social behavior. Churchland makes the nontrivial point that morality is inherently social. While I may like to believe that I would act according to a particular ethos even if no one were watching, the fact that I want other people to applaud my integrity manifests its social utility. Living in a group is evolutionarily adaptive, but it requires a mechanism to constrain self-interest in order to ensure group cohesion. Churchland examines the evolutionary history of neural systems that extend the instincts for self-preservation, first to offspring and genetic relatives, and eventually to the social group composed of both genetic kin and non-kin on whom the individual depends for survival and reproduction. Churchland is particularly interested in the role of neurochemicals, especially oxytocin and arginine vasopressin, in constructing emotional bonds between parents and children, parents and parents, and even allo-parents caring for offspring that are not their own. Citing studies involving a range of animal species—rats, rhesus monkeys, even fruit flies—Churchland explores the powerful, if complex, influence of oxytocin and vasopressin on animal behavior. Her favorite exemplars of the social effects of neurochemistry are the monogamous prairie voles and their promiscuous cousins, the montane voles. Not only do the two species seem to differ in little more than their brains’ stocks of oxytocin, but artificially increasing the oxytocin levels in the montane vole turns players into family men—just as reducing oxytocin in prairie voles brings on a seven-year itch. While demonstrably influential in bonding behavior, such neuropeptides are not simple, one-cause-one-effect agents. Male rats who receive a shot of oxytocin become tender toward in-group members, but they simultaneously become hostile toward intruders. Oxytocin does not turn an individual into a universal altruist so much as it extends the individual’s self-promoting instincts (somatic effort) to family and, potentially, to immediate community. Just as parental affection may be expanded into care for others, the child’s feelings of attachment to the mother expand to create fears of social isolation in the adult—the origins of shame and approval-seeking. “Depending on ecological conditions and fitness considerations,” Churchland contends, “strong caring for the well-being of offspring has in some mammalian species extended further to encompass kin or mates or friends or even strangers, as the circle widens. This widening of other-caring in social behavior marks the emergence of what eventually flowers into morality” (14).
As the social circle expanded to include non-genetic relatives, brains with greater social intelligence conferred an adaptive advantage:
Expanded memory capacities greatly enhanced the animal's ability to anticipate trouble and to plan more effectively. These modifications support the urge to be together, as well as the development of a ‘conscience’ tuned to local social practices; that is, a set of social responses, shaped by learning, that are strongly regulated by approval and disapproval, and by the emotions, more generally. More simply, mammals are motivated to learn social practices because the negative reward system, regulating pain, fear, and anxiety, responds to exclusion and disapproval, and the positive reward system responds to approval and affection. (15-16)
In other words, culture, like morality, emerges from brain systems that have adapted to form cooperative social units. Both the norms and the individual’s receptivity to those norms depend on a brain that is wired to care what other people think. In the fourth chapter, Churchland surveys the specifically human variables influencing or constraining social behavior, from market complexity to institutionalized religious identities, all of which depend on an interaction between internal (neural) and external (cultural) components. Churchland explores the impact of neurochemicals that influence the more reflective phenomenon of “theory of mind” in social cognition. The “human” social phenomena of cheating, punishment, hierarchy, cooperation, and philanthropic grandstanding have a surprising number of parallels in studies of animal behavior. In the sixth chapter, Churchland identifies brain areas (particularly the prefrontal cortex [PFC]) integral to the sort of predictive social thought needed to create and preserve extended networks of cooperation. While it is the seat of human reflective consciousness, the PFC is not an organ of perfect rationality. Churchland proposes that our focus on the moral or immoral actions of others (including essentialized cultural and religious identities) serves a primarily strategic purpose—shared morality is a means of predicting another’s behavior. As such, it is a heuristic engine. We distrust those who don’t share our moral prejudices, even when their beliefs can be shown to be more mutually beneficial than our own.
    
Qualified Language
Any book that attempts to communicate the findings of cognitive science to the non-specialist is bound to tempt some readers into untenable over-generalizations about the scientific evidence or its implications. However, Churchland carefully separates what in the study of moral origins can be empirically studied from what cannot. She is reductionist in this sense, but not in the sense that the general public uses the word (meaning a sort of intrusive cynic who does violence to the transcendent object under study). She also inserts qualifying statements that discourage the reader from jumping to single-cause explanations (e.g., “oxytocin causes morality”). She reminds us that in even the simplest questions regarding the neural correlates of morality, “the answers are certainly going to be complex, even in voles, since the neurons affected are part of a wider system, meaning that what is going on elsewhere—in perception, memory, and so forth—will have an impact” (50). “Single genes seldom have big effects, but are part of multinode gene networks, and part of gene-brain-environment networks with recurrent loops” (53). “[I]f a certain form of cooperation, such as making alarm calls when a predator appears, has a genetic basis, it is likely to be related to the expression of many genes, and their expression may be linked to events in the environment” (102). These statements are the dry, qualified, scientific versions of the humanists’ reminder of the roles of culture and experience in individual development. Churchland goes on to question the hypotheses of cognitive scientists such as Marc Hauser and Jonathan Haidt, whose propositions about human morality are based on empirical evidence but might exceed the parameters of the particular data. She even challenges claims by neuroscientists Marco Iacoboni and Giacomo Rizzolatti, whose research on mirror neurons has prompted a great deal of speculation about the nature of empathy and imitation. Whereas mirror neurons have been assumed to cause one individual to understand another by first understanding her/himself, Churchland argues that the causal order could actually be reversed—that mirror neurons function primarily to simulate another’s action to enable the individual to predict or imitate it. Rather than beginning as self-representations, mirror neurons may be necessary for creating self-representations from observed experience. While the reader might make the simplified observation that Churchland plays the proper role of philosopher by carefully analyzing logical inconsistencies in scientific hypotheses, the fact that her counter-arguments are equally grounded in empirical research should lead us to ask why we ever began to think that philosophy and science were different disciplines.
    
The Naturalistic Fallacy Fallacy
Framing her scientific argument, Churchland crafts a philosophical argument directly engaging the common claim that science has no place in the discussion of ethics or public policy. This claim takes various forms. Some forms are little more than tautological “semantic wrangles,” such as “only humans have human morality,” or the assumption that morality requires reasoning and reasoning requires language, therefore only humans are moral. One common argument politely demonizes scientific approaches as “scientism,” a vaguely defined crime that serves to do little more than distinguish “us” (humanists/theologians/policy-makers) from “them” (scientists and interdisciplinary traitors like Churchland). Another tactic exploits a passage from David Hume’s A Treatise of Human Nature (3.1.1.27) that has been decontextualized and over-simplified to say “you can’t get an ought from an is” (i.e., moral conclusions cannot be derived from factual premises). Such mixing of factual arguments with moral ones was dubbed the “naturalistic fallacy” by philosopher G. E. Moore. There are, indeed, cases in which such a transition would be fallacious. We commonly assume that whatever is “natural” is therefore “good” and whatever is “unnatural” is bad, until we come across obvious exceptions such as naturally occurring influenza and its unnaturally manufactured vaccine. But, as Churchland illustrates, there are also plenty of cases in which moral arguments that are logically consistent but heedless of the facts of nature prove too presumptuous and abstract to find any consistent implementation in reality. Even the most popular rule-based morals fail in practice, not so much due to human frailty as to the frailty of rule-based reasoning itself. As Churchland demonstrates, even the Golden Rule cannot function as a rule without a host of prior, unexamined assumptions to guide its interpretation. It also carries some unrecognized consequences. If a self-mutilator wants others to find the same salvation-through-pain that he does, is he morally obligated to torture them? The Golden Rule has a function, but not as an a priori rule. According to Churchland, the Golden Rule primarily serves to activate empathetic, pro-social behavior already rooted in our evolved neuroanatomy, not in any set of rule-governed cultural norms. Kant’s categorical imperative and the rule-based systems proposed by Jeremy Bentham, John Rawls, and Peter Singer have similar problems. The idea of rules, like the idea of reason, is the problem. It creates an imagined antecedent that is not, ultimately, its origin. As philosophers from Aristotle and Mencius to Hume and Nietzsche recognized, our reflective rules are ad hoc generalizations. Churchland cites the now-famous interview of Georgia congressman Lynn Westmoreland by Stephen Colbert. Westmoreland vociferously advocated the inclusion of a graven image of the Biblical Ten Commandments in a Louisiana courthouse because, he insisted, those commandments are the origin of all morality. Yet the zealous congressman could recall only three commandments, and those in highly abbreviated form. Unsurprisingly, the three he recalled (“Don’t murder…don’t lie…don’t steal”) appear in law codes predating the Bible, such as Hammurabi’s Code and the Laws of Manu, as well as in isolated cultures across the globe that have had scant contact with the West and none at all with Judaism or its offshoots.
Churchland’s argument is that, instead of denying or lamenting the ad hoc nature of morality, we will achieve more substantive moral progress by admitting and systematically studying the evolved neurological structures that precede our discursive norms.
    
The Evolution of Bioethics
The relevance of Braintrust is not limited to the academy or the armchair. If the is/ought distinction is unduly exaggerated in moral philosophy, it becomes a weapon in the sphere of public policy—an excuse to defund or severely regulate research that does not reinforce popular prejudice. After all, what is at stake is the power to shape and regulate the behavior of others, and maintaining that power depends on popular appeal rather than empirical evidence. Churchland seems to have learned this political truth in 2008 when she presented a paper to George W. Bush’s Council on Bioethics.
The council was already notorious as an ideological star chamber established to construct an intellectual façade for the administration’s war on stem cell research. With a few exceptions (including Michael Gazzaniga, who seems to have adopted a curious methodological relativism), the council was composed primarily of right-wing political pundits, such as Francis Fukuyama and Charles Krauthammer, rather than research scientists. The council was originally chaired by Leon Kass, who was appointed shortly after the publication of his anti-cloning essay, “The Wisdom of Repugnance” (The New Republic, June 2, 1997, 216.22). In this essay, Kass appeals to inarticulate emotional reactions, not only as a justification for banning scientific research, but as a justification for dismissing reasoned arguments that contradict those emotional reactions.
We are repelled by the prospect of cloning human beings […] because we intuit and feel, immediately and without argument, the violation of things that we rightfully hold dear. [… R]epugnance may be the only voice left that speaks up to defend the central core of our humanity. Shallow are the souls that have forgotten how to shudder.
Not only does Kass use a gut reaction to argue for the implementation of government policy, he uses it to divide the in-group from the out-group, the moral from the “shallow souls.” Kass’ argument exemplifies, perhaps deliberately, Hume’s claim that reason is the slave of the passions. At the same time, it abandons any pretense of prioritizing reason over gut feeling.
As chair of the Council on Bioethics, Kass removed any “shallow souls” who would not ratify the Council’s foregone conclusions—most famously molecular biologist and Nobel Prize winner Elizabeth Blackburn, one of only three research scientists on the 18-member council. Though Kass was eventually replaced by Edmund Pellegrino, the council’s strategy remained dependent on ad hoc arguments and emotionalistic platitudes, particularly the malleable abstraction of “human dignity.” After bioethicist Ruth Macklin publicly pointed out that the term “dignity” served only as a rhetorical red herring, the council, in an effort to salvage its own credibility, invited papers from philosophers, theologians, lawyers, physicians, and politicians, which were published as the report Human Dignity and Bioethics. Though a handful of bioethicists, such as Churchland and Daniel Dennett, tried to explain the nature of Macklin’s argument, most of the articles (including one by Leon Kass himself) aimed to ratchet up the emotional valence of the term rather than clarify precisely how it justified a government ban on life-saving research.
Churchland’s contribution to the report, “Human Dignity from a Neurophilosophical Perspective,” may have been the germ of Braintrust. Besides calling attention to the neural origins of moral sentiment, Churchland describes the tragic history of “misplaced moral certitude.” She points out that past advances in medical technology, including vaccination for smallpox, anesthesia in surgery and childbirth, dissection of corpses, organ donation, and blood transfusion, were all initially prohibited by religious and political authorities with similar moral certitude (and “wisdom of repugnance”) at the cost of tens of thousands of preventable deaths. The loss of life in these historical examples bears its own emotional valence for those who see human suffering as a greater harm than rule-breaking. More importantly, these examples serve to undermine the is/ought dichotomy by juxtaposing moral norms with the measurable, real-world consequences disregarded by tautological, ought-ought moralizing.
In the council’s published report, Churchland’s essay is followed by a reply from council member and theologian Gilbert Meilaender. Rather than engaging the tenets of Churchland’s argument, Meilaender simply launches an ad hominem attack on Churchland herself for “breath[ing] a spirit of condescension.” Rather than qualifying or refuting Churchland’s evidence, Meilaender denies her right to cite it. Like Kass, Meilaender appeals to sentiment as a power greater than reason and claims that if Churchland does not feel the same disgust a Catholic feels at HPV vaccinations or stem-cell research, she is unfit to question them. “Unless and until one is capable of that,” Meilaender demands, “the most dignified thing to do would be to remain silent.” In other words, only those who share the same foregone conclusion are allowed to question its logic or implications. Conspicuously, Meilaender invokes the term “dignity” in an attempt to silence Churchland, proving her (and Macklin’s) original point—“dignity,” like “wise disgust,” is not a reason but a rejection of reason and testable evidence in moral arguments. What Meilaender forgets to mention is that this emotionalistic certainty, immune as it is to rational criticism, drafts public policy and impacts the lives of thousands, if not millions, of people with Parkinson’s disease, cervical cancer, and other potentially preventable diseases. Neither Meilaender nor Kass inquires into the gut feelings of those crippled by these diseases, nor do they invoke “human dignity” in their defense.
By openly exhibiting and even prioritizing the same sorts of behavior observable in monkeys and rats, professional moralists like Kass and Meilaender prove Churchland’s argument in the very tactics they use to attack it. Moral arguments begin with evolved, brain-based heuristics that precede and structure conscious reasoning. This does not make them bad or good, but it makes them deceptively convincing when they are at their most self-indulgent. The most highly educated modern human is all too capable of ignoring evidence and abandoning reason whenever he feels like it. More importantly, moralists don’t seem to regard these feelings themselves as needing explanation. This is as problematic in the philosophy of Emmanuel Lévinas (whose empathy-based morality famously failed to find real-world application in the Israeli-Palestinian conflict) as it is in the theology of Gilbert Meilaender or the punditry of Leon Kass. Since demands for “ethics in science” can be a smokescreen for imposing irrational restrictions on scientific research and its ability to save and improve lives, we might at least counterbalance the ethics of science with a science of ethics. By investigating the cognitive and evolutionary origins of moral sentiment, we do not invalidate that sentiment in policy discussion. Sentiment is inextricable from human thought. Rather, the science of ethics imposes a burden of proof on those who would exploit isolated anecdotes to evoke irrational emotion and then leap to non sequitur generalizations that would regulate the lives of others. It requires us to factor in actual outcomes, such as the loss of life that follows from denial of treatment, instead of assuming that Providence will protect the righteous.
The introduction of these new criteria will require a reevaluation of those who have been designated as moral authorities. Recognizing the all-too-human (or mammalian) motivations of moralists naturally prompts a reassessment of trust, and it is with the question of trust, particularly as it bears on the formation of institutions like the Bioethics Council, that Churchland concludes Braintrust.
[W]hat kind of regulations should govern stem cell research? To begin to make progress on that question, one has to know quite a lot of science—what stem cells are, what about them makes them suitable for medical research and therapy, what diseases might be addressed using stem cell research, and what objections might be raised against it. (204)
These are simple questions, but they illustrate the false dichotomy of is and ought. While these questions do not exclude moral philosophers, theologians, or armchair commentators, they do introduce new requirements for methodological rigor, predictive accuracy, and accountability in a discourse that has traditionally relied on ad hoc reasoning and sensationalist anecdotes.
As research into the structure of the brain progresses, questions about brain-based morality are going to become even more common and more heated. Recently, President Barack Obama introduced the BRAIN Initiative, a project akin to the Human Genome Project. Assisting him with this introduction was NIH Director Francis Collins, who is serving as de facto director of the BRAIN Initiative in its early stages. In the past, Collins has not been shy about his belief in the metaphysical origins of moral judgment. Explaining his book, The Language of God: A Scientist Presents Evidence for Belief, Collins explicitly bars moral cognition from scientific study, implying that some sort of social collapse will follow if we get too inquisitive:
After evolution had prepared a sufficiently advanced ‘house,’ the human brain with all of its neurological complexity, God gifted humanity with something special that makes us different from all the animals, the knowledge of good and evil, the Moral Law, with free will, which is not an illusion, and with a soul. ... If the moral law is just a side effect of evolution, then there is no such thing as right or wrong, good or evil. It’s all an illusion. We’ve been hoodwinked by natural selection into thinking that there is such a thing. Are any of us, especially the strong atheists, really prepared to live our lives within that worldview? (2008)
The answer to that last question would be equally well put to Collins himself. A geneticist and professional administrator, he is new to neurobiology, and it remains to be seen whether his stated beliefs will conform to the evidence or whether he will follow in the footsteps of morally certain policy-makers like Kass and Meilaender. For neurophilosophers, the short answer to Collins’ question is “Yes.” Collins may not like Churchland’s thesis in Braintrust, but it is precisely because the people who hold the purse strings for scientific research frequently share his dichotomized view that Braintrust is a very timely and important argument.

- Eric Luttrell
 
Copyright – All Rights Reserved