Reaching Heaven: How and Why to Perpetuate the Myth of Free Will
September 13, 2012
Predictably Irrational. Dan Ariely’s work on “the hidden forces that shape our decisions” captures well the truth of human decision making: “free agents” exercise their agency in ways both (1) predictable and (2) irrational.
I read Ariely’s work about a year ago while working as a law school research assistant on a text about law and logic. Through that research, I learned that our decisions are not free in the sense that each alternative is equally available for selection, the way typing Y is just as easy as typing N. Instead, our choices are shaped by a host of cognitive biases, heuristics, and contextual factors, such as the recency effect, the availability heuristic, and consistency bias.
Over the year since I finished that research, I have gradually converted to the determinist camp. Below I illustrate (A) why it is useful to treat illusory free will as non-illusory, and (B) how the illusion of free will informs the ethical use of technology.
(A) Why it is useful to act as though free will is non-illusory
First, let me acknowledge that many serious thinkers contest my premise that free will is illusory. That debate, however, is outside the scope of this article. To avoid a lengthy discussion of causal determinism, hard materialism, quantum decoherence, and compatibilism, I will simply disclose my bottom-line conclusions and move on to why I argue it is useful to treat free will as though it exists.
So here’s the disclosure: I am a hard materialist, incompatibilist, and hard determinist. For those new to the free will dialogue, that means I think that, at the level of a simulating consciousness such as your average Joe, decisions are determined rather than free. As J. J. C. Smart suggests: if determinism is true, our actions are in principle predictable and we are not free; if determinism is false, our actions are random and still we do not seem free. As one author points out, Sam Harris offers a similar opinion:
“In his book, The Moral Landscape, author and neuroscientist Sam Harris mentions some ways that determinism and modern scientific understanding might challenge the idea of a contra-causal free will. He offers one thought experiment where a mad scientist represents determinism. In Harris’ example, the mad scientist uses a machine to control all the desires, and thus all the behaviour, of a particular human. Harris believes that it is no longer as tempting, in this case, to say the victim has “free will”. Harris says nothing changes if the machine controls desires at random - the victim still seems to lack free will. Harris then argues that we are also the victims of such unpredictable desires (but due to the unconscious machinations of our brain, rather than those of a mad scientist).”
For reasons discussed at length on my blog, I agree with Smart and Harris. However, in the context of moral decision making, I find pragmatic reasons to sustain the illusion of free will.
If all behavior is determined, then surely there is no behavior worth praising or censuring, right? To illustrate: a man commits a crime while sleepwalking, or because of a brain tumor. Surely we wouldn’t hold him as morally accountable as we would a comparable criminal in full possession of his faculties, right?
Drawing on my pragmatist philosophical foundation, I would posit that moral responsibility is as true as anything, because it works in the context we are familiar with and communicating in. Permit an explanation, using the illustration of fully capable Jane Employee, who has just been hired and must decide which of five health plans to select.

As she views the screen in front of her, Jane’s consciousness flits back and forth between the five alternatives almost effortlessly (though, in truth, thoughts are not free; they are shackled to the price of the action potentials and neurochemical motion that constitute them. Good luck thinking “freely” without those). Like most humans, Jane is a lazy decision maker and will avoid choosing if she can. “What’s the default if I don’t choose?” she might ask, hoping to finish her election before lunch.

Also like most human decision makers, Jane’s cognitive capacities are hobbled by a host of biases and heuristics, to say nothing of competition for her limited conscious bandwidth. She might fall prey to the recency effect and select the last option merely because it was the one most recently presented to her. Alternatively, she might choose the first (primacy effect). If Jane were raised in a collectivist culture (say, East Asian) rather than an individualistic Western one, she might weigh which plan spreads cost most equitably among the most people. Or perhaps her father counseled her to eliminate the middleman, but she is angry that he didn’t call her on her birthday, subconsciously rebels against his counsel, and feels inclined to choose the middle option (associative bias).
The point of this illustration is that the number of factors relevant to any decision event is extremely large, falling far outside our conscious computational capacity. From our perspective, the constricted data inputs we sense, combined with the incredible complexity of our world, create the perception of agency and of a future that we can help create. Taken together, it appears to us as though we have a conscious self exercising moral agency in a world where future states are highly unpredictable. This perception is sufficient to justify treating moral responsibility as though it were true (even though free will is, when we get down to brass tacks, an illusion). One might analogize this paradigm to Einstein’s theory of relativity: call it the “Theory of Agentic Relativity” if you wish.
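Jane’s “choice” can be caricatured in a few lines of code. This is a toy sketch of my own (the function name, weights, and plan labels are all invented for illustration): a chooser whose pick is fully determined by hidden weights plus order-of-presentation biases, yet which looks like a free choice from the outside.

```python
# Toy model: a "decision" that is nothing but an argmax over causes
# the agent herself cannot inspect.

def biased_choice(options, primacy=0.3, recency=0.2, hidden_bias=None):
    """Deterministically score options; the chooser never sees the weights."""
    hidden_bias = hidden_bias or {}
    scores = []
    for i, opt in enumerate(options):
        score = hidden_bias.get(opt, 0.0)
        if i == 0:
            score += primacy           # first option gets a primacy bump
        if i == len(options) - 1:
            score += recency           # last option gets a recency bump
        scores.append(score)
    return options[max(range(len(options)), key=lambda i: scores[i])]

plans = ["Plan A", "Plan B", "Plan C", "Plan D", "Plan E"]
print(biased_choice(plans))                # primacy dominates: Plan A
print(biased_choice(plans, primacy=0.1))   # now recency dominates: Plan E
```

Nothing random happens here: rerun it and Jane “chooses” identically every time. Only the hidden parameters, not any exercise of will, move the outcome.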
This Theory of Agentic Relativity preserves the ability to make moral judgments even in the absence of free will. The Theory can even survive the Argument that Free Will is Required for Moral Judgments, which goes like this:
1. The moral judgment that you shouldn’t have done X implies that you should have done something else instead.
2. That you should have done something else instead implies that there was something else for you to do.
3. That there was something else for you to do implies that you could have done something else.
4. That you could have done something else implies that you have free will.
5. If you don’t have the free will to have done other than X, we cannot make the moral judgment that you shouldn’t have done X.
Our ignorance of the knowledge needed to predict the effects that deterministically flow from past causes supplies premises 2 and 3. It appears to us as though we have options (and, indeed, that very perception becomes a cause that shapes the choice: the decision might be quite different without the perception of agency). This perception makes morality meaningful, but only in the context of (A) an empathy-programmed community possessing (B) a limited ability to model the determinants of its own behavior. Thus far, evolution has provided modern Homo sapiens communities with both.
To summarize, there is no reason to refrain from pursuing or enforcing morality as long as these two hold: (1) ignorance about the future (creating the perception of agency) and (2) the existence of an empathy-based community of consciousnesses (creating the relevance of morality).
(B) How the Theory of Agentic Relativity informs ethical use of technology
In order to have the empathy-based community that makes morality an interesting question, the members of the community must possess the mechanics of empathy. At some level, these mechanics must include the ability to (1) perceive another’s emotion and (2) internalize that emotion. (In this context, I define emotions roughly as well-being status updates: the kind you used to read on faces, the legacy platform that preceded Facebook.)
The wellspring of empathy in Homo sapiens is the TPJ/MNS neurosystem, regulated by hormones. Human brains have a temporoparietal junction (TPJ) and a mirror-neuron system (MNS). The TPJ separates “self” from “other” emotions and searches the brain for solutions. The MNS allows you to feel the emotion of another (emotional empathy). To illustrate, I draw on these gender-averaged observations:
* When a man’s face stops mirroring a woman’s emotion (he has left the MNS), she feels he doesn’t care, whereas he has merely switched to the TPJ and is trying to solve the problem.
* Men tend to flatten or disguise their facial expressions to suppress showing their emotions; women, on the other hand, tend to exaggerate an emotion observed in another.
* These tendencies are correlated with estrogen/oxytocin in women and testosterone/vasopressin in men. Switch the hormones and you switch the MNS/TPJ ratio.
These on-average gender behaviors demonstrate the two empathy ingredients noted above: the ability to (1) perceive another’s emotion and (2) internalize that emotion.
However, these empathic communities are by no means guaranteed. Emerging technologies will alter the parameters that make the illusion of free will pragmatic to uphold:
* More human consciousnesses will become ever more aware of the determinants of their own choices
* The possibility of creating consciousnesses outside a community of other empathic consciousnesses will become a reality
* The ability to predict the decisions of legacy humans will rise
However, until those parameters dissolve entirely, there are some tweaks that technology users and governors can employ to leverage and perpetuate free will benevolently.
First, use what Richard Thaler and Cass Sunstein suggest in their 2008 book, Nudge: Improving Decisions About Health, Wealth, and Happiness: choice architecture. Understanding the predicates of human decision making, a company might, say, set the health care plan that incentivizes daily exercise as the first option (to benefit from the primacy effect). Or a state licensing board might set organ donation as the default, requiring an opt-out rather than an opt-in.
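Choice architecture is easy to express in code. Here is a minimal sketch of the two nudges just described (the function names and option labels are my own invention, not from Nudge): the architect removes no options; he only reorders them and sets the default.

```python
# Hypothetical choice-architecture helpers: reorder, and flip the default.

def present_plans(plans, preferred):
    """Put the plan the architect favors first, to ride the primacy effect."""
    return [preferred] + [p for p in plans if p != preferred]

def organ_donor_status(opted_out=False):
    """Opt-out framing: donation is the default unless the citizen acts."""
    return not opted_out

plans = ["standard", "high-deductible", "exercise-incentive"]
print(present_plans(plans, "exercise-incentive"))
print(organ_donor_status())   # True: a donor by default
```

Note that every option and every freedom to refuse survives; only the deterministic inputs to the decision (order, default) have been tuned.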
Second: as we create consciousnesses in the future, we can and should deliberately add the building blocks of free will:
* The ability to simulate
* The meme, “I am a free agent”
* A block to awareness of the causal factors underlying decisions of the self
* Ample empathy
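The four building blocks above could be sketched, half-playfully, as a configuration for a designed consciousness. This is my own illustrative framing (the class and field names are invented), not a proposal from any real architecture:

```python
# Toy configuration for a designed consciousness: the perception of agency
# requires simulation, the free-agent meme, and opacity to one's own causes.

from dataclasses import dataclass

@dataclass
class ConsciousAgentConfig:
    can_simulate: bool = True     # the ability to model possible futures
    free_agent_meme: bool = True  # holds the belief "I am a free agent"
    causal_opacity: bool = True   # cannot introspect its own determinants
    empathy_level: float = 1.0    # 0.0 = none, 1.0 = ample

    def perceives_agency(self) -> bool:
        # Remove any one ingredient and the illusion of free will collapses.
        return self.can_simulate and self.free_agent_meme and self.causal_opacity

print(ConsciousAgentConfig().perceives_agency())  # True
```

The point of the sketch is the conjunction: drop causal opacity (full self-knowledge of one’s determinants) and, on this account, the agent no longer perceives itself as free.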
As we design the future to include the mechanics of the golden rule (empathy and free will), we engender a future where benevolence is not only possible but probable. Who knows? We just might make heaven, and that is the most unpredictably rational outcome of all.