Free will and politics

by M.S., economist.com
January 12th 2012 6:19 PM

MY COLLEAGUE has a beautifully written post at Big Think on recent challenges to the idea of free will. He argues that the claim that free will is an illusion is itself illusory, as it misunderstands the meaning of the term "illusion". To claim that something is an illusion is to say that it is a deceptive representation of some other actually existing thing which one has experienced. For example, a desert mirage looks like the sea, but there ain't no water out there. But an internal experience such as the sense of having free will has no external referent; it is only an internal experience, and there's no way someone could have an illusory experience of the sensation of freedom as opposed to a real one. It'd be like having an illusion of being angry. Or as he more elegantly puts it:

"That free will you thought you felt, that was an illusion." What? How would you know? Maybe you have a theory that says every event is necessitated by the laws of nature and the prior history of the universe. In such a world, can there be something it is like to experience the absence of necessitation?

Now I deliberately and willfully touch the tip of my nose. I certainly don't feel myself compelled by laws of the cosmos and the sum of time. Neither do I feel myself not so compelled. I feel myself willing a nifty little nose-touching.

I think this misses the most important point about contemporary challenges to the idea of free will. The notion that free will is an illusion because reality is mechanistically determined is the old 19th-century challenge to free will, Dostoevsky's "Underground Man" piano-key universe of predictable physical collisions. The new challenge to free will comes from a different direction: neuroscience's discovery that the brain is a collection of diversely oriented modules, and that our understanding of our own intentionality is to a great degree a legitimating fiction which one module in the left hemisphere of the brain retroactively imposes on the decisions different modules make. The old challenge to free will came from the "free" side. The new challenge comes from the "will" side.

Karl Smith says the two main things I would have said. First, something doesn't necessarily need an existing, or even physically possible, external referent for there to be an illusory experience of it. He uses the example of an illusion of a ghost. Second, he uses the perfect hypothetical to illustrate the point about modular brain function:

First, we set up a video camera. Then we open Will’s skull. Then we sever the corpus callosum. We then place a divider between Will’s right and left eyes. We then post a message seen only by the left eye that says “Touch Your Nose”.

Will will likely touch his nose.

Then we ask Will: why did you touch your nose? He might say something like – I just wanted to make sure you hadn't paralyzed me yet.

We say thank you very much. We sew Will back up and then we play the recording for him.

Will then sees that while he believed the touching of his nose to be contingent on a process he was consciously aware of, this was not the case... Because of this we can meaningfully say that Will experienced the illusion of free will.

The example Mr Smith uses here is a version of experiments cited by neuroscientist Michael Gazzaniga, whose recent book "Who's in Charge? Free Will and the Science of the Brain" seems to have touched off the whole furore. In the experiments he talks about, split-brain subjects, whose corpus callosum had been severed, would perform actions in response to visual stimuli sent only to their right hemisphere; their left hemisphere, which controls language but had no knowledge of the stimuli, would then confidently invent and report obviously inaccurate explanations for the actions the subject had just performed. Show the left brain a chicken claw and the right brain a snow scene; then ask which of a new set of images are most relevant. The subject's left hand will point to a snow shovel, the right hand to a chicken. Ask "Why did you point to the shovel?" and the subject (or rather the subject's left brain) explains that "he" had pointed to a shovel because you need a shovel to clean out the chicken shed. Here's Mr Gazzaniga in an interview in Scientific American:

What neuroscience has been showing us, however, is that it all works differently than how we feel it must work. For instance, neuroscientific experiments indicate that human decisions for action are made before the individual is consciously aware of them. Instead of this finding answering the age-old question of whether the brain decides before the mind decides, it makes us wonder if that is even the way to think about how the brain works. Research is focused on many aspects of decision making and actions, such as where in the brain decisions to act are formed and executed, how a bunch of interacting neurons becomes a moral agent, and even how one’s beliefs about whether they have free will affect their actions.

So, listen. Everything has a political relevance, and this being a political blog, I'm now going to turn to the political-philosophy implications of stances on free will. Or, rather, I'm going to let Mr Gazzaniga bring them up first. As he says in the interview,

For me, it captures the fact that we are trying to understand a layered system. One becomes cognizant there is a system on top of the personal mind/brain layers which is yet another layer--the social world. It interacts massively with our mental processes and vice versa. In many ways we humans, in achieving our robustness, have uploaded many of our critical needs to the social system around us so that the stuff we invent can survive our own fragile and vulnerable lives.

Some currents in modern politics, especially in America, put a lot of stock in the vision of the citizen as a freely acting subject who determines his own priorities, makes his own choices, and can thus be held accountable for those choices. To the extent that you view human decision-making as centered in a conscious monad that decides "I'm going to touch my nose now" or "I'm going to drink that soda" or "I'm going to pay $5,000 a month for this health insurance policy because I estimate the chances that I will need coverage above $1m in lifetime cancer treatment costs at a percentage that makes this a rational decision", you may not think that social institutions, including government, need to concern themselves much with what decisions that subject makes. But to the extent that you view citizens as each made up of collections of interacting and often internally conflicted modules, you start to see a much wider role for social institutional actors.

I'm going to have a very hard time condensing what I mean here into a paragraph at the end of a blog post, but roughly: we assign responsibility for desired public outcomes to decision-making units that communicate well internally and have internally shared interests in that outcome. So in general, it makes a lot of sense to make individuals responsible for themselves: modules inside one person's brain may be distinct but they're usually in very close communication and generally share a common interest. However, some brain modules don't communicate well and may conflict with each other. One module in your brain wants to be fit; a different one wants to drink that soda. Taking as a given for the moment that we have a public interest in people being fit, it may make sense to have social institutions work collectively with the modules in everyone's brains that want to be fit, rather than depending on each individual to resolve the contest between their get-fit module and their drink-soda module. The shift in thinking here isn't necessarily so different from the Freudian development of the idea of the subconscious mind. But like psychoanalysis, neuroscience's challenge to the idea that individuals are coherent subjects who make their decisions consciously and can be held responsible for them tends to shift the way one thinks about society and politics. In many cases, holding individuals accountable for the actions of the modules in their heads is not only unfair, it's also counterproductive; solutions pursued at the neuropsychological-pharmacological level or at the social level would be the more effective ones.

(Drawing credit: Andreas Vesalius)

Original Page: http://www.economist.com/blogs/democracyinamerica/2012/01/philosophy
