Evolution, Morality and Truth

Published by timdean

There’s a widely held – yet mistaken – belief that all cognitive processes are somehow intended to find truth; we think in order to understand the way the world is. And this applies equally to moral cognition: it’s intended to find the moral facts of the matter. Is it right or wrong to kill an innocent? Is it right or wrong to lie or cheat?

Makes some intuitive sense – after all, why employ energy- and time-consuming cognition if not to improve the accuracy with which we apprehend the world? And surely a more accurate representation will also aid us in decision making, right?

But this rests on two crucial – and oft-unexamined – assumptions that are not nearly as robust as they might appear at first glance. If these assumptions are indeed flimsy, then it could be that morality actually has nothing to do with finding truth, and is instead simply about finding the best course of action in a given circumstance – ‘best’ for the individual at the time by their proximate reckoning, and best for the individual’s genes on an ultimate level.

Rational Agents

The first confounding assumption is that humans are rational agents. A rational agent “always chooses to perform the action that results in the optimal outcome for itself from among all feasible actions,” according to good ol’ Wikipedia. Basically, a rational agent has beliefs and desires, and draws upon these beliefs in an attempt to satisfy their desires. All very mechanical, very rational, very Mr Spock.

But… we’re so not rational agents. We don’t actually weigh up our options in a balanced, impartial way. Instead, we use short and fast heuristics fuelled by emotional impetus to make decisions. There’s a growing amount of research that undermines the rational agent hypothesis, rational choice theory and exchange theory (Lawler & Thye, 2006; Collins, 1993; Hammond, 2006; amongst others). In their place, it’s better to think of us as emotional-intuitive agents (Haidt, 2008).
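To make that contrast concrete, here’s a minimal toy sketch of the two decision procedures – exhaustive weighing of every option against an expected payoff, versus a fast heuristic that grabs the first option whose emotional ‘tag’ feels good enough. The option names, payoffs and gut-feeling scores are entirely hypothetical illustrations, not drawn from any of the research cited above.

# Illustrative sketch (hypothetical names and numbers) of the two decision procedures.

def rational_choice(actions, expected_payoff):
    """Weigh every feasible action and pick the one with the best expected payoff."""
    return max(actions, key=expected_payoff)

def intuitive_choice(actions, gut_feeling, good_enough=0.5):
    """Take the first action whose emotional 'tag' clears a threshold - no exhaustive weighing."""
    for action in actions:
        if gut_feeling(action) >= good_enough:
            return action
    return actions[0]  # nothing felt right, so fall back on the first option considered

options = ["cooperate", "defect", "walk away"]
payoff = {"cooperate": 0.6, "defect": 0.9, "walk away": 0.2}  # hypothetical payoffs
affect = {"cooperate": 0.8, "defect": 0.1, "walk away": 0.4}  # hypothetical gut feelings

print(rational_choice(options, payoff.get))   # -> defect (the 'Mr Spock' answer)
print(intuitive_choice(options, affect.get))  # -> cooperate (fast, feels right, job done)

The point isn’t the particular numbers, it’s the shape of the procedure: the first agent needs complete information and does a full comparison, while the second commits as soon as something feels acceptable – cheap, quick, and only loosely connected to getting the facts right.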

Now, that seems an odd way to find the truth of the matter, such as which possible behaviour is best for us to adopt. But, nevertheless, that’s how we function. Truth isn’t as important as we might think in directing behaviour.

Autonomy Assumption

The second assumption is sometimes called the ‘autonomy assumption’. It goes a little something like this:

People have, to greater or lesser degrees, a capacity for reasoning that follows autonomous standards appropriate to the subjects in question, rather than in slavish service to evolutionarily given instincts merely filtered through cultural forms or applied in novel environments. Such reflection, reasoning, judgment and resulting behavior seem to be autonomous in the sense that they involve exercises of thought that are not themselves significantly shaped by specific evolutionarily given tendencies, but instead follow independent norms appropriate to the pursuits in question (Nagel 1979).
Stanford Encyclopedia of Philosophy

What this is saying is that the cause of reasoning is not as important as the reasoning itself, and it formed the foundation of Nagel’s attack against any kind of strong evolutionary ethics. Basically, it doesn’t matter what caused someone to start thinking about a particular thing, but it matters what their reasons are for believing it. Evolution might have endowed us with a disposition to think more about mathematics than music, but the content of the thoughts about maths – such as whether 2 + 2 = 4 – has nothing to do with evolution; evolution can’t make 2 + 2 = 5.

So, too, for ethical thinking, says Nagel. “No one, to my knowledge, has suggested a biological theory of mathematics, yet the biological approach to ethics has aroused a great deal of interest,” says Nagel (1979). But, this argument – and assumption – only holds if the subject matter is truth-seeking. Mathematics is truth-seeking. Physics is truth-seeking. But what if ethics is not?

Going back to the definition of the autonomy assumption above, what if ‘reflection’, ‘reasoning’ and ‘judgement’ don’t actually influence behaviour in ethical thinking? What if the ‘exercises of thought’ are themselves shaped by evolutionary processes – such as through unconscious domain-specific modules or moral intuitions? What if behaviour or discourse in the social sphere isn’t about finding the truth of the matter, but about achieving desirable outcomes for the agents involved, quickly and with limited information and energy expenditure? Perhaps over-rationalising the discipline will misrepresent what it’s actually there for in the first place.

Morality Isn’t About Truth

If we are comfortable abandoning these assumptions, and with them the notion that moral thinking and moral discourse are about finding truth, then we can simply wash away many of the most perplexing problems in ethics and metaethics as non-problems. Moral discourse might feel like it’s talking categorically – talking about truths – but if it’s not, then we can just ditch cognitivism and move on.

Of course, such a move raises other issues, such as how to justify moral judgements. But that’s another story.


2 Comments

Amoralism and moving past morality « Irresistible (Dis)Grace · 3rd June 2009 at 11:56 am

[…] a belief in subjective experience (such as moral foundations existing as a function of emotions or pragmatic concerns) over objective and external reality (which may be unknowable, nice agnosticism), and I hinted […]

The Problem of Cardinal Values « Ockham’s Beard · 14th June 2009 at 6:33 pm

[…] The only solution I can think of so far is to simply declare something a cardinal value – say, happiness or cooperation – and justify it using entirely rational means without recourse to natural facts and evolution. However, I’m sceptical of this approach for other reasons. […]
