February 2010 archive

What’s so Special about Morality?

It might seem a facetious question to ask: what’s so special about morality? But it’s a question I think worth asking in a serious way before embarking on any in-depth discussion of the nature of morality. And one that I don’t think is asked seriously often enough.

Plenty of discussions in ethics begin by stating that the central question is, as Bertrand Russell puts it, “what sort of actions ought men to perform” (Russell, 1910). So ethics often begins with ‘ought’ as the starting point.

But not any old ‘ought’. This is no simple ‘should’ of prudence or expedience. No ‘should’ in the sense that one ‘should’ do something but is not obliged to do so. This is a bare-knuckled ‘ought’, implying a certain obligation, nay, inescapability.

Richard Joyce (2006) calls this feature of morality practical clout, a term I’m happy to adopt to describe the apparent gravity of ‘ought’ (although, if it were up to me, I would have called it practical mojo).

I’ve cobbled together a list of some of the ‘special’ features of the moral ought – these aren’t stated as being necessarily ought-worthy, nor would they be agreed upon by all thinkers, but they do appear often enough throughout the literature to bear a mention:

  • Impartial – morality is beyond self-interest
  • Universal – morality applies to everyone (or everyone within a particular group, or every moral agent, depending on your bent)
  • Objective – something makes moral propositions true other than our subjective whim or personal preferences
  • Justifiable – a moral norm is defendable, there are reasons for accepting it, it’s not just arbitrary
  • Overriding – moral norms are more important than norms of custom or individual preferences, they override them when there’s an apparent conflict
  • Non-negotiable – moral norms are binding, obligatory, you can’t just weasel out of them if you don’t want to obey them
  • Action-guiding – moral norms steer behaviour; they’re not just matters of theoretical interest, where one might proclaim a moral principle but feel no inclination to follow it in practice
  • Imperatives – imbued with the force of necessity rather than contingency
  • Other-focused – moral norms are not just about individuals in isolation but are about the interaction between individuals
  • Harm – many moral norms are centred around preventing harm to others
  • Fairness – many moral norms are also centred around preventing injustice
  • Altruism – many moral thinkers believe that even if morality doesn’t spring from altruism, it’s compatible with altruism and the Golden Rule
  • Happiness – moral norms either directly or indirectly lead to happiness or wellbeing
  • Rational – moral norms come from rational agents, they’re justifiable using reason
  • Concerns ‘value’ – as opposed to ‘fact’

I’m sure this isn’t an exhaustive list, but I think it captures a decent proportion of the features that many thinkers believe make morality ‘special’.

But I have my doubts about some of these features. I’ve read altogether too many papers in ethics that take some (or all) of these things as being so obvious as to be beyond question. And I hardly believe that’s a healthy way of beginning an ethical enquiry.

Now, I don’t doubt that many of these features are intuitively appealing. But it’s the very fact that they seem obvious in everyday experience that ‘ought’ to raise an eyebrow.

Compounding this concern is my growing conviction that morality is best understood as an evolved device for encouraging a social animal to behave itself in a social context. And if that’s the case, then many of these obvious ‘special’ features of morality are illusory.

It might have been useful to treat morality as if it was inescapable, rational, non-negotiable etc, but that doesn’t necessarily mean that it is. In fact, moral norms might be entirely prudential ‘shoulds’ driven by instrumental values that simply carry a greater emotional weight because of the gravity of the implications of ‘moral’ decisions compared to non-moral decisions.

That doesn’t really make morality special, but it makes it no less important.

Rethinking Moral Universality

It’s a popular notion: that morality is somehow universal. If murder is wrong, then it’s wrong for everyone. Or, more pointedly, if murder is wrong, then regardless of what I might think on the matter, it’s wrong for me.

There certainly seems to be something about morality that sets it apart from other nominally prescriptive things, like etiquette (chewing with your mouth open is wrong) or instrumental value (using a hammer to drive in a screw is wrong). These appear to be more contingent, dependent on the situation at hand, the culture of the day and the objective to be achieved.

Not so morality. Murder is wrong. That’s all there is to it. It’s not a matter of convention, nor a matter of instrumental expedience. It’s just plain wrong.

But the pickle in the ointment is the seemingly intractable problem of getting everyone to agree on the universal moral norms that apparently apply to everyone. Moral disagreement is a real bugger for the notion of universality.

There are a few responses to the problem of moral disagreement, many of which are summed up tidily in a paper by John M. Doris and Alexandra Plakias called ‘How to Argue about Disagreement: Evaluative Diversity and Moral Realism’, which appears in volume two of the wonderful Moral Psychology series, edited by Walter Sinnott-Armstrong.

One common response is to deny that there is, in fact, any moral disagreement. Many moral realists contend that all moral disagreements would dissolve if the agents occupied optimal conditions, i.e. if they were furnished with all the relevant facts and had the rational capacity to understand them.

I don’t know about you, but this response troubles me. If we’re dependent on agents occupying optimal conditions to resolve moral disagreements, then we’re pretty much screwed when it comes to building a real-world moral system. There’s no guarantee that anyone ever has, or ever will, occupied the optimal conditions, which might, in real terms, relegate moral facts to some unknowable realm about which we can only speculate. That might be acceptable to some, but to me it just reinforces the notion that we need to develop a practical system of morality that actually works in the real world.

Another reason I’m wary of this notion of universality of norms is that I think it misconstrues the notion of universality in the first place. This is because it focuses on norms, the rules and principles themselves. However, this isn’t the only way to think of universality.

If it turns out that the norms are just strategies for solving a deeper problem – such as the problem of how to get large unrelated groups of individuals to live and work together without screwing each other over – then we can see that there might be many strategies for solving this problem. However, no one strategy is going to be best in every circumstance.

As such you’d expect there to be a variety of norms – or strategies – for solving the problem, and you’d expect that many of these norms would conflict with one another. This is because sometimes it’s better to forgive, sometimes it’s better to punish, and sometimes it’s better to make an example of someone.
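The idea that different norms can be rival strategies for the same underlying problem – with no single winner across all environments – can be made concrete with a toy model. Below is a minimal sketch in Python of an iterated prisoner’s dilemma, pitting a ‘forgiving’ norm (tit-for-tat) against a ‘punishing’ norm (grim trigger) in two hypothetical social environments. The strategy names, payoffs and opponents are all my illustrative assumptions, not anything from the literature discussed here.

```python
# Toy iterated prisoner's dilemma: two norms ("forgive" vs "punish forever")
# each do better in some social environments and worse in others.
# Payoffs and opponents are illustrative assumptions only.

# Standard payoff matrix: (my move, their move) -> my score
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(my_moves, their_moves):
    """Forgiving norm: cooperate first, then mirror the opponent's last move."""
    return 'C' if not their_moves else their_moves[-1]

def grim(my_moves, their_moves):
    """Punishing norm: cooperate until the first defection, then defect forever."""
    return 'D' if 'D' in their_moves else 'C'

def erring_reciprocator(my_moves, their_moves):
    """Environment 1: a well-meaning reciprocator that slips up once, on round 5."""
    if len(my_moves) == 4:  # its fifth move is an accidental defection
        return 'D'
    return 'C' if not their_moves else their_moves[-1]

def alternator(my_moves, their_moves):
    """Environment 2: a cynic that alternates defection and cooperation."""
    return 'D' if len(my_moves) % 2 == 0 else 'C'

def play(strategy, opponent, rounds=100):
    """Total score earned by `strategy` over an iterated game against `opponent`."""
    mine, theirs, score = [], [], 0
    for _ in range(rounds):
        a = strategy(mine, theirs)
        b = opponent(theirs, mine)
        score += PAYOFF[(a, b)]
        mine.append(a)
        theirs.append(b)
    return score

# Against the erring reciprocator, forgiveness outscores permanent punishment;
# against the probing cynic, the punishing norm comes out ahead.
forgiving_env = (play(tit_for_tat, erring_reciprocator),
                 play(grim, erring_reciprocator))
cynical_env = (play(tit_for_tat, alternator),
               play(grim, alternator))
```

Neither norm dominates: which one ‘works’ depends on the social environment it finds itself in, which is exactly the pattern you’d expect if norms are strategies rather than universal truths.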

So, when we’re confronted with moral disagreement, instead of looking upon the conflicting norms and shaking our heads in resignation, we should look at the problem the norms are trying to solve. It might turn out that one norm is more effective at solving that problem in the circumstances at hand. Or it might turn out that both norms are effective in different environments, and we lack the information to know which environment we’re in, so we must be agnostic as to which one to prefer. We could also reconcile those who hold the opposing norms by reminding them that they’re both trying to achieve the same end – or solve the same problem – but with different strategies for doing so.

I’m not suggesting this will be easy. The notion that norms are universal is deeply rooted in our psychology. It takes a step to see that it’s not the norms that are universal, but the problems they’re trying to solve. Even then, the problems may not be objective, as such, but they are likely to be fewer than the norms that arise to solve them. Still, I think this approach has a lot more going for it than hoping agents will hit upon the optimal conditions and discover that one set of norms rules above all others. For one, I think it might actually work.