Posts Tagged ‘haidt’

Ockham’s Beard at the Sydney AAP Conference!

Well, I say “Ockham’s Beard”, but the referent is really just me, Tim Dean. Yep, I’ll be giving a paper at the AAP Conference entitled “Evolution and Moral Diversity” on some of my more crackpot ideas about evolution and, well, moral diversity. The abstract ought to give some hints as to the content:

How can we account for the vast diversity of moral attitudes that exist in the world, not only between cultures but between individuals within a single culture? Part of the answer may come from looking at our moral psychology and how it has been influenced by evolution. If we have evolved a moral sense that encourages prosocial and cooperative behaviour, as suggested by Haidt & Joseph (2004) and Haidt & Graham (2007), amongst others, we might actually expect there to be a diversity in the function of this moral sense rather than it being homogeneous across all humans. In this paper I argue that the problems of encouraging prosocial behaviour have no single solution that is dominant in all environments, a phenomenon modelled by game theory, particularly the Prisoner’s Dilemma. As such, evolution has primed us with a spectrum of moral inclinations that represent different strategies to solving the problems of cooperation, and these differing inclinations contribute to the observed diversity of moral attitudes.

Basically, I’m arguing three things: that we have evolved a moral sense that influences the way we make moral judgements; that it evolved to help solve the coordination problems that stem from being social animals (because fostering prosocial and cooperative behaviour advanced individual fitness); and that these coordination problems don’t have one single dominant solution in every environment. If all that holds, we’d expect evolution to endow us with a variable and conditional moral faculty that promotes a diversity of ‘solutions’ to the coordination problems. And that’s precisely what we see when we look at the world.
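To make the game-theoretic point concrete, here’s a minimal Python sketch (the payoff numbers are the standard illustrative ones, not taken from the paper). In a one-shot Prisoner’s Dilemma defection dominates, but once interactions repeat and partners can reciprocate, cooperation wins – different environments reward different ‘solutions’:

```python
# Illustrative Prisoner's Dilemma payoffs (T > R > P > S): my payoff
# given (my move, their move), with C = cooperate and D = defect.
PAYOFF = {
    ("C", "C"): 3,  # reward for mutual cooperation
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # punishment for mutual defection
}

def one_shot_best_reply(their_move):
    """In a single anonymous encounter, defection dominates either way."""
    return max("CD", key=lambda my: PAYOFF[(my, their_move)])

print(one_shot_best_reply("C"), one_shot_best_reply("D"))  # -> D D

def total_score(me, them, rounds=50):
    """My total payoff over repeated play, each side seeing the other's history."""
    my_moves, their_moves, score = [], [], 0
    for _ in range(rounds):
        mine, theirs = me(their_moves), them(my_moves)
        score += PAYOFF[(mine, theirs)]
        my_moves.append(mine)
        their_moves.append(theirs)
    return score

tit_for_tat = lambda opp: "C" if not opp else opp[-1]  # reciprocate last move
always_defect = lambda opp: "D"

# Against a reciprocating partner, sustained cooperation now beats defection:
print(total_score(tit_for_tat, tit_for_tat))    # 150: mutual cooperation
print(total_score(always_defect, tit_for_tat))  # 54: one exploit, then mutual defection
```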

This goes against the idea that if something is based on biology, then it must be universal and fixed – the old ‘genetic determinism’ line. Take height: around 80% of the variation in height is accounted for by genes, yet we don’t expect everyone to be the same height. In fact, we expect a variety of heights. Likewise with personality types. Or metabolic rates. Or MHC. In many situations evolution is canny enough not to put all its eggs in one strategic basket. It varies things, and makes them conditional on the environment.
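The height example can even be put in statistical terms. A quick simulation – with effect sizes invented so that genes account for roughly 80% of the variance, matching the figure above – shows that high heritability is perfectly compatible with wide variation:

```python
import random

random.seed(1)
N = 100_000

# Height as a genetic component plus an environmental one. The standard
# deviations are invented so that genes explain ~80% of total variance.
genetic = [random.gauss(0, 8.0) for _ in range(N)]        # variance = 64
environmental = [random.gauss(0, 4.0) for _ in range(N)]  # variance = 16
height = [170 + g + e for g, e in zip(genetic, environmental)]

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

print(f"heritability ~ {variance(genetic) / variance(height):.2f}")  # ~0.80
print(f"standard deviation ~ {variance(height) ** 0.5:.1f} cm")      # ~8.9 cm
```

Strongly genetic, yet nothing like a population of clones.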

Why is this interesting to philosophers? Not only does it go some way to accounting for observed moral diversity in the world without collapsing into relativism, it also clarifies an important distinction between norms (strategies) and the underlying function that morality plays. We might all agree that morality fosters prosocial behaviour, but we can happily disagree about the best strategies to promote prosocial behaviour.

It also weighs in on the whole moral disagreement issue, particularly as it affects moral realism. Some believe that moral disagreement (usually couched in terms of disagreement over particular norms) would dissolve if suitably situated agents had access to all the relevant facts of the matter. I don’t believe this is true, for a number of reasons, but mainly because there are many ‘moral’ situations where there is no perfect answer – at least in practice.

This is primarily a descriptive thesis – I’m just saying that evolution has confronted these coordination problems and come up with some solutions. I’m not advocating that we abdicate our will to our evolved moral sense absent any critical reflection. But the problems that evolution has faced are largely the same ones we face, i.e. how to get a bunch of unrelated individuals to live and work together without stabbing each other in the back.

If you’re attending the conference, I’ll be giving my paper on Thursday 8th of July at 3pm in Morvern Brown room G4. Please do attend and provide support/critical feedback/biscuits.

Evolutionary Psychology Myths #1: Human Universals

Evolutionary psychology is a complex and, um, evolving endeavour. And it is often misunderstood, not only by its critics, but also by its proponents and practitioners. So I present the first in a series of ‘evolutionary psychology myths’, not to debunk EP, but to dissolve some of the myths – some of the straw men – that are erected by opponents and assaulted with wasted vigour.

The first fallacy is that evolutionary psychology implies that human nature is somehow universal; that many, if not all, of our evolved characteristics are shared by all humanity.

There are some high-profile evolutionary psychologists who have implied just this. Take this excerpt from a 2005 paper by none other than EP’s dynamic duo, John Tooby and Leda Cosmides:

The long-term scientific goal toward which evolutionary psychologists are working is the mapping of our universal human nature. By this, we mean the construction of a set of empirically validated, high-resolution models of the evolved mechanisms that collectively constitute universal human nature.

Cosmides and Tooby have stated a similar position several times – that their research is concerned not with human differences, but with human universals. That’s all well and good, but it doesn’t mean Cosmides and Tooby – and many other evolutionary psychologists – don’t believe there are evolved differences in human nature. It’s just that the focus of their study is human universals.

Sadly, though, this position can give the misleading impression that evolutionary psychology is only concerned with human universals – or, more extreme still, that the only aspects of our psychology that have evolved are those shared by all humans. That’s not the case.

In fact, if evolution has influenced our minds – and I’m inclined to agree with evolutionary psychologists that it has – then we would not expect it to furnish us all with the same psychology – the same personalities, intuitions, heuristics etc. Rather – and in keeping with our understanding of evolution and its impact on biology at large – we would expect it to lend us a diversity of psychological features.

This is for the simple reason that many of the problems evolution has sought to solve don’t have one single answer, whether it’s the best way to find food or a mate, or how to interact with other members of your species. An individual organism’s best strategy depends on its environment, which can fluctuate wildly, and, even more importantly, on the strategies employed by other organisms.

As such – especially given our complex social nature and the convoluted problems that emerge from it – we would fully expect evolution to endow us with a range of strategies for solving these problems. Evolution suggests psychological diversity, not psychological uniformity.

This manifests in two main ways: the first is in our problem-solving modules (although I’m wary of the term ‘module’ – these may not be discrete units unto themselves), such as facial recognition or our moral emotions; the second is in our evolved faculty for abstract reasoning.

This notion of evolved psychological diversity is central to my own thesis on evolution and morality. Far from advocating any one particular value system or political ideology, evolution equips us to develop a vast plurality of values – and thus ideologies – giving us a wide range of responses and solutions to the problems we’ve faced in our evolutionary past. This is the idea I call Moral Diversity – a kind of moral semi-pluralism.

At the base level is the problem to be solved, which in the case of morality is: ‘how do you get a bunch of unrelated individuals to live and cooperate together for mutual benefit without them defecting on each other and ruining the whole venture?’

There is no one solution to this problem; there are many. And some solutions are better than others in different circumstances. Sometimes it pays to be open, trusting and highly cooperative. Sometimes that strategy leaves you open to defection. Sometimes it pays to be closed, suspicious and only cooperative with your immediate family or local community. Sometimes that approach means you miss out on potentially lucrative cooperative ventures with outsiders.
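A toy simulation makes the trade-off vivid. The strategies and payoffs below are my own illustrative assumptions, not drawn from any particular model: an open, trusting strategy and a closed, suspicious one each win in a different social environment.

```python
# Standard illustrative Prisoner's Dilemma payoffs: my payoff given (my, their) moves.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def open_trusting(their_moves):
    return "C"  # cooperate unconditionally

def closed_suspicious(their_moves):
    # Withhold cooperation until a partner has cooperated twice in a row.
    return "C" if their_moves[-2:] == ["C", "C"] else "D"

def tit_for_tat(their_moves):
    return "C" if not their_moves else their_moves[-1]

def always_defect(their_moves):
    return "D"

def mean_payoff(me, them, rounds=10):
    my_moves, their_moves, total = [], [], 0
    for _ in range(rounds):
        mine, theirs = me(their_moves), them(my_moves)
        total += PAYOFF[(mine, theirs)]
        my_moves.append(mine)
        their_moves.append(theirs)
    return total / rounds

def fitness(strategy, defector_share):
    # Expected payoff in a population mixing reciprocators and exploiters.
    return ((1 - defector_share) * mean_payoff(strategy, tit_for_tat)
            + defector_share * mean_payoff(strategy, always_defect))

for share in (0.1, 0.9):  # a friendly world, then a treacherous one
    print(f"defector share {share}: "
          f"open={fitness(open_trusting, share):.2f} "
          f"closed={fitness(closed_suspicious, share):.2f}")
# Friendly world: the open strategy wins (2.70 vs 1.36).
# Treacherous world: the closed strategy wins (1.04 vs 0.30).
```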

As such, evolution has furnished us with faculties, heuristics, intuitions etc that react in different ways to the environment to produce a diversity of responses.

So, far from an evolutionary perspective on morality suggesting that ‘if it’s evolved, it’s universal’, it suggests that ‘if it’s evolved, it’s diverse’. That’s one straw man down – many left to go…

The Meaning of ‘Moral’

One of the things I’ve noticed while looking at evolution and morality is the vast and unbridled equivocation that goes on when the word ‘moral’ is invoked. Some, such as Frans de Waal, observe cooperation, punishment and concern amongst non-human primates and thus call them ‘moral’. Others, such as Jonathan Haidt, speak of surges of feeling concerning permissibility or impermissibility and call these intuitions ‘moral’. Others still stress that it takes a special kind of reasoned deliberation about rightness and wrongness to call a judgement truly ‘moral’.

But are they all talking about the same thing? I think not.

In fact, I think the general lack of clarity over what we mean by ‘moral’ is unnecessarily muddying discussion of evolutionary ethics. It’s for this reason that I propose the following basic taxonomy of moral terms:

1) Moral behaviour

Behaviour of an organism that appears to involve concern for the welfare of others besides the acting agent – including cooperation, sharing, helping, punishment, reciprocal exchange etc – is ‘moral behaviour’. This behaviour may or may not be intentional or the result of conscious deliberation. As such, it covers the behaviour of organisms that engage in altruistic behaviour – such as improving the evolutionary fitness of another organism at a cost to their own – as well as directed human behaviour driven by moral principles. It’s all moral behaviour.

2) Moral emotion/sentiment

Any emotion that serves to encourage moral behaviour, including empathy, sympathy, gratitude, guilt, outrage etc. These emotions serve as heuristics – rough and ready shortcuts – that direct behaviour without necessarily requiring reasoned deliberation. Humans and non-human primates – and quite likely many other animals as well – possess emotions of this kind, although it appears as though humans possess a particularly broad range of these moral emotions.

3) Moral intuitions

The immediate feeling of permissibility or impermissibility of an action. Moral intuitions, as described by Jonathan Haidt, spring forth rapidly and without conscious deliberation, fuelled by moral emotions, to yield a ‘preliminary’ moral judgement. Arguably, non-human primates can experience moral intuitions, even if they lack the capacity for moral reasoning.

4) Moral reasoning

The conscious process by which abstract moral principles (below) are deliberated upon and applied to a particular situation. Moral reasoning appears to be unique to humans, and involves abstract reasoning, conscious deliberation, imagination and an ability to predict future outcomes of potential behaviours.

5) Moral principles

The abstract propositions – often couched in categorical or universal terms – that concern permissibility and impermissibility (and obligatoriness etc) of actions.

6) Moral judgement/justification

The last term I reserve for ‘considered’ moral judgements, in contrast to the ‘preliminary’ moral judgements yielded by moral intuitions (above). A moral judgement may direct behaviour, if deliberated upon before acting (although the empirical evidence suggests this is rare, or at least only occurs in cases of moral dilemmas where there’s conflict between a moral intuition and an abstract moral principle), or it can be used post hoc as a means of justifying an action via moral reasoning and the weighing up of moral principles.

Why would such a taxonomy be helpful? It allows us to talk more clearly about things such as primate morality – i.e. some primates are capable of moral behaviour, even if they don’t engage in moral reasoning using moral principles – as well as providing a more nuanced account of our moral decision-making process – i.e. moral emotions lead to moral intuitions, and these sometimes lead directly to behaviour, but at other times we engage in moral reasoning using moral principles to arrive at a moral judgement. Yes, that’s a lot of uses of the word ‘moral’, but they all have slightly different meanings. You get the point.
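For what it’s worth, the taxonomy can also be read as a rough processing pipeline. The sketch below is purely illustrative – the emotions, the principle and the trigger conditions are all invented, not an empirical model – but it shows how the numbered terms relate:

```python
# A toy rendering of the taxonomy as a pipeline (illustrative only).

MORAL_EMOTIONS = {"empathy", "sympathy", "gratitude", "guilt", "outrage"}

def moral_intuition(emotion, action):
    """(2)->(3): a fast, non-deliberative feeling of (im)permissibility."""
    assert emotion in MORAL_EMOTIONS
    preliminary = "impermissible" if emotion in {"guilt", "outrage"} else "permissible"
    return (action, preliminary)

def no_lying(action):
    """(5) A moral principle, couched in categorical terms."""
    return "impermissible" if "lie" in action else None

def moral_reasoning(intuition, principles):
    """(4)+(5)->(6): deliberation that weighs principles against the intuition."""
    action, preliminary = intuition
    for principle in principles:
        verdict = principle(action)
        if verdict is not None and verdict != preliminary:
            return (action, verdict)  # considered judgement overrides intuition
    return (action, preliminary)

# The fast route: (2) a moral emotion yields (3) a preliminary judgement...
intuition = moral_intuition("sympathy", "tell a white lie to spare a friend")
print(intuition)  # -> permissible

# ...but in a dilemma, (4) reasoning over (5) principles can produce
# (6) a considered moral judgement that overrides it.
print(moral_reasoning(intuition, [no_lying]))  # -> impermissible
```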

Liberals, Conservatives and Moral Diversity

Nicholas Kristof’s column in the New York Times about the psychology of liberals and conservatives has been getting some attention this past week – probably because the research on which it’s based resonates so clearly with so many people. That research is by Jonathan Haidt, whom regular readers of this blog will recognise as a great influence upon my own work.

However, Haidt’s exploration of the psychology that underpins the political spectrum – fascinating and illuminating though it is – is not the end of the story. For when you combine Haidt’s research with another intriguing finding, that our political views are largely influenced by genes (Alford, Funk & Hibbing, 2005), it raises a big fat question: why does our psychology – and biology – vary in the way it does?

I have a theory. It’s called Moral Diversity. It goes a little something like this:

(more…)

Evolution, Morality and Truth

There’s a widely held – yet mistaken – belief that all cognitive processes are somehow intended to find truth; we think in order to understand the way the world is. And this applies equally to moral cognition: it’s intended to find the moral facts of the matter. Is it right or wrong to kill an innocent? Is it right or wrong to lie or cheat?

Makes some intuitive sense – after all, why employ energy- and time-consuming cognition if it’s not to improve the accuracy with which we apprehend the world? And surely a more accurate representation will also aid us in decision-making, right?

But this rests on two crucial – and oft-unexamined – assumptions that are not nearly as robust as they might appear at first glance. If these assumptions are indeed flimsy, then it could be that morality actually has nothing to do with finding truth, and is instead simply about finding the best course of action in a given circumstance – ‘best’ for the individual at the time by their proximate reckoning, and best for the individual’s genes on an ultimate level.

Rational Agents

The first confounding assumption is that humans are rational agents. A rational agent “always chooses to perform the action that results in the optimal outcome for itself from among all feasible actions,” according to good ol’ Wikipedia. Basically, a rational agent has beliefs and desires, and draws upon those beliefs in an attempt to satisfy their desires. All very mechanical, very rational, very Mr Spock.

But… we’re so not rational agents. We don’t actually weigh up our options in a balanced, impartial way. Instead, we use fast, rough-and-ready heuristics fuelled by emotional impetus to make decisions. There’s a growing body of research that undermines the rational agent hypothesis, rational choice theory and exchange theory (Lawler & Thye, 2006; Collins, 1993; Hammond, 2006; amongst others). In their place, it’s better to think of us as emotional-intuitive agents (Haidt, 2008).

Now, that seems an odd way to find the truth of the matter – such as which possible behaviour is best for us to take. But, nevertheless, that’s how we function. Truth isn’t as important as we might think in directing behaviour.
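The contrast can be caricatured in a few lines of Python (all the options, utilities and gut feelings below are invented for illustration): the rational agent exhaustively maximises expected utility, while the emotional-intuitive agent follows a fast affective rule and never consults the full table.

```python
# A caricature of the two models of agency (all numbers invented).
ACTIONS = {
    # action: (expected utility, immediate gut feeling)
    "share the food":  (0.7, "warm"),
    "hoard the food":  (0.9, "ashamed"),
    "steal more food": (1.2, "fearful"),
}

def rational_agent(actions):
    """Weighs every option and picks the one with maximal expected utility."""
    return max(actions, key=lambda a: actions[a][0])

def intuitive_agent(actions):
    """Fast heuristic: take the first option that doesn't feel bad."""
    for action, (_, feeling) in actions.items():
        if feeling not in {"ashamed", "fearful"}:
            return action
    return "do nothing"

print(rational_agent(ACTIONS))   # -> 'steal more food' (maximal utility)
print(intuitive_agent(ACTIONS))  # -> 'share the food' (follows the feeling)
```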

Autonomy Assumption

The second assumption is sometimes called the ‘autonomy assumption’. It goes a little something like this:

People have, to greater or lesser degrees, a capacity for reasoning that follows autonomous standards appropriate to the subjects in question, rather than in slavish service to evolutionarily given instincts merely filtered through cultural forms or applied in novel environments. Such reflection, reasoning, judgment and resulting behavior seem to be autonomous in the sense that they involve exercises of thought that are not themselves significantly shaped by specific evolutionarily given tendencies, but instead follow independent norms appropriate to the pursuits in question (Nagel 1979).
Stanford Encyclopedia of Philosophy

What this is saying is that the cause of reasoning is less important than the reasoning itself, and it formed the foundation of Nagel’s attack on any kind of strong evolutionary ethics. Basically, it doesn’t matter what caused someone to start thinking about a particular thing; what matters is their reasons for believing it. Evolution might have endowed us with a disposition to think more about mathematics than music, but the content of the thoughts about maths – such as whether 2 + 2 = 4 – has nothing to do with evolution; evolution can’t make 2 + 2 = 5.

So, too, for ethical thinking, says Nagel. “No one, to my knowledge, has suggested a biological theory of mathematics, yet the biological approach to ethics has aroused a great deal of interest,” says Nagel (1979). But, this argument – and assumption – only holds if the subject matter is truth-seeking. Mathematics is truth-seeking. Physics is truth-seeking. But what if ethics is not?

Returning to the definition of the autonomy assumption above: what if ‘reflection’, ‘reasoning’ and ‘judgement’ don’t actually influence behaviour in ethical thinking? What if the ‘exercises of thought’ are themselves shaped by evolutionary processes – such as through unconscious domain-specific modules or moral intuitions? What if behaviour or discourse in the social sphere isn’t about finding the truth of the matter, but about achieving desirable outcomes for the agents involved, quickly and with limited information and energy expenditure? Perhaps over-rationalising the discipline misrepresents what it’s actually there for in the first place.

Morality isn’t About Truth

If we are comfortable abandoning these assumptions, and with them the notion that moral thinking and moral discourse are about finding truth, then we can simply wash away many of the most perplexing problems in ethics and metaethics as non-problems. Moral discourse might feel like it’s talking categorically – talking about truths. But if it’s not, then we can just ditch cognitivism and move on.

Of course, such a move raises other issues, such as how to justify moral judgements. But that’s another story.

The Future of Morality

As I struggle and strain, on a daily basis, to make sense of that strangest of human capacities that is morality, and struggle to suggest to my peers that it might not be what they – and two millennia of philosophers – think it is, I come across a chapter in a book that might well be a manifesto for the New Synthesis in Morality.

It’s authored by Jonathan Haidt and Selin Kesebir, and it’s to appear in an upcoming edition of The Handbook of Social Psychology, although it’s available for download directly from Haidt’s home page.

Get it. Read it. Because this is it, folks: this is the End of the Beginning of the New Synthesis, the path hacked through the jungle of confusion to a new destination, and the Beginning of the Middle of the actual hard work of mapping the complex terrain of our moral faculty.

Haidt et al’s main thesis is as follows: (more…)

Right-thinking as a Moral Foundation

I’ve finally had an opportunity to read through the challenge to Jon Haidt’s Moral Foundations Theory by Craig Anderson, of the University of Wisconsin, and I’ve found that it differs from my own similar challenge.

Craig suggests that truth/right-belief could be considered as a sixth moral foundation:

The central aspect of this morality is that people tend to moralize the beliefs that they hold to be true. Not only do individuals care that they themselves have proper beliefs, but they further feel that others should share those same beliefs.

On the surface, this is similar to my claim that truth/honesty could be a sixth moral foundation. However, there is a significant difference that becomes apparent when you dig deeper into what Anderson is suggesting.

First of all, I’d disagree that “people tend to moralize the beliefs that they hold to be true.” That includes too much that isn’t moral. I believe it’s true that ‘in Australia we drive on the left-hand side of the road’. If someone thinks we drive on the right, I don’t experience moral outrage, although I may be prompted to correct them. Further, if they think we should drive on the right, again I don’t feel moral outrage in the same sense as if they’d said we should cheat on our friends or eat our children. We don’t moralise beliefs simply because we hold them to be true – if anything, it’s the reverse – which brings me to Anderson’s next point.

I think the key point that Anderson is making is encapsulated in the second sentence in the quote above, particularly that people “feel that others should share those same beliefs.” Anderson offers the following example:

A very clear example of this form of morality can be seen in the clash between religions, or even somewhat in the clash between the religious and the secular. People of each religious group see their own canon of beliefs as the truth, and that others are wrong, immoral, or sometimes even inhuman, just for not believing in the right god(s).

However, I would suggest that Anderson is not describing a moral foundation as such; rather, he has adroitly recognised an underlying characteristic of all moral discourse. What makes moral matters different to matters of convention is that morals feel categorical: they feel as though they should apply everywhere and to everyone.

Driving on the left or the right is a norm of convention. A relevant authority could overturn the norm and none of us would be left reeling in moral outrage. On the other hand, inflicting harm on others is a moral norm, and cannot be arbitrarily overturned by an authority. Morals are different in that we feel that they’re somehow universal, and that if they apply to me, they should also apply to everyone else. As a consequence, it’s fully expected that individuals would “feel that others should share those same beliefs.”

Why is this not a moral foundation? Haidt proposes that “innate and universally available psychological systems are the foundations of ‘intuitive ethics’”. He’s talking about things like harm/care as one vector, in-group/loyalty as another, etc. But they are all categorical: if I believe it’s wrong to harm person X in situation Y, or to be disloyal to authority Z, then I believe it’s also wrong for others to behave the same way.

The categorical nature of moral norms cuts across all the moral foundations, and thus cannot be a discrete moral foundation in itself. Instead the categorical nature of moral norms is an underlying functional characteristic that defines them as moral norms.

To test this, we could try to tease right-thinking/categoricalness apart from the other moral foundations. If we could find issues that trigger on the right-thinking/categoricalness scale without triggering the other moral foundations, then it could be a discrete moral foundation – for example, if there were issues where it was impermissible for person X to do Y but not for person Z to do Y. But I’d expect we would find that there are no moral issues that are categorical without also relating somehow to another moral foundation. Further, I expect we would find that all the other moral foundations also trigger right-thinking/categoricalness.
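Here, under heavy assumptions, is a sketch of what such a test might look like – the survey scores, scales and thresholds below are all hypothetical, invented purely to show the shape of the check:

```python
# Hypothetical sketch: does any issue score high on 'categoricalness'
# while scoring low on every recognised foundation? (All data invented.)

FOUNDATIONS = ["harm_care", "fairness", "ingroup_loyalty", "authority", "purity"]

# Imagined per-issue scores on 0-1 scales from some survey instrument.
issues = [
    {"issue": "harming an innocent", "categoricalness": 0.95,
     "harm_care": 0.9, "fairness": 0.4, "ingroup_loyalty": 0.1,
     "authority": 0.2, "purity": 0.1},
    {"issue": "betraying one's group", "categoricalness": 0.80,
     "harm_care": 0.2, "fairness": 0.3, "ingroup_loyalty": 0.9,
     "authority": 0.5, "purity": 0.1},
]

HIGH, LOW = 0.7, 0.3

def stands_alone(issue):
    """True if categoricalness triggers without any other foundation."""
    return (issue["categoricalness"] >= HIGH
            and all(issue[f] < LOW for f in FOUNDATIONS))

lone_triggers = [i["issue"] for i in issues if stands_alone(i)]
# My prediction: this list stays empty -- categoricalness never fires alone,
# so it is a property of moral norms, not a sixth foundation.
print(lone_triggers or "no issue triggers categoricalness alone")
```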

In contrast, my challenge suggests truth/honesty as a moral foundation, distinct from the other moral foundations. As I mention in my earlier post, an example might be that it’s judged wrong for person X to lie even if it has a positive outcome according to one of the other moral foundations.

Now, Anderson does mention truth and lying, so there is some crossover with my challenge, but truth is really incidental to Anderson’s main claim about right-thinking. His point is that people promote right-thinking/categoricalness only about things they hold true, not that being truthful is morally obligatory in and of itself.

Honesty and the Moral Foundations

Further to yesterday’s post is a fascinating story that’s all over the news here in Australia – a story that illustrates my point that honesty is one of the pillars of morality, and is distinct from the other Moral Foundations.

In January 2006, former Australian Federal Court judge Marcus Einfeld was caught speeding in his car. Normally, the penalty for such an offence, minor as it is, is $77. However, Einfeld currently faces the prospect of going to jail for a considerable length of time – some have even called for life imprisonment.

Why? Not because of the trivial speeding offence, but because he deliberately misled the court concerning his involvement in the incident – i.e. perjury.

So we’re faced with a situation where a prominent member of society (who is likely being punished more harshly because of his prominence as well as for hypocrisy) is facing an incredibly steep punishment over a trivial offence – all because he lied.

In this case, we can see that – legally, at least – dishonesty is treated as a major transgression, while the original offence of speeding is negligible in comparison. Honesty, in and of itself, is morally significant – significant enough that I think it could comfortably qualify as a Moral Foundation distinct from the other five.

A Sixth Moral Foundation?

Here’s one way to make a buck during the global economic downturn:

If anyone can demonstrate the existence of an additional [moral] foundation, or show that any of the current 5 foundations should be merged or eliminated, Jon Haidt will pay that person $1,000.

This comes from MoralFoundations.org, the home of Jonathan Haidt’s Moral Foundations Theory, which posits that “five innate and universally available psychological systems are the foundations of ‘intuitive ethics.’ Each culture then constructs virtues, narratives, and institutions on top of these foundations, thereby creating the unique moralities we see around the world, and conflicting within nations too.”

(more…)
