An Evolutionary Theory of Moral Value
- Let’s say there’s no such thing as intrinsic value. (I won’t present an argument to this effect here, although many others have done so.)
- As a result, let’s say instead of there being intrinsic values, we project values upon the world. (Again, I won’t present an argument for this here, but let’s run with it for now.)
- These values we project on the world can come from any number of sources – such as emotion or reason – but there is no ultimate arbiter of what values are true or best. Call this a kind of value nihilism, if you will.
- However, there is a way the world is, and evolution is an important aspect of that world.
- As such, certain values will ‘survive’ better than others in certain environments, either by spreading culturally or by lending a selective advantage to their practitioners (or by not lending a selective disadvantage to their practitioners).
- One can choose for themselves whatever values they wish and justify them however they wish – basing them on emotion, reason, God, whatever. However, as there is no ultimate arbiter of these values, this constitutes a kind of value relativism.
- However, depending on the values adopted, and how they affect behaviour, these values may increase or decrease the fitness of the practitioner. I hasten to add, in light of the above, there is nothing intrinsically good or bad about this fact. It just is.
- In fact, the world being what it is, values that survive – or that lend a selective advantage to their practitioners – will tend to outcompete values that lend a selective disadvantage to their practitioners. As such, these values will tend to survive and propagate. (A toy model of this dynamic appears after this list.)
- If one desires their values to persist (i.e. they value their values), then it would be prudent for them to choose values that will improve fitness and not degrade it, so that their values might propagate. Again, nothing good or bad about this.
- If the above gives you the concern that survival or fitness suddenly sneak in to become foundational values, or that self-interest becomes an overriding value, then take heart: as a matter of empirical fact – being the social creatures we are, living on the fragile, interconnected planet we do – if you choose values that will survive, you’d do well to choose cooperation, environmental sustainability, peace etc., as well as highly conditional ruthlessness when it comes to matters of survival.
- There’s nothing intrinsically good or bad about choosing values like this, but there’s also nothing intrinsically good or bad about choosing, say, hedonist or Kantian deontological values – except that by choosing the latter you might place yourself at a considerable selective disadvantage and, as such, those values might not be likely to last in the long term.
- There you have a naturalistic theory of value that is compatible with evolution, that isn’t committed to any metaphysically suspect properties such as intrinsic value, that doesn’t commit the naturalistic fallacy and that promotes things that we intuitively want to promote, like cooperation and environmental sustainability.
- Discuss.
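To make the ‘outcompete and propagate’ dynamic concrete, here is the toy model promised in the list above – a minimal sketch of discrete replicator dynamics in Python. The two ‘values’ and the 5% fitness edge are hypothetical placeholders for illustration, not empirical claims.

```python
# Minimal replicator dynamics: strategies/values reproduce in proportion
# to their fitness relative to the population's mean fitness.
# The payoff numbers below are hypothetical, purely for illustration.

def replicator_step(freqs, fitness):
    """Advance the population one generation of discrete replicator dynamics."""
    mean_fitness = sum(f * w for f, w in zip(freqs, fitness))
    return [f * w / mean_fitness for f, w in zip(freqs, fitness)]

# Two projected 'values': a cooperation-friendly one and a purely
# self-regarding one. Assume (hypothetically) the cooperative value
# gives its practitioners a 5% fitness edge in a social environment.
fitness = [1.05, 1.00]
freqs = [0.01, 0.99]  # the cooperative value starts rare

for generation in range(300):
    freqs = replicator_step(freqs, fitness)

print(f"cooperative value after 300 generations: {freqs[0]:.1%}")
# Even starting at 1% of the population, the advantaged value approaches
# fixation. Nothing intrinsically good or bad about that - it just is.
```

Note that the model says nothing about which values *should* win; it only shows that differential fitness, however small, compounds over generations.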
Comments
Sabio Lantz · 9th January 2010 at 9:59 pm
I am reading Robert Wright’s “The Evolution of God”. He seems to agree with you about the source of morality, but he feels it is directional. It is this directionality he calls “God”.
Pete · 10th January 2010 at 5:06 am
This idea may provide an innovative model for the existence and movement of values on a macro level. However, since there is no normative component to the survival of values, it provides no “moral compass” for me to choose my actions as an individual. The fact that a value has virally reproduced itself through natural selection doesn’t (and I’d go so far as to say shouldn’t) have anything to do with my moral actions in day-to-day life.
Tim Dean · 10th January 2010 at 11:39 am
Hi Sabio. I haven’t read Wright’s new book, but I’ve heard a bit about it. I get the impression I’d agree with much of what Wright says about the historical shifts in morality (see my earlier posts about cooperation, game theory and morality on this).
However, I’m less convinced about invoking ‘God’ in the way Wright does. I’d prefer to appropriate other elements of religion – group bonding, moral compass, cultural institutions, sense of wonder at the world etc – but do so without calling anything divine or supernatural.
And Pete, I agree this framework gives no specific moral compass for choosing your first-order values. As such I guess you could call this evolutionary imperative a zeroth-order value. You can still choose whatever first-order values you want (happiness, duty etc), but you can temper them with regard to their survivability. You don’t have to, but if you value your values (and that’s only ‘if’ you do), then you might also hold to the zeroth-order value.
James Gray · 10th January 2010 at 8:01 pm
There are multiple anti-realist theories, but I suspect R.M. Hare’s Universal Prescriptivism is the most popular one.
I wrote a little about finding first-order values given anti-realism in my post “A Moral Anti-Realist Perspective.” We can basically find out that certain things are required for an ultimate justification given a psychological understanding of people. Something like what Aristotle did to find final ends could be used. Aristotle was an ethical egoist, but cooperation could still be valued because we tend to have social desires.
One problem I have with altruistic/universalist anti-realist ethics is that I should promote such a theory to others but secretly want to break the rules whenever it would benefit me. We have no choice but to want good things for ourselves, and it makes sense to want others to be altruistic, but if you can break the rules and get away with it, why not?
I wrote down more potential problems with anti-realism in my argument for moral realism.
Paul · 12th January 2010 at 3:21 pm
As for what Mr. Gray said, “one problem I have with altruistic/universalist anti-realist ethics is that I should promote such a theory to others but secretly want to break the rules whenever it would benefit me,” there might be a solution to this in some sort of virtue ethics. If the universalist part were simply human excellence, like Aristotle discussed, one might want to promote such a theory to others because one is a lover of the excellent, without wanting to break the rules oneself. However, there are downsides to such a universalistic, aristocratic worldview, such as the exploitation of the weak by the strong. Also, I am not sure whether such a view is realist or anti-realist if it finds its anchor in human nature.
Tim · 12th January 2010 at 6:08 pm
On the question of promoting ethics to others but secretly wanting to break them myself – I think the problem stems from equivocating over what we mean by ‘good’.
First there’s the moral ‘good’, which relates to promoting cooperation and pro-social behaviour. That’s the ‘good’ that we want to promote to others. That’s the root of morality, evolved and contemporary. But we only promote that good because it’s ultimately ‘good’ for us.
That latter ‘good’ is the instrumental good. It’s what is ‘good’ for me. Now, there are lots of things that are (instrumentally) good for me, and one of them is promoting and behaving the (ethical) good way.
What this leaves us with is a gap between instrumental good and ethical good – which seems right to me, because not everything that is good for me (eating a high-fibre diet, for example) is ethically good. And sometimes behaving immorally is good for me too. But in doing so, you’re going against the moral good, and others would be morally justified in being pretty upset by that.
As such, morality is just one strategy we employ to promote our own ends. Might sound harsh, but I reckon we’ve been doing that all along, and it’s worked okay so far.
James Gray · 12th January 2010 at 10:18 pm
Tim,
Are you agreeing then?
It might have worked pretty well merely because people are sheep. It’s basically a psychological disorder that Freud called the superego. That isn’t good enough for everyone. The people in powerful positions don’t seem very interested in moral behavior and I understand that the Columbine murderers worked pretty hard to weaken their sense of sympathy.
If I find out that people don’t matter, then I might decide that my sense of sympathy is merely a disorder. It’s not exactly pleasant to empathize with people when that means I have to share in their pain and so on.
Many ancient philosophers thought pity was an irrational emotion, and it seems plausible that some sort of moral realism was their main reason for endorsing charity.
It is true that most evil people (like people in the mafia) do want to have a social life, but we can protect our friends and family while oppressing and destroying strangers. That’s basically the US government policy.
Tim Dean · 12th January 2010 at 11:46 pm
A couple of things. First, morality isn’t all about rationality. We can talk about moral theories and metaethics, but the motivating force behind moral behaviour is still going to be emotion, intuition (and, I’d argue, perception).
Evolution has already imbued us with these moral sentiments and moral intuitions that promote the kind of pro-social and cooperative behaviour I’m talking about. No matter what moral (or immoral) theory you have, sympathy is a potent force, and one not easily dismissed by reason.
So pity *is* irrational – but that’s because reason is cold and unmotivating on its own. It takes emotions – those heuristics built into our nature – to drive moral behaviour.
Second, I stress the divorce of moral from instrumental good. US policy might be good for the US, but it might still be immoral. So, in a way, I am arguing that not all actions need be motivated by morality. Morality, however, can and should motivate behaviour when it comes to interacting with others because I believe (and this is an empirical claim) that acting morally often serves an individual’s instrumental good. Add game theory, and you can (kinda) universalise this (if conditionally).
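To make that game-theory point concrete, here’s a minimal Axelrod-style sketch in Python – a toy, using the textbook prisoner’s dilemma payoffs rather than anything specific to my argument – in which conditionally cooperative play outscores unconditional defection over repeated encounters:

```python
# Toy repeated prisoner's dilemma: a conditionally cooperative strategy
# (tit-for-tat) versus unconditional defection. Payoffs are the standard
# textbook values, used purely for illustration.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's last move.
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=200):
    """Return the two strategies' total payoffs over repeated play."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Round-robin tournament: each strategy meets every strategy, itself included.
strategies = {'tit-for-tat': tit_for_tat, 'always-defect': always_defect}
totals = {name: 0 for name in strategies}
for name_a, strat_a in strategies.items():
    for name_b, strat_b in strategies.items():
        score_a, _ = play(strat_a, strat_b)
        totals[name_a] += score_a

print(totals)  # tit-for-tat wins overall despite 'losing' head to head
```

Defection ‘wins’ each individual encounter against the cooperator, but over the whole tournament conditional cooperation comes out ahead – that’s the conditional universalisation I have in mind.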
Basically, the kind of evolutionary anti-realism I’m proposing still promotes what we would happily call a moral code, but it foregoes any reliance on a metaphysics that can’t be justified. Just like the saying ‘without God, anything goes’ is fallacious, so too is ‘without moral realism, anything goes’, in my opinion. This makes morality more like politics, where individuals negotiate their morality – similar to the social constructivist position you’ve written about.
Not sure if this makes sense, but I’d be interested in your reply. I’d expect you to disagree, being a moral realist, but I find your arguments to be very valuable and thought-provoking nonetheless.
James Gray · 13th January 2010 at 9:11 am
I agree that desires might be necessary for motivation, but the difference is that caring for strangers might not make sense for an anti-realist. On the other hand a realist can say, “Caring for others is the only way I can be altruistic, so I should nurture that quality of myself.” I touched upon this idea in my post “Moral beliefs can’t motivate.”
Your game theory point is a lot like the idea of contract theory. We personally should still want to break the rules if possible, so totalitarianism could be necessary to maximize security and so forth. That’s basically the conclusion that Hobbes came to.
John Rawls has one of the best contract theories right now, but his answer to “breaking the rules” is also the use of coercion and punishment. That’s potentially problematic for the same reason as Hobbes’s answer. It could lead to hidden cameras in every building, etc.
You might think, “Well, isn’t freedom more important than security?” That is an interesting question, but right now society is moving towards totalitarianism. If someone has a bomb in his shoe, you need to inconvenience everyone else to try to stop it from happening again. If kids go to school and shoot people up, we might need better gun control. We are willing to spend outrageous amounts of money to try to control everyone.
A moral realist alternative to totalitarianism is to attempt moral education. This might work for anti-realists as well if he or she can explain why altruism is personally a good idea, but so far I don’t think it has been explained sufficiently by any anti-realists.
I don’t mean to say that we should have faith in realism because of this nice alternative. It might be necessary to give up on the moral realist answer if it is implausible. However, theories that lead to totalitarianism seem to be false for giving wrong moral answers. Our moral experiences tell us that totalitarianism is exactly the opposite of the right thing to do.
Tim Dean · 13th January 2010 at 10:20 pm
Interesting thoughts.
In terms of care of strangers, first it might be in my best interests to care for strangers, instrumentally speaking; cooperation yields many fruits for us all. Second, evolution has already learned that lesson and equipped us with moral emotions that encourage care of strangers. Handy, and I’d encourage their use.
And yes, my employment of game theory does make my moral theory very much like a contract theory – although I think the very same game theory shows that totalitarianism is not a very good (or stable) system in most circumstances. That said, highly authoritarian and hierarchical systems might be effective in highly hostile environments – such as when a group is under an existential threat. In safer environments, however, I think its stifling of cooperation is its downfall.
I’m partially sympathetic to Rawls, but I’m wary of his Kantian roots and I disagree with his estimation of how his agents would behave in the contract negotiations – and why they would be motivated not to defect once the negotiations are done.
And I do think punishment is an important tool, but only instrumentally. That said, there’s a sliding scale from safety to freedom, and I don’t think there’s any one true answer to where a society should fall on that line – it would vary with the environmental conditions. Plus, there’s no such thing as 100% security, so it’s vain to seek it (wish someone had told the Bush administration that).
(That said, American gun laws leave me utterly perplexed. Want to make the country safer, bollocks to terrorism, there’s your low hanging fruit…)
And I agree about moral education – because acting ‘morally’ fosters social interaction and cooperation, and that benefits everyone. Well, there are some who it won’t benefit, but the others will gang up and make laws to prevent those defectors ruining the party – not that they will ever eradicate defectors entirely.
Hope that clarifies my position.
Pity we can’t sit down and chat about this over a beer…
James Gray · 13th January 2010 at 11:08 pm
I think we are almost done discussing the challenge I raised against anti-realism. I will just say a few things to sum up and clarify my view. My understanding of your view is that morality “tends” to be in our personal interest, but you admit that it might be perfectly rational for some people to reject morality when they can replace it with a kind of rational self-interest, which is pretty similar to what Hobbes had in mind.
I agree that cooperation is often mutually beneficial and something like a social contract can be developed. My point is merely that coercion is especially important for anti-realists to fill in the gaps. It isn’t proven that it is always within my self-interest to be “moral” and just the opposite appears true “when we can get away with it.” I don’t know if I think morality is “just about rationality,” but I do think that moral realism (if true) could give people better reason to be moral when doing so requires self-sacrifice (or a much lower personal payoff).
I also agree that we probably have social instincts and emotions. My point here is that some people decide to neglect them rather than nurture them. Doing so actually makes a lot of sense, even if moral realism is true. Why? Because these instincts tend to get in the way of our self-interest (and personal happiness). Empathy in particular is unpleasant. Moral realism doesn’t justify altruism based on these instincts and emotions, but some relevant “desire” might be important. We would only be rationally obligated to keep these instincts and emotions if and when they are necessary for virtue.
Additionally, using a coercive element on everyone isn’t realistic. The necessity of a coercive element leads to the classic problem, “Who watches the watchmen?” Hobbes admitted this “problem” with his system and basically just said that we need a sovereign who has exclusive rights to violence (and who is allowed/expected to be immoral).
Hobbes wanted an anti-realist system and he got one. Improving upon his system in superficial ways isn’t too hard, but improving upon it by avoiding the classic problems of anti-realism entirely is much more difficult. This is even more relevant this very moment because our governments are basically based on Hobbes’s political philosophy. Although the US’s founding fathers supposedly agreed with John Locke, that is certainly no longer true of our government. (The US Government doesn’t even recognize the right of citizens to revolt against an unjust dictatorship.)
Perhaps what is more realistic than a coercive element is propaganda, but this is hardly the kind of “moral education” I think we need. We don’t want to trick people into being moral. We want them to decide to be moral because it makes sense to do so. (Most moral education does appear to be propaganda at this time.)
Finally, I want to say that the moral realist position – that being moral when doing so requires self-sacrifice “is rational in some sense” (and the opposite is irrational) – is intuitive and supported by general moral experience. To reject such a belief is not preferable. A meta-ethical theory that can affirm this has one point for it rather than against it. If you have rejected this belief, then you are probably doing so because of some sort of cost/benefit analysis. (You think anti-realism will have a higher score.) It isn’t easy to figure out which theory is “best” because there is so much to consider.
Tim Dean · 13th January 2010 at 11:22 pm
Very nice final comments. Nothing more for me to add – although I think we could continue debating for a lot longer.
One final point, though – I actually think that despite our metaethical differences, were we to paint a picture of what our moral theories would look like in practice, they’d be almost indistinguishable. Sure, the justifications would be different, and I might lean a bit more towards pluralism, but in practice, they’d basically be the same. Hard to tell for sure, but that’s my hunch.
James Gray · 14th January 2010 at 9:05 am
An interesting question is whether we agree because of intuition on a case by case basis or if it is theoretical. My master’s thesis was on moral theory. In particular, I introduced two new forms of stoic ethics. I will put that on my website eventually.
My ethical theories seemed to rely on moral realism, but I actually hoped at least one wouldn’t (and perhaps it doesn’t). It might be that neither does, but that would be even more surprising.
douglasbryant · 18th January 2010 at 8:33 am
Re 1-12: Agreed.
13. There’s some talk above about choosing our values, but that may underestimate the extent to which the evolutionary process has hardwired our values for us. I can’t just decide that I don’t like fairness or love or beauty. I can’t kill a person and just choose to not let it bother me. I can’t forsake companionship for solitude by fiat. It’s senseless to tell a sociopath to just change his values and start caring about other people. It’s senseless to tell the rest of us to forsake altruism if we find out it isn’t rationally justified. What’s interesting is that this places empirical limits on moral relativism, even for anti-realists. It’s theoretically possible that we could one day deduce the moral limits of a person from a precise enough physical description of the being.
Another interesting point: Beings like you and I have started to do something fairly novel – we’ve started using our conscious brains to override some of our hardwired impulses. Whenever we decline a second helping of food or a sugar-filled dessert, or don’t attempt to have sex with every potential mate we encounter, or don’t fight or kill everyone who makes us angry, we override certain instincts. In The Selfish Gene, Richard Dawkins describes this, with a bit of metaphor, as our genes having surrendered to our brains some of the responsibility for their survival. This process is thought to reach an apex when genes give over total decision-making authority to the brain. That’s hard to think about, because it’s unclear what the brain would have to go on if it had ‘total authority’. Does it open up some room for choosing values? Maybe some, but probably not all. Some of our moral values aren’t just good for survival but necessary. The brain, for instance, develops entirely differently if it is not socialized in development. Your brain certainly can’t exercise unlimited biological authority – it can’t decide that eating tin cans is nutritious, for example.
Tim Dean · 18th January 2010 at 9:41 am
Hi Douglas. Thanks for the interesting comments.
First off, crikey! Someone agrees with me! Sir, I’d shake your hand and buy you a beer were you not, as I presume, on the other side of the world to Sydney.
Second, I agree with you! Re: point 3, where I state values can come from a variety of sources, I’m talking metaphysically; I don’t believe it’s possible to find a firm bedrock of value that will be non-contingently common to everyone. For example, I disagree with Kant that morality can be discovered purely through rational investigation alone.
However, empirically, I agree that in practice our values come from our evolved intuitions and sentiments – making that point is central to the PhD I’m doing right now. That doesn’t mean some philosopher won’t (and they have) pull another value out of their rational hat, even if it contradicts our evolved intuitions. Then they cite the is/ought problem, and happily move on ignoring the weight of empirical evidence. Philosophically, that’s a fair move, if misguided, in my opinion. Thus, my position must, I believe, be characterised as value nihilism because I state that there is no single metaphysical foundation to morality.
Another ‘however’ though: evolution and human nature contribute to our values in two ways. The first is through our evolved intuitions and sentiments, which act as heuristics to guide moral behaviour. From these we abstract and generalise away ‘values’ in the form of principles – like ‘cheating is wrong’ or ‘helping someone in need is right’ – and we then pass these principles around. This view owes much to Jonathan Haidt’s characterisation of morality.
But on another level, evolution also defines the problem that these evolved sentiments are trying to solve. And that problem is one of survival and enhancing fitness – a problem that is more easily solved through cooperation than going it alone. As I state above, there’s nothing intrinsically good or bad about survival or fitness, but these are the things that have ultimately shaped morality as we know it today.
So, I agree with you again! We can use our reason to override our hardwired impulses – and often that’ll even be the ‘right’ thing to do. This is because our evolved impulses are heuristics – they’re rough and ready shortcuts that led to favourable outcomes in our evolutionary past. But they’re prone to error, and they may or may not be suited to today’s environment (such as when presented with delicious sugar-filled desserts – thus my analogy of occasionally going on a ‘moral diet’).
This is why I characterise the foundation of morality (as contingently and empirically derived from facts about evolution) as being ‘fostering pro-sociality and cooperation’. If we agree that that’s the problem we’re trying to solve, then we can judge the suitability of a variety of devices, including our evolved intuitions, reason and various moral principles as being either instrumentally good or bad for their practitioners towards that end.
Yeah, there are gaps to be filled in my argument, but that’s the way I’m leaning. And it’s all thoroughly naturalist, anti-realist, empiricist, with a mix of cognitivism and non-cognitivism.
douglasbryant · 18th January 2010 at 10:46 am
Interesting stuff. I’m not sure what you mean by heuristics and the two ways evolution and human nature contribute to our values. I’ll keep reading and eventually get a sense of it. Speaking of cheating, however, you might enjoy “Can a General Deontic Logic Capture the Facts of Human Moral Reasoning? How the Mind Interprets Social Exchange Rules and Detects Cheaters” by Leda Cosmides and John Tooby, in Moral Psychology, Volume One, edited by Walter Sinnott-Armstrong. The authors show (prove, in my opinion) that morality is not the mere application of rationality. They show that ‘logic’ works differently, subconsciously, for different purposes, and that the brain has evolved different logic subroutines for moral and non-moral purposes. What this amounts to is that we’re a lot better at cheater-detection than we are at simple logic. Pretty interesting stuff.
When you say, “if we agree that that’s the problem” I assume you mean the answer to the question of the foundation of morality, which you say is fostering pro-sociality and cooperation? Is that right? Can you clarify?
James Gray · 18th January 2010 at 1:54 pm
douglasbryant,
I don’t see what problem you are referring to. You say, “It’s senseless to tell a sociopath to just change his values and start caring about other people. It’s senseless to tell the rest of us to forsake altruism if we find out it isn’t rationally justified.”
I am not sure if I agree or not. Either way, I did not necessarily say that a sociopath could start to have empathy out of nowhere, and I don’t necessarily tell people to “forsake altruism.”
We have a desire not to feel pain or to suffer self-sacrifice. Altruism involves pain and self-sacrifice. Therefore, altruism could be stupid and “unjustified.” Altruism isn’t just something we can decide to have in order to be moral beings. It’s something that could hurt us.
A moral realist doesn’t have this uncomfortable problem with altruism because the value of other people exists no matter what anyone desires. An anti-realist will have to explain why altruism isn’t stupid (even though it seems to be, given that nothing really matters).
lonnease · 19th January 2010 at 9:06 am
Hi Tim,
Nice, clear breakdown of your evolutionary moral value theory. Being philosophically “raised” a Sartrean, I’m totally on board with your basic stance on morality.
However, without going too far into it, I’m not on board with the natural selectionist baggage you bring in from evolutionary theory. I have no doubt that the natural selection + chance mutations mechanism operates as advertised, but I have very serious doubts that it is anywhere near sufficient to account for much of the evolutionary change and adaptation we find in the animal world and fossil record. Cf. Biological Emergences by Reid, Beyond Natural Selection by Wesson, Evolution in Four Dimensions by Jablonka and Lamb, etc.
I know you’re always safer sticking to the current dominant theory in biology for your work, but what’s at stake is after all the heart of your thesis as such.
Tim Dean · 19th January 2010 at 9:03 pm
Hi Lonnease. Thanks for the references. I’ll look in to them.
However, as a philosopher, it’s not my place to argue biology, nor to lean on scientific theories that don’t have the broad support of the scientific community. That’s not to say these approaches aren’t correct, just that I lack the expertise to judge them, so of necessity I rely on the judgement of the scientific community.
Also, as a philosopher, my programme is very much a: ‘if this picture of evolution is true, what else might be true’. And if the current picture of evolution changes, then my thesis will have to change as well. Such is the lot of the philosopher who builds his house on the sands of empiricism. But I’m cool with that.
That said, the picture of evolution to which I subscribe – and which I believe is the best accepted contemporary view – incorporates genetics, epigenetics, gene-culture co-evolution, and cultural evolution – all of which are operated on by natural selection as well as other forces, such as sexual selection, genetic drift and other stochastic forces.
Also, I suspect many aspects of my thesis would survive changes in evolutionary theory. As long as the basic tenets of evolution are true – the mechanisms of evolution are less important – then much of what I say stands. I think.
James Gray · 19th January 2010 at 9:44 pm
douglasbryant,
I found some information in the Stanford Encyclopedia of Philosophy involving the philosophers you mentioned: http://plato.stanford.edu/entries/evolutionary-psychology/ and I found a PDF of the essay you mentioned at http://www.psych.ucsb.edu/research/cep/papers/deonticCT2008.pdf
I will have to give this a look.
I’m not sure exactly what you mean about morality not being entirely about rationality. I don’t know what my options are here, but I think it is perfectly reasonable to give an aspirin to someone. I don’t have a precise definition of reason, but it makes sense in my mind. I don’t know a lot about deontic logic, and I don’t know if I fully endorse it at this point.
I realize that altruism might not be rationally justified, but that doesn’t make me want to be moral. Just the opposite. I took a moral sentimentalism class and learned about the idea that morality isn’t based on reason (or is greatly not based on reason) involving Hume, Adam Smith, and so on. So far I have not been impressed or attracted to that position. I might find it more relevant if I discarded realism, but then I might not find ethics as interesting.
A few thoughts that I have about morality and rational action. One, I am tempted to agree that most people don’t think very rationally, so I wouldn’t be surprised if they didn’t use reason when dealing with ethical situations. Morality doesn’t seem very meaningful to me until it becomes rational through philosophy or some kind of intuitive quasi-philosophical wisdom.
Two, I also agree with Aristotle that implementing moral wisdom will generally be a thoughtless automatic reaction (hopefully based on trial and error and so on). Thinking too much is often a sign that we lack the wisdom to deal with the situation at hand. This doesn’t mean that I think morality isn’t rationally justified. We can behave in a justified way without having to form an argument or think about it. There is a difference between moral theory used in hindsight in order to identify justified behavior and the procedures we use to try to behave morally in everyday life.
Three, sociopaths have been found to use utilitarianism when making moral decisions rather than some kind of emotional reaction. If you google ‘sociopaths utilitarianism’ you will find information about this. I suspect sociopaths might lack the Aristotelian impulse that would help them make good decisions in general, so they have to think more about their moral choices. However, everyone might have to (or should try to) step back and reason about morality in certain situations. This seems especially important when we step back and reflect on our behavior at the end of the day. Sometimes we will decide, “next time x happens, I will do y instead” based on our reflections.
I do admit that impulse and desire play an important part in ethics, but I don’t really know what all the options are supposed to be involving reason and other things.
Tim Dean · 19th January 2010 at 11:27 pm
James:
I actually have the Sinnott-Armstrong books. They’re a wonderful overview of the many contemporary issues surrounding evolution and moral psychology.
And I think douglasbryant is referring to a kind of neo-sentimentalism when he says that “morality is not the mere application of rationality”. The moral psychology suggests that moral judgements are made without much involvement from conscious reflection, but are more driven by intuitions and moral emotions.
That doesn’t mean that a moral judgement can’t be seen as being ‘rational’, in the sense that it was rational given that agent’s beliefs and desires etc. Basically, separating the motivation from the justification; the former can be sentimentalist, while the latter is justified using reason.
But I don’t think you’d necessarily disagree with this picture. It contrasts with Kantian deontology, where reason *does* (or should) play a role in directing moral behaviour, but it doesn’t necessarily disagree with utilitarianism, for example; Mill conceded that utilitarian thinking would be clunky in the heat of the moment, but it could be used to determine whether a particular act was morally justifiable after the fact.
And the Aristotelian picture you paint is quite similar to the moral psychology picture. According to moral psychology, if we’re to change the way people make moral judgements, you can’t just give them a long list of principles to obey; you need to change the way they think – or even the way they perceive the world. Reason and reflection have their purpose – such as to resolve a moral dilemma or to justify an action – but they don’t steer much of our moral behaviour.
So again, I’m drawn to the conclusion that, in practice, our moral systems would play out in a very similar way, even if our metaethical pictures differ and our values come from different sources.
Douglas · 22nd January 2010 at 1:48 am
James,
What I mean to illustrate when I say that it’s no use telling a sociopath to feel bad, or telling someone else not to feel empathy, is that the overly rational conception you seem to have of human nature and morality is mythological. Even if I could prove to you that moral realism is false and altruism is irrational, you would not be able to abandon altruism, empathy, or sociality. You are hardwired to be an empathetic, altruistic member of a community. There’s no proof that can be given in philosophy that can make you feel okay about punching a homeless man in the face or be nonchalant about bashing a baby’s brains out. You can’t even lie about what you had for breakfast without exhibiting physical symptoms as evidence of your discomfort. Your moral feelings, like your physical disgust reflex, are part of your neurophysiology and neurochemistry. When you suggest that we might as well abandon altruism if it’s irrational or if moral realism is not true, you betray a philosopher’s understanding of human ‘nature’, an understanding uninformed by modern science. I suspect that’s why you think it’s an objection to anti-realism that altruism’s rationality cannot be accounted for on anti-realist premises. It’s not. There is no “uncomfortable problem with altruism” for anti-realists.
The research on sociopaths and psychopaths shows that they typically deviate from deontological answers and stick to utilitarian answers. By contrast, the rest of us usually stick to utilitarian answers until emotions get involved. Because the ‘paths lack these emotions, they don’t feel the need to abandon the rational answer. A person can understand an internal logic of morality, like utilitarianism, without feeling it. A ‘path can also tell you the rules to Monopoly or chess. It’s little different. What does make them different from us is the lack of emotion. That’s why I brought them up – to demonstrate the connection between biology/physiology and morality. Just as it’s a mistake to think that demonstrating a solid proof of the rationality of altruism could be a cure for ‘pathy, as though it would magically restore emotive feeling, it’s also a mistake – a mistake about the nature of human beings – to think that we could behave very differently if we found out that anti-realism is true and altruism is irrational.
Paul · 25th January 2010 at 8:13 am
I tend not to agree that morality is supposed to foster pro-sociality and cooperative behavior. Living amongst ethnic strangers in a large metropolis, instincts that by and large evolved to foster in-group cohesion and in-group cooperation don’t, as a practical matter, inform one how to live well, nor do they even provide a reliable answer as to what to do and what not to do. Outside of moralities that appeal to ethnic chauvinism (many religious institutions in the U.S. are really more like ethnic community centers), there really isn’t anything that amounts to a system of morality on the ground in the U.S. to help guide and influence culture and behavior, much less politics. Also, if the point of morality is to promote pro-sociality and cooperative behavior, doesn’t that mean that ethno-states are the only states that are morally permissible?
Tim Dean · 25th January 2010 at 10:18 am
Hi Paul.
I agree that the base assertion that morality is about pro-social behaviour doesn’t include things like how to live life well. However, I’d suggest that the latter have been tacked on to morality by various philosophies and normative systems, but they are not central to ‘morality’ as such.
They are still important, and can still be added to a normative system, but the reasons and justifications for following certain rules in order to live life well are very different from the reasons and justifications for behaving cooperatively.
Also, evolution influences morality in two ways. One is by imbuing us with a moral sense – effectively a moral faculty along with a bunch of heuristics for solving moral problems, like the in-group instincts you mention (although ‘instinct’ is a problematic word). Another example would be guilt, which discourages us from behaving in ways that contravene social convention, or empathy, which encourages us to help others. However, these heuristics were shaped when we lived in small tribal groups, so they’re somewhat maladaptive in modern society. Also, as heuristics, they’re rough and ready rules, so are prone to error.
The other level where evolution has influenced morality is by defining the *problem* that morality is trying to solve, i.e. how to encourage pro-social behaviour. If we focus on the problem and understand that, we can see that the heuristics evolution lent us to solve it are imperfect. Thus we see parochial behaviour like people still applying moral values only to their in-group, whether that be religious, socio-economic or ethnic.
So, to answer your last question, ethno-states would be the only morally permissible ones only if we could demonstrate that they best solve the *problem* of encouraging pro-social behaviour, not because they emerge from the heuristics we have evolved to solve that problem. And I would suggest they don’t best solve it. They might work well for the community within, but they also foster conflict with other ethno-states, which can be detrimental to all.
Ultimately, I don’t believe there is any perfect – or any one morally permissible – answer to how best to promote pro-sociality. Hence, I support a pragmatic pluralist position that provides a framework for conflicts in value and norms to play out in a controlled way, much like a democracy for morals.
I hope that answers your questions.
Michael Kean · 25th January 2010 at 11:58 am
Just wondering about the role of moral habit in terms of brain plasticity and so-called “hard-wired” moral behaviours. How much of our evolutionary inheritance may be overridden not so much by reason after the fact, but by habit before the fact? I suspect an intelligent moral life is a dance with dynamic reality (with the very important feature of feedback that is at the core of natural selection), rather than just a simple matter of evolutionary inheritance or rational choice. The dance steps won’t change quickly, but if we encourage and participate in a change in the dance steps (based on philosophical reasoning), maybe this has more potential to save our world from the insanity of WWIII, greed in our corners of the vineyard, etc., than just explaining the logic?
Tim Dean · 25th January 2010 at 12:30 pm
I don’t believe there are likely to be any ‘hard-wired moral behaviours’. I think there are hard-wired faculties and capacities – things like emotions and heuristics – and these lead to behaviour through a complex interaction with the environment.
However, plasticity is very important. I talk about this in my Moral Black Box essay (link at the top of the page).
In terms of the feedback, I promote an idea of moral perception (mentioned in the Moral Black Box piece) which flavours the way we see a scene, and ascribe value to it, before even emotion and reason have time to act. Then emotion kicks in to give the scene its salience. Then reason has a chance to interject and inhibit or steer the behaviour elsewhere. Experience then modifies the lens, starting the process again.
So, to change the way people make moral judgements, it’s not always best to throw more reasons or principles at them, but instead to change the way they perceive the situation in the first place.
James Gray · 25th January 2010 at 10:05 pm
Tim,
You said: “So, to change the way people make moral judgements, it’s not always best to throw more reasons or principles at them, but instead to change the way they perceive the situation in the first place.”
Can you give an example of this?
Obviously we don’t just want to use propaganda to trick them into being nicer, so I have to still wonder if this could work on someone who has rationally decided to be egoistic and do whatever it takes to make their company more money. (Get the supreme court to decide that campaign contributions from corporations shouldn’t be limited, for example.)
Douglas,
I read “Can a General Deontic Logic Capture the Facts of Human Moral Reasoning? How the Mind Interprets Social Exchange Rules and Detects Cheaters” and I still don’t see how I am using something overly rational. My approach could be seen as a consequentialist one. Maximizing values is justified if realism is true. If realism is false, then consequentialism is false. If consequentialism is false, then I do not need to universalize what is good or bad. I can decide to try to just take care of myself (as much as I can). That might be limited, but it doesn’t mean it’s impossible. I claim practical rationality is capable of asking us to question/justify the value of our own desires. I don’t think that is impossible, and therefore it isn’t overly rational.
Yes, I believe we can change our desires to some extent, but I never said we could abandon social impulses altogether.
I have already responded on your website about this topic, so that’s all I have to say for now.
Tim Dean · 26th January 2010 at 4:59 pm
Hi James. I think there are plenty of ways of changing the way people make moral judgments by changing the way they perceive the situation – and only some of them are propaganda. Although I wouldn’t advocate propaganda because it inherently involves deception.
As for examples, starting with the trivial, how about that old Chinese story of the farmer whose hammer goes missing, and he’s convinced his neighbour’s son stole it. Suddenly the neighbour’s son looks conniving, deceitful, shady etc. Then the farmer finds his hammer – he’d only misplaced it – and suddenly his neighbour’s son looks like a normal boy again.
Or consider someone who’s mugged by an individual of a particular ethnic group. I’d suggest they’re often more inclined to mistrust other members of that ethnic group, and more likely to be judgmental about them, because when they see members of that group, they immediately experience feelings of anxiety and mistrust. That kicks in before reason has a chance to reflect on the reaction.
In fact, in-group/out-group distinctions are a big example of moral perception in action. I think there’s evidence that suggests many people don’t even hold some out-group members to be worthy of moral consideration – such as through the process of dehumanisation in war.
So if an individual sees an out-group member in pain, it may not even trigger a moral judgment – we tend to forget this phenomenon in our liberal everyone’s-in-my-in-group world. Consider some German concentration camp guards and their need to perceive many of their prisoners as less than human in order to carry out their jobs. Should they see them as human, it would kick their moral faculty into gear and cause them to feel very differently – as many did after the war. I would suggest that perception plays a big role in this – which is not to say reason and moral principles don’t, but just that you can’t explain it all using reason alone.
In fact, in terms of why two individuals differ in most (non-dilemma) moral judgments, I’d say perception does most of the heavy lifting, and reason and moral principles play only a minor role in explaining the difference in their judgments.
There are also some examples in the moral psychological literature (see Simone Schnall’s ‘Disgust as Embodied Moral Judgment’, 2008), such as where they have people make moral judgments after they were primed to feel disgust. If the subjects are disgusted, their moral judgments are more harsh. And there are others where subjects judge trustworthiness of faces on a computer screen after being made to feel either hot, comfortable or cold – and it significantly changes their judgments. I’d say mood is just one aspect of moral perception that can influence moral judgment before reason kicks in.
Iris Murdoch also talked about moral perception, although it didn’t get much attention from the ivory tower brigade. That said, Lawrence Blum has a paper called ‘Moral Perception and Particularity’ in Ethics from 1991 that gives Murdoch’s ideas another spin.
James Gray · 26th January 2010 at 7:16 pm
Tim,
You didn’t quite give examples of “changing perception.” You might be right that perceptions are at work in those examples, but changing them is another matter. I suppose one way to get the enemy to see you as a person is to become friends or something.
My professor Peter Hadreas wrote a book recently “A Phenomenology of Love and Hate” that details how love and hate can exist, but he doesn’t talk a whole lot about how to get rid of hatred. To some extent he talks about the “logic” of hatred, which might imply that someone could realize the flawed reasoning involved, but that seems to admit that perspective and principles are related.
I like to think that moral facts and logical principles are very related to changing someone’s perspective. They don’t seem mutually exclusive.
Paul · 27th January 2010 at 7:11 am
As a practical matter, any morality, or system of customs or traditions, will need to attempt to convince its members to act in pro-social ways when they are tempted to do otherwise. I think this is why almost all moral traditions (and by this I mean living and breathing ones, not those dreamed up in a book) have embedded within them a conception of the good life, and of how exactly following the tradition is conducive to that. E.g., you shouldn’t sleep with your neighbor’s wife, because you will be guilty before God and denied true communion with him. So I agree with you that a natural history of morality does in itself provide an answer as to what problems morality solves; I just think that if you look at the world’s dominant moral traditions, all of them use a notion of the good life as an essential tool in motivating moral action.
Tim Dean · 27th January 2010 at 4:55 pm
Hi James. You’re right, I didn’t answer your question properly. And to tell you the truth, I don’t know if I have the answers to how to change people’s perception. I guess it’s more an empirical question, so I’d turn to psychology to see how people’s cognitive schemas, stereotypes and perception in general can be changed, then aim at that.
Just cleaning up public environments and making the world appear less threatening might foster more trust and cooperation. (So a ‘terror alert’ is exactly the thing you *don’t* want – unless the terror alert can be shown, in aggregate, to improve people’s lives, although I doubt that can be done.)
I guess another example would be ‘framing’, a la George Lakoff and changing ‘tax cuts’ to ‘tax relief’. However, there’s a grey area between framing and propaganda, so I’d be wary of going overboard on it.
I also think moral facts can change perception – in fact, I’d suggest the best way to give them real weight is to have them do just that, rather than leave perception unchanged and try to modify it using reason after the fact. For example, in many western countries recycling has become a moral issue – at least in the sense that many people see something morally wrong with someone throwing recyclables into the garbage. That started from principles – that ‘recycling is good’ – and eventually it gained traction when people started *seeing* recycling as good, and the converse as bad.
And Paul, I agree that a moral system can’t be promoted in isolation for the very reasons you mention (that’s one reason you don’t see many practising Kantians, IMO). My research focuses on making sense of the normative, but the next step after that (which I’ve talked about a bit on this blog) is to build a secular religion – basically a secular philosophy that can fill the gap left by traditional supernaturalist religion.
This would include many more things than just a moral philosophy – it would include guidelines on finding happiness, as well as a support network when things are tough, along with education, community building initiatives and charity services. Functionally, a religion, for all intents and purposes, except without relying on supernatural explanations. But that’s a whole other project…
Paul · 29th January 2010 at 3:25 pm
Tim, I wonder though if you can make sense of the normative part of morality without first understanding the good life. What good are pro-social and cooperative behaviors if they lead to bad things – say, for example, all of the immense work that went into maintaining a caste system in India? Knowledge of how to encourage such behavior is morally indifferent without a conception of the good.
Tim Dean · 29th January 2010 at 4:17 pm
I think one can, and should, separate the ‘moral’ from the ‘good life’. They’ve been entangled for far too long.
That’s why my argument above allows for many conceptions of the ‘good life’. I’d also suggest that we’ll never arrive at a point of broad agreement amongst everyone about what the ‘good life’ is. Thus, one of the roles of normative ethics, in my argument, is to mediate between various conceptions of the ‘good life’.
But – and this is the crux of my argument – while we might have different 1st-order values, there’s a 0th-order value that I’d suggest we all adhere to, which is that we should have some way to mediate between all our various 1st-order values in such a way that is self-sustaining over time. That’s point (9) in my argument above.
This makes normative ethics more like politics, which (not so) long ago abandoned the notion that there was one right way to govern and tilted towards pluralistic democracy where many voices can be heard and the extremes can be kept in check.
Paul · 2nd February 2010 at 5:29 am
The notion that we should have pluralistic democracy as opposed to authoritarian collectivist government a la China is based upon culture specific 1st order values. The 0th order value that rules politics, especially politics between different groups of people and different polities, has always been simply the rule of the stronger. Are you suggesting, like Thrasymachus in the Republic, that justice or moral right is the rule of the stronger?
I think that such a position lies at the base of pluralistic democracy, whose first modern proponent, so far as I know, was Machiavelli. His goal was to emulate the Roman Republic, which produced, and handed the reins to, countless men of extreme martial and political virtue. What, then, does that make normative ethics? A science of war, politics and psychology that enables the virtuous to rule? I am really confused at this point, because I think that your argument is leading you in a Straussian direction, but I am not sure that that is where you were intending to head.