Evolution and Moral Ecology, Mini PhD Version

Published by timdean

I’ve posted a new static page with an outline of my PhD thesis on evolution and moral ecology. If you’re interested in my overarching theory, it’s worth reading. Hopefully it’ll put a lot of the other missives I write in context. Although I don’t doubt it’ll also raise a lot of questions and objections. Happy to hear them. Any criticism that can steer me in a better direction will improve my thesis. I call it PhD 2.0.


15 Comments

James Gray · 24th November 2010 at 6:22 pm

Not sure if you would rather get comments here or on the permanent page.

I find the term “moral sense” to be mysterious. Some people rate some values higher than others, but why call that a “moral sense”?

I don’t know what “strategies” has to do with a “moral sense” or rating some values higher than others. Some people might have different strategies than others (about how to live together, etc.). It’s unclear how such strategies relate to instincts or “human nature.”

We could be talking about “memes” here that could evolve culturally or individually rather than genetically. Unsuccessful “memes” must be discarded or the people could die. They would then be self-defeating or self-destroying.

I agree that temperament can be part of our moral outlook, which is based more on nature, but my temperament has little to do with my moral beliefs. I think I have a highly conservative temperament, but that doesn’t make me a political conservative or use a “conservative strategy.” I also don’t know that my “intuitions” support such a conservative strategy despite having conservative emotions. I don’t think moral intuition is little more than an emotional response for that reason.

You said before that morality is “overriding.” I want to know more about how that works. As far as I can tell we observe some people to be more pro-social than others, and calling one action “better” than another is more emotive or noncognitive than anything else (based on your view).

As such, our moral faculty – including the moral intuitions and moral emotions that so influence moral philosophy – has not evolved to seek out truth per se, but is contingent on human biological interests and sociality.

Is that all that is going on? As a moral realist I agree with much of what you say, and I think emotions and bias can influence our “moral intuition,” but I don’t think that is “all it is.”

Instead of seeing morality as an objective feature of the world to be discovered, it might be more appropriate to understand it as a construct to be created by humans for our benefit. As there are no moral truths in the world – or moral₁ facts, as Greene puts it – this doesn’t eliminate morality of the moral₂ sort, which deals with issues of social interaction and concern for the interests of others.

How important is this for your thesis? Do you only want to defend error theory as a possibility or do you want to argue that error theory is probably true?

What exactly is a “free rider”? If we could get machines to feed everyone and meet our needs, would we then no longer have a reason to hate them?

If “free riders” are those who don’t increase the reproductive advantage of others by producing clothing, food, medicine, housing, etc. then almost everyone is a free rider; but I think it would be stupid to think “free riders” are bad people when they produce art, movies, games, comedy, useless philosophy, and so on.

Tim Dean · 25th November 2010 at 2:57 pm

Hi again James. In order:

By ‘moral sense’ I mean the fact that we appear to have an innate psychological faculty, or faculties, that produce moral judgements and direct moral behaviour. I mean it in a similar way to saying we have an aesthetic sense or a direction sense.

By ‘strategies’ I mean approaches to solving the problems of social living, which I take as the function of morality. Our evolved moral sense promotes some strategies to solve these problems, and weights different strategies more highly in different people. The strategies themselves are promoted by proximate mechanisms like empathy, righteous anger, guilt etc.

I don’t really touch on memes, but I’m sure they’re important. What I’m talking about is our moral psychology. How that moral psychology influences culture, which in turn feeds back into our moral psychology is a complex (and fascinating) subject in itself. I’m more interested in suggesting that our psychology varies, and that variation influences the way many people form moral attitudes and judgements.

I would suggest that your personality has influenced your moral beliefs, although perhaps less so than it would with a non-philosopher. Philosophers are trained to prioritise reason and discount the influence of emotions in forming moral beliefs, but most people don’t think that way.

Furthermore, the fact you’re a philosopher probably has something to do with your other psychological characteristics, such as high intelligence, propensity to reason, perhaps relatively subdued emotional temperament and tolerance of ambiguity etc – these to me suggest someone who would tilt towards pragmatic liberalism. Not sure if that’s accurate in your case…

Morality is overriding in the psychological sense that it’s strongly persuasive, often overriding other conventional norms. But it’s not categorically imperative, in the Kantian sense.

We can judge an action to be ‘better’ in a few ways. The first is to get an intuitive impression of the rightness or wrongness of an action – primarily emotive. We can also reflect on that action – primarily rational. Moral philosophy, which is obsessed with justification, operates primarily rationally, but that ignores how most people form moral judgements.

The take home message of that notion is that once moral philosophers are in agreement (ha!) on the correct interpretation of an action, they need to find a way to influence the intuitions (and perception) of others to have them also arrive at that same judgement. Big problem for moral philosophy – and largely overlooked to date.

On error theory, I’d suggest it’s probably true, but I’m not going to argue that in my thesis. Just give it a nod. Or I may drop it altogether – my supervisors already stress I’m covering too much ground.

I mean ‘free-rider’ in the game theory sense: someone who exploits someone else’s costly altruistic behaviour to their own advantage. Preventing free-riders from disrupting social interaction is one of the (major) problems morality is there to solve.
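
To make that a bit more concrete, here’s a toy public goods game showing why free-riding pays at the group’s expense – the payoff numbers are invented purely for illustration:

```python
# A minimal sketch of free-riding in a public goods game. Each player may
# pay a contribution into a common pot; the pot is multiplied (the gains
# from cooperation) and split equally among all players, contributors and
# non-contributors alike. All numbers are invented for illustration.

def payoffs(contributions, multiplier=1.6):
    """Return each player's net payoff for one round of the game."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [share - c for c in contributions]

print(payoffs([10, 10, 10, 10]))  # [6.0, 6.0, 6.0, 6.0] - everyone cooperates
print(payoffs([10, 10, 10, 0]))   # [2.0, 2.0, 2.0, 12.0] - the free-rider does best
print(payoffs([0, 0, 0, 0]))      # [0.0, 0.0, 0.0, 0.0] - cooperation collapses
```

The free-rider comes out ahead in any single round, yet if everyone defects nobody gains anything – which is exactly the disruption to social interaction that morality (disapproval, punishment, moralising) is there to prevent.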

And I’d stress that morality evolved because it lent a reproductive advantage in our evolutionary past, but it’s not about that today. Evolution can explain why we have the traits we have, but it’s up to us to decide how to use them.

Today I’d suggest morality is a device to facilitate social interaction and cooperation, which is desirable because social interaction and cooperation advance our aggregate interests. It doesn’t need to be defined as such, and I’d be open to alternative definitions, but it’s not about promoting fitness.

Hope that answers some of your questions. Keep them coming, though. They remind me of areas I need to flesh out to prevent misunderstanding and/or to patch holes.

James Gray · 25th November 2010 at 5:23 pm

I don’t really touch on memes, but I’m sure they’re important. What I’m talking about is our moral psychology. How that moral psychology influences culture, which in turn feeds back into our moral psychology is a complex (and fascinating) subject in itself. I’m more interested in suggesting that our psychology varies, and that variation influences the way many people form moral attitudes and judgements.

I thought you suggested that certain strategies are “better” in the sense of being more successful and “survive” for that reason. But they could survive as a meme rather than genetically. Does this have any implications for your view?

Furthermore, the fact you’re a philosopher probably has something to do with your other psychological characteristics, such as high intelligence, propensity to reason, perhaps relatively subdued emotional temperament and tolerance of ambiguity etc – these to me suggest someone who would tilt towards pragmatic liberalism. Not sure if that’s accurate in your case…

Yes, emotions are influenced by beliefs but also by something more biological. I think my biological temperament tends to be very conservative, but I don’t think it’s a black and white issue. I have a tendency to have fondness for authority (including intellectual authorities), fondness for in-group loyalty, unease towards strangers, and unease concerning “new” experiences.

I would expect that most people would have similar conservative emotions to various levels, but some people count such things as something like intrinsic values. I don’t quite understand what exactly is going on with that. Most people don’t know what “intrinsic values” are, so it’s not easy to talk about it with most conservatives.

I think that my conservative temperament could very well be based on something that is beneficial from an evolutionary point of view and such things also seem to have some indirect relevance to morality.

Morality is overriding in the psychological sense that it’s strongly persuasive, often overriding other conventional norms. But it’s not categorically imperative, in the Kantian sense.

Is this just a descriptive fact then? That we tend to find moral norms to be motivational or something like that? I would still like to hear more on this issue.

We can judge an action to be ‘better’ in a few ways. The first is to get an intuitive impression of the rightness or wrongness of an action – primarily emotive. We can also reflect on that action – primarily rational. Moral philosophy, which is obsessed with justification, operates primarily rationally, but that ignores how most people form moral judgements.

I think philosophers are quite aware that moral beliefs can be formed by “culture,” bias, prejudice, and other emotions. That is simply not going to tell us when behavior is appropriate.

The take home message of that notion is that once moral philosophers are in agreement (ha!) on the correct interpretation of an action, they need to find a way to influence the intuitions (and perception) of others to have them also arrive at that same judgement. Big problem for moral philosophy – and largely overlooked to date.

Philosophers have been greatly ignoring the issues of “motivation.” You can say that stealing is wrong, but then steal due to a lack of motivation. Is that what you mean here? I am pretty interested in that issue.

I mean ‘free-rider’ in the game theory sense: someone who exploits someone else’s costly altruistic behaviour to their own advantage. Preventing free-riders from disrupting social interaction is one of the (major) problems morality is there to solve.

This definition looks very normative to me. If we already presuppose that free-riders are “exploiting” others (doing something bad), that will change how we interpret who counts as one.

Today I’d suggest morality is a device to facilitate social interaction and cooperation, which is desirable because social interaction and cooperation advance our aggregate interests. It doesn’t need to be defined as such, and I’d be open to alternative definitions, but it’s not about promoting fitness.

This definition seems to be based on “majority rules,” but I’m not sure how an anti-realist could have a more intuitive definition of morality. We use the word “morality” when dealing with things that are “important,” which involves value. You seem to take value to be based on a certain kind of desire (basic desire). That seems a bit too broad.

I also think there are issues with equating Aristotelian final ends (ultimate ends) with basic desires because there are some things that seem to “make sense” for us to value or disvalue. Pleasure itself is something that we can value for its own sake, but it doesn’t make sense to value money in that sense. It is possible to value money as an end in itself, but we “shouldn’t” do that.

Some philosophers talk about desire “coherence” (or compatible goals) and such to explain why some desires are better than others, and some desires might also be incoherent when considering society at large. I don’t find this view satisfying, but it’s a thought.

Hope that answers some of your questions. Keep them coming, though. They remind me of areas I need to flesh out to prevent misunderstanding and/or to patch holes.

No problem, I look forward to seeing the progress of your thesis if possible.

Tim Dean · 25th November 2010 at 6:30 pm

Yeah, I did chuck out an idea about moral memes competing. That won’t appear in my thesis (waaay too speculative), but I think there could be something to it. Basically, I reckon Kant would keep buying rounds of drinks till he’s broke while Machiavelli would get drunk and go home with a full wallet.

As for moral-political psychology – the discipline is really in its early days, although what you’ve mentioned suggests high Introspection more than anything else, which isn’t necessarily predictive of political attitudes. Openness to experience and Conscientiousness are the big predictors. And unease at ‘new’ experiences doesn’t necessarily mean low Openness – it can mean Introversion. More importantly for Openness is being open to new ideas, which as a philosopher, I’d suggest you are more than most non-philosophers. But hard to say from this small self-report.

From what I’ve read, it’s things like perception of threat, fear (not just unease) of outsiders, fear of death, low integrative complexity, aggressiveness, pessimism etc that tend to lean conservative, but not always.

On overridingness – I’ll have to do a more thorough post on that. Basically I think morality is overriding in the sense that intense hunger is overriding. It strongly motivates behaviour, but not consistently or without being contested by other impulses. And doesn’t always override self-interest – hence the need for moralising and laws. Yet, I reckon this (descriptive) thesis is sufficient to explain moral phenomena. I don’t think we need to posit something that is metaphysically overriding.

On the normativity of my free-rider definition – it’s normative, but instrumentally, like all my normative statements. It assumes that we value promoting our interests, and protecting them against free-riders. And, as a matter of empirical fact, I’d suggest we do value these things.

And I wouldn’t necessarily appeal to majority rules to decide on what’s good and bad – the majority can be wrong. But, assuming people want to pursue their interests (which I reckon is true), then there are facts about the best way to do that, and a rational majority (ha!) would choose a social contract, or something similar, to steer behaviour to maximise aggregate interests.

This is bigger stuff than my thesis, and I haven’t settled on an account of how this works that I’m entirely satisfied with yet – but I think it’s doable, and it’d look like a pluralistic social contract kind of arrangement, a la Gauthier and Binmore.

On values – I’ve posted a bit about ‘interests’ before, and I’ll have to flesh that out. But I’d suggest that we naturally pursue our ‘intrinsic interests’, which are those interests we have by virtue of the properties we possess – square holes pursue square pegs. Not that we’re necessarily aware of what our intrinsic interests are, and we use many proxies to advance them, such as chasing pleasure or accumulating money etc. And there’s no metaphysical necessity that we ought to pursue our intrinsic interests, but I reckon as a matter of empirical fact, that we do. And it’s not such a bad thing either.

And yeah, this is pretty different from fluffy ‘coherence’ talk. In fact, I’m largely unimpressed by all the armchair moral psychology I’ve seen done by philosophers. So much stuff about ‘rational regret’ and definitions of pleasure/pain or well-being, and a lot of it is just too abstract and too divorced from real, messy, wet psychology.

James Gray · 25th November 2010 at 8:56 pm

I’m a little unclear about what you want to conclude in your thesis. Are you merely defending the following?

1. There are moral strategies, and this diversity can be a good thing.
2. We evolved an interest in morality. (It is an instinct or part of human nature.)
3. There are some unique biological differences in people that have an impact on their moral beliefs.

As for moral-political psychology – the discipline is really in its early days, although what you’ve mentioned suggests high Introspection more than anything else, which isn’t necessarily predictive of political attitudes. Openness to experience and Conscientiousness are the big predictors. And unease at ‘new’ experiences doesn’t necessarily mean low Openness – it can mean Introversion. More importantly for Openness is being open to new ideas, which as a philosopher, I’d suggest you are more than most non-philosophers. But hard to say from this small self-report.

From what I’ve read, it’s things like perception of threat, fear (not just unease) of outsiders, fear of death, low integrative complexity, aggressiveness, pessimism etc that tend to lean conservative, but not always.

What psychologists call “temperament” has to do with genetically determined personality traits based on how we emotionally or perceptively respond to our environment. Fear of death is probably not genetically determined. The emotions you describe are probably culturally defined and based on language. Our temperaments can be influenced by beliefs and “interpretation.” A negative response could be “fear” or “anger” depending on the cognitive elements involved.

On overridingness – I’ll have to do a more thorough post on that. Basically I think morality is overriding in the sense that intense hunger is overriding. It strongly motivates behaviour, but not consistently or without being contested by other impulses. And doesn’t always override self-interest – hence the need for moralising and laws. Yet, I reckon this (descriptive) thesis is sufficient to explain moral phenomena. I don’t think we need to posit something that is metaphysically overriding.

I thought that your view was something like that and obviously that is different from my interest, which is whether or not it makes sense to “force” someone to do something, manipulate them, or persuade them. The overriding nature of morality is what tells us to force sociopaths to behave themselves, encourage people to attain more empathy, and so on. This is a huge part of the motivational component of morality — do we even want to be motivated? Is it right to try to get others to be motivated?

I could imagine that empathy makes sense for a moral realist because other people really do matter, but an anti-realist might rationally prefer to lose what little empathy they have to avoid the negative emotions involved. It might even be irrational for an anti-realist to want empathy.

On the normativity of my free-rider definition – it’s normative, but instrumentally, like all my normative statements. It assumes that we value promoting our interests, and protecting them against free-riders. And, as a matter of empirical fact, I’d suggest we do value these things.

And I wouldn’t necessarily appeal to majority rules to decide on what’s good and bad – the majority can be wrong. But, assuming people want to pursue their interests (which I reckon is true), then there are facts about the best way to do that, and a rational majority (ha!) would choose a social contract, or something similar, to steer behaviour to maximise aggregate interests.

Now we need to know what counts as “rational” in this context.

Also, it’s normal for us not to care about certain interests of others. We don’t care that others want to count grass in their spare time. We don’t value that interest. Is a person counting grass all day but benefiting from the hard work of others a free rider? If so, what about philosophers contemplating other things very few people care about while benefiting from the hard work of others (by getting money from the state, etc.)?

This is bigger stuff than my thesis, and I haven’t settled on an account of how this works that I’m entirely satisfied with yet – but I think it’s doable, and it’d look like a pluralistic social contract kind of arrangement, a la Gauthier and Binmore.

Is that one of the strategies you will discuss in the thesis?

On values – I’ve posted a bit about ‘interests’ before, and I’ll have to flesh that out. But I’d suggest that we naturally pursue our ‘intrinsic interests’, which are those interests we have by virtue of the properties we possess – square holes pursue square pegs. Not that we’re necessarily aware of what our intrinsic interests are, and we use many proxies to advance them, such as chasing pleasure or accumulating money etc. And there’s no metaphysical necessity that we ought to pursue our intrinsic interests, but I reckon as a matter of empirical fact, that we do. And it’s not such a bad thing either.

How does valuing money fit into that?

And yeah, this is pretty different from fluffy ‘coherence’ talk. In fact, I’m largely unimpressed by all the armchair moral psychology I’ve seen done by philosophers. So much stuff about ‘rational regret’ and definitions of pleasure/pain or well-being, and a lot of it is just too abstract and too divorced from real, messy, wet psychology.

Philosophers often have a problem explaining why you should care about what they are talking about. Most people think that all philosophy is like that. Hopefully some of this fluffy stuff isn’t such bad philosophy, but I’m not sure exactly what you have in mind in particular.

What people call “rational” isn’t entirely clear at all. Practical rationality often seems to be “true by definition.” At the same time epistemology itself isn’t as clear as I would like. We can’t just add the numbers up to see which argument is “better.” There are intuitions and assumptions involved with most arguments.

You suggested before that you might be an “epistemic anti-realist” and you might reject what I call “mental realism” (mental emergence). All these things are related, so it’s not easy arguing about moral realism (or anything else) unless we can get our analogies straight.

Tim Dean · 29th November 2010 at 9:28 pm

I’ve written here about 1, 2 and 3, although my thesis is focusing on 1 and 3 – and I’m only talking about diversity being ‘good’ in the evolutionary sense, as a descriptive thesis. 2 has been talked about a great deal – there’s little I can add to that discussion except to agree with those who argue that evolution has shaped our moral psychology.

What psychologists call “temperament” has to do with genetically determined personality traits based on how we emotionally or perceptively respond to our environment. Fear of death is probably not genetically determined.

I would argue – and twin studies back me up – that genes influence our personality and temperament, and personality and temperament influence our moral attitudes. That doesn’t mean they fully determine them; environment contributes a lot, but not everything.

Eg, fear of death isn’t genetically determined, but the size and intensity of one’s fear response to mortality could be, and that would contribute to someone’s fear of death which, in turn, contributes to their political attitudes.

whether or not it makes sense to “force” someone to do something, manipulate them, or persuade them.

I’m also interested in this, and my feeling is that we have two sources of motivation to be ‘good’: internal and external. Internally we have our moral sense as well as consciously held beliefs, but these sit in tension with our self-interested concerns. Fostering internal motivation is important to any moral system, but it’s not enough to guarantee ‘good’ behaviour. External motivation comes from others in the form of approval/disapproval, punishment and communicating moral beliefs.

To some degree, these influence internal motivation (fear of punishment; fear of moralisers’ judgement), but they also directly steer behaviour, like locking up a prisoner or demanding restitution.

Is it justified? Depends on the normative system you subscribe to. For mine, you can justify it through a social contract arrangement – we all want to maximise our interests, and the best way to do so is to give up some freedoms and in return others agree not to harm our interests. It’s in all of our interests to maintain this situation, whether through persuading others to behave or through punishing those who misbehave.

I could imagine that empathy makes sense for a moral realist because other people really do matter, but an anti-realist might rationally prefer to lose what little empathy they have to avoid the negative emotions involved. It might even be irrational for an anti-realist to want empathy.

Not sure why an anti-realist would want to abandon empathy; it’s one of the most important internal motivators. Sure, it’s fallible – but it’s a nifty heuristic that often encourages ‘good’ behaviour, so it’s worth keeping, but not without condition. In the same way our evolved sweet tooth can be harmful to us today, we need to be wary of our evolved faculties and try to mitigate them sometimes. But I’m more concerned about things like in-group preference than empathy.

By ‘rational’ I just mean the plain vanilla definition of instrumental rationality. I’m not talking about psychological ‘rationality’.

Also, it’s normal for us not to care about certain interests of others. We don’t care that others want to count grass in their spare time.

Morality is about fostering prosocial behaviour and preventing anti-social behaviour, not demanding cooperation. But it might be considered unfair if someone receives a benefit from others and doesn’t reciprocate after agreeing to do so.

Is that one of the strategies you will discuss in the thesis?

I probably won’t discuss social contract theories, except to give a nod to Mackie’s interpretation of what morality is for, and suggest that others have ventured to elaborate on that, such as Gauthier and Binmore. But only to flag that I don’t think I’m going down a dead end.

How does valuing money fit into that?

Valuing money is an instrumental interest – and I’d suggest someone who values it too much will probably end up compromising on one or more of their other interests.

You suggested before that you might be an “epistemic anti-realist” and you might reject what I call “mental realism” (mental emergence). All these things are related, so it’s not easy arguing about moral realism (or anything else) unless we can get our analogies straight.

You’re right – these things are all linked, and we probably need to establish our foundations etc before leaping into ethics. In short, I’m a pragmatist pluralist empiricist with a pinch of Madhyamaka, if that means anything.

I’m a realist in the sense that I believe there’s a world, and there is a way the world is, but I don’t believe there are real types or natural kinds etc. Instead we abstract the fuzzy concrete world into chunks that are useful for interacting with the world. But no one coherent and consistent set of abstractions will exhaustively describe the world. And I think the obsession with knowledge-that and propositions in epistemology is wasting a lot of time.

See my posts on cartography and metaphysics, and the one on knowledge-how for a sample of my metaphysical and epistemological views.

James Gray · 30th November 2010 at 8:55 am

I’m also interested in this, and my feeling is that we have two sources of motivation to be ‘good’: internal and external. Internally we have our moral sense as well as consciously held beliefs, but these sit in tension with our self-interested concerns. Fostering internal motivation is important to any moral system, but it’s not enough to guarantee ‘good’ behaviour. External motivation comes from others in the form of approval/disapproval, punishment and communicating moral beliefs.

A major concern is why anyone would foster internal motivation to be moral.

Is it justified? Depends on the normative system you subscribe to. For mine, you can justify it through a social contract arrangement – we all want to maximise our interests, and the best way to do so is to give up some freedoms and in return others agree not to harm our interests. It’s in all of our interests to maintain this situation, whether through persuading others to behave or through punishing those who misbehave.

The word “justified” here has to do with why it makes sense for anyone to try to be a good person rather than just maximize their own personal happiness. Those are two different things. They might overlap sometimes but not always.

Politicians, CEOs, and people in the mafia could probably live a rich family life (or personal life) and still hurt people (and have little fear of personal risk). Those tend to be the most “evil” people we would rather not exist.

Not sure why an anti-realist would want to abandon empathy; it’s one of the most important internal motivators.

I already explained why. Empathy is painful. A CEO of a company might abandon empathy because he might have to make decisions that hurt others to keep his job. The president of the USA might not want empathy because he might have to make decisions that will get people killed. There are less extreme possibilities within the lives of “ordinary” people. Sometimes people get hurt, but we might rather not think about it or feel “bad” about it. Life is hard enough as it is. Why make ourselves less happy by caring about strangers?

Sure, it’s fallible – but it’s a nifty heuristic that often encourages ‘good’ behaviour, so it’s worth keeping, but not without condition.

We might want other people to be “good” but why should anyone else want to be good rather than just a rational egoist?

By ‘rational’ I just mean the plain vanilla definition of instrumental rationality. I’m not talking about psychological ‘rationality’.

So is the assumption that everyone is nothing other than a rational egoist? If so, how could morality trump one’s self interest?

Valuing money is an instrumental interest – and I’d suggest someone who values it too much will probably end up compromising on one or more of their other interests.

It’s not just an instrumental interest for all people. Are you saying that people “should” treat it as such? How can you justify such a claim?

You’re right – these things are all linked, and we probably need to establish our foundations etc before leaping into ethics. In short, I’m a pragmatist pluralist empiricist with a pinch of Madhyamaka, if that means anything.

I’m a realist in the sense that I believe there’s a world, and there is a way the world is, but I don’t believe there are real types or natural kinds etc. Instead we abstract the fuzzy concrete world into chunks that are useful for interacting with the world. But no one coherent and consistent set of abstractions will exhaustively describe the world. And I think the obsession with knowledge-that and propositions in epistemology is wasting a lot of time.

See my posts on cartography and metaphysics, and the one on knowledge-how for a sample of my metaphysical and epistemological views.

I read your posts and don’t have a strong objection to them, but I don’t see how what you say is incompatible with moral realism, epistemic realism, mental realism, etc.

James Gray · 30th November 2010 at 9:15 am

You said you mainly will discuss 1 and 3:

1. There are moral strategies, and this diversity can be a good thing.
3. There are some unique biological differences in people that have an impact on their moral beliefs.

Could you expand a little about what you will want to say about these topics? Do you just want to prove that they are true?

Tim Dean · 30th November 2010 at 5:23 pm

A major concern is why anyone would foster internal motivation to be moral.

If I agree that it’s in my long term interests to be social, then it’s in my interests to behave morally, so it’s worthwhile embracing internal motivation.

As for someone who realises they’d be worse off by behaving morally – i.e. they might benefit more from defecting than cooperating, and they might be strong enough to get away with it in perpetuity – what might motivate them to be internally motivated? Well, nothing. (That’s why I don’t trust Superman.)

But that doesn’t make their behaviour morally justifiable (a la Gyges in the Republic). It just means others would be inclined to find a way to bend this person into being moral through external motivation. And we’d hope that we can bend their will into being moral.

It’s important to note that, because I don’t think morality has any potency in terms of intrinsically overriding internal motivation, it is ultimately up to all of us to make morality work. There will likely always be people who will benefit from not being moral, so there will always be reason for others to want to band together to try to enforce it on them.

Empathy is painful.

Yes, often it is. Often our heuristics encourage behaviour that is counter-productive to our long term interests. That’s why the developed world is undergoing an obesity epidemic. Thankfully we have reason to help steer our behaviour in addition to our emotions. That doesn’t mean a president who sends troops to their death shouldn’t feel bad about the decision. They should. Just doesn’t mean they should back away from that decision if it’s in their nation’s long term interests. And most tough decisions are a trade-off of some sort.

We might want other people to be “good” but why should anyone else want to be good rather than just a rational egoist?

Because if I’m a rational egoist, I’ll probably either behave morally, because social and cooperative behaviour are in my interests (in which case internal motivation is a cheap route to behaving well), or I’ll behave immorally and be rebuked by those around me and persuaded to be moral.

So is the assumption that everyone is nothing other than a rational egoist? If so, how could morality trump one’s self interest?

I meant the opposite. There’s ‘rational’, defined in decision theory terms. So I can describe an action as being rational or not at the ultimate level of explanation. That has little to do with the psychological processes people use, which are messy, and are at the proximate level of explanation. ‘Psychological rationality’ is the capacity for conscious reflective thought, sometimes rational. People are capable of psychological rationality, but few (if any) are truly ‘rational’.
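
If it helps, here’s a toy sketch of ‘rational’ in that plain decision-theory sense – the actions, probabilities and utilities below are invented purely for illustration, not anyone’s actual psychology:

```python
# Instrumental rationality in the decision-theory sense: given beliefs
# (probabilities over outcomes) and desires (utilities over outcomes),
# the 'rational' action is simply the one that maximises expected
# utility. All numbers are invented for illustration.

def expected_utility(outcomes):
    """outcomes: a list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

actions = {
    "cooperate": [(0.9, 5), (0.1, -10)],  # usually a modest gain, rarely exploited
    "defect":    [(0.2, 20), (0.8, -5)],  # occasionally a big win, usually punished
}

for name, outcomes in actions.items():
    print(name, expected_utility(outcomes))  # cooperate 3.5, defect 0.0

best = max(actions, key=lambda name: expected_utility(actions[name]))
print("instrumentally rational choice:", best)  # cooperate
```

Note the calculation says nothing about the messy proximate psychology people actually use to decide – that’s the point of the distinction between the two levels of explanation.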

And morality doesn’t trump ultimate self-interest. But it’s usually in my ultimate self-interest to be moral.

I think this is a real sticking point between us. I don’t consider morality to be the be-all and end-all of decision making. Morality can be overridden by other things – or it might not be relevant in many situations.

Morality is ultimately about social interaction. There are lots of things that advance my interests, keep me alive, help reproduction and enable me to choose which pair of shoes I should buy etc that are not moral.

Morality guides social interaction, and it’s a good idea to be moral because it will more often than not advance my interests to do so. If there are situations where it maximises my interests not to be moral, then other people probably won’t like that, and they’ll give me a kick in the pants. Not far from Hobbes’s account, really.

If I was Superman, I could behave however I want. No-one could stop me. But that wouldn’t make my behaviour moral.

It’s not just an instrumental interest for all people. Are you saying that people “should” treat it as such? How can you justify such a claim?

Some people do pursue money as an end in itself. I think these people are misguided. For them money is a psychological interest – it’s what they think they want. But they’re really pursuing their intrinsic interests, and by pursuing money as an end they will probably end up satisfying fewer of their intrinsic interests than by taking another approach. There’s nothing metaphysical that says they shouldn’t pursue money as an end in itself, but if they want to satisfy their intrinsic interests, then it’s probably a poor strategy. Note, this isn’t a moral issue (unless their actions impact on others).

Could you expand a little about what you will want to say about these topics? Do you just want to prove that they are true?

I could go on about these… but basically I’m trying to lend some insight into why we see so much apparently intractable moral diversity in the world. I’m using psychology and evolution to help provide that answer. I think understanding the nature of psychology and the source of disagreement will be important for normative ethics – any normative system has to actually work, which means it needs to be compatible with our psychology, so understanding that psychology is beneficial.

Hope that clarifies things.

James Gray · 30th November 2010 at 6:05 pm

Because if I’m a rational egoist, I’ll probably either behave morally, because social and cooperative behaviour are in my interests (in which case internal motivation is a cheap route to behaving well), or I’ll behave immorally and be rebuked by those around me and persuaded to be moral.

This misses the point. You are acting as if morality is always in one’s rational self-interest. Consider Hobbes’s concern that everyone will want to break the social contract whenever they can get away with it. He thought that we should live under a brutal dictatorship to assure us that anyone who breaks the rules will be punished, but he also “bit the bullet” and decided that the dictator is “above the law.”

The fact that moral behavior is usually compatible with rational self-interest is uncontroversial, but the most evil people know that. All people act morally for the most part. They only do horrifically immoral actions every once in a while.

I think this is a real sticking point between us. I don’t consider morality to be the be-all and end-all of decision making. Morality can be overridden by other things – or it might not be relevant in many situations.

I don’t think morality is the be all and end all of decision making. I also think it can be irrelevant. I just think that morality can be truly overriding in a way that you don’t.

Morality is ultimately about social interaction. There are lots of things that advance my interests, keep me alive, help reproduction and enable me to choose which pair of shoes I should buy etc that are not moral.

I don’t see the relevance of what you are saying here. You define morality that way, but that doesn’t mean the definition is correct.

Morality guides social interaction, and it’s a good idea to be moral because it will more often than not advance my interests to do so. If there are situations where it maximises my interests not to be moral, then other people probably won’t like that, and they’ll give me a kick in the pants. Not far from Hobbes’s account, really.

I find this reasoning strange. The fact that moral behavior is generally a good idea and should be the default says nothing about whether or not I should be a good person (willing to act against my self-interest), or whether I should try to break the rules to benefit myself whenever I can get away with it. A person can ask me, “Should I be moral?” and I can answer, “No. Act in whatever way advances your self-interest, and that will often coincide with what is moral.” In other words no one should act morally because it’s moral. They should behave in a way that looks moral when it serves self-interest, but not when it doesn’t. That’s exactly what all the most evil people in the world do. The smart evil people often “get away with it” and know how to stay out of trouble.

I think we both agree that as a society we can learn how to benefit better from cooperation and do so, but we disagree about whether intelligent people can find a good reason to be moral even when it’s not in their self-interest. I think that evil people can learn to regulate their own behavior even when no one is watching and even when they can get away with harming others for their own benefit. (I’m talking about real self-benefit, not behavior in the hope of benefiting oneself but likely to lead to punishment.)

When I want to develop myself as a better person, I want to learn to refuse to hurt others, even when it would harm me to do so (and benefit me to harm them). I don’t merely want to learn how to get more out of cooperation with others.

In your view I don’t think morality can ever rationally override self-interest, but in mine it can. This can have implications in the decisions we make while no one is watching, how we try to become more motivated to be moral, foster empathy, and so on.

Of course, none of this proves that you are wrong. It just means that the most intelligent people (who embrace anti-realism) will only think of morality as a way to benefit from cooperation and will be willing to harm others.

Tim Dean · 30th November 2010 at 6:32 pm

I think I understand your position, but let me ask, for the sake of clarity: why should someone be moral?

Why should they be moral even when it’s not in their self-interest?

Why should an ‘evil’ person not be ‘evil’, even if they can get away with it?

I’d be interested in the answers to these from your realist perspective. Might help me get a grasp on the core of our different views.

James Gray · 30th November 2010 at 7:27 pm

Some people do pursue money as an end in itself. I think these people are misguided. For them money is a psychological interest – it’s what they think they want. But they’re really pursuing their intrinsic interests, and by pursuing money as an end they will probably end up satisfying fewer of their intrinsic interests than by taking another approach. There’s nothing metaphysical that says they shouldn’t pursue money as an end in itself, but if they want to satisfy their intrinsic interests, then it’s probably a poor strategy. Note, this isn’t a moral issue (unless their actions impact on others).

I don’t see why empathy is any different from valuing money “inappropriately” from your point of view. Empathy is when we value the feelings of others for their own sake — but they don’t really have value except instrumentally. We shouldn’t want to help strangers unless it will be mutually beneficial.

Also, love seems just as problematic. People we love are people we want to see do well for their own sake, but we should only care about them insofar as it is mutually beneficial.

If empathy and love are “too difficult to do without,” then we could still decide to neglect this part of ourselves to have less empathy and love rather than more. We could instead concentrate on the most effective cooperation and acting only out of rational self-interest without valuing various things for their own sake (other than certain intrinsic desires).

I think I understand your position, but let me ask, for the sake of clarity: why should someone be moral?

Why should they be moral even when it’s not in their self-interest?

Why should an ‘evil’ person not be ‘evil’, even if they can get away with it?

I’d be interested in the answers to these from your realist perspective. Might help me get a grasp on the core of our different views.

I think that I discuss this sort of thing in quite a bit of detail in my argument for moral realism. It was there that I considered why exactly pain seems to have intrinsic value and how that seems to give me a reason to care about other people’s pain. Nonetheless, here is how I see the issue right now:

I care about making the world a better place, helping people, benefiting them, and so on. I think that promoting intrinsic value is a way of doing so that “really matters.” The goodness in the world can be increased and that’s something I want to do. I think these desires of mine are rational.

An evil person is likely to think that their own happiness is all that matters. They care about family and friends, but not about strangers. They suspect that the people they care for have no “real worth” but perhaps it is still important to their happiness to have loving relationships. An evil person is unlikely to think that strangers have equal value to themselves or are “highly important.” It’s much easier for an evil person to let people be harmed who are deemed “unimportant” than “very important.” This is why we vilify and dehumanize our enemies in propaganda to gain support for wars and so on. The mere realization that our enemies are hard working and caring people like ourselves makes it almost impossible not to care for them to some extent.

I don’t fully understand “moral rationality” but the main idea is that it “makes sense” for us to want to promote intrinsic goodness and avoid intrinsic badness. (Or respect/love whatever has intrinsic value.)

It could be that intrinsic value is something like a desire independent reason for action. If something is intrinsically good (e.g. happiness), then it is rational to want it to exist (or to do what we can to attain the motivation to cause it to exist). That’s not to say that we can act without motivation. The actual psychology involved is simply not fully understood at this point in time.

Pleasure seems like a good candidate for a desire-independent reason because we know that pleasure is worthy of desire. Once we know what pleasure is and that something is pleasurable (and bad in no way), we will probably desire it — and this seems fully rational considering what it’s like to experience pleasure.

Consider that pain seems bad. I want to avoid pain — but I also want to help other people avoid pain. Their interest in avoiding pain is a reason for my action to help them. When I think about pain being bad, I don’t think it’s only bad when I experience it, but that it’s also bad when other people do (because they experience it in roughly the same way.) Their pain is just as real as mine and it is “bad” no matter where it is found (in what person’s body or mind).

Also consider that when I take other people’s pain as being a reason to act, it doesn’t seem irrational for me to do so. It seems to “make sense.” It’s hard to know how exactly rationality works, but we have various “platitudes” (highly plausible common sense beliefs) about it. It might be that intrinsic value does exist but it’s still irrational for me to want to help others, but that seems pretty silly given our ordinary understanding of the world.

It might be that moral rationality is a fabrication, but in that case our moral commitments (to promote intrinsic value) might still be neutral with respect to rationality. There might be no more reason to favor promoting self-interest than the interest of others, but moral realists could commit themselves to being good (motivated to be good at the expense of self-interest) without being “irrational.”

Now let’s consider practical reason under anti-realism. It might be that promoting one’s self-interest isn’t “really rational” either. It might be that practical reason is a fabrication created out of our own interests. In that case it’s still not clear that we could rationally (non-irrationally) promote the interests of others because their interests might be totally irrelevant to the universe and ourselves. We can’t feel the pain of others, so it doesn’t exist as far as we are concerned. However, our own pain is very real to us, so it wouldn’t be irrational to care about our own pain.

Tim Dean · 1st December 2010 at 10:50 pm

I don’t see why empathy is any different from valuing money “inappropriately” from your point of view. Empathy is when we value the feelings of others for their own sake — but they don’t really have value except instrumentally. We shouldn’t want to help strangers unless it will be mutually beneficial.

Yes, they are similar. They’re both instrumentally useful. I can’t see how it could be any other way. One can be too empathetic, for example when reluctant to punish, and that leads to greater harm in the long run.

Someone who values money as an end in itself, like someone who values empathy as an end in itself, is likely to run into problems. That doesn’t mean we need to abandon either. Nor that we should consciously relegate them to instrumental values – ‘I feel empathy for that person, but I know I *really* care about them because of the principle of reciprocity’. Empathy, like money, is a useful proximate mechanism for making good things happen. As with money, empathy has an ‘invisible hand’, and it needn’t be made visible to work. In fact, making it visible might be harmful.

We could instead concentrate on the most effective cooperation and acting only out of rational self-interest without valuing various things for their own sake (other than certain intrinsic desires).

At the end of the day, I don’t think we can ever think like rational agents, even if we might want to behave like them. The proximate mechanisms and heuristics are necessary and useful, if prone to error. The very reason I’m interested in moral psychology is because I think any normative system – no matter how it’s constructed in abstraction – needs to work in the concrete, and that means being compatible with our psychology.

I don’t fully understand “moral rationality” but the main idea is that it “makes sense” for us to want to promote intrinsic goodness and avoid intrinsic badness. (Or respect/love whatever has intrinsic value.)

I don’t see the difference between fictionalism and realism in this regard. If the notion of intrinsic goodness can motivate us to pursue it, it might not matter that it doesn’t actually exist.

It seems to me the only way a realist can justify the further step from fictionalism is to provide some evidence or reason to support the existence of intrinsic values. Make ethics like a science that pursues moral facts. But, like Mackie and others, I just don’t see any way the moral facts or intrinsic value could be ‘real’.

Pleasure seems like a good candidate for a desire-independent reason because we know that pleasure is worthy of desire. Once we know what pleasure is and that something is pleasurable (and bad in no way), we will probably desire it — and this seems fully rational considering what it’s like to experience pleasure.

I’m not sure I understand the idea that “pleasure is worthy of desire”. Seems like the Error of Inversion (I just made that up – based on Dennett’s ‘strange inversion’).

I’d say that something is considered pleasurable because we desire it, not the other way around. That’s what biology tells us that we do. To use Dennett’s analogy, chocolate isn’t intrinsically sweet; it doesn’t possess the property ‘sweetness’ (at least, not as a primary property). It’s sweet because we have evolved an ‘attract’ reaction towards it. That hardly seems to make it ‘worthy’ of desiring it, in any rational sense.

Otherwise, I generally am wary of linking pleasure and pain to morality, as I’ve written about before. They’re evolved heuristics that attract/repel us from stimuli that benefit/harm our biology. That seems a problematic foundation for morality.

On another note, I did believe in intrinsic value not that long ago. It was the foundation of my moral thinking since my undergrad years. I’ve always been keen on ‘intrinsic’ stuff – things stuff has by virtue of being the way it is (possibly the only thing about Heidegger that influenced me).

I talked about ‘intrinsic knowledge’, which is the knowledge-how someone has by virtue of the properties they possess (I ‘know’ how to heal). And I talked about ‘intrinsic value’, such that my body ‘values’ integrity and avoiding damage, and that’s where our morals come from: trying to behave in a way that promotes these intrinsic values.

But, a few years ago now, I changed my mind. It actually happened all of a sudden. Can’t remember the catalyst – it was probably reading some Buddhism or Zen or something. And *bam*, intrinsic value went out the window. I couldn’t see how an object could possess a ‘value’ property.

Reading Mackie just gave me a better vocabulary to express what I did believe: error theory. But error theory is like atheism – it’s a negative thesis, and it doesn’t do much to help build a positive moral system (although Mackie does spend most of Ethics talking about how to do so).

Anyway, just thought I’d add a bit of context to my thoughts. I can understand how someone might believe in intrinsic value, but for me, I blew that fuse some years ago now.

James Gray · 2nd December 2010 at 1:07 pm

Yes, they are similar. They’re both instrumentally useful. I can’t see how it could be any other way. One can be too empathetic, for example when reluctant to punish, and that leads to greater harm in the long run.

Isn’t empathy what happens when we value other people for their own sake? If so, wouldn’t empathy be wrong just like valuing money for its own sake?

At the end of the day, I don’t think we can ever think like rational agents, even if we might want to behave like them. The proximate mechanisms and heuristics are necessary and useful, if prone to error. The very reason I’m interested in moral psychology is because I think any normative system – no matter how it’s constructed in abstraction – needs to work in the concrete, and that means being compatible with our psychology.

This seems to miss my point entirely. Do you want to say that irrational behavior is beyond criticism and we shouldn’t worry about it?

I don’t see the difference between fictionalism and realism in this regard. If the notion of intrinsic goodness can motivate us to pursue it, it might not matter that it doesn’t actually exist.

It’s irrational or stupid to value money for itself, and it would be irrational or stupid to value something based on its intrinsic value if intrinsic values don’t exist. Yes, we can delude ourselves and pretend, but why do that?

It seems to me the only way a realist can justify the further step from fictionalism is to provide some evidence or reason to support the existence of intrinsic values. Make ethics like a science that pursues moral facts. But, like Mackie and others, I just don’t see any way the moral facts or intrinsic value could be ‘real’.

I’ve presented multiple arguments in support of intrinsic value and against the objections to them. I’m not sure exactly what your objections are. My own argument is that the best way to understand our moral experiences of pain is to think that pain is intrinsically bad. It’s possible that I’m wrong, but that doesn’t mean there is “no reason” to be a moral realist. It just means that you don’t find those reasons to be adequate. You either think the arguments are completely without merit or you must admit that my interpretation of pain experience does have some plausibility.

I’m not sure I understand the idea that “pleasure is worthy of desire”. Seems like the Error of Inversion (I just made that up – based on Dennett’s ‘strange inversion’).

I’d say that something is considered pleasurable because we desire it, not the other way around. That’s what biology tells us that we do. To use Dennett’s analogy, chocolate isn’t intrinsically sweet; it doesn’t possess the property ‘sweetness’ (at least, not as a primary property). It’s sweet because we have evolved an ‘attract’ reaction towards it. That hardly seems to make it ‘worthy’ of desiring it, in any rational sense.

How exactly does biology tell us that? Imagine that pain didn’t feel bad. It would then no longer really be “pain” at all. Would you still dislike it because it’s biologically determined that you dislike it?

Chocolate is an object out in the world. The experience of eating chocolate is something else. To value this experience is not the same thing as valuing chocolate. To say that chocolate is “intrinsically sweet” is different than saying that the experience of eating chocolate is an experience of sweetness.

The analogy is not a good one. We dislike pain because of what it’s like to experience it. Pain doesn’t feel bad because we dislike it. I think it is you who are presenting me with a “strange inversion.”

Otherwise, I generally am wary of linking pleasure and pain to morality, as I’ve written about before. They’re evolved heuristics that attract/repel us from stimuli that benefit/harm our biology. That seems a problematic foundation for morality.

So torturing people isn’t wrong if it doesn’t cause damage to the body? That is obviously false. Pain is quite relevant to morality. I’m not saying that pain is the only thing that matters, but it is at least one thing that matters.

On another note, I did believe in intrinsic value not that long ago. It was the foundation of my moral thinking since my undergrad years. I’ve always been keen on ‘intrinsic’ stuff – things stuff has by virtue of being the way it is (possibly the only thing about Heidegger that influenced me).

Where did Heidegger talk about such things?

I talked about ‘intrinsic knowledge’, which is the knowledge-how someone has by virtue of the properties they possess (I ‘know’ how to heal). And I talked about ‘intrinsic value’, such that my body ‘values’ integrity and avoiding damage, and that’s where our morals come from: trying to behave in a way that promotes these intrinsic values.

But, a few years ago now, I changed my mind. It actually happened all of a sudden. Can’t remember the catalyst – it was probably reading some Buddhism or Zen or something. And *bam*, intrinsic value went out the window. I couldn’t see how an object could possess a ‘value’ property.

We can know many things even if we can’t articulate how and even if we can’t understand how it could be true. Powerful skepticism might make me wonder if seeing my hand is a good reason to believe I have a hand. I think skepticism about intrinsic value is much like that. I experience that some things are bad, but to dismiss these experiences seems wrong just like dismissing my experience that I have a hand.

If you actually read what philosophers have to say about intrinsic values and moral realism, you will probably be somewhat disappointed, but you might also be disappointed with reasons philosophers give for empirical knowledge in general. It’s mysterious how mathematics constrains reality, how we know observation can be a reliable way to attain knowledge, how we know that at least certain experiences of our own thoughts and consciousness can be reliable, and so on. This is why I discuss theoretical virtues, self-evidence, coherence, and common sense assumptions. You think that we should concentrate less on “knowledge” and more on pragmatic evidence. To concentrate on various forms of justification rather than knowledge enables a more modest form of reasoning, but I see no reason to think that intrinsic values should be rejected considering our understanding of epistemology and justification.

Reading Mackie just gave me a better vocabulary to express what I did believe: error theory. But error theory is like atheism – it’s a negative thesis, and it doesn’t do much to help build a positive moral system (although Mackie does spend most of Ethics talking about how to do so).

Anyway, just thought I’d add a bit of context to my thoughts. I can understand how someone might believe in intrinsic value, but for me, I blew that fuse some years ago now.

Skepticism is often an appropriate response, and Mackie’s argument from queerness is his own objection based on his demand for a certain amount of justification. Some people are more demanding than others and demand more “reason” (justification) to believe something before accepting it. It’s not entirely clear how much justification (or how strong of a justification) we should demand before rejecting a proposition, and it might be that some disagreement is rational concerning such demands. However, I don’t think moral realism is entirely without merit. I think it is false to claim that there is absolutely no reason (justification) to accept it. There is. The problem is whether or not the evidence is “sufficient” to warrant rational belief – or even sufficient to require belief.

Some people have claimed that there is no reason to be a theist. It’s possible that all arguments for theism are fallacious. I don’t think moral realism is like that. We should see whether or not moral realism is analogous to other rational beliefs. It does seem analogous to the view that we have minds (that actually do something rather than merely brain states) and epistemic realism. I think it would be silly to say that these views are entirely without merit, but you have admitted that you might be skeptical about these things.

Consider whether or not we can “experience God.” We have good reason to think this experience is deceptive and requires one specific interpretation of an experience. However, pleasure and pain don’t seem like they can be deceptive. How we experience something is what makes it pleasurable or painful. There’s nothing to be deceived about.

James Gray · 2nd December 2010 at 1:13 pm

I have also spent some time studying Buddhism and one of my Buddhist friends argued that intrinsic values don’t exist because “desire (or attachment) is the cause of all suffering” and such. I have discussed this issue in the past. I don’t think suffering is nothing other than a problem with desires, but even then it’s not clear that pain and suffering aren’t intrinsically bad and that nothing is intrinsically good.

I think Buddhism should be taken modestly. The fact that attachments can lead to suffering doesn’t mean that suffering isn’t bad. We can try to live without attachments (of the inappropriate kind) to live a life without (or with less) suffering no matter what the ultimate reality of such things are. The Buddhist can be a pragmatist and simply not concern herself with meta-ethics. The few Buddhists who want to be moral realists can do so without much of a problem.
