Finding Moral Motivation

Published by timdean

What would you think if you met someone who stated convincingly that they believe stealing is wrong, yet you knew that they were prone to theft, and stole in an entirely nonchalant manner?

And when asked, after stealing something, whether they still believe stealing is wrong, they reply emphatically that, indeed, it is. That people shouldn’t steal. Yet they show no inclination to change their stealing behaviour, nor any apparent negative attitude towards their own acts of stealing.

What would you think? Maybe that there was something somehow wrong with them? Or perhaps something wrong with their moral conviction?

It certainly seems something is awry in this situation. For it seems intuitive that in order for someone to state that some act is morally wrong, they must feel some compulsion to behave in accordance with that belief. Surely, they can experience weakness of the will, or they can have conflicting moral proclivities, but they must at least feel some motivation to act in accord with the moral norm, even if that motivation is eventually overwhelmed by other desires.

As such, it seems somehow fundamentally inconsistent for them to say X is wrong and yet be either entirely indifferent to X happening, or for them to do X with indifference. Some even think it’s logically inconsistent, or even logically impossible, for them to say they believe X is wrong and feel indifference towards X.

However, if this notion of ‘internalism’ is true – the notion that moral beliefs entail some motivational component – then it raises a hairy pickle. Namely: how is it that a mere belief can carry with it a built-in motivational compulsion? What strange beliefs moral beliefs would be, were this the case.

After all, other beliefs don’t appear to have built-in motivations towards some action. The belief that 2 + 2 = 4 doesn’t seem to imply any motivation to behave a particular way. Even a belief that a runaway bus is heading straight for you doesn’t have a built-in motivation – at least, not without the addition of a separate desire that it not run you down.

Accounting for moral beliefs, and presumably the moral facts that underlie them, in a way that explains their uniquely motivating character has proven a real challenge – particularly if you don’t believe that moral facts exist (as I don’t – in fact, it’s precisely these queer intrinsically motivating moral facts that I believe can’t exist).

This is the Problem of Internalism.

There are several renditions, and many attempts at a solution, but the core problem is the same: how to account for the apparent intrinsically motivating aspect of moral beliefs?

However, I believe the Problem of Internalism rests on a faulty, or at least incomplete, account of moral psychology – one that is overly laced with pre-empirical assumptions about what morality is and how it works.

And with a few tweaks to our understanding of moral psychology, we can account for the motivating character of moral beliefs, and we can do so without recourse to queer intrinsically motivating moral facts. Here’s how:

The first thing we need to do is tease apart two key types of moral judgement.

The first is intuitive moral judgement. This is the immediate sense of approbation or disapprobation that we feel upon encountering an act, a vignette, a moral debate or any other morally charged situation. It’s fast, it happens automatically and it happens without conscious reflection.

This kind of intuitive moral judgement is primarily an emotional response, and it’s only further down the track that we might reflect on the response and wonder why we reacted that way, and abstract away or reference some moral norm or belief.

Such intuitive moral judgements are intrinsically motivating because they’re intrinsically emotional, and emotions are themselves intrinsically motivating.

Then we have considered moral judgement. This is the calculus we undertake when reflecting on moral norms. It is slower, more effortful and more like other kinds of reasoning. We can think about norms and beliefs, we can trace justifications, and we can form arguments that work from premises towards conclusions.

Moral norms and beliefs exist in the realm of considered moral judgement, and they can guide our behaviour in a number of ways. One is to resolve disputes between conflicting moral and non-moral desires or between conflicting intuitive moral judgements. It can inhibit some potential actions and steer us in new directions, such as when we refrain from taking a wallet someone left behind because we believe stealing is wrong.

Moral norms and considered moral deliberation can also affect our behaviour in a more subtle way by feeding back and influencing the way we see the world and how we apprehend moral situations, thus influencing our intuitive moral judgements.

In fact, this is probably the preferred way of influencing moral behaviour, because it’s fast and automatic. Someone with a well-honed moral worldview will tend to behave more consistently in accord with their moral principles than someone with a stunted moral worldview who must constantly appeal to moral norms and employ considered moral deliberation to direct their behaviour.

But moral beliefs at the level of considered moral deliberation still aren’t intrinsically motivating. There needs to be another step to give them motivating force. And that is a desire to conform with the normative system that the belief or norm belongs to.

Some philosophers have referred simply to the “desire to be moral” (Svavarsdottir, 1999), which is very close to what I’m stating, though slightly more general and ambiguous. I tend to think “normative system” is more specific and more useful than just “moral” in this context.

The whole point of considered moral judgement and normative systems is to help steer behaviour when our more basic empathic and altruistic sentiments break down. Altruism and empathy work by encouraging us to factor the desires of others into our behavioural decision-making process. However, other-interest conflicts with self-interest, and self-interest often wins.

Enter moral norms. A moral normative system will introduce a system of behavioural rules with which we should conform whether our self-interest likes it or not. (Now, ultimately, it might be in our self-interest to conform with such a system, because living in a group where others also conform to such a system prevents them harming us as much as it prevents us harming them – but this fact doesn’t necessarily enter into our proximate, day-to-day behavioural decision making.)

So, what motivation might we have to behave in accordance with these moral norms, particularly if they encourage us to act in opposition to our self-interest? I think we have such motivation for two main reasons:

First is the simple desire to conform to the normative system, as stated above. There’s evidence we have an evolved predisposition towards normative thinking – the creation of, and adherence to, behavioural norms (Stich, 2011). Just watch kids playing and you can see them spontaneously create rules for games, and spontaneously punish those who transgress (while occasionally trying to get away with the odd transgression themselves).

Over time, driven by our innate norm psychology along with plenty of enculturation, we form a strong desire to behave normatively, particularly when it comes to behaviours that concern core social interactions – i.e. to behave in accordance with our culture’s moral normative system.

Now, this doesn’t always work. There are still times when the norms conflict with our self-interest, and we’re tempted to ignore the norm and behave how we want. This is where the second reason for conforming with norms comes in: punishment.

A fundamental part of norm psychology and normative systems is that we tend to punish those who contravene the norms, either physically or through social exclusion or other means. This not only encourages conformity with the norms themselves, but also discourages people from opting out of the normative system entirely – the amoral route doesn’t tend to work in a social world.

And it’s from these two aspects that we generate our desire to be moral or be normative. And it’s this desire that individual moral beliefs and individual moral norms draw upon.

So when I believe that X is wrong, I believe things about X, including that it’s a part of the normative system, and I have a desire to behave in accordance with the normative system, so I desire to conform with my moral belief. The belief itself is motivationally neutral – it picks up its motivation from our desire to be moral.

Although I’d suggest that the belief rapidly acquires a motivational character of its own via its influence on our moral worldview, which in turn influences our intuitive moral judgements.

Now we can better understand the Internalist Problem. When someone says X is wrong, but they’re not motivated to conform with the belief, it might be that they do understand what “X is wrong” means, but they simply fail to acknowledge it’s a part of their normative system, or they aren’t motivated to behave morally. Or perhaps that it’s such a new moral belief that it hasn’t had time to trickle down to influence their moral worldview.

I think it unlikely that many people simply aren’t motivated to behave morally (perhaps psychopaths are an exception – although they may still be motivated by a desire not to be punished), but I do think people often fall into the former category of not considering a particular belief to be a part of their normative system.

As such, I can engage in moral discourse and argument with someone from a different normative system, tangle with their moral beliefs, and even come to hold moral beliefs myself, and yet those beliefs hold no motivational sway over me.

For example, I could spend time studying ancient Roman ethics, debating the moral status of slavery, and forming beliefs about the rightness or wrongness of slavery in that context. But, because I don’t conform to the Roman moral system, and have no associated motivational desire to conform in that respect, the moral belief has no motivational compulsion for me.
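To see the shape of this account at a glance, here’s a deliberately crude toy sketch in Python – purely illustrative, with invented names, and obviously not a serious model of moral psychology. It just encodes the claim above: a moral belief is motivationally inert on its own, and only picks up motivational force when it sits within the normative system I actually conform to and I have the external desire to be moral (or to avoid punishment).

# Toy illustration only: all names and attributes here are invented for this sketch.
from dataclasses import dataclass

@dataclass
class MoralBelief:
    content: str             # e.g. "stealing is wrong"
    normative_system: str    # the normative system the belief belongs to

@dataclass
class Agent:
    own_system: str                  # the normative system the agent conforms to
    desires_to_be_moral: bool = True # external desire to conform to that system
    fears_punishment: bool = True    # back-up motivation: desire not to be punished

    def is_motivated_by(self, belief: MoralBelief) -> bool:
        # The belief itself carries no motivation. Motivation arises only if the
        # belief belongs to the agent's own normative system AND the agent has
        # the desire to be moral (or at least the desire to avoid punishment).
        in_my_system = belief.normative_system == self.own_system
        return in_my_system and (self.desires_to_be_moral or self.fears_punishment)

me = Agent(own_system="my own moral normative system")
stealing = MoralBelief("stealing is wrong", "my own moral normative system")
roman_slavery = MoralBelief("slavery is permissible", "ancient Roman moral system")

print(me.is_motivated_by(stealing))       # True: my belief plus my desire to conform
print(me.is_motivated_by(roman_slavery))  # False: not my normative system, so no motivational pull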

Thus, this account says that a correct appreciation of what it is to hold a moral belief (assuming it’s relevant to the normative system to which the individual conforms) does entail some motivational component. If it doesn’t, that suggests something has broken down psychologically (if not conceptually) along the way. However, the motivation to conform doesn’t come internally from the individual moral belief itself, but externally from a desire to be moral, along with a desire not to be punished.

Moral norms and beliefs also flavour our moral worldview, which yields intrinsically motivating intuitive moral judgements. Finally, we also have our empathy and altruism to fall back on, and these will often motivate moral behaviour, sometimes even contravening the moral norms we believe to be true.

The key thing with this account is that it’s entirely psychological, and it requires the existence of no queer intrinsically motivational moral facts – only plain old desires and a bit of moral reasoning. And that’s no Problem.


5 Comments

Kate King · 12th January 2012 at 8:14 pm

“…particularly if you don’t believe that moral facts exist (as I don’t – in fact, it’s precisely these queer intrinsically motivating moral facts that I believe can’t exist).”

Bravo (apologies – I went to a stage show yesterday and it hasn’t worn off yet). Of course they don’t; morals are completely subjective.
I adore “hairy pickle” – I’m pinching that one!

Nikita Rybak · 13th January 2012 at 12:46 am

I feel like the concept of “different normative system” needs some explanation: your Roman illustration doesn’t fit the original stealing example. Surely, all of us are part of a normative system which says that thieves should go to jail?

From my point of view, the question (and the original example) would benefit from clarification.
If they “stated convincingly” to _you_ that stealing (wife-beating, baby-eating) is wrong, then it can be plain logic. It works much like you explained, but it’s not really their belief and morality isn’t involved at any stage. They just want to avoid unpleasant consequences.

If they state it to themselves, it gets more interesting. I believe it’s impossible for those two (doing X and condemning X) to coexist without friction (short of split personality).
A common example of such conflict would be a person who notices homosexual tendencies within himself, but knows that it’s “wrong”. Another – a teenager living in a conservative society and having to deal with his/her sexuality. Abusive behaviour and alcoholism go the same way. I think everyone could think of a few examples from their own experience.

There are dozens of defence mechanisms people use to deal with conflicts like this, conflicts within themselves. On the other hand, I can’t think of such a situation without any inner friction, as if the conflict doesn’t exist.

JW Gray · 13th January 2012 at 7:40 pm

Something should be said about empathy and epistemic normative stuff. We think we should form beliefs in certain ways, we should believe something that’s sufficiently justified, and so on.

Mark Sloan · 14th January 2012 at 4:18 am

Tim, as you point out, the idea that there are two kinds of moral motivations has good explanatory power. For example, it explains why mentally normal people can, without guilt, sometimes act in ways they intellectually agree are immoral.

You have defined these two kinds of moral motivations as intuitive moral judgment and considered moral judgment. I have no difficulty understanding what you mean, but some may find it confusing to refer to the two motivations as “judgments”. Perhaps I have missed something though.

Alternative nomenclature for these motivations might include:

Emotional motivation from our moral intuitions (a bit awkward) – directly responding to present emotions or desires – moral intuitions are based in biology but their triggering circumstance and intensity are strongly shaped by culture and experience

Instrumental motivation – motivation to do something in the present, which is not necessarily a desire, in service of future rewards and avoiding penalties such as 1) avoiding punishment or 2) acting morally due to an intellectual belief that doing so will, almost always, increase your sustainable well-being even when in the moment of decision you expect acting morally will be against your self-interest. Instrumental motivations are based on predictions of future rewards and penalties.

Due to another of our moral biology adaptations, practicing considered moral judgments will automatically, for mentally normal people, be incorporated over time into our moral intuitions and become intuitive moral judgments. (I don’t know that there is a recognized name for this biological adaptation; there should be.) Ideally, almost all an individual’s moral judgments will eventually be intuitive moral judgments. There may only be a rare need, such as when there is a disagreement about what is moral, for considered moral judgments.

Certainly, motivation for altruistic moral behavior “requires the existence of no queer intrinsically motivational moral facts, only plain old desires and a bit of moral reasoning”.

Mainstream moral philosophy interprets the above state of affairs as being consistent with “there are no moral facts”.

I object to this nomenclature’s definition of “moral facts” on the grounds that it is misleading and not useful.

I think we agree that it is true that the underlying function (the primary reason they exist) of our moral emotions that motivate altruism and enforced cultural norms (moral standards) is something like: “to increase the benefits of cooperation in groups by altruistic acts”. If this is a fact, then it is a moral fact – perhaps the only moral fact there ever has been or ever will be (aside from trivial descriptive moral facts).

It may be very difficult to convince someone, who has bought into Ruse’s awful “morality is an illusion!” statements, that morality is actually an evolutionary adaptation that is as factually real and objective as mathematics.

Mark Sloan · 14th January 2012 at 4:37 am

Instead of claiming “There are no moral facts”, a less misleading and more useful definition of “moral fact” would enable us to say “There is just one moral fact”.
