The Problem with Moral Philosophy (and Moral Psychology)

Published by timdean

Moral philosophy is obsessed with moral judgements. As is moral psychology. And this is a problem.

Moral philosophy cares about moral judgements because it wants to figure out how to make the right ones, and how to know they’re right. Metaethicists are interested in what moral judgements themselves are.

Moral psychology cares about moral judgements because it wants to figure out the thought processes by which people arrive at them.

But the problem is that morality isn’t just about judgements; it’s about behaviour. And it seems to me there’s a dearth of research showing how someone goes from moral judgement X to corresponding behaviour Y.

Or why two individuals might go from the same moral judgement X to differing behaviours Y and Z.

Or why someone can hold moral judgement X to be correct, and not perform the corresponding behaviour Y.

Or why someone could hold moral judgement X to be correct, and not know how to behave.

I could go on. But the moral of the story is clear (if you’ll excuse the pun): there is a gulf between judgement and action, and that gulf is tremendously important if moral philosophers and moral psychologists want to develop a complete picture not only of what constitutes good behaviour, but of how to bring it about.

My concern on the philosophy side is that even if we have a foolproof moral theory that allows us to arrive at the correct moral judgement in any situation (a scenario I find implausible, but let’s assume it’s possible), we might still fail to account for the way behaviour does or doesn’t spring from someone holding that correct judgement. We might even already have the answer, but we’ve failed to connect the dots so people are none the wiser in terms of how to behave.

There’s also the issue of moral decision making. A complete moral theory might provide the tools to evaluate a particular action or outcome and determine whether it was right/wrong or good/bad, but it might not provide the tools for individuals to make moral judgements on the fly, as they’ll need to when it comes to responding to the world around them.

As far as I can see, the vast majority of moral philosophy and moral psychology – the former with its emphasis on justification, the latter with its trolley dilemmas – has neglected the final crucial step from judgement to behaviour. Even those philosophers who have looked at practical reason and action theory still talk about both theoretically; they talk about how to understand an action or how to judge its merits after the fact.

No-one I know of gives a thorough philosophical and psychological account of how, given a certain theory or belief, to reliably turn that into behaviour on a day-to-day basis.

The closest I can think of is virtue ethicists who seek to cultivate the kind of disposition that will produce the desired behaviours. But even then, virtue ethicists need a comprehensive account of psychology to explain how certain characteristics lead to certain behaviours and why these behaviours are good.

I know it’s all too easy to get sucked into the current debates. But this is dangerous when the current debates focus on only a fraction of the whole picture, leaving crucially important issues untouched. Particularly when those issues are as important as how to encourage people to behave morally.

My suggested solution: don’t necessarily stop philosophers or psychologists from doing their present research, but encourage an acknowledgement that morality is about behaviour, and that a complete moral theory is one that enables people to behave in accordance with the theory. This might inspire more research into that final leap. Because any moral theory that doesn’t include behaviour is not a complete moral theory.


5 Comments

James Gray · 7th December 2010 at 8:52 pm

Some examples and possible solutions might also be helpful to communicate exactly what you are saying here. If I judge that giving to charity A is the best thing I can do, then I don’t think it’s mysterious when I actually do so.

I must admit that actual motivation to act is a neglected issue (though I haven’t spent enough time studying moral psychology). The Stoics obviously thought they had an answer to this question and developed their “way of life” and “meditative/spiritual exercises.” I assume that there are alternatives to Stoicism, but I don’t know a lot about them.

I don’t think the most practical elements of philosophy require absolute theoretical knowledge. The Stoics had a theory of the psychology that underlies motivation, but trial and error is enough to be “helpful” and do some good for us. I think there are wise people (at least wiser than I am), and they don’t need all the theory to put some moral behavior into practice. They might have had to learn the hard way, but I don’t think the issues you mention are utterly mysterious.

Moral education is almost non-existent and what little we have is almost exclusively theoretical. Some actual training and motivational strategy would be a great idea. More philosophical research on the topic is also something I agree we need.

James Gray · 7th December 2010 at 9:01 pm

I personally have a strong interest in learning more about moral motivation and my own thoughts on the matter can be pretty much summed up with the 8 points I mentioned here: http://ethicalrealism.wordpress.com/2010/08/31/how-to-become-moral/

Moral education tends not only to be theoretical, but also impractical and abstract, though that is a failing of a lot of philosophy (including training in logic).

Something I posted on Facebook is also relevant: when teaching or discussing philosophy we have the following values: (1) entertainment, (2) relevance, (3) understandability, (4) intellectual satisfaction. Most philosophy education or discussion that has 1-3 seems to lack 4, and vice versa. The nuance and complexity required to be intellectually satisfying makes it difficult to also be understandable, relevant, and entertaining, but it’s not necessarily impossible.

When philosophy becomes practical (motivation and/or action), it becomes “relevant.”

Tim Dean · 7th December 2010 at 10:16 pm

Basically I’m talking about the difference between asking someone to judge another person’s action in a trolley dilemma (the paragon case in moral psychology studies), and putting them in that situation to see what they would really do themselves.

Should we come to an agreement that it’s permissible, or even obligatory, to throw the switch or push someone onto the track, yet find that people consistently hesitate when placed in that situation in real life, then there’s a serious disconnect that needs to be looked at.

Many moral decisions can probably be formed slowly and with much deliberation, such as whether to donate to charity. But I would suggest most – including the most important ones – need to be decided rapidly, with little opportunity for lengthy deliberation.

A complete moral theory needs to account for both situations. Even if it can’t guarantee the correct choice in the rapid decision making situation, it should at least offer the best chance at reaching the correct choice, or the best chance of reaching an acceptable choice.

If it is a brilliant theory for the slow deliberative stuff, but it consistently arrives at the wrong choice when push comes to shove (excuse the pun), then it needs work.

Mark Sloan · 8th December 2010 at 8:00 am

Tim, as you might guess from our previous conversations, I have had similar thoughts. I disagree with you in the sense that it seems to me that moral philosophy HAS cared about motivation for behaving morally in its pursuits of “imperative oughts”. As Christine Korsgaard puts it in The Sources of Normativity, the hard problem, which lots of smart people have failed at, is identifying the source of justification (motivation, as I understand it) for accepting the burdens of behaving morally. I agree with you that it is highly unlikely that any such imperative oughts exist. (And Korsgaard’s attempt at deriving an imperative ought appears unlikely to motivate any average person to do anything.)

I too am puzzled at the lack of apparent interest in the topic of what societies might do to motivate average people to increase their moral behavior (understanding that there already is a vast amount of moral behavior taking place every day, just not as much as might be optimum).

Some old approaches to this goal would be off limits due to moral considerations. For instance, attempting to convince people of supernatural punishments for immoral behavior, or imposing draconian punishments such as cutting off hands for theft and execution for adultery, would be morally unacceptable.

But aside from these, there is lots that can be done. First, we can define the secular ‘end’ (the goal) of ‘moral’ behavior to be whatever is expected to best meet people’s long term needs and preferences concerning living in social groups. Second, we can define punishments for immoral behavior to be whatever is expected to best meet that same goal. Third, we might discourage behaviors, such as pursuit of anonymity, that are known to encourage immoral behavior.

I don’t see it as even very difficult to define a secular moral system that would better fulfill the above goal than any available alternative. And such a secular moral system comes with its own innate motivation to accept the burdens of moral behavior: the expectation that doing so will very likely be in a person’s best interests in the long term.

I can already hear the howls of “But how can we know that is what morality ought to be?” My reply is “Dudes, we are people; we are the deciders; the universe doesn’t care. We can define morality any way we think will best meet our needs and preferences concerning living in social groups”.

Were you disappointed in my related post on this topic? (I was hoping you would comment on it.)

http://forums.philosophyforums.com/threads/part-outline-of-a-culturally-useful-workable-secular-moral-system-44465.html

Nick Smyth · 10th December 2010 at 2:41 pm

While I agree with some commenters that we could use some examples here, I do appreciate your basic point.

Now, suppose it were the case that the best way to motivate people, in general, was through rhetoric/religion/other nasty “non-rational” stuff. Suppose philosophy of action/psychology established something like this. That would mean that the considered, rational judgments of normative ethics were basically inert.

Is this a real possibility? I sure think so, and I suspect that many moral philosophers think so, too. This might explain their unwillingness to think too much about the bridge you’re describing. The realities of human motivation might make the whole business of moral philosophy irrelevant. I think Bernard Williams thought something like this.
