Why Moral Subjectivism Doesn’t Imply Moral Relativism

Published by timdean on

I seem to spend an awful lot of time defending my moral anti-realism from the claim that, without objective moral values, morality is merely subjective.

However, this objection equivocates between two possible senses of ‘subjective.’

Given I don’t believe there are objective moral values, I do subscribe to a subjectivism of sorts. However, it’s not the subjectivism that says ‘what is right and wrong is entirely down to what I believe is right and wrong at the time.’ This kind of subjectivism slides easily into mad-dog moral relativism, a kind of free-for-all where the justification for any moral norm is that I believe it to be so.

That’s not the kind of subjectivism I’m into.

The other kind suggests that what makes something right or wrong is not dependent on my personal proclivities, but on a common set of norms as agreed to by a group. Now, there might be disagreement over these norms or their specific applications, but there’s an agreement – implicit or explicit – that there does exist a right and wrong way to behave.

If I buy into that agreement – that social contract, if you will – then I’m bound to those norms irrespective of my personal desires or interests at the time. I might really want to lie, cheat or steal, but if I’m party to the moral agreement, then I’m morally obliged not to.

The subjectivism comes in because there’s nothing objective, necessary or logically binding about my buying into the moral agreement. In fact, for most people, their buy-in is implicit. And I’d suggest that virtually everybody does buy into the moral agreement because, in the long run, it serves their interests – even if it forces them to quash certain proximate interests from time to time.

(There’s a bit of Allan Gibbard’s norm expressivism here, but I wouldn’t necessarily subscribe to Gibbard’s language, or his analysis of moral terms.)

Should someone not buy into the moral agreement – or only do so when it suits them – the rest of the morally-bound community will probably not react well. You know, getting all stabby and lynchy and such. And there we have the basis of external moral motivation.

So, yes, there is some subjectivism about in my account of morality. But it’s not the kind of subjectivism that says I can make morality any way I want so that it suits me. It says that we all get together, nut out a set of behavioural rules, agree to abide by them, and then we’re bound. Not logically, but subjectively by agreement.

And I really don’t think that this, in any substantial way, undermines morality or moral authority. In fact, I think morality has been working this way throughout history already. It’s not been perfect, but as far as morality has worked, it has worked this way.

Some, such as Richard Joyce, worry that revealing this fact about morality – and the corresponding revelation that there are no objective values – means we might all throw our arms in the air and start looting. I’m more inclined to agree with Joshua Greene that the sooner we abandon any pretence that there are objective moral values – and abandon the often lethal arguments over whose rendering of moral truth is the correct one – the better off we’ll be.

The sooner we realise that morals are negotiated and agreed upon, the sooner we’ll loosen our grip just enough to encourage more tolerance and flexibility.

So there – subjectivism isn’t subjectivism isn’t subjectivism. And that’s alright.


22 Comments

John Wilkins · 22nd February 2011 at 8:08 pm

I think that subjectivism and relativism are often conflated. The best way is to understand them as contraries of a different kind: objective-subjective and absolute-relative. You can therefore have absolute subjectivities and objective relativities, and so on.

Moral subjectivism means that there are no objective moral values, but it doesn’t mean we can’t have moral absolutes. I can think that there are absolute rights without asserting thereby that rights are a feature of the objective universe. The world doesn’t care if I am free or bound, but I think there is an absolute right to freedom I should have.

[In case it isn’t clear, I’m agreeing with you here.]

GTChristie · 22nd February 2011 at 11:21 pm

Moral theorists who apply relativism to practical ethics usually begin from subjectivity as a premise. Roughly the form of the argument is, “if moral values are not objective, then they are subjective. If all values are subjective, then ‘my’ values are just as good as ‘yours’ and all values are equal.” This use of subjectivity as a premise explains how relativism and subjectivism become conflated in many discussions of relativism: the two are associated together and some people forget to distinguish between them, or to discuss what conclusions subjectivity actually entails and what it doesn’t.

You are seeing correctly that subjectivity is a meta-ethical condition that must be accounted for in any moral theory, without adopting it as a premise for practical moral arguments. So your position is spot-on. I agree with you that morality and ethics are actually consensus-based, mostly implicit rather than explicit, and a bit less than a “social contract.”

In your post above, this statement can be improved:
And there we have the basis of external moral motivation.
I would expand it ever so slightly, to include what it implies:
And there we have the basis of external moral motivation: others’ expectations.

I believe the force of others’ expectations is the “binding” force in moral judgments made by the individual. Those expectations are culturally (or socially, if you prefer) formed, and the process is messy consensus-making (and recognizing), but consensus all the same.

The binding “force” is the individual’s willingness (and ability!) to notice, acknowledge and finally live by the recognized expectations of others. Usually we benignly accept the “rules” implied by what others expect from us; sometimes we chafe; some people (such as criminals) are blind to them or don’t care; careful thinkers sometimes genuinely disagree and purposely (on principle) subvert the implicit rules. In each of these levels of acceptance, the bindingness is stronger or weaker, and “subjectivity” comes into it simply as a matter of the individual’s internal interpretation of and respect for others’ expectations: every person’s interpretation being unique.

This is a wonderful post, and gets it right. You are (subjectively speaking) my kind of philosopher.

devin howard · 23rd February 2011 at 4:52 am

I tend to think of moral objectivism as a belief in the existence of an absolute moral state where right and wrong are clearly, explicitly defined in universal terms. Subjectivism, as I see it, is an interpretation of human morality that establishes a kind of values and beliefs based spectrum. Normative values located near the center where the majority of people cluster, and moving out (probably better to think spherically rather than as a two dimensional axis as it would permit a greater range of cultural and social diversity) as one’s moral and ethical identity moves further away, sharing less and less commonalities, the more deviant. While it could be argued that this would establish a moral ‘absolute’ at the center, I don’t think this is true because it is common features, not an objectively defined region or set of values, that would define ‘normative’. Relativism, in this construction, would permit moral frameworks existing far outside the center to be directly compared with and equated to anything in the entire range.

Quick note: I lack the specific vocabulary necessary to engage this as a philosophy student, but great post, fascinating stuff.

Mark Sloan · 23rd February 2011 at 2:40 pm

The repeated questions about objectivity may in part be due to confounding two separate questions about the reality of objective moral values. I think we are all in agreement that there are NO objective moral values that somehow magically bind people’s actions regardless of needs and preferences.

Where we possibly disagree is that I think there ARE objective moral values (or principles) that 1) are mind independent in the same sense as mathematics and science and 2) are ‘prudent moral principles’. Here, ‘prudent moral principles’ are those that, if adopted and practiced, can be reasonably expected by mentally normal people to better meet their needs and preferences over a lifetime than any available alternatives. Of course, such ‘prudent moral principles’ have no more ‘magic bindingness’ on people’s actions than do objective facts from mathematics or science.

It is uncontroversial in evolutionary morality that our biological emotions such as empathy, guilt, and righteous indignation and our cultural moral standards such as the many versions of the Golden Rule and prohibitions against murder, theft, and lying exist because they are biological and cultural heuristics for “increasing the benefits of cooperation in groups”. Further, these biological emotions and cultural moral standards motivate or advocate behaviors that exploit specific strategies from game theory for increasing those benefits. These include kin altruism, direct and indirect reciprocity, and ‘commitment flagging’ (showing membership and commitment to a subgroup by, for example, dress, circumcision, and food prohibitions).
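
(As a minimal, purely illustrative sketch of the ‘direct reciprocity’ strategy mentioned above: tit-for-tat in a repeated prisoner’s dilemma. The payoff numbers are hypothetical; the point is only that reciprocators do better with each other than with habitual defectors.)

```python
# A minimal sketch of direct reciprocity: tit-for-tat in a repeated
# prisoner's dilemma. Payoff numbers are purely hypothetical.

PAYOFF = {  # (my move, partner's move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, partner defects
    ("D", "C"): 5,  # I defect, partner cooperates
    ("D", "D"): 1,  # mutual defection
}

def tit_for_tat(partner_history):
    """Cooperate first, then copy whatever the partner did last round."""
    return "C" if not partner_history else partner_history[-1]

def always_defect(partner_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Total payoffs for two strategies over repeated interaction."""
    hist_a, hist_b = [], []  # each side's record of the *other's* past moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(hist_a), strategy_b(hist_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): reciprocators prosper together
print(play(tit_for_tat, always_defect))  # (9, 14): defection pays once, then cooperation dries up
```

The reciprocator is exploited only once; after that the benefits of cooperation vanish for the defector as well, which is the sense in which such strategies “increase the benefits of cooperation in groups”.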

It is at least possible that we will someday learn about the moral standards of other advanced, intelligent species. For the same reasons we would expect them to be familiar with objective, cross-species universal mathematical facts such as 2 + 2 = 4, we can expect them to likely be familiar with the same game theory strategies that we see implemented in the biology producing our moral emotions and in the strategies underlying our cultural moral standards.

Based on the likely universality of mathematics, we should not be surprised if these hypothetical intelligent aliens recognized something like “Unselfish behaviors that increase the benefits of cooperation in groups are moral behaviors” as an objective, mind independent, ‘prudent moral principle’ and a potential basis of secular moral systems.

James Gray · 23rd February 2011 at 9:59 pm

Should someone not buy in to the moral agreement – or only do so when it suits them – the rest of the morally-bound community will probably not react well. You know, getting all stabby and lynchy and such. And there we have the basis of external moral motivation.

I think you’ve missed the point here. Of course, it’s beneficial to seem like one buys into morality, and of course it is beneficial to do so for the most part. But that certainly doesn’t mean it’s always beneficial to do so, nor does it mean that “people will always know” when we break the moral rules. Your attitude seems to be that one might as well be immoral when they know they won’t get caught.

Evil politicians, cops, and CEOs get away with doing horrible things quite often and they do it because they think it will benefit them. The myth that being moral is always in one’s interest is absurd in the extreme. The whole point of morality is that we can harm others for our own benefit and benefit others at our own expense. To think morality is only about self-interest misses the mark by a long shot, not only because immoral behavior can sometimes benefit oneself but also because heroic or “supererogatory” behavior often requires a person to help others at the expense of one’s own personal interest. Martin Luther King Jr and Gandhi must be pretty stupid if morality is what’s within their self-interest.

If morality is nothing but self-interest, then there is no need for morality at all. We can just figure out how to achieve our own goals as well as possible.

Tim Dean · 23rd February 2011 at 10:18 pm

James, while I wish it were possible to have some special binding system that will motivate me to act against my interests when my actions harm others, I just don’t think such a thing is possible. But I don’t think that undermines morality as much as you do.

And to clarify a couple of points: I would expect that people will often be tempted to do immoral things they think they can get away with, but that they think they can get away with them doesn’t make these acts moral. What’s moral is prescribed by the system of norms – and I’d imagine most systems to be some variant on the collective benefit theme.

It might be rational for a politician, cop or CEO to break the rules from time to time, but they’re still breaking the rules, and still deserve punishment.

Since there isn’t any internally motivating binding moral code that will prevent such behaviour, we need to make sure our moral system has some checks and balances built in to prevent, detect and punish such behaviour. That’s the best we can do – and until someone figures out how to make knowledge of categorical imperatives or the Form of the Good internally motivating, then moral realism has to deal with this issue as well.

And I think to base morality on anything other than self-interest is even more absurd. Note that I’m not talking about proximate self-interest with my moral subjectivism, but long term or ultimate self-interest. I benefit from being a part of a cooperative group. I might score some quick wins by defecting, but it’s more likely I’ll benefit by playing nice, particularly if the population I defect on punishes me for my behaviour.

To base a moral system purely on altruism, ultimate and proximate, is desperately unstable. While living in a system (if everyone adhered to it) would be wonderful, a small bunch of mutant or invading psychopaths would bring the whole system down – that’s why I say such a system suffers from the Utopian Fallacy. Better to assume people will be self-interested and build a moral system that accommodates that. In that sense, me and Hobbes sit on the same side of the table.
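
(A toy model, with purely hypothetical payoff numbers, of why I say a purely altruistic system is unstable: in a well-mixed population, unconditional cooperators are outscored by even a few defectors unless the group attaches a punishment cost to defection.)

```python
# Toy model: average payoffs in a well-mixed population of unconditional
# cooperators and defectors. All numbers are hypothetical.

def average_payoffs(defector_share, benefit=3.0, cost=1.0, punishment=0.0):
    """Per-interaction payoffs when partners are matched at random."""
    coop_share = 1.0 - defector_share
    # A cooperator always pays the cost; the benefit arrives only when the
    # partner also cooperates.
    cooperator = coop_share * benefit - cost
    # A defector pays no cost, free-rides on cooperating partners, and bears
    # whatever punishment the group imposes on defection.
    defector = coop_share * benefit - punishment
    return cooperator, defector

print(average_payoffs(0.05))                  # (1.85, 2.85): defectors win, altruism erodes
print(average_payoffs(0.05, punishment=2.0))  # (1.85, 0.85): with punishment, defection no longer pays
```

With no punishment, defectors do strictly better (by exactly the cost of cooperating), so they spread; add a sufficient punishment and cooperating becomes the better bet again. That is the work the external checks and balances are doing.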

If morality is nothing but self-interest, then there is no need for morality at all. We can just figure out how to achieve our own goals as well as possible.

And that’s precisely where the moral system I’m talking about comes from. Just like the governments we live in are anarchism + 5000 years, moral systems are self-interest plus a few thousand generations (with a little help from evolution).

GTChristie · 23rd February 2011 at 10:23 pm

@devin howard:
Subjectivism, as I see it, is an interpretation of human morality that establishes a kind of values and beliefs based spectrum.

Substitute relativism for subjectivism here to improve the statement. Then it becomes a powerful intuition/observation. In most discussions, subjectivism refers to the “individual’s experience of” any item in question, and relativism refers to the variation (i.e., the fact that individual experiences vary). So the spectrum you posit is a good visualization, but it’s a picture of relativity, which maps the existence of many subjective experiences.

Normative values located near the center where the majority of people cluster, and moving out … as one’s moral and ethical identity moves further away, sharing less and less commonalities, the more deviant.

That is exactly right. It’s a good visualization of moral relativity. In the moral sphere, there is more consensus on some beliefs or values than others and people’s judgments cluster around those; every individual is an instance of a subjective point within the sphere (and each would see the sphere differently from inside it). It’s a very useful visualization of what actually happens in the moral domain. You are identifying that there are subjective points in a relative sphere, in which the points tend to cluster; the clusters represent consensus. Used correctly, this visual might even show why “relative ethics” is bogus: it is not deducible from the map that all (subjective) moral judgements are equal in value, but that every moral judgment is 1) subjective and 2) represents a deviation from consensus (which could be zero). “Judge not” is not a valid conclusion from the map. “Everyone judges” is a valid conclusion from the map — and so is “everyone judges, and some judge closer to the consensus than others.”

All in all, a very nice intuition. I have no idea whether anyone could quantify actual values and place them in the sphere (previous posts here last month took a similar approach to mapping political persuasions, but the quantification protocol was not well-defined, only approximate).
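
(For what it’s worth, here is one purely hypothetical way such a quantification could go: score each person’s positions as a vector, take the centroid as the ‘consensus’ point, and read ‘deviance’ as distance from it. The issues and numbers below are invented.)

```python
# Hypothetical quantification of the "sphere": value vectors, a consensus
# centroid, and deviation as distance from it. All data are invented.

import math

people = {  # scores on three hypothetical moral questions, each in [-1, 1]
    "A": [0.9, 0.8, 0.7],
    "B": [0.8, 0.9, 0.6],
    "C": [0.7, 0.7, 0.8],
    "D": [-0.9, -0.8, 0.1],  # an outlier far from the cluster
}

def centroid(vectors):
    """Component-wise mean: the consensus point the cluster forms around."""
    n = len(vectors)
    dims = len(vectors[0])
    return [sum(v[i] for v in vectors) / n for i in range(dims)]

def deviation(vector, center):
    """Euclidean distance from the consensus point."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(vector, center)))

center = centroid(list(people.values()))
for name, values in people.items():
    print(name, round(deviation(values, center), 2))
# Everyone gets a deviation score: "everyone judges," and some judge closer
# to the consensus than others.
```

Nothing in such a map makes the centroid objectively privileged; it is just where the judgments happen to cluster.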

Pretty good student, aren’t you, Devin? Bravo.

GTChristie · 23rd February 2011 at 10:58 pm

@JamesGray:
To think morality is only about self-interest misses the mark by a long shot … immoral behavior can sometimes benefit oneself … [and] “supererogatory” behavior often requires a person to help others at the expense of one’s own personal interest.

I agree with the idea that morality cannot be about self-interest only, for the reasons you point out. But the fact that everyone is self-interested seems incontrovertible. It can’t be ignored that each person indeed possesses a subjective view of morality.

So the problem becomes: where does each subjective “analysis” of right action fall in relation to:
1) others’ analyses OR
2) an objective standard (if such exists) OR
3) some consensus, explicit or implicit.

I think #2 does not exist, or if it does, it actually is a product of #3. The “strong” moral relativist believes exclusively in #1 but faces the problem of how any judgment is valid at all (relativists have no decision model), which actually is most of your point. I.e., subjectivity is an insufficient basis for moral judgment. Agreed.

But I don’t think anyone here, including Tim, actually believes in #1 exclusively, so nobody in the neighborhood disagrees with you, except that I disagree that anyone has missed the point.

(Although I may have missed something along the way as usual by reading too fast LOL).

Mark Sloan · 24th February 2011 at 4:46 am

James, if, for example, I commit to and practice whichever of the many versions of virtue ethics I believe is MOST LIKELY to lead to my lifelong happiness and flourishing (eudaimonia), then I have chosen my morality based on my ultimate self-interest. This does not mean, as you seem to be thinking, that I am defining morality from moment to moment as whatever is in my proximate best interests, or even as what I expect from moment to moment will be in my ultimate best interests.

As you say, it is virtually certain that, in the moment of decision, I will sometimes expect it to be in my best interests to act un-virtuously (or immorally by some social contract morality or some claimed universal moral principle). Are you then claiming it would necessarily be rational for me to act immorally?

In fact, if I have chosen my morality wisely, it is likely to almost always be in my ultimate best interests (my prudent, rational choice) to act virtuously (or morally) even when, in the moment of decision, I expect doing so will not be in my best interests. The chief reason this is true is that people are often poor predictors of what will be in their ultimate best interests. This is particularly true in the heat and confusion of the moment of decision when our base instincts for greed and violence may be swaying our judgment.

Which is more rational, relying on your imperfect predictions in the moment of decision about what is in your ultimate best interests, or relying on the distilled moral wisdom of the ages (as you understand it)?

In all but a very few cases, I am going to rely on the distilled moral wisdom of the ages.

James Gray · 24th February 2011 at 11:20 am

Tim,

James, while I wish it were possible to have some special binding system that will motivate me to act against my interests when my actions harm others, I just don’t think such a thing is possible. But I don’t think that undermines morality as much as you do.

Not all anti-realists will agree with your personal positions on the matter. My point here is not merely undermining anti-realism but arguing against one line of thought you presented above. I disagree with that line of thought.

I never said that a “binding system” could always motivate us to be moral, and something like empathy can help. It might be possible to form personal commitments to be moral as well.

And to clarify a couple of points: I would expect that people will often be tempted to do immoral things they think they can get away with, but that they think they can get away with them doesn’t make these acts moral. What’s moral is prescribed by the system of norms – and I’d imagine most systems to be some variant on the collective benefit theme.

People can learn to control themselves and regulate themselves without the need for a threat of punishment.

It might be rational for a politician, cop or CEO to break the rules from time to time, but they’re still breaking the rules, and still deserve punishment.

But they often know they won’t be punished, so some kind of self-regulation would be a good thing.

Since there isn’t any internally motivating binding moral code that will prevent such behaviour, we need to make sure our moral system has some checks and balances built in to prevent, detect and punish such behaviour. That’s the best we can do – and until someone figures out how to make knowledge of categorical imperatives or the Form of the Good internally motivating, then moral realism has to deal with this issue as well.

Yes, but the idea of controlling people can easily be taken to an extreme. Do we want hidden cameras to watch everyone at all times? Would that encourage moral behavior? I think self-regulation would be preferable to that.

And I think to base morality on anything other than self-interest is even more absurd.

This is ambiguous. I don’t think you want to say that morality is whatever is best for oneself because it’s what’s best for others as well. I suspect that you want to say that you think self-interest is the only motivation possible.

Note that I’m not talking about proximate self-interest with my moral subjectivism, but long term or ultimate self-interest. I benefit from being a part of a cooperative group. I might score some quick wins by defecting, but it’s more likely I’ll benefit by playing nice, particularly if the population I defect on punishes me for my behaviour.

No, a person can be smart enough to cheat whenever it’s beneficial to do so. To pretend no one can be smart enough to do that is absurd.

To base a moral system purely on altruism, ultimate and proximate, is desperately unstable.

I don’t know what you want to say here. It sounds like you are coming up with a straw man view of morality that almost no one would agree to. One’s real motivations are certainly important for morality. We can’t demand that people behave in ways that are impossible.

While living in a system (if everyone adhered to it) would be wonderful, a small bunch of mutant or invading psychopaths would bring the whole system down – that’s why I say such a system suffers from the Utopian Fallacy. Better to assume people will be self-interested and build a moral system that accommodates that. In that sense, me and Hobbes sit on the same side of the table.

And Hobbes thought you needed a totalitarian state to control everyone because he didn’t understand how self-regulation could be possible. The moral sentimentalists like Hume had a much less coercive view of morality.

You cherish the idea of being “realistic” about morality and looking at what real motivations are, but what you are saying here sounds very abstract and speculative. It sure doesn’t describe how I think I am motivated to behave.

GTChristie,

What is a “subjective view of morality?”

Mark Sloan,

This does not mean, as you seem to be thinking, that I am defining morality from moment to moment as whatever is in my proximate best interests, or even as what I expect from moment to moment will be in my ultimate best interests.

I never said that anyone “defines” morality that way. The point here is why you wouldn’t do what is in your best self-interest from moment to moment. That’s what a virtuous egoist would learn to do.

In fact, if I have chosen my morality wisely, it is likely to almost always be in my ultimate best interests (my prudent, rational choice) to act virtuously (or morally) even when, in the moment of decision, I expect doing so will not be in my best interests… Which is more rational, relying on your imperfect predictions in the moment of decision about what is in your ultimate best interests, or relying on the distilled moral wisdom of the ages (as you understand it)?

We aren’t stupid computers. We can learn when we can break the rules to serve ourselves given our situation. We don’t have to “generalize” to the point of ignoring the situation we are in. That’s exactly the kind of virtue Aristotle was interested in — the virtue of knowing what one should do given the precise situation one finds oneself in. That kind of virtue is important whether we are egoists or not.

All of our predictions are imperfect and it’s certainly true that breaking the rules is often foolish, but that doesn’t mean we can never learn how to do it in any situation we ever find ourselves in.

Criminals break the rules to serve themselves. To tell criminals that they are totally stupid and should become moral certainly hasn’t been changing their minds because they are already doing their best to help themselves as it is. Some criminals are quite successful and intelligent about it.

Tim Dean · 24th February 2011 at 12:04 pm

Hi James. I entirely agree that internal motivation/self-regulation, where possible, is preferable to external motivation – empathy, moral heuristics etc, as I’ve written about in my moral education post.

However, I stress that a moral system based entirely around internal motivation is unstable and would be at dire risk of invasion by nasty folk. So, as long as nasty folk cannot be ruled out, and internal motivation lacks the kind of binding force Kant et al. wanted it to have, a moral system will require external motivation.

This isn’t to say internal motivation isn’t preferable, only that it can’t be the only crutch.

And I sincerely don’t believe we’ll ever have a perfect moral system or a society that ideally exemplifies it. The checks and balances are a sliding scale, and it’s a trade-off as to where you sit on that scale. I don’t believe there are any perfect solutions to this problem.

Also, I want to stress the importance of the ultimate/proximate distinction. The two senses of self-interest along these lines are pivotal to understanding my account of morality.

Tim Dean · 24th February 2011 at 1:10 pm

Oh, and I agree with Hobbes’s diagnosis of the problem (although his state of nature doesn’t actually resemble nature at all), but I disagree with him on his proposed solution.

James Gray · 24th February 2011 at 3:37 pm

Tim,

Thanks for the clarification. I think that helps clear up the egoism issue and I agree that Hobbes’s philosophy has some credibility.

GTChristie · 24th February 2011 at 11:48 pm

@James:
What is a “subjective view of morality?”

The statement you’re referring to was:
… the fact that everyone is self-interested seems incontrovertible. It can’t be ignored that each person indeed possesses a subjective view of morality.

Your own point of view, for instance. “Morality” as experienced/interpreted by the individual. The sentence just says everyone has his/her own unique standpoint, not that everyone has a subjectivist theory of ethics.

GTChristie · 26th February 2011 at 1:01 pm

@John Wilkins:
The world doesn’t care if I am free or bound, but I think there is an absolute right to freedom I should have.

This observation makes you a candidate to consider “Freedom is Radical.”

The freedom you should have is the freedom you do have. Although the essay has its flaws, it distinguishes between natural and contingent freedom and, incidentally, dismisses “free will” from any privileged place in ethics. At risk of shameless self-promotion, I point you to this essay (and its companion, “Make My Day”) for your criticism. I’ve never seen a negative comment except that it’s hard to follow. It’s hard to follow because it is outside the box.

The meta-ethics that results from recognition of radical freedom is a form of relative ethics — consensus based, therefore contingent. But the relativity is limited to the fact that ethics is invented, not discovered; contingent, not axiomatic. And there I leave the invitation to you, and all … shameless as I am. Comments invited, no matter how skeptical or savage. (Forgive me for the advert, Tim, please sir …)

Y. · 9th February 2013 at 9:18 pm

That’s not very convincing. At best, all you’re doing is shifting the ground from individual moral relativism to group-level moral relativism. Some of history’s worst actors had a social contract and a ‘moral community’ as well.

At worst, you are legitimating any order which arises from violence. After all, in some communities, people also get stabby and lynchy if you insult the local dictator.

baalbek.org » Subjectivism without relativism · 24th February 2011 at 10:12 pm

[…] Tim Dean explains how his moral anti-realist position does not imply moral relativism. […]

baalbek.org » 21 – 26/2/11 · 17th June 2011 at 7:05 am

[…] Tim Dean argues that his moral anti-realist position does not imply moral relativism. […]
