Two Meanings of Moral

Published by timdean on

Are there any terms less well defined, less well understood, than “moral”? I’ve already tried to tease out a few different uses of moral terms, but there’s a further critical distinction that I think it’s worth stressing, particularly in light of my recent riff on why morality doesn’t need God over on the ABC’s Drum Unleashed.

In the comments to that piece, between accusations of rampant relativism and lashings of ad hominem, it wasn’t uncommon to see the accusation that if you deny an objective moral absolute – whether from God or from reason – then you’re left with an unpalatable moral nihilism, the kind of nihilism that makes rape and torture permissible, as suggested in this comment:

Without a defined yardstick morality is nothing but a slippery slope that steadily bends to prevailing ideas, be they correct or incorrect.

If you stand for nothing, you will fall for everything. The notion that morality is relative might soon accept relations with relatives – ridiculous thought now but who knows in what direction this slippery slope declines. Many an accepted idea today was frowned upon in the past.
– Grander view

Despite the fact that the church saw marriage between relatives as permissible in the past, this perspective is only true if you hold a particularly broad conception of morality – a conception that I think is altogether too broad.

Morality is often seen as referring to a code of behaviour, something that directs our actions, particularly when it comes to actions that affect others. But it’s also often seen as being a subject that concerns the facts about what is right and wrong. The two senses are like the difference between saying “don’t steal” and “stealing is wrong”; the first directs your actions, the second suggests there’s a fact about the world regarding stealing and its wrongness. That latter position is typically called moral realism – which says there are moral facts in the world that we can discover to be true, and these facts justify particular moral norms.

But, as Joshua Greene argues in his PhD dissertation, it’s possible to discard the moral realism bit and keep the action-guiding bit – and if you do so, you really don’t lose that much at the end of the day. The way he characterises the distinction is between:

moral1:  of or relating to the facts concerning right and wrong, etc.

moral2:  of or relating to serving (or refraining from undermining) the interests of others.

So he reckons you can ditch moral1 – and there are good reasons for doing so – and still keep moral2.

You might call Greene’s approach ‘moral nihilism’ because he doubts the existence of objective moral facts in the world – as do other philosophers, such as John Mackie and Richard Joyce. But it’s only ‘nihilistic’ towards moral1, not moral2. Still, it might look like abandoning objective moral facts opens the door to an odious relativism when it comes to moral2; what’s to stop me justifying anything I want as being moral in any way I please?

However, the nihilism needn’t spread that far. All we need to do is find agreement on some very basic premises, and we can build a moral2 that looks just as we’d expect a moral system to look, yet isn’t dependent on non-existent moral facts or supernatural forces.

One of the things we agree on is what morality is for. I’d suggest that morality springs from the fact that we’re social animals, and that we all have our own interests that we wish to pursue. Yet many of our interests conflict with those of others. So if we’re to effectively pursue our interests, then it’s a good idea to place limits on our behaviour when that behaviour might negatively impact the interests of others – as long as they agree to limit their behaviour when that behaviour might negatively impact our interests. Old school social contract stuff. Morality basically serves to regulate these interactions. It also serves other purposes, such as binding communities together through shared practice, and giving people rituals that cement their identity – but I’d suggest the core function of morality is to regulate social interactions. And I’d suggest that every moral system in the world has this as a core objective.

So, if we can all agree that this is what we want morality to do, then we can start to look at how best to achieve this end. Importantly, we don’t need to agree on particular moral norms or values – just on the function of morality. There might be many ways to effectively regulate social interaction – some better than others in different circumstances – but there are also lots of things that we know disrupt social harmony. Things like rape and torture.

You don’t need objective moral facts that are independent of us in order to show that this is true. So you can build a flourishing moral2 system that abandons the requirement for moral1 to back it up – it’d end up being somewhat pluralistic, at least where there’s more than one way to promote social interaction – but it wouldn’t be rampantly relativist.

As a side note, I think much moral enquiry (at least until G.E. Moore) starts with questions about moral2, but then it’s very easy to assume you need something like moral1 to back it up – so moral enquiry (particularly post-Moore) focuses altogether too much on moral1, and in doing so misses the point of what morality is really about. As such, I reckon moral philosophy could do with a bit of a purge of moral1. And if we can manage that, we might be able to focus more attention on the real problems of morality, i.e. constructing a robust and persuasive moral2 system.


66 Comments

chapter18 · 29th May 2010 at 7:59 pm

Morality is an absolute must not only for order in society but also for disciplining one’s creative energies to attain higher levels of consciousness. Fear of sin, love for God and morality in society are the cardinal bedrocks on which a caring and a peaceful world could be built.

Regards
Narayanan
http://www.chapter18.wordpress.com

Tim Dean · 29th May 2010 at 8:04 pm

Hi Narayanan. Can you explain why morality must be absolute in order to achieve these things? My argument is precisely that we can achieve social order without morality being absolute, in the moral1 sense. People can be quite sufficiently motivated to behave in a caring and peaceful way purely through a moral2 system.

James Gray · 29th May 2010 at 9:02 pm

My arguments have already been presented on this issue, but I will summarize several of them anyway. I think moral realism is sensitive to (and perhaps confirmed by) various experiences we have. Additionally, realism can tell me why I shouldn’t break the contract when it would benefit me to do so.

The idea of the social contract is to punish people for breaking the rules to try to prevent “cheating” but right now rich people have control of the system and cheating is almost allowed for them. The banking industry, oil spill, and the Catholic church cover ups seem to be pretty good examples. The idea that “might makes right” seems to be the guiding rule for the world. If you are powerful enough, the laws tend not to apply.

People are social animals, but they clearly decide to hurt others in order to attain benefits quite often. Telling someone that they are a social animal doesn’t miraculously make them moral.

The “action guiding” element of morality isn’t so clear without realism. My view is that morality is action guiding in the sense that what we do really matters and makes things “better” or “worse,” but what does the word “ought” mean to an antirealist? Something like, “do it!” like R. M. Hare suggests? I’m not sure what that is supposed to mean to me.

We could determine the “right thing to do” to a great extent without moral realism, but it will sound arbitrary and “unimportant” in some sense.

Of course, action guiding elements and “importance” can both come from desires. Still, most people’s desires seem to lead to quite a bit of harmful behavior. It isn’t clear how desires can be used by anti-realist morality to lead to good behavior.

There might be an existentialist element to anti-realism. Perhaps it’s not so rational, but you can “commit yourself” to living by a moral standard. Foucault seems to want to do something like that. But such an existentialist element seems to require a totally non-oppressive moral system. You can’t force your views onto others. But as a moral realist, forcing my view onto others is very important to me. I want to know which sorts of behaviors “must” not be allowed. Even an ideal “contract” that fully informed people would agree to might not be able to justify the use of force. What could justify punishment or coercion? (e.g. Why could we take children away from abusive parents?)

People in all countries are relevant. We want to help prevent genocide, for example. The use of force seems necessary for the morality that we want.

We want a social contract that “forces people” into minimally decent behavior, but the anti-realist seems to just commit oneself to being willing to coerce others by the use of force without “really” being right to do so. Internet personality Stefan Molyneux is an antirealist and agrees that the use of force can never be justified. To him we could use ostracism and be unwilling to cooperate with “criminals” but we would have to allow children to be molested, raped, etc. by self-sufficient hermits and self-sufficient ignorant communities. People in other times and/or places were quite happy with slavery, and people outside their community wouldn’t be able to justify the use of force to protect the innocent.

James Gray · 29th May 2010 at 9:18 pm

Of course, I don’t want to say that I endorse the view that God is necessary for morality, or that religion is necessary in particular. It seems like an equivocation to move from the realist morality of the church to anti-realist atheism. Right now I understand that most philosophy professors are moral realist atheists, which should make such a position a clear possibility.

Additionally, atheists need not claim that God must exist to make sense out of morality. Ethics could be something that simply is best done without mentioning God.

What is also very startling to me, someone from the US, is the state-sponsored religious activity at the schools in question. Even if religion was necessary for morality, what about Buddhism, Taoism, Hinduism, and so forth? Buddhism in particular can be atheistic.

I agree that the equivocation with “atheist” and “simplistic relativist” is not a good one. I agree that anti-realists can avoid relativism. However, it should also be pointed out that “secular” moral realism can be a serious viewpoint.

chapter18 · 30th May 2010 at 3:00 am

For any individual, there are two types of morality that he has to conform to. One is universal and applicable to everyone in the society, irrespective of his age, position or economic status. Being truthful in one’s conduct with his fellow beings, forgiveness etc fall in this category. The second one is specific to the individual and is dependent on many factors. Here, it might be that a morally correct action for one could be totally immoral for the other placed in a different setting and environment. For a person suffering from dehydration, a doctor would suggest a lavish dose of sugar-salt solution while he would severely restrict these for one who is a diabetic. Likewise, some moral principles are absolute and non-negotiable while others are framed taking into account the peculiar needs of the individual.

Regards
Narayanan

Paul · 30th May 2010 at 8:29 am

I want to second the sentiment of James Gray’s posts. Even though a certain standard of behavior can be maintained without a widespread belief in moral realism, as it is in the U.S. today, it certainly lacks the bite that the moral realism of past cultures had. Even though the U.S. is a better place today than it was 100 years ago, I agree with James Gray that the elites of the U.S. do not adequately include their fellow citizens within their realm of moral concern, and that they did do a much better job of this in the past, examples being the GI Bill, and the Civil Rights Movement.

Mark Sloan · 30th May 2010 at 9:29 am

Tim,

You have proposed two definitions of morality to be of interest which claim either 1) there are facts determining if an action is right or wrong (standard moral realism) or 2) moral behavior has an underlying function (a function to be agreed on by means of standard social contracts as I understand):

moral1: of or relating to the facts concerning right and wrong, etc.

moral2: of or relating to serving (or refraining from undermining) the interests of others (a function to be agreed on by means of standard social contract?)

There is a third definition of morality, related to the second, which might be of interest:

moral3: of or related to increasing the benefits of cooperation within a group by motivating unselfish behavior (My interest in this form arises from my claim that I can support, by applying the normal methods of science, that this function of moral behavior is the underlying function of almost all moral intuitions and cultural moral standards).

Tim, here you may be remembering some previous communications with me and comments you made. My presentation has evolved some since then and I hope my perspective may again be of interest.

Provided that definition 3 can, in fact, be supported as provisionally ‘true’ by the normal methods of science, then it seems to me it has two important advantages over definition 2.

First, it is a unitary definition based on it being provisionally ‘true’ as matter of science. So there should not be other functional definitions, at least from science, vying for attention as the ‘best’ version we should adopt, and enforce, as part of a social contract.

Second, if it is the underlying principle of almost all moral intuitions and cultural moral standards (which modify moral intuitions when adopted) then it should be more consistent with our moral intuitions than any other claimed function of moral behavior. Consistency with more of our moral intuitions means that 1) we will feel more emotional justificatory force for accepting its burdens including the burdens of punishing offenders, and 2) it should better meet John Rawls’ criteria for moral action (reflective equilibrium). That is, this definition of the function of morality should agree, better than all others, with our moral intuitions from all perspectives and in all variations of circumstances (consistent with Rawls’ veil of ignorance).

However, even this third definition of moral still, at bottom, can be accepted and enforced only as a social contract. There is no source in science of magic ‘oughts’ with justificatory force beyond reason. The only justification for accepting the burdens of the third definition as part of a social contract is rational choice (the choice expected to best meet needs and preferences of a group or individual).

Interestingly though, due to the function of morality being defined as to produce ‘benefits’ of various kinds, it is arguable that, on average for almost all cases, it may be in the best interests of the individual to accept its burdens even when the individual expects, in the heat and confusion of the moment, that accepting those burdens will not be in their best interests. That is, it claims to be a reliable heuristic for, on average, increasing benefits to the group and the individual.

I appreciate the opportunity to discuss these issues which I find fascinating.

Mark Sloan

Mark Sloan · 30th May 2010 at 9:32 am

James,

I’ve enjoyed reading your referenced website. Based on that website, I am thinking you might be interested in how one could claim to argue that moral definition 3 can be shown to be provisionally ‘true’ using the normal methods of science.

Briefly, I claim this can be done by recasting definition 3 as a hypothesis about what moral behaviors ‘are’ as a matter of science. Of course, this says nothing about what moral intuitions and cultural moral standards ‘ought’ to be which science has no access to so far as I know.

My hypothesis is: “Almost all moral intuitions and cultural moral standards motivate or advocate behaviors that are unselfish at least in the short term and increase, on average, the benefits of cooperation (material goods, emotional goods, and sometimes reproductive fitness) within a group.”

I claim this hypothesis can be shown to be provisionally ‘true’ if it, better than any other hypothesis, meets the following criteria for scientific utility (adapted for moral behaviors): 1) explanatory power for diverse and contradictory cultural moral standards and common moral intuitions, 2) explanatory power for moral puzzles such as the origin of altruism and observed puzzling moral intuitions, 3) predictive power for moral intuitions, 4) universality, 5) utility, and 6) consistency with existing science.

I have been surprised at how well the hypothesis explains contradictory moral standards, puzzling aspects about morality, and puzzling moral intuitions. My problem has been finding counter-examples rather than examples.

Of course, one ‘easy out’ I have for dismissing counter-examples is the disappointing phrase “Almost all”. I originally thought this phrase was required to exclude moral behaviors such as kin altruism and aversion to inbreeding which evolved (based on our mammal relatives) long before our ancestors began living in cooperative groups as adults. Recently, I have concluded kin altruism and aversion to inbreeding are actually covered by the hypothesis for the special cases where cooperation within a group is defined as perhaps unconscious cooperation between closely related individuals, rather than as rationally chosen cooperation with a social group who may be unrelated.

In any event, I still have not yet been able to think of any good counterexamples.

Mark Sloan

Tim Dean · 30th May 2010 at 1:53 pm

James: I guess what I’d say in response to your comments on moral realism is that I’d suggest there are no moral facts, and never have been. Yet, in the absence of those facts, we’ve made it this far already. That shows that anti-realism can work, in as far as it’s worked to date.

Also, I do think the use of force is justified at times. For example, I believe that punishment, or the threat of punishment, is crucial to keeping morality functioning properly.

But this is also because I don’t believe that moral facts (if they existed) would be intrinsically motivating. I’d argue that even when an individual believes they’ve stumbled on a moral fact, there are other contingent psychological factors that actually motivate them to behave in accordance with it. And anti-realism can utilise these psychological factors to motivate behaviour as well.

Hey, if moral realism were true, I’d be all for it. It would certainly be helpful to have some objective, external and absolute standard. But I don’t think such a standard exists, so we have to make do with anti-realism. It’s like I’d love it if there was one book that had *all* the answers to any philosophical question I might have – I can see the appeal of something like the Bible to Christians, or the Qur’an to Muslims – but, sadly, I don’t believe such a book exists with all the answers. So we have to make do with our own feeble abilities to tackle the questions ourselves, whilst standing on the shoulders of past giants, of course.

Mark, I don’t disagree with your moral3, but I’d suggest it falls under the umbrella of moral2: motivating unselfish behaviour and promoting cooperation both advance people’s longer-term interests.

Also, I’d caution against committing the naturalistic fallacy – in any of its many guises – by suggesting that what *is* is what *ought* to be. That science can demonstrate that morality has performed a certain function doesn’t imply that morality *ought* to perform that function. At least, it doesn’t imply it with logical force. That said, I think science is central to moral enquiry, but it needs philosophy and reason to lay the foundations of normativity.

Mark Sloan · 30th May 2010 at 3:23 pm

Tim,

Moral2 is “of or relating to serving (or refraining from undermining) the interests of others”. As I understand it, this asserts moral behavior has a function similar to increasing the generalized happiness for the most people (similar to Utilitarianism?) or Sam Harris’ recent TED talk claims about morality being about increasing generalized human well-being.

Moral3 is much more restricted. It is only “of or related to increasing the benefits of cooperation within a group by motivating unselfish behavior”.

I am ready to defend the idea that as a matter of what moral behavior ‘is’ (‘is’ being judged by the normal methods of science), the restricted benefits defined by the moral3 definition are correct, and the moral2 definition is INCORRECT because moral2 includes a lot of interests of people that have nothing to do with what moral behavior ‘is’. (Capitals added for emphasis because it is a critical point.)

Of course, with support from Jeremy Bentham, John Stuart Mill, and many others, I fully expect you can produce what to me would be good, even convincing, arguments that moral2 is how morality really ‘ought’ to be defined. Then my point about it not being what moral behavior ‘is’ could be moot.

However, looking at the history of moral philosophy, I have no confidence such philosophical arguments about what moral behavior ‘ought’ to be, no matter how reasonable they might seem to you and me, will ever become generally accepted.

On the other hand, showing what moral behavior ‘is’ as a matter of science seems to be exploring relatively new ground. (That is, new ground in the sense that previous work in the field has been famously dominated by junk science and philosophical nonsense.)

I think my approach fully avoids any problems with the naturalistic fallacy. First, I state that science has no access to magic ‘oughts’ (justificatory force beyond rational thought for accepting any morality’s burdens). However, due to the special characteristics of this specific hypothesis about what moral behavior ‘is’, there appear to be much better than I expected reasons to choose, as a rational choice, to accept the burdens of its implied morality even when the individual expects that action to be against their best interests. So acceptance of this morality’s burdens could be a rational choice; no magic ‘oughts’ are required or available.

I am particularly interested in what moral philosophy might tell us about justificatory force for accepting the burdens of a morality based in what science can tell us moral behavior ‘is’. Specifically, any arguments that might be made for justificatory force for moral actions that meet John Rawls’ criteria for reflective equilibrium and the veil of ignorance (which moral3 can be interpreted to do very well).

These are not easy issues. I appreciate the way your comments make me rethink what I am trying to get across.

Mark

Tim Dean · 30th May 2010 at 4:38 pm

I would ask: why promote moral3? And I suspect the answer would lie in moral2.

I start with the rationalist self-interest assumption that people will pursue their interests, and these interests will conflict. So, it’s rational to adopt a moral system that puts limits on the behaviours of others that might impinge on my interests, and, in turn, I agree to put limits on my behaviours. That’s the social contract part. And cooperation is also an integral part of this contract, because (as the Prisoner’s Dilemma shows) cooperation can yield benefits for all as long as we can avoid the traps of cheating, free-riding, mutual defection etc.
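The Prisoner’s Dilemma point can be made concrete with numbers. A minimal sketch, using the standard textbook payoff values (the specific numbers are illustrative assumptions, not from the post):

```python
# Standard Prisoner's Dilemma payoffs for one player (illustrative values):
# T (temptation) > R (reward) > P (punishment) > S (sucker's payoff)
T, R, P, S = 5, 3, 1, 0

# Payoffs as (player A's score, player B's score) for each pair of moves.
PAYOFF = {
    ("cooperate", "cooperate"): (R, R),
    ("cooperate", "defect"):    (S, T),
    ("defect",    "cooperate"): (T, S),
    ("defect",    "defect"):    (P, P),
}

def total_welfare(move_a, move_b):
    """Combined payoff to both players for a single round."""
    a, b = PAYOFF[(move_a, move_b)]
    return a + b

# Mutual cooperation yields the best joint outcome (6 vs 2 here)...
assert total_welfare("cooperate", "cooperate") > total_welfare("defect", "defect")
# ...yet each player is individually tempted to defect (T > R), which is
# exactly why mutually agreed limits on behaviour -- a contract -- pay off.
assert T > R and P > S
```

The asymmetry the asserts capture (joint cooperation beats joint defection, but defection is individually tempting) is the structural reason a social contract with enforcement is rational for self-interested agents.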

So, yes, I’d accept moral3, but I’d do so because of moral2, but I still wouldn’t require moral1.

Mark, I suspect we’re probably 90% in agreement, maybe more – our only differences are over detail and expression.

James Gray · 30th May 2010 at 4:53 pm

Tim,

I never said we should agree to a religion or supernatural beliefs. I don’t see moral realism as doing so.

Here are the statistics I found from philpapers about the beliefs of people with philosophy PhD’s:

Meta-ethics: moral realism or moral anti-realism?
Accept or lean toward: moral realism 1017 / 1803 (56.4%)
Accept or lean toward: moral anti-realism 511 / 1803 (28.3%)
Other 275 / 1803 (15.2%)

God: theism or atheism?
Accept or lean toward: atheism 1257 / 1803 (69.7%)
Accept or lean toward: theism 295 / 1803 (16.3%)
Other 251 / 1803 (13.9%)

http://philpapers.org/surveys/results.pl?affil=Philosophy+faculty+or+PhD&areas0=0&areas_max=1&grain=coarse

My point is merely that there is something that philosophers can find quite reasonable about moral realism, even if they are atheists. I realize that you are not a moral realist, so you don’t believe in such things.

You think that anti-realism has been doing a good job so far, but I’m not sure what that means. People might act ethically for the most part even without morality. It might be that some people don’t need morality at all. They might just have instincts to be helpful.

What is more likely is that those in power coerce others into being moral something like Hobbes’s political philosophy describes. Those in power can be free to be immoral while everyone else is a slave with little choice but to follow the rules.

I actually think that it is moral realism that has been doing a good job. I think we realize that other people really do matter. Existence seems like a good thing and we realize that is true for others as well as ourselves. We realize that pain is bad no matter who experiences it, and so on. Moral realism is taken to be “common sense” by most people, but it might be deluded. That is a possibility that I can’t deny.

You say that you think coercion can be justified for an anti-realist, but how could that be? It might be justified by some moral standards you commit yourself to, but you are then going to force your commitments onto others. That doesn’t sound quite right.

Mark,

I’m not sure what to think about what you are saying. I’m sure that we can simulate morality without moral realism. We might be able to say what determines right and wrong without really knowing what “right” or “wrong” even means. Utilitarians could probably tell us something about right and wrong without telling me why I “should” be a utilitarian.

Mark Sloan · 31st May 2010 at 7:23 am

Tim,

I think the light has finally dawned for me. Let me take the liberty of paraphrasing what I now understand you are saying about moral2.

Moral2 is defined based on what moral behavior ‘ought’ to be in the sense of the ‘best’ definition for a social contract that groups or individuals can, based on rational choice, decide to adopt, accept the burdens of, and enforce.

You still have to provide supporting arguments why it should be the basis of the ‘best’ social contract that people might rationally choose to accept the burdens of, but that is a different discussion.

My misunderstanding was I thought you were planning to defend moral2 as what moral behavior ‘ought’ to be based on some source of justificatory force beyond reason for accepting its burdens (what I refer to as the elusive magic ‘oughts’).

Just as a suggestion, it would have been clearer to me if you had framed moral2 as your proposal for the ‘best’ basis of a social contract adopted by rational choice. (Sam Harris could have avoided a lot of needless flack if he had been clearer on this point – but I don’t know if that is what he really thinks or not.)

Actually, now that I think I understand moral2, I am prepared to argue that moral3 is a ‘better’ “definition (of morality) for a social contract that groups or individuals can, based on rational choice, decide to adopt, accept the burdens of, and enforce.” Major parts of that argument include 1) higher consistency with existing moral intuitions and Rawls’ reflective equilibrium criteria for moral action, 2) higher levels of emotional justificatory force for accepting moral3’s burdens due to that consistency, 3) a firm foundation in modern moral philosophy in terms of meeting Rawls’ reflective equilibrium criteria better than any other definition of morality, and 4) a unitary, universal, provisionally ‘true’ definition based on what moral behavior ‘is’ based on the normal methods of science.

Mark

Mark Sloan · 31st May 2010 at 7:27 am

James,

With regard to moral realism, my hypothesis claims to describe the underlying, necessary characteristics of almost all moral intuitions and cultural moral standards as a matter of science. So these characteristics of moral heuristics are, in fact, just as ‘real’ as game theory and the rest of science. Thus I am proposing a robust form of moral realism in terms of THESE CHARACTERISTICS being real.

But in this view, moral intuitions and diverse and contradictory cultural moral standards are only heuristics selected for a specific purpose. To be selected as heuristics, they only have to work, on average and in the long term, better than any available, competing heuristics. Some heuristics will produce more benefits for the group than others. Some may produce benefits for the group in some cultures, but not in other cultures. Since cultural standards in particular are diverse and contradictory, it may often make no sense to call cultural moral standards right or wrong as moral absolutes. Therefore, with regard to cultural standards, science cannot provide a source of absolute moral realism in terms of whether specific acts are always moral or immoral.

Also, the science of what moral behavior ‘is’ tells us nothing about the size of the group that ‘ought’ to benefit from cooperation and who (in out-groups) it might be OK to exploit by that cooperation. This comes down to who is in the in-group and who is in the out-group. Is the in-group just relatives? Just men? Just certain races? Or should the in-group be every conscious being? Science also does not tell us if we should prefer moral heuristics that produce more benefits to those that do not.

Fortunately, people are able to make rational choices about who is in the in-group that will receive the benefits of cooperation and rational choices to advocate cultural moral standards that produce more benefits than competing cultural moral standards. So, as a rational choice, people can still define criteria for ranking the morality of cultural moral standards. Thus the implied definition of moral behavior is also far from being describable as just moral relativity concerning cultural moral standards.

The above may be more than a little confusing. Perhaps because, so far as I know, there is no generally accepted category in moral philosophy (in terms of moral realism) for such a hypothesis and implied definition of moral behavior.

Mark

James Gray · 31st May 2010 at 5:47 pm

Mark,

The main idea here is two meanings of morality in which one meaning might be involved with moral realism and the other is not. Tim seems to think that religion claims to have an advantage with regards to moral realism, but not anti-realism. I’m not convinced that the whole religious debate should necessarily require atheists or secular philosophers to accept moral2 and reject moral1 (moral realism.)

You said that “these characteristics” are “real.” That still doesn’t tell me that you are talking about moral realism. If you are, then you should be able to tell me how your definition relates to the issues that I have brought up against anti-realism. Are we going to be “moral” only insofar as we commit ourselves to some standard? Or is there really one “better” way to do things?

I understand that “better” can involve intrinsic value. There can be something of real value that can be increased. (Or something of intrinsic disvalue that can be decreased.)

Moral realism isn’t usually good at defining “morality” because such definitions tend to be circular. Morality is about doing “good” things, and “good” things increase “intrinsic value” and “intrinsic value” is “good” and so on.

Mark Sloan · 1st June 2010 at 1:57 pm

James,

I appreciate your patience. My spotty, self-taught background in philosophy sometimes inhibits the effectiveness of my communication. I’ll attempt to clarify the points I am trying to make.

Tim has proposed two classes of definitions of moral behavior:

moral1: of or relating to the facts concerning right and wrong, etc.
(implying a universal moral realism)

moral2: of or relating to serving (or refraining from undermining) the interests of others
(an underlying characteristic of all moral behaviors; moral standards that share this underlying characteristic may be quite diverse and even contradictory)

I agree that “religious debate should (not) necessarily require atheists or secular philosophers to accept moral2 and reject moral1 (moral realism.)”

But depending on what the basis is of definition moral2, I will argue that moral2 can include a moral fact (it can have an aspect of moral realism).

First, assume moral2 is a definition chosen by a group to be the basis of a social moral contract. They have chosen it as a rational choice, the choice expected to best meet their needs and preferences. In this case, the statement about the underlying universal characteristic of moral behavior cannot be objectively true or false. It is just the definition of morality some people expect to best meet their needs and preferences. Then moral2 would not include any elements of moral realism. (Tim, is this the way you are thinking of moral2?)

Second, assume moral2 reveals the necessary underlying characteristics of what almost all moral intuitions and cultural moral standards actually ‘are’ as determined by the normal methods of science. Then moral2 could be an objectively ‘true’ fact about what moral behavior ‘is’ and moral2 can have an aspect (its underlying characteristic of moral behavior) of moral realism.

As a third possibility, assume moral2 reveals the necessary underlying characteristics of what almost all moral intuitions and cultural moral standards ‘ought to be’ through discovery of some source of justificatory force beyond reason for accepting the burdens of this particular definition of morality. While they have inspired a lot of words in moral philosophy, I personally have little interest in definitions of morality based on elusive ‘magic oughts’ and will leave it to other people to decide if such definitions could be called moral facts.

My position is that it is possible to actually show what the underlying characteristics of almost all moral intuitions and cultural moral standards ‘are’ using the methods of science as a moral fact. Further, the underlying characteristics of what almost all moral behaviors ‘are’ (a moral fact) can be shown to be as I have suggested as moral3:

moral3: of or related to increasing the benefits of cooperation within a group by motivating unselfish behavior

If Tim is thinking of moral2 only as a rational choice for the basis of a social moral contract, then moral2 has no attributes of moral fact. I am claiming moral3 differs in that 1) it can be defended as a moral fact using the normal methods of science, and 2) it is arguably a more rational choice (one that can be expected to better meet needs and preferences) as the basis of a social contract morality.

In response to “Are we going to be ‘moral’ only insofar as we commit ourselves to some standard? Or is there really one ‘better’ way to do things?”, I would reply regarding moral3 as follows:

I know of no justification beyond rational choice for choosing to make it the basis of a social contract morality. So yes, we are going to be moral only insofar as we commit to this standard. (But I can argue that it is usually the most rational choice to accept the burdens of moral3 even when, in the heat of the moment, individuals expect doing so will be against their best interests.)

I am not aware of any way to show that any given morality is the best that could ever be. It is conceivable that some other morality than moral3 might be a MORE rational choice (one expected to better meet needs and preferences). So I cannot say that it is a moral fact that moral3 represents the best definition of morality.

What I can claim though, is that it is a provisionally ‘true’ fact that moral3 (or something close to it) describes the characteristics of what almost all moral intuitions and cultural moral standards ‘are’ as determined by the normal methods of science.

I am not sure this is clearer to you or Tim, but I think it has helped me clarify my own thinking.

Mark

James Gray · 1st June 2010 at 3:10 pm

Mark,

It might be a good idea to try to give some examples. There are moral realists who think something like a science of morality is possible, but it is also possible that an anti-realist could commit themselves to a moral realist-inspired standard.

When someone judges a talent show, they can use certain objective standards to determine the winner, but I don’t think they believe in “realism” insofar as who ought to “really” win a talent show.

Tim Dean · 1st June 2010 at 6:33 pm

Mark, just to clarify my moral anti-realism position: I’d suggest that talk of ‘real’ moral properties is similar to talk of ‘phlogiston’ or ‘elan vital’. Both of these were proposed real properties of things – of flammable and living things, respectively – but both were ultimately found to be unnecessary to explain the existence of the things they’re trying to explain, i.e. flammability and life.

Otherwise, I don’t entirely disagree with much of what you’re saying. I believe there are facts about cooperation, and there are facts to the effect that ‘if people wish to pursue their interests, then cooperation is a good way to go about doing so’. I’d also suggest there are facts about moral systems throughout history and today that suggest most of them hinge on promoting cooperation.

But, this could all be true, but none of it might tell us what we ‘ought’ to do – such as whether we even ‘ought’ to pursue our interests in the first place, or whether any existing moral systems are ‘really’ moral. James thinks there are further facts about the world that do provide us with those foundational ‘oughts’. I disagree. Hence, I build up my moral anti-realism without those moral facts by assuming that people want to pursue their interests, and if so, here’s a way to go about it…

Mark Sloan · 2nd June 2010 at 8:31 am

James,

“There are moral realists who think something like a science of morality is possible, but it is also possible that an anti-realist could commit themselves to a moral realist-inspired standard.”

I understand this to say (and I agree) that 1) a group of anti-realists could commit themselves, as a rational choice expected to best meet needs and preferences, to a science-derived definition of morality as the basis of a social contract morality, and that 2) a group of moral realists (moral realists in the sense of believing there are facts about the underlying characteristics of almost all moral intuitions and moral standards) could also commit themselves, as a rational choice, to the same science-derived definition of morality as the basis of a social contract morality.

First, let’s look at the similarities in the two groups’ beliefs. Both groups should agree (in my opinion) that logic and science 1) provide no justificatory force BEYOND RATIONAL CHOICE for committing to this particular morality (or, as Tim says, no ‘oughts’ that people ‘should’ commit to this morality, or even that we ‘ought’ to act to meet our needs and preferences) and 2) provide no basis for claiming this science-based morality is necessarily the best possible morality for meeting human needs and preferences (the best rational choice).

Where the groups would disagree is that the moral realists think that, since there are moral facts about the underlying characteristics of almost all moral behaviors based on the normal methods of science, this particular morality has aspects of moral realism. The anti-realists reject this notion, but, so far as I know, only because it fails to meet some aspect of their preferred definition – perhaps one or both of the two shared beliefs above?

I have been thinking that to be a moral realist, it is only required that a person believe there are facts about moral behavior. And I have been assuming that facts from science qualify.

I have no argument with your talent show example, and suspect we are only talking about the preferred definitions of moral realist and anti-realist.

Finally, I am happy to provide examples, but what kind are you thinking of that might be most useful? Are those examples of the “almost all moral intuitions and cultural moral standards” that share my two claimed necessary characteristics (unselfishness in the short term and increased benefits from cooperation) or something else?

Mark

Mark Sloan · 2nd June 2010 at 8:35 am

Tim,

We agree that logic and science provide no basis for claims of ‘oughts’ with justificatory force beyond rational choice that people ‘should’ commit to any morality or even that we ‘ought’ to act to meet our needs and preferences.

But I expect we also agree that “Logic and science provide no basis for claims we should NOT act in ways expected to best meet our needs and preferences.”

Motivated more by biology than logic, my plan (and I expect most people’s plan) going forward is to continue to rationally choose to act in ways expected to best meet my needs and preferences.

Ok, what does this mean with respect to accepting any burdens of acting morally – such as when we expect that acting morally (in accordance with a social contract for instance) will not meet our individual needs and preferences?

I understand that both mainstream secular moral philosophy and science support the idea that the kind of morality most likely to meet human needs and preferences will be a social contract morality. That is, a morality agreed on and committed to by the members of a group (often agreed on and committed to informally or committed to simply by being born into a group).

If chosen rationally by members of a group, that morality will be the one expected to best meet, on average, the needs and preferences of the individual. A social contract type of morality can be argued to best meet the needs of the individual because, on average, accepting its burdens when doing so is against the individual’s best interest is more than compensated for by the large synergistic benefits of cooperating in a group.
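The “more than compensated” claim above can be sketched with a toy public-goods game. This is a minimal illustrative sketch; the cost and synergy numbers are my own assumptions, not values from this discussion:

```python
# Toy public-goods game: each cooperator pays a fixed cost into a pool;
# the pool is multiplied by a synergy factor and shared equally by all.
def payoff(n_players, n_cooperators, cooperates, cost=1.0, synergy=3.0):
    pool = n_cooperators * cost * synergy
    share = pool / n_players
    return share - (cost if cooperates else 0.0)

# A lone defector among cooperators still does best in the short term...
lone_defector = payoff(10, 9, cooperates=False)   # 2.7: keeps the cost, gets a share
all_cooperate = payoff(10, 10, cooperates=True)   # 2.0: pays the cost (the "burden")
all_defect = payoff(10, 0, cooperates=False)      # 0.0: no pool, no benefit

# ...but everyone in an all-cooperating group beats everyone in an
# all-defecting group: the burden is more than compensated by the
# synergistic group benefit.
assert all_cooperate > all_defect
```

The same numbers also reproduce the short-term temptation that motivates the social-contract framing: the lone defector’s 2.7 beats the cooperator’s 2.0, so it is commitment to the contract, not immediate self-interest, that sustains cooperation.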

Where I am sure we disagree is my claim that my moral3 can be expected to better meet needs and preferences for the individual as the basis of a social moral contract than moral2.

Thanks again for this opportunity, rare for me, to discuss these issues.

Mark

James Gray · 2nd June 2010 at 10:35 am

Mark,

Some moral realists think that the scientific method could give us moral facts. I discussed one of the earlier arguments for this position here: http://ethicalrealism.wordpress.com/2009/06/21/moral-explanations-by-nicholas-l-sturgeon/

Part of “moral realism” is the claim that morality is “irreducible” to other sorts of scientific facts. If morality is “nothing but” rational self interest as Thomas Hobbes thought, then moral realism is false and we wouldn’t have to talk about morality at all.

Moral realism can also be based on “reason,” but it might require a different sort of reason. Instead of being rational to do something to achieve a goal, something might be rational because it is objectively better, for example.

Immanuel Kant was very interested in an irreducible sort of “moral reason” but for him “better” would not just have to do with intrinsic value. He thinks there are other factors involved. (He thought hypocrisy is wrong, which even Utilitarians would agree with, but he seemed to leave it open what kind of reasons we could give.)

I am not sure exactly what you think moral3 is supposed to be apart from moral1 and moral2. You could give an example about how each definition will say something different given a scenario, eg. involving murder.

Mark Sloan · 2nd June 2010 at 3:38 pm

James,

Thanks for the reference on moral realism and comments about Kant and the comments about moral realism being irreducible to other sorts of scientific facts. I’ll have a read and consider the implications. I recognize I may have wandered out of my depth in expressing an opinion about moral realism.

I have two examples where common moral intuitions are consistent with moral3 and contradict moral1 and moral2. In review,

moral1: of or relating to the facts concerning right and wrong, etc.
moral2: of or relating to serving (or refraining from undermining) the interests of others
moral3: of or related to increasing the benefits of cooperation within a group by motivating unselfish behavior

(Moral1 says there are facts concerning whether specific actions are right and wrong; its moral standards cannot be contradictory. Moral2 and moral3 propose necessary and sufficient underlying characteristics of all moral behaviors; moral standards that share either the underlying characteristics of moral2 or moral3 may be quite diverse and even contradictory.)

Discriminating Example 1: A small-bodied bystander decides not to push an innocent, unsuspecting, very large man into the path of a runaway trolley (which the small-bodied person knows would stop the trolley but kill the man) in order to save the lives of 5 innocent (but deaf) men who are working on the tracks and will all be killed if the body of the large man is not used to stop the trolley. The small-bodied person knows that if he jumps in front of the trolley himself, the trolley will not be stopped and both he and the 5 innocent workmen will be killed. In some experiments, over 70% of naïve test subjects (people who were not philosophy majors) have stated that their moral intuitions were that deciding not to push the large man in front of the trolley was the most moral action.

These common moral intuitions (over 70%) for this case are fully consistent with moral3. Knowledge that a bystanding large man had been pushed to his death in an uncertain attempt (in the minds of everyone else) to save five people would reduce trust in other people in the community, even if it worked, and would thereby reduce the levels and benefits of cooperation more than the deaths of the five workmen would. Also, the small-bodied person acted unselfishly in the sense of accepting the guilt he may feel for his cowardice in not risking action to save the five workmen.

But these common moral intuitions directly contradict moral2 which would, I assume, advocate killing the large man in order to serve the interests of the five workmen which, on a Utilitarian basis at least, should outweigh the interests of the large man.

To show how these common moral intuitions also contradict moral1, I will assume two different fixed facts about killing. (I must make assumptions since moral1 doesn’t state what its facts about moral behavior are.)

Those assumptions are either 1) it is always wrong to act to cause the death of another person or 2) it can be right to cause the death of another person if that action saves many more people (a Utilitarian body count approach). If assumption 1) is correct then moral1 is contradicted by common moral intuitions concerning, at minimum, killing in time of war and by police. If assumption 2) is correct, then moral1 is contradicted by the common moral intuitions concerning the above trolley case.

The only assumption about a fixed fact concerning right or wrong that I can think of that would allow moral1 to not be contradicted would be if that fixed fact was moral3. But that would make moral1 moot.

Discriminating Example 2: Bill Gates starts a company called Microsoft and sells software for PCs that enables people around the world to communicate and cooperate together in ways never before possible and, by doing that, serves the interests of many more people than if Bill Gates had spent his life doing something else, perhaps writing bad poetry.

Bill Gates’s business success meets moral2’s requirement of serving the interests of many people, but I claim it is not morally praiseworthy by common moral intuitions (though it is socially admirable), and moral2’s definition is contradicted. Bill Gates became morally praiseworthy as a public person, by common moral intuitions, only when he began to give away his fortune and his actions became consistent with the claimed critical characteristic of moral3: unselfishness used to increase the benefits of cooperation, in Bill Gates’s case in particular with needier segments of the world’s people.

The above can be readily criticized by “What makes common moral intuitions capable of determining if definitions of morality are contradicted?” This is just my choice because of my interests in what moral intuitions ‘are’ as a matter of science. I am open to other suggestions, even hypotheses about what moral behavior ‘ought’ to be, but expect the answers relative to these examples will not change.

I apologize for the length of this post. I hope it was at least somewhat interesting.

Mark

James Gray · 2nd June 2010 at 4:33 pm

Mark,

The examples seem a little overcomplicated. If moral realism is correct, then promoting cooperation and unselfish behavior would make a lot of sense on utilitarian grounds; but if moral realism is false, it might be unjustified manipulation similar to psychological conditioning or brainwashing, which seems disrespectful.

I’m not convinced that intuitions involving scenarios like the trolley case tell us much. The idea is to eliminate real life and give a fantasy scenario. You know 100% ahead of time that throwing the guy in front of the train will save lives, but our “intuition” might not take that into consideration. In real life throwing someone in front of a train sounds like a good way of getting even more people killed.

There was another example of the scenario where most people would be willing to flip a switch to kill one person instead of several children. The behavior is the same and the intuitions seem contradictory. That isn’t surprising because of how stupid throwing a guy in front of a train would be in real life.

Also, some of our “intuitions” are much more reliable than others. We all have an intuition that certain acts of murder are wrong, and we are pretty sure that we are right. We will not feel so certain about the right thing to do in many of these scenarios.

So far moral3 sounds like a secondary moral interest based on moral2. Lots of moral goals could be based on moral2. We should want people to be cooperative and unselfish based on moral2.

Moral realism could be used to justify moral2 and moral3 on a rational ground, but the definitions don’t seem mutually exclusive.

Mark Sloan · 3rd June 2010 at 7:06 am

James,

The purpose of the examples is to demonstrate that some common moral intuitions contradict moral1 and moral2 but are consistent with moral3. Considering the confusion over the last few thousand years on this subject, I don’t feel apologetic about being a little complicated. Of course simpler examples are usually better; these are just the ones I have thought of.

Your example of tested common moral intuitions that it would be moral to throw a switch to save five people by killing one person is also consistent with moral3. Throwing the switch is much less likely to make people in the future fearful of cooperating with other people compared to the example of being pushed, unawares, in front of a trolley. So moral3 is fully consistent with and has explanatory power for apparent contradictions between these trolley story examples.

Your point that pushing the large man in front of the trolley may be intuitively immoral because of its low probability of success (in reality) is correct. The investigators tried to get around this problem by emphasizing to test subjects that saving the five workmen was 1) a certainty and 2) killing the large man was the only way to save the five workmen. I don’t know if just making those claims to the test subjects was sufficient to accomplish that task. In publications, the authors don’t mention any doubts about the subject.

I have been trying to think of an improved trolley scenario to clarify, by empirical test, the level of contradiction with common moral intuitions of moral3 and moral2, but so far no success. Any suggestions would be welcome.

How about the Bill Gates example? I can’t think of any basis for claiming that Bill’s actions (in general) were morally admirable while he was creating his empire by serving the interests of his customers better than anyone else. No matter how well he served the interests of his customers, moral3 claims he was not acting morally unless unselfishness was involved. Thus moral2, which does not appear to require unselfishness, is contradicted. Or is unselfishness somehow implied in moral2 in a way I am not recognizing?

I am puzzled by the statement “If moral realism is correct, then promoting cooperation and unselfish behavior would make a lot of sense on utilitarian grounds, but it might be unjustified manipulation similar to psychological conditioning or brainwashing if moral realism is false, which seems disrespectful.”

Once we have fully defined what is meant by moral realism then its truth is just an empirical question. (I now understand that, in common usage in philosophy, a lot more may be involved than just the existence of moral facts like the underlying, necessary characteristics of almost all moral intuitions and moral standards as claimed in moral3.)

But how could the question of moral realism’s truth affect a group’s rational choice to adopt, say moral3, in preference to some other definition as the basis for a social contract morality? If rational choice is defined as the choice that is expected to best meet needs and preferences then whether moral realism is true or false is relevant only to the extent this knowledge changes the expectations of a morality’s ability to meet those needs and preferences.

I concluded long ago that moral realism of type moral1 very likely can have no basis in reality and that the only morality that could reliably be based on facts is the moral3 kind (what moral behaviors ‘are’ determined by the normal methods of science). If this is true, whether definitions like moral3 are called examples of moral realism or not has no practical significance.

Thanks for your comments,

Mark

James Gray · 3rd June 2010 at 9:38 am

Mark,

I don’t understand the trolley example. Most utilitarians think that it is just a mistake not to kill one person to save others when the outcomes are certain. The problem is just that in real life it isn’t certain.

Bill Gates did a lot of bad things. He didn’t “just serve his customers best.” Linux was probably a better operating system but it has never been incredibly popular. Most people tell me that they prefer Windows only because it lets them play video games.

Even so, I don’t see how the unselfishness idea is going to be much different in moral1 and moral2. They will all promote unselfishness and the idea that other people count.

Moral realism can say that everyone really does matter. Cooperation and unselfishness could be justified precisely because of moral realism.

Let’s say that a stranger comes to town and has a headache. I could give him an aspirin to help him out. If moral realism is correct, then this action makes perfect sense because other people matter. Selfishness would assume that only I matter.

An anti-realist might have a hard time explaining why I should give an aspirin to a stranger like that. Cooperation and unselfishness would be something I want other people to do, but that doesn’t make it rational for me to do it as well. In that case I should agree with the social contract just because I want everyone else to agree with it. But I should also break the social contract as soon as it would benefit me to do so, all things considered.

So, it is possible to commit myself to moral3 as an anti-realist, but it isn’t necessarily “rational” to do so given the fact that the social contract is irrational when it doesn’t benefit me to agree to it.

It might also make sense to try to get other people to agree with moral2 or moral3 as an anti-realist, but that might encourage others to do irrational things, and it would therefore be disrespectful.

I have my own argument for moral realism. The main idea is that pain is part of reality, and pain is bad. I have no reason to think that other people’s pain isn’t bad. It makes perfect sense to think pain is really bad no matter who experiences it. We don’t merely dislike pain. We dislike it because of how it is experienced. The argument was given in detail here: http://ethicalrealism.wordpress.com/2009/10/07/an-argument-for-moral-realism/

I think that some people decide that moral realism can’t have any basis in reality because of some strange assumptions, such as an idea that our subjective experience is somehow unreal. The disvalue of pain is part of our experience.

Mark Sloan · 4th June 2010 at 12:14 pm

moral1: of or relating to the facts concerning right and wrong, etc.
moral2: of or relating to serving (or refraining from undermining) the interests of others
moral3: of or related to increasing the benefits of cooperation within a group by motivating unselfish behavior

James, perhaps a different approach on my part would be useful. How about the following:

You (as well as Tim) have said that moral3 is included as part of at least moral2. I agree. Where we doubtless disagree is that I claim moral2 includes acts that are not morally admirable (are morally neutral).

For example, an individual participating in trade, businesses, and government serves the interests of others: trading partners, customers, and other citizens. Therefore, these activities should be included under moral2 even if the interests of the individual in question are simultaneously being served. Moral2 does not appear to forbid the interests of the individual in question being served even much more than the interests of others.

But a person can engage in trade, start a business, or participate in government as morally neutral acts (as when unselfishness is not involved). So not all acts described by moral2 are, in fact, morally admirable and the definition is contradicted.

Does that make sense to you?

I have started working through http://ethicalrealism.wordpress.com/2009/10/07/an-argument-for-moral-realism/ but am not ready to comment. How about when I do, I comment on your website?

James Gray · 4th June 2010 at 5:53 pm

I agree that moral2 is not only about admirable acts. It is also about “wrong” actions insofar as it is wrong to undermine the interests of others; moral3 lacks the wrongness element. Either way, the main motivation for distinguishing moral1 from moral2 is just to show that it is possible to understand morality in an anti-realist sense.

Feel free to comment on my website whenever you have anything to say or ask.

Mark Sloan · 5th June 2010 at 8:19 am

James, my point was that moral2 is not a good definition of morality because it is contradicted by common understandings of what is and is not moral.

The contradiction arises because moral2 would classify as moral an individual participating in trade, businesses, and government which serves the interests of others (trading partners, customers, and other citizens) even if the individual’s interests were being served much more than the interests of others. This classification as moral is contradicted by common understandings of what is moral. The chief flaw is that no requirement for unselfishness is included.

In moral3, rather than making it a more complicated statement, I considered it implied that acts are immoral if they decrease the benefits of cooperation.

James Gray · 5th June 2010 at 10:01 am

Utilitarians tend not to care about whether the behavior is motivated by selfishness as long as the behavior promotes the interests of others.

The moral intuition that businessmen are not moral for being businessmen could be based on virtue ethics. A utilitarian could argue that we call someone “virtuous” when the behavior requires unusual degrees of moral skill.

Moral2 is not meant to explain everything about morality or to be maximally intuitive. It is just a way to differentiate between moral realists and an antirealist sort of morality. We want to know “why” an action is “worthy” (moral1) and if an action helps accomplish “worthy” goals (moral2).

Moral2 doesn’t talk about what actions are praiseworthy or blameworthy and I think that is the fault you are finding in it.

I suppose you could redefine moral2 in all sorts of ways, but the main idea right now is that everyone’s interests count as a worthy goal to promote. If something other than the interests of beings matters, then we might have to change the definition. For example, if we found out that not all interests are worthy, we could count happiness and pain instead of all interests.

Mark Sloan · 7th June 2010 at 6:55 am

Tim, while checking to see if you had posted anything new, I reread your posts and some of my previous comments on “two definitions of morality”. I believe I misconstrued what you were saying at least a few times.

To try to understand how close our positions might be, would you be interested in comparing the kinds of moral statements we believe we are making?

To assist this discussion, I propose six kinds of moral statements as listed below.

Is your intent to propose “Rational argument ‘Ought’ moral statements” (kind 2), “Analytic ‘Is’ moral statements” (kind 5), “Rational choice moral statements” (kind 6), some combination, or something else? (So far as I know, none of the example moral statements I provide of the different kinds represent your position. Only the example for kind 5 represents my position.)

My intent is to propose an “Analytic ‘Is’ moral statement” (kind 5) which I can argue turns out to also be a “Rational choice moral statement” (kind 6) relative to all alternatives I am aware of.

Prescriptive statements:

1) Divine command ‘Ought’ moral statements such as “God said: ‘Do unto others as you would have them do unto you’.”
2) Rational argument ‘Ought’ moral statements such as “Rational argument TBD reveals a source of justificatory force beyond enlightened self interest binding you to ‘Do unto others as you would have them do unto you’.” – No such To Be Determined argument is presently generally accepted as ‘True’ so far as I know –
3) Emotional ‘Ought’ moral statements such as “I and other people feel pain and don’t like it. Therefore, neither I nor other people ‘ought’ to inflict pain on others”.

Descriptive statements: (Can Be Objectively ‘True’, ‘False’, or Indeterminate as a matter of science)

4) Simple descriptive ‘Is’ moral statements such as “Versions of the Golden Rule are found around the world. Do unto others as you would have them do unto you is a common version of the Golden Rule.”
5) Analytic ‘Is’ moral statements such as: “Virtually all cultural moralities and moral intuitions are heuristics for increasing, on average, the benefits of cooperation for the group and are unselfish at least in the short term.”

Rational choice: (A “Rational choice” is the choice expected to best meet needs and preferences.)

6) Rational choice moral statements such as: “Choosing ‘moral acts increase human well being’ as the basis of a social contract morality will better meet a group’s needs and preferences than any other known basis.”

James Gray · 7th June 2010 at 1:31 pm

I think Tim doesn’t think we need “ought” statements at all. If anything, moral “oughts” are the same as nonmoral oughts. They are just ways to accomplish goals. If you want to end your hunger, then you “ought” to eat.

My own view of “ought” is a very open one. I don’t think it really means a whole lot, but it does (at the very least) reflect our intuitive day to day interactions insofar as we take it to be “rational.” “Ought” could mean little more than “promotes intrinsic value.” We seem to understand that increasing intrinsic value is rational if doing such a thing is possible.

Tim Dean · 7th June 2010 at 6:15 pm

Interesting taxonomy.

My initial thoughts are that I’m not in favour of 1) for fairly obvious reasons.

Nor do I accept 2) – such as Kantianism – because I don’t believe there are any hearty transcendental values that bind us irrespective of our interests, whatever they may be. And even if there were, I don’t see how we could either know them, or how they would motivate us to act morally.

3) is interesting – not because I’d endorse a prescriptive emotivism or sentimentalism – but because I do think (like Hume) that emotions are one of the prime motivating factors behind moral behaviour and forming moral judgements. But just because we all feel x is good doesn’t necessitate that x really is good.

4) is straightforward enough, at least descriptively – it’s what moral psychologists and anthropologists study. But to take 4) as having prescriptive implications would be to commit (one version of) the naturalistic fallacy.

I’m not sure how 5) is ‘analytic’. Nor can I see how it differs from 4), except in detail. Thus, I would also be wary of drawing prescriptive implications from it.

I’d probably agree with 6), but that leaves open the question of whether meeting an individual’s or group’s needs or preferences is itself good (in the moral1 sense of ‘good’).

My response to this question would be that there is no ultimate or objective answer to this question. However, as a matter of fact, people will pursue their interests (whatever they might be), and if they want to do so, then engaging in a social contract is a prudent (or ‘good’ in the moral2 sense) thing to do. There’s no binding reason to do so, but I’d expect enough people would accept that they want to pursue their interests, and would accept the instrumental value of a social contract, to encourage others to join in – perhaps even force them to join in. And I’d suggest this is pretty much what evolution has primed us to do, if imperfectly.

Mark Sloan · 8th June 2010 at 1:34 pm

Tim, we do appear to have substantial agreement. I will clarify some points and then ask three questions that I hope you will have time to consider.

Three clarifications:

In 5), “Analytic” was intended to have only a vernacular meaning. After looking up the philosophical usage of “Analytic”, I see my ignorance of the peculiar usages in philosophy of common words has again led me to step in a verbal cow pie (which was the reason you questioned the term?). For now, I’ll call category 5) “Derived ‘Is’ Moral Statements”.

A single moral statement can belong in multiple categories. For instance, my work in moral philosophy is strictly limited to 5) “Derived ‘Is’ Moral Statements” that also can be argued to be 6) “Rational Choice Moral Statements”.

Neither category 5) nor category 6) statements are necessarily binding, except as ought judgments based on enlightened self interest.

Three questions:

Your last paragraph matches my position almost perfectly with the exception that I don’t understand “My response to this question would be that there is no ultimate or objective answer to this question.”

You seem OK with the idea that moral statements which are “Rational choice moral statements” could be useful and of interest. But are you saying your favored candidates for a rational choice moral statement would not also belong in one of the other five categories? If so, could you say what other category of moral statement would be needed to capture it?

In the hope you can tolerate hearing about it one more time, I’ll use my candidate hypothesis concerning what virtually all moral behaviors ‘are’ (a derived ‘Is’ moral statement) as the example illustrating my last question. That candidate hypothesis is: “Virtually all cultural moralities and moral intuitions are heuristics for increasing, on average, the benefits of cooperation for the group and are unselfish at least in the short term.” I can argue that this hypothesis, as a matter of science, reveals the underlying, universal, necessary principles of virtually all of what people call ‘good’ actions toward other people. (Note this understanding of what people call “‘good’ actions toward other people” also does not imply any ‘oughts’ beyond enlightened self interest.)

I am also claiming the implied definition of morality is the one that most group and individual practitioners will judge to be more ‘useful’ (defined as more effective, on average, at meeting their needs and preferences) than available secular alternatives. Any other characteristics or the answers to other philosophical questions about such a definition are irrelevant to me to the extent they do not reduce the definition’s utility. For instance, I expect that the definition’s utility relative to alternative secular definitions will not be reduced if the question “Is meeting an individual’s or group’s needs or preferences itself ‘good’?” is never answered.

Do you know why moral philosophers continue to put such apparently little effort into producing definitions of moral behavior valued for their ‘utility’ at meeting people’s needs and preferences?

This is particularly difficult to understand when the most common mainstream philosophical position, as I understand it, is that no source of justification for behaving morally, beyond enlightened self interest, has ever been found and likely none exists.

James Gray · 8th June 2010 at 1:53 pm

This is particularly difficult to understand when the most common mainstream philosophical position, as I understand it, is that no source of justification for behaving morally, beyond enlightened self interest, has ever been found and likely none exists.

This is false. Most philosophers are moral realists, and some anti-realist philosophers, such as R. M. Hare, are Kantians. Very few philosophers believe what you just said. I already gave some of the relevant statistics earlier:

http://philpapers.org/surveys/results.pl?affil=Philosophy+faculty+or+PhD&areas0=0&areas_max=1&grain=coarse

“Definitions” are also not the main concern of philosophers, and they are pretty arbitrary. There are utilitarians who think they can help us all live better lives without moral realism, like Hume.

Mark Sloan · 8th June 2010 at 5:35 pm

James, can you provide any examples (that are currently generally accepted among mainstream philosophers) in the literature which identify any source of justification for behaving morally, beyond enlightened self interest?

I have limited access to appropriate libraries. Perhaps you might know of (or could rapidly find) some examples in the on-line literature, perhaps the Stanford Encyclopedia of Philosophy?

Tim Dean · 8th June 2010 at 6:03 pm

I agree with James that the mainstream position amongst contemporary philosophers appears to be moral realism of some sort. As an anti-realist, I’m a bit of an outsider, but I’m used to that. Probably the best place to start for an overview of realist positions and advocates is the ever-mighty Stanford Encyclopaedia of Philosophy.

As for your earlier questions: I direct you back up to the top of this page and my original post. What I’m suggesting is that there is no objective basis for moral1, i.e. for the truth/falsity of moral facts that are objective and that compel our behaviour regardless of our wishes. That’s why I’m an anti-realist – I don’t believe objective moral facts exist.

However, this doesn’t mean we can’t still have moral2, and I agree that that will be justified by rational self interest.

One way to look at this is through a bastardisation of G.E. Moore’s Open Question Argument (OQA). Basically, you keep asking “why is X good” until you reach some ultimate, objective, justifiable foundation.

So, given what you’ve stated – i.e. that morality is basically “heuristics for increasing, on average, the benefits of cooperation for the group” (a statement I’d agree with) – I’d ask “why is increasing cooperative benefits good?”

You might give a further answer, such as “that’s what suitably situated rational self-interested agents would choose to do” – and I’d ask “why is doing what a suitably situated rational self-interested agent does good?”

You might then answer that “people desire to pursue their rational self-interest” – and I’d ask “why ought they pursue their rational self-interest?” Etc etc.

Basically, I believe you could go on forever with this line of questioning and never reach a firm, objective foundation that underpins all morality; it’s turtles all the way down. But these kinds of questions are seeking moral1 stuff. So, if we can’t reach moral1, we chuck the lot. No great loss.

Because moral2 can still survive. We can still have reasons for being nice to each other, because doing so advances our rational self-interest. Now, there’s no humdinger reason why we *ought* to pursue our rational self-interest rather than act like irrational baboons, but I’d suggest that, as a matter of fact, virtually all people *will* pursue their interests, and if they do so, then moral2 is a bloody good way of going about it.

Not sure if that makes sense, but thought I’d give it a shot.

James Gray · 8th June 2010 at 7:18 pm

Mark Sloan,

I’m not sure exactly what sort of answer you want. There are different ways to read your question.

Kantianism is one way to justify morality beyond self-interest. Intrinsic value is another way: if x promotes intrinsic value better than y, then x is more justified. Intuition is probably the most widely used tool for justifying moral beliefs and behavior. If x is intuitively rational and not-x is intuitively irrational, then we have some reason to agree that x is rational. We can take certain moral justifications to be true because they are widely accepted as such (and highly defensible), then see if we can derive further rational moral principles. For example, giving an aspirin to another person, when we expect it to help alleviate pain, seems perfectly rational even with no reward expected.

We don’t have to think we can know everything about moral rationality. We can try to understand moral rationality as much as we can based on our limited experience and understanding of morality.

I talk a little about moral justification here: http://ethicalrealism.wordpress.com/2009/10/27/objections-to-moral-realism-part-2-intuition-is-unreliable/

Philosophers justify things in all sorts of ways, and there isn’t one highly agreed-upon way to give moral justification. There are many variations, but there are also certain similarities among them. In general, philosophy must always be intuitive to some extent. There have to be agreed-upon assumptions that we can build off of.

Mark Sloan · 9th June 2010 at 3:35 am

James, my assertion was not that there are no generally accepted justifications for acting morally, advocating others act morally, and punishing people who do not act morally. My assertion, which still stands, is that there are no generally accepted justifications beyond rational choice for acting morally in the specific case when the individual expects that to be against their best interests.

Such a justification would be a required part of any moral statements of my type 2 above: Rational argument ‘Ought’ moral statements such as “Rational argument To Be Determined reveals a source of justificatory force beyond enlightened self interest binding you to behave morally even when it is against your best interests.”

I remember reading that Kant’s argument for being bound to conform to his categorical imperative(s) in such cases was based on the existence of God. My understanding is that none of your examples of justifications of morality are generally accepted as revealing “a source of justificatory force beyond enlightened self interest binding you to behave morally even in cases when it is against your best interests”.

Mark Sloan · 9th June 2010 at 4:27 am

Tim, you have pointed out a perhaps important point where we disagree. Your position is “there is no objective basis for moral1, i.e. for the truth/falsity of moral facts that are objective and that compel our behaviour regardless of our wishes. That’s why I’m an anti-realist – I don’t believe objective moral facts exist.” (James also is claiming that if moral facts exist then we are somehow bound to obey them regardless of our wishes.)

I don’t understand how the existence of moral facts (which I can argue exist) somehow necessitates there being a mysterious source of justificatory force for acting morally (magic ‘oughts’) in cases where acting morally will be against our best interests.

For example, I can argue that the normal methods of science show that the two underlying, necessary characteristics of virtually all moral intuitions and cultural moral standards are that they are unselfish in the short term and increase the benefits of cooperation. This to me is a moral fact (or at least a fact in the sense it is provisionally ‘true’ as a part of science). But there is no source beyond enlightened self interest in science, or anywhere else so far as I know, that binds us to act morally when that will be against our best interests. I see no necessary linkage between there being facts about moral behavior and magic ‘oughts’.

I will go review what the on-line Stanford Encyclopedia of Philosophy says about how the existence of moral facts could necessitate the existence of a source of justificatory force beyond enlightened self interest. Perhaps we are just disagreeing about definitions.

But with my understanding of the relevant definitions, there is no logical conflict in 1) there being objective moral facts and 2) those moral facts being unable to compel our behaviour regardless of our wishes. Note I have not mentioned moral realism or anti-realism here. These terms add an unneeded additional level of concepts to the discussion which appear to me to fog the real issues.

James Gray · 9th June 2010 at 7:41 am

James, my assertion was not that there are no generally accepted justifications for acting morally, advocating others act morally, and punishing people who do not act morally. My assertion, which still stands, is that there are no generally accepted justifications beyond rational choice for acting morally in the specific case when the individual expects that to be against their best interests.

I suppose there is no specific “generally accepted” ANYTHING in philosophy. That doesn’t prove anything, and I don’t see how it is relevant.

Here is the Stanford entry on practical reason: http://plato.stanford.edu/entries/practical-reason/

The idea that the justification of “practical reason” must be found in self-interest is generally a form of Humeanism.

Such a justification would be a required part of any moral statements of my type 2 above: Rational argument ‘Ought’ moral statements such as “Rational argument To Be Determined reveals a source of justificatory force beyond enlightened self interest binding you to behave morally even when it is against your best interests.”

It depends what you mean by “self interest.” Everyone wants people to act altruistically, but the “motivation” for altruistic action could be egoistic. I think that moral realism is the best way to account for altruism, but there are ways to have altruistic action (of some sort) within anti-realism. In particular, we could reward altruistic action and punish actions that harm others.

I remember reading that Kant’s argument for being bound to conform to his categorical imperative(s) in such cases was based on the existence of God. My understanding is that none of your examples of justifications of morality are generally accepted as revealing “a source of justificatory force beyond enlightened self interest binding you to behave morally even in cases when it is against your best interests”.

Obviously most philosophers don’t think that we need God to justify morality. Here is my take on Kant’s argument for faith in God: http://ethicalrealism.wordpress.com/2010/01/28/kants-argument-for-faith-in-god/

I concluded that “At best, Kant showed religious people a reason to believe in a perfect and all powerful God as one way to make sense out of certain moral beliefs. At worst, he failed to prove that we need to assume that perfect moral success is possible.”

My understanding is that none of your examples of justifications of morality are generally accepted as revealing “a source of justificatory force beyond enlightened self interest binding you to behave morally even in cases when it is against your best interests”.

That is quite a statement, but I obviously disagree with it. If you think you can come up with a good argument, go ahead. We should be talking about the truth (or what is probably true given our current information) and not just what is “popular among philosophers.”

James Gray · 9th June 2010 at 7:45 am

Tim is taking “moral facts” to mean “moral realism.” It is possible that moral facts can’t motivate. That is what I argued here: http://ethicalrealism.wordpress.com/2009/11/10/objections-to-moral-realism-part-4-beliefs-cant-motivate/

Tim Dean · 9th June 2010 at 10:42 am

Mark, I think our disagreement (if there is one) is, as James has pointed out, over our usage of terms.

I should qualify that there are typically accepted to be two types of moral ‘facts’: descriptive and normative. The former simply describe the moral beliefs and/or behaviours of some particular group (or even of all particular groups); normative moral facts talk about how we ‘ought’ to behave.

So it might be descriptively true that all cultures believe murder is wrong, but this leaves open the question of whether murder *really is* wrong. It might be that our descriptive moral facts don’t coincide with some set of objective normative moral facts – such as if, say, 5,000 years ago, all cultures believed slavery was permissible, but today we’d say they were wrong somehow.

Once again the Stanford Encyclopaedia of Philosophy explains more.

For the record, my whole moral1/moral2 stuff is precisely because I think the distinction between descriptive and normative is overblown. I do think the distinction stands, but I argue that the super-duper normative facts don’t exist. So we have to make do with descriptive facts instead of chasing normativity to the ends of reason.

Where you and I agree is that self-interest is the ultimate justification of morality. I argue that this is contingent (because there are no super-duper binding normative facts, so it must all be contingent), but I think you’re arguing that this is necessary, because it’s descriptively true that every moral system is based on rational self-interest. So, you’re building a moral system using descriptive facts, but you’re loading them with the force of normative facts, and that jump is problematic.

Mark Sloan · 10th June 2010 at 5:37 am

Tim and James, you have both given me a lot to think about and I once more want to express my appreciation for your conversations. There is a lot I could respond to, but perhaps a discussion of the possibility of the existence of a particular normative moral fact would be the most useful at this point based on the interest you both have shown in the question of the existence of normative moral facts and the relevance to Tim’s starting post.

Tim, I had read it before, but I found it very useful to re-read “The Definition of Morality” page from SEP as you suggested.

However, even after consideration, I am still puzzling over whether my hypothesis (about the claimed underlying, universal, necessary characteristics of virtually all moral behaviors) could ever be correctly called a proposed normative fact in addition to being a proposed descriptive fact. Obviously, if my hypothesis became accepted as provisionally ‘true’ as a part of science, then it would be a descriptive fact from science concerning what virtually all moral behavior ‘IS’. To simplify the rest of this discussion, I will assume my hypothesis does become accepted as provisionally ‘true’.

I find it confusing how the referenced SEP page defines normativity: “Normativity refers to a code of conduct that, given specified conditions, would be put forward by all rational persons”. Further, “Accepting a normative definition of morality commits a person to regarding some behavior as immoral, perhaps even behavior that he is tempted to perform.”

Assume groups or persons accept my hypothesis as a normative statement – as the unique rational choice expected to best meet the needs and preferences of all rational persons – and express the desire that all other rational people also accept it. Then, in addition to being a descriptive fact from science, it would also be a normative statement by the above definition.

However, I would argue it is a normative statement that has no source of justificatory force beyond enlightened self interest (acting unselfishly and advocating that everyone else do the same justified by the idea that this will maximize predicted self interest). What I am struggling with is 1) are normative statements that have no such source of justificatory force commonly considered to ‘really’ be normative? 2) if they are normative statements, are they also correctly called normative moral facts when they represent statements that are also descriptive moral facts?

Finally, Tim said: “So, you’re building a moral system using descriptive facts, but you’re loading them with the force of normative facts, and that jump is problematic.” If I claim this moral system has no justificatory force beyond rational choice, how can I be loading it with the “force” of normative facts? Perhaps that is part of my confusion about whether normative facts must have justificatory force beyond rational thought – by SEP’s definitions, I don’t see that they do.

James Gray · 10th June 2010 at 6:38 am

Mark,

However, I would argue it is a normative statement that has no source of justificatory force beyond enlightened self interest (acting unselfishly and advocating that everyone else do the same justified by the idea that this will maximize predicted self interest). What I am struggling with is 1) are normative statements that have no such source of justificatory force commonly considered to ‘really’ be normative? 2) if they are normative statements, are they also correctly called normative moral facts when they represent statements that are also descriptive moral facts?

Yes, they can be considered normative even if they are only accepted as part of cultural customs and/or human psychology. I think R.M. Hare accepts something like that.

Tim Dean · 10th June 2010 at 10:39 am

Mark, all of what you have said makes sense – even the questions you pose, which aren’t easy questions to answer.

On the SEP definition of morality as being put forward by “all rational persons”, I also think this is problematic. It’s part of the author Bernard Gert’s own philosophy, and it has leaked into his SEP essay (although this happens from time to time).

On 1), as James has said, statements can be (a kind of) normative even if they lack that magic normativity of moral realism. In fact, as an anti-realist, I am also interested in building a normative system (although one that isn’t binding with logical or necessary force).

On 2) talk of ‘moral facts’ is tricky. Most metaethics I’ve read talks of ‘moral facts’ in the moral realist sense – although acknowledging the existence of descriptive moral facts. You might need to – and there’s nothing stopping you from doing this – define a few different senses of ‘moral fact’ and lay them out before you outline your thesis. Simply separate descriptive facts from binding normative facts from your idea of a ‘code of conduct followed to promote rational self-interest’. That might work.

Mark Sloan · 10th June 2010 at 3:44 pm

Guys, thanks for these comments.

Tim neatly summarized what I have been thinking in terms of three kinds of moral statements of interest to me.

1) Descriptive moral facts of two kinds
• “The Golden Rule is a common cultural moral standard”
• My derived “Virtually all moral behaviors increase, on average, the benefits….” provided it can be shown to be provisionally ‘true’ as a matter of science and therefore a derived descriptive moral fact according to science

2) Binding normative facts which carry justificatory force beyond enlightened self interest for accepting their burdens – I will make no claims related to such facts

3) Rational self-interest moral statement (not necessarily derived from moral facts) – A ‘code of conduct whose burdens are accepted based on an expectation that doing so will, on average, promote rational self-interest better than any known alternative’

Further, I will avoid any mention of moral realism or anti-realism. However, just between us, if I was forced to pick one, I would say that consistent with my above chosen definitions of kinds of moral statements, I am a moral realist. Sorry, Tim.

I think I now have enough additional understanding of the philosophical issues to usefully revise my paper. Maybe the journal of Biology and Philosophy will accept it next year!

Thanks again.

tomess · 10th June 2010 at 4:24 pm

Dear Tim

Hi. Again, you “don’t believe there is an independent measuring stick beyond the facts of the natural world”, and yet you make statements of truth, like:

“However, the nihilism needn’t spread that far. All we need to do is find agreement on some very basic premises, and we can build a moral2 that looks just like we’d expect a moral system to look like, yet it’s not dependent on non-existent moral facts or supernatural forces.”

You’re implying that your statements are not totally dependent upon your biological, economic and social circumstances, and therefore I can’t discount your statements by using arguments emphasising such circumstances. For example, I can’t merely say that you’re wrong because you’re Caucasian, or because you’re of a certain socio-economic group.

As a theist I accept that reason originates independently of the biological and other circumstances in which it is articulated.

But what is the basis for asserting your independence from these factors, if you insist that there is no independent measuring stick beyond the facts of the natural world?

Reason must be supernatural in origin, if it is to be independent of the physical and other observable circumstances through which it is manifested.

I look forward to your reply.

t

James Gray · 10th June 2010 at 8:24 pm

Tomess,

One problem is that you never really defined “reason,” so it is hard to know what exactly you want to say. Our mind seems like it is capable of reasonable thought, and philosophers pretty much agree unanimously that the mind is part of the natural world like everything else. That doesn’t necessarily mean that the mind “is only the brain” or that we don’t have free will. It certainly doesn’t mean that reason has to be “merely” biological.

I think John Searle’s philosophy of the mind does a pretty good job explaining how the mind could be part of the natural world. Here is what I think Searle wants to say: http://ethicalrealism.wordpress.com/2010/01/22/searles-philosophy-of-the-mind/

Part of the problem seems to be that you have a pre-conceived idea about what the natural world is like, but it can be that the natural world has a lot of emergent phenomena that are “more than the sum of their parts.”

tomess · 11th June 2010 at 12:41 pm

Dear James (Hmmm…. You look different from Tim…)

Hi. ‘Reason’ can be briefly defined as the capacity for analytical, rational and logical thought. Let’s assume that Searle explains how mind could be part of the natural world. I actually happen to believe that science will one day explain the whole physical or observable universe. However, if reason were ever to be reduced to observable phenomena or exhausted by them, then it would not offer an independent perspective upon what is observed.

Reason must originate independently of the observable universe, in order to provide an independent perspective on it, when we use reason ‘inside’ the observable universe.

You suggested that the natural world “has a lot of emergent phenomena that are ‘more than the sum of their parts'”. Fine and dandy, but, again, what is the independent basis for making rational observations about ’em? What is your defence to deterministic, reductionist rubbish like “You just say that because you’re Caucasian”, or “You just say that because you’re middle-class”. Are these emergent phenomena independent of their parts? On what basis do you hold that?

I agree, with respect, that our minds are capable of reasonable thought, because I hold that transcendent or supernatural reason is embodied or manifest in our biological, social, economic etc. circumstances. My quarrel is with atheism, so far as it denies that reason originates independently of the observable universe. Yet atheists (including many strict materialists, apparently) argue as if reason is independent of their particular circumstances, and make assertions of truth as if those assertions can’t be dismissed simply by pointing to the biological, economic, social etc. circumstances of the speaker/writer (e.g. “You say that just because you’re Caucasian”).

I didn’t read your reference to Searle. Can’t you summarise him as part of this post? I could just as easily suggest you go and read various philosophers whom I like, but I would have thought that in a philosophical forum one should be able to present a self-contained argument, with references being only additions that further illuminate one’s point already made in the text (rather than saying ‘go and read X for an explanation of the position I hold’).

Cheers,

t

Tim Dean · 11th June 2010 at 4:11 pm

Hi tomess.

The fundamental difference between your view and that of myself and James is that you believe that reason “must” be “independent” in order to provide an “independent perspective” of the universe. The “must” and “independent perspective” are both problematic to us (well, I’m speaking for James, but I might be wrong about his opinion).

First, on “independence”. Reason is only independent in the sense that it transcends my contingent circumstances. Saying “You just say that because you’re Caucasian” suggests that we cannot escape our particular circumstances. Yet reason allows us to understand the underlying regularities of the world that transcend contingent individual circumstances by observing patterns and linking them together.

That, I would suggest, is sufficient for reason to allow us to understand a great deal about the world at large. There are limits to reason, which are really the limits of our psychology and rational capacity. I don’t think that amounts to a sceptical argument – it doesn’t mean that reason is entirely powerless – but I don’t think it’s possible (or necessary) for reason to be entirely independent of the world in order for us to be able to explain stuff in a non-contingent way using it. So that eliminates the “must”.

Now, I can’t prove unequivocally that reason is in-and-of the world, but I don’t believe it can be proven unequivocally that reason is supernatural. So, given my other beliefs – particularly my belief that there are no supernatural things, including gods – I prefer to believe in a worldly rationality.

That might not be satisfying for you – and I’m not trying to convince you 100% of my conception of reason; I’m just offering my argument – but I believe it to be true. In order to change my mind, you’d have to convince me that supernatural things exist, that gods exist, and show how supernatural reason can interact with the natural world.

James Gray · 11th June 2010 at 4:54 pm

Tomess,

If you want to have an impressive view about reason and consciousness, then it is your job to study the philosophy of the mind. I have done my share and I know at least the basics, and what you are saying seems to discount a great deal of philosophical literature. I’m not here just to win an argument. If you want to suggest that I learn about something in philosophy of the mind that I missed, you can point me in the right direction.

It sounds like you are mainly concerned with consciousness and the first person point of view. We view the universe as something other than lifeless bodies in motion. We have “subjectivity.” Searle thinks subjectivity is “physical” like everything else, but subjectivity seems to be caused by the brain. There are other opinions in the philosophical literature. One of the least popular views is substance dualism: The view that subjectivity and physicality are two totally different kinds of stuff.

There are substance dualists who are also atheists, so even substance dualism doesn’t seem to require God’s existence.

tomess · 13th June 2010 at 4:17 pm

Dear James

Hi. I’m interested in finding truth, rather than having an ‘impressive view’ about anything. Apparently you ‘know the basics’ (whereas I ‘appear to discount a great deal of philosophical literature’ – does that include the majority of European thinkers before the 18th century, those who believed in objective truth? I think not). Great. And apparently subjectivity is true. According to an independent standard, of course, rather than just your circumstances. So, what is the basis for a retort to the statement “Your comments can be disregarded on the basis that you’re Caucasian”? Further, if your views are merely subjective, does that really give you a footing for saying that mine are too? You have ‘subjectivity’. I’m not so sure about ‘we’.

As for your not being here ‘just to win an argument’, but to ‘learn about something in philosophy of the mind’, how can you or anyone else learn about philosophy when there are two apparently conflicting views to be examined? How can comparing one philosophy to another not involve argument? And if one is defeated in an argument, doesn’t that substantially disprove the beliefs one propounded during the argument? Or does ‘subjectivity’ come to the rescue again? 🙂

Um, at what point did I introduce the existence of God?

Cheers,

t

James Gray · 13th June 2010 at 5:04 pm

Hi. I’m interested in finding truth, rather than having an ‘impressive view’ about anything. Apparently you ‘know the basics’ (whereas I ‘appear to discount a great deal of philosophical literature’ – does that include the majority of European thinkers before the 18th century, those who believed in objective truth? I think not). Great.

Philosophers still want to know the objective truth, but philosophy can’t always give us anything close to certainty. Scientists seem to give more reliable information about this kind of thing in general.

And apparently subjectivity is true. According to an independent standard, of course, rather than just your circumstances. So, what is the basis for a retort to the statement “Your comments can be disregarded on the basis that you’re Caucasian”?

I don’t know what you are asking here.

Further, if your views are merely subjective, does that really give you a footing for saying that mine are too? You have ‘subjectivity’. I’m not so sure about ‘we’.

I never said that my views are merely subjective. My point was that “subjectivity” exists. It means that minds exist, which seems different from the movement of atoms.

As for your not being here ‘just to win an argument’, but to ‘learn about something in philosophy of the mind’, how can you or anyone else learn about philosophy when there’s two apparently conflicting views to be examined?

The same way scientists try to resolve differences. Look at the evidence. You don’t need to know the absolute truth right away. You can work on finding the truth and get closer to it.

Philosophers have already been arguing about rationality, reason, and minds for thousands of years. If you want to present your own arguments without knowing that history, they won’t be convincing. Everything you say will have already been discussed in the past.

How can comparing one philosophy to another not involve argument?

Did I say it didn’t?

And if one is defeated in an argument, doesn’t that substantially disprove one’s beliefs propounded during the argument? Or does ‘subjectivity’ come to the rescue again? 🙂

It isn’t always obvious when one is truly “defeated” in an argument, but people often say “so-and-so lost the debate”, and obviously an idiot can beat another idiot in an argument. One could win a debate that “creationism” is a better theory than “evolution”, but that wouldn’t prove that evolution is false or that creationism is true.

When someone says that they want to “win an argument” it means that they want to win a debate in the sense that their arguments were superior or more persuasive. It doesn’t mean that they know the truth or had infallible arguments.

Um, at what point did I introduce the existence of God?

You are talking about a “supernatural” realm of reality. I just assumed that God was involved somehow, but it doesn’t make much difference.

James Gray · 13th June 2010 at 5:38 pm

Tomess,

I think you confuse epistemological and ontological objectivity and subjectivity. I currently have no idea what you think “reason” means, but I suspect that you think reason itself is quite unlike the reality of atoms moving about and so forth. This implies to me that you see reason as involving minds and “subjectivity” in the sense of mental activity. We experience mental activity as having a “first person point of view.”

Ontological objectivity refers to what exists in reality outside of the mind and ontological subjectivity refers to what exists inside of the mind.

Epistemological objectivity refers to methods of reasoning that people agree are reliable and epistemological subjectivity refers to methods of reasoning that are not reliable. The evidence might be testimonial or anecdotal, for example.

I said the following:

It sounds like you are mainly concerned with consciousness and the first person point of view. We view the universe as something other than lifeless bodies in motion. We have “subjectivity.” Searle thinks subjectivity is “physical” like everything else, but subjectivity seems to be caused by the brain. There are other opinions in the philosophical literature. One of the least popular views is substance dualism: The view that subjectivity and physicality are two totally different kinds of stuff.

This is clearly about mental experience. Things are subjective in the sense of happening within the mind. It is not about anecdotal evidence or other unreliable methods of reasoning.

tomess · 13th June 2010 at 6:57 pm

Hi tomass.

[Dear Tim

Hi, and I’ll assume that ‘tomass’ was a typo, rather than an ad hominem. :)]

The fundamental difference between your view and that of myself and James is that you believe that reason “must” be “independent” in order to provide an “independent perspective” of the universe. The “must” and “independent perspective” are both problematic to us (well, I’m speaking for James, but I might be wrong about his opinion).

[Agreed, with respect.]

First, on “independence”. Reason is only independent in the sense that it transcends my contingent circumstances. Saying “You just say that because you’re Caucasian” suggests that we cannot escape our particular circumstances. Yet reason allows us to understand the underlying regularities of the world that transcend contingent individual circumstances by observing patterns and linking them together.

[Agreed. ‘Transcendent reason’ is fundamental to my argument. You’ve expressed my argument better than I have!]

… but I don’t think it’s possible (or necessary) for reason to be entirely independent of the world in order for us to be able to explain stuff in a non-contingent way using it. So that eliminates the “must”.

[I have to disagree. I think it is necessary for reason to originate independently of circumstance, in order to be able to explain stuff non-contingently. Put another way, and perhaps more broadly, I would repeat my suggestion that reason must originate independently of the (observable) world, in order to provide an independent measuring-stick for it – i.e. reason must be ‘beyond-Nature’, ‘above-Nature’ or ‘supernatural’, if it is to be an independent measure of the physical universe or ‘Nature’. Otherwise, it does seem that reason can ultimately be dismissed or trivialized as contingent upon circumstance. Of course, our use of reason to explain things is conditioned by circumstance, such as whether we are distracted by whether Australia can beat Germany 1-0 tonight! (a goal in the 73rd minute, by Josh Kennedy.)]

Now, I can’t prove unequivocally that reason is in-and-of the world, but I don’t believe it can be proven unequivocally that reason is supernatural. So, given my other beliefs – particularly my belief that there are no supernatural things, including gods – I prefer to believe in a worldly rationality.

That might not be satisfying for you – and I’m not trying to convince you 100% of my conception of reason; I’m just offering my argument – but I believe it to be true. In order to change my mind, you’d have to convince me that supernatural things exist, that gods exist, and show how supernatural reason can interact with the natural world.

[I think the only supernatural thing that I can prove to exist is reason itself, by the simple argument that proof itself requires independence from circumstance or contingency upon the situation in which the matter is proved – hence all those white lab coats and other controls upon scientific experiments.

Thanks for the thoughtful and detailed answer,

t]

Mark Sloan · 14th June 2010 at 8:31 am

In response to Tomess’ post about the required role of the supernatural in reason, I outlined the following brief argument concerning the relationship of human rational thought to whatever passes for rational thought in the great apes and ‘lower’ mammals. My view is based on what I understand to be the science of the matter, rather than the philosophical approaches presented so far. I doubt my argument will convince anyone, but perhaps my ‘mechanistic’ view of rational thought will be of interest at least as a contrasting viewpoint. In any event, I enjoyed thinking about the subject.

I think of the components necessary for what is commonly called rational thought as:

1) the ability to imagine alternative scenarios about the future, in aid of choosing between actions in the present that can be expected to best meet needs and preferences;

2) the ability to use symbols (at least in the sense of mental images) in those imagined alternative scenarios;

3) the ability to generalize experiences, based on recognizing commonalities, into heuristics useful for instantly choosing between possible actions without rational thought;

4) the ability to group symbols, including symbols for heuristics, again based on their similarities, into higher-level symbols; and

5) the ability to communicate with other individuals using a common symbology (language, for people), which makes rational thought a much more powerful tool for understanding the universe by combining the abilities of many individuals.

Any individual, human or otherwise, who has the above five abilities seems to me capable of useful rational thought and understanding the universe in the same sense, but not necessarily the sophistication, that people do. My understanding is that the first two abilities are common in all mammals that dream (including mice), the first three abilities are common in mammals as intelligent as dogs, and the fourth and fifth abilities are arguably within the capabilities of chimpanzees and gorillas. The examples of grouping symbols and symbolic communication I am thinking of in both chimpanzees and gorillas are sign language communications which have been extensively observed. In this view, the differences in ability to think rationally between humans and our cousins the great apes are due to differences in our brain’s sophistication, not the type of thinking going on.

Tomess, do you believe any of the above abilities require a supernatural component? Or are there other abilities required for rational thought, not recognized here, that you believe must have a supernatural component? And finally, would it be a problem for you if a human ability necessary for reason, one which you concluded required a supernatural component, turned out to be shared with chimpanzees and gorillas?

tomess · 8th July 2010 at 9:38 pm

Dear Mark

Hi. Thanks for the detailed answer, and sorry about the delay in responding.

I would suggest that the above abilities do not in themselves require a supernatural component, so far as they merely involve circumstance. But so far as they require independence from circumstance, in order to have an objective (yes, I accept the need for objectivity) measuring stick for the universe, then, yes, they do involve a supernatural component.

As you state, your view is founded upon science rather than philosophy. My point is merely that science itself is predicated upon a belief in objectivity, and hence a belief that rationality is independent of circumstance. For example, the modern computer is able to process information faster than the computer of 20 years ago, according to objectively measurable standards such as bytes processed per second.

As indicated above, I would suggest that reason originates independently of circumstance, the observable universe, or ‘Nature’, and so is ‘above-Nature’, ‘beyond-Nature’ or ‘supernatural’. For that very reason it can operate upon Nature, and yet can only succeed where conditions are favourable.

Therefore, I believe that the above abilities do require a supernatural (and hence unobservable) component, but that component is manifested in observable forms. The alternative is to suppose that rationality is entirely determined by circumstance/the observable universe/Nature, which is intolerable, because then nothing would provide the independent measuring stick by which we could make even that very statement itself.

But perhaps you disagree?

Mark Sloan · 9th July 2010 at 6:51 am

No, I see no requirement for a supernatural component.

I don’t understand “reason originates independently of circumstance, the observable universe, or ‘Nature’”. This is a hypothesis I see no need for.

If a creature has the abilities I mention above then I can argue it can reason, just like a computer with the necessary components can compute. Nothing supernatural is required.

How big the box is that the creature or the computer happens to be in is irrelevant. There is no magic requirement that there has to be something outside a box to enable reasoning, or computation, to take place within that box.

James Gray · 9th July 2010 at 7:08 am

tomess,

I think part of the problem is that we don’t understand your position. We need premises and a conclusion. We might also need definitions. “Objectivity” is part of science, but how is that supernatural? Objectivity is our ability to use procedures that give us accurate information.

Also, I don’t know what you think “reason” means. It is true that “rationality” includes “logic” and I think I brought this point up already. How we understand logic is a controversial topic in philosophy.

tomess · 9th July 2010 at 11:31 pm

Dear Mark

Thanks for the prompt response.

I think we differ as to ‘reason’. As indicated above, I would insist that reasoning must originate independently of circumstance, otherwise it’s not ‘reasoning’, in the sense that it might be dismissed by reference to environment. For example, if I’m reasoning now, and then I later tell you that I was actually making these statements because Fred was paying me $100, or that he was twisting my arm, then the value of my ‘reasoning’ is lessened.

Similarly, the ability to imagine alternative scenarios, use symbols, generalize, group symbols, and use a common symbology could be conditioned by external stimuli, such that the results could be wholly attributed to them, unless one posits reason as originating independently of circumstance (though of course articulated within one’s own circumstances).

In other words, it is conceivable that a creature could ‘reason’ as you describe, but without the presumption that reason originates independently of circumstance, then such ‘reason’ doesn’t seem to be a full description of what reasoning beings are presumed to do.

In particular, your own statements such as ‘nothing supernatural is required’ would be all but meaningless, were it to be disclosed that my ‘friend’ Fred was twisting your arm or paying you $100 to make that statement. Instead, the presumption that reason originates independently of circumstance is so ubiquitous that both you and I presume (with respect) that that statement itself is not utterly determined by your circumstances, but in fact has an independent origin.

Likewise, it is implicit in the very statement that “there is no magic requirement that there has to be something outside a box to enable reasoning, or computation, to take place within that box” that the statement is independent of your biological, socio-economic and other circumstances, and hence is applicable to me, in my particular circumstances which are well removed from yours.

No?

tomess · 9th July 2010 at 11:40 pm

Dear James

Hi. My position is that reason must originate independently of the observable universe – that is, be ‘supernatural’ – in order to provide an independent measuring stick for it. You might call it a conclusion, based on the fact that reason gives an independent account of the observable universe, and therefore must originate independently of it (though of course being manifest in observable forms such as the human brain). Or you might call it a premise, for without that premise reason becomes merely subjective. You define ‘objectivity’ as ‘our ability to use procedures that give us accurate information’ – ‘accurate’ according to what? An independent, and therefore constant, standard, methinks. A contingent or changing standard wouldn’t be a ‘standard’ at all, I reckon.

I think I do know what ‘reason’ means: the capacity for rational, analytical, and logical thought.

I guess it’s the controversial topics in philosophy that make it, well, controversial!

Cheers,

t

Mark Sloan · 10th July 2010 at 4:57 am

Tomess,

You got it when you said: “I think we differ as to ‘reason’. ”

As shown by your implied definition of reasoning: “As indicated above, I would insist that reasoning must originate independently of circumstance, otherwise it’s not ‘reasoning’, in the sense that it might be dismissed by reference to environment. For example, if I’m reasoning now, and then I later tell you that I was actually making these statements because Fred was paying me $100, or that he was twisting my arm, then the value of my ‘reasoning’ is lessened.”

So if my arm is being twisted, or my ‘reasoning’ is being shaped by my biology or culture, then I am not capable of reasoning by your definition?

That seems to me a virtually useless definition. I am assuming you are attempting to salvage some utility by making claims about the necessity of the supernatural, but the problem is in your definition of reason, not in any difficulty in the natural world.

tomess · 13th July 2010 at 7:26 pm

Dear Mark

Hi. I would define reason as the capacity for rational, analytical and logical thought. The description of reason in reference to arm-twisting and circumstance was to make the point that if reason does not originate independently of circumstance, if it is ultimately contingent upon circumstance, then it can hardly be said to provide an independent perspective about the collection of circumstances and contingencies that make up the universe.

‘Salvaging utility’? On the contrary, I’m propounding a definition of reason that in one sense has no use at all. I’m increasingly of the opinion that science will one day be able to explain the entire observable universe.

Yet the ability to explain anything at all requires an independent perspective, something that I suspect will become increasingly apparent as science explains more of the observable universe.

Cheers,

t

Mark Sloan · 14th July 2010 at 4:40 am

Tomess,

You are talking about only one very special kind of reasoning when you said:
“The description of reason in reference to arm-twisting and circumstance was to make the point that if reason does not originate independently of circumstance, if it is ultimately contingent upon circumstance, then it can hardly be said to provide an independent perspective about the collection of circumstances and contingencies that make up the universe.”

The only examples of reasoning that I believe are arguably not contingent on circumstance are exercises in logic and mathematics that have little to do with most people’s day to day lives. Indeed, much of science is contingent on circumstance in the sense that science is only provisionally true till a new hypothesis is shown to better meet the criteria for scientific utility. So your definition of reason leaves virtually everyone’s ‘reasoning’ as not real reasoning since everyone’s reasoning, even yours, is contingent on circumstances. I see no utility in such a definition.

I also do not see how there can be any necessity that the universe produce “an independent perspective” that you are proposing as supernatural. What force in the universe could make that a necessity? Such a perspective might be interesting or even useful, but just because a fact would be interesting or even useful provides no justification at all for arguing that it exists.

tomess · 5th August 2010 at 8:53 am

Dear Mark

Hi. My apologies for delaying in responding to you.

If I may reduce your first two paragraphs to ‘So your definition of reason leaves virtually everyone’s ‘reasoning’ as not real reasoning since everyone’s reasoning, even yours, is contingent on circumstances. I see no utility in such a definition.’

My definition of reasoning would indeed have no utility if its limited use dissociated it from everyday and scientific use. Yet I suggest that all other reasoning is predicated upon its assertion of independence. If so, it has great utility. Also, I must question your statement that “everyone’s reasoning, even yours, is contingent on circumstances”. Instead, I assert that reason originates independently of circumstance, and yet is manifest in circumstance.

Your last paragraph then enquires about an independent perspective. Your statement that you ‘do not see how there can be any necessity’ is problematic. It presumes that you see clearly. That is, it assumes that your view is not impeded by your ethnicity, socio-economic position or any other observable circumstance. I respectfully agree. However, my agreement requires the presumption that your ability to reason originates independently of those particular contingencies within which you reason. Otherwise, wouldn’t your statement be limited to your circumstances, rather than applicable to me also? There’s a presumption of universality underlying your statement, one that you appear to deny, with respect.

You then ask ‘What force in the universe could make that a necessity?’ That sounds like a religious question, but it can be answered by simply observing that it is a necessity in fact, without engaging in religious questions.

You conclude by stating ‘Such a perspective might be interesting or even useful, but just because a fact would be interesting or even useful provides no justification at all for arguing that it exists.’ Quite, but if that fact is a presumption that reason originates independently of circumstance, and you yourself repeatedly rely upon it, then it is true that it need not be argued.

Or perhaps you disagree?

t
