The Ethical Project: Evolving Moral Minds

Published by timdean

In my last post I offered my initial review of Philip Kitcher’s new book, The Ethical Project, which is a bold attempt to offer a thoroughly naturalistic rendering of ethics, devoid of any divinity or dubious metaphysics. And overall, I’m very pleased with the account – not least because it is largely in sync with my own.

For too long, ethics has been dominated by discussions of moral semantics, of naturalistic fallacies, of rational agents, and by an expectation that once we discover moral truths, people will kick themselves for not having happily obeyed them in the past.

But this is not the only way to talk about morality. Instead of seeing morality as a truth-seeking endeavour, or as springing from the will of some deity, we can look at morality from the perspective of what Owen Flanagan, Hagop Sarkissian and David Wong (2008) call “human ecology”.

Better than defining morality by what it is – i.e. about truth, about happiness, about god’s will etc – we can define morality by what it does. This, at its heart, is the moral functionalist perspective. It’s central to Kitcher’s account (as it is to mine), and I believe it’s key to understanding morality as a natural phenomenon – i.e. a practice enacted by human beings throughout history through to this day.

And once we understand better what morality does, we might gain some insights into what it is, and even how we ought to behave. Thus, shockingly, this descriptive programme might have normative implications.

In this post I explore some of the themes raised in The Ethical Project and add some elements of my own research to fill in some gaps left by Kitcher. I have more to say than will fit in one post, so I’ll add more after this one.

Evolving minds

I largely agree with Kitcher’s “how possibly” account for how altruistic sentiments might have emerged in early hominin populations, lending them a selective advantage by reducing conflict, increasing cooperation and enhancing the ability to form stable coalitions that could compete against other individuals and groups.

From this seed morality grew.

However, Kitcher’s account does leave out some important elements that are crucial to understanding our moral psychology today. The earliest vestiges of our moral psychology may have been emotions encouraging behavioural and psychological altruism, such as a propensity for empathy (encouraging cooperation), outrage (punishing defection), friendship (building trust enabling direct reciprocity), gossip (facilitating indirect reciprocity) etc.

But there’s a big leap from these to an “ability to apprehend and obey commands” (p74) that needs to be explained. Kitcher does mention things like language in passing, but that’s not enough for a thorough account of how our moral psychology developed.

It appears as though our species’ dramatic cognitive explosion was precipitated by increased social living itself. The trigger for this runaway change is still speculative, but it seems that once we began engaging more socially and cooperatively, the selection pressure for greater cognitive and communicative abilities was intense. After all, someone who was adroit at navigating the social landscape could easily have outcompeted a less social individual whose main strength was navigating the physical landscape.

In this environment, social learning becomes an incredibly powerful force. Instead of innovating new behavioural strategies over the space of generations through mutation, or even over the space of a lifetime through trial and error learning, we can adopt new behavioural strategies by learning them from others. This would have been one of the many selection pressures for greater language skills, along with the ability to abstract, imagine, predict, generalise etc.

And once things got social, it paid to learn how to play socially. The thing is, unlike the relatively static physical environment in which our ancestors lived, the social environment was highly dynamic – heterogeneous, as they say. This means there was not necessarily any one behaviour that would be successful in every environment. This selects for conditional and plastic cognition.
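The point that no single fixed behaviour wins in a heterogeneous environment can be sketched with a toy simulation. The environment names, actions and payoff numbers below are all invented for illustration; only the comparison between a fixed and a conditional strategy is the point.

```python
# Sketch: why a heterogeneous social environment favours conditional
# (plastic) strategies over fixed ones. All payoffs are invented.

import random

random.seed(0)

ENVIRONMENTS = ["cooperative", "competitive"]

def payoff(action, env):
    # A behaviour that succeeds in one social setting fails in the other.
    if env == "cooperative":
        return 3 if action == "share" else 1
    else:
        return 3 if action == "guard" else 1

def fixed_sharer(env):
    # Rigid strategy: always behave the same way, whatever the setting.
    return "share"

def conditional(env):
    # Plastic strategy: read the environment first, then act.
    return "share" if env == "cooperative" else "guard"

def average_payoff(strategy, trials=10000):
    # Environments are encountered unpredictably, half and half.
    total = 0
    for _ in range(trials):
        env = random.choice(ENVIRONMENTS)
        total += payoff(strategy(env), env)
    return total / trials

print(average_payoff(fixed_sharer))  # ~2.0: right only half the time
print(average_payoff(conditional))   # 3.0: right in every environment
```

The fixed strategist does well only when the environment happens to match its one behaviour; the conditional strategist matches every environment, which is the advantage plastic cognition buys (before counting its costs, discussed below under plasticity and variability).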

There was also the ever-present threat of deception and defection. Social living can have its benefits, but being ganged up on by a coalition of others, or being lied to and cheated by individuals who look honest enough, can be fatal. This added to the selection pressure for reliable communication, costly signalling and possibly wariness of deception and cheating.

From here, it’s not a big step towards norm psychology, where we don’t just adopt behaviours for ourselves, but we desire for others to adopt similar behaviours and desire to punish them when they don’t. Then we desire to punish those who, in turn, fail to punish non-conformists.

The seed becomes a seedling.

Now we can better bridge the gap between Kitcher’s chapter 1 about psychological altruism and chapter 2 about normative guidance.

Plasticity and variability

However, there is one last pivotal point about the development of our moral psychology. Our social environment wasn’t just heterogeneous, but the very amount of heterogeneity itself varied. There was a heterogeneity of heterogeneity!

Now, plasticity is a nifty trick in a heterogeneous environment. But it’s costly. It takes time to learn and adapt to the environment, and organisms can settle upon bad strategies instead of good ones. In a static environment, rigidity is a better cognitive trait, particularly if the behavioural strategies it encourages are well adapted to that environment.

This raises a conundrum. If the amount of heterogeneity itself varies, how much plasticity is optimal? The thing is, there’s no right answer.

So, what we evolved is plasticity that varies throughout the population. Some individuals are highly plastic, some are less so. And this stable polymorphism of cognitive types is maintained through negative frequency-dependent selection.
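The claim that negative frequency-dependent selection can maintain a stable mix of cognitive types can be illustrated with a minimal replicator-dynamics sketch. The fitness numbers are invented; the only assumption carried over from the text is that each type does better when it is rare.

```python
# Sketch: negative frequency-dependent selection maintaining a stable
# polymorphism of "plastic" and "rigid" cognitive types.

def step(p):
    """One generation of replicator dynamics.

    p is the frequency of the plastic type. Each type's fitness
    declines as its own frequency rises (negative frequency
    dependence): plastic minds pay off most when rigid conformists
    are common, and vice versa.
    """
    w_plastic = 1.0 + 0.5 * (1 - p)   # rarer plastic types do better
    w_rigid = 1.0 + 0.5 * p           # rarer rigid types do better
    w_mean = p * w_plastic + (1 - p) * w_rigid
    return p * w_plastic / w_mean     # standard replicator update

p = 0.05  # start with plastic types rare
for _ in range(200):
    p = step(p)

print(round(p, 3))  # converges to 0.5, the stable polymorphism
```

Start the population anywhere other than the extremes and it settles at the interior equilibrium rather than fixing on one type – which is exactly what a stable polymorphism means. (This also bears on Mark Sloan’s question below about whether heterogeneity can be an evolutionary adaptation.)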

And it’s not just plasticity that varies. Other aspects of our psychology also vary, including the Big Five personality traits, the intensity of our emotional responses, levels of aggressiveness etc.

Plasticity is good. But sometimes hardwired variability is better. And put them both together and you have homo sapiens.

In my next post, I’ll tackle a central component of Kitcher’s (and my) naturalistic account of morality: functions.


6 Comments

Katherine Gordy Levine · 25th January 2012 at 7:12 am

Like enough to link to one of my EFTI blogs. Will let you know when. Thank you for this.

Mark Sloan · 25th January 2012 at 8:08 am

Tim, I can’t resist quoting parts of your post I especially like.

“Better than defining morality by what it is – i.e. about truth, about happiness, about god’s will etc – we can define morality by what it does. This, at its heart, is the moral functionalist perspective.”

“The earliest vestiges of our moral psychology may have been emotions encouraging behavioural and psychological altruism, such as a propensity for empathy (encouraging cooperation), outrage (punishing defection), friendship (building trust enabling direct reciprocity), gossip (facilitating indirect reciprocity) etc.”

The above is very direct and easy to understand.

However, I would put a different spin on the following. Relevant to morality, I don’t see the critical advances being either an “ability to apprehend and obey commands” or “social learning becomes an incredibly powerful force” (which is of course true, but not necessarily relevant to morality).

Summarizing your relevant points:

“But there’s a big leap from these to an “ability to apprehend and obey commands” (p74) that needs to be explained.”
“In this environment, social learning becomes an incredibly powerful force.”
“From here, it’s not a big step towards norm psychology, where we don’t just adopt behaviours for ourselves, but we desire for others to adopt similar behaviours and desire to punish them when they don’t. Then we desire to punish those who, in turn, fail to punish non-conformists.”

I see it as useful to distinguish “social learning” such as how to build fires or hunt, which may have no inherent emotional or motivational content, from the category of “social learning” that is defined by its inherent emotional or motivational content.

The critical advance for our ancestors regarding morality was not “social learning” in general, but a specific biological adaptation (obviously selected for by increased reproductive fitness) that, by recognizing and incorporating a specific category of knowledge into our moral intuitions, inherently attached emotional and motivational attributes. Thus were born the “emotional oughts” (including obligations to punish bad people) that our moral intuitions provide. That motivational power is largely immune to rational thought and even our own expectations of what will be in our best interests and is the starting point for normativity.

Finally, I am sympathetic to your following view and have thought the same thing myself. However, I do not know enough evolutionary theory to know if “hardwired variability is better” is evolutionarily likely or the best evolutionary understanding of heterogeneity as it is actually observed. Do you know of any examples of generally accepted heterogeneity as an evolutionary adaptation, either from biology or from numerical simulations? I am not saying they don’t exist; I just have not come across them.

“Plasticity is good. But sometimes hardwired variability is better. And put them both together and you have homo sapiens.”

Mark Sloan · 26th January 2012 at 5:14 pm

Tim, it might be useful to compare some possible counterexamples to four functional definitions of morality: Kitcher’s, yours, a variation of yours, and mine.

1) Kitcher’s original overall function is “to remedy those altruism failures provoking social conflict” (p223)

Counterexamples: Social conflict, such as war and interaction with criminals, can be morally admirable. Would not Kitcher’s function leave “Do unto others as you would have them do unto you” as never immoral, which would be nonsense? And obviously, in the past the function of morality has often been to enable the cooperation within the in-group needed to destroy other groups. This might be called “war”, “defending the group”, or even just “getting good stuff for our group with less work”.

2) Yours: “The function of morality is to facilitate prosocial behaviour within the group”

Counterexamples: All self-interested prosocial behaviors. For example, self-interested economic cooperation, as Adam Smith famously pointed out, can be a very prosocial behavior but is not morally admirable (not an enforced cultural norm, and people who do not act self-interestedly do not attract punishment) since it lacks the attribute of altruism.

3) A variation of yours: “The function of morality is to facilitate altruistic prosocial behaviour within the group”

Counterexamples: Altruistic prosocial behaviors that reduce the benefits of cooperation in groups. For example, charity that motivates resentment or indolence that reduces the benefits of cooperation in groups is not morally admirable. To be admirable, altruism must increase the benefits of cooperation in groups.

4) Mine: The function of morality is to increase the benefits of cooperation in groups by altruistic acts.

No surprise here for you: I am not aware of any counterexamples.

If “function” is defined, then it seems to me that what the function of morality ‘is’ is an empirical question. This provisional truth (the normal sense of truth in science) is resolvable by the ability to best meet relevant criteria from science, such as:

a) No contradiction with known facts (such as counterexamples of enforced cultural norms).

b) Explanatory power for myriad known facts and puzzles about morality.

c) Unity with the rest of science.

I think we ought to judge what the function of morality is based on these kinds of criteria.

Arguing from a very different perspective but coming to the same conclusion, it seems apparent that the only reason for maintaining societies, aside perhaps from sexual reproduction, is to “increase the synergistic benefits of cooperation in groups”. If these benefits did not exist, there would be no point in maintaining societies. Second, it would be a waste of a group’s resources to enforce behaviors that were self-interestedly motivating. So far as I know, all enforced cultural norms, both past and present, were enforcing norms advocating behaviors that were costly to the individual but benefited others, otherwise known as altruistic behaviors.

It seems to me unavoidable that enforced cultural norms “advocate behaviors that increase the benefits of cooperation in groups” and enforced cultural norms define the current function of “social morality” in cultures.

I appreciate this opportunity to comment and thus work through my own thinking on these subjects.

The Ethical Project: Functionalism and Disagreement « Ockham's Beard · 25th January 2012 at 5:50 pm

[…] my first post on Philip Kitcher’s The Ethical Project I outlined his main argument. In the second post I addressed his account of the evolution of our moral psychology, and filled in a few gaps with my […]

The Ethical Project: Measuring Ethical Progress « Ockham's Beard · 27th January 2012 at 4:38 pm

[…] follows on from my review of  Philip Kitcher’s The Ethical Project, then my post looking at the evolution of our moral psychology, and a post on moral […]

The Ethical Project: The Future of Ethics « Ockham's Beard · 28th January 2012 at 5:00 pm

[…] to Philip Kitcher’s new book, The Ethical Project. You can read my initial review, my look at our evolving moral psychology, on moral functionalism, and my last post on ethical progress. In this post I want to sum up my […]
