Archive of ‘psychology’ category

Redefining the Liberal-Conservative Spectrum

Yeah, there are already dozens of ways of painting the political ideological spectrum. Many are interesting, but most also raise further questions, such as why the spectrum looks the way it does, and how to resolve the apparent contradictions within each polar ideology.

And contradictions there are, as flagged by Jost et al. (2003):

We now take it for granted in the United States that political conservatives tend to be for law and order but not gun control, against welfare but generous to corporations, protective of cultural traditions but antagonistic toward contemporary art and music, and wary of government but eager to weaken the separation of church and state. They are committed to freedom and individualism but perennially opposed to extending rights and liberties to disadvantaged minorities, especially gay men and lesbians and others who blur traditional boundaries. There is no obvious political thread that runs through these diverse positions (or through their liberal counterparts) and no logical principle that renders them all consistent.

The popular Nolan chart. Not wrong, but just not simple enough.

I’d like to suggest that none of these existing approaches is either the most parsimonious or the most powerfully explanatory when it comes to defining the key variable in political ideology. And many have trouble with the contradictions mentioned above.

I’ve already mentioned my fondness (with reservations) for George Lakoff’s nation-as-a-family metaphor account of liberalism and conservatism – with liberals adopting a nurturant parent metaphor and conservatives a strict father metaphor. But even if that account contains a nugget of truth, it doesn’t explain why these two metaphors consistently emerge to characterise the ideological poles.

So, I’d like to propose a new one-dimensional spectrum based simply on a difference in worldview concerning whether the world is a safe or a dangerous place.


From Genes to Politics: How Biology Influences the Way You Vote

It might seem a leap too far, but bear with me, because I’m going to attempt to show that genes influence the way you vote. Let’s start at the end and work our way towards the beginning.

Your adopted political ideology strongly influences the way you vote. Certainly, there might be circumstances in which a liberal might vote for a conservative candidate, such as if the liberal candidate was an obvious dud (or the conservative candidate was a shining star), or if the conservative candidate happened to offer better policies for the present environment (say, being a hawk in a time of war). But, all things being equal, self-identifying liberals vote for liberal candidates and parties.

However, your political ideology isn’t something you come to adopt from out of the blue. We’re not political blank slates. One of the greatest influences on what ideology you adopt is your worldview, which I loosely define as the implicit framework you use to make sense of the world around you.

Your worldview is both descriptive and prescriptive – it helps you understand the way the world is, and it’s value-laden, so it helps you understand good and bad, desirable and undesirable. Many empirical and theoretical studies have shown that underneath our political attitudes lie (often unconscious) beliefs about the way the world is.


Cultivating Virtue: How to Encourage Moral Behaviour

It seems a crucial but oft-overlooked step in discussions of morality: how do we actually encourage moral behaviour?

Most moral philosophy is obsessed with either understanding the nature of moral judgement or with developing a system that reliably produces the correct moral judgements. Good on it, but that’s not the end of the story. Even if we did have a system that produces judgements on which we can all agree, what then? How do we translate that theoretical triumph into the actual end goal of moral enquiry: encouraging moral behaviour? It seems little ink has been spilled by moral philosophers on this issue.

The exception to the above is virtue ethicists, who do emphasise the role of personality and character in producing moral behaviour. Yet much virtue ethics is also concerned with justifying particular judgements or actions rather than with how to shape personality or character so as to promote those judgements or actions.

Moral psychology fares little better. Certainly it provides crucial insights into how we form the moral judgements we do. But, like moral philosophy, most studies stop at the point of forming a moral judgement and don’t investigate how people behave once they’ve formed it.

The psychology of action is undoubtedly complex, and even in moral psychology the path between judgement and behaviour is poorly understood. But I think there are a few things we can say with some confidence as to how to encourage moral behaviour in the majority of people in the real – rather than theoretical – world.


Reflective Equilibrium and Political Ideology

Here’s a thought in the which-comes-first-political-ideology-or-political-psychology? department. The answer could well be: both. If so, then perhaps some bastardisation of the process of reflective equilibrium could benefit our understanding of political ideology as well as of the way people are actually motivated to behave politically.

See, for decades it’s been the remit of political scientists to explore the nature of political ideology, to construct definitions and to investigate the way people behave in a political context. Yet political science operates in a highly rarefied environment. It looks at ideology in theoretical terms, almost as if the various ideologies exist in the world to be discovered as various objective ways of being or of running society.

It also often abstracts the messy complexities of human behaviour down to the clean quantifiable lines of rational choice theory. It’s a very top-down approach, starting with theories of political organisation and then noting how these completed, coherent ideologies are disseminated down to the people.

But this approach has its shortcomings, particularly in explaining how and why individuals adopt a particular political ideology, and how that ideology motivates their behaviour. Because people aren’t rational agents, and ideologies aren’t clear-cut things that people adopt holus-bolus.

Top-down political science even had an ‘end of ideology’ crisis through the late 20th century, where ideology was nearly abandoned as a concept because the complexities of an entire political ideology were thought to be beyond the ken of the average schmo (Jost, 2006). As such, it was thought most people’s attributions of, and identifications with, one ideology or another were incomplete, misguided or disingenuous. Ideology was on shaky ground.

Yet political ideologies are important. They do influence beliefs. And they do motivate behaviour. But not in the abstract way outlined by many political scientists in the 20th century.


Science and Politics: Why Conservatives Don’t Get Science

Only 6% of scientists self-identify as Republican. Six per cent! And there are five times as many who don’t even have a partisan affiliation. And only 9% self-identify as conservative. Fascinating.

But not entirely unexpected.

These numbers, uncovered by the Pew Research Center, have been the topic of much discussion, sparked by this piece on Slate by Daniel Sarewitz and followed up by a number of posts on The Economist’s Democracy in America blog. Both express concern about the implications of having so few conservatives in science. And both speculate as to the cause, first Sarewitz:

It doesn’t seem plausible that the dearth of Republican scientists has the same causes as the under-representation of women or minorities in science. I doubt that teachers are telling young Republicans that math is too hard for them, as they sometimes do with girls; or that socioeconomic factors are making it difficult for Republican students to succeed in science, as is the case for some ethnic minority groups. The idea of mentorship programs for Republican science students, or scholarship programs to attract Republican students to scientific fields, seems laughable, if delightfully ironic.

And The Economist:

I can think of three testable hypotheses they might look into. The first is that scientists are hostile towards Republicans, which scares young Republicans away from careers in science. The second is that Republicans are hostile towards science, and don’t want to go into careers in science. The third is that young people who go into the sciences tend to end up becoming Democrats, due to factors inherent in the practice of science or to peer-group identification with other scientists.

I’d like to advance a fourth hypothesis: the same psychological proclivities that predispose individuals towards conservatism and the Republican party also predispose those individuals to not have a strong interest in science.

Contrary to the popular view that political attitudes and ideological commitments are the product of environmental factors, such as family upbringing, socio-economic conditions, or rational reflection, in fact it’s psychology that plays a dominant role in influencing an individual’s political leanings. And career choices.


The Problem with Moral Philosophy (and Moral Psychology)

Moral philosophy is obsessed with moral judgements. As is moral psychology. And this is a problem.

Moral philosophy cares about moral judgements because it wants to figure out how to make the right ones, and how to know they’re right. Metaethicists are interested in what moral judgements themselves are.

Moral psychology cares about moral judgements because it wants to figure out by which thought processes people arrive at them.

But the problem is that morality isn’t just about judgements, it’s about behaviour. And it seems to me there’s a dearth of research showing how someone goes from moral judgement X to corresponding behaviour Y.

Or why two individuals might go from the same moral judgement X to differing behaviours Y and Z.

Or why someone can hold moral judgement X to be correct, and not perform the corresponding behaviour Y.

Or why someone could hold moral judgement X to be correct, and not know how to behave.

I could go on. But the moral of the story is clear (if you’ll excuse the pun): there is a wide gulf between judgement and action, and that gulf is tremendously important if moral philosophers and moral psychologists want to develop a complete picture of not only what constitutes good behaviour, but how to bring it about.

My concern on the philosophy side is that even if we have a foolproof moral theory that allows us to arrive at the correct moral judgement in any situation (a scenario I find implausible, but let’s assume it’s possible), we might still fail to account for the way behaviour does or doesn’t spring from someone holding that correct judgement. We might even already have the answer, but we’ve failed to connect the dots so people are none the wiser in terms of how to behave.

There’s also the issue of moral decision making. A complete moral theory might provide the tools to evaluate a particular action or outcome and determine whether it was right/wrong or good/bad, but it might not provide the tools for individuals to make moral judgements on the fly, as they’ll need to when it comes to responding to the world around them.

As far as I can see, the vast majority of moral philosophy and moral psychology – the former with its emphasis on justification, the latter with its trolley dilemmas – has neglected the final crucial step from judgement to behaviour. Even those philosophers who have looked at practical reason and action theory still talk about both theoretically; they talk about how to understand an action or how to judge its merits after the fact.

No-one I know of gives a thorough philosophical and psychological account of how, given a certain theory or belief, to reliably turn that into behaviour on a day-to-day basis.

The closest I can think of is virtue ethicists who seek to cultivate the kind of disposition that will produce the desired behaviours. But even then, virtue ethicists need a comprehensive account of psychology to explain how certain characteristics lead to certain behaviours and why these behaviours are good.

I know it’s all too easy to get sucked into the current debates. But this is dangerous when those debates focus on only a fraction of the whole picture, leaving crucially important issues untouched. Particularly when those issues are as important as how to encourage people to behave morally.

My suggested solution: don’t necessarily stop philosophers or psychologists from doing their present research, but encourage an acknowledgement that morality is about behaviour, and that a completed moral theory is one that enables people to behave in accordance with it. This might inspire more research into that final leap. Because any moral theory that doesn’t include behaviour is not a complete moral theory.

Evolution and Moral Ecology, Mini PhD Version

I’ve posted a new static page with an outline of my PhD thesis on evolution and moral ecology. If you’re interested in my overarching theory, it’s worth reading. Hopefully it’ll put a lot of the other missives I write in context. Although I don’t doubt it’ll also raise a lot of questions and objections. Happy to hear them. Any criticism that can steer me in a better direction will improve my thesis. I call it PhD 2.0.

Can There Be a Science of Morality?

Can we have a science of morality? This question has been thrown around quite a bit of late, especially fuelled by the spirited ejaculations of one Sam Harris. Harris firmly believes there are no barriers to a science of human values, but I fear things aren’t that simple, and I’m not alone in this concern.


While a ‘science of morality’ is a laudable notion in a loose sense, such a science would, by necessity, look nothing like what Harris has in mind. Harris is seeking not only a science of morality, but a science of human values. He wants a “universal conception of human values” that can be checked, verified and proven using the tools of empirical science.

But that’s just not going to work. Science doesn’t do that kind of thing. At least not without assistance from other disciplines, like philosophy. And if we try to force science alone into providing us with values, there is no shortage of traps that will inevitably spring up.


Morality and the Obsession with Harm and Fairness

Where you can find contemporary moral philosophers talking about the content of morality (instead of their preferred pastime of quibbling over metaethics), you’ll often find them talking about issues concerning harm and fairness. But is this all there is to morality? What of moral prohibitions concerning food, or cleansing rituals, or burial practices? Can you just translate such norms into norms about harm and fairness? Or is the domain of morality larger than many philosophers might readily suggest?

This was one of the questions broached by ANU’s Ben Fraser in a seminar at Sydney University yesterday. Fraser’s paper was about the limits of the moral domain, specifically defending Richard Joyce’s account of morality from criticisms mounted by Stephen Stich. I won’t cover everything said by Fraser (you can read his entire paper here), but I am particularly interested in what it is that we’re really talking about when we’re talking about morality.

And I tend to believe that defining morality in terms of harm/fairness exclusively is a bit narrow – but understandable. Even so, we shouldn’t limit ourselves to issues of harm/fairness if we want to understand the full scope of morality and moral phenomena.

