December 2010 archive

Reflective Equilibrium and Political Ideology

Here’s a thought in the which-comes-first-political-ideology-or-political-psychology? department. The answer could well be: both. If so, then perhaps some bastardisation of the process of reflective equilibrium could benefit our understanding of political ideology as well as of the way people are actually motivated to behave politically.

See, for decades it’s been the remit of political scientists to explore the nature of political ideology, to construct definitions and to investigate the way people behave in a political context. Yet political science operates in a highly rarefied environment. It looks at ideology in theoretical terms, almost as if the various ideologies exist in the world to be discovered as various objective ways of being or of running society.

It also often abstracts the messy complexities of human behaviour down to the clean quantifiable lines of rational choice theory. It’s a very top-down approach, starting with theories of political organisation and then noting how these completed, coherent ideologies are disseminated down to the people.

But this approach has its shortcomings, particularly in explaining how and why individuals adopt a particular political ideology, and how that ideology motivates their behaviour. Because people aren’t rational agents, and ideologies aren’t clear-cut things that people adopt holus-bolus.

Top-down political science even had an ‘end of ideology’ crisis through the late 20th century, when ideology was nearly abandoned as a concept because the complexities of an entire political ideology were thought to be beyond the ken of the average schmo (Jost, 2006). As such, most people’s attributions and identifications with one ideology or another were thought to be incomplete, misguided or disingenuous. Ideology was on shaky ground.

Yet political ideologies are important. They do influence beliefs. And they do motivate behaviour. But not in the abstract way outlined by many political scientists in the 20th century.


Introducing Synthesis: the Science and Philosophy of Everything

There’s an academic discipline missing. Terrible oversight. About time we put it right. For the sake of simplicity, I’ll call it ‘Synthesis,’ although you can call it whatever you like. In essence, it’s the science and philosophy of everything. All at once.

The interrelation of only a few academic disciplines.

Synthesis is a massively interdisciplinary meta-discipline that seeks to weave all other fields into a single, holistic tapestry, facilitating interaction between disparate academic disciplines so they can share insights and open new avenues of enquiry.

Why do we need Synthesis?

There’s no question that academic specialisation has been a growing trend over the past couple of centuries. Specialisation isn’t necessarily a bad thing; it’s the only way we can hope to tackle the deep and complex problems that occur at the fringes of our understanding of the natural world. But there’s an increasing awareness that having dozens – if not hundreds – of siloed disciplines, each with their own language, methodology, sharp boundaries and cadre of specialists, makes fruitful conversation between disparate disciplines more difficult.

Yet, each of these disciplines is attempting to explain some facet of the very same world.


Science and Politics: Why Conservatives Don’t Get Science

Only 6% of scientists self-identify as Republican. Six per cent! And there are five times as many who don’t even have a partisan affiliation. And only 9% self-identify as conservative. Fascinating.

But not entirely unexpected.

These numbers, uncovered by the Pew Research Center, have been the topic of much discussion, sparked by this piece on Slate by Daniel Sarewitz and followed up by a number of posts on The Economist’s Democracy in America blog. Both express concern about the implications of there being so few conservatives in science. And both speculate as to the cause. First, Sarewitz:

It doesn’t seem plausible that the dearth of Republican scientists has the same causes as the under-representation of women or minorities in science. I doubt that teachers are telling young Republicans that math is too hard for them, as they sometimes do with girls; or that socioeconomic factors are making it difficult for Republican students to succeed in science, as is the case for some ethnic minority groups. The idea of mentorship programs for Republican science students, or scholarship programs to attract Republican students to scientific fields, seems laughable, if delightfully ironic.

And The Economist:

I can think of three testable hypotheses they might look into. The first is that scientists are hostile towards Republicans, which scares young Republicans away from careers in science. The second is that Republicans are hostile towards science, and don’t want to go into careers in science. The third is that young people who go into the sciences tend to end up becoming Democrats, due to factors inherent in the practice of science or to peer-group identification with other scientists.

I’d like to advance a fourth hypothesis: the psychological proclivities that predispose individuals towards conservatism and the Republican party also predispose those individuals to have little interest in science.

Contrary to the popular view that political attitudes and ideological commitments are the product of environmental factors, such as family upbringing, socio-economic conditions or rational reflection, it’s psychology that plays the dominant role in shaping an individual’s political leanings. And their career choices.


The Problem with Moral Philosophy (and Moral Psychology)

Moral philosophy is obsessed with moral judgements. As is moral psychology. And this is a problem.

Moral philosophy cares about moral judgements because it wants to figure out how to make the right ones, and how to know they’re right. Metaethicists are interested in what moral judgements themselves are.

Moral psychology cares about moral judgements because it wants to figure out by which thought processes people arrive at them.

But the problem is that morality isn’t just about judgements, it’s about behaviour. And it seems to me there’s a dearth of research showing how someone goes from moral judgement X to corresponding behaviour Y.

Or why two individuals might go from the same moral judgement X to differing behaviours Y and Z.

Or why someone can hold moral judgement X to be correct, and not perform the corresponding behaviour Y.

Or why someone could hold moral judgement X to be correct, and not know how to behave.

I could go on. But the moral of the story is clear (if you’ll excuse the pun): there is a wide gulf between judgement and action, and that gulf is tremendously important if moral philosophers and moral psychologists want to develop a complete picture not only of what constitutes good behaviour, but of how to bring it about.

My concern on the philosophy side is that even if we have a foolproof moral theory that allows us to arrive at the correct moral judgement in any situation (a scenario I find implausible, but let’s assume it’s possible), we might still fail to account for the way behaviour does or doesn’t spring from someone holding that correct judgement. We might even already have the answer, but we’ve failed to connect the dots so people are none the wiser in terms of how to behave.

There’s also the issue of moral decision making. A complete moral theory might provide the tools to evaluate a particular action or outcome and determine whether it was right/wrong or good/bad, but it might not provide the tools for individuals to make moral judgements on the fly, as they’ll need to when it comes to responding to the world around them.

As far as I can see, the vast majority of moral philosophy and moral psychology – the former with its emphasis on justification, the latter with its trolley dilemmas – has neglected the final crucial step from judgement to behaviour. Even those philosophers who have looked at practical reason and action theory still talk about both theoretically; they talk about how to understand an action or how to judge its merits after the fact.

No-one I know of gives a thorough philosophical and psychological account of how, given a certain theory or belief, to reliably turn that into behaviour on a day-to-day basis.

The closest I can think of is virtue ethicists who seek to cultivate the kind of disposition that will produce the desired behaviours. But even then, virtue ethicists need a comprehensive account of psychology to explain how certain characteristics lead to certain behaviours and why these behaviours are good.

I know it’s all too easy to get sucked into the current debates. But this is dangerous when those debates focus on only a fraction of the whole picture, leaving crucial issues untouched. Particularly when those issues are as important as how to encourage people to behave morally.

My suggested solution: don’t necessarily stop philosophers or psychologists from doing their present research, but encourage an acknowledgement that morality is about behaviour, and a completed moral theory is one that enables people to behave in accordance with the theory. This might inspire more research into that final leap. Because any moral theory that doesn’t include behaviour is not a complete moral theory.

What WikiLeaks is About (Hint: It’s Not About Exposing Individual Wrongdoing)

It seems my previous post may have been made in haste. After reading this enlightening post on Zunguzungu about Julian Assange’s philosophy, it’s now clear that his intention isn’t just to provide a medium by which whistleblowers can expose individual cases of wrongdoing; he’s instead attempting to alter the communication and secrecy landscape entirely, thus eroding the very possibility of what he calls ‘conspiracy’.

Where details are known as to the inner workings of authoritarian regimes, we see conspiratorial interactions among the political elite not merely for preferment or favor within the regime but as the primary planning methodology behind maintaining or strengthening authoritarian power.

Authoritarian regimes give rise to forces which oppose them by pushing against the individual and collective will to freedom, truth and self realization. Plans which assist authoritarian rule, once discovered, induce resistance. Hence these plans are concealed by successful authoritarian powers. This is enough to define their behavior as conspiratorial.

His idea is basically to remove the capability of ‘conspirators’ to communicate effectively and in secret, thus removing their ability to conspire. The leaks also turn the organisation on itself and foster paranoia, further reducing its ability to ‘conspire’.

The more secretive or unjust an organization is, the more leaks induce fear and paranoia in its leadership and planning coterie. This must result in minimization of efficient internal communications mechanisms (an increase in cognitive “secrecy tax”) and consequent system-wide cognitive decline resulting in decreased ability to hold onto power as the environment demands adaption. Hence in a world where leaking is easy, secretive or unjust systems are nonlinearly hit relative to open, just systems. Since unjust systems, by their nature induce opponents, and in many places barely have the upper hand, mass leaking leaves them exquisitely vulnerable to those who seek to replace them with more open forms of governance.

This is a kind of asymmetric information war that could only be fought in the internet age. And, in a way, it was probably inevitable – remember Stewart Brand’s “information wants to be free”, meaning not that information ought to be made freely available, but that the cost of distribution is so low that it becomes harder and harder to keep it from moving around.

I find Assange’s essay compelling, but I still have concerns. They centre around trust:

How do we know we can trust Assange and WikiLeaks?

For the time being, I think we can. But the very lack of transparency in its own workings – by Assange’s own theory – makes it ripe for descending into a conspiracy of its own.

Yet opening up WikiLeaks would likely cause it to stop functioning, as the leakers would suddenly be exposed, and the organisation itself would be vulnerable to being taken down, whether through its leaders being arrested or its servers disabled.

This suggests a paradox: in order for openness to thrive, there must be some restrictions on some information.

And I don’t believe there is any way to resolve this paradox satisfactorily.

That’s not to say this paradox is unique to this context; Hobbes faced the very same paradox when considering how to prevent people from descending into a war of all against all. His solution was an all-powerful sovereign state with the power to force people to cooperate rather than defect – to use a Prisoner’s Dilemma analogy.

But… how do the citizens know the sovereign won’t defect on them? I.e., how do they know the state won’t abuse its power and engage in some kind of conspiracy to further its own ends to the detriment of the people?
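Hobbes’s point can be put in simple game-theoretic terms. Here’s a minimal sketch (not from the original post, and with hypothetical payoff numbers) of a one-shot Prisoner’s Dilemma, showing why defection dominates without an enforcer, and how a sovereign’s penalty on defection changes the rational move:

```python
# A one-shot Prisoner's Dilemma with illustrative (hypothetical) payoffs.
# payoffs[(my_move, their_move)] = my payoff
PAYOFFS = {
    ("cooperate", "cooperate"): 3,  # mutual cooperation
    ("cooperate", "defect"):    0,  # sucker's payoff
    ("defect",    "cooperate"): 5,  # temptation to defect
    ("defect",    "defect"):    1,  # mutual defection
}

def best_response(their_move, penalty=0):
    """Return my payoff-maximising move, given the other player's move
    and a sovereign-imposed penalty on defection."""
    def payoff(my_move):
        p = PAYOFFS[(my_move, their_move)]
        return p - penalty if my_move == "defect" else p
    return max(("cooperate", "defect"), key=payoff)

# Without enforcement, defection dominates whatever the other player does:
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# A sufficiently harsh sovereign penalty makes cooperation the best response:
assert best_response("cooperate", penalty=3) == "cooperate"
assert best_response("defect", penalty=3) == "cooperate"
```

The sovereign, on this picture, changes the payoff structure rather than the players’ motives – which is exactly why the question of whether the sovereign itself will defect remains open.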

There is, to date, no perfect solution to this problem. The separation of powers, a free press and as much transparency as possible are the best we’ve come up with so far. And the very existence of WikiLeaks suggests that even this hasn’t been sufficient.

Another concern is whether full transparency is always a good thing. I see it as a sliding scale: more transparency means more bad acts are caught, but it also means more benign acts might be inappropriately exposed (say, revealing that a political leader is gay or atheist, causing a perfectly competent leader to be ousted for reasons irrelevant to their capability to perform their job).

Some things should remain private. And while more privacy and secrecy means fewer benign acts are exposed, it also makes it easier for bad acts to occur unseen.

There’s no perfect solution to this problem. It’s a trade-off.

So, checks and balances. It’s the best we can do. And while WikiLeaks is putting itself forward – commendably, in my opinion – as a check on authoritarian and ‘conspiratorial’ power, it needs to be aware of the potential trap it’s digging for itself.

Seems we’re at a crossroads in how information is used, and the current state is unstable. It’ll have to settle into equilibrium at some point, and it’ll be interesting to see how it does.

Who Watches the WikiLeakers?

There’s a whole lot of hubbub about the recent WikiLeaks so-called ‘dump’ (not the term I would use). But the whole escapade raises some serious questions about the ethics of leaking, and the difference between whistleblowing and breaching privacy. And it raises an even more important question about checks and balances on the whole process.

The tradition of leaking previously restricted information to expose corruption or abuses of power has a long and noble history. WikiLeaks’ publishing of the Apache helicopter attack footage from Iraq might well fall into that category; there’s good reason to believe it could be evidence that a crime was committed, and was then covered up.

But the recent ‘dump’ is different. It’s just a grab bag of diplomatic communiqués. Certainly, some of the communiqués suggest evidence of unsavoury behaviour, such as the US pressuring Germany not to prosecute CIA agents involved in illegal torture and rendition. These ought to be released.

But many others are just typical diplomatic messages, much of it amounting to little more than gossip. Releasing these not only does nothing to reveal corruption or abuses of power, it undermines the diplomatic process. The language of diplomacy is incredibly carefully couched, and even a simple message between two states can take a great deal of fine tuning before it’s suitable for release. (Steven Pinker has some cute anecdotes about diplomatic language in his book, The Stuff of Thought.*)

There’s a good reason for this: diplomacy is about touchy business, and you don’t want to risk pissing off an ally (or an enemy) because of a careless word. And what’s said behind closed doors, in briefings and conversations between two nations about a third, can be highly damaging to international relations.

There are also other communiqués that ought not be released, as The Economist‘s Democracy in America notes:

Some diplomatic cables from United States embassies will have concerned American interventions on behalf of dissidents in authoritarian countries. Release of such cables would endanger any future such American intervention, since authoritarian governments would fear that concessions to secret American requests would eventually embarrass them if the requests were made public.

Unless such material is suspected of revealing some crime or misuse of power, it should not be released.

There’s a difference between freedom of information and breaches of privacy and diplomatic privilege.

Not all information ought to be free. We value our privacy, and businesses need theirs. What if WikiLeaks began revealing personal information about individuals, or releasing the forward plans of businesses, undermining their ability to compete?

It’s unlikely WikiLeaks would go that far, but it’s taken a step in that direction by ‘dumping’ an indiscriminate collection of communiqués, many of which show no evidence of crimes being committed or abuses of power.

The key is that WikiLeaks needs to exhibit the kind of transparency its values demand of others. WikiLeaks needs oversight, and that oversight needs to be monitored. There need to be checks and balances to ensure WikiLeaks is serving the cause of whistleblowing, not just undermining governments’ confidence in being able to hold a conversation in private.

And WikiLeaks needs to acknowledge that not all information is worthy of release. One wonders how they’d feel if Julian Assange’s location was leaked, or if the identity of the leakers was revealed.

There are a number of ways this could happen, almost all of which involve decentralising control from the hands of Assange. The fact is we can’t trust any one individual with this kind of power. I tend to agree with The Economist’s Democracy in America about how this should be done:

But like other human-rights and humanitarian organisations, such as Human Rights Watch, Amnesty International, Doctors Without Borders and the International Committee of the Red Cross, it needs to lay down some clear, public ethical guidelines about how and why it does what it does. And it needs to bring in a board of directors of people from a wide range of countries, backgrounds and institutions to review the organisation’s conduct on ethical and other grounds.

I’d also add that governments, or the targets of the leaks, could be informed ahead of the leak to give them the opportunity to do the right thing and release the information themselves, or to give them a chance to offer an explanation, in case the information really is benign. The damage from a false or fraudulent leak could be greater than the damage from not leaking something legitimate.

Even though we’re living in the internet age, old fashioned checks and balances are still important. Sometimes information should move slower than we’re used to seeing these days. Sometimes information shouldn’t even be made public. WikiLeaks has an opportunity to become an important tool for exposing corruption, but it’s a powerful tool, and as we all know, with great power comes great responsibility.

*Extract from Pinker’s The Stuff of Thought (2007) page 397, quoting an American former treasury official:

At one point in my federal career, I wrote up an explanation of a complicated matter in what I considered an extremely clear, cogent manner. The senior government official to whom I reported read it carefully, ruminating and adjusting his glasses as he read. Then he looked up at me and said, “This isn’t any good. I understand it completely. Take it back and muddy it up. I want the statement to be able to be interpreted two or three ways.” The resulting ambiguity enabled some more compromise between competing governmental interests.