
How politics makes us irrational

Assume that many of the positions you take in the culture war are wrong

Credit: Jack Taylor / Getty

January 16, 2019   5 mins

If you did philosophy at university, you’ll remember logical syllogisms. If P, then Q; P; therefore Q. “All men are mortal; Socrates is a man; therefore Socrates is mortal.”

The idea is that the conclusion follows inexorably from the premises. If you accept that all men are mortal, and that Socrates is a man, then you cannot claim that Socrates is not mortal without breaking the laws of logic. It doesn’t mean that the conclusion is true – the premises could be false – but the argument is logically valid.

This is argumentation at its most stripped down. It doesn’t need statistics or evidence or anything; it is a mechanical, algorithmic process. If P, then Q; not Q; therefore not P.
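
For the propositional forms above, “mechanical” can be taken quite literally. Here’s a minimal sketch in Python – my own illustration, nothing from the paper – that checks an argument form by brute force over truth values: the form is valid exactly when no assignment makes all the premises true and the conclusion false.

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """An argument form is valid if no assignment of truth values
    makes every premise true and the conclusion false."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # counterexample: premises true, conclusion false
    return True

# Modus ponens: if P then Q; P; therefore Q.
print(is_valid(
    [lambda e: (not e["P"]) or e["Q"],  # if P then Q (material conditional)
     lambda e: e["P"]],                 # P
    lambda e: e["Q"],                   # therefore Q
    ["P", "Q"],
))  # -> True

# Modus tollens: if P then Q; not Q; therefore not P.
print(is_valid(
    [lambda e: (not e["P"]) or e["Q"],  # if P then Q
     lambda e: not e["Q"]],             # not Q
    lambda e: not e["P"],               # therefore not P
    ["P", "Q"],
))  # -> True
```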

And yet, according to a new paper soon to be published in the journal Social Psychological and Personality Science, whether you can correctly follow that algorithmic process depends, in part, on whether the outcome supports your political beliefs.

The paper, titled “(Ideo)Logical Reasoning”, asked several thousand people across three separate experiments to judge which of a series of syllogisms were logically valid. It is well documented that we find it harder to judge them correctly when they have counterintuitive conclusions – for instance, “All things made of plants are healthy. Cigarettes are made of plants. Therefore, cigarettes are healthy.” That’s a perfectly valid syllogism. The conclusion is obviously wrong, because the first premise is false – lots of things made from plants aren’t healthy, such as deadly nightshade, or tobacco – but the logic is correct: IF all things made from plants are healthy, THEN, etc. Still, many people will claim that the argument is flawed, because the conclusion is so jarring; they find it harder to see the logic.

What the (Ideo)Logical Reasoning authors hypothesised was that people who are strongly conservative would find it harder to judge syllogisms with liberal-sounding conclusions correctly, and vice versa. (The authors are American, so “conservative” and “liberal” map pretty well onto British understandings of “Right-wing” and “Left-wing” respectively.)

So they asked people to rate themselves from “very liberal” to “very conservative” and then asked them to judge some ideologically loaded syllogisms – “All drugs that are dangerous should be illegal. Marijuana is a drug that is dangerous. Therefore, marijuana should be illegal”, for instance, or “Judge Wilson believes that if a living thing is not a person, then one has the right to end its life. She also believes that a foetus is a person. Therefore, Judge Wilson concludes that no one has the right to end the life of a foetus.”

It turns out that conservative-minded people are quite a lot better at spotting invalid syllogisms when they have liberal conclusions, and vice versa. It wasn’t an enormous effect, but it was significant – in one of the experiments, conservatives spotted invalid arguments about 80% of the time when they had liberal conclusions, but only 60% of the time when they had conservative ones, and the effect was almost exactly reversed for liberals. The effect was somewhat smaller in the other experiments, but it was seen in all three, which were carried out on different groups of subjects.

On one level, it isn’t a huge surprise – we know that we are more likely to believe things that support our prior beliefs. But it shocked me somewhat that it applies even at this most basic level, that even rudimentary logic gets derailed to an extent by partisanship.

I spoke to Brian Nosek, one of the authors of the study, and he said that surprised him too. “We think of the biases in politics being about our beliefs,” he said. “What this shows is that it has an effect even on the basic processes of reasoning. It shows how challenging these biases are to address; if I can’t see the illogic in my arguments and you can’t see it in yours, how can we get to productive debate?”

It’s interesting to note as well that this is a very bipartisan arrangement. Both Left-wingers and Right-wingers find it harder, to roughly the same degree, to make accurate judgments in the face of an ideological headwind. This, incidentally, is also what another study – an upcoming meta-analysis to be published in Perspectives on Psychological Science – finds: a modest but consistent bias preventing us from reliably understanding arguments that undermine our beliefs, present almost equally in liberals and conservatives.

Nosek warns that it’s hard to be sure how accurate this is – “it’s the thing that’s most of interest but hardest to answer”, he says, and we need to be cautious – but certainly there’s no overwhelming evidence that either side is much worse than the other.

These studies are quite powerful, statistically speaking, and have avoided some of the obvious pitfalls that can undermine scientific studies, so although you should never trust any single study implicitly, they’re a good bet. If they’re correct, what can we do? We’re all ideological to some degree or another; we all have tribes which we consciously or unconsciously identify with.

One thing is simply to be extremely sceptical about yourself and your co-ideologists, and to assume that a large number of the positions you take in the great ongoing culture war are wrong, and that you’ll never know which ones they are.

That applies especially to the media: both to the people writing it – they’re as blinkered (I’m as blinkered!) as anyone, and just as likely to fail to spot the flaws in convenient arguments – and to the people reading it. If you find yourself nodding along with some piece which purports to show that the solution to the Brexit impasse is obviously exactly what you already thought it was, it’s worth remembering that there’s a good chance the problems with its argument would be obvious to someone who disagreed. (Though trying to do this is really uncomfortable and unpleasant.)

Nosek has another suggestion. One of the things that scientists do to avoid biasing their results is “preregistration”: essentially, publicly writing down your hypothesis in advance, so that you can’t, once you’ve got your data, chop it up in as many ways as it takes until you find something. Ironically enough, the “(Ideo)Logical” study itself was started before Nosek’s Center for Open Science began preregistering studies; it gets around the data-fishing problem, though, by using the same tests in all three experiments and by replicating some of them.

He suggests we could do something similar. If you think some policy is good or bad – raising taxes, say – then say in advance what effect you think it will have, in explicit terms. I think a no-deal Brexit will be followed by at least two quarters of negative economic growth, for example. If you then don’t see what you were expecting, it should give you pause, and make you wonder whether your reasoning was influenced by your political beliefs.

This goes double for politicians and media pontificators who actually influence policy, of course. (Incidentally, this is very much what the “superforecasters” I wrote about recently think we should do, as well.)

Mostly, though, it’s just another reminder that pointing out the other side’s biases is a really un-self-aware thing to do; the important and useful thing to do is to be vigilant for your own. Remember the syllogism: All humans are biased; I am a human; therefore even I am biased.

PS: One of the syllogisms mentioned in this piece is logically invalid. Did you notice?


Tom Chivers is a science writer. His second book, How to Read Numbers, is out now.

