
You’re more biased than you think

Daniel Kahneman won a Nobel Prize for studying errors — and still makes them

A scientist, searching for her blind spot. (Illustration by GraphicaArtis/Getty Images)



May 27, 2021   7 mins

You know the old joke: a man goes to the doctor and is told he only has a month to live.

“Surely not!” he gasps. “I want a second opinion!”

“Alright then,” says the doctor. “You’re hideously ugly, too.”

The misunderstanding arises because the doctor is arrogant enough to think her patient trusts her as an expert on multiple issues, when the patient was, in fact, worried about error. The doctor might have seen a positive test result for a killer disease and taken it at face value, without considering that the disease is vanishingly rare, so the test result was likely a false positive. That is, the patient might be concerned that the doctor’s judgement — because of her failure to consider the “base rate” of the disease — might have been subject to bias. In other words, skewed in a specific direction.
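The arithmetic behind that worry is worth spelling out. Here is a quick sketch of the base-rate calculation, using Bayes' rule; the prevalence and error rates below are invented purely for illustration:

```python
# Bayes' rule: how likely is the disease, given a positive test?
# All numbers are made up for illustration.
prevalence = 1 / 10_000      # base rate: 1 person in 10,000 has the disease
sensitivity = 0.99           # P(positive test | disease)
false_positive_rate = 0.01   # P(positive test | no disease)

# Total probability of a positive result, from either source
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))

# Probability the patient actually has the disease, given the positive test
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"{p_disease_given_positive:.1%}")  # about 1% -- most positives are false alarms
```

Even with a test that is 99% accurate in both directions, the rarity of the disease means a positive result is overwhelmingly likely to be a false positive, which is exactly the base-rate neglect the patient might fear.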

Alternatively, maybe the patient was concerned that the doctor had carelessly misread the test results, or even read those of a different patient. Another doctor, even one of similar skill, would be unlikely to make the exact same mistake, hence the request for a second opinion. So rather than bias, the patient might have been worried about noise: the tendency for human judgments to vary in unwanted, unpredictable and arbitrary ways.


The first type of error, bias, is well-known, thanks to the work of Daniel Kahneman, who is among the most famous psychologists in the world. As he chronicled in his mega-blockbuster popular-science book Thinking, Fast and Slow, Kahneman spent decades with his colleague Amos Tversky cataloguing all the ways human thinking can go off the rails: not just the “base rate neglect” that we saw above, but all sorts of other biases. These include “anchoring” — best explained by the sales trick where a shop marks an item with a super-high price and then offers you 50% off, even though half that inflated price is still more than you’d have paid had you never seen the initial figure. There’s also “framing”, where asking a question in different ways can affect people’s answers (would you choose to have surgery that has a “10% death rate”? What if I told you it had a “90% survival rate”?). For these and many other contributions, Kahneman remains the only psychologist ever to have won a Nobel Prize, in 2002.

There was, however, a certain irony in Thinking, Fast and Slow. Whereas the biases and heuristics that Kahneman identified have been borne out extremely well by subsequent studies, a good chunk of the rest of the book, where Kahneman talked about other scientists’ work, hasn’t. For instance, Kahneman devotes a chapter to a certain kind of social psychology study where barely noticeable “priming” stimuli are shown to participants in lab studies, with the intention of changing their behaviour. For example, one set of researchers claimed that showing people a screensaver with banknotes on it made them less likely to want to help a struggling student — because it “primed” the idea of money, and thus selfishness, in their minds.

Long story short: those studies were weak, and other scientists can’t find similar results when they try to re-run the experiments. There’s plenty of evidence for priming in language — people react faster when asked to decide which of “CHAIR” and “CHIAR” is a real word if they’ve just seen the word “TABLE”, compared to if they’ve just seen a word unrelated to furniture. But the type of priming study where a barely noticeable prime makes major, measurable changes to people’s subsequent actions? Not so much. And yet, here’s how Kahneman, in Thinking, Fast and Slow, summarised his views on that kind of priming research:

“[D]isbelief is not an option… You have no choice but to accept that the major conclusions of these studies are true. More important, you must accept that they are true about you.”

Perhaps there isn’t a name for the specific bias on display here, but it afflicts a great many scientists. They put far too much stock in individual scientific studies, which we know are subject to their very own array of biases and other errors, without double and triple-checking whether the results are robust. To his credit, Kahneman later issued a mea culpa, admitting that he’d gotten carried away about the priming studies, and should have been more sceptical.

Among the reasons that scientific studies can end up unreliable is simple chance. Numbers tend to fluctuate each time you take a sample (of people, or temperatures, or particle densities, or anything else), and the results in your sample might randomly be close or far from the “truth” of the matter. It’s all too easy, if you aren’t careful, to seize on a fluke result that’s very different from the “true” effect as you run your statistical analysis — even if there’s no bias at play. It’s all too easy, in other words, to capitalise on statistical noise. But a study suffering from too much noise fails to tell us about reality, and instead tells us more about the random quirks of its particular dataset.
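A minimal simulation makes the point. Draw several small samples from the same population and the estimates scatter on their own, with no bias anywhere in sight; every number below is invented for illustration:

```python
# Five "studies" sampling the same population: pure statistical noise, no bias.
import random

random.seed(0)        # fixed seed so the sketch is reproducible
true_mean = 100       # the "truth" each study is trying to estimate

sample_means = []
for _ in range(5):
    # Each study measures only 20 people, with individual variation (sd = 15)
    sample = [random.gauss(true_mean, 15) for _ in range(20)]
    sample_means.append(sum(sample) / len(sample))

print([round(m, 1) for m in sample_means])
# The five estimates disagree purely by chance; with small samples,
# a fluke result far from 100 is easy to mistake for a real effect.
```

None of these hypothetical studies is biased, yet each one lands somewhere different, which is exactly the kind of variation a careless analyst can "discover" and publish.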

Which brings us to Kahneman’s latest project. As we noted above, human judgements bounce around in the same noisy way as statistical samples: one doctor might give one diagnosis (a month to live!) to a patient, whereas another, given the same information, might say something quite different (you have years ahead of you!). Five separate judges might give five very different sentences to criminals who’ve committed the same crime. Different examiners might give hugely different grades to the same essay. And so on.

Teaming up with the legal scholar Cass Sunstein (of Nudge fame) and the management researcher and consultant Olivier Sibony, Kahneman has written a whole book on this phenomenon entitled Noise: A Flaw in Human Judgment. The authors argue that, whereas biases are regularly invoked to explain mistakes, far fewer people understand that errors also come about through sheer noise. We shouldn’t be too puzzled as to why we are biased towards discussing biases: they are much more fun to think about than noise (“aren’t people silly for not knowing about base rates!”), and Kahneman has devoted almost his entire career until now to explaining them. His latest book is an attempt to redress the balance.

The three authors build a taxonomy of different kinds of noise, just as Kahneman and Tversky did for biases. They talk about “system noise”, where supposedly interchangeable human judgements end up being far more varied than we’d want. Some of this system noise is caused by bias: if some judges always give harsher sentences and some are always more lenient, they’re individually biased. But on aggregate they make the system noisy. As the authors write, it’s no good to say that on average a fair sentence is meted out if the system is routinely over and under-sentencing people. There might also be idiosyncrasies in how judges approach individual cases, and even the same judge on two different occasions might give very different sentences, not for a good reason but because they’re affected by the mood they happened to be in on each day.
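A toy example of that distinction, with invented numbers: two judges biased in opposite directions can produce a system that looks fair on average yet is wildly noisy for the individuals passing through it.

```python
# Hypothetical sentences (in months) for identical cases.
# Judge A is systematically harsh, Judge B systematically lenient.
fair_sentence = 24
judge_a = [34, 36, 33, 35]   # biased high
judge_b = [13, 14, 12, 15]   # biased low

all_sentences = judge_a + judge_b
mean = sum(all_sentences) / len(all_sentences)
spread = max(all_sentences) - min(all_sentences)

print(mean)    # 24.0 -- the system is "fair" on average
print(spread)  # 24 -- yet identical crimes draw sentences two years apart
```

Each judge's individual bias cancels out in the average, but the system as a whole is noisy: which sentence you get depends on the luck of the draw.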

Alas, unlike the menagerie of psychological biases, the different noise types tend to blur into one another — and the bland names they’re given (as well as “system noise” there’s “level noise”, “pattern noise”, et cetera) don’t do much to help. Despite its entertaining subject, many of the millions who bought Thinking, Fast and Slow might recall not reaching the end – it was something of a slog. Noise is similar. It purports to be aimed at the general public, but the style is drab and tedious (try not to let your eyes glaze over during the chapter called “The Mediating Assessments Protocol” — or even just when reading its title). If the book itself were a noise, it would be a drone.

Also dismaying is the discovery that Kahneman doesn’t appear to have learned the lessons from Thinking, Fast and Slow: he and his co-authors cite a few very unstable-looking studies, including a famous, but heavily criticised, study on Israeli judges giving harsher sentences when they’re hungrier (this one is also mentioned in the publicity for the book), and a very ropey-looking study about calorie labels on food packaging. On top of that, some statistically-minded readers have discovered some howlers in the book’s discussion of correlation and causation.

But dullness and sloppiness aside, are the authors correct in that main argument, about how pressing it is that we understand noise, and how underestimated an issue it is? Is Kahneman right to try to balance out his older research with this new focus? Yes and no.

On the one hand, their claims that nobody thinks about noise — Sunstein told one interviewer that “we think we’ve discovered a new continent” — are contradicted by the book itself. They themselves discuss decades-old noise-reduction attempts, such as when the US introduced mandatory sentencing guidelines in 1984 (which admittedly are now only advisory, having been found unconstitutional by the Supreme Court in 2005). Many other countries have various levels of mandatory sentencing, specifically in an attempt to iron out judge-to-judge inconsistencies.

On the other hand, there clearly is a lot of noise in many of our systems — and between different systems too, as the erratic response to the pandemic from country to country has shown. Anything that draws attention to the gap between the stated intention (systems that are consistent, on-target, and fair for all) and the actual outcome (systems that are not just biased but also suffused with unwanted randomness) is no bad thing. But many of the authors’ suggestions for dealing with noise — the aforementioned guidelines and checklists; taking the average of many different judgements; improving the skill levels of those doing the judging — are either crashingly obvious or already widely-used.

Still, introducing more people to the concept of noise, and what can be done about it, is worthwhile. It’s a shame you’d have to screw up your eyes, and skip a few of the draggier chapters, for Noise to serve this useful purpose. The subject deserves to be illuminated with good-quality studies and evidence, not the very variable — one might even say noisy — list of references that the authors adduce.

It’s always good to be reminded that the world is a complex, noisy, often-ironic place. Even complicated systems with rulebooks and procedures can produce unfair and inconsistent outcomes. Even well-intentioned people’s judgements can vary dramatically in unintended ways. Even pop-science books that earn seven-figure advances can be extremely boring. Even scientists who’ve spent their careers building good-quality evidence can lower their standards when discussing fields other than their own. And — as I’m sure Kahneman himself would be the first to admit — even world experts in human reasoning can make silly mistakes. It isn’t just at the doctor’s surgery where it might be worth asking for a second opinion.


Stuart Ritchie is a psychologist and a Lecturer in the Social, Genetic and Developmental Psychiatry Centre at King’s College London


59 Comments
zacharia77
2 years ago

Very often a book could have been written as an essay. The central idea needn’t be stretched into a long and tedious book just because the idea of a book is more attractive.

Fraser Bailey
2 years ago
Reply to  zacharia77

Yes, I have said this for some years and have read countless such books, including Thinking Fast And Slow. A couple of notable exceptions, which I would recommend to anyone, are Paper Money Collapse by Detlev Schlichter, and The Master And His Emissary by Iain McGilchrist.

Galeti Tavas
2 years ago
Reply to  Fraser Bailey

How about ‘The Creature from Jekyll Island: A Second Look at the Federal Reserve’ about the global money elites running the world? Have not read it yet, but sounds great – and I know you read a very great deal Fraser. Unherd is too dainty aesthetically to ever discuss money, so I ask the BTL guys…
“Where does money come from? Where does it go? Who makes it? The money magicians’ secrets are unveiled. We get a close look at their mirrors and smoke machines, their pulleys, cogs, and wheels that create the grand illusion called money. A dry and boring subject? Just wait! You’ll be hooked in five minutes. Reads like a detective story – which it really is. But it’s all true. This book is about the most blatant scam of all history. It’s all here: the cause of wars, boom-bust cycles, inflation, depression, prosperity. Creature from Jekyll Island is a “must read.” Your world view will definitely change. You’ll never trust a politician again – or a banker.”

Jake Jackson
2 years ago
Reply to  Galeti Tavas

Anyone who “trusts” a politician or a banker deserves the screwing that will result.

D Bagnall
2 years ago
Reply to  Fraser Bailey

Thank you for the suggested reading. I remain curious and have read on both subjects but was unaware of those authors.

CHARLES STANHOPE
2 years ago
Reply to  zacharia77

The Bible being a good example.

Galeti Tavas
2 years ago

Well, we do always note the brevity of your posts, not even achieving a sentence on this one.

CHARLES STANHOPE
2 years ago
Reply to  Galeti Tavas

Why waste words?

Galeti Tavas
2 years ago

Why Not?

Arnold Grutt
2 years ago

It’s not a book. It’s a collection of books: τὰ βιβλία. You’re not expected to read it all at once (or even ever).

Brian Dorsley
2 years ago

Christianity is a desert weed that doesn’t flourish in a lush environment.

imackenzie56
2 years ago
Reply to  zacharia77

As a former publisher of professional/business books, the explanation is simple: only by stretching a small (even if worthy) idea into a book can any money be made.

Rasmus Fogh
2 years ago

Maybe Mr Kahneman has his own bias: he believes that humans are rubbish at thinking and reasoning, and is happy to accept any result that confirms his opinion.

When first introduced to his ideas, in a company training course, I was convinced that management was deliberately trying to undermine our trust in our own judgement, in order to make us more obedient to management orders, stupid or otherwise. Crucially, nothing was said about how to identify or correct for bias, only that we are riddled with it. Trying to finish ‘Thinking fast and slow’ later, I saw that the course was actually quite faithful to Kahneman’s ideas. As I remember, he thought that a problem like officer candidate selection could best be handled by removing all human judgement from the process and replacing it with something like a twelve-line Python program. He may or may not be right, but short of handing over the running of our lives to some primitive computer program, what can we use it for?

Johnny Sutherland
2 years ago
Reply to  Rasmus Fogh

I have difficulty believing this – I actually upvoted you <G>

Fred Dibnah
2 years ago
Reply to  Rasmus Fogh

You could use it to become more self aware.

Peter LR
2 years ago

And where does peer pressure come into making judgments and even conclusions from observations? Is it the case that institutions are making ‘woke’ decisions because proponents are elevating ‘noise’ and they bow to peer pressure?
One could perhaps argue that this article is influenced by noise in changing the generic pronoun from male to female in the joke. Women don’t make those kinds of personal remarks to people: it could only ever be an insensitive bloke.

Jon Redman
2 years ago
Reply to  Peter LR

When men get together they insult each other and they don’t mean it. When women get together they pay each other compliments and they don’t mean it.

Chris Wheatley
2 years ago

For me, the most interesting article of the year, which makes me a bit weird. One eminent psychologist criticising the work of another eminent psychologist in a way which the ordinary person can understand. Two problems for me in this:
As I have said before, I don’t really think of medics as scientists. The bottom line is that they do make a lot of mistakes, especially the experienced ones. In the USA there was an extensive trial of a computer-based diagnosis system which always beat the doctors but especially made the experienced ones look very bad – because medical ideas change very quickly and the medics don’t change fast enough as they get older.
The explanation of the statistics showed the one important thing – that we focus too much on averages. For example, I have read many times that a lockdown for Covid should not have happened because the average age of death was 82. Here, the average is not necessarily the correct figure to use – the way the average is made up can sometimes be more important.
So, a psychologist criticising a psychologist about the abuse of science – both of them use statistics in their work but neither is a scientist. Arguably, we could do with fewer psychologists and more scientists.

Andrew Harvey
2 years ago
Reply to  Chris Wheatley

You mean like those “scientists” who created the Covid pandemic and have been furiously trying to cover their trail for the past 18 months?

Lab leak? What lab leak? I didn’t see a lab leak.

Lockdowns are good for you.

Here, have a mask.

Chris Wheatley
2 years ago
Reply to  Andrew Harvey

Don’t understand. If you mean that all scientists should be treated with suspicion and all findings should be taken with a pinch of salt, I suggest that you should suspect all forms of transport as unsafe, that all bridges are about to collapse, that phones don’t really work (just an evil government trick).

You can only really not believe in science if you are prepared to go back to nature. You should stay in comfort behind your computer and make clever comments. Oh, yes – a computer, another evil trick.

Jon Redman
2 years ago
Reply to  Chris Wheatley

It depends surely. There’s science and there’s science.
There’s practical science and there’s prediction / woo-woo lent spurious authority by claiming that it’s “science”.
Phones and computers objectively do work, and bridges usually don’t fall down.
Models of spacecraft trajectory work pretty well. Models of car crash risk also work pretty well. It’s no coincidence that the science in these – the physics, the statistics – produce replicable results.
No problem with the science in those. Purportedly predictive models of the likelihood of a global financial crash risk, however, or COVID fatality, objectively don’t work at all. Climate science and psychology likewise. They don’t become science just because they happen to use an Excel spreadsheet overlay on top of some political assumptions that fundamentally aren’t science.
The challenge is in how you recognise them from each other and respond.

Rasmus Fogh
2 years ago
Reply to  Jon Redman

Climate science and COVID fatality prediction are actually decent, even if they are not quite as good as weather forecasting. The system is too complex to understand fully, but at least the main mechanisms are understood. It is just that a lot of people do not like their conclusions. Finance and psychology are a lot worse off.

Johnny Sutherland
2 years ago
Reply to  Rasmus Fogh

Anywhere that the claim “the science is complete” is made is not science. Climate science in terms of forecasting is about as accurate as a 1960s weather forecast for the month after next.

COVID fatality prediction – OK I’ll bite – how do you know?

Rasmus Fogh
2 years ago

It is a question of what you have to work with. Pure curve-fitting without understanding is unlikely to work – which is why you cannot extrapolate your way to future stock prices. Economic predictions and psychology have no well-understood laws to make a secure basis for anything. With climate or COVID we have enough of a basis that our predictions should at least bear some reasonable relationship to reality. For more precision than that you would need detailed knowledge.

On the climate we know that CO2 in the atmosphere has increased steadily, we know that the first-order effect is to heat up the planet, we know at least a fair bit about how the different parts of the system interact, and our data do say the planet is heating up and the glaciers are melting faster. If our models fit the data and predict that further increasing CO2 will further increase temperatures that does sound at least plausible.

With COVID we know (or are getting to know) how the disease spreads, how long you are infective, the proportion of deaths in each age group. We cannot really model the social interactions or population behaviour, but again, we have a handle on what could happen.

Rasmus Fogh
2 years ago
Reply to  Jon Redman

A long, complex and contradictory discussion on which unnamed climate scientist may have made an exaggerated throwaway remark, and why that would mean that all scientific conclusions on the future climate can be ignored. Rather like saying that Boris Johnson’s (alleged) comment on ‘Let the bodies pile up’ should be proof that lockdowns work. I am just not buying it.

Wade’s article on the lab leak was convincing because it addressed the actual arguments, not the bad faith of the people he disagreed with, and gave a convincing alternative explanation for the available facts. If there is someone who can do the same for climate change it will be worth the time it takes to read it. At least it would hopefully tell us what these people believe is happening, and why, not just why everybody else should not be trusted.

shively
2 years ago
Reply to  Rasmus Fogh

…even if they are not quite as good as weather forecasting.

The irony is strong in this one. 🙂

Norman Powers
2 years ago
Reply to  Rasmus Fogh

Weather forecasts are only really reliable about 3 days into the future, and they don’t even try to do forecasts more than 10 days into the future (outside of climatology).
So what you’re saying is that epidemiology is a bit worse than something that is only reliable 3 days into the future. Yes, that sounds about right. So why do they try to predict trends over a span of months or years?

Rasmus Fogh
2 years ago
Reply to  Norman Powers

Because they are predicting trends, not individual events, which is a lot easier. And because that is the question we need an answer for.

Norman Powers
2 years ago
Reply to  Rasmus Fogh

This is an academic distinction – a “trend” in epidemiology is started by an “event”, because they don’t try to predict when outbreaks start at all.
Also, it’s clearly not easier because their predictions are always either wrong, or useless, or both!

Jon Redman
2 years ago
Reply to  Rasmus Fogh
CHARLES STANHOPE
2 years ago
Reply to  Jon Redman

Or in other words it is time to:
“Prepare to repel boarders!”

Rasmus Fogh
2 years ago
Reply to  Jon Redman

Yes, the Guardian wrote an exaggerated warning about the possible future in 2005. Again, try telling us what you think is happening and why, that would be a lot more constructive.

Alex Lekas
2 years ago
Reply to  Chris Wheatley

I would suggest that people who traffic in absolutes should be viewed with suspicion. We were told unequivocally that lockdowns and mandates and masking were the answers, and that to question our betters was tantamount to heresy. How’d that work out? I have a lot more respect for a person with the humility to say “I don’t know” or “I’m not sure” when one of those is the real answer. The others are not practicing science; they’re resorting to scientism with a public all too happy to parrot them with an appeal to authority.
You are spot on about medical mistakes; they’re among the top causes of death in the US. Turns out doctors are not gods, after all, just men and women with all the inherent flaws that come of being human. Seems having biases is part of that, too.

Last Jacobin
2 years ago
Reply to  Alex Lekas

Were we told ‘unequivocally that lockdowns and mandates and masking were the answers’? I thought they were proposed as the best possible guess as to what to do, based on the information available and an assumed priority of minimising imminent deaths or illness.

Alex Lekas
2 years ago
Reply to  Last Jacobin

Those things were not “proposed.” They were mandated. No business closed its doors voluntarily. In many jurisdictions, masks were not suggestions.
Had various officials just said “we recommend” and let adults make their own adult decisions, that would be quite a different affair.

Last Jacobin
2 years ago
Reply to  Alex Lekas

I see what you mean. Yes, in some places they were made laws. I suppose my point was that the arguments behind the laws weren’t presented (to my ears, anyway) as unequivocal fact but as the best available option. Making them compulsory? If some UK businesses had not been required to shut down then the UK furlough scheme would not have been plausible.
As has been gone over many times, letting adults make their own decisions as to whether they do stuff to protect other people doesn’t necessarily lead to those adults making the decisions that protect other people.

John Jones
2 years ago
Reply to  Alex Lekas

That attitude helps explain why the US did such a poor job of handling the pandemic.

Galeti Tavas
2 years ago
Reply to  Last Jacobin

They were mere political manipulation. Lockdowns and masks were not to save anything, but for nefarious political agenda. Take 50 USA States, look at WHO was in charge of each, and what they made their state do, that was the thing Who, not What, not the covid. Left never waste a good crisis.

Johnny Sutherland
2 years ago
Reply to  Chris Wheatley

For me the problem is that far too many “ologies” have been declared sciences. If you cannot falsify and repeat then it’s not science. Most of the soft sciences should disappear in a mushroom-shaped cloud.

Rasmus Fogh
2 years ago
Reply to  Chris Wheatley

If medical ideas change very quickly, how do you know that today’s ideas are the correct ones? It is not surprising that a program recently written to follow the most modern ideas, well, conforms to the most modern ideas. But unless you have some independent confirmation it does sound a bit like it is marking its own homework.

Chris Wheatley
2 years ago
Reply to  Rasmus Fogh

Yes but you have to ask, “So what?” My point is that medical science comes from scientists, people who are trained scientifically, who use experimentation to confirm or deny their findings.

By this definition, medics (medical doctors) are not scientists – they merely do as they are told. Psychologists have almost no scientific training but they do perform experiments, often badly.

Medical doctors are taught to observe, to count symptoms, how to react when those symptoms occur. They do this by rote. People believe that they get better with age because of experience. In fact, they get fixed in their ways and don’t change as quickly as they should. Therefore, they make a lot of mistakes in diagnosis, in emphasis and in bedside manner.

Fred Dibnah
2 years ago
Reply to  Chris Wheatley

There was a programme in the last year or two on whether to give oxygen to a person having a heart attack. Over about three decades, three different studies had shown that patient outcomes were worse when oxygen was given. They were trying to get the message across to health professionals.

Norman Powers
2 years ago
Reply to  Chris Wheatley

One of the big learnings for the public in the past year is that medical science doesn’t actually come from generic “scientists” at all, it comes from a grab-bag of pharma firms (who seem to be competent) and academics pretending to be scientists (who don’t). Any type of “science” that makes predictions but never validates them against the real world is not actually a science at all, and medicine is absolutely filled with that sort of thing. Nobody within the scientific institutions seems to notice or care, and they just flatly reject all outside criticism.
That’s before we get to the maths errors, the programming errors, the logic errors … frankly having read their output, it left me terrified of going into a hospital. The health literature is flooded with papers that look superficially like what medical science is meant to look like, but then you scratch the surface and discover the whole thing is a movie set in which 80% of the buildings are actually just facades and everything falls down if you lean on it.

Andrew McDonald
2 years ago
Reply to  Norman Powers

On the other hand, would you rather be treated for a serious illness in a 2021 UK hospital or a 1971 UK hospital, and who do you think did the research that makes 2021 (even under current pressures) the obvious choice?

Johnny Sutherland
2 years ago
Reply to  Rasmus Fogh

Easy – it’s because if the patient is male the system says he wants to be female and have bits chopped off and other bits stuck on; if female, she wants to be male and have some bits chopped off and other bits stuck on.

There you go – a fully modern medical diagnostics system.

Fred Dibnah
2 years ago
Reply to  Chris Wheatley

My wife would agree with you about (most) doctors not being trained as scientists. She would also tell you that the Lancet is not a scientific journal but a magazine for doctors.

Fran Martinez
2 years ago
Reply to  Chris Wheatley

Averages are meaningless when the shape of the distribution is too skewed or, worse yet, multimodal. Anyone in the hard sciences understands that, but I don’t think medics, biologists, or people in the social sciences are very aware of it.

Last Jacobin
2 years ago
Reply to  Chris Wheatley

Interesting comment and I’ve never really thought about whether medics are scientists before but the more I think about it the more it rings true that they’re not. Making a diagnosis, prognosis, deciding on treatment uses some data produced by science but then lots of other stuff that is not science as well. And computer programs have been found to be better at reading scans than radiologists.
Maybe we need more statisticians and computer programmers and fewer psychologists and scientists. Though Psychology could then re-badge itself as a sort of interpretive art form – which wouldn’t necessarily make it any less valuable.

Galeti Tavas
2 years ago
Reply to  Chris Wheatley

Chris, you are a very worthwhile poster to read, but all I got from the above article was gibberish arranged to sound like it had meaning. When I hear “psychologists” I think Astrologer, Phrenologists, Primal Scream Therapy, Freud, Shaman, and so on. I guess unconscious bias.

imackenzie56
2 years ago
Reply to  Chris Wheatley

There is a very good reason that I am “biased” in favor of young doctors and young dentists from only top schools. Experienced members of professions are the most outdated and dangerous and the doctor who came in last at Grenada Medical School is still called Doctor. Continuing Education is a joke–I know, I provided it at a handsome profit for many years to financial advisors and CME (the medical version) is no better.

Johnny Sutherland
2 years ago

Despite its entertaining subject, many of the millions who bought Thinking, Fast and Slow might recall not reaching the end – it was something of a slog.

I’m one of those millions. I gave up not because it was a slog but because it was give up or start ranting about conclusions being drawn from minuscule studies being extrapolated to the whole of humanity.

Later I read Science Fictions by Stuart Ritchie which confirmed my bias.

Brendan O'Leary
2 years ago

I’m another one who didn’t finish Thinking Fast and Slow.
Not that I disagreed with it, but because it merely confirmed what a lifetime of experience and self-criticism has taught me. And then repeated it in each chapter.
The really intriguing question was: “Is this stuff really revelatory to academics? They must be even more out of touch than I thought” (with my inbuilt bias).

Annette Kralendijk
2 years ago

Humans have biases – News at 11.

Kremlington Swan
2 years ago

Sounds like that may be an interesting read. I first came across the concept of cognitive bias in The Mechanism of Mind, published by Edward de Bono in 1969.
I have no idea whether his model of cognition is widely accepted or not. I suspect that rather fewer people than might be desirable understand the concept of a self-organising system, which is central to the model (as far as I understand it).

Jake Jackson
2 years ago

Long story short: those studies were weak, and other scientists can’t find similar results when they try to re-run the experiments.

As those who have looked into it know, there is a crisis of replication in the “scientific” world. Strip away the verbiage, and the bottom line is that there is a lot of politics in “science,” going back to Galileo and further. Whether religious (Galileo) or academic careerism, or today’s rush for university grants tied to media attention and public superstition, the scientific method, which depends on independent replication, is all too often observed in the breach.

My favorite example is at the link. This is a British site, so I do not expect readers here to be familiar with it. The example is very far from trivial, and I can cite other recent examples.

http://www.detectingdesign.com/harlenbretz.html

Jake Jackson
2 years ago

Hey Unherd, what’s with the censorship? Are you trying to become MSNBC or something?

Blanche Osgood
2 years ago

Are you biased if you hate everyone and everything?

Peter Mott
2 years ago

I think the sub-editor has inserted a very noisy title.