

January 21, 2020

Imagine that you’re trying to decide which school you want to send your child to. Of course, your little darling is the most gifted and brilliant child in the world — anyone can see that! That time he set the headteacher’s hair on fire was only because he wasn’t feeling sufficiently challenged. Anyway, it’s time to find somewhere that will really push him. So you’re looking at the exam results of the various schools in your area.

Most of the schools report that 80% or so of their children achieve A-to-C grades in all their exams. But one school reports 100%. They all appear to be demographically similar, so you assume, reasonably enough, that the teaching is much better in that one school, and so you send little Mephiston there.

But a year later, his grades have not improved, and he is once again in trouble for dissecting a live cat in biology class. You dig a little deeper into the exam results, and someone tells you that the school has a trick. When a child doesn’t get a result between A and C, the school simply doesn’t tell anyone! In their reports, they only mention the children who get good grades. And that makes the results look much better.

Presumably, you would not feel that this is a reasonable thing to do.

It is, however, exactly what happens in a great deal of actual science. Imagine you do a study into the efficacy of some drug, say a new antidepressant. Studies are naturally uncertain — there are lots of reasons that someone might get better or not get better from complex conditions like depression, so even in big, well-conducted trials the results will not perfectly align with reality. The study may find that the drug is slightly more effective than it really is, or slightly less; it may even say that an effective drug doesn’t work, or that an ineffective one does. It’s just the luck of the draw to some degree.

That’s why — as I’ve discussed before — you can’t rely on any single study. Instead, the real gold standard of science is the meta-analysis: you take all the best relevant studies on a subject, combine their data, and see what the average finding is. Some studies will overestimate an effect, some will underestimate it, but if the studies are all fair and all reported accurately, then their findings should cluster around the true figure. It’s like when you get people to guess the number of jelly beans in a jar: some people will guess high, some low, but unless there’s some reason that people are systematically guessing high or low, it should average out.

But what if there is such a reason? What if — analogous to the school example above — the studies that didn’t find a result just weren’t ever mentioned? Then the meta-analyses would, of course, systematically find that drugs were more effective than they are.
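To see how that skews the average, here is a minimal sketch of my own (an illustration, not anything from the studies discussed here): simulate a thousand noisy trials of a drug with a small true effect, then compare a meta-analysis of all of them with one that only ever sees the "positive" results. Every number in it is an assumption chosen purely for illustration.

```python
# Minimal sketch: how dropping "negative" trials inflates a meta-analysis.
# All values below are illustrative assumptions, not real drug data.
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.10   # assumed true benefit of the drug
NOISE = 0.20         # assumed trial-to-trial sampling error
N_TRIALS = 1000

# Each trial estimates the true effect with some random error.
estimates = [random.gauss(TRUE_EFFECT, NOISE) for _ in range(N_TRIALS)]

# An honest meta-analysis averages every trial, flattering or not.
honest = statistics.mean(estimates)

# A biased literature only contains trials that "found something",
# crudely modelled here as estimates above an arbitrary threshold.
published = [e for e in estimates if e > 0.15]
biased = statistics.mean(published)

print(f"True effect:                    {TRUE_EFFECT:.2f}")
print(f"Meta-analysis of all trials:    {honest:.2f}")   # close to the truth
print(f"Meta-analysis of published only:{biased:.2f}")   # clearly inflated
```

Run it and the pooled estimate from the full set of trials sits near the true figure, while the one built only from the "published" trials comes out well above it, even though every individual trial was perfectly honest.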

And that is exactly what happens. For a variety of reasons — not all of them fraudulent, but all of them damaging — studies that find negative results have a tendency not to make it into journals, and thus can’t make it into meta-analyses; so meta-analyses systematically overstate the efficacy of drugs. (And other things, but it’s probably pharmaceutical drugs that we are most immediately concerned about.)

This doesn’t mean that all research is useless, but publication bias has a measurable effect. When scientists go and ask for unpublished data, and then redo the meta-analyses including it, they find, for instance, that unpublished trials of antipsychotic drugs find much smaller effects than published ones. Similarly, a study reported that only 51% of registered trials into antidepressants find a positive result, but that 94% of published ones do. This bias can make “ineffective and potentially harmful” drugs appear effective.

(For the record, so that I’m not guilty of an ironic version of publication bias myself, I should note that one analysis I came across found that including unpublished data was just as likely to make results look better as it was to make them look worse.)

In 2007 the US government realised that this was a terrible situation and passed a law requiring all drug trials to publish their results, positive or negative, on the website clinicaltrials.gov within one year of the trial’s completion. That law came into effect in January 2018. Now, a new analysis in the Lancet, by scientists at the University of Oxford, has looked at how that’s going. 

The answer is: not well. Of the 4,209 trials that have been completed since that date, 1,722 managed to report within one year; a further 964 reported late; and 1,523 haven’t reported at all. Contrary to what you might expect, it’s not (mainly) unscrupulous pharma companies sitting on negative data: industry-sponsored studies were much more likely to report on time. (A similar analysis into compliance with EU publication rules, by the same authors in 2018, found very similar numbers and a similar tendency for industry trials to do better.)

The sponsors of those unreported trials are literally breaking the law; if the federal government enforced it, billions of dollars in fines would be owed.

The reasons behind it are pretty obvious. Aside from the “industry wants to sell its drugs” thing, which no doubt is a factor, scientists are incentivised to get their studies published – academia’s model of “publish or perish” means that if you’re not getting your research into journals, you’re not doing well in your career. And journals, for bad but long-standing reasons, are interested in “novel” results – i.e. results that show something interesting and unexpected. So they often won’t publish a study that finds null results, because that isn’t “novel”. Both researchers and journals, therefore, have a tendency to hide boring, didn’t-find-anything results, even though they’re a vital part of the overall picture.

It’s probably not possible to say accurately what the overall impact of publication bias is. It would amaze me, though, if it’s not of the order of tens or hundreds of millions of dollars, and thousands of years of life, lost every year, simply because so many people are prescribed ineffective drugs around the world every day. This is a significant social problem.

People are trying to fix it. One of the authors of the Lancet study, Ben Goldacre, founded the AllTrials initiative a few years ago, calling for all clinical trials to be registered at the start and have all their results reported within one year. Another initiative is Registered Reports, which tries to fix the broken incentive system in science by getting journals to agree to publish studies not on the basis of their results, after those results are in, but on the strength of their methods, before the data is collected. That would avoid the drive for “novelty” and encourage scientists to make null results public.

I don’t want to overstate the case here and say that science is broken. Science works, and achieves astonishing things. But there are systemic problems which slow down its progress, and since science is the primary driver of the improvement in human lives we’ve seen in the last few hundred years, that means we are not improving lives as fast as we could.

A school that hid its negative exam results would be obviously cheating the metric. The fact that science has done it systematically, and continues to do it despite literal legal requirements, is no better.


Tom Chivers is a science writer. His second book, How to Read Numbers, is out now.
