Why you REALLY can’t trust small studies: the small study effect

You’ll often see loony zealots refer you to a study showing how effective their preferred treatment is — there usually is some small study supporting the use of almost any treatment.

You’ll also often hear people reply that the study was only small, so shouldn’t be trusted. But why shouldn’t you trust small studies? Sure, they won’t provide quite as much statistical power as larger ones, but surely they can still be useful.

And that’s true. They can be useful, and they do provide important information. But a meta-epidemiological study in the British Medical Journal recently showed a really interesting fact about small studies.

The researchers highlight what is known as the “small study effect”: a very particular bias that small studies introduce into systematic reviews.

It turns out that small studies are systematically biased towards showing that the intervention they are testing is effective.

Systematic reviews pool the results of all the relevant studies on a particular issue and usually provide the very best evidence. Before major decisions are made about some particular treatment, we usually wait for a big systematic review of the literature to be published.
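
(If you're curious what that pooling actually looks like, here's a minimal Python sketch of inverse-variance, fixed-effect pooling, one common way of combining studies. The numbers are invented and this isn't necessarily the exact method any particular review uses.)

```python
import numpy as np

def pool_fixed_effect(effects, standard_errors):
    """Inverse-variance (fixed-effect) pooling: weight each study by the
    inverse of its variance, so precise (usually large) studies count more."""
    effects = np.asarray(effects, dtype=float)
    se = np.asarray(standard_errors, dtype=float)
    weights = 1.0 / se**2
    pooled = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    return pooled, pooled_se

# Toy numbers, not from the paper: one large precise trial showing a small
# effect and two small imprecise trials showing large effects.
effect, se = pool_fixed_effect([-0.10, -0.60, -0.70], [0.05, 0.30, 0.35])
print(f"pooled effect = {effect:.2f} +/- {1.96 * se:.2f}")
```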

But if lots of small studies have been done on a treatment, the systematic review that pools them can end up slanted.

These particular researchers looked at studies testing various treatments for osteoarthritis, and plotted all the studies for each treatment on a graph, with the larger trials near the top and the smallest ones near the bottom.

If a study showed the treatment was very effective, they plotted it further to the left, and if it showed it was ineffective (or had a negative effect) they plotted it further to the right.

They call these graphs funnel plots because, if the small trials are not biased, the plots will resemble funnels: the large studies group together at the top and the small studies scatter evenly on either side.
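
To make the shape concrete, here's a quick Python sketch that draws a funnel plot from simulated trials. It's not a reproduction of the paper's figures; the data are invented, with a bias deliberately built into the small trials so the asymmetry shows up.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Simulated trials (not real data): negative effect sizes mean the treatment
# looks beneficial, matching the "further to the left" convention above.
n_per_arm = rng.integers(10, 400, size=30)
se = np.sqrt(2.0 / n_per_arm)      # rough SE of a standardised mean difference
true_effect = -0.3
exaggeration = -0.8 * se           # build in a small study effect
effects = rng.normal(true_effect + exaggeration, se)

plt.scatter(effects, se)
plt.gca().invert_yaxis()           # precise (large) trials at the top
plt.axvline(0, linestyle="--")     # line of no effect
plt.xlabel("Effect size (left = more effective)")
plt.ylabel("Standard error")
plt.title("Simulated funnel plot with a small study effect")
plt.show()
```

Set the exaggeration to zero and the points scatter symmetrically around the true effect, giving the classic funnel shape.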

The results are visually striking. Far from resembling funnels, they resemble toppling towers — with small studies heavily drawing the plots to the left, towards the treatment being more effective.

The funnel plots for the 13 meta-analyses studied.

You can see just by looking at these plots that if systematic reviews are conducted, pooling all these data, the final analysis will generally be skewed towards supporting the treatment.

So why are small studies biased in this way? The study authors suggest that several factors might be at play. For one thing, there might be a selection bias: small studies that show less effect might be less likely to be published. They also suggest a number of other explanations, including participants being excluded from the analysis after being randomised into one of the arms.

They urge authors to include funnel plots like these in all systematic reviews and, if a “small study effect” is observed, to add a separate analysis that excludes all the small studies.

Update: The researchers defined “small” studies in this paper as ones with fewer than an average of 100 participants in each arm. Thanks, Simon, for pointing this out in the comments.
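
For what it's worth, here's a toy Python sketch (invented numbers, not the paper's data or method) of what such a sensitivity analysis looks like: pool everything, then pool only the trials with at least 100 participants per arm, and compare.

```python
import numpy as np

# Made-up numbers for illustration: effect estimates, standard errors and
# average per-arm sizes for five trials. "Small" follows the paper's cut-off
# of fewer than 100 participants per arm on average.
effects   = np.array([-0.15, -0.20, -0.55, -0.70, -0.65])
ses       = np.array([ 0.06,  0.07,  0.25,  0.30,  0.28])
n_per_arm = np.array([  450,   380,    40,    25,    60])

def pooled(e, s):
    w = 1.0 / s**2                 # inverse-variance weights
    return np.sum(w * e) / np.sum(w)

large = n_per_arm >= 100
print(f"all trials:        {pooled(effects, ses):+.2f}")
print(f"large trials only: {pooled(effects[large], ses[large]):+.2f}")
```

With these toy numbers the estimate from the large trials alone sits closer to no effect, which is exactly the pattern a small study effect produces.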

So next time someone points you to a small study showing how effective acupuncture is, how reflexology relieves depression or how fish oil cures everything, you can rest easy knowing that you’re under no obligation to accept the study’s conclusions. It’s much better to wait for a larger study, a meta-analysis, or even better, a meta-analysis that controls for the small study effect.

Nüesch E, Trelle S, Reichenbach S, Rutjes AW, Tschannen B, Altman DG, Egger M, Jüni P (2010). Small study effects in meta-analyses of osteoarthritis trials: meta-epidemiological study. BMJ, 341. PMID: 20639294