Thursday, February 12, 2015

The importance of priors!

I was amused to see the different publicity, and reaction, to two pieces of research recently. The first was that silly article about running, which claimed that too much running is bad for you. Interestingly, that BBC page seems to have changed from the tautologous "too much" to the term "running hard", which if anything makes it worse: "too much" being bad is simply the definition of what "too much" means, whereas running hard...well, that's where the research falls down. It only took a couple of minutes to find the relevant paper, which shows huge error bars on the estimated risk for hard runners, such that the confidence interval on the hazard ratio actually extends below 1 (at which point running hard would be good for you!). The underlying problem is that the study was small (there were only 36(?) runners in this group), and this simply isn't enough to show conclusively what the health effects would be. I believe that More or Less has dealt with this, though I haven't listened to it yet.
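To see why so few runners makes the interval so wide, a back-of-the-envelope sketch helps. With survival data, the standard error of the log hazard ratio is roughly the square root of the sum of the reciprocal event counts in each group, so a handful of deaths among the hard runners blows the interval wide open. The numbers below are made up for illustration, not taken from the paper:

```python
import math

# Hypothetical event counts (NOT from the actual study): the point is
# that with only a couple of deaths among the hard runners, the 95%
# confidence interval on the hazard ratio spans an enormous range.
events_hard = 2    # deaths among hard runners (assumed for illustration)
events_ref = 100   # deaths in the much larger reference group (assumed)
hr = 2.0           # an illustrative point estimate for the hazard ratio

# Standard approximation: se(log HR) ~ sqrt(1/d1 + 1/d2)
se = math.sqrt(1 / events_hard + 1 / events_ref)
lo = math.exp(math.log(hr) - 1.96 * se)
hi = math.exp(math.log(hr) + 1.96 * se)
print(f"HR = {hr:.1f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

With these invented counts the interval runs from well below 1 to far above it, i.e. the data are compatible with hard running being anywhere from protective to very harmful.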

Then just yesterday, David Spiegelhalter drew my attention to a study on the effects of alcohol, which claimed that modest drinking had no benefits (in opposition to the widely held view that it did). He explains that the study was again underpowered, such that any modest effect would by construction not be "statistically significant". The underlying problem is that, as Andrew Gelman often mentions, where an effect is probably small (but non-zero) and only weak studies with small samples are used, any "significant" result will necessarily be a huge overestimate of the effect (i.e. if the true value is x but the error bar is ±10x, then only estimates that come out much larger than x, perhaps even with the wrong sign, can be reported as significant), and any realistic estimate close to the true value x will be found "insignificant" and is therefore liable to be discarded or denied by silly scientists.

One obvious solution is to use a Bayesian approach with a reasonable prior, which in both cases would have found that the new data were insufficient to overturn what was previously believed to be the case...but that won't get high-profile papers published or sell newspapers.
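In the simplest (normal-normal conjugate) case the arithmetic is trivial, and it makes the point directly: a noisy new study barely moves a posterior that starts from a reasonably tight prior. The numbers here are invented for illustration, not taken from either study:

```python
# Normal-normal conjugate update: a minimal sketch with made-up numbers.
# The prior summarises the previously held view (a modest protective
# effect, say on the log relative risk scale); the new study is noisy.
prior_mean, prior_sd = -0.10, 0.05  # prior belief (assumed values)
study_est, study_se = 0.00, 0.30    # weak new study: null estimate, big se

prior_prec = 1 / prior_sd**2
data_prec = 1 / study_se**2
post_mean = (prior_mean * prior_prec + study_est * data_prec) / (prior_prec + data_prec)
post_sd = (prior_prec + data_prec) ** -0.5
print(f"posterior: {post_mean:.3f} +/- {post_sd:.3f}")
```

Because the study's precision is tiny compared to the prior's, the posterior mean stays almost exactly where the prior put it: the new data are simply not informative enough to overturn the previous belief.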
