Scientific fraud

Alex Berezow:

A stunning report published in the Annals of Internal Medicine concludes that researchers often make “inappropriate requests” to statisticians. And by “inappropriate,” the authors aren’t referring to accidental requests for incorrect statistical analyses; instead, they’re referring to requests for unscrupulous data manipulation or even fraud.

Full report in the Annals of Internal Medicine here.

This seems… neither as surprising nor as shocking as I expected?

Some of the results are somewhat open to interpretation. 24% reported that they had been asked to “remove or alter some data records (observations) to better support the research hypothesis.” 30% had been asked to “interpret the statistical findings on the basis of expectations, not the actual results.” On the one hand, yes, this runs the risk of confirmation bias. But on the other hand, don’t you have to be somewhat hypothesis-driven when doing scientific research?
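To make the first of those concrete, here is a toy simulation of my own (hypothetical data, nothing from the Annals report) showing why the “remove or alter some data records” request is so dangerous: even when the true effect is exactly zero, quietly dropping the observations that least fit the hypothesis pushes the false-positive rate well above the nominal 5%.

```python
# Toy sketch with simulated data (not from the Annals report): how silently
# dropping "inconvenient" records inflates false positives when no real
# effect exists.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

n_experiments = 2000   # simulated studies, all with a true effect of zero
n_per_group = 40
n_dropped = 5          # "inconvenient" records silently removed
alpha = 0.05

hits_honest = 0
hits_trimmed = 0
for _ in range(n_experiments):
    # Both groups come from the same distribution: any difference is noise.
    treatment = rng.normal(size=n_per_group)
    control = rng.normal(size=n_per_group)

    _, p_honest = stats.ttest_ind(treatment, control)
    hits_honest += p_honest < alpha

    # "Clean" the data: drop the treatment values that most contradict the
    # hypothesis that treatment > control (i.e., the lowest ones).
    trimmed = np.sort(treatment)[n_dropped:]
    _, p_trimmed = stats.ttest_ind(trimmed, control)
    hits_trimmed += p_trimmed < alpha

print(f"False-positive rate with all records kept:  {hits_honest / n_experiments:.3f}")
print(f"False-positive rate after dropping records: {hits_trimmed / n_experiments:.3f}")
# The honest rate sits near the nominal 0.05; the "cleaned" rate comes out
# several times higher, with no real effect anywhere in the data.
```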

There is a real, active debate in quantitative trading, with measurable results. Basically, there are two schools of thought. One school (e.g., AQR) replaced its traditional, frat-bro/swim-team human traders, who traded by drawing hypotheses out of the data, with economics and computer science PhDs who could manipulate that data faster and better. The other school (e.g., Renaissance, Winton) does the same thing but lets computers make all the decisions. The difference is that the latter will often trade on counter-intuitive signals (“if there were signals that made a lot of sense that were very strong, they would have long ago been traded out”), while the former will generally only trade on signals that they can intuitively understand, even if they could not have arrived at those signals from first principles. Matt Levine has more here and here.

At first glance, this would imply that the news in the Annals is terrible. After all, Renaissance and Winton are purported to outperform the market, so the purely data-driven approach must work. Should we not force our scientists to be data-driven and lay aside their prior hypotheses?

But the problem is that there are very few Renaissances and Wintons in the world. The reason is that very few people are good enough at data science to differentiate between random statistical significance and counter-intuitive hypotheses that actually generate alpha. And the intuitive quant firms have not done badly! (At least, until the last few years, and that may be driven more by an uber-bull market, which makes it hard for any firm to generate alpha.)

Maybe scientific research should be “good enough.” This is super controversial, since scientists almost universally believe that changing data to fit hypotheses is wrong, but then again, these data show that scientists are doing it anyway. Think about it. You’re a budding associate professor, trying to make tenure, and you discover a result that implies that the world is flat, or that pirates are inversely correlated with global warming, or that green jelly beans cause cancer. What do you do? Do you just uncritically hit “publish”? Or do you assume that your data were wrong and “interpret the statistical findings on the basis of expectations, not the actual results”?
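The green-jelly-beans scenario, by the way, is trivially easy to reproduce. Here is another toy sketch of my own (again hypothetical, not from the Annals paper): run twenty hypothesis tests on pure noise at p < 0.05 and, on average, about one will come back “significant.” That is exactly the kind of random statistical significance it takes real data-science skill to tell apart from a genuine signal.

```python
# Toy sketch of the multiple-comparisons ("green jelly beans") problem:
# test many hypotheses on pure noise and some will look significant by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

n_hypotheses = 20      # say, 20 jelly bean colors
n_per_group = 50
alpha = 0.05

significant = []
for color in range(n_hypotheses):
    # Both groups are drawn from the SAME distribution, so every null
    # hypothesis is true and every apparent "effect" is pure noise.
    treatment = rng.normal(size=n_per_group)
    control = rng.normal(size=n_per_group)
    _, p_value = stats.ttest_ind(treatment, control)
    if p_value < alpha:
        significant.append(color)

print(f"Hypotheses 'confirmed' at p < {alpha}: {len(significant)} of {n_hypotheses}")
# At alpha = 0.05, roughly one spurious "discovery" per 20 tests is expected,
# which is the whole joke behind "green jelly beans cause cancer."
```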
