Quoting, but also making substantial changes, kind of like plagiarism:

A statistical test run at the 5% significance level will, by construction, accept roughly 5% of false hypotheses as true. We will also fail to accept some true hypotheses; how many depends on a variety of factors, such as the sample size. Putting this together ... a rate of 75% true.
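One way a figure like "75% true" can arise is sketched below. Only the 5% significance level comes from the text; the mix of hypotheses (200 true out of 1000, echoing the "more than 800 false out of 1000" mentioned later) and the 60% power are illustrative assumptions chosen to make the arithmetic come out to 75%.

```python
# Illustrative back-of-the-envelope calculation: what fraction of *accepted*
# hypotheses are actually true? All inputs except alpha are assumptions.
alpha = 0.05            # significance level: share of false hypotheses accepted
power = 0.60            # assumed: share of true hypotheses correctly accepted
n_true, n_false = 200, 800   # assumed mix of hypotheses under test

false_positives = alpha * n_false    # 40 false hypotheses accepted as true
true_positives = power * n_true      # 120 true hypotheses accepted

share_true = true_positives / (true_positives + false_positives)
print(round(share_true, 2))  # 0.75
```

Lower power or a higher share of false hypotheses pushes this number down quickly, which is the heart of the argument below.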

Ioannidis argues that most published research findings are false. This is plausible in his field, medicine, where it is easy to imagine that more than 800 out of every 1000 hypotheses tested are false. Studies in medicine also have notoriously small sample sizes: many studies that make the NYTimes involve fewer than 50 people. Small samples reduce the probability of accepting a true hypothesis and so raise the probability that the typical published finding is false.

The more researchers there are seeking to prove something that is not true, the more likely it is that at least one of them will find evidence that it is true, and then publish the false result. A meta-analysis would go some way toward fixing that problem, but editors and referees (and authors too) like results which reject the null. Journals rarely accept papers that fail to reject the null, so readers see mostly strong evidence against the null, increasing their belief in a (probably false) theory.

This reminds me of a recent NYTimes article that discussed differences in male/female thinking during math problems, based on a sample of 14. But male/female differences titillate the public...

The lesson: be wary of any single study; trust the result of many studies that generally agree, even if a few do not support the conclusion.

A regression analysis would seem especially prone to this problem: if you test a large number of factors that you think might cause a particular result, some of them will appear statistically significant, and hence causally connected, purely by chance.
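This multiple-comparisons problem is easy to demonstrate by simulation. The sketch below correlates an outcome made of pure noise with 100 predictors that are also pure noise; at the 5% level, about 5 of the 100 should look "significant" anyway. The sample sizes and predictor count are arbitrary choices for illustration.

```python
import math
import random

random.seed(0)
n_obs, n_pred = 500, 100

# Outcome variable: pure noise, unrelated to every predictor.
y = [random.gauss(0, 1) for _ in range(n_obs)]

def corr(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return sxy / (sx * sy)

# Under the null, |r| > 1.96 / sqrt(n) is "significant" at the 5% level
# (two-sided, large-sample approximation).
threshold = 1.96 / math.sqrt(n_obs)

count = sum(
    1
    for _ in range(n_pred)
    if abs(corr([random.gauss(0, 1) for _ in range(n_obs)], y)) > threshold
)
print(count, "of", n_pred, "noise predictors look significant")
```

Nothing here causes anything, yet a handful of predictors clear the significance bar by chance alone, and those are exactly the ones a naive analyst would report.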
