It’s not just “Data” that can “go bad”, but the “Interpretation”.
This month’s New Scientist cover story “Hidden depths: Brain science is drowning in uncertainty” (registration required) presents a superb account of the challenges in interpreting even the most sophisticated, highest-tech data…
- “[John Ioannidis] has documented serious flaws in the ways that many – if not the vast majority of – neuroscience studies are designed, analysed and reported. That should perhaps be a warning whenever we read headlines about studies capturing snapshots of the brain on ‘love’, ‘fear’, ‘religion’ or ‘politics’. It turns out that many of those colourful brain scans may offer little more than mirages, obscuring the true picture of the human mind in action… A jaw-dropping study from the University of Michigan demonstrated that an fMRI experiment could be analysed in nearly 7000 ways… One tongue-in-cheek report showed that even a dead salmon’s brain could appear to be ‘thinking’ inside a scanner if the wrong techniques were used… 92 percent of scans examining the anatomy of conditions like autism might have missed the true answer, with many reporting links that weren’t really there.”
The article examines a number of reasons for the endemic problems in studying the brain, including ‘neuromania’, inadequate models, ‘double dipping’, data dredging, and confirmation bias. Ioannidis confronts these errors with a campaign of transparency to bring them to light (“I don’t like hiding things under a carpet. I prefer to identify issues and solve them.”). In particular, he has started the Open fMRI Project, where researchers can share their raw data with anyone who is interested, inviting more comprehensive scrutiny through a key part of the scientific method – replication (i.e., independent investigators reproducing the findings in an independent examination/experiment).