Bee has a post, Dealing with Uncertainty, up at Backreaction, on the theme that “science is never 100% certain” and how that plays out in public perception.
There are times when this seems to be a no-win scenario: if you fail to address the uncertainty and later have to revise your conclusions, you lose credibility, but if you point out the uncertainty, someone will run with it and exaggerate it. One need go no further than discussions of global warming to see this in action.
One of my least favorite phrases in this area of discussion is “for all we know.” Statements like “For all we know, the phenomenon could be caused by blargh” should be taken with a huge grain of salt, because one of the things science does is to widen the scope of what “all we know” entails, and correspondingly narrow the possible undiscovered explanations for the phenomenon. We rule things out, and attempt to do so in a quantifiable way: we limit the uncertainty. If you are doing an experiment and see something unusual in your data, you start systematically testing to see what could possibly be causing it. So if someone were to claim, “For all we know, that glitch is caused by a spurious magnetic field,” you can respond with “No, we tested the effect of a magnetic field, and eliminated that as a cause.” You do this all the time in setting up an experiment, and you continue to do it while running the experiment, doing everything you can to confirm that the correlation you see is actually causal. But I don’t think this gets portrayed very well. There’s always someone out there trying to leverage the fact that science isn’t 100% certain into portraying its uncertainty as 0% certainty, which is far from the truth.
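To make the “rule it out in a quantifiable way” idea concrete, here is a toy sketch of that magnetic-field check; the field-on/field-off data, sample sizes, and the two-sigma bound are all invented for illustration, not taken from any real apparatus.

```python
# Toy sketch: "we tested the effect of a magnetic field and eliminated it."
# All numbers are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Pretend we recorded the signal with the suspect field deliberately applied
# ("field_on") and with it shielded out ("field_off").
field_off = rng.normal(loc=1.00, scale=0.05, size=200)  # arbitrary units
field_on = rng.normal(loc=1.00, scale=0.05, size=200)   # same underlying mean

# Welch's t-test: is there any detectable shift attributable to the field?
t_stat, p_value = stats.ttest_ind(field_on, field_off, equal_var=False)

# Even a null result is quantitative: it bounds how big the effect could be.
diff = field_on.mean() - field_off.mean()
sem = np.sqrt(field_on.var(ddof=1) / len(field_on)
              + field_off.var(ddof=1) / len(field_off))
upper_limit = abs(diff) + 2 * sem  # rough ~95% bound on a field-induced shift

print(f"p-value for a field effect: {p_value:.2f}")
print(f"any shift caused by the field is below ~{upper_limit:.3f} (arb. units)")
```

The point of the sketch is that a null result isn’t just “we didn’t see anything”; it’s an upper limit on how big the effect could possibly be, which is exactly the sense in which the magnetic field gets eliminated as a cause.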
Bee notes that
As I have previously said (eg in my post Fact or Fiction?) uncertainties are part of science. Especially if reports are about very recent research, uncertainties can be high.
And I recall that Feynman touches on this in Surely You’re Joking, Mr. Feynman! Someone drew a conclusion based on the last data point in some experiment, and Feynman realized that the last data point isn’t so trustworthy: if you weren’t pushing the limits of the apparatus, you would have taken more data. It’s a valid point. And here one starts fighting the tendencies of the media, because if the result isn’t novel, it isn’t newsworthy. What ends up happening is that the least reliable results, the ones most likely to be mistaken, are often the ones making the headlines. The study that challenges a long line of other research (which, being “as expected,” was ignored) gets the notice, even though one expects, statistically, the occasional contradictory study. Such is the essence of random noise. This is made worse by the journalistic desire to show both sides of a story even when there really aren’t two sides, because the two have massively different amounts of evidentiary support. This, too, misleads the general public about what is known, what is unknown, and what level of confidence exists in science.
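To see how unremarkable the occasional contradictory study is, here is a little simulation; the effect size, error bar, and number of studies are made up, but the punchline, that a few multi-sigma outliers show up by chance alone, doesn’t depend on them.

```python
# Toy sketch: many groups measure the same effect; a few land far from the
# truth purely by chance, and those are the ones that make headlines.
# The numbers below are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

true_effect = 0.0        # suppose the true effect is exactly zero
measurement_error = 1.0  # one-sigma uncertainty of a single study
n_studies = 200

results = rng.normal(true_effect, measurement_error, size=n_studies)

# "Contradictory" studies: more than two sigma from the true value.
surprising = np.abs(results - true_effect) > 2 * measurement_error
print(f"{surprising.sum()} of {n_studies} studies are more than two sigma out "
      f"({100 * surprising.mean():.0f}%), close to the ~5% random noise predicts")
```

Run it and a handful of the simulated “studies” contradict the truth at better than two sigma, and those are precisely the ones that would get written up.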
The greatest obstacle to understanding reality is not ignorance but the illusion of knowledge. Reality is not a peer vote. “Autoritätsdusel ist der größte Feind der Wahrheit” (“Unthinking respect for authority is the greatest enemy of truth”), Albert Einstein, 1901.
“It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong,” Richard Feynman. No Higgs, no SUSY, no string theory. Get over it and start doing better (or at least different) experiments. Theory predicts what it is told to predict.
OTOH… Vote with the stupid. How can so many people be wrong?