In case you missed it, Stephen Hawking was in the news recently for purportedly claiming there are no black holes. It got a fair amount of press, because it’s freaking Stephen Hawking and because it sounds like a bold claim. There was some back-and-forth on Twitter about this, because all that was presented were some presentation notes (not rigorous work, i.e. no math) based on an arXiv paper (not peer-reviewed), and the actual discussion is more subtle, which means the popular science press kinda got it wrong (even more so in the headlines). There was also the observation that this got a lot more press because it was Hawking who presented it; Matthew Francis discusses this effect of celebrity in a piece at Slate.
In short, in science (as opposed to science journalism) we don’t automatically accept things from certain people just because of their status — that’s not how science rolls; if it’s wrong, it’s wrong. What status offers is an increased opportunity to be heard, and credibility that your work is carefully done — you’re going to get the benefit of the doubt, but not immunity from people checking your work. While this story would have gotten less press if it had come from someone else, it probably would have gotten some, because there are plenty of pop-sci “physicists say” stories that are based on similarly untested proposals. I think this tendency to grab any and all new results, and the accompanying hyperbole, is a problem. Overselling results, some of which will end up being wrong because they were plucked when they weren’t ripe, probably undermines people’s confidence in science. It certainly gives fodder to the anti-science types.
There’s a hierarchy to the acceptance of science. We have to remember that what we’re doing is trying to be less wrong (and there are different degrees of being wrong), and to do so we have to have confidence that we’re right. But it’s really easy to convince yourself that you’re right, so the true test is objectively convincing other people.
So how do we convince people? We tell them about the ideas and let them check our work. We let others weigh in, and fix things that are wrong with the idea. We iterate.
The backdrop to any publicly shared science is that you’ve done internal reviews. If it’s theory, the idea has been developed and shared with colleagues; if it’s experiment, the proper checks have been done to make sure there isn’t some systematic or calibration effect giving you the wrong answer, lest you end up suggesting the possibility that neutrinos are superluminal or something like that.
You might then give a talk at a conference on your idea or results. Conference talks aren’t peer-reviewed (in my experience, at least) and don’t contain enough detail for much analysis, though the conference proceedings may provide this detail.
Peer review during the process of publication provides another opportunity for knowledgeable people to give feedback and object to shortcomings. But peer review is sort of a double-edged sword. It’s a dividing line in this hierarchy — one can generally dismiss claims that are not peer-reviewed, as a sort of “you must be this tall to ride this ride” filter: any popular article or discussion-board claim that’s based on someone’s ill-formed idea should be taken with a huge grain of salt, and conference talks with a smaller one — but peer review does not carry with it any guarantee of correctness. There are plenty of peer-reviewed papers whose claims are later shown to be wrong — that’s part of the process.
As I see it, this is part of a larger problem in communicating science. People will take ideas at these various stages of development and run with them, whether it’s a crackpot who read a website treatise claiming that electrons are really tiny blue pyramids (and this explains e v e r y t h i n g!) or it’s a pop-sci article that gives entirely too much weight to a conference talk. Until other scientists have had a chance to give formal feedback on a theory, or to try and replicate the results of an experiment, you have only limited confidence that the work is correct. The more confirmation you get, the higher the confidence level. What you really want is a bunch of experiments by many different groups.
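If you want a cartoon version of how confirmation builds confidence, here’s a minimal Bayesian-updating sketch in Python; the starting prior and the likelihoods are made-up numbers chosen purely for illustration, not anything drawn from an actual analysis.

```python
# Toy illustration of "more confirmation, higher confidence":
# start from an agnostic prior and update it with several
# independent experiments that each favor the claim.

def update(prior, p_result_if_true, p_result_if_false):
    """Return P(claim is correct | one more confirming result)."""
    favorable = p_result_if_true * prior
    return favorable / (favorable + p_result_if_false * (1 - prior))

confidence = 0.5  # assumed starting prior, purely for illustration
for n in range(1, 6):
    # assume a confirming result is 4x more likely if the claim is true
    confidence = update(confidence, 0.8, 0.2)
    print(f"after {n} independent confirmation(s): {confidence:.3f}")
```

With these toy numbers, five independent confirmations take you from a coin flip to better than 99%, which is the sense in which a bunch of experiments by many different groups is what you really want.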
If someone had measured a result for the deflection of light by the sun that disagreed with general relativity, what would happen? Back in 1920, it would have called Eddington’s experiment into question, and several more rounds of measurement would have had to be done. If it happened today, much like the superluminal neutrino result, the vast majority of the feedback would say the experiment must have some error in it. (Though undoubtedly a handful of people would jump in with their version of modified GR to explain the result.) Once the weight of experimental results hits a certain critical mass, the expectations swing away from needing data to confirm a theory to needing exceptional data to disprove it. Science journalists understandably don’t want to wait that long to write about results, which is a bit of a conundrum, but I think the system has a tendency to overstate the confidence we have in our scientific musings and findings, even without the additional amplification of having Hawking’s name attached to things.
(another discussion that may come up at ScienceOnline)
The Pioneer anomaly, Fifth Force, and Gran Sasso superluminal neutrinos illustrate theory sourcing anything (Aristotle) without being empirical (Galileo). If a levied penalty is less than profit in hand, it’s not a deterrent – it’s a business plan. Discovery is unfundable for lacking a PERT chart. Young faculty have no calculable DCF/ROI.
SUSY and quantum gravitation are empirical disasters. 40 years of our finest minds have not found the last parity violation, symmetry breaking, chiral anomaly, or Chern-Simons repair of Einstein-Hilbert action. Boson photon vacuum symmetries are not exact hadron (fermion quark) symmetries. This is endlessly observed but it cannot be Officially true.
Phys. Rev. 104(1) 254 (1956), http://prola.aps.org/pdf/PR/v104/i1/p254_1
Yang and Lee say the universe is not mirror-symmetric. This is madness.
Phys. Rev. 105(4) 1413 (1957), http://prola.aps.org/pdf/PR/v105/i4/p1413_1
It’s true. They became Nobel Laureates later that year.
Phys. Rev. 105(4) 1415 (1957), http://prola.aps.org/pdf/PR/v105/i4/p1415_1
Anyone could have been Yang and Lee – and more easily.
PNAS 14(7) 544 (1928), http://www.pnas.org/content/14/7/544.full.pdf+html
Parity violation was explicitly observed 30 years earlier. It was madness.
We now embrace a grant funding business environment in which neither Cox nor Yang and Lee can be insubordinate to halcyon ephemerides of MBA omniscience. SUSY and quantum gravitation will never work – assuring perpetual grant funding and publishing for all players. Show a lot of thigh, then forever promise higher goals achieved.