Trust Me, Maybe?

In case you missed it, Stephen Hawking was in the news recently for purportedly claiming there are no black holes. It got a fair amount of press, because it’s freaking Stephen Hawking and because it sounds like a bold claim. There was some back-and-forth on Twitter about this, because all that was presented were some presentation notes (not rigorous work, i.e. no math) based on an arXiv paper (not peer-reviewed), and the actual discussion is more subtle, which means the popular science press kinda got it wrong (even more so in the headlines). There was also the observation that this got a lot more press because it was Hawking who presented it. Matthew Francis discusses this effect of celebrity in a piece at Slate.

In short, in science (as opposed to science journalism) we don’t automatically accept things from certain people just because of their status — that’s not how science rolls; if it’s wrong it’s wrong. What status offers is an increased opportunity to be heard, and credibility that your work is carefully done — you get the benefit of the doubt, but not immunity from people checking your work. While this story would have gotten less press if it had come from someone else, it probably would have gotten some, because there are plenty of pop-sci “physicists say” stories that are based on similarly untested proposals. I think this grabbing of any and all new results, and the tendency toward hyperbole, are a problem. Overselling results, some of which will end up being wrong because they were plucked when they weren’t ripe, probably undermines people’s confidence in science. It certainly gives fodder to the anti-science types.

There’s a hierarchy to the acceptance of science. We have to remember that what we’re doing is trying to be less wrong (and there are different degrees of being wrong). To do so we have to have confidence that we are right, and it’s really easy to convince yourself that you’re right, so the true test is in objectively convincing other people.

So how do we convince people? We tell them about the ideas and let them check our work. We let others weigh in, and fix things that are wrong with the idea. We iterate.

The backdrop to any publicly shared science is that you’ve done internal reviews. If it’s theory, the idea has been developed and shared with colleagues; if it’s an experiment, the proper checks have been done to make sure there isn’t some systematic or calibration effect giving you the wrong answer, lest you end up suggesting the possibility that neutrinos are superluminal or something like that.

You might then give a talk at a conference on your idea or results. Conference talks aren’t peer-reviewed (in my experience, at least) and don’t contain enough detail for much analysis, though the conference proceedings may provide this detail.

Peer review during the process of publication provides another opportunity for knowledgeable people to give feedback and object to shortcomings. But peer review is a double-edged sword of sorts. It’s a dividing line in this hierarchy: one can generally dismiss claims that are not peer-reviewed on a “you must be this tall to ride this ride” basis, so any popular article or discussion-board claim based on someone’s ill-formed idea should be taken with a huge grain of salt, and conference talks with a smaller one. But peer review does not carry with it any guarantee of correctness. There are plenty of peer-reviewed papers whose claims are later shown to be wrong; that’s part of the process.

As I see it, this is part of a larger problem in communicating science. People will take ideas at these various stages of development and run with them, whether it’s a crackpot who read a website treatise claiming that electrons are really tiny blue pyramids (and this explains e v e r y t h i n g!) or a pop-sci article that gives entirely too much weight to a conference talk. Until other scientists have had a chance to give formal feedback on a theory, or to try and replicate the results of an experiment, you have only limited confidence that the work is correct. The more confirmation you get, the higher the confidence level. What you really want is a bunch of experiments by many different groups.

If someone had measured a result for the deflection of light by the sun that disagreed with general relativity, what would happen? Back in 1920, it would have called into question Eddington’s experiment, and several more rounds of measurement would have had to be done. If it happened today, much like the superluminal neutrino result, the vast majority of the feedback would be saying the experiment must have some error in it. (Though undoubtedly a handful of people would jump in with their version of modified GR to explain the result.) Once the weight of experimental results hits a certain critical mass, the expectations swing away from needing data to confirm a theory to needing exceptional data to disprove it. Science journalists understandably don’t want to wait this long to write about results, of course, which is a bit of a conundrum, but I think the system has a tendency to overstate the confidence we have in our scientific musings and findings, even without the additional amplification of having Hawking’s name attached to things.

(another discussion that may come up at ScienceOnline)

It Truly is Neverending

Over at Uncertain Principles, Chad has more to say on the topic of yesterday’s link/post: Science Journalism vs. Sports Journalism

A very good point: one should look at less-popular sports for this comparison, and the Olympics gives us a good example of sports many of us only see every 4 years. That’s more like the situation science journalism is in.

Another point: the multi-level reporting exists if you look across multiple sources, because not all science publications are trying to reach the same audience. The trouble is that they’re competitors.

The Internet would seem to offer the ability to do this via links, but most media organizations regard links to other publications as slightly less desirable than a painful and disfiguring disease. Any reader leaving the site is seen as lost forever, so they make it as difficult as possible to get anywhere else. Most of them won’t even link to the source papers and/or press releases, which is maddening.

Amen to that, and to bloggers in general who overall do a much better job of this cross-pollination.

The Neverending Story

The eternal tug of war between science journalists and scientists. A graphical story.

Timely, as ScienceOnline is coming up this week, and this topic is sure to come up.

One of the interesting points Bee addresses is that science journalism isn’t treated like sports — nobody writes sports stories for people who know nothing about the sport being reported on. Some knowledge is assumed. I share the frustration of reading a science story and not being able to figure out what science they’re actually talking about, since it’s been simplified so much. I don’t think the multi-level story she describes will come to be anytime soon, but it’s an interesting idea.

Follow the Bouncing Ball

A drop makes waves – just like quantum mechanics?

This simple experiment creates a surprisingly complex coupled system of the driven oscillator that is the oil and the bouncing droplets. The droplets create waves every time they hit the surface and the next bounce of the droplets depends on the waves they hit. The waves of the oil are both a result of the bounces as well as a cause of the bounces. The drops and the waves, they belong together.

Does it smell quantum mechanically yet?

This looks pretty cool (watch the video in the link) but I share Bee’s skepticism that this explains QM; it reproduces a few of the basic behaviors, but that’s a far cry from explaining all of it. I think it’s a tendency of human nature to become enamored of what works and conveniently ignore what doesn’t, but as scientists we have to make sure this isn’t happening.
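Just as a toy illustration of the feedback loop Bee describes — and emphatically not the actual equations from the paper or the experiment — here’s a rough one-dimensional sketch of a “walker”: each bounce leaves behind a decaying standing wave, and the slope of the accumulated wave field at the drop’s position nudges the next bounce. The wave shape and all of the parameters below are made up for illustration.

```python
import math

# Hypothetical toy parameters (not measured values from the experiment)
wavelength = 1.0   # wavelength of the surface wave, arbitrary units
memory = 0.9       # fraction of each past wave surviving to the next bounce
kick = 0.05        # coupling between local wave slope and horizontal step

def wave(x, center):
    """Decaying standing wave left behind by one impact (assumed form)."""
    k = 2 * math.pi / wavelength
    return math.cos(k * (x - center)) * math.exp(-abs(x - center) / (2 * wavelength))

def slope(x, center, eps=1e-4):
    """Numerical slope of one impact's wave at position x."""
    return (wave(x + eps, center) - wave(x - eps, center)) / (2 * eps)

x = 0.1        # drop position, started slightly off-center
impacts = []   # where past bounces landed
weights = []   # how much of each past wave remains

for bounce in range(200):
    # Total slope at the drop: sum over all past impacts, each damped by "memory"
    s = sum(w * slope(x, c) for c, w in zip(impacts, weights))
    x -= kick * s                          # the wave it lands on steers the next step
    weights = [w * memory for w in weights]
    impacts.append(x)                      # this bounce adds a new wave of its own
    weights.append(1.0)

print(f"drop position after 200 bounces: x = {x:.3f}")
```

The only point of the sketch is the two-way coupling: the wave field is built up from past bounces, and future bounces are steered by that field, which is the drop-and-wave “belonging together” in the quote above.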