What's it Like?

What It Felt Like to Test the First Submarine Nuclear Reactor, with substantial quotes from an earlier article

This was of interest to me, owing to my ~5 year stint as an instructor in the nuke program. Some of the details point toward Rickover’s vision; things like realizing that more could be learned by building the test reactor in the same configuration that a sub’s reactor would have in a submarine: starting with a prototype in any other configuration would leave too many unknowns when the “operational” configuration was built (making systems more compact invariably introduces new problems), and too much time would be wasted. And the general attitude of over-engineering the reactor: scaling down features is usually far easier than beefing up or adding new ones.

I Got Those Light Emitting Diode Blues

The Nobel prize would cheer someone up. Chad has an excellent summary of this over at Uncertain Principles, which includes many links to more good stuff.

Nobel Prize for Blue LEDs

That [difference between red and blue LEDs] seems like a pretty small change, dude. How is that worth a Nobel?
Well, because it’s really frickin’ hard to do.

It’s not a paradigm shift in terms of the basic physics, but it’s a ton of hard work and new technological development, and richly deserves the Nobel.

Another good piece at Starts With a Bang

It's Not a Proper Cat

Last week there was a splash in the news about a new particle that had been discovered, called a Majorana particle. Sadly, the coverage was disappointing, but since I had spent most of the week standing on my porch and shaking my fist at things (and it’s not really my area of physics), I didn’t blog about it.

In short, this new discovery was a quasi-particle, i.e. a composite system, which was information that was buried in every article I read. To me this is reminiscent of the magnetic monopole coverage from a while back, which was another quasi-particle. Interesting, to be sure, but not really what was advertised in the headline.

Turns out, I wasn’t the only one a little miffed at how it was reported. Jon Butterworth was, too.
Majorana particles – Fundamentally confusing

… I was excited to read about the new particle, and somewhat disappointed when I did so to find out that it is not a fundamental Majorana fermion, still less a neutrino. A bit of a let-down for me and my particle-physics colleagues. Nevertheless, the result is interesting for a number of reasons.

What has been seen is a quantum state in a one-atom-thick wire which in a certain energy range behaves like a Majorana fermion. It is not a fundamental particle, it is a composite state, and the behaviour emerges from the interactions of atoms, electrons and photons, described by quantum electrodynamics, in which all fermions are Dirac. The fact that Majorana behaviour has been predicted, and then observed, to emerge as collective behaviour from a “more fundamental” (i.e. higher energy, shorter distance scale) theory is fascinating.

Neat stuff, no need to sex it up with the misleading inference that it’s an actual particle. And a good explanation of what’s what to boot.
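For reference (my gloss, not from the linked article), what makes a true Majorana fermion special is that the field is its own charge conjugate:

```latex
\psi = \psi^{c} \equiv C\,\bar{\psi}^{\,T}
```

where $C$ is the charge-conjugation matrix. In other words, a Majorana fermion is its own antiparticle, while a Dirac fermion has $\psi \neq \psi^{c}$. The quasi-particle only mimics this behaviour in a limited energy range.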

Obscure title reference

Have a Little Trouble With Bears, Did Ya?

(You can only go to the “the hard is what makes it great” well so many times.)

Research is Hard

Yes, you have to learn lots of math, physics, programming, and many other related things in order to tackle new and interesting research questions in astronomy. The same is true for many fields. But at the end of the day? We are all banging our heads against walls over a minus sign, or a factor of 2, or mixing up log-10 with natural log, or losing track of which star is which. This kind of “stupid mistake” hurdle is what really makes research hard.

The sheer volume of little details makes it inevitable that at least one will be wrong the first time through any problem, whether theoretical or experimental.
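The log-10 vs. natural log mix-up is a good concrete example. A trivial sketch (numbers chosen for illustration) of the mystery factor it produces:

```python
import math

# A classic "stupid mistake": using the natural log where log base 10
# was meant. The two differ by a constant factor of ln(10) ~ 2.3026 --
# exactly the kind of mystery factor that eats an afternoon.
value = 1000.0
base10 = math.log10(value)   # 3.0
natural = math.log(value)    # 6.907...

# The ratio is always ln(10), independent of the value:
print(natural / base10)      # 2.302585...
```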

Also note the whole “we first tested this on data where we know the answer” bit. Good protocol.

A Cold (War) Light

Weapons-Grade Private Enterprise

In 1991, the Cold War ended without making the world immediately safer. The Soviet Union had split up: Russia was dead broke, and much of its nuclear arsenal was split among the newly-independent countries of Belarus, Kazakhstan, and Ukraine, which were also broke. The reasonable fear was that the nuclear stuff and the nuclear scientists would go to the highest bidder. True, countries were negotiating how to get rid of nukes and the stuff of which nukes are made, but international negotiation is slow and international bidding was likely to be much faster.

That fall of 1991, Neff wondered whether Russia could un-enrich its weapons-grade uranium, sell it to the U.S., and the U.S. would pay in dollars and use the un-enriched uranium to fuel its civilian nuclear reactors.

Interesting bit of swords-to-ploughshares history.
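The “un-enrich” step in the quote is essentially a mixing problem: dilute highly enriched uranium with low-enriched blendstock until you hit reactor-fuel concentrations. A back-of-the-envelope mass balance, with enrichment figures that are illustrative assumptions on my part (not the actual contract numbers):

```python
# Down-blending as a simple mass balance on U-235 concentration.
# All enrichment fractions below are illustrative assumptions.
heu = 0.90     # assumed weapons-grade material: 90% U-235
blend = 0.015  # assumed low-enriched blendstock: 1.5% U-235
leu = 0.044    # assumed reactor-fuel target: 4.4% U-235

# Fraction f of the final fuel that comes from the HEU satisfies
#   f*heu + (1 - f)*blend = leu
f = (leu - blend) / (heu - blend)
print(f)      # ~0.033
print(1 / f)  # ~30: one tonne of HEU yields roughly 30 tonnes of fuel
```

The striking part is the leverage: a small mass of weapons material stretches into a large mass of civilian fuel.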

Getting Ahead of Getting Ahead of Yourself

When Science Gets Ahead Of Itself

Science certainly can get ahead of itself, but I don’t think the author chose a particularly good example of it. The beginning of this article is mostly fine — the history is laid out, and the whole idea of “science requires confirmation, and Planck didn’t confirm BICEP2, so that’s just too bad” is spot on.

It’s this:

The BICEP2 results were announced in a press conference before they had gone through the referee process. That meant the hard-core examination of the data and their analysis had not yet been subject to a peer review by someone (or a bunch of someones) who was not part of the team. It would have been the referee’s job to be merciless in his or her criticism, catch potential problems and, hopefully, make the paper better. Peer review is an awesome process and it’s one critical reason behind science’s powerful capacity for finding the true voice of the world.

Scientists often complain about how the media blow science stories out of proportion or get the details of those stories wrong. But in this case, by press-releasing their results before this full scientific process was completed, the international media machine was engaged by the scientists themselves.

And while, of course, everyone was careful to include “if these results are confirmed,” the point is — within weeks — people were already noticing problems with dust and the BICEP2 results related to dust.

I think there’s a fairly significant sin of omission here.

Here’s a hint: all these people who “were already noticing problems with dust and the BICEP2 results related to dust” — where did they get the detailed information required to notice these problems? The press release? No, there was a preprint uploaded to arXiv on the same day the press release was issued.

So what would have been different without the press release? Not a lot, I think. Scientists would have latched on to the results pretty quickly, and there would have been blogging and tweeting and other communication that doesn’t involve the traditional media, who might have taken a tad longer to pick up on the story. From a science perspective, though, the impact is that maybe the science discussion gets started a day later and accelerates a little more slowly. Otherwise, Clarice, the press release is INcidental. I think it only mattered to the mainstream press and scientists outside of the field who were not in a position to add anything to the discussion, and served to reduce confusion and inaccuracy in some articles (a positive, not a negative). Such as [ahem] avoiding saying “gravity waves” instead of “gravitational waves”.

The advent of arXiv marks a huge difference from the days of Pons and Fleischmann of cold-fusion infamy, who also issued a press release before peer review: now there is a preprint server, widely used, where physicists upload papers before peer review and publication. Making a preprint widely available has become common practice. I can’t believe the objection is to structured discussion or dissemination of work among/between scientists before formal peer review, because that would include not only preprints but any kind of conference or colloquium presentation that occurred before publication; those are not peer-reviewed either. That would be a rather radical objection, and I don’t think that was the intent — it’s not what I take away from the article.

If there’s a possible criticism here, it’s perhaps that the paper’s authors needed to temper their predictions based on criticism, but I don’t really have the expertise to go through and find and comprehend whatever changes they made between the original submission and the final, published article. (Oh, did I mention that the paper actually made it through peer review? Yes, it was published in June.) So perhaps they did. Whatever objections were present didn’t stop publication, nor is it clear that they should have. In fact, I think it’s likely that the widespread discussion meant that reviewers had access to more objections and discussion that took place in public than if there had been no preprint and press release. Remember, peer review isn’t a guarantee of correctness. It always comes down to verification, and more verification.

As a scientist who spends a lot of time explaining science to the public, I just wish the BICEP2 press-released team had waited. I wish they’d have let the usual scientific process run its course before they made such a grand announcement. If they had, odds are, it would have been clear that no such announcement was warranted — at least not yet — and we’d all be better off.

And I think the process did run its course, quite properly, and eliminating a press release would have changed nothing of importance. I think the author needs to metaphorically pick up the flag here. Incidental contact, no foul.

The Price of a Spherical Cow

The Value of Idealized Models

I’m going to take some exception to something, again.

Superficially, it might seem like a good thing if our theoretical models can match real-world data. But is it? If I succeed in making a computer spit out accurate numbers from a model that is too complex for my meagre mortal mind to disentangle, can I claim to have learnt anything about the world?

In terms of improving our understanding and ability to develop new ideas and innovations, making a computer produce the same data as an experiment has little value.

I agree with that. You could also note that you can model any data set with a polynomial of sufficient order, which would tell you little or nothing about the actual mathematical function in play for your data (think epicycles: arbitrarily good agreement with planet positions, no insight into gravity). But I think the mistake here is extrapolating from this class of models — ones that are too complex to comprehend, or are otherwise not descriptive of the interactions taking place — to the conclusion that all precise and accurate models are bad (which is the vibe I’m getting here). That’s not the case: the whole goal of many physicists is to get models that match experiment to the highest degree possible while also understanding the details. I think the author is cherry-picking the drawbacks of good models to make his point.
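The polynomial point is easy to demonstrate. A minimal sketch (toy data, my own construction): fit eight points from a noisy quadratic with a degree-7 polynomial and the fit is “perfect,” but the coefficients say nothing about the underlying quadratic law.

```python
import numpy as np

# Toy data: 8 points from an underlying quadratic, plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 8)
y = 3 * x**2 - x + rng.normal(0, 0.1, size=x.size)

# A degree-7 polynomial through 8 points matches the data exactly...
coeffs = np.polyfit(x, y, deg=7)
residual = np.abs(np.polyval(coeffs, x) - y).max()
print(residual)  # essentially zero: a "perfect" fit

# ...but the eight fitted coefficients bear no resemblance to the
# true law (3, -1, 0), and the curve extrapolates wildly off-range.
print(coeffs)
```

Agreement with data, by itself, isn’t understanding.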

If I want to learn how an amoeba (or anything) works, by theoretical modelling, I need to leave things out of the model. Only then will I discover whether those features were important and, if so, for what.

Again, this is true as far as it goes, but it only addresses one path to understanding. You can add things on to a simple model, too, or have effects sufficiently different that you know which part is contributing. Again: overselling. There are a lot of models that are built up over time as we get better data; not everyone goes in with all the potential pieces in place and has to do such pruning.

Atomic physics has a model of the hydrogen atom, and there is a basic model that predicts the gross energy-level structure, similar to the Bohr model but even better, because it gets other details correct that the Bohr model lacks. (IOW the Bohr model isn’t better because it’s simple. Simple doesn’t trump being wrong, but that’s not really my point.) The simple QM model doesn’t quite work, though. We can add corrections to it that account for relativistic effects and the spin-orbit interaction, and this gives us what we call the fine structure, which shifts the energy states. Include the detail that the nucleus has a magnetic moment that affects the electrons and, bingo, we get the smaller hyperfine splitting. Add some QED into the mix and you explain the Lamb shift, which lifts the degeneracy of two of the levels of the first excited state.
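The first two layers of that build-up are easy to sketch numerically. A rough illustration (textbook leading-order formulas; hyperfine and Lamb-shift terms would be stacked on the same way, at smaller scales):

```python
# Layered hydrogen model: gross Bohr/Schroedinger levels, plus the
# leading fine-structure (relativistic + spin-orbit) correction.
ALPHA = 1 / 137.035999   # fine-structure constant
RYDBERG_EV = 13.605693   # hydrogen ground-state binding energy, eV

def gross_level(n):
    """Gross (Bohr/Schroedinger) energy of level n, in eV."""
    return -RYDBERG_EV / n**2

def fine_structure_level(n, j):
    """Level n, total angular momentum j, with the leading
    fine-structure correction included."""
    e_n = gross_level(n)
    return e_n * (1 + (ALPHA**2 / n**2) * (n / (j + 0.5) - 0.75))

# The n=2 fine-structure splitting: 2P3/2 sits above 2P1/2.
split_ev = fine_structure_level(2, 1.5) - fine_structure_level(2, 0.5)
print(split_ev)                  # ~4.5e-5 eV
print(split_ev / 4.135667e-15)   # divide by h in eV*s: ~1.1e10 Hz
```

That ~11 GHz splitting is roughly five orders of magnitude smaller than the gross level spacing, which is the whole point: each added piece of physics is a small, well-understood correction on top of a model you already comprehend.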

There’s also the problem that if your model is too simple for the problem, then you have no idea if the basic idea is right, because none of the data will match up. You have to be able to construct experiments where your model is at least approximately right to have any reasonable hope of confirming its correctness.

There is incredible value in simple models. But the value doesn’t automatically diminish if you add some well-understood, higher-order corrections.

(The spherical cow joke is in the link. Point-cow joke and cartoon here)