Mystery Behind Galaxy Shapes Solved
Short version: a new model does a much better job of predicting the distribution of galaxy types. Score! Nothing bad there.
But the title of the article bugs me just a tiny bit. It gives the impression that we’ve utterly nailed it: game over; bye, bye, see you later. But that’s not science, or at least “nailed it” means something different to a scientist, and something different from field to field. (In some fields, a factor-of-two improvement might qualify; in others it might take more than an order-of-magnitude improvement in precision.) I’m not intending to hold this up as an example of bad science journalism, per se. If it is, I don’t know how to fix it; editors need to exercise brevity in titles. But I fear that the un(der)initiated get a cumulative wrong picture of science if they’re reading these headlines.
We usually don’t just pack our bags and move on to the next problem. Often the current problem still has issues to be resolved; I imagine in this case the group will continue to improve the model, or someone else working on a similar one will, especially as better data come along. This result, while having passed peer review, still has to weather whatever onslaught of feedback awaits it. And it involves a model, like virtually all of science. These days there are Global Warming discussions that imply that if a model isn’t perfect, then we know nothing. The fear of uncertainty spreads, because the absolute certainty of “the puzzle is solved” implies promises that can’t be kept. Such arguments are crap, of course, but how do you recognize that if you aren’t aware of the subtleties of the situation?