How to Think Like Approximately 1 Physicist

Think Like a Physicist

Physicists and estimation.

Students (the vast majority of whom are engineers and chemists) invariably look at me like I’ve sprouted an extra head when I do dimensional analysis tricks, though, and whenever I assign a problem asking for an estimate, I’m all but guaranteed to get answers reported to all the digits that a calculator can muster, which misses the point.

But I’ve also had this happen even with other faculty from science and engineering departments. I’ve had several meetings where I’ve done some back-of-the-envelope toy model to check the plausibility of something or another, and gotten baffled stares from everybody else. Or arguments about how the round numbers I used weren’t exact (“But we don’t have 600 students in the first-year class. There are only 587 of them…”). It was a real shock the first time that happened, because I’ve always thought of that as a general science trick, but I’m coming around to the idea that it’s really more of a physicist trick. And maybe, if you’re looking for an explanation of what it means to think like a physicist, specifically, that might be the place to look.

I recall the first time I experienced this, in a physics class in college. The professor gave an answer to a question to within a factor of 2, faster than anyone with a calculator got to the more precise answer, and he explained that in a lot of (informal) cases a factor of 2 or even an order of magnitude is sufficient: enough to rule out possibilities, make a plausibility argument, or even check that you haven’t fat-fingered an answer on your calculator and gotten an obviously wrong result. He was right, and I’ve used the technique quite a bit. Later, in the navy, I heard this estimation technique called “radcon math”: the radiation control folks on a ship/sub care mainly about the order of magnitude of a radiation dose when first assessing a situation, because that tells you the level of urgency should you need to cordon off or evacuate an area. So it’s not just physicists, per se, but it’s plausible that estimation is more prevalent in disciplines that do more computation.
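Just to make the sanity-check use concrete, here’s a minimal sketch. The numbers are mine, not from either post; the ~600 students and the per-student dollar figure are made up for illustration. The point is only that a one-significant-figure estimate is enough to flag an answer that’s off by a factor of ten, say from a slipped decimal on the calculator.

```python
import math

def same_order_of_magnitude(estimate, exact, tolerance=0.5):
    """True if the two values agree to within about half an order of magnitude
    (roughly a factor of three either way)."""
    return abs(math.log10(exact / estimate)) <= tolerance

# Back-of-the-envelope: ~600 students at ~$1,000 each (round, made-up numbers)
estimate = 600 * 1_000                # about 6 x 10^5

careful_answer = 587 * 1_025          # the "exact" calculation
fat_fingered   = 587 * 10_250         # same calculation with a slipped decimal

print(same_order_of_magnitude(estimate, careful_answer))  # True  -> plausible
print(same_order_of_magnitude(estimate, fat_fingered))    # False -> recheck the keypad
```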

Habits of Highly Successful People

… do not include studying the habits of highly successful people

Survivorship Bias

The Misconception: You should study the successful if you wish to become successful.

The Truth: When failure becomes invisible, the difference between failure and success may also become invisible.

In general, the lesson is that once you’ve applied a filter to your sample, you usually don’t have a normal distribution anymore. Taking those numbers as typical is like gathering anecdotal evidence.
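A toy simulation of that filtering effect (mine, not the article’s) makes the point: give every “fund” returns drawn from the same distribution, so there is no skill anywhere by construction, and then study only the ones that cleared a survival cutoff.

```python
import random

random.seed(1)

# Every "fund" draws its annual return from the same distribution:
# identical skill by construction.
population = [random.gauss(0.0, 10.0) for _ in range(100_000)]

# The filter: only funds that beat the cutoff survive to be studied.
survivors = [r for r in population if r > 5.0]

mean_all = sum(population) / len(population)
mean_survivors = sum(survivors) / len(survivors)

print(f"mean return, everyone:       {mean_all:+.1f}%")        # ~0%
print(f"mean return, survivors only: {mean_survivors:+.1f}%")  # ~+11%, pure selection
```

The survivors aren’t normally distributed anymore (the left tail has been cut off), and taking their average as typical overstates everyone’s prospects even though nothing in the model rewards skill.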

Also, the story about analyzing bomber damage in WWII is one I’d heard before and liked. I’m glad it’s actually true.

A Mathectomy Will Kill the Patient

Math and Science Are Not Cleanly Separable

What I, and many other physical scientists, object to is the notion that math and science are cleanly separable. That, as Wilson suggests, the mathematical matters can be passed off to independent contractors, while the scientists do the really important thinking. That may be true in his home field (though I’ve also seen a fair number of biologists rolling their eyes at this), but for most of science, the separation is not so clean.

As much as I agree with Wilson’s statement about the need for detailed knowledge to constrain math, even in physics, there is also some truth to the reverse version of the statement, which I have often heard from physicists: If you don’t have a mathematical description of something, you don’t really understand it. Observations are all well and good, but without a coherent picture to hold them all together, you don’t really have anything nailed down. Big data alone will not save you, in the absence of a quantitative model.

Yeah, what he said.

Good to the Last Drop

The Ups and Downs of Making Elevators Go

Here is a typical problem: A passenger on the sixth floor wants to descend. The closest car is on the seventh floor, but it already has three riders and has made two stops. Is it the right choice to make that car stop again? That would be the best result for the sixth-floor passenger, but it would make the other people’s rides longer.

For Ms. Christy, these are mathematical problems with no one optimum solution. In the real world, there are so many parameters and combinations that everything changes as soon as the next rider presses a button. In a building with six elevators and 10 people trying to move between floors, there are over 60 million possible combinations—too many, she says, for the elevator’s computer to process in split seconds.

“We are constantly seeking the magic balance,” says the Wellesley math major. “Sometimes what is good for the individual person isn’t good for the rest.”
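For what it’s worth, the “over 60 million” figure is consistent with the most naive way of counting: each of the 10 riders can be assigned to any one of the 6 cars, which gives 6^10 assignments. That’s my reading of the arithmetic, not necessarily how the dispatch software actually frames the search space.

```python
elevators = 6
riders = 10

# Each rider independently assigned to one of the cars.
assignments = elevators ** riders
print(f"{assignments:,}")  # 60,466,176 -> "over 60 million"
```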

New Math

From the three you then use one
To make ten ones…
(And you know why four plus minus one
Plus ten is fourteen minus one?
‘Cause addition is commutative, right.)
And so you have thirteen tens,
And you take away seven,
And that leaves five…

Well, six actually.
But the idea is the important thing.

Tom Lehrer, “New Math”
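For anyone who doesn’t know the song, the excerpt comes from Lehrer’s base-ten walkthrough of 342 − 173; the “thirteen tens” business is the borrow in the tens column, roughly:

```latex
\begin{aligned}
342 - 173 &= (300 + 40 + 2) - (100 + 70 + 3) \\
          &= (200 + 130 + 12) - (100 + 70 + 3)
             && \text{borrow a ten for the ones, a hundred for the tens} \\
          &= 100 + 60 + 9 = 169
             && \text{``thirteen tens, take away seven'' leaves six, not five}
\end{aligned}
```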

Let’s Get Rid of Zero!

What can we take from this introduction? Well, our author can’t be bothered to define basic arithmetic properly. What he really wants to say is, roughly, Peano arithmetic, with 0 removed. But my guess is that he has no idea what Peano arithmetic actually is, so he handwaves. The real question is, why did he bother to include this at all?
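For reference, and this is my gloss rather than anything from the post, “Peano arithmetic with 0 removed” would amount to keeping the usual successor-and-induction axioms but starting the count at 1 (which is, in fact, how Peano’s original 1889 formulation did it):

```latex
\begin{aligned}
&1 \in \mathbb{N} \\
&n \in \mathbb{N} \implies S(n) \in \mathbb{N} \\
&S(n) \neq 1 \ \text{for all } n && \text{(1 is not the successor of anything)} \\
&S(m) = S(n) \implies m = n && \text{(the successor map is injective)} \\
&\bigl[P(1) \wedge \forall n\,(P(n) \implies P(S(n)))\bigr] \implies \forall n\, P(n) && \text{(induction)}
\end{aligned}
```

Addition and multiplication are then defined recursively on top of the successor function, which is presumably the “define basic arithmetic properly” part that never happens.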

My own experience is primarily with physics crackpots and creationists, but there are obviously math cranks out there, too.

Remember to Drink Your Ovaltine

They Cracked This 250-Year-Old Code, and Found a Secret Society Inside

It was actually an accident that brought to light the symbolic “sight-restoring” ritual. The decoding effort started as a sort of game between two friends that eventually engulfed a team of experts in disciplines ranging from machine translation to intellectual history. Its significance goes far beyond the contents of a single cipher. Hidden within coded manuscripts like these is a secret history of how esoteric, often radical notions of science, politics, and religion spread underground. At least that’s what experts believe. The only way to know for sure is to break the codes.

Silver for the Gold

Silver Medal
Subtitle: Obama’s big win does not mean Nate Silver is a towering electoral genius.

It’s well after midnight on the East Coast, and the results are in: Nate Silver has won the 2012 presidential election by a landslide. His magic formula for predictions, much maligned in some corners in recent weeks, appears to have hit the mark in every state—a perfect 50 green M&Ms for accuracy. Now my Twitter feed is blowing up with announcements of his coronation as the Emperor of Math and the ruler of the punditocracy. Wait—it was even more than that, they say: a victory for blogging, and also one for rational thought. He proved the haters wrong! He proved science right! Is this guy getting lucky tonight or what?
But all these stats triumphalists have it wrong. Nate Silver didn’t nail it; the pollsters did. The vaunted Silver “picks”—the ones that scored a perfect record on Election Day—were derived from averaged state-wide data. According to the final tallies from FiveThirtyEight, Obama led by 1.3 points in Virginia, 3.6 in Ohio, 3.6 in Nevada, and 1.9 in Colorado. He won all those states, just like he won every other state in which he’d led in averaged, state-wide polls. That doesn’t mean that Silver’s magic model works. It means that polling works, assuming that its methodology is sound, and that it’s done repeatedly.

Two things: 1) yes, it does mean — to some degree of certainty — that Silver’s model works, and 2) you’re missing the point of the triumph. This wasn’t Nate Silver vs the pollsters, it was Nate Silver vs the pundits. And most of the pundits botched almost everything having to do with statistics beyond a trivial interpretation, and said that the predictions from the 538 blog were bogus. This was a triumph of statistics done right over the people who abuse, or are clueless about, statistics.

Put another way, the pundits had the same access to the polling data. And they were all over the place in their predictions, because they went with their gut instead of the data. That’s the underlying lesson.
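To make “go with the data” concrete, here’s a minimal sketch of the averaging-then-calling idea, using the final FiveThirtyEight margins quoted above and a made-up ±2-point uncertainty on each state average to turn a lead into a rough win probability. This is a toy version of my own, not Silver’s actual model, which weights pollsters, adjusts for house effects, simulates correlated errors across states, and so on.

```python
from math import erf, sqrt

# Final averaged margins (Obama minus Romney, in points) quoted in the passage above.
margins = {"Virginia": 1.3, "Ohio": 3.6, "Nevada": 3.6, "Colorado": 1.9}

POLL_ERROR = 2.0  # assumed standard deviation of the state polling average, in points

def win_probability(margin, sigma=POLL_ERROR):
    """P(true margin > 0), treating the polling average as normal around the truth."""
    return 0.5 * (1 + erf(margin / (sigma * sqrt(2))))

for state, margin in margins.items():
    print(f"{state:>8}: lead {margin:+.1f} -> P(win) ~ {win_probability(margin):.0%}")
```

Even with this crude error model, every state in the list comes out more likely than not, which is all a “pick” amounts to; the pundits had the same averages in front of them and still scattered their predictions all over the place.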

The article points out, quite fairly, that other people used statistics properly and had similar success in their predictions. Which raises the question: why weren’t all the other pundits doing this? The message here, if you hadn’t already figured it out, is that punditry is not about prediction, it’s about rabble-rousing and guesswork. Claiming that doing the electoral math is easy is a bit disingenuous when almost nobody who had a big platform (i.e. television) was doing it. It’s easy to see in hindsight, and apparently it’s easy to continue to try to marginalize the effort and the results.

Further, when you insist that correctly predicting the result of the presidential race doesn’t prove he was right (a point with which I agree), you can’t then turn around and look at other individual races to say he was wrong.