Death Star Economics Redux

The Death Star Is a Surprisingly Cost-Effective Weapons System

[H]ow big is the Republic/Empire? There’s probably a canonical figure somewhere, but I don’t know where. So I’ll just pull a number out of my ass based on the apparent size of the Old Senate, and figure a bare minimum of 10,000 planets. That means the Death Star requires .03 percent of the GDP of each planet in the Republic/Empire annually. By comparison, this is the equivalent of about $5 billion per year in the current-day United States.
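A quick check of the arithmetic in that quote (the GDP figure is my own assumption, roughly the US GDP around the time of the post, not a number from the quote):

```python
# Sanity check of the quoted figure: 0.03% of GDP per planet per year.
# The US GDP value is an assumption (~$15 trillion, circa the post).

us_gdp = 15e12            # assumed US GDP in dollars
fraction = 0.03 / 100     # 0.03 percent

print(f"${fraction * us_gdp / 1e9:.1f} billion per year")  # prints "$4.5 billion per year"
```

which is indeed in the neighborhood of the quoted "about $5 billion per year".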

Went there first, I did, but not in as much detail.

All Blog Posts Are Interconnected

The kerfuffle is not dead yet. First, here’s a piece from The Guardian, specifically Jon Butterworth’s Life and Physics column: On Pauli and the interconnectedness of all things

Now, declaration of interest, Brian and Jeff are both old friends of mine, and I even starred briefly in “Night of the Stars” as “elbow behind Jonathan Ross’s head”. I have never met Sean, though I have read some of his work (and used his links) and I have a lot of respect for him. Anyway, this is about physics, not about taking sides in a celebrity scientist face-off.

My celebrity non-status must be why my contribution(s) are only hinted at (“some previous blogs” and “**it”; I guess you can call me et al.), but the main objections, or more precisely, my main objections (which I delineated), were the claim that a response to a change in one electron’s energy would be instantaneous, and that the cause would be the Pauli Exclusion Principle. It seems to me that Jon admits that Brian Cox was incorrect on both of these points, though there’s some hedging on the instantaneous part: he gives an example of the electron in a potential well, i.e. an electromagnetic interaction, but then cites the phenomenon as being nonlocal, which I don’t understand. (Yet somehow he manages to conclude this was a “high-score draw”, which brings the Black Knight’s “We’ll call it a draw!” to mind.)

So in principle one has to treat the potential of the whole universe, all the atoms, as a single system (a single Hamiltonian). All agree on this, as far as I can tell.

This already means that saying “it’s in a different place” is not sufficient reason to say of an electron “it’s in a different quantum state”.

This is something I don’t accept as given. I still point to my example of composite fermions. Nature thinks that individual atoms are identical, because fermionic atoms obey Fermi-Dirac statistics. If the electron energy levels differed from atom to atom, the atoms would no longer be identical and would not do this. Nature seems to be saying that this assumption is incorrect.

Another issue has been pointed out by Dr. Skyskull in Pauli, “armchair physicists”, and “not even wrong”, in which he walks you through some of the background before discussing the problem, which is useful. (Part of the post concerns some of the remarks that have been made, and I’m happy to skip over that and focus on the physics, as I have already noted).

The additional argument comes near the end, regarding a claim that while the splitting is there, it’s so small that we can’t measure it, which garners a “physics fail” epithet.

Here Cox explicitly acknowledges that his “universal Pauli principle” consequences are something that not only cannot be measured today, but in principle can never be measured, by anyone.

There’s a notion in science that can be summarized as: pics (i.e. experimental results) or it didn’t happen. You simply can’t make a claim in science without some kind of experimental evidence to back it up — without that support it’s merely hypothesis or conjecture. You come to expect this from the fringe folks, but not from actual scientists. It’s hard to fathom that argument being brought up.

If you want to ruminate on the implications of treating the universe as a single system, fine — there’s a lot to discuss, such as “what does ‘identical’ really mean in this context?” Much of it will be interesting and some of it quite subtle. But presenting it as accepted science, to a lay audience? No.

A Whole Lotta Shakin' Going On

(Embedded YouTube video)

The story where I saw this calls the phenomenon “ground resonance”. It looks and acts a lot like an unbalanced washing machine in the spin cycle, which doesn’t seem like a resonance phenomenon, but from what I understand it can be caused by a shock to the system from landing, and the helicopter is susceptible at certain rotor speeds. If the compensation is 180º out of phase with the wobble you get positive feedback, so it makes sense that this could happen; you’d either want to speed up or slow down the rotor, were it safe and easy to do so, to change the feedback. But this happened pretty quickly.

That's a Big Twinkie

How Big a Battery Would It Take to Power All of the U.S.?

Generating capacity is, however, only one side of the story. Storage systems are rated not only by their power, or how fast they can crank out energy (measured in gigawatts), but also by the total amount of energy they store (measured in gigawatt-hours). A facility with a power capacity of one gigawatt that can only supply electricity for 10 minutes would not be very helpful; in an ideal world it could do so for, say, 100 hours, thus storing 100 gigawatt-hours. Building up new pumped-hydro facilities similar to existing ones would probably help in all but the most disastrously long of wind lulls. For those worst-case scenarios, we might still have to brace for rolling blackouts.
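The power-versus-energy distinction above can be made concrete with a toy calculation (the average-demand figure is my own round-number assumption, not from the article):

```python
# Toy sketch of the power vs. energy distinction for grid storage.
# The average US electrical demand figure is an assumption for
# illustration, not a number from the article.

def storage_needed_gwh(power_gw, hours):
    """Energy capacity needed to supply power_gw for the given hours."""
    return power_gw * hours

avg_us_demand_gw = 450  # assumed round number for average US demand

# A 1 GW facility that lasts only 10 minutes stores very little energy:
print(storage_needed_gwh(1, 10 / 60))   # ~0.17 GWh

# The "ideal" facility from the excerpt: 1 GW for 100 hours.
print(storage_needed_gwh(1, 100))       # 100 GWh

# Backing the whole (assumed) demand through a day-long wind lull:
print(storage_needed_gwh(avg_us_demand_gw, 24))  # 10,800 GWh
```

The point is that the gigawatt rating alone tells you nothing about how long the facility can carry the load; the gigawatt-hour rating does.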

Of course, this simple calculation also assumes current consumption levels. How would we power all those electric cars that we’re supposed to be driving in the future?

Doin' it Right

Quantum entanglement is a topic that often gets mangled in the popular press (much to my torment), so it’s nice when a physicist writes about it.

Tangled Up in Quantum Mechanics

[H]ere’s the problem: the first measurement does not cause anything to happen with the second system: they cannot be in communication in any way, because the distance between them is arbitrary. In other words, they could be separated by several parsecs without changing the outcome, so if they were actually passing information, that would be in violation of relativity. You can’t send signals faster than light using entanglement as a result: the only way you could kinda-sorta communicate is if you had two groups of researchers who agreed in advance on what the settings of their instruments would be before they parted company; no new information would be available, since the real communication takes place at light-speed or slower, before the measurements are even performed.
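A quick numerical illustration of the no-signaling point: for a spin singlet, Bob’s outcome statistics are 50/50 no matter what measurement angle Alice chooses, so her setting by itself carries no information to him. (This sketch just evaluates the standard singlet joint probabilities; the particular angles are arbitrary choices of mine.)

```python
import math

def singlet_joint_prob(a, b, theta_a, theta_b):
    """Joint probability of outcomes a, b in {+1, -1} for a spin-singlet
    pair measured along directions theta_a, theta_b (radians)."""
    delta = theta_a - theta_b
    if a == b:
        return 0.5 * math.sin(delta / 2) ** 2   # same outcomes
    return 0.5 * math.cos(delta / 2) ** 2       # opposite outcomes

# Bob's marginal probability of measuring +1, for various Alice settings:
for theta_a in [0.0, 0.7, 1.9, 3.0]:
    p_bob_plus = sum(singlet_joint_prob(a, +1, theta_a, 1.2) for a in (+1, -1))
    print(f"Alice at {theta_a:.1f} rad -> P(Bob=+1) = {p_bob_plus:.3f}")
# Every line prints 0.500: Alice's choice doesn't change Bob's statistics.
```

The correlations only show up when the two sets of results are brought together and compared, which happens at light speed or slower.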

Fair warning: at the end of the post, under the heading of “What Entanglement Is Not”, the discussion loops back into the “everything is connected” kerfuffle.

Time Has Come Today, Part II

Building a Better Clock

I see no progress in this industry. These clocks are no faster than the ones they made a hundred years ago.
– Henry Ford

I really hope he was kidding; assuming he was, it’s pretty funny to a timing geek like me. The way you make clocks better is by making them more precise and accurate, and the levers for this are hidden in the equation for “counting the ticks”. If our ability to count precisely is somehow limited, e.g. if we had an oscillator — like a wheel — and we could measure its angle to a precision of 3.6º, then letting it go for one oscillation represents a 1% measurement, but that same absolute error for 100 oscillations is 0.01%, and we can get there either by integrating longer or by having an oscillator with a higher frequency. So “more ticks” is better … if we don’t have a noisy oscillator. Certain noise processes don’t integrate down, so another lever is to improve the noise, or possibly the noise characterization, of our clock.
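The scaling in the wheel analogy can be sketched directly (a minimal sketch; the 3.6º figure is just the one from the example above):

```python
def fractional_error(angle_error_deg, n_oscillations):
    """Fractional timing error when an oscillator's phase (angle) is
    readable to angle_error_deg and we count n_oscillations cycles."""
    total_phase_deg = 360.0 * n_oscillations
    return angle_error_deg / total_phase_deg

print(fractional_error(3.6, 1))    # 0.01   -> a 1% measurement
print(fractional_error(3.6, 100))  # 0.0001 -> 0.01%
```

The extra factor of 100 in the cycle count can come from integrating 100 times longer or from an oscillator 100 times faster over the same interval; either way, the same absolute phase error is a smaller fraction of the total.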

Better clocks came in the form of Harrison’s chronometer, which could be put to sea and which included advances like using multiple kinds of metals to reduce temperature effects and a spring that maintained constant tension. On land, improvements came in the form of better pendulum clocks, culminating in the Riefler and Shortt clocks of the early 1900s, which had temperature-compensated pendula (to keep the length from changing) and were kept under moderate vacuum to reduce drag and possible humidity effects; they were capable of performing at a precision of around a millisecond per day. These are examples of going to a higher frequency (a period of a second rather than a day) and minimizing the noise effects. Going into the 1930s-1940s, quartz oscillators, using much higher frequencies (many kHz rather than 1 Hz), became the best clocks.

Up to this point, the length of the second was defined as a fraction of the tropical year 1900, which made the day close to, but not exactly, 86,400 seconds long (the difference is a few milliseconds). Atomic standards were investigated, and in 1967 the definition of the second as 9,192,631,770 oscillations of the Cs-133 hyperfine transition was adopted; atomic timekeeping defined Coordinated Universal Time (UTC) starting in 1972. This also marked the start of inserting whole leap seconds to match atomic time to earth-rotation time; prior to that it was done by adjusting clock frequencies or inserting fraction-of-a-second steps in time.
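The arithmetic behind whole leap seconds is simple to sketch (the 2 ms/day excess is an assumed illustrative value; the real excess length of day wanders over time):

```python
# How a few milliseconds per day accumulates into whole leap seconds.
# The 2 ms/day figure is an assumed, illustrative excess length of day;
# the actual value varies.

def days_until_offset(offset_seconds, excess_ms_per_day):
    """Days for earth-rotation time to drift offset_seconds from atomic time."""
    return offset_seconds / (excess_ms_per_day / 1000.0)

print(days_until_offset(1.0, 2.0))  # 500 days
```

At that assumed rate the drift reaches a full second in 500 days, i.e. roughly one leap second every year or two, which is in line with how often they have historically been inserted.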

Finding Your Mistakes is Good Science

Everybody’s a Critic

Almost all of my colleagues had put very, very low odds on the OPERA experiment’s result being correct — not because the people who did the measurement were considered incompetent or stupid, but because (a) doing experimental physics is very challenging; (b) this particular result was especially implausible; and (as everyone in the field knows) (c) most experiments with a shocking result turn out to be wrong, though it can take months or years to find the mistake.

I haven’t seen the vitriol or scorn the author mentions, but I don’t read everything. Complex experiments are hard, as he notes. In the kinds of experiments I’ve worked on we’ve spent lots of time chasing down subtle vacuum problems and electronic problems like ground loops — if your circuitry has a “loop” configuration it acts as an antenna, converting changes in magnetic fields into voltages (from Faraday’s law) and picking up the noise radiated by all of the electronics in your lab.
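For a rough feel of why even a small loop matters, Faraday’s law gives the induced EMF as the loop area times the rate of change of the field; the numbers below are assumptions for illustration, not measurements:

```python
import math

def loop_emf_amplitude(area_m2, b_amplitude_tesla, freq_hz):
    """Peak EMF induced in a loop of given area by a sinusoidal uniform
    field: |EMF| = A * B0 * 2*pi*f (Faraday's law)."""
    return area_m2 * b_amplitude_tesla * 2 * math.pi * freq_hz

# Assumed numbers: a 10 cm x 10 cm ground loop in a stray 60 Hz field
# with a 1 microtesla amplitude.
emf = loop_emf_amplitude(0.01, 1e-6, 60)
print(f"{emf * 1e6:.1f} microvolts")  # a few microvolts of pickup
```

A few microvolts is plenty to matter when you’re amplifying small signals, which is why ground loops are such a persistent nuisance.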

“Loose cable” was not high on the list of problems among the betting bystanders, since “Einstein was wrong” and “GPS is fouled up” sounded much sexier, and one might not expect such a mundane-sounding problem to either survive scrutiny or cause these effects, but there is subtlety there. (I can recall a recent problem with an SMA connector with a bad thread: it felt properly tight, but wasn’t. It caused all sorts of weird signals, and took an extra set of eyes to help spot the problem.)

They still need to confirm that this was indeed the source of the anomaly. That’s just good science.