It's Been Such a Long Time

Ask Ethan #30: Long-term timekeeping

In this week’s Ask Ethan, we take on perhaps the longest question of them all, and look at how to keep time for arbitrarily long times.

It’s a good post as usual, though there are a few things Ethan glosses over, which is where I step in.

[millisecond pulsars] are also the most accurate clocks we’ve ever discovered. They are so regular that we could watch one, look away for a year, and know — when we look back — whether ten billion pulses have gone by… or whether it’s ten billion-and-one. In fact, we can get down to around microsecond accuracy to their timing over periods of many decades, meaning we can get timing accuracy to around one part in 10^15!

This is bettered only by the most advanced atomic clocks on Earth

In terms of fractional stability that’s true (and I think he means precision rather than accuracy here), but man-made atomic clocks reach this stability in a matter of hours or days, not years. It’s only by having these good clocks that we can measure how well the pulsars are doing.

I have a recollection of a discussion about timing with pulsars from years ago (this isn’t a new idea). Pulsars don’t actually have an inherently stable frequency — they are slowing down, just like other macroscopic spinning objects, so the timing will show a drift. But pulsars do this at a very predictable rate, so you can characterize them and account for the drift. Some pulsars haven’t “settled in” and can undergo a starquake, which changes their rotation abruptly, but I think the ones under discussion are past that age.
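A sketch of what “characterize and account for the drift” means in practice. The frequency and spin-down rate here are illustrative orders of magnitude I’ve picked for a millisecond pulsar, not values for any particular star:

```python
# Sketch: a pulsar's accumulated pulse count (phase) with spin-down included.
# Illustrative numbers only -- not any particular pulsar.
f0 = 641.9     # spin frequency at t = 0, in Hz (millisecond-pulsar scale)
fdot = -1e-15  # spin-down rate, Hz/s (assumed typical order of magnitude)

def pulse_phase(t):
    """Accumulated pulses after t seconds: phi = f0*t + 0.5*fdot*t**2."""
    return f0 * t + 0.5 * fdot * t**2

year = 365.25 * 86400
naive = f0 * year           # pulses expected if the pulsar never slowed
actual = pulse_phase(year)  # pulses including the predictable spin-down
print(f"pulses lost to spin-down in one year: {naive - actual:.2f}")
```

Even this tiny spin-down costs a measurable fraction of a pulse per year, which is why the quadratic term has to be fit and removed before the pulsar is useful as a clock.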

From his followup post:

Atomic clocks require a lot of power to continuously stimulate atomic transitions, and a lot of cryogenic fuel to keep the atoms at ultra-low temperatures. Not such a big deal when you’re talking about doing this in a continuously powered laboratory on Earth, but that’s a lot of resources to devote to keeping a simple clock running. The mechanism I gave in the original article — counting atoms or looking at a pulsar — has the advantage that all you have to do is look once at the beginning and once at the end, and requires no devoted power in the intermediate time.

The cryogenics part is a common misconception; since the best clocks are cold-atom clocks, the assumption is understandable, but these clocks are all offshoots of laser cooling and trapping. Some crystal oscillators in frequency standards are cryogenic, but not in any continuously-running clock. That’s a logistical nightmare.

I think our clocks are not getting quite as much respect as they deserve — the pulsars have to be characterized to be useful, and no two pulsars are going to have the same frequency. You could, in principle, make a stable reference from measuring several of them, but to actually tell time you have to tie it back to a standard, which means a man-made device.

The second part, about radioactive decays, stresses the longevity of the clock but ignores the precision of the measurement. Unless you have a huge chunk of the material and can determine the number of atoms precisely, your counting statistics will limit the precision of your measurement.
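A quick sketch of that counting-statistics limit, with made-up but plausible numbers (the sample size is my assumption; the half-life is uranium-238’s):

```python
import math

# Sketch: shot-noise limit on a radioactive-decay "clock".
# The number of atoms is an illustrative assumption.
N = 1e12                       # atoms you can actually count
half_life = 4.47e9 * 3.156e7   # U-238 half-life in seconds
lam = math.log(2) / half_life  # decay constant, 1/s

t = 3.156e7                    # observe for one year
decays = N * lam * t           # expected decays (lam*t << 1, so ~Poisson)
frac_err = 1 / math.sqrt(decays)  # counting-statistics fractional precision
print(f"decays ~ {decays:.3g}, fractional precision ~ {frac_err:.1e}")
```

Even a trillion atoms of U-238 gives you only a couple hundred decays in a year, so the 1/√N counting noise leaves you with precision at the few-percent level — fine for dating rocks, useless as a clock.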

The upshot of all this is that timing is a little more subtle than having something that will tick for a long time. Accuracy, precision and durability aren’t interchangeable attributes. There are often tradeoffs between each of them, so it depends on which is more important.

How Do You Make an Aluminum Float?

Here are a couple of videos I ran across involving magnetic levitation. You have a changing magnetic field, which induces a current in a conductor, making a field that gives you a net repulsion.

[embedded YouTube video]

So what happens if you let the levitating material continue to heat up? This second video starts a little slow, but be patient. It’s fun at the end.

[embedded YouTube video]

Absolutely

[embedded YouTube video]

Great job distilling the concepts here — that not everything is relative. It’s just that there are things like length and time (and related concepts like simultaneity) that we once thought were absolute, and it’s hard to wrap one’s head around the fact that they depend on your frame of reference.

WWLD

(Thought I scheduled this to post last week, but apparently I didn’t)

To Save Drowning People, Ask Yourself “What Would Light Do?”

I’ve seen a discussion of the lifeguard dilemma before, but there’s some added value here, especially the disavowal of the “dog doing calculus” bit.

Tim was impressed enough by Elvis’s trick to write a paper called “Do Dogs Know Calculus?” In it he reassures the reader that “Elvis does not know calculus…In fact,” Tim adds, “he has trouble differentiating even simple polynomials. More seriously, although he does not do the calculations, Elvis’s behavior is an example of the uncanny way in which nature often finds optimal solutions.”

I’m glad to see this, as opposed to the “animal of some sort does calculus” headlines I’ve run across a few times.

Gimme That Old Style Energy Storage

Energy Storage Hits the Rails Out West

It’s an interesting concept. They’ve already built a test system, so I’m going to assume this was thought out and will actually work. Let’s look at the numbers.

Each car carries 230 tons and the hill is roughly 3000 feet high, so if we convert to metric and grab an envelope, mgh is about 2 billion Joules of potential energy per car. The output is designed to be 50 MW, so each car gives 40 seconds of runtime at that power, and that’s assuming 100% efficiency of the regenerative brakes. Obviously we need more than one car.

We can also look at this from a different perspective. In this case the power output is P =Fv (that’s actually a dot product, if you’re scoring at home). The incline is given as 6 to 9 degrees, so one car would have to travel at least 160 m/s for that output (at 9º), but that scales with the number of cars. 160 cars can travel at 1 m/s and give us 50 MW (or 80 cars at 2 m/s, etc.), again, modified by the efficiency of the system. Slow speeds would allow the system to “ramp up” by getting to the target speed quickly and operate safely — I doubt anyone wants these trains running down a hill at tens of meters a second.

One last bit of data is that the track is 6.5 km long. I’m assuming that’s the length in addition to the length of the train itself, i.e. it’s how far the train can go.

Let’s assume the efficiency (e) is 75% and we have n cars.

t = (30 s)·n and also t = L/v, where L is our effective track length, so vn = L/(30 s) ≈ 215 m/s

We also know that e·n·(mg sin 9°)·v = 50 MW, or vn ≈ 200 m/s.

Not bad — the answers are within 10%. (Of course it could mean I made the same underlying error in both estimates, or two that happen to cancel)

The final factor is how long the system runs. More cars going slower extends the time, but runs into a space limit. Still, 50 cars at 4 m/s goes for ~27 minutes, which is not bad for a gap-stopper.
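The consistency check above can be redone in a few lines, with the 75% efficiency and 6.5 km track as assumed inputs:

```python
import math

# Checking the two vn estimates against each other.
L = 6500.0               # effective track length, m (from the article)
e = 0.75                 # assumed regenerative-braking efficiency
m, g = 230 * 907.2, 9.8  # one 230-short-ton car in kg; m/s^2

# Energy route: t = (30 s)*n and t = L/v  =>  v*n = L/30
vn_energy = L / 30
# Power route: e*n*(m*g*sin 9deg)*v = 50 MW  =>  v*n = P/(e*m*g*sin 9deg)
vn_power = 50e6 / (e * m * g * math.sin(math.radians(9)))

print(f"v*n: {vn_energy:.0f} m/s (energy) vs {vn_power:.0f} m/s (power)")
print(f"50 cars at 4 m/s last {L / 4 / 60:.0f} minutes")
```

The two routes to v·n agree to better than 10%, which is about all an envelope is good for.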

What Happens Next Will Astound You

Top 10 Physics Findings That Will Tangle Your Brain

zapperz has already covered this; it’s got the usual hits like equating quantum teleportation and Star Trek, but the idea that the slowing of the earth means that time is slowing down is a new one. The mistake it makes is old, though: the slowing is an acceleration. 1.4 ms/day/century means that in another 100 years, all things being equal, the slowdown will be 2.8 ms/day. And even if the rotation stabilized at the new, slower rate, the earth would still run slow; it would just do so at a constant rate.
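The rate-vs-acceleration distinction is easy to see numerically. In a sketch assuming the 1.4 ms/day/century figure holds steadily, the accumulated lag of an earth-rotation clock against a uniform clock grows quadratically, not linearly:

```python
# Sketch: clock error accumulated by a day lengthening at a steady
# 1.4 ms/day per century (the figure quoted above), stepped by year.
rate = 1.4e-3 / 100  # extra day-length added each year, s/day/yr
days_per_year = 365.25

def accumulated_lag(years):
    """Total lag (s) of an earth-rotation clock vs. a uniform clock."""
    total, excess = 0.0, 0.0
    for _ in range(int(years)):
        excess += rate                   # each year the day is a bit longer
        total += excess * days_per_year  # lag accumulated over this year
    return total

print(f"lag after 100 years: {accumulated_lag(100):.1f} s")
```

The daily excess averages 0.7 ms over the century, but summed over ~36,500 days it comes to tens of seconds — which is the sort of thing leap seconds exist to mop up.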

The one that gets me in the list is “stopped light.” The experiment is quite cool — being able to absorb light and then recreate the beam later with all of the information about its coherence and polarization intact — but “stopped light” is hyperbole.

Here Be Dragons That Rarely Interact

Physicists Produce Antineutrino Map Of The World

Physicists know that almost all of [the earth’s internal] heat is generated by the decay of radioactive elements such as potassium-40, thorium-232 and uranium-238. But how are these elements distributed and how much heat does each contribute?

In the next few years, geophysicists hope to get some detailed answers to this question thanks to the emerging science of neutrino geophysics. The radioactive decay inside the Earth produces subatomic particles known as antineutrinos. So an experiment that measures the antineutrinos coming out of the Earth should provide a detailed picture of the distribution of these elements within it.

This isn’t a map made by detection, but by calculation based on reactors. Still pretty cool.

Should You or Shouldn't You?

Should you get your PhD? (in science)

I added the parenthetical in science because I’m not sure how well the advice works outside of it — it may have less applicability outside of physics. I have quibbles with a few things, as I’m an experimentalist and an atomic physicist, and Ethan is a theorist and is trained as an astrophysicist. There are bound to be some differences, but I think most of it is going to hold up for science PhDs in general.

There are plenty of brilliant people who get them, of course, but there are also plenty of people of average or even below-average intelligence who get them. All a PhD signifies, at the end of the day, is that you did the work necessary to earn a PhD. There are many people who have PhDs who will dispute this, of course. There are plenty of people who are insecure about their lives, too, and base their entire sense of self-worth on their academic achievements and accolades. You probably have met a few of them: they are called jerks.

This was probably the biggest surprise in grad school to me — how small the ratio of intelligence to stubbornness actually was in the student population, vs. the larger value I had naively expected it to be.