Don't Stop Me if You've Heard This Before

It’s time for the seemingly semi-annual announcement (which you may have already seen) about the new work coming out of some lab (often it’s NIST), where a new experimental technique, or new atom or ion, or some other ingenuity or heroic effort allows them to come up with a better frequency standard measurement. In this month’s game of Clock Clue it’s NIST (plus collaborators), in an optical lattice, with neutral Ytterbium.

Ytterbium Clock Sets New Stability Mark

An international team of researchers has built a clock whose quantum-mechanical ticking is stable to within 1.6 x 10^-18 (a little better than two parts in a quintillion).

This is pretty awesome work (do they get bored with being awesome on a regular basis?). But now comes my standard disclaimer: this isn’t really a clock, it’s a frequency standard. Side note: I had a communication from someone doing a little background research on a similar situation, and they commented that people in the timing community seem to be kind of sensitive about the distinction between frequency standards and clocks. I don’t know this to be true — I’m the only one who seems to spend any effort making the distinction. I’m not terribly upset by it (I understand why “clock” is used), and I can’t speak for anyone else. Everyone in the community already knows, so they aren’t confused by it, and they probably don’t care all that much about what goes on in the popular press. But I blog, and this is the sort of thing that matters more in the science communication field. It affects me when someone says they read about a new clock that NIST built, and asks whether I’m working on that too. And if it happens to me, I’m sure it comes up in certain discussions that happen above my pay grade.

In other words, it matters with respect to the people who fund these efforts. I’m reasonably sure there are higher-level inquiries, asking if we’re working on this sort of thing, and why the hell not, and/or not understanding the difference between measuring frequency and measuring time. If you don’t see the difference, you might think that there’s a duplication of effort going on. Even if you get the distinction, you might think this is a technology we should be investigating*.

So let me explain with an analogy that might be easier to understand than timing.

Imagine you are navigating a vessel in eternal fog — there is no way to take any kind of observation for a navigational fix. You want to follow a path. Let’s say you want to go exactly north, so you can think of a line drawn on a map, running north from where you are. That’s the course you wish to follow. (We’re assuming a flat earth here, so all lines north are parallel.)

You have a compass that’s pretty good but not perfect, so there is going to be some steering “noise.” If the compass exhibits 1 degree of error, your velocity vector will point anywhere from 1 degree to port to 1 degree to starboard, at random. On average your direction will be correct, but that’s only true of the velocity vector. Your displacement, which is what’s important to you, will execute a random walk, because that’s what the integral of white noise becomes — a random walk. Put another way, even though the direction error averages to zero, the errors do not cancel — being off by some angle to port is not immediately followed by being off to starboard by the same amount — the steering error is never undone. It accumulates with each random jiggle of the compass, and there’s nothing you can do about it.
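If you’d rather see it than take my word for it, here’s a toy simulation of exactly this (my own sketch, with made-up numbers):

```python
import numpy as np

# Toy model: take 10,000 unit-length steps "north," each with a heading
# error drawn from +/- 1 degree of white noise.
rng = np.random.default_rng(42)
n_steps = 10_000
heading_err = np.radians(rng.uniform(-1.0, 1.0, n_steps))

# The east-west displacement is the running sum of sin(error): the
# integral of white noise, i.e. a random walk.
lateral = np.cumsum(np.sin(heading_err))

print(f"mean heading error: {np.degrees(heading_err.mean()):+.4f} degrees")
print(f"final lateral offset: {lateral[-1]:+.2f} step-lengths")
# The mean error is essentially zero, but the lateral offset is not; its
# typical size grows like sqrt(n_steps).
```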

The result is that your good-but-imperfect compass means you will random walk some distance to the side of the ideal path you’d have with a perfect compass. You’re traveling north, and when you reach your destination you might have wandered a mile to the east, and that’s bad. You want a better compass.

Let’s say you have a much, much better compass. Good to an arc second instead of a degree — that’s 3600 times better. If you could use it all the time, your 1 mile lateral random walk becomes a few feet. For all intents and purposes, it’s perfect.

However, for some reason, you can’t use it all the time. (Insert any plot twist you like for a reason why.) Let’s say you can only use it half of each day. While you’re using it you accumulate essentially no error in your path, but when you are stuck using the old compass, you still accumulate error at the old rate. Since you can use the near-perfect compass only half the time, your random walk error is roughly cut in half (strictly, it’s the error variance that is halved; the RMS error drops by a factor of √2), even though the new compass is 3600 times better. The actual improvement in performance is a combination of two things: the precision and the duty cycle.
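Extending the same toy simulation (again, numbers invented for illustration), you can watch the duty cycle set the limit:

```python
import numpy as np

# Same toy model, now with a second compass that's 3600x better but only
# available for a fraction of the steps (the duty cycle).
rng = np.random.default_rng(7)
trials, n_steps = 500, 5_000

def final_offsets(duty):
    """Final lateral offsets of many voyages; the arcsecond compass is
    in use for a random fraction `duty` of the steps."""
    err = np.radians(rng.uniform(-1.0, 1.0, (trials, n_steps)))
    err[rng.random((trials, n_steps)) < duty] /= 3600.0
    return np.sin(err).sum(axis=1)

for duty in (0.0, 0.5, 1.0):
    print(f"duty cycle {duty:.0%}: RMS lateral offset = "
          f"{final_offsets(duty).std():7.2f}")
# At 50% duty the error variance is halved, so the RMS offset only drops
# by ~sqrt(2); the 3600x-better compass barely dents the total.
```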

It’s the same with clocks. Since you are counting “ticks” to keep time, time is an integral of frequency — any clock with white frequency noise will random walk away from perfect time. And you can only count ticks while a clock is running. What do you do when it’s not running? It’s the worst clock in the world when it’s not running! So you have to have a flywheel — some other clock (in practice a group of them, sometimes called a timing ensemble) to keep time when your über-cool device isn’t running. Even if you add a device that’s 100 times better, its improvement to your timekeeping is limited by its duty cycle, just as with the compasses.

In this case, they ran for 7 hours to make one stability measurement. How often can they do that? Every 3 days? That’s roughly a 10% duty cycle, and even though its stability is 100 times better than currently used clock systems, it would only represent something like a 10% improvement in your timing ensemble’s performance. Depending on the size of your ensemble, you might see the same (or better) improvement just by adding another continuously running clock and averaging them all together — ideally, the stability of an ensemble of identical clocks improves as the square root of the number of clocks.
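The square-root behavior is easy to see in a minimal sketch (made-up white-noise clocks, nothing resembling real clock data):

```python
import numpy as np

# Minimal sketch: N identical clocks with independent white frequency
# noise, averaged with equal weights.
rng = np.random.default_rng(3)
n_samples, sigma = 100_000, 1.0   # per-clock noise in arbitrary units

for n_clocks in (1, 4, 16):
    noise = rng.normal(0.0, sigma, (n_clocks, n_samples))
    ensemble = noise.mean(axis=0)
    print(f"{n_clocks:2d} clocks: ensemble noise = {ensemble.std():.3f} "
          f"(expect {sigma / np.sqrt(n_clocks):.3f})")
```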

The Ytterbium device is really neat, and stability of a part in 10^18 is a big achievement. There is a lot of neat physics you can do with one, or better yet, two of them. But for the application of timekeeping, the ability to run essentially continuously is very important, and timekeeping is primarily what a clock is for. The better analogy in this case is a stopwatch rather than a clock, in case you care about the distinction. That doesn’t make for a good headline, though: “NIST Builds a Better Stopwatch” sounds a bit dismissive, and I don’t want to diminish the accomplishment in any way, which is why “clock” is going to be used even though it’s technically wrong. Until the technology becomes robust enough to run all the time, it’s not something that’s going to become part of a true clock.

*It happened when Bose-Einstein condensates were in the news. Lots of questions about whether we were going to make a clock out of a BEC.

Looking Before We Leap

The wait of the world

Mainly about leap seconds.

On the split-second level, ‘leap’ mediates between the precision of atomic time and the position of our Sun in the sky. It is worth noting that while a leap year is a year with an extra day (Leap Day — February 29, when turnabout is fair play), a leap second lasts no longer than any other second. Applied to a minute, a positive leap second creates a 61-second interval that is not called a leap minute. (Nor would the 59-second outcome of a negative leap, should one ever be required, be called a leap minute.)

A leap minute, rather, is a hypothetical way of putting off till tomorrow what leap seconds do today. If instituted, it would allow the powers responsible for time measurement and distribution to defer insertion till the leap-second debt reached 60, and trust some future authority to intercalate them all at once. But a leap minute would likely add up to a much bigger headache than the sum of its 60 leap seconds.

I’m not sure if the US has established an official position on the matter. I know there have been discussions about the pros and cons, both within the US and at meetings with international attendance, such as the conference mentioned in the article.

Perhaps even more of an affront to British pride than the misplaced meridian is the fact that Greenwich Mean Time (GMT) is no longer the world standard. GMT fell out of official favour in the 1920s, for semantic reasons.

It may be worth mentioning again that in the US, GMT was used as the official, legal reference for determining the time and time zones until the 2007 America COMPETES Act, when it was finally changed to UTC (the change is spelled out in sec. 3570).

I wouldn’t mind additional leap second insertions. But then, I don’t programme computers, or control air traffic, or perform any of the myriad time-sensitive activities that would make me a stakeholder in the leap-second debate. I am merely a person who still wears a wristwatch, owns a sundial, and takes an abiding interest in all aspects of finding, keeping, and telling time.

From what I have heard and observed first-hand, inserting a leap second is a pain, and we’re just lucky that things haven’t gone wrong, given the many potential problems inherent in the process. My own view is that counting on being lucky is a terrible standard operating procedure.

No Time to Lose

An upcoming symposium, Time for Everyone

“Time for Everyone” is a unique opportunity to learn about the origins, evolution, and future of public time from some of the foremost authorities in many branches of time measurement. From its natural cycles in astronomy, to its biological evolution, to how the brain processes it differently at various stages of life and under different circumstances, to how we find it, how we measure it, and how we keep it, this symposium will explore many facets of this fascinating subject of unfathomable depth. The program has been designed for a diverse audience and the speakers carefully chosen not only for their knowledge, but also for their ability to bring their subjects to life.

Not surprisingly, I’ve met a number of the speakers and heard a few of them give talks (or parts of talks). That list includes Sean Carroll (Arrow of time), Tom Van Baak (amateur “time nut” who did the gravitational time dilation experiment I mention at the end of this post), Geoff Chester (Public Affairs Officer here at the Observatory), and Bill Phillips (Nobel Prize in ’97 for laser cooling and trapping) who is giving the keynote at the banquet.

It’s in the next fiscal year, so the probability of getting to go is not identically zero.

We Did a Science!

And by “we” I really mean the first author (Steve), who did all the legwork of analyzing the copious clock data we generate, and who realized that our continuously running clocks had an advantage over other groups who have been doing these measurements over longer intervals. I helped out a bit with the clock-building (and clock building-building), and thus the data generation, and gave some feedback.

The arXiv version of “Tests of LPI Using Continuously Running Atomic Clocks” was posted (some time ago, sorry this is late) so you can follow along with the home version of the game, if you wish. Keep in mind that I am an atomic physicist, Jim, and not someone who really works with general relativity past the point of including gravitational time dilation in discussions about timekeeping.

One of the tests of general relativity, or specifically of the Einstein equivalence principle, is that of local position invariance (LPI). That is, local physics measurements not involving gravity must not depend on one’s location in space-time. Put another way, there shouldn’t be any effects other than gravitational ones if you do an experiment in multiple locations — the gravitational fractional frequency shift should depend only on the gravitational potential: \(\frac{\Delta f}{f} = \frac{\Delta \Phi}{c^2}\)

So you look for a variation in this. One avenue of investigation is to compare co-located clocks of different types as they move to a new location; they could behave differently if LPI were violated. This can arise if the electromagnetic coupling, i.e. the fine structure constant, weren’t the same everywhere. Then clocks using different atoms would deviate from the predicted behavior. Since we’re looking at transitions involving the hyperfine splitting, nuclear structure is involved, so the other possibilities that can be tested are variations in the electron/proton mass ratio and in the ratio of the light quark mass to the quantum chromodynamics length scale. One need not do any (literal) heavy lifting to move the clocks into different gravitational potentials, because the earth does it for us by having an elliptical orbit — we sample different values of the sun’s gravitational potential over the course of the year.
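A back-of-envelope version of that last point, using standard orbital constants (my illustrative numbers, not anything from the paper): the eccentricity makes the sun’s potential at the earth oscillate over the year, which translates into an annual fractional-frequency modulation of about 1.6 parts in 10^10.

```python
# Standard constants (mine, not the paper's numbers).
GM_sun = 1.327e20   # gravitational parameter of the sun, m^3/s^2
a = 1.496e11        # semi-major axis of earth's orbit, m
e = 0.0167          # orbital eccentricity
c = 2.998e8         # speed of light, m/s

# Phi = -GM/r, with r swinging between a(1-e) and a(1+e) over the year;
# to first order in e the potential varies sinusoidally with amplitude
# GM*e/a, giving an annual fractional-frequency modulation of:
amplitude = GM_sun * e / (a * c**2)
print(f"annual amplitude ~ {amplitude:.2e}")   # ~1.6e-10
```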

In order to get the statistics necessary to put good limits on the deviation, other groups have done measurements over the span of several years, but this was because their devices were primary frequency standards, which (as I’ve pointed out before, probably ad nauseam) don’t run all the time, so you only get a handful of data points each year. Continuously running clocks, on the other hand, allow you to do a good measurement in significantly less time. You want to sample the entire orbit along with some overlap — about 1.5 years does it (as opposed to a few measurements per year, where you really need several years’ worth of data to try to detect a sinusoidal variation).

Another key is having a boatload of clocks. Having a selection is especially important for Hydrogen masers, since they have a nasty habit of drifting, and sometimes the drift changes. Having several from which to choose allows one to pick ones that were well-behaved over the course of the experiment. Having lots of Cesium clocks, which are individually not as good (but don’t misbehave as often), allows one to average them together to get good statistics. Finally, having four Rubidium fountains, which are better than masers in the long-term, adds in another precise measurement.

All of the clocks are continually measured against a common reference, which lets you compare any pair of clocks by subtracting that reference out, giving relative frequency information for every clock. The basic analysis was to take the clock frequency measurements, remove any linear drift present in the frequency, and check the result for an annually varying term. The result isn’t zero, because there’s always noise and some of that noise will have a period of a year, but the result is small compared with the overall measurement error, such that it’s consistent with zero (and certainly does not exclude zero in a statistically significant way).
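Here is a sketch of that analysis shape (detrend, then a least-squares fit for an annual term) applied to made-up data. It’s my illustration of the idea, not the paper’s actual pipeline, and every number is invented:

```python
import numpy as np

# Made-up clock-pair record: ~1.5 years of daily fractional-frequency
# points with a linear drift plus white noise (and no real annual term).
rng = np.random.default_rng(1)
t = np.arange(550.0)                              # days
y = 2e-16 * t + rng.normal(0.0, 1e-15, t.size)

# Least-squares fit of offset + drift + in-phase/quadrature annual terms.
omega = 2 * np.pi / 365.25
A = np.column_stack([np.ones_like(t), t,
                     np.cos(omega * t), np.sin(omega * t)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
annual_amp = np.hypot(coef[2], coef[3])
print(f"fitted annual amplitude: {annual_amp:.1e}")   # small; noise-level
```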

We’ve pushed the limit of where any new physics might pop up just a little further down the experimental road — relativity continues to work well as a description of nature.

Why Don't These Things Cost $50k?

Pop quiz, hotshot: Your really long optical fiber isn’t letting (much) light through, so there’s obviously a break in it somewhere. You need to fix the fiber. What do you do? What…do…you…do?

Obviously, shooting the hostage is not an option here. The fiber is probably buried underground, so it would be really helpful to know where the break is, to a resolution of at least the location of the nearest manhole, so you can go in, find the fault, and splice the fiber. The solution is an optical time-domain reflectometer (OTDR). You send a pulse of light down the fiber and measure the delay of any reflection, because breaks (and other faults) tend to reflect the light — any change in index of refraction causes a reflection. Since the speed of light in a medium is simply c/n, if you can measure the return time of the pulse you can figure out how far away the fault is.

To do this in a helpful way, though, one needs to locate the fault to within a few meters, and light in a fiber travels at around 200,000 km/sec, or 5 nanoseconds per meter, which means we need timing at around the 10 nanosecond level. That sounds like the precision realm of commercial atomic clocks, and that sounds expensive — that kind of clock can run you several tens of thousands of dollars. But there’s an important distinction: an atomic clock gives precise long-term timing, and we don’t need that. If our optical fiber is 100 km long, a round-trip signal will take no longer than a millisecond. In other words, we don’t need a clock that will add fewer than 10 nanoseconds of error in a day; we just need one that won’t add more than 10 nanoseconds in a millisecond. There are almost 8 orders of magnitude difference in performance between those two systems. Put another way, we don’t want to measure the time, we want to measure a short time interval. A timing error of 10 nanoseconds in a millisecond is 10 parts per million, a performance that is easily reached by a cheap quartz oscillator (here’s a cheap system that does 2 parts per million, along with some extra functions we wouldn’t need). As long as the oscillator is calibrated, such a device would be just fine for this task.
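Here’s the arithmetic, in case you want to play along at home (illustrative numbers; a real OTDR has more going on, like attenuation and backscatter):

```python
# OTDR arithmetic with illustrative numbers.
c = 2.998e8   # speed of light in vacuum, m/s
n = 1.468     # typical group index of silica fiber

def fault_distance(round_trip_s):
    """One-way distance to the reflection, in meters."""
    return (c / n) * round_trip_s / 2.0

# A reflection arriving 500 microseconds after the pulse puts the break
# about 51 km down the fiber...
print(f"{fault_distance(500e-6) / 1e3:.2f} km")
# ...and a 10 ns timing error moves that answer by about a meter.
print(f"{fault_distance(500e-6 + 10e-9) - fault_distance(500e-6):.2f} m")
```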

Another example of this time-interval application is a GPS receiver. These receivers compute your location from the time differences between signals from multiple satellites; since the satellites have precise clocks on them and broadcast that information, the receiver only has to measure the difference in those time tags. GPS satellites orbit at altitudes of around 20,000 km, but it’s the differences in the distances that are important to us. Overhead satellites are closest, while ones nearer the horizon are farther away, by a few thousand km. That’s a factor of ~10 greater distance than our OTDR signal (though our speed is now very close to c), and we want somewhat better timing, which puts our needs closer to 0.1 ppm — also achievable, though undoubtedly a little more expensive. The great part about GPS receivers, though, is that you can actually use the timing signals to synchronize a local clock and gain the benefit of the atomic time on the satellites, which is synchronized to the earth’s atomic time, UTC. (You might recall that such synchronization was initially — and incorrectly — blamed for timing errors in the superluminal neutrino story a little over a year ago. It’s actually quite good.)
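And the corresponding rough numbers for the GPS case (approximate geometry, my illustration):

```python
# Rough GPS numbers (approximate geometry, illustrative goals).
c = 2.998e8          # speed of light, m/s
slant_max = 25.8e6   # ~range to a satellite near the horizon, m
timing_goal = 30e-9  # ~30 ns of timing gives ~10 m of position

flight = slant_max / c    # longest signal travel time, ~86 ms
print(f"required fractional stability ~ {timing_goal / flight:.1e}")
# ~3e-7: well under a ppm, in the regime described above.
```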

The Tell-Tale Strontium Heart

Beating heart of a quantum time machine exposed

A little vacuum system porn for you.

The lasers are fired through three of the glass shafts emanating from the cube, but must be carefully directed out of the other side to prevent them scattering within the clock, which is why there are six shafts in total.

However:

… the beating heart of a time machine! Or “clock”, as most people call them …

… or possibly “frequency standard,” as I like to pedantically point out. Though, this being an ion clock, it can probably run for extended periods of time, and one might actually be able to say it’s running as a clock.

I also find the description of the six arms to be curious; normally, trapping schemes send light in both directions along each axis. It’s true you don’t want the light scattered in the chamber, but the description implies there are only three beams, and none of the NPL write-ups I have read say anything about a novel cooling geometry requiring only three beams.

Aaand it gives the Sr transition frequency as an exact number. There should be an uncertainty, since it’s the Cs hyperfine transition which is defined.

So read it for the picture, and not so much the article.

Merry New Year!

ThankyouforcorrectingmyEnglishwhichstinks

Happy return to an arbitrarily chosen starting point in the orbit about our gravitational enslaver

Get Used to Disappointment

Alan Alda asks scientists to explain: What’s time?

The actor known for portraying Capt. Benjamin Franklin “Hawkeye” Pierce on the TV show “MASH” and more recent guest shots on NBC’s “30 Rock” is also a visiting professor at New York’s Stony Brook University school of journalism and a founder of the school’s Center for Communicating Science.

The center is sponsoring an international contest for scientists asking them to explain in terms a sixth-grader could understand: “What is time?”

This is the follow-up to last year’s so-called “flame challenge,” in which he solicited explanations of what a flame is. But there’s a problem: in asking “what is a flame?” the real question is about what is going on in the process of combustion — it’s an analysis of a physical process, and people were asked to explain that. The winner did an excellent job, though Feynman’s pretty good, too.

However, asking “What is time?” is a different beast. I’m guessing they won’t be satisfied with the stock answers of “time is what is measured by a clock” or “time is what keeps everything from happening at once”. However, unlike fire, time isn’t a process that can be broken down into simpler parts, at least as far as we currently know — it’s much more fundamental than that. (It might be an emergent phenomenon, but we haven’t sussed that out to the point where anyone can offer anything as a reasonable answer.) Which puts the question squarely in the realm of philosophy — metaphysics — rather than science.

As I see it, the problem is similar to this: Take a word and try and define it, using only words that are already defined. You can’t. For each word you use in a definition, you need to define that word, and in each definition, you need to define all those words. You end up with circular definitions, so you have to rely on a collection of words that we simply accept because we inherently know what they mean or we give examples rather than a definition. (This is vaguely reminiscent of Gödel’s Incompleteness Theorem — that within a mathematical theory there will be certain arithmetic truths which cannot be proven. Perhaps there is a formal analogue for languages, which would be beyond my experience.) We have some concepts in physics which are fundamental, and it limits what we can do, explanation-wise. We can describe how time behaves and how we can measure it, and use it as a basis of explaining other things, but not what time is.

There is another answer, though it’s still consistent with the thread’s title. Time is a bookkeeping convenience, like other concepts we have (such as momentum and energy). We notice that it has a certain predictable behavior and that it’s useful, so we exploit those properties. In this case, the property is that events happen in a certain order. It matters, for instance, when a piano drops out of the sky onto a spot where you have been standing, whether you are there (or somewhere else) when the piano hits. You can be where the piano hit, you can be at home, you can be at the store, you can be at work or school, but all of those cannot be simultaneously true — there is some orthogonal coordinate that keeps those possibilities separate and helps us keep track of what’s going on. Meaning that time helps us solve kinematics problems and other problems in physics.

This is not an argument that time is illusory — it’s real, as far as I’m concerned, but it’s conceptual rather than physical. Which puts it in the same category as momentum and energy and even length. The funny thing, though, is that people generally don’t ask the same kind of deep question, “What is length?” They can see length, rather than perceive it in some other way, and that seems to be enough, just like the foundational words that make up a language, which can’t truly be defined.

Maybe I’m wrong. Perhaps someone out there will rise to the challenge and really be able to explain what time is. But if they can’t, I won’t be disappointed.