It’s time for the seemingly semi-annual announcement (which you may have already seen) about the new work coming out of some lab (often it’s NIST), where a new experimental technique, or new atom or ion, or some other ingenuity or heroic effort allows them to come up with a better frequency standard measurement. In this month’s game of Clock Clue it’s NIST (plus collaborators), in an optical lattice, with neutral Ytterbium.
Ytterbium Clock Sets New Stability Mark
An international team of researchers has built a clock whose quantum-mechanical ticking is stable to within 1.6 x 10^-18 (a little better than two parts in a quintillion).
This is pretty awesome work (do they get bored with being awesome on a regular basis?). But now comes my standard disclaimer: this isn’t really a clock, it’s a frequency standard. Side note: I had a communication from someone doing a little background on a similar situation, and they commented that people in the timing community seem kind of sensitive about the distinction between frequency standards and clocks. I don’t know this to be true — I’m the only one who seems to spend any effort making the distinction. I’m not terribly upset by it (I understand why “clock” is used) and I can’t speak for anyone else. Everyone in the community already knows, so they aren’t confused by it, and they probably don’t care all that much about what goes on in the popular press. But I blog, and this is the sort of thing that matters more in the science communication field, and it affects me when someone says they read about a new clock that NIST built and asks whether I’m working on that too. And if it happens to me, I’m sure it’s part of certain discussions that happen above my pay grade.
In other words, it matters to the people who fund these efforts. I’m reasonably sure there are higher-level inquiries, asking if we’re working on this sort of thing, and why the hell not, and/or not understanding the difference between measuring frequency and keeping time. If you don’t see the difference, you might think that there’s a duplication of effort going on. Even if you get the distinction, you might think this is a technology we should be investigating*.
So let me explain with an analogy that might be easier to understand than timing.
Imagine you are navigating a vessel in eternal fog — there is no way to do any kind of observing for a navigational fix. You want to follow a path — let’s say you want to go exactly north, so you can think of a line drawn on a map, going north, from where you are. That’s the course you wish to follow. (We’re assuming a flat earth here, so all lines north are parallel.)
You have a compass that’s pretty good but not perfect. There is going to be some steering “noise” because of this. If the compass exhibits 1 degree of error, your velocity vector is going to point anywhere from 1 degree to port to 1 degree to starboard, at random. On average your direction will be correct, but that’s only true for your velocity vector. For your displacement, which is what’s important to you, there will be a random walk, because that’s what the integral of white noise becomes — a random walk. Put another way, even though the direction errors average to zero, they do not cancel — being off by some angle to port is not immediately followed by being off to starboard by the same amount — the steering error is never undone. It accumulates with each random jiggle of the compass, and there’s nothing you can do about it.
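If you want to see this for yourself, here’s a quick back-of-the-envelope simulation. The numbers are entirely made up (1 degree of heading noise, arbitrary step lengths) — the point is just that the heading errors average to zero while the cross-track offset wanders away anyway.

```python
# A minimal sketch, with made-up numbers: integrate white heading noise and
# watch the cross-track error random walk instead of averaging away.
import numpy as np

rng = np.random.default_rng(42)

n_steps = 10_000          # number of steering updates along the trip
sigma_heading = 1.0       # compass noise in degrees (hypothetical value)
step_length = 1.0         # distance traveled per update (arbitrary units)

# White noise: each heading error is independent and averages to zero...
heading_err = np.deg2rad(rng.normal(0.0, sigma_heading, n_steps))

# ...but the cross-track displacement is the running sum (integral) of those
# errors, and that running sum is a random walk that never gets "undone".
cross_track = np.cumsum(step_length * np.sin(heading_err))

print(f"mean heading error:       {np.rad2deg(heading_err.mean()):+.4f} deg")
print(f"final cross-track offset: {cross_track[-1]:+.2f} (same units as step)")
```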
The result is that even with your good compass, you will random walk some distance to the side of the ideal path you’d follow with a perfect compass. You’re traveling north, and by the time you reach your destination you might have wandered a mile to the east, and that’s bad. You want a better compass.
Let’s say you have a much, much better compass. Good to an arc second instead of a degree — that’s 3600 times better. If you could use it all the time, your 1 mile lateral random walk becomes a few feet. For all intents and purposes, it’s perfect.
However, for some reason, you can’t use it all the time. (Insert any plot twist you like for a reason why.) Let’s say you can only use it for half of each day. While you’re using it you accumulate essentially no error in your path, but when you are stuck using the old compass, you still accumulate error. Since you can only use the near-perfect compass half the time, your random-walk error isn’t 3600 times smaller — the variance of the walk is cut in half, so the typical offset only shrinks by a factor of about √2. The actual improvement in performance is a combination of two things: the precision and the duty cycle.
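If you don’t trust the hand-waving, here’s a toy version of the two-compass trip. Again, every number is invented, and I’m treating the heading errors as small enough that each cross-track step is just proportional to the error (it also doesn’t matter for the statistics whether the good-compass time is interleaved or taken in one block).

```python
# Another sketch with made-up numbers: the same trip, steered part of the time
# with a compass 3600x better. The accumulated error is not 3600x smaller at a
# 50% duty cycle; it only shrinks by about sqrt(2), because the noisy half
# still random walks.
import numpy as np

rng = np.random.default_rng(1)
n_steps, n_trials = 5_000, 1_000
sigma_old = np.deg2rad(1.0)            # 1-degree compass
sigma_new = sigma_old / 3600.0         # 1-arcsecond compass

def rms_final_offset(duty_cycle_new):
    """RMS cross-track offset at the end of the trip, over many trials."""
    # Good compass for the first fraction of the trip, old compass after that.
    n_new = int(duty_cycle_new * n_steps)
    sig = np.full(n_steps, sigma_old)
    sig[:n_new] = sigma_new
    steps = rng.normal(0.0, 1.0, (n_trials, n_steps)) * sig
    return np.sqrt(np.mean(steps.sum(axis=1) ** 2))

print("old compass only      :", rms_final_offset(0.0))
print("good compass half time:", rms_final_offset(0.5))   # ~1/sqrt(2) of the above
print("good compass full time:", rms_final_offset(1.0))   # ~3600x smaller
```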
It’s the same with clocks. Since you are counting “ticks” to keep time, time is an integral of frequency — any clock with white frequency noise will random walk away from perfect time. And you can only count ticks when a clock is running. What do you do when it’s not running? It’s the worst clock in the world when it’s not running! So you have to have a flywheel — some other clock (in practice a group of them, sometimes called a timing ensemble) to keep time when your über-cool device isn’t running. Even if you add a device that’s 100 times better, its improvement to your timekeeping is limited by its duty cycle, just as with the compasses.
In this case, they ran for 7 hours to make one stability measurement. How often can they do that? Every 3 days? That’s about a 10% duty cycle, and even though its stability is 100 times better than the clock systems currently in use, it would only buy you something on the order of a 10% improvement in your timing ensemble’s performance. Depending on the size of your ensemble, you might see the same (or better) improvement just by adding another continuously running clock to it and averaging them all together — ideally, the stability of an ensemble of identical clocks improves as the square root of the number of clocks.
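That square-root claim is easy to check numerically. Here’s a crude toy ensemble — white frequency noise only, a made-up noise level, and a straight average of the clocks rather than anything like a real timescale algorithm — just to show the scaling.

```python
# A rough sketch, assuming identical clocks with white frequency noise:
# average N of them and watch the accumulated time error shrink ~1/sqrt(N).
import numpy as np

rng = np.random.default_rng(7)
n_ticks, n_trials = 1_000, 200
sigma_freq = 1e-15        # fractional frequency noise per tick (made-up value)

def rms_time_error(n_clocks):
    # Each clock's time error is the integral (cumulative sum) of its
    # fractional frequency noise; the ensemble is just the average clock.
    noise = rng.normal(0.0, sigma_freq, (n_trials, n_clocks, n_ticks))
    ensemble_time_err = np.cumsum(noise.mean(axis=1), axis=1)[:, -1]
    return np.sqrt(np.mean(ensemble_time_err ** 2))

for n in (1, 4, 16):
    print(f"{n:2d} clocks -> RMS accumulated time error {rms_time_error(n):.2e}")
# Going from 1 to 4 clocks cuts the error by ~2; from 1 to 16, by ~4.
```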
The Ytterbium device is really neat, and stability of a part in 10^18 is a big achievement. There is a lot of neat physics you can do with one, or better yet, two of them. But for the application of timekeeping, the ability to run essentially continuously is very important, and timekeeping is primarily what a clock is for. The better analogy in this case is a stopwatch rather than a clock, just in case you care about the distinction. That doesn’t make for a good headline, though: “NIST Builds a Better Stopwatch” sounds a bit dismissive, and I don’t want to diminish the accomplishment in any way, which is why “clock” is going to be used even though it’s technically wrong. Until the technology becomes robust enough to run all the time, though, it’s not something that’s going to become part of a true clock.
*It happened when Bose-Einstein condensates were in the news. Lots of questions about whether we were going to make a clock out of a BEC.
Precision = frequency standard
Precision + duty cycle = clock