A Camera The Flash Would Love

First the good: The website.

We have built an imaging solution that allows us to visualize propagation of light. The effective exposure time of each frame is two trillionths of a second and the resultant visualization depicts the movement of light at roughly half a trillion frames per second. Direct recording of reflected or scattered light at such a frame rate with sufficient brightness is nearly impossible. We use an indirect ‘stroboscopic’ method that records millions of repeated measurements by careful scanning in time and viewpoints. Then we rearrange the data to create a ‘movie’ of a nanosecond-long event.
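
For scale, a quick back-of-the-envelope check of those numbers (my arithmetic, not theirs): at half a trillion frames per second the frames are two picoseconds apart, light covers about 0.6 mm between frames, and a one-nanosecond event fills roughly 500 frames.

```python
# Back-of-the-envelope check of the quoted numbers (my arithmetic, not MIT's).
C = 299_792_458            # speed of light in vacuum, m/s

frame_rate = 0.5e12        # "roughly half a trillion frames per second"
exposure = 2e-12           # "two trillionths of a second" per frame

frame_interval = 1 / frame_rate            # time between frames: 2 ps
distance_per_frame = C * frame_interval    # how far light moves between frames

print(f"frame interval:            {frame_interval * 1e12:.1f} ps")
print(f"light travel per frame:    {distance_per_frame * 1e3:.2f} mm")   # ~0.60 mm
print(f"light travel per exposure: {C * exposure * 1e3:.2f} mm")         # blur per frame
print(f"frames in a 1 ns event:    {1e-9 * frame_rate:.0f}")             # 500
```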

Unfortunately, there’s also this video (or, more specifically, the first few seconds of this video), which I saw before finding their site.

We have built a virtual slow-motion camera where we can see photons, or light particles, moving through space.

Prof. Raskar has whipped out (and abused) his poetic license: you cannot literally see photons moving through space. You only know light is there if it scatters into your sensor; light that simply passes you by is invisible to you. If you shine a laser out into space, you don’t see that beam itself, only whatever light scatters back to you. Unfortunately, by leading off with that sound bite, I fear everybody who sees the video is going to be repeating that line: OMG, we can see actual photons moving through space!

What they have created is a way to visualize the photons, or a wavefront, moving through space. Which is no small feat, and is very cool.

And I just saw that Rhett has a post up about this, with some details of how it works, and he’s also repulsed by the sound bite. I don’t have a huge problem with the trillion-fps claim, because they are pretty clear that this is a virtual, post-processed effect, where you are sort of combining strobe and stop-action to get the result, with the caveat that the stop-action scene is static: this generally wouldn’t work if anything in it were moving.
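
The strobe-plus-stop-action bookkeeping is easy to sketch: because the scene is static and the laser pulse repeats identically, each repetition only needs to capture one narrow time slice, and scanning that slice’s delay across the nanosecond and stacking the slices in order gives you the movie. Here’s a toy version of the idea (my own made-up pulse and gate parameters, not their reconstruction pipeline):

```python
import numpy as np

# Toy model of the 'stroboscopic' trick: the scene is static and the pulse
# repeats exactly, so each repetition only has to capture one narrow time
# slice. Scanning the slice's delay over many repetitions and stacking the
# slices in order reconstructs the "movie" of a nanosecond-long event.

C_MM_PER_PS = 0.2998          # light travels ~0.3 mm per picosecond

def pulse_intensity(t_ps, x_mm):
    """Intensity of a ~1 mm long pulse whose front sits at position c*t."""
    front = C_MM_PER_PS * t_ps
    return np.exp(-((x_mm - front) ** 2) / (0.5 ** 2))

positions = np.linspace(0, 300, 600)        # 30 cm scene, ~0.5 mm pixels
gate_delays = np.arange(0, 1000, 2.0)       # 2 ps steps across a 1 ns event

# One gated measurement per repetition of the pulse; each row is one "frame".
movie = np.stack([pulse_intensity(delay, positions) for delay in gate_delays])

print(movie.shape)                           # (500 frames, 600 pixels)
print(positions[movie[250].argmax()])        # pulse front near 0.3*500 = 150 mm
```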

7 thoughts on “A Camera The Flash Would Love”

  1. Glass tank of 2-photon fluorophore solution with a mirrored rear. Mode-locked laser pulse train. When the incoming pulse overlaps the reflected outgoing one, you see the pulse progressively develop. In vacuum, lightspeed is a foot/nsec. Consider the physical length of a nsec pulse after entering a 1.5 refractive index (e.g., toluene) medium. Carbon disulfide gets you to 1.63, and methylene iodide to 1.74 at the sodium D-line. (That pulse length is worked out in a sketch after the comments.)

  2. Hmmm, their description suggests that the visualization shows the propagation of the wave front across the surface of the subject. But from what I can understand, it’s a series of snapshots of the light reflected from the surface into the camera within a given window of time.

    These are subtly different things, as objects farther from the camera must be illuminated sooner in order to be seen at the same time as something in the near field. So you’re not actually seeing how the light propagates across the surface. (That bookkeeping is sketched with numbers after the comments.)

    Or am I misunderstanding something?

    If the light source were coincident with the camera, they could exploit the symmetry and assume the one-way trip takes half of the observed round-trip time, but clearly that’s not the case here.

  3. Ok, now I understand my misunderstanding…it says “the pulse of light is a millimeter long”. So in effect, it’s the flash, and not the shutter, that determines the exposure time.

  4. Nope…never mind, both of them have to be constrained, otherwise you would see the whole scene. I guess my original criticism still holds.

  5. The pulse is narrow, so, if I understand it correctly, it cannot scatter more than once and still reach the lens array while the detectors are accepting light, unless the additional path length is very short.

  6. If the subject was moving, given sufficient scene data, there is no reason why the movement could not be post-processed out as well.
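
To follow up on comment 1’s “consider the physical length” prompt, here is the quick arithmetic (mine, using the commenter’s refractive indices and ignoring dispersion):

```python
# Physical length of a 1 ns pulse in the media from comment 1.
# Rough estimate: the pulse is shortened by a factor of 1/n (dispersion ignored).
C = 299_792_458                 # speed of light in vacuum, m/s
PULSE_DURATION = 1e-9           # a 1 ns pulse

media = {
    "vacuum": 1.00,
    "toluene": 1.50,            # "a 1.5 refractive index (e.g., toluene) medium"
    "carbon disulfide": 1.63,
    "methylene iodide": 1.74,   # at the sodium D-line
}

for name, n in media.items():
    length_cm = C * PULSE_DURATION / n * 100
    print(f"{name:17s} n = {n:.2f} -> pulse length ~ {length_cm:5.1f} cm")
# vacuum gives ~30 cm (the foot-per-nanosecond), toluene ~20 cm, and so on.
```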
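
On the path-length point from comment 2: a frame recorded at one camera time mixes surface points that were illuminated at different moments, because the return trip to the camera depends on depth. Here’s a toy version of that bookkeeping, with numbers I made up:

```python
import numpy as np

# A frame recorded at camera time t shows each surface point as it was
# illuminated at t minus the point-to-camera travel time, so points at
# different depths in the same frame were lit at different moments.
# With a depth map, each pixel can be shifted back to "scene time".

C_MM_PER_PS = 0.2998                         # speed of light, mm per picosecond

depth_mm = np.array([100.0, 400.0, 700.0])   # distances of three points to camera
camera_time_ps = 3000.0                      # one recorded frame, in picoseconds

return_delay_ps = depth_mm / C_MM_PER_PS     # point-to-camera travel time
scene_time_ps = camera_time_ps - return_delay_ps

for d, t in zip(depth_mm, scene_time_ps):
    print(f"point at {d:5.0f} mm was illuminated at t = {t:7.1f} ps")
# the 700 mm point was lit ~2 ns before the 100 mm point, yet shares the frame
```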
