Digital Light Field Photography

Radical changes in the nature of photography are under way in research labs all across the world, but until now they have not been available commercially. The easiest way to summarize these changes is to say that conventional photography captures an image, whereas the new approaches capture the light field. With an image, you are stuck with the composition, point of view, and focus: although you can enhance technical qualities such as color and contrast afterwards, the image is fixed by your choice of viewing position, shutter speed, focal length, and focus.
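To make the distinction concrete, here is a minimal sketch in NumPy. It assumes a common 4D layout for a light field, `L[u, v, s, t]`: radiance arriving at sensor position `(s, t)` from aperture position `(u, v)`. The array names, sizes, and random placeholder values are all illustrative, not taken from any real camera.

```python
import numpy as np

# Hypothetical toy light field: L[u, v, s, t] records the radiance
# arriving at sensor position (s, t) from lens aperture position (u, v).
# Random values stand in for real measurements.
rng = np.random.default_rng(0)
U, V, S, T = 5, 5, 32, 32
light_field = rng.random((U, V, S, T))

# A conventional photograph discards the directional (u, v) information:
# each pixel simply integrates all the light crossing the aperture.
conventional_image = light_field.sum(axis=(0, 1))

print(conventional_image.shape)  # (32, 32)
```

The point of the sketch: once the sum over `(u, v)` has been taken, the directional information is gone for good; a light field camera keeps the full 4D array so that decisions like focus can be deferred.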

In the future, all this will change. By capturing the light field, focus and composition can be determined after the fact. Until now, the equipment required to do this was new and exotic, and as a result expensive and delicate; huge amounts of computational power were also required to view the results. But the first fruits of this research are about to be released in a product for everyday photographers. Ren Ng's PhD thesis, "Digital Light Field Photography," demonstrates a system for doing this in a practical way: take the picture today, decide which parts should be in focus tomorrow. And since the thesis was done at Stanford, in the heart of Silicon Valley, he has already started a company and received glowing press reviews.

Take your picture now: focus it later. To experience the power, see the samples at Lytro, his company: go to the website, select "Picture Gallery," and click on one of the out-of-focus parts of a photo.

You have to view the examples on the website and read the explanations to appreciate this. But suddenly, taking pictures is much easier. The camera doesn't have to focus, which removes some of its complexity. The one-to-two-second lag between pressing the shutter button and taking the picture disappears, because that lag was caused by the focusing mechanism, which is no longer needed. And when you view the photos at home, you can decide which parts of the scene should be in focus and which should not (for example, the background).
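The refocusing idea can be sketched as "shift and add": treat the light field as a grid of sub-aperture views, shift each view in proportion to its offset from the aperture center, and sum. Changing the shift factor moves the plane of best focus. This is a simplification of the method developed in Ng's thesis; the array layout `L[u, v, s, t]`, the integer-pixel shifts, and the `alpha` parameter are assumptions made for the sake of a short example.

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-add refocusing sketch over sub-aperture views.

    light_field: array L[u, v, s, t] of sub-aperture images (hypothetical
    layout). alpha: refocus parameter; 0 reproduces the conventional
    all-in-one exposure, other values move the plane of best focus.
    """
    U, V, S, T = light_field.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the
            # aperture center, then accumulate.
            ds = int(round(alpha * (u - cu)))
            dt = int(round(alpha * (v - cv)))
            out += np.roll(light_field[u, v], shift=(ds, dt), axis=(0, 1))
    return out / (U * V)

rng = np.random.default_rng(1)
lf = rng.random((5, 5, 16, 16))
near = refocus(lf, alpha=1.0)   # focus synthesized at one depth
far = refocus(lf, alpha=-1.0)   # focus synthesized at another
```

Because the shifts are applied in software, the same captured data can be refocused any number of times, which is exactly the "take the picture today, focus it tomorrow" workflow.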

There must be some disadvantages, right? Yes, today there are tradeoffs. The technology works best with extremely high-resolution sensors, which means big sensors, ideally the size used in SLR cameras. As a result the camera is large: the same size as those used by today's discerning photographers, but much larger than the point-and-shoot cameras so popular today. And the cost is apt to be high.

But all this will change. Camera sizes will decrease as we discover the acceptable tradeoff between image resolution and the added convenience and power. Costs will drop rapidly. And some day, every camera will capture light fields, not images. Moreover, the power will expand to allow recomposing a shot after the fact, deblurring moving subjects, and even producing stereoscopic images from single photos.
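The stereoscopic claim follows directly from what a light field records: views from opposite edges of the lens aperture see the scene from slightly different positions, so a single exposure already contains a left/right pair. A minimal sketch, again assuming a hypothetical `L[u, v, s, t]` layout with placeholder data:

```python
import numpy as np

# Hypothetical light field indexed by aperture position (u, v) and
# sensor position (s, t); random values stand in for real measurements.
rng = np.random.default_rng(2)
lf = rng.random((5, 5, 16, 16))

# Sub-aperture views from the left and right edges of the lens form a
# ready-made stereo pair -- no second camera or second exposure needed.
left_eye = lf[:, 0].mean(axis=0)    # average over u at the leftmost v
right_eye = lf[:, -1].mean(axis=0)  # average over u at the rightmost v
```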

Where to find more:
Ren Ng's dissertation, which explains how it works. This is the most readable and understandable technical dissertation I have ever read; you don't need to know the mathematics to understand the main points. Every interested photographer should at least read chapters 1 and 2.

Lytro's website. This is the company Ng founded to commercialize the camera. The website has more description, a pointer to the thesis, and lots of amazing example photographs that you can refocus right there on the website.

The Stanford website on computational photography.