Traditional cameras use a single lens (or, more precisely, a single lens system) to produce an image of the subject on the camera’s sensor or film. A plenoptic camera, in contrast, employs an array of microlenses to capture light from a scene. The microlens array sits between the camera’s main lens and the image sensor and refocuses light onto the sensor, creating many tiny images taken from slightly different viewpoints. This lets the camera record each fragment of the scene in greater detail, with more information about the individual light rays entering the camera.
Later, rendering software can bring any part of the image into focus and adjust the depth of field on the fly, even after the photo has been captured. Plenoptic lenses also enable the creation of stereoscopic 3D images from a single exposure. Blurry, out-of-focus pictures could become an artifact of the past.
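The core of that after-the-fact refocusing is a simple shift-and-sum: align all the tiny viewpoint images so features at one depth line up, then average them. Here is a minimal sketch in Python, assuming the camera’s sub-aperture views have already been extracted as equal-sized images; the function name, the `(u, v)` dictionary layout, and the `focus_shift` parameter are illustrative, not from any shipping SDK:

```python
import numpy as np

def refocus(views, focus_shift):
    """Shift-and-sum refocusing over a set of sub-aperture views.

    views: dict mapping (u, v) viewpoint offsets to equal-sized 2-D
           images (grayscale, for simplicity).
    focus_shift: pixels of parallax per unit of viewpoint offset;
                 changing it moves the virtual focal plane.
    """
    acc = None
    for (u, v), img in views.items():
        # Shift each view against its parallax so that features on
        # the chosen depth plane line up, then average all views.
        shifted = np.roll(img,
                          (int(round(-v * focus_shift)),
                           int(round(-u * focus_shift))),
                          axis=(0, 1))
        acc = shifted if acc is None else acc + shifted
    return acc / len(views)
```

Averaging the aligned views keeps the chosen depth plane sharp while everything at other depths is smeared across the frame, which is exactly the "focus anywhere, after the shot" effect described above.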
The technology has been around for at least half a decade, but it has yet to be commercialized. At a recent Nvidia GPU Technology Conference, Adobe demonstrated a plenoptic lens system the company has developed, bringing plenoptic cameras a step closer to reality.
Traditionally, when a camera takes a picture, a ray of light enters the lens and gets recorded on a specific spot, like this:
With a plenoptic lens, that same light ray passes through several lenses before reaching the sensor, so it is recorded from several different perspectives.
Because of all the tiny lenses in front of the sensor, the raw image looks like this:
But with post-processing, you can achieve results like these:
Watch the intriguing video demonstration below:
[via Laptop Mag]