A new camera chip design from scientists at Stanford University has opened up the possibility of 3D photos. The chip has stacked 16 x 16 pixel arrays sitting behind a host of micro-lenses, much like a fly's compound eye, enabling the whole chip to "see" in three dimensions, unlike a normal 2D pixel-array digital camera sensor. Here's how it works: data from the "multi-aperture array" goes through image processing to extract a standard RGB image, along with a "depth map" for each pixel, which is very useful for applications like face or object recognition.
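To make the layout concrete, here's a minimal sketch (not Stanford's actual pipeline) of how a raw frame from such a sensor could be sliced into its per-lens sub-images. The 16 x 16 subarray size comes from the article; the frame dimensions and everything else are assumptions for illustration.

```python
import numpy as np

SUB = 16  # each micro-lens covers a 16 x 16 pixel subarray (per the article)

def split_into_subimages(raw: np.ndarray) -> np.ndarray:
    """Reshape a raw sensor frame (H, W) into a grid of sub-images
    with shape (H//SUB, W//SUB, SUB, SUB), one per micro-lens."""
    h, w = raw.shape
    assert h % SUB == 0 and w % SUB == 0, "frame must tile evenly"
    return (raw.reshape(h // SUB, SUB, w // SUB, SUB)
               .transpose(0, 2, 1, 3))

# Example: a hypothetical 256 x 256 frame yields a 16 x 16 grid of views.
frame = np.random.randint(0, 1024, (256, 256), dtype=np.uint16)
views = split_into_subimages(frame)
print(views.shape)  # (16, 16, 16, 16)
```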
Essentially, each tiny sub-array of pixels in the Stanford sensor sees objects in front of the camera from a slightly different viewpoint. Software then looks for relative shifts between images of the same object as seen through different lenses, and processes this parallax data to work out the object's distance.
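Here's a minimal sketch of that depth-from-parallax idea, not Stanford's actual algorithm: estimate the pixel shift (disparity) between two neighboring sub-images by simple block matching, then convert it to distance with the standard stereo relation. The focal length and lens baseline are made-up placeholder values.

```python
import numpy as np

FOCAL_PX = 50.0    # focal length in pixels (assumed)
BASELINE = 0.0001  # spacing between adjacent micro-lenses in meters (assumed)

def estimate_disparity(left: np.ndarray, right: np.ndarray,
                       max_shift: int = 4) -> int:
    """Find the horizontal shift that best aligns two sub-images,
    scored by mean absolute difference over the overlapping region."""
    best_shift, best_err = 0, np.inf
    for d in range(max_shift + 1):
        overlap_l = left[:, d:]
        overlap_r = right[:, :right.shape[1] - d]
        err = np.abs(overlap_l.astype(float) - overlap_r.astype(float)).mean()
        if err < best_err:
            best_shift, best_err = d, err
    return best_shift

def disparity_to_depth(disparity_px: float) -> float:
    """Stereo relation: depth = focal length * baseline / disparity.
    Zero disparity means the object is effectively at infinity."""
    return np.inf if disparity_px == 0 else FOCAL_PX * BASELINE / disparity_px
```

Bigger shifts between neighboring views mean closer objects; repeating this match for every pixel neighborhood is what produces the per-pixel depth map described above.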
As well as giving depth information, the design may reduce the color-crosstalk problems that current sensors suffer from. It can also take macro close-ups in confined spaces, making it potentially useful in medical settings.
Adobe has demonstrated a similar device in the past, but this new design is compacted onto a single chip and is much simpler to integrate into current camera technology. For now, the pixel count is limited, and the image-processing requirements would put a hefty strain on camera batteries. But, given a little time, your DSLR might one day be able to snap 3D family portraits, ready to show on your 3D TV.