Prototype gigapixel camera incorporates 98 microcameras

By Darren Quick

June 21, 2012

Sample gigapixel image of the Seattle skyline captured by the prototype camera (Photo: Duke University Imaging and Spectroscopy Program)


While digital cameras such as the Hasselblad H4D-200MS and Nikon D800 have pushed the megapixel boundary in recent times, and Nokia’s inclusion of a 41-megapixel camera in its 808 PureView smartphone got plenty of attention, researchers at Duke University and the University of Arizona say the age of the consumer gigapixel camera is just around the corner – and they’ve created a prototype gigapixel camera to back up their claim.

The prototype camera was developed by electrical engineers from Duke University, along with scientists from the University of Arizona, the University of California, San Diego, and Distant Focus Corp., with support from DARPA. By synchronizing 98 tiny cameras, each with a 14-megapixel sensor, in a single device, the team created a camera that can capture images at a resolution of around one gigapixel. With the addition of extra microcameras, however, the researchers say it has the potential to capture images at resolutions of up to 50 gigapixels – a figure the team points out is five times better than 20/20 human vision over a 120-degree horizontal field.
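
For a sense of the arithmetic, here is a quick sanity check in Python (the 98-camera and 14-megapixel figures come from the article; the usable-pixel fraction is merely implied by the roughly one-gigapixel output, not stated):

```python
# Back-of-the-envelope check of the pixel counts quoted above.
microcameras = 98
pixels_per_sensor = 14e6  # 14 megapixels per microcamera

raw_pixels = microcameras * pixels_per_sensor
print(f"Raw pixel count: {raw_pixels / 1e9:.2f} gigapixels")  # ~1.37 GP

# The stitched output is ~1 GP, so roughly a quarter of the raw pixels
# are presumably spent on inter-camera overlap and cropping.
usable_fraction = 1e9 / raw_pixels
print(f"Implied usable fraction: {usable_fraction:.0%}")  # ~73%
```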

Similar to the way the GigaPan Epic mount creates high-resolution panoramas by capturing multiple images that are then stitched together, each of the prototype device’s 98 microcameras captures data from a specific area of the field of view; these partial images are then stitched together to form a single image offering incredible detail.
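
As a rough illustration of that stitching step, the sketch below pastes pre-aligned grayscale tiles onto a single canvas and averages the overlapping regions. The tile sizes, positions, and overlap are invented for illustration; they are not the prototype's actual geometry.

```python
import numpy as np

def stitch_tiles(tiles, positions, canvas_shape):
    """Paste pre-aligned grayscale tiles onto one canvas,
    averaging wherever neighboring tiles overlap."""
    canvas = np.zeros(canvas_shape)
    weight = np.zeros(canvas_shape)
    for tile, (r, c) in zip(tiles, positions):
        h, w = tile.shape
        canvas[r:r + h, c:c + w] += tile
        weight[r:r + h, c:c + w] += 1.0
    return canvas / np.maximum(weight, 1.0)  # avoid divide-by-zero off-tile

# Toy example: a 2 x 2 grid of 100 x 100 tiles with a 10-pixel overlap.
step = 90
tiles = [np.random.rand(100, 100) for _ in range(4)]
positions = [(r * step, c * step) for r in range(2) for c in range(2)]
mosaic = stitch_tiles(tiles, positions, (190, 190))
print(mosaic.shape)  # (190, 190)
```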

The prototype 50-gigapixel camera that incorporates 98 tiny cameras into one device

“A computer processor essentially stitches all this information into a single highly detailed image,” explains Duke’s David Brady, who led the team. “In many instances, the camera can capture images of things that photographers cannot see themselves but can then detect when the image is viewed later.”

“The development of high-performance and low-cost microcamera optics and components has been the main challenge in our efforts to develop gigapixel cameras,” Brady said. “While novel multiscale lens designs are essential, the primary barrier to ubiquitous high-pixel imaging turns out to be lower power and more compact integrated circuits, not the optics.”

Because of this, the researchers believe that the continuing miniaturization of electronic components will see the next generation of gigapixel cameras becoming available to the general public within the next five years.

“Traditionally, one way of making better optics has been to add more glass elements, which increases complexity,” said the University of Arizona’s Michael Gehm, who led the team responsible for the software that combines the input from the microcameras. “Our current approach, instead of making increasingly complex optics, is to come up with a massively parallel array of electronic elements.”

“A shared objective lens gathers light and routes it to the microcameras that surround it, just like a network computer hands out pieces to the individual work stations,” Gehm adds. “Each gets a different view and works on their little piece of the problem. We arrange for some overlap, so we don’t miss anything.”
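
A minimal sketch of that divide-and-overlap idea, assuming a one-dimensional split of the 120-degree field (the real array tiles the scene in two dimensions, and the overlap value here is invented):

```python
from concurrent.futures import ProcessPoolExecutor

FIELD_DEG = 120.0   # horizontal field of view, from the article
N_CAMS = 98         # microcamera count, from the article
OVERLAP_DEG = 0.2   # invented overlap between neighboring sub-fields

def process(i):
    """Stand-in for the per-camera work: report the angular slice
    (padded with overlap) that microcamera i is responsible for."""
    width = FIELD_DEG / N_CAMS
    lo = max(0.0, i * width - OVERLAP_DEG)
    hi = min(FIELD_DEG, (i + 1) * width + OVERLAP_DEG)
    return f"camera {i:2d}: {lo:6.2f} to {hi:6.2f} degrees"

if __name__ == "__main__":
    # Hand each worker its own overlapping piece of the scene,
    # mirroring the "network hands out pieces" analogy above.
    with ProcessPoolExecutor() as pool:
        for line in pool.map(process, range(N_CAMS)):
            print(line)
```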

Although the current prototype camera measures 2.5 x 2.5 x 1.6 feet (76 x 76 x 51 cm), the optical elements account for only around three percent of the camera’s volume. The rest is taken up by the electronics and processors needed to process all the information, and the cooling components required to keep it from overheating. It is these areas the team believes can be miniaturized in the coming years, resulting in practical hand-held gigapixel cameras for everyday photographers.

The camera is described online in the journal Nature.

About the Author
Darren Quick
Darren's love of technology started in primary school with a Nintendo Game & Watch Donkey Kong (still functioning) and a Commodore VIC 20 computer (not still functioning). In high school he upgraded to a 286 PC, and he's been following Moore's law ever since. This love of technology continued through a number of university courses and crappy jobs until 2008, when his interests found a home at Gizmag.
3 Comments

Can you use optics to focus the camera array on a small area?

Then can you potentially visualize the very small? Such as air particles? DNA? A single photon?

PRyan1068
21st June, 2012 @ 06:05 pm PDT

It likely wouldn't be that much of an advantage to microscopy. Air particles and DNA are smaller than the wavelength of visible light. The camera operates in the visible light spectrum, so it would do no better than a nice microscope in a lab would.

Karl Harmon
22nd June, 2012 @ 02:50 pm PDT
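
Karl's point can be made quantitative with the textbook diffraction limit, d ≈ λ / (2 NA), which is standard optics rather than anything from the article:

```python
# Smallest feature resolvable in visible light: d ~ lambda / (2 * NA).
wavelength_nm = 550        # middle of the visible spectrum
numerical_aperture = 1.4   # roughly the best an oil-immersion objective manages

d_nm = wavelength_nm / (2 * numerical_aperture)
print(f"Diffraction limit: ~{d_nm:.0f} nm")  # ~196 nm

# A DNA helix is ~2 nm across, two orders of magnitude below that limit,
# so extra pixels alone cannot resolve it in visible light.
```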

The relationship to human vision is inaccurate. Normal human vision is one to two arcminutes. Over 120 degrees that's 7,200 resolvable points at the upper limit. Multiply this by the 4,800 points over 80 degrees vertically (assuming a 6:4 photographic ratio) and the figure is about 34,560,000 pixels, giving a one-gigapixel camera twenty-nine times better resolution than good human vision and a fifty-gigapixel camera about 1,400 times better resolution.

Gerry Lavell
20th August, 2012 @ 10:20 pm PDT
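
For readers who want to check Gerry's arithmetic, here is the same calculation in Python (the one-arcminute acuity figure and the 120 x 80 degree field are taken straight from the comment):

```python
ARCMIN_PER_DEG = 60
h_points = 120 * ARCMIN_PER_DEG  # 7,200 resolvable points across
v_points = 80 * ARCMIN_PER_DEG   # 4,800 resolvable points vertically
eye_pixels = h_points * v_points
print(f"Eye-equivalent pixels: {eye_pixels:,}")  # 34,560,000

print(f"1 GP advantage:  {1e9 / eye_pixels:,.0f}x")   # ~29x
print(f"50 GP advantage: {50e9 / eye_pixels:,.0f}x")  # ~1,447x
```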