
MIT's new image-processing chip improves digital snapshots

By Ben Coxworth

February 19, 2013


Scientists at MIT have created a chip that is said to enhance digital snapshots more quickly and using less power than image-processing software (Photo: Shutterstock)


Snapshots banged off on a smartphone, tablet or point-and-shoot camera could soon be getting a lot better looking thanks to a new processor chip. Developed by researchers at MIT’s Microsystems Technology Laboratory, the new chip enhances images within milliseconds, and reportedly uses much less power than the image processing software installed on some devices.

The chip works by dividing photos into a matrix of small blocks, known as a bilateral grid. For each block, the chip creates a histogram (a graphical representation of how data is distributed): the block's X and Y coordinates record its location within the photo as a whole, while the histogram itself captures the distribution of brightness levels inside that block.
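As an illustration of the idea (not MIT's actual hardware pipeline), building such a grid in software might look like this: divide a grayscale image into blocks and record a brightness histogram for each one. The block size and bin count below are arbitrary choices for the sketch.

```python
import numpy as np

def bilateral_grid(image, block_size=16, bins=8):
    """Build a simple bilateral grid: for each block of the image,
    record a histogram of its brightness levels. Illustrative sketch
    only -- the chip's real data structure is not public."""
    h, w = image.shape
    gh, gw = h // block_size, w // block_size
    grid = np.zeros((gh, gw, bins))
    for gy in range(gh):
        for gx in range(gw):
            block = image[gy*block_size:(gy+1)*block_size,
                          gx*block_size:(gx+1)*block_size]
            # Histogram of this block's brightness values (0-255)
            hist, _ = np.histogram(block, bins=bins, range=(0, 256))
            grid[gy, gx] = hist
    return grid

# Example: a 64 x 64 synthetic gradient image
img = np.tile(np.arange(64, dtype=np.uint8) * 4, (64, 1))
grid = bilateral_grid(img)
print(grid.shape)  # (4, 4, 8): a 4x4 grid of 8-bin histograms
```

Each grid cell then summarizes one patch of the photo, which is what lets later operations work on a compact structure instead of every pixel.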

One of the things that the chip can do is create High Dynamic Range (HDR) images. Have you ever noticed how your eyes are able to simultaneously expose for the bright and dark elements of a high-contrast scene, whereas a camera has to either overexpose one or underexpose the other? Well, HDR is kind of like your eyes – the bright sky and the shady spot under a tree will both be properly exposed in an HDR shot.

To manage this, the chip actually records three Low Dynamic Range images of each shot – one normally-exposed image (like a camera would take in Auto mode), one that’s overexposed to pick up details in dark areas, and one that’s underexposed to properly capture bright elements. Those three images are then merged into one HDR photo – the whole process takes a few hundred milliseconds for a 10-megapixel photo, and could reportedly even be applied to video. The researchers say the chip uses considerably less power than existing software-based systems that rely on CPUs and GPUs for the number crunching.
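One common way to merge bracketed exposures like these is to weight each pixel by how well-exposed it is, favoring values near mid-gray. The sketch below uses that simple weighting scheme as a stand-in; the chip's actual merging math has not been published.

```python
import numpy as np

def merge_exposures(under, normal, over):
    """Merge three LDR exposures of the same scene into one image by
    weighting each pixel toward well-exposed (mid-gray) values.
    A simplified stand-in for the chip's HDR merge."""
    stack = np.stack([under, normal, over]).astype(float) / 255.0
    # Weight peaks at mid-gray (0.5) and falls off toward black/white
    weights = 1.0 - np.abs(stack - 0.5) * 2.0
    weights += 1e-6  # avoid divide-by-zero where all weights are 0
    fused = (stack * weights).sum(axis=0) / weights.sum(axis=0)
    return (fused * 255).astype(np.uint8)

# Example: the same flat scene at three simulated exposures
normal = np.full((4, 4), 128, dtype=np.uint8)
under  = (normal * 0.5).astype(np.uint8)
over   = np.clip(normal.astype(int) * 3 // 2, 0, 255).astype(np.uint8)
hdr = merge_exposures(under, normal, over)
```

In a real high-contrast scene, the underexposed frame contributes the sky, the overexposed frame contributes the shadows, and the weighting blends them smoothly.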

The chip is also able to enhance shots taken in dark environments, again using multiple images. In this case, it records two images of the scene – one using the flash, and one without. Each of those images is then divided into two layers – a base layer that just contains the large-scale ambient background features of the scene, and another that only contains the sharper details. The chip then combines the base layer of the non-flash shot (which would be underexposed in the flash shot), with the detailed layer of the flash shot (which would be grainy in the non-flash shot).
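The base/detail split described above can be approximated with a blur: the blurred image is the large-scale base layer, and what's left over is the detail layer. The sketch below combines the ambient base with the flash detail exactly as the article describes, using a crude box blur (the chip's actual filtering is not public).

```python
import numpy as np

def box_blur(img, k=3):
    """Crude box blur used to extract the large-scale 'base' layer."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy+img.shape[0], dx:dx+img.shape[1]]
    return out / (k * k)

def flash_no_flash_merge(ambient, flash):
    """Combine the base layer of the no-flash (ambient) shot with the
    detail layer of the flash shot -- a simplified sketch of the
    technique the article describes."""
    ambient_base = box_blur(ambient)                  # ambient lighting
    flash_detail = flash.astype(float) - box_blur(flash)  # sharp detail
    merged = ambient_base + flash_detail
    return np.clip(merged, 0, 255).astype(np.uint8)

ambient = np.full((8, 8), 40, dtype=np.uint8)          # dim, even lighting
flash = np.random.randint(100, 200, (8, 8)).astype(np.uint8)
out = flash_no_flash_merge(ambient, flash)
```

The result keeps the natural look of the ambient exposure while borrowing the crisp, low-noise detail that the flash provides.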

Finally, in order to clean up noise in photos, the chip is able to smooth out the shot by blurring “undesired” pixels into the pixels adjacent to them. In order not to blur the edges of objects within the shot (as does occur in some noise-reduction software), the blurring function isn’t applied when neighboring pixels have significantly different brightness values.
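That edge-preserving behavior can be sketched directly: average each pixel with its neighbors only when their brightness values are similar, so smooth regions get denoised while hard edges are left alone. The threshold below is an arbitrary illustrative value.

```python
import numpy as np

def edge_aware_smooth(img, threshold=30):
    """Blur each pixel into its 4-connected neighbors only when their
    brightness is similar, so object edges stay sharp -- the behavior
    the article describes, in simplified form."""
    img = img.astype(float)
    out = img.copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            neighbors = []
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                # Only include neighbors on the same side of an edge
                if (0 <= ny < h and 0 <= nx < w
                        and abs(img[ny, nx] - img[y, x]) < threshold):
                    neighbors.append(img[ny, nx])
            if neighbors:
                out[y, x] = (img[y, x] + sum(neighbors)) / (1 + len(neighbors))
    return out.astype(np.uint8)

# A hard edge (0 | 200) survives smoothing untouched, since the
# brightness jump across it exceeds the threshold
img = np.zeros((4, 8), dtype=np.uint8)
img[:, 4:] = 200
smoothed = edge_aware_smooth(img)
```

With noisy but similar-brightness pixels, the same function averages them together, which is the noise-reduction effect described above.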

There’s no word yet on when the chip might start to appear in consumer devices.

Source: MIT

About the Author
Ben Coxworth An experienced freelance writer, videographer and television producer, Ben's interest in all forms of innovation is particularly fanatical when it comes to human-powered transportation, film-making gear, environmentally-friendly technologies and anything that's designed to go underwater. He lives in Edmonton, Alberta, where he spends a lot of time going over the handlebars of his mountain bike, hanging out in off-leash parks, and wishing the Pacific Ocean wasn't so far away.
2 Comments

Your eyes cannot expose for both the bright and dark areas in a scene simultaneously. Your retina closes down to adjust the incoming light for whatever you are looking at, your brain makes a composite out of the various light and dark areas viewed and allows you to "think" everything is light the same!

Jerry Peavy
20th February, 2013 @ 01:01 pm PST

It's called bracketing in photography... I do it all the time when I want to get a backlit photo with the foreground exposed properly. The problem with a lot of HDR photos you see on the web is people get a little overzealous with the settings, and it looks cartoonish.

Rusty Harris
20th February, 2013 @ 07:09 pm PST