August 17, 2007 The advent of digital photography has opened up a new world of image-editing possibilities, including the ability to fill in blanks or replace unwanted parts of an image. A new algorithm devised by James Hays and Alexei A. Efros of Carnegie Mellon University facilitates this process by drawing on a database of more than a million images from the World Wide Web to seamlessly fill in the missing areas of incomplete photographs.
There are many reasons an image might feature an undesirable blank area: a patch of bright light that had to be cropped out, or a shadow, person, or object that ruined an otherwise perfect shot.
The algorithm tackles this problem by completing a given image in a number of different ways, leaving the user to select whichever result seems most suitable. It does so without requiring the user to label the image fragments being used or, for that matter, to offer any direction at all.
‘Holes’ in images are ‘patched’ with suitable image fragments, found and rearranged to complete the image in a manner that is claimed to be semantically valid; that is, the patched area is consistent with the rest of the image. Hays and Efros describe their algorithm as a means of restoring data that ‘should have been there’. Existing methods of filling such blank areas have largely drawn image fragments from other parts of the same picture. This algorithm is unusual both in drawing from an external database and in the means by which it achieves this.
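The idea can be sketched roughly as follows. This is a minimal, illustrative sketch only, not the authors' implementation: the real system uses sophisticated scene descriptors, a million-plus image database, and seamless blending of the pasted fragments, whereas here a crude average-pooled grayscale descriptor and direct pixel pasting stand in for those components.

```python
import numpy as np

def scene_descriptor(img, size=8):
    """Crude global descriptor: average-pool a grayscale image down to
    size x size cells. (A stand-in for the richer scene descriptors the
    actual system uses.)"""
    h, w = img.shape
    img = img[:h - h % size, :w - w % size]          # trim to a multiple of size
    return img.reshape(size, img.shape[0] // size,
                       size, img.shape[1] // size).mean(axis=(1, 3)).ravel()

def complete(target, mask, database):
    """Fill the masked 'hole' in `target` by copying pixels from the
    database image whose overall scene descriptor is nearest to the
    target's. (The real method blends fragments seamlessly; here we
    simply paste the matching region.)"""
    d_target = scene_descriptor(target)
    best = min(database,
               key=lambda img: np.linalg.norm(scene_descriptor(img) - d_target))
    out = target.copy()
    out[mask] = best[mask]   # paste pixels from the best-matching scene
    return out
```

In this sketch, a brighter overall scene in the database would be chosen to patch a hole in a bright target image, because its pooled descriptor lies closest to the target's; the published method makes the same kind of nearest-scene choice, just with far better descriptors and compositing.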
To learn more, visit this page on the Carnegie Mellon University Graphics site, where a PDF paper and presentation are available for download.
See also Slashdot.