
Disney algorithm builds high-res 3D models from ordinary photos

By James Holloway

July 22, 2013

Disney's algorithm at work

Disney Research has developed an algorithm that can generate 3D computer models from 2D images in enough detail, it says, to meet the needs of video game and film makers. The technique requires multiple images capturing the scene from a variety of vantage points.

The 3D model is somewhat limited in that it is only coherent within the field of view encompassed by the original images. It does not appear to fill in data.

However, judging from Disney Research's demo video, the detail achieved is incredibly impressive. The team used a hundred 21-megapixel photos for each of its demo models, captured by moving the camera along a straight line between shots. Though this approach makes the data easier to process, the team says the algorithm can also be applied to less regimented sets of images.

A photo from Disney's sample set
The corresponding 3D model

Unlike other systems, the algorithm calculates a depth value for every pixel, and proves most effective at the edges of objects.
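The article does not spell out how those per-pixel depths are computed, but multi-view reconstruction of this kind ultimately rests on the standard pinhole-stereo relation, in which depth is inversely proportional to the disparity a point exhibits between two camera positions. The sketch below illustrates that relation only; the function name and parameters are the editor's, not Disney's.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole-stereo relation: Z = f * B / d, where f is the focal
    length in pixels, B the distance between camera positions in
    meters, and d the pixel disparity. Larger disparity means the
    point is closer to the camera."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m * 1.0 / disparity_px
```

With a 1000-pixel focal length and a 10 cm baseline, a 50-pixel disparity corresponds to a depth of 2 m; sub-pixel disparity errors matter most for distant points, which is consistent with dense per-pixel methods doing their best work near sharp object edges.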

The algorithm demands less of computer hardware than would ordinarily be the case when constructing 3D models from high-res images, in part because it does not require all of the input data to be held in memory at once.
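One common way to avoid holding every high-resolution input in memory is to stream the views through one at a time, keeping only running aggregates. The sketch below shows that general pattern for fusing per-view depth estimates; it is purely illustrative, as the article does not describe Disney's actual pipeline.

```python
import numpy as np

def fuse_depths_streaming(depth_maps):
    """Average per-view depth maps one at a time, so only a running
    sum and count stay in memory rather than every input frame.
    NaN marks pixels with no estimate in a given view.
    Illustrative pattern only, not Disney's published method."""
    total = count = None
    for depth in depth_maps:          # depth_maps can be a generator
        valid = ~np.isnan(depth)
        if total is None:
            total = np.zeros(depth.shape)
            count = np.zeros(depth.shape, dtype=int)
        total[valid] += depth[valid]
        count[valid] += 1
    # Pixels never observed stay NaN; the rest get the mean depth.
    return np.where(count > 0, total / np.maximum(count, 1), np.nan)
```

Because `depth_maps` can be a generator that loads and discards one image at a time, peak memory stays proportional to a single frame rather than to the whole hundred-photo set.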

The system is not yet perfect. Depth measurements are less accurate than they would be if captured with a laser scanner, and the researchers admit that more work is needed to handle surfaces which vary in reflectance.

Alexander Sorkine-Hornung of Disney Research suggests that the algorithm could also be used in the manipulation of 2D images, by removing backgrounds or creating new 3D scenes from a combination of source images, for instance.

The team will demonstrate the technology this week at SIGGRAPH 2013; its demo video is below.

Source: Disney Research

About the Author
James Holloway lives in East London, where he punctuates endless tea drinking with freelance writing and meteorological angst. Unlocking Every Extend Extra Extreme's "Master of Extreme" achievement was the fourth proudest moment of his life.

7 Comments

Isn't this 'just' an (advanced) form of structure from motion?

Don't get me wrong; I don't think it's simple. But it doesn't sound like anything new...

X. Vink
23rd July, 2013 @ 02:34 am PDT

Isn't this work connected to the work of Peter Schaeren (Diss. ETHZ, approx. 1994)?

Peter Aschwanden
23rd July, 2013 @ 02:41 am PDT

Going from multiple stereoscopic images to a 3D model doesn't sound too difficult, but getting stereoscopic pictures from random single-frame photos sounds almost impossible.

Slowburn
23rd July, 2013 @ 04:18 am PDT

Hmm... Perhaps this is new at Disney, but Microsoft PhotoSynth and all of the Open Source tools that support its "point cloud" data sets have been around for many years: http://en.wikipedia.org/wiki/Photosynth.

kalqlate
23rd July, 2013 @ 11:40 am PDT

Isn't this essentially what Autodesk's 123D Catch has been offering as a free service for quite a while now?

http://www.123dapp.com/catch

PatrikD
23rd July, 2013 @ 12:08 pm PDT

I've been trying for months to get 123D Catch to capture a 360 degree mesh of a small (1 cm long) natural object with unique and potentially high utility. I have taken at least 40 surround shots of the object, most recently with a macro ring light, and it simply will not work. No joy so far. There is an online app as well as a Windows native app for working with projects, but they are not remotely integrated or even similar. The results of working online are useless to the native app even though they share a common database.

With the native app you can supposedly improve on its guess at stitching the photos together (which I find pretty poor) by manually identifying points on the photos that are coincident. If, however, the points you tell it are coincident deviate too much from its current guess, it responds with a big red notice that in effect says, "Sorry, but that just can't be right," and it rejects your specification. Incredibly frustrating. In nearly every case so far, manual stitching produces a worse result than its poor guess. This program is far from ready for prime time. I'm not sure what they could be thinking, releasing it for public use. It's a black eye rather than a feather in the cap.

Granted, it is a hard problem, but they haven't solved it. There are many supposed successes in a gallery on their site, but they only appear successful because the photos are projected onto the captured mesh. Look at the mesh by itself to see the real quality of the capture. It ain't sterling, and it completely lacks the detail that the projected photos fool you with.

Caution. If you use it for a small object, shooting with a macro lens or setting, you absolutely _must_ remove all pincushion or other lens distortion (which may not be at all evident from your pictures) with some correction utility. Gimp can do it if you photograph and correct a rectangular grid and then apply the result to all photos but it is slow and you must correct one shot at a time which takes forever. Otherwise your attempts to manually stitch will be an exercise in maddening futile perversity. The app simply refuses to accept your input or modifies it in bizarre ways and any results will be worthless. I wasted an enormous amount of time trying to do so before the possibility of distortion pollution dawned on me. Mea culpa for that one.

DonGateley
23rd July, 2013 @ 05:17 pm PDT
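The distortion issue raised in the comment above is real: radial lens distortion shifts image points away from where a pinhole model predicts, which breaks feature matching between views. A common model (the Brown radial model, used by many calibration tools) scales each normalized point by a polynomial in its squared radius; undistortion then inverts that scaling numerically. The sketch below is the editor's illustration of that model, with made-up coefficient values, not any specific app's correction routine.

```python
import numpy as np

def distort(xy, k1, k2):
    """Brown radial model on normalized image coordinates:
    x' = x * (1 + k1*r^2 + k2*r^4), where r^2 = x^2 + y^2."""
    r2 = np.sum(xy ** 2)
    return xy * (1.0 + k1 * r2 + k2 * r2 ** 2)

def undistort(xy_d, k1, k2, iters=20):
    """Invert the radial model by fixed-point iteration: repeatedly
    re-estimate the undistorted point by dividing out the radial
    factor evaluated at the current estimate. Converges quickly for
    the mild distortion typical of photographic lenses."""
    xy = xy_d.copy()
    for _ in range(iters):
        r2 = np.sum(xy ** 2)
        xy = xy_d / (1.0 + k1 * r2 + k2 * r2 ** 2)
    return xy
```

Round-tripping a point through `distort` and `undistort` recovers the original to high precision, which is exactly the property a correction utility needs before the photos are handed to a multi-view stitcher.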

Looks to be the beginning of the 3D photo machine in "Blade Runner".

dionkraft
27th July, 2013 @ 09:08 pm PDT