Matterport scans 3D objects and spaces "20 times faster" than competitors


April 11, 2012

Matterport's 3D scanning technology claims to be 20 times faster and 18 times cheaper than its nearest rival



It may be built on familiar-looking hardware, but Y Combinator startup Matterport reckons it's putting its 3D scanning technology to innovative use, claiming it can scan real environments into 3D digital models 20 times faster than the competition.

"We turn reality into 3D models and our scanner is 20 times faster and 18 times cheaper than any other tool on the market," Matterport co-founder Michael Beebe claimed at the Y Combinator 2012 demo day at the end of March. And though that claim might be pushing it slightly - 3D scanners have been around for the better part of two decades - the technology demonstrated in Matterport's demo video is remarkable.

The handheld scanner, which at first glance might be mistaken for a Kinect sensor, is simply waved at the object or interior environment to be scanned, in such a way as to take in its entire surface. The technology is not only speedy but apparently easy to use, requiring no precision.

But so far there has been precious little hard information on how the Matterport scanner actually works, and the demo video is curiously short on in-focus close-ups of the scanner itself. Two things do appear clear, though: the scanner does not emit visible light, and in addition to capturing 3D forms, it applies reasonably accurate colors and textures to the surfaces, reflecting the real object's appearance.

An old website for the technology, from before the project was renamed Matterport, reveals that a Kinect sensor was indeed the basis for early prototypes. And though the scanner featured in Matterport's promo is clearly not a Kinect, it seems more than likely that the same principles are at work: infrared depth sensing to capture form, plus an RGB camera for surface detail. Getting from a Kinect sensor to the technology apparently on display in the promotional video, however, must require some serious software to back it up.
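For a sense of what a Kinect-style pipeline involves, the first step is back-projecting each depth pixel into a 3D point using the camera's pinhole intrinsics; fusing many such point clouds over time (as systems like KinectFusion do) is what yields a full model. Below is a minimal sketch of that first step. The intrinsic values `FX`, `FY`, `CX`, `CY` are illustrative Kinect-class numbers, not anything published by Matterport:

```python
import numpy as np

# Assumed intrinsics for a Kinect-class VGA depth camera (illustrative only).
FX, FY = 525.0, 525.0   # focal lengths in pixels
CX, CY = 319.5, 239.5   # principal point (roughly the image center)

def depth_to_points(depth_m):
    """Back-project a depth image (in meters) to 3D points via the pinhole model.

    For each pixel (u, v) with depth z:
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
    Returns an (N, 3) array; pixels with depth 0 (no reading) are dropped.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]

# Example: a flat wall 2 m away filling the whole frame.
wall = np.full((480, 640), 2.0)
points = depth_to_points(wall)
```

Registering the RGB frame onto these points then gives each point a color, which is presumably how the scanner produces textured, true-to-life surfaces rather than bare geometry.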

Matterport is currently working with a handful of "beta partners" in fields such as real estate and video game development. We've reached out for more technical info on what makes this tick, and if we find out more you'll be the first to know. In the meantime, check out Matterport's promo video below.

Sources: Matterport, VentureBeat

About the Author
James Holloway lives in East London, where he punctuates endless tea drinking with freelance writing and meteorological angst. Unlocking Every Extend Extra Extreme's "Master of Extreme" achievement was the fourth proudest moment of his life.

Why would you expect them to tell you (or anyone) how this works? Presumably, once they're for sale, and / or patented, we'll all know.


Look up KinectFusion and you will see a similar concept (not as sweet, but still pretty cool). I love the 20-times-faster claim, but it would be more believable if the very beautiful marketing manager didn't have three T-shirt changes in the video (after all, the Y Combinator facility is not that big, right?). But great work guys, we look forward to seeing it emerge as a real product sometime later this year - can't wait to see more.

Mike MacMillan

...hmmm, looks like a hybrid M$ Kinect hooked to an Apple laptop... ;)

Matt Rings

The sensor they are using is an ASUS Xtion - similar to the Kinect.


This would be an excellent way for a robot to have meaningful information about its environment to a point at which it could be called a robot's eyesight if it could do the mesh scan 20ish times per second.

Alex Lekander

Whaaat? Need details! They don't say anything at all on their site. How does it compare to the Zprinter line of scanners? It looks like it doesn't have any distance limitations, or at least not the horrible limitations of the Zprinter line's "up to 12 inches away" range. I doubt you could scan a cathedral with it (although that would be nice...).

However, it looks like the quality is WAY lower than zprinter's thousandth of an inch resolution.

Wonder how the price compares? It looks like this device as we see it here would be good for making a basic, low quality scan of an environment, but not good enough to scan in faces and such for computer graphics applications, unless the quality we're seeing in the video doesn't represent the finest quality (by about 100x) that it can handle. I'd like to know though.

Dave Andrews

I realize not the same thing, but FARO has way more interesting products...

Tyler Rattray

Thanks everyone for watching our teaser video. More details around our system will be released later this fall, as we get closer to launch.

@Alex L. – Using the Xtion, we have a range limit of around 15 feet, so yes, you have to walk around to capture your rooms. By the way, no more carrying around a laptop in the latest version.

With regard to scanning a cathedral, we do have partners who will attempt such things with our system so never say never.

We are still working out details on price but as stated in the article, it will remain significantly cheaper to create a 3D model with our product than existing tools.

Our focus is on reconstructing spaces. Scanning faces is really cool but a different problem. You know who does this really well? ShapeShot. Their founder Michael Raphael is a great guy.

@Dave A. – Our product video will focus on explaining the end-to-end workflow so you can see that the entire process (scan to textured 3D model) occurs in the timeframe of minutes versus hours or days. Thanks for the kind words; we look forward to showing off the real product too!
