
Wearable system creates digital maps as users walk through buildings

By Ben Coxworth

September 24, 2012

MIT's wearable mapping device


A number of research institutions are currently developing systems in which autonomous robots could be sent into places such as burning buildings to create a map of the floor plan for waiting emergency response teams. For now, however, we still have to rely on humans to perform that sort of dangerous reconnaissance work. New technology being developed at MIT splits the difference: a wearable device that creates a digital map in real time, as the person wearing it walks through a building.

The prototype device, which is about the size of an iPad and worn on the user’s chest, wirelessly transmits data to a laptop in a distant room. That data comes from a variety of sensors, including accelerometers, gyroscopes, a stripped-down Microsoft Kinect camera, and a laser rangefinder. Much of the system utilizes technology used in a previous project, in which a wheeled robot was equipped to perform a similar mapping function.


The laser rangefinder creates a 3D profile of its surroundings by sweeping its beam across the immediate area in a 270-degree arc, measuring how long each light pulse takes to be reflected back by the surrounding surfaces. Mounted on the robot, it could get fairly accurate readings, as it operated from a relatively level, stable platform.
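The time-of-flight principle behind the rangefinder can be sketched in a few lines. This is a minimal illustration of the physics, not code from the MIT system, and the 100-nanosecond figure is an invented example:

```python
# Sketch: converting a laser pulse's round-trip time to a distance reading.
C = 299_792_458.0  # speed of light, m/s

def pulse_distance(round_trip_seconds: float) -> float:
    """Distance to the reflecting surface: the pulse travels out and
    back, so divide the total flight time by two."""
    return C * round_trip_seconds / 2.0

# A surface roughly 15 m away reflects the pulse in about 100 nanoseconds.
print(round(pulse_distance(100e-9), 2))  # prints 14.99
```

Repeating this measurement at each angular step of the 270-degree sweep yields one "slice" of the room's geometry.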

When worn by a person who’s walking, however, the rangefinder is constantly tipping back and forth – not at all ideal conditions for its use. That’s where the gyroscopes come in. By detecting when and in what way the mapping device is tilted, they provide data that is applied to the information gathered by the rangefinder, so that the user’s movements can be corrected for in the final profile.
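As a rough illustration of that correction, here is a simplified, pitch-only sketch: a range reading taken while the sensor is tipped forward is projected back onto the horizontal plane. The single-axis model and function names are assumptions for illustration; the actual system fuses full three-axis gyroscope data.

```python
import math

def correct_for_tilt(r: float, bearing_rad: float, pitch_rad: float):
    """Project a range reading (distance r at a given bearing) taken
    while the sensor is pitched forward/back onto the horizontal plane.
    A deliberately simplified, pitch-only model."""
    x = r * math.cos(bearing_rad)  # forward component of the raw reading
    y = r * math.sin(bearing_rad)  # sideways component
    # A forward pitch shortens the horizontal projection of the
    # forward component; the sideways component is unaffected here.
    return (x * math.cos(pitch_rad), y)
```

For example, a 10 m reading taken straight ahead while pitched 60 degrees forward projects to only 5 m of horizontal distance, which is why uncorrected readings from a walking user would badly distort the map.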

The accelerometers, meanwhile, provide data on how fast the person is walking – a function performed on the robot by sensors in its wheels. Additionally, the accelerometers register changes in altitude, such as when the user moves from one floor of the building to another. In some experiments, a barometer was added to the setup, as it too reliably indicates altitude changes. By knowing when the person has moved to another floor, the system avoids merging two or more floors into one.
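One way a barometer can flag a floor change is by converting pressure to altitude with the standard international-standard-atmosphere formula and watching for a jump of roughly one storey. The threshold and function names below are assumptions for illustration, not details of the MIT setup:

```python
def pressure_altitude(p_hpa: float, p0_hpa: float = 1013.25) -> float:
    """Altitude in meters relative to reference pressure p0_hpa,
    using the standard-atmosphere barometric formula."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

def floor_changed(alt_prev_m: float, alt_now_m: float,
                  storey_height_m: float = 3.0) -> bool:
    """Treat an altitude jump of about one storey as a floor change."""
    return abs(alt_now_m - alt_prev_m) >= storey_height_m
```

A change of a few tenths of a hectopascal corresponds to a few meters of altitude, so even an inexpensive barometer can distinguish adjacent floors.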

The camera comes into play every few meters, as it snaps a photo of its surroundings. For each image, software takes note of approximately 200 unique visual details such as color patterns, contours or shapes. These details are matched to the map location at which each shot is taken. Subsequently, should the user return to a spot that they’ve already traveled through, the system will be able to identify it not only by its relative position, but also by comparing the newest snapshot of the area to one taken previously.
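The revisit check amounts to comparing the features in a new snapshot against those stored with each mapped location. A toy sketch of the idea, treating features as hashable tokens rather than real image descriptors (the names and the matching threshold are assumptions):

```python
def match_score(new_features, stored_features) -> float:
    """Fraction of the new snapshot's features that also appear in an
    earlier one. The real system works with ~200 visual details per
    image; here features are just hashable tokens."""
    if not new_features:
        return 0.0
    return len(set(new_features) & set(stored_features)) / len(new_features)

def is_revisit(new_features, stored_features, threshold: float = 0.6) -> bool:
    """Declare a revisit when enough features match a stored location."""
    return match_score(new_features, stored_features) >= threshold
```

Recognizing a previously visited spot this way is what lets a mapping system correct the drift that accumulates from the motion sensors alone.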

In the current prototype, a push button is used to “flag” certain locations. After the initial reconnaissance, users can then go back through the finished map, and add annotations to those flags. In a future version, however, the developers would like to see a function whereby users could make location-tagged speech or text annotations live on site.
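A flagged location and its after-the-fact annotation could be modeled as simply as the following sketch; all field and function names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Flag:
    x: float        # map coordinates where the button was pressed
    y: float
    floor: int
    note: str = ""  # filled in later, when the user reviews the map

flags: list[Flag] = []

def press_button(x: float, y: float, floor: int) -> None:
    """Record the current map position when the flag button is pressed."""
    flags.append(Flag(x, y, floor))

def annotate(index: int, text: str) -> None:
    """Attach a note to a previously flagged location."""
    flags[index].note = text

press_button(12.4, 3.1, 2)
annotate(0, "victim last seen here")
```

The live speech- or text-annotation feature the developers envision would simply fill in the note at capture time rather than afterward.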

Ultimately, it is hoped that the device could be shrunk to about the size of a coffee mug. More information on the research is available in the video below.

Source: MIT

About the Author
Ben Coxworth An experienced freelance writer, videographer and television producer, Ben's interest in all forms of innovation is particularly fanatical when it comes to human-powered transportation, film-making gear, environmentally-friendly technologies and anything that's designed to go underwater. He lives in Edmonton, Alberta, where he spends a lot of time going over the handlebars of his mountain bike, hanging out in off-leash parks, and wishing the Pacific Ocean wasn't so far away.
3 Comments

Absolutely brilliant. Frankly, I'm surprised this hadn't already been invented. I guess it was so obvious that no one thought to make it until now.

Joel Detrow
24th September, 2012 @ 05:30 pm PDT

Scientists have been working on this problem (Simultaneous Localization and Mapping, or SLAM) for decades, but it is still challenging to solve, especially under unconstrained motion and in real time. The case is simpler on wheeled robots, which have only three degrees of freedom.

http://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping

http://18.7.29.232/bitstream/handle/1721.1/70952/MIT-CSAIL-TR-2012-013.pdf?sequence=1 (PDF)

niko_n
24th September, 2012 @ 06:42 pm PDT

Very cool – it reminds me of all those space-age video games I played as a kid, where you had to visit part of the map before it came up on radar.

Skyler Baird
4th October, 2012 @ 01:05 am PDT