
MIT unveils colorful solution for cheap, accessible gesture-based computing

By Paul Ridden

May 26, 2010


MIT researchers have unveiled a cheap, easy-to-use gesture recognition system that combines a colored glove, a standard webcam and some clever software


They're not a failed attempt at Belgian jigsaw camouflage or a trophy from clown school: these colorful lycra gloves are the vital component in a new gestural user input system developed by researchers at MIT. When used with a standard webcam and some clever software, the wearer's hand movements are instantly translated into on-screen commands or control gestures. Commercial development of the system could lead to widespread availability of cheap and easy-to-use spatial gesture interfaces.

MIT's Robert Y. Wang and Jovan Popovic have developed a gestural tracking system that uses just a standard webcam, a multi-colored cloth glove and some clever software which includes a new algorithm for rapidly searching through a database for visual matches. Instead of tracking reflective or colored tape attached to a user's digits as seen in other setups, this system can track and register a 3D representation of the whole gloved hand.

The setup and how it is used

The apparently random configuration of 20 irregular colored shapes was in fact specifically chosen, according to Wang, to be "distinctive and facilitate robust tracking." As a result, background objects can be ignored and the system works in various lighting conditions while avoiding reading errors from color or shape collisions.
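The idea that vividly colored patches stand out from ordinary backgrounds can be sketched in a few lines. The function below is purely illustrative and assumed for this article (MIT's system uses its own trained color model, not this simple threshold): it flags pixels whose color saturation is high, which is exactly what cheap cloth dyes against a typically dull room background give you.

```python
import numpy as np

def saturation_mask(rgb, threshold=0.4):
    """Return a boolean mask of pixels whose color saturation exceeds threshold.

    rgb: float array of shape (H, W, 3) with values in [0, 1].
    Saturation here is (max - min) / max per pixel, as in the HSV color model.
    This is a hypothetical sketch, not the MIT system's actual color model.
    """
    mx = rgb.max(axis=2)
    mn = rgb.min(axis=2)
    # Guard against division by zero for pure-black pixels.
    sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-6), 0.0)
    return sat > threshold

# Toy frame: gray background with a saturated green "glove patch" in the middle.
frame = np.full((8, 8, 3), 0.5)    # mid-gray: zero saturation
frame[2:6, 2:6] = [0.1, 0.9, 0.1]  # vivid green: high saturation
mask = saturation_mask(frame)      # True only over the green patch
```

In practice a real tracker would also need per-patch color classification and some tolerance for shading, but the same saturation cue is why a multi-colored glove is so much easier to follow than a bare hand.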

The webcam captures the image of the gloved hand and the software crops away the background. The image is then reduced to a tiny 40 x 40 pixel version. A specially developed algorithm searches through megabytes of database entries for a visual match and then renders the corresponding hand shape and position on the display. All of this happens in a fraction of a second, so the lag between the gloved hand and its virtual counterpart is minimal.
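The lookup step described above can be sketched as a nearest-neighbor search: each database entry pairs a tiny 40 x 40 glove image with a known hand pose, and an incoming frame is matched to the entry with the smallest pixel-wise distance. The helper names, the toy database and the brute-force scan below are all illustrative assumptions; the researchers' actual contribution is a much faster search algorithm than this.

```python
import numpy as np

SIZE = 40  # images are reduced to 40 x 40 pixels, as in the article

def downsample(image, size=SIZE):
    """Crude block-average downsampling of a square grayscale image."""
    h, w = image.shape
    bh, bw = h // size, w // size
    return image[:bh * size, :bw * size].reshape(size, bh, size, bw).mean(axis=(1, 3))

def nearest_pose(query, database):
    """Return the pose label whose stored tiny image is closest to the query.

    Brute-force nearest neighbor, shown only to illustrate the principle;
    the MIT system uses a specially developed, far faster search.
    """
    best_pose, best_dist = None, float("inf")
    for tiny, pose in database:
        dist = np.sum((tiny - query) ** 2)  # squared Euclidean distance
        if dist < best_dist:
            best_pose, best_dist = pose, dist
    return best_pose

# Toy two-entry database with hypothetical pose labels.
flat = np.zeros((SIZE, SIZE))
fist = np.ones((SIZE, SIZE))
database = [(flat, "open hand"), (fist, "closed fist")]

frame = np.full((400, 400), 0.9)  # a mostly bright 400 x 400 input frame
query = downsample(frame)         # reduce to 40 x 40 before matching
```

Shrinking every frame to 40 x 40 before matching is what keeps the database manageable: each entry is only 1600 values, so millions of stored poses still fit in memory and comparisons stay cheap.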

The conversion of a video image into a tiny 40 x 40 pixel image

Different hand sizes don't present too much of an issue either. The one-size-fits-all stretchy lycra glove just needs to be recalibrated in the software for each new user, a process that takes only a few seconds, and then it's good to go.

OK, so it's not as visually appealing as Tom Cruise standing in front of a transparent screen manipulating videos with the wave of a hand, but the research holds the promise of bringing gestural user input within consumer reach. As well as potential gaming applications, Wang sees future use in the 3D modeling scenarios now fairly common in engineering and design.

The Wang and Popovic system was initially shown at a computer graphics conference last year, but it was a little buggy and took too long to set up. Since then, enhancements have made the system a lot faster, more stable and more flexible. Wang is currently looking to expand the system to the whole body, which would make converting live actor movement into digital animation, or evaluating athletic performance, a whole lot cheaper and easier, at the expense of the wearer looking somewhat ridiculous.

The following video shows Wang giving an overview of several possible applications of the gesture recognition system:

About the Author
Paul Ridden
While Paul is loath to reveal his age, he will admit to cutting his IT teeth on a TRS-80 (although he won't say which version). An obsessive fascination with computer technology blossomed from hobby into career before the desire for sunnier climes saw him wave a fond farewell to his native Blighty in favor of Bordeaux, France. He's now a dedicated newshound pursuing the latest bleeding edge tech for Gizmag.
1 Comment

They seem to be ignoring a possible market segment. This could be used to "transcribe" sign language for the deaf. I assume most deaf persons can sign far faster and more intuitively than they can type. Typing can be too time-consuming for realtime communication. I suppose webcams can work, but they're often too jerky with low framerates to support effective signing.

Gadgeteer
27th May, 2010 @ 07:38 pm PDT