— Wearable Electronics
MYO armband delivers one-armed gesture control
Thalmic Labs' MYO lets you control computers via one-armed gestures
Over the last five years, the touchscreen has supplanted the mouse and keyboard as the primary way that many of us interact with computers. But will multitouch enjoy a 30-year reign like its predecessor? Or will a newcomer swoop in and steal its crown? One up-and-comer, Thalmic Labs, hopes that the next ruler will be 3D gesture control.
Like Microsoft Kinect and the upcoming Leap Motion, MYO lets you control a computer with Minority Report-like gestures. But unlike those devices, which rely on optical sensors, MYO uses a combination of motion sensing and muscle-activity detection.
The MYO device itself is an armband. When worn, it senses gestures and sends the corresponding signals (via Bluetooth 4.0) to a paired device. The company claims that its muscle detection (via proprietary sensors) “can sense changes in gesture down to the individual finger.”
In the company’s promo video (which you can watch below) we see people controlling iTunes tracks, playing Mass Effect 3, and giving boardroom presentations – all via gesture. The video closes with a skier (wearing a Google Glass-like device) posting his first-person extreme winter sports video to Facebook with a few flips of the wrist.
One thing you won’t see in the video is anybody using anything other than one arm. Since the device wraps around a single arm, that limb – including its corresponding hand and fingers – is all that it can sense. MYO’s optical competitors – Leap Motion and Kinect – don’t have this constraint.
MYO is already up for pre-order for US$149. The company has also launched a developer API to get a jump on software support. Thalmic Labs says the MYO will ship in “late 2013.”
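The developer API hasn’t shipped yet, so nothing is known about its actual shape. Purely as an illustration of the kind of integration it would enable, here is a minimal sketch of an app mapping named pose events (the sort an armband SDK might deliver over Bluetooth) to actions. The `GestureDispatcher` class and the pose names are hypothetical, not Thalmic Labs’ API:

```python
# Hypothetical sketch only: none of these names come from Thalmic's
# (unreleased) SDK. It shows the general pattern of wiring recognized
# poses to application actions.

class GestureDispatcher:
    """Maps named poses (as a gesture SDK might report them) to callbacks."""

    def __init__(self):
        self._handlers = {}

    def on(self, pose, callback):
        """Register a callback for a pose name like 'wave_left'."""
        self._handlers[pose] = callback

    def dispatch(self, pose):
        """Invoke the handler for a recognized pose; ignore unknown poses."""
        handler = self._handlers.get(pose)
        return handler() if handler else None


# Example wiring: media-player controls like those in the promo video.
dispatcher = GestureDispatcher()
dispatcher.on("wave_left", lambda: "previous track")
dispatcher.on("wave_right", lambda: "next track")
dispatcher.on("fist", lambda: "pause")

print(dispatcher.dispatch("wave_right"))  # next track
```

However the real API turns out, apps will likely sit at this level – reacting to discrete recognized gestures rather than raw muscle signals.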
Can MYO stand out in a gesture-control field that will include Microsoft, Leap Motion, and - who knows - maybe Apple? Check out the video below and decide for yourself.
Source: MYO via TheNextWeb
About the Author
Will Shanklin is Gizmag's Mobile Tech Editor, and has been part of the team since 2012. Before finding a home at Gizmag, he had stints at a number of other sites, including Android Central, Geek and the Huffington Post.
Will has a Master's degree from U.C. Irvine and a Bachelor's from West Virginia University. He currently lives in New Mexico with his wife, Jessica.
How about reversing the application? Have a sensor grid that "reads" a localized activity (e.g., someone playing a piano within range of the sensors), analyzes the input data, and translates it into digitally encoded electrical impulses. Those impulses would then be received by a person wearing an input device (a future iteration of Google Glass?) acting as the sensor-to-brain interconnect, providing instructions that cause the wearer to play the piano as picked up by the sensors. Essentially, the user could mimic whatever activity they point the sensor grid's directional beams at (using those Google Glasses again).
Anybody else ever watch Natalia Zakharenco's last film? Like that but with live action transference.
Just a thought.
This has a pretty good advantage over optical control if it works well - you can operate in a complex and detailed way in a crowded environment. However, I know that the more complicated versions of these devices (those that sense brain and face-muscle activity) often don't work so well, and require goop (it's a technical term :) ) to get the signals through the skin.
@mick - having experienced electrical stimulation of muscles, I can't say it's an altogether pleasant or particularly targeted sensation. Since even complex key presses are very easy for a computer to record and replicate from the key end, I don't see your idea catching on unless it also somehow improves muscle memory.
I wonder if two devices were used ... could they work together to understand sign language?... would potentially be faster than typing and would undoubtedly respond better than voice control in loud environments etc. Can't wait to see this all integrated with android!
Never mind sign language - two bands for the arms and then a couple of bands for the legs? Never mind multitouch, imagine being able to produce complex multi-limb gestures. Imagine fine-tuning your yoga, tai chi or martial arts moves via computer: "ok, now move your left hand a little up, point your right index finger slightly further down...now relax...."
I don't think this will take off. However, I see hope for better prosthetics.
See : Leap Motion.