MYO armband delivers one-armed gesture control

By Will Shanklin

February 25, 2013

Thalmic Labs' MYO lets you control computers via one-armed gestures

Over the last five years, the touchscreen has supplanted the mouse and keyboard as the primary way that many of us interact with computers. But will multitouch enjoy a 30-year reign like its predecessor? Or will a newcomer swoop in and steal its crown? One up-and-comer, Thalmic Labs, hopes that the next ruler will be 3D gesture control.

Like Microsoft Kinect and the upcoming Leap Motion, MYO lets you control a computer with Minority Report-like gestures. But unlike those devices, which rely on optical sensors, MYO combines motion sensing with muscle-activity detection.

The MYO itself is an armband. When worn, it senses gestures and sends the corresponding signals (via Bluetooth 4.0) to a paired device. The company claims that its muscle detection (via proprietary sensors) “can sense changes in gesture down to the individual finger.”
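
Thalmic Labs hasn't published the details of its detection pipeline, but conceptually the flow is straightforward: the muscle and motion sensors feed a classifier, and recognized gestures go out over Bluetooth as discrete events. Here is a minimal sketch of that architecture in Python – every name in it (read_emg, classify_gesture, and so on) is hypothetical, invented purely for illustration:

import time

GESTURES = {"fist", "wave_left", "wave_right", "finger_spread"}

def read_emg():
    # Placeholder: real hardware would return one muscle-activity
    # sample per sensor pod around the forearm.
    return [0.0] * 8

def read_imu():
    # Placeholder accelerometer and gyroscope readings.
    return (0.0, 0.0, 0.0), (0.0, 0.0, 0.0)

def classify_gesture(emg, accel, gyro):
    # A real implementation would run a trained classifier here;
    # this stub only marks where that step sits in the pipeline.
    return None

def send_over_bluetooth(event):
    # Stand-in for the Bluetooth 4.0 link to the paired device.
    print("BLE ->", event)

while True:
    emg = read_emg()
    accel, gyro = read_imu()
    gesture = classify_gesture(emg, accel, gyro)
    if gesture in GESTURES:
        send_over_bluetooth({"gesture": gesture, "time": time.time()})
    time.sleep(0.02)  # ~50 Hz polling; the real rate is unknown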

Uses

If enough developers get on board, the sky's the limit

In the company’s promo video (which you can watch below) we see people controlling iTunes tracks, playing Mass Effect 3, and giving boardroom presentations – all via gesture. The video closes with a skier (wearing a Google Glass-like device) posting his first-person extreme winter sports video to Facebook with a few flips of the wrist.

One thing you won’t see in the video is anybody using anything other than one arm. Since the device wraps around one arm, that limb – including its corresponding hand and fingers – is all that it can sense. MYO’s optical-based competition – Leap Motion and Kinect – don’t have this constraint.

MYO is already up for pre-order for US$149. The company has also launched a developer API to get a jump on software support. Thalmic Labs says the MYO will ship in “late 2013.”
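
What that API will actually look like is anyone's guess at this point, but gesture-driven apps tend to follow an event-listener pattern. As a purely hypothetical sketch (none of these names come from Thalmic Labs), a music-player integration like the one in the promo video might boil down to something like this:

class GestureListener:
    # Hypothetical callback-style handler; the shipping API may
    # look nothing like this.
    def on_gesture(self, gesture):
        if gesture == "wave_right":
            self.next_track()
        elif gesture == "wave_left":
            self.previous_track()
        elif gesture == "fist":
            self.toggle_playback()

    def next_track(self):
        print("Skipping to the next track")

    def previous_track(self):
        print("Back to the previous track")

    def toggle_playback(self):
        print("Play/pause")

listener = GestureListener()
listener.on_gesture("wave_right")  # prints "Skipping to the next track"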

Can MYO stand out in a gesture-control field that will include Microsoft, Leap Motion, and – who knows – maybe Apple? Check out the video below and decide for yourself.

Source: MYO via TheNextWeb

About the Author
Will Shanklin is Gizmag's Mobile Tech Editor, and has been part of the team since 2012. Will has a Master's degree from U.C. Irvine and a Bachelor's from West Virginia University. He currently lives in New Mexico with his wife, Jessica.
6 Comments

How about reversing the application? Have a sensor grid that can "read" a localized activity (e.g., someone playing a piano within range of the sensors), analyze the input data, and translate it into digitally-encoded electrical impulses. Those impulses would then be received by a person wearing an input device (a future iteration of Google Glasses?) functioning as the sensor-to-brain interconnect, providing instructions that cause the person to play the piano as picked up by the sensors. Essentially, the user would be able to mimic whatever activity they direct the sensor grid's directional beams at (using those Google Glasses again).

Anybody else ever watch Natalia Zakharenco's last film? Like that but with live action transference.

Just a thought.

mickBelker
26th February, 2013 @ 01:38 am PST

This has a pretty good advantage over optical control if it works well - you can operate in a complex and detailed way in a crowded environment. However, I know that the more complicated versions of these devices (those that sense brain and face muscle activity) don't often work so well, and require goop (it's a technical term :) ) in order to get the signals through skin.

@mick - having experienced electrical stimulation of muscles, I can't say it's an altogether pleasant or particularly targeted sensation. Since even complex key presses are very easy for a computer to record and replicate from the key end, I don't see your idea catching on unless it also somehow improves muscle memory.

Charles Bosse
26th February, 2013 @ 11:44 am PST

I wonder, if two devices were used, could they work together to understand sign language? It would potentially be faster than typing and would undoubtedly respond better than voice control in loud environments, etc. Can't wait to see this all integrated with Android!

Josh Ansbridge
26th February, 2013 @ 12:32 pm PST

Never mind sign language – two bands for the arms and then a couple for the legs? Never mind multitouch; imagine being able to produce complex multi-limb gestures. Imagine fine-tuning your yoga, tai chi or martial arts moves via computer: "OK, now move your left hand a little up, point your right index finger slightly further down... now relax..."

Bryan Paschke
26th February, 2013 @ 10:14 pm PST

I don't think this will take off. However, I see hope for better prosthetics.

Equilibrium
28th February, 2013 @ 09:26 am PST

See: Leap Motion.

Evelyn Wyngowski
5th May, 2013 @ 06:55 am PDT