
New tech could allow drone aircraft to recognize deck crews' arm signals

By Ben Coxworth

March 14, 2012

Aircraft carrier deck crews may one day be able to direct autonomous drones, using standard arm signals

We’ve all seen footage of flight crews on the decks of aircraft carriers, directing taxiing planes using arm signals. That’s all very well and good when they’re communicating with human pilots, but what happens as more and more human-piloted military aircraft are replaced with autonomous drones? Well, if researchers at MIT are successful in one of their latest projects, not much should change. They’re currently devising a system that would allow robotic aircraft to understand human arm gestures.

The MIT team divided the project into two parts. The first involved getting the system to identify body poses within “noisy” digital images, while the second was concerned with identifying specific gestures within a series of movements – those deck crews don’t stay still for very long.

A stereoscopic camera was used to record a number of videos for the study, in which several different people demonstrated a total of 24 gestures used commonly on aircraft carrier runways. While a device like the Microsoft Kinect could now pick out the body poses in that footage reasonably well, such technology wasn’t around at the time the study began. Instead, a system was created that picked out the positions of the subjects’ elbows and wrists, noted whether their hands were open or closed, and if the thumbs of those hands were up or down.
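The per-frame representation described above — joint positions plus discrete hand states — can be pictured as a small feature vector. The sketch below is purely illustrative (the class and function names are invented, not MIT's code), showing one plausible way to flatten those measurements into a form a per-frame classifier could consume:

```python
from dataclasses import dataclass

@dataclass
class FramePose:
    """Hypothetical per-frame pose record mirroring the features the
    article describes: elbow/wrist positions, hand open/closed, thumb
    up/down (2D image coordinates assumed for simplicity)."""
    left_elbow: tuple
    right_elbow: tuple
    left_wrist: tuple
    right_wrist: tuple
    left_hand_open: bool
    right_hand_open: bool
    left_thumb_up: bool
    right_thumb_up: bool

def to_feature_vector(p: FramePose) -> list:
    # Flatten joint coordinates and encode the boolean hand states
    # as 0.0/1.0, yielding one numeric vector per video frame.
    return [*p.left_elbow, *p.right_elbow, *p.left_wrist, *p.right_wrist,
            float(p.left_hand_open), float(p.right_hand_open),
            float(p.left_thumb_up), float(p.right_thumb_up)]
```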

What the researchers are focusing on now is a way of sifting through all those continuous back-to-back poses, and isolating the different gestures for identification by the drones. It would take too long and require too much processing to retroactively analyze thousands of frames of video, so instead the system breaks the footage up into sequences about three seconds (or about 60 frames) in length. Because one gesture might not be fully contained within any one of those sequences, the sequences overlap one another – frames from the end of one sequence are also included in the beginning of the next.
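The overlapping-window scheme described above is easy to sketch. This is a generic illustration, not the researchers' implementation; the window length (60 frames) comes from the article, while the overlap size here is an arbitrary assumption:

```python
def overlapping_windows(frames, window=60, overlap=10):
    """Split a frame stream into fixed-length windows whose neighbours
    share `overlap` frames, so a gesture straddling a window boundary
    still appears intact in at least one window."""
    step = window - overlap
    return [frames[i:i + window]
            for i in range(0, max(len(frames) - overlap, 1), step)]
```

Each window can then be classified independently, which avoids reprocessing thousands of earlier frames every time a new one arrives.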

The system starts by analyzing the person’s body pose in each frame. It then cross-references that pose with each of the 24 possible gestures, and uses an algorithm to calculate which gesture is most likely being made. This estimation process is then applied to the string of poses that make up the whole sequence, and then to several successive sequences.
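One simple way to picture that estimation step is to score each frame's pose against every gesture and accumulate the evidence across the window. The sketch below is a deliberately simplified stand-in (summing log-probabilities), not the actual algorithm the MIT team used:

```python
import math

def classify_sequence(frame_scores):
    """frame_scores: one dict per frame mapping gesture name to the
    probability that the observed pose belongs to that gesture.
    Sums log-probabilities over the window and returns the gesture
    with the highest total evidence."""
    totals = {}
    for scores in frame_scores:
        for gesture, p in scores.items():
            totals[gesture] = totals.get(gesture, 0.0) + math.log(p)
    return max(totals, key=totals.get)
```

Because individual frames can be ambiguous, accumulating scores over the whole sequence (and over several successive sequences) makes the final decision far more robust than any single-frame guess.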

So far, in identifying gestures from the video database, it’s managed an accuracy rate of about 76 percent. However, the researchers are confident that by refining the algorithms, that rate could be vastly improved.

More details are available in the video below.

Source: MIT

About the Author
Ben Coxworth — An experienced freelance writer, videographer and television producer, Ben's interest in all forms of innovation is particularly fanatical when it comes to human-powered transportation, film-making gear, environmentally-friendly technologies and anything that's designed to go underwater. He lives in Edmonton, Alberta, where he spends a lot of time going over the handlebars of his mountain bike, hanging out in off-leash parks, and wishing the Pacific Ocean wasn't so far away.
7 Comments

FYI, the signals used by plane captains and other ground crew are used almost everywhere you find operating aircraft, including civilian airports. Ground crews also use hand signals to communicate between themselves when working around aircraft, where it is frequently too noisy to hear even shouted requests.

flink
15th March, 2012 @ 03:02 am PDT

Why not replace the human crews waving their arms with drones?

Dawar Saify
15th March, 2012 @ 08:01 am PDT

So they are putting a limited Kinect on the things... is this a major step?

Rob Ayotte
15th March, 2012 @ 09:56 am PDT

Anything less than 100% accuracy on a flight deck would spell disaster. Having a drone's computer "best guessing" what the signal is won't do.

WebsterG
15th March, 2012 @ 01:52 pm PDT

Why not use an electronic homing device? They can't sneeze or make mistakes.

warren52nz
15th March, 2012 @ 03:40 pm PDT

Why isn't the entire process handed over to a computerised auto-pilot system once on the flight deck, removing the need for any signalling and taking the human error factor out of the equation — and also out of a very volatile and dangerous location?

Control could be handed back to the pilot or drone for takeoff, and the system could hold until the aircraft safely takes off or lands before processing the next move.

A human is a single point of failure with no redundancy. A computerised system would have multiple redundancies and require multiple points of failure before becoming inoperable, and never gets tired, never gets divorced and never has a pilot having an affair with a ground crew's wife or husband etc.

Foxy1968
15th March, 2012 @ 08:16 pm PDT

re; Foxy1968

A computer cannot problem solve.

Slowburn
16th March, 2012 @ 05:32 pm PDT