I am what I am, I’m Popeye the audio-visual robot

By Darren Quick

November 3, 2009

The Popeye audio-visual robotic head developed by the POP team

The ease with which human beings make sense of their environment through a range of sensory signals belies the complex processing involved. Approaches to giving robots the same purposeful perception we take for granted have typically studied visual and auditory processes independently. By combining data from both sound and vision, European researchers have developed technology that could help robots understand and respond to human behavior, and even conversations, bringing us closer to a future where humanoid robots can act as guides, mix with people, or use perception to infer appropriate actions.

Although the team from the Perception On Purpose (POP) project encountered difficulties in integrating two different sensory modalities, namely sound and vision, it found that combining the two helped overcome the limitations of each. Vision allows the observer to infer properties such as size, shape, density and texture, whereas sound is used to locate the direction of a source and identify what type of sound it is. On its own, a sound source is difficult to pinpoint because it must be located in 3D space, and there is also background noise to contend with. By combining visual and auditory data, the researchers found it was much easier for a robot to decide what is foreground and what is background.
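To make the idea concrete, here is a minimal sketch of how two microphones and a camera could be fused to localize a speaker. It is not POP's published algorithm: the microphone spacing, sample rate and variance weights are illustrative assumptions. The interaural time difference (ITD) between the two microphone channels yields a rough audio bearing, which is then combined with a (typically more precise) visual estimate.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
MIC_SPACING = 0.15       # m between the two microphones (assumed)
SAMPLE_RATE = 16000      # Hz (assumed)

def itd_azimuth(left: np.ndarray, right: np.ndarray) -> float:
    """Estimate a sound source's azimuth (degrees) from the interaural
    time difference between two microphone signals."""
    # Cross-correlate the channels; the lag of the peak is the ITD in samples.
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    itd = lag / SAMPLE_RATE
    # Geometry of a two-mic array: sin(theta) = itd * c / d, clipped to valid range.
    s = np.clip(itd * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))

def fuse(audio_az: float, visual_az: float,
         audio_var: float = 100.0, visual_var: float = 4.0) -> float:
    """Variance-weighted fusion of the two azimuth estimates: the more
    precise visual cue dominates, but audio still pulls the answer when
    vision is uncertain."""
    w_a, w_v = 1.0 / audio_var, 1.0 / visual_var
    return (w_a * audio_az + w_v * visual_az) / (w_a + w_v)

# Example: a noisy audio bearing of ~20 degrees fused with a visual
# estimate of 14 degrees lands close to the visual cue (~14.2 degrees).
print(fuse(20.0, 14.0))
```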

The team managed to integrate all the required technology, including two microphones and two cameras, into the head of its robot, resulting in a neat and compact platform. Using this setup with the algorithms the team developed, the robot, called Popeye, was able to identify a speaker with a fair degree of reliability.
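As a rough illustration of that last step (again a sketch with hypothetical helper names, not the POP team's code), once the cameras have detected candidate faces, the audio bearing can be used to pick out which person is speaking:

```python
def active_speaker(audio_az: float, face_azimuths: list[float]) -> int:
    """Return the index of the detected face whose azimuth best matches
    the audio bearing -- a crude proxy for deciding who is talking."""
    return min(range(len(face_azimuths)),
               key=lambda i: abs(audio_az - face_azimuths[i]))

# Example: audio bearing of 12 degrees, faces detected at -30, 10 and 45 degrees.
print(active_speaker(12.0, [-30.0, 10.0, 45.0]))  # -> 1
```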

POP’s coordinator, Radu Horaud, feels that some modern uses of artificial intelligence (AI), like chess applications, are limited because they do not learn from their environment. They are programmed with abstract data – say, chess moves – and they process that.

“They cannot infer predicates from natural images; they cannot draw abstract information from physical observations,” he stresses.

For now, POP has achieved many of its aims, and commercial applications for this type of technology are not out of the question. The researchers also hope to continue their work in a further project that would extend some of POP's results into a functioning humanoid robot. In the meantime, POP's work means the purposefully perceptive robot is a not-so-distant future technology.

About the Author
Darren Quick
Darren's love of technology started in primary school with a Nintendo Game & Watch Donkey Kong (still functioning) and a Commodore VIC 20 computer (not still functioning). In high school he upgraded to a 286 PC, and he's been following Moore's law ever since. This love of technology continued through a number of university courses and crappy jobs until 2008, when his interests found a home at Gizmag.