
Honda's HEARBO robot has excellent hearing

By Jason Falconer

November 19, 2012

Honda's HEARBO can distinguish between four different types of sound simultaneously


A team led by Kazuhiro Nakadai at Honda Research Institute-Japan (HRI-JP) is improving how robots process and understand sound. The robot, aptly called HEARBO (HEARing roBOt), can parse four sounds (including voices) at once, and can tell where the sounds are coming from. The system, called HARK, could allow future robot servants to better understand verbal commands from several meters away.

The HARK system (HRI-JP Audition for Robots with Kyoto University) processes sound picked up by eight microphones inside the robot's head. First, the software isolates the noise generated by the robot's own 17 motors and cancels it in real time, a process known as "ego-noise suppression." It then analyzes the remaining audio, applying a sound source localization algorithm that pinpoints the origin of a sound to within one degree.
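The article doesn't detail HARK's localization math, but the underlying idea of microphone-array localization can be illustrated with a generic textbook technique: estimate the time difference of arrival (TDOA) between two microphones via GCC-PHAT cross-correlation, then convert it to a bearing. This is a minimal two-microphone sketch, not Honda's implementation; the sample rate, mic spacing, and signals are invented for the demo.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
MIC_SPACING = 0.1       # meters between the two mics (assumed)

def gcc_phat(sig, ref, fs):
    """Estimate the time difference of arrival of `sig` relative to
    `ref` using GCC-PHAT (phase-transform-weighted cross-correlation)."""
    n = len(sig) + len(ref)
    R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    R /= np.abs(R) + 1e-12                 # PHAT weighting: keep only phase
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

# Simulate a noise source whose wavefront reaches mic B
# three samples after mic A.
fs = 16000
src = np.random.default_rng(0).standard_normal(1024)
mic_a = src
mic_b = np.concatenate((np.zeros(3), src[:-3]))

tdoa = gcc_phat(mic_b, mic_a, fs)
# Far-field geometry: sin(theta) = c * tdoa / mic spacing
angle = np.degrees(np.arcsin(np.clip(SPEED_OF_SOUND * tdoa / MIC_SPACING, -1, 1)))
print(f"TDOA: {tdoa * 1e6:.0f} us, bearing: {angle:.1f} degrees")
```

A real system like HARK fuses delays across all eight microphones (and handles multiple simultaneous sources), which is what makes one-degree accuracy and four-way source separation possible; the two-mic case above only resolves a single bearing.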

"By using HARK, we can record and visualize, in real time, who spoke and from where in a room," explains Nakadai on the HRI-JP website. "We may be able to pick up voices of a specific person in a crowded area, or take minutes of a meeting with information on who spoke what by evolving this technology."

In one experiment, the robot took food orders from four people speaking simultaneously – and knew who had ordered what. In another experiment, the robot played a game of rock-paper-scissors with three people. Each person said either rock, paper, or scissors at the same time, and the robot was able to determine who won. Others have taught the robot what different musical instruments sound like, which could allow the robot to separate a song into various parts.

HARK allows the robot to parse up to four speakers simultaneously, as shown in this example of "verbal rock-paper-scissors"

HARK belongs to a domain of artificial intelligence known as robot audition, something any practical robot helper will require in daily life. Honda has reportedly invested more than US$60 million in its humanoid robot, ASIMO, which it hopes to commercialize one day. Earlier work by the same team was applied to the latest version of ASIMO, which can understand different words spoken by three people simultaneously.

In the first video demonstration below, HEARBO is bombarded with a beeping alarm clock, music, and a person speaking to it. Not only can it distinguish between the types of sound it is hearing, but it turns its head in the direction of the sound it is seeking. In the second demonstration, the robot listens to verbal commands while music plays. It estimates the song's tempo and dances to the rhythm, and performs ego-noise suppression to cancel out its own servo noise.

Source: Honda HRI-JP via IEEE Spectrum

About the Author
Jason Falconer is a freelance writer based in central Canada with a background in computer graphics. He has written about hundreds of humanoid robots on his website Plastic Pals and is an avid gamer with an unsightly collection of retro consoles, cartridges, and controllers.
2 Comments

Hi Jason,

Bit left field, just throwing it out there:

"The system, called HARK, could allow future robot servants to better understand verbal commands from several meters away."

I understand these machines at present have no consciousness, and will essentially provide a service to the end user in a limited capacity.

Might just be a cultural thing but connotations of servitude in my mind blend with servant and slave.

In the likely chance that an intelligent machine is scanning through these articles 10-15 years from now, it might be nice to refer to these machines as 'helpers' or 'aides'. :)

Nairda
19th November, 2012 @ 07:30 pm PST

@Nairda

The word "robot" is derived from a term essentially meaning serf labor :)

I think it's good to be optimistic about the technology but I think we'll probably have to wait a bit longer than 15 years before we need to worry about hurting a robot's feelings.

Jason Falconer
20th November, 2012 @ 08:35 am PST