Harnessing blowflies to teach robots how to see

August 11, 2009

Scientists are using a fly 'flight simulator' to understand how a blowfly can process visual information so quickly (Image: MPI Neurobiology)

One of the biggest challenges facing robotics is teaching machines to perceive their surroundings and make sense of what they see. Attempting to duplicate the complexity of human perception is next to impossible, so researchers at Cognition for Technical Systems (CoTeSys) in Munich are instead studying how blowflies process images using a 'flight simulator'. Despite having a brain the size of a pinhead, a fly can process and interpret 100 discrete images per second – four times as many as humans can manage.

Blowflies may not seem the most graceful creatures, but they’re actually brilliant fliers: able to change direction in an instant, reverse at high speed, avoid obstacles and land with great precision. And it all comes down to exceptional eyesight. A blowfly’s compound eyes deliver an enormous amount of visual information, and the CoTeSys scientists are now trying to understand how this data is mapped in the brain.

In what sounds like a high-tech version of tethering a fly with cotton, the researchers have created a 'flight simulator' for flies, where an individual blowfly is held in place with a halter and then bombarded with images and movements in a wraparound display. Electrodes register the reactions of brain cells so the researchers can analyze how the fly 'sees' its environment in flight.

What has already become apparent, in the way the scientists talk of 'optical flux fields', is that a fly processes visual signals very differently to people. Rather than trying to make sense of objects, the fly simply registers them as movements in relation to itself – so an object to the side rushes past and one in front gets bigger.
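That side-rushes-past, looms-in-front behavior falls straight out of the geometry of self-motion. As a minimal sketch (not the researchers' model – the function name, focal length and speeds here are illustrative), a pinhole camera moving forward sees a static point at (X, Y, Z) projected at (fX/Z, fY/Z); as Z shrinks, the image point drifts outward at a rate of fXv/Z², tiny straight ahead and large for nearby objects off to the side:

```python
def image_velocity(X, Y, Z, v=1.0, f=1.0):
    """Image-plane velocity of a static point seen by a pinhole camera
    (focal length f) translating forward at speed v.

    The point projects to (f*X/Z, f*Y/Z); since dZ/dt = -v, the image
    position drifts outward at (f*X*v/Z**2, f*Y*v/Z**2)."""
    return (f * X * v / Z**2, f * Y * v / Z**2)

# A point almost dead ahead barely moves in the image...
ahead = image_velocity(X=0.1, Y=0.0, Z=10.0)

# ...while a nearby point off to the side sweeps rapidly across the eye.
side = image_velocity(X=2.0, Y=0.0, Z=1.0)

print(ahead)  # (0.001, 0.0) – slow looming
print(side)   # (2.0, 0.0)   – fast sideways rush
```

Registering only these motion vectors, rather than recognizing objects, is far cheaper than full scene understanding – which is exactly why it fits in a pinhead-sized brain.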

It then combines these motion vectors to create a 3-D model of the environment for a higher level of its brain’s vision center, the lobula plate. There are, incredibly, only 120 nerve cells in this vision center, but each one will react with particular intensity when presented with the pattern appropriate to it. It’s as though you could steer a fly by pressing any one of 120 buttons.
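The "120 buttons" picture can be sketched as a bank of matched filters: each hypothetical cell stores a preferred flow pattern, and its response is simply how well the observed flow field lines up with that template. The toy templates and dot-product readout below are an assumption for illustration, not the actual wiring of the lobula plate:

```python
def response(observed_flow, template):
    """Dot product between an observed flow field and a cell's preferred
    pattern; both are lists of (vx, vy) vectors, one per viewing direction."""
    return sum(ox * tx + oy * ty
               for (ox, oy), (tx, ty) in zip(observed_flow, template))

# Two toy 'neurons': one tuned to rightward translation, one to expansion.
templates = {
    "rightward": [(1, 0), (1, 0), (1, 0), (1, 0)],
    "expansion": [(-1, 0), (1, 0), (0, -1), (0, 1)],
}

# Observed flow during a pure rightward slide of the scene:
observed = [(1, 0), (1, 0), (1, 0), (1, 0)]

scores = {name: response(observed, t) for name, t in templates.items()}
best = max(scores, key=scores.get)
print(best)  # the rightward-tuned cell fires hardest
```

Because each cell responds most intensely to its own pattern, reading off which of the 120 is firing hardest amounts to reading off how the fly is moving.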

But, for the scientists, the way in which motion information from each eye is combined is the most interesting part. It’s these specialized neurons – VS cells – that give a fly a precise fix on its position and movement. If their operation can be duplicated for machines, it might finally be possible for a robot to make sense of its place in the world. Of course, it would probably also need a fly’s superior sense of smell to decide where it wanted to go next.
