“Seeing” NeuFlow supercomputer based on the human visual system

By Darren Quick

September 15, 2010

NeuFlow takes its inspiration from the mammalian visual system, mimicking its neural network to quickly interpret the world around it

The brain’s ability to visually interpret our environment requires such an enormous number of computations that it is remarkable it accomplishes the feat so quickly and with seemingly little effort. Building a computer-driven system that can match the human brain at visually recognizing objects, however, has proven difficult. Now Eugenio Culurciello of Yale’s School of Engineering & Applied Sciences has developed a supercomputer based on the human visual system that performs such tasks faster and more efficiently than conventional hardware.

Dubbed NeuFlow, the system takes its inspiration from the mammalian visual system, mimicking its neural network to quickly interpret the world around it. The system uses complex vision algorithms developed by Yann LeCun at New York University to run large neural networks for synthetic vision applications. It is extremely efficient, running more than 100 billion operations per second while consuming only a few watts – a workload that takes desktop computers with multiple graphics processors more than 300 watts to match.
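To get a feel for where those billions of operations come from, consider the multiply-accumulate (MAC) count of a single convolutional layer, the core operation in the kind of neural networks NeuFlow accelerates. The layer sizes below are illustrative assumptions, not NeuFlow's actual design:

```python
# Hypothetical sketch: counting multiply-accumulates (MACs) for one
# stride-1 "valid" convolution layer with a k x k kernel.
# All sizes are illustrative assumptions.

def conv_layer_macs(in_h, in_w, in_ch, out_ch, k):
    """MACs needed to convolve an in_h x in_w x in_ch input
    with out_ch kernels of size k x k x in_ch, stride 1, no padding."""
    out_h, out_w = in_h - k + 1, in_w - k + 1
    # Each output pixel in each output channel needs in_ch * k * k MACs.
    return out_h * out_w * in_ch * out_ch * k * k

# One modest layer on a VGA-sized color frame:
macs = conv_layer_macs(480, 640, 3, 16, 7)
print(f"{macs:,} MACs per frame")  # 706,813,632 MACs per frame
```

A single small layer already costs roughly 0.7 billion MACs per frame; at video rates, a multi-layer network quickly reaches the tens of billions of operations per second the article describes.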

“One of our first prototypes of this system is already capable of outperforming graphic processors on vision tasks,” Culurciello said.

One potential application that Culurciello and LeCun are focusing on is a system that would allow cars to drive themselves. In order to recognize the various objects encountered on the road – other cars, people, stoplights, sidewalks, not to mention the road itself – NeuFlow processes tens of megapixel images in real time.

The algorithm used by the system employs temporal-difference image sensors to recognize objects and people’s postures in real time. It works with any regular off-the-shelf camera and image sensor array and is lightweight enough to be implemented in embedded platforms, sensor networks and mobile phones.
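The basic idea behind temporal-difference sensing can be sketched in a few lines: flag pixels whose intensity changed by more than a threshold between consecutive frames. The frames and threshold below are toy assumptions (a real sensor does this per pixel in hardware):

```python
# Minimal frame-differencing sketch of temporal-difference sensing.
# Frames are plain lists of grayscale intensities; the threshold is an
# arbitrary illustrative value, not one taken from NeuFlow.

def temporal_difference(prev_frame, curr_frame, threshold=20):
    """Return a binary change mask: 1 where a pixel changed
    significantly between frames, 0 elsewhere."""
    return [1 if abs(curr - prev) > threshold else 0
            for prev, curr in zip(prev_frame, curr_frame)]

# A moving object brightens two pixels between frames:
prev = [10, 10, 10, 10]
curr = [10, 200, 200, 10]
print(temporal_difference(prev, curr))  # [0, 1, 1, 0]
```

Only the changed pixels produce output, which is why the approach is cheap enough for embedded platforms and phones: static background generates no work.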

Culurciello embedded the supercomputer on a single chip, making the system much smaller, yet more powerful and efficient, than full-scale computers. “The complete system is going to be no bigger than a wallet, so it could easily be embedded in cars and other places,” Culurciello said.

Beyond autonomous car navigation, the system could be used to help robots navigate dangerous or difficult-to-reach locations, to provide 360-degree synthetic vision for soldiers in combat situations, or in assisted living settings, where it could monitor motion and call for help should an elderly person fall, for example.

Culurciello presented a report on NeuFlow on Sept. 15 at the High Performance Embedded Computing (HPEC) workshop in Boston, Mass.

Overview of the NeuFlow system:

About the Author
Darren Quick
Darren's love of technology started in primary school with a Nintendo Game & Watch Donkey Kong (still functioning) and a Commodore VIC 20 computer (not still functioning). In high school he upgraded to a 286 PC, and he's been following Moore's law ever since. This love of technology continued through a number of university courses and crappy jobs until 2008, when his interests found a home at Gizmag.
3 Comments

So it uses temporal difference imaging? Which is what, like heat vision basically? What about extremely hot or extremely cold places? Why don't we work on something that can process a regular image feed like our eyes do, so that we know it can work everywhere we can, then maybe throw in some thermal on top so that it can have an extra advantage.

Aradoth
16th September, 2010 @ 06:17 am PDT

Um... "Temporal" means "in time" and not "in temperature." Think "temporary." Essentially the system detects changes over time in the environment around it by processing different frames of a movie. If a person were to stand perfectly still, the system might identify it as a statue (were it that sophisticated), and a statue being towed down the street on a dolly might inadvertently be identified as a person. The model isn't perfect, but it's a great start, and has far-reaching implications. The implementation in such a small form factor with such low power consumption is significant.

J.D. Ray
16th September, 2010 @ 11:35 am PDT

The word "temporal" refers to time. You are thinking "thermal," which refers to heat. They ARE mimicking the eye. I suspect "temporal" here is akin to the compression algos that only take note of changes over time, like "the car moved since the last frame." This makes a class of standing-still things and a class of moving things, and allows the system to focus attention more directly on the moving things. Eyes do this: check for acuity on your periphery (edge of what you can see), move your finger till it's almost unseeable... then note the big flash of attention when you move it. Horses are terribly upset on windy days because all the movement gets in the way of noticing the approaching meat eater behind the grass.

I'm more concerned with the car computer crashing, resulting in the car crashing. "windows is shutting down now"...ooops!!!!! I bet a chauffeur is cheaper, and oh, those uniforms are to die for.

waltinseattle
16th September, 2010 @ 12:22 pm PDT