
"Inexact" computer chip makes mistakes, but is 15x more efficient

By Darren Quick

May 17, 2012


A prototype “inexact” computer chip that is around 15 times more efficient than current microchips (Avinash Lingamneni/Rice University/CSEM)


Last year, a team of U.S. researchers applied the pruning shears to computer chips to trim away rarely used portions of digital circuits. The result was chips that made the occasional mistake, but were twice as fast, used half as much energy, and were half the size of the original. Now, building on the same “less is more” idea, the researchers have built an “inexact” prototype silicon chip they claim is at least 15 times more efficient than current technology in terms of speed, energy consumption and size.

In the traditionally exacting world of computing, it might seem counterintuitive to set out to develop a chip that is allowed to make a few errors. But by managing the probability of errors and restricting which calculations are allowed to produce them, the research team led by Krishna Palem has been able to slash energy demands while also boosting performance.
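To get a feel for the trade-off, consider a toy software model, not the team's actual circuit design, of an approximate adder that simply ignores the lowest few bits of its operands; dropping that low-order work is one common way to approximate addition, and it produces small, bounded relative errors. The Python sketch and its parameters below are purely illustrative:

    def pruned_add(a, b, pruned_bits=4):
        # Approximate addition that ignores the lowest `pruned_bits` bits of each
        # operand. In hardware, removing the corresponding low-order carry logic
        # would save gates and energy at the cost of small errors.
        mask = ~((1 << pruned_bits) - 1)
        return (a & mask) + (b & mask)

    a, b = 1000, 523
    exact = a + b
    approx = pruned_add(a, b)
    print(f"exact={exact} approx={approx} relative error={abs(exact - approx) / exact:.2%}")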

In addition to removing certain processing components, the team employed a second innovation in the prototype chip to further cut energy demands, called “confined voltage scaling,” which trades some of the chip's performance gains for additional energy savings by taking advantage of its improved processing speed.

“In the latest tests, we showed that pruning could cut energy demands 3.5 times with chips that deviated from the correct value by an average of 0.25 percent,” said Avinash Lingamneni, a Rice graduate student and co-author of the study. “When we factored in size and speed gains, these chips were 7.5 times more efficient than regular chips. Chips that got wrong answers with a larger deviation of about 8 percent were up to 15 times more efficient.”

While you probably wouldn’t want to find any inexact chips in the cockpit of an airplane or a missile guidance system, there are plenty of applications where a certain margin of error is acceptable.

“Particular types of applications can tolerate quite a bit of error. For example, the human eye has a built-in mechanism for error correction,” says project co-investigator Christian Enz. “We used inexact adders to process images and found that relative errors up to 0.54 percent were almost indiscernible, and relative errors as high as 7.5 percent still produced discernible images.”
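As a rough, purely hypothetical illustration of why small per-pixel errors are hard to notice, the following Python sketch blends two synthetic 8-bit grayscale "images" with a toy approximate adder and reports the mean relative error per pixel; the pixel data, the adder, and the number of dropped bits are all assumptions, and the study's own figures and test images are not reproduced here:

    import random

    def pruned_add(a, b, pruned_bits=2):
        # Toy approximate adder: ignore the lowest `pruned_bits` bits of each operand.
        mask = ~((1 << pruned_bits) - 1)
        return (a & mask) + (b & mask)

    random.seed(0)
    img1 = [random.randint(0, 255) for _ in range(10_000)]  # synthetic grayscale pixels
    img2 = [random.randint(0, 255) for _ in range(10_000)]

    errors = []
    for p, q in zip(img1, img2):
        exact = (p + q) // 2            # exact 50/50 blend of two pixels
        approx = pruned_add(p, q) // 2  # approximate blend
        if exact:
            errors.append(abs(exact - approx) / exact)

    print(f"mean relative error: {sum(errors) / len(errors):.2%}")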

Frames produced with video-processing software on traditional hardware (left), inexact processing hardware with a relative error of 0.54 percent (middle) and with a relative error of 7.58 percent (right) (Image: Rice University/CSEM/NTU)

Palem says devices such as hearing aids, cameras and other electronic gadgets that use special-purpose “embedded” microchips are likely to be the first applications for the pruned processors.

The inexact design is also integral to the I-slate educational tablet being developed by the Rice-NTU Institute for Sustainable and Applied Infodynamics (ISAID). Intended for Indian classrooms where there is no power, the low-cost tablet is being designed to run on solar power from small panels like those found on solar-powered calculators by using pruned chips that cut power requirements in half.

Earlier this year, Indian officials in the Mahabubnagar District announced plans to put 50,000 I-slates into middle and high school classrooms over the next three years. Palem expects the first I-slates and prototype hearing aids containing pruned chips to appear by 2013.

The research team, made up of experts from Rice University in Houston, Singapore’s Nanyang Technological University (NTU), Switzerland’s Center for Electronics and Microtechnology (CSEM) and the University of California, Berkeley, unveiled their prototype pruned chips at the ACM International Conference on Computing Frontiers in Cagliari, Italy, this week, where they picked up best-paper honors.

Source: Rice University

About the Author
Darren Quick
Darren's love of technology started in primary school with a Nintendo Game & Watch Donkey Kong (still functioning) and a Commodore VIC 20 computer (not still functioning). In high school he upgraded to a 286 PC, and he's been following Moore's law ever since. This love of technology continued through a number of university courses and crappy jobs until 2008, when his interests found a home at Gizmag.
14 Comments

If they link three or more together to use only the most common outputs, they will be able to compensate for most of the 'mistakes' while still being more efficient than a single chip. (If their efficiency numbers are right).

Tiltrotortech
18th May, 2012 @ 01:37 am PDT

so finally a Turbo for Processor :)

doesn't this mean that inexact processing (15x faster) could be achieved on a regular processor by patching it at the BIOS level and blocking its less-used sections...

the benefit is that it's reversible

Imran Sheikh
18th May, 2012 @ 07:41 am PDT

It seemed to be a horrible idea at first, then I ran into the use for photos above. Photos seemed like a good idea until I studied the pictures. I'm very surprised I can tell the 0.54 percent error from the no-error picture. The appearance difference of the 7.58 percent error photo is not an acceptable degradation. From this I can imagine using it for video. I think it would be great for robot vision. I can't think of any other application where I'd want a computer part dumbed down for "efficiency".

Dave B13
18th May, 2012 @ 10:39 am PDT

Now if they can just make a regular chip that can decide to prune itself when requested.

This would allow the programmer to decide when to dumb things down for specific processes.....

For example, when recording/monitoring a security camera, the chip could dumb itself down until it sees movement and then kick into high efficiency/low speed/high energy use.

PrometheusGoneWild.com
18th May, 2012 @ 05:44 pm PDT

Well, I'm not impressed. The third image is terrible, and at first sight the middle one has less detail, is less sharp and has more faded colors.

The whole of IT is already loaded with bugs; you can't find any device or software that does its job properly and flawlessly. What is more frustrating is that usually nobody knows exactly what causes the bugs or why things don't work as they should, and there are just trial-and-error ways to fix them.

Why would we want more errors for some more speed? We have already accepted that there is no guarantee for any software (read the EULA), and I have no doubt that companies will raise their "error tolerance" level to use such chips, so we can experience more bugs, but at least they will come faster. :)

Great perspective!

Imhof Iván
18th May, 2012 @ 06:06 pm PDT

I think you guys are not getting the picture. Today, our computers have many 'processors'. So adding a couple of inexact cores to a CPU could enable it to increase efficiency on demand by shutting down some of the exact ones. And it would keep the OS and any required program running in an exact core. So don't panic.

Another task for this kind of inexact computing is solving non-polynomial problems that need heuristic approaches, like finding the best route for mail delivery, air traffic, inferring phylogeny, etc. These approaches usually consist of many educated trials and errors and keeping the best solution so far, so one mistake will not be significant, and you can always re-check the best solutions. Oh, and these problems use a huge amount of computing power, so this is not a minor breakthrough!

cachurro
18th May, 2012 @ 07:05 pm PDT

Dave, those are frames rendered from video, so yeah, that's the idea. Obviously photos wouldn't necessarily be allowed to make so many mistakes. One thing I wonder about is the resolution of the video - I'm pretty sure the 7.58% error would be greatly muted in higher-resolution video such as 4K or 8K.

I can imagine graphics processing exploding once again with the use of this kind of pruning - enthusiasts who really care about image quality can set it so every pixel is exactly as it should be, but folks who don't care if some pixels are a few hair-shades off could play at substantially higher resolution & framerates. I bet games of the future will have options to set the % error in addition to all the other graphics options we get now. Very cool!

Joel Detrow
18th May, 2012 @ 07:35 pm PDT

Somehow, I am reminded of Rosie on the Jetsons... :-)

Dan Stillings
18th May, 2012 @ 08:23 pm PDT

To err is Human. To really screw things up requires a computer.

And here we have the proof, processing chips designed to make errors.

Gregg Eshelman
19th May, 2012 @ 02:31 am PDT

@Dave B13 I can think of many applications. Like Dennis said, a device could have several chips inside and only use the high-powered chip when it needs to. There could also be settings on the device. Some people would gladly accept a 7.58 percent loss if it meant charging the device once a month instead of every few days.

Reread the article. You fixated on the most pruned chips. You mean to tell us that you won't accept a device that only varies 1/4 of a percent for 3.5 times less power usage?

VoiceofReason
19th May, 2012 @ 09:29 am PDT

@Dennis. That is actually a pretty good idea. It could be done now with two circuits and a couple of relays, but it would work much better as one unit. It never makes sense to me why people will install cameras of such low quality that the image is almost useless. If it is worth doing, it is worth doing right.

kellory
19th May, 2012 @ 09:42 am PDT

With memristor technology now out, these chips without check bits and math coprocessors are looking washed up before a market is found.

Nice if you want an on-chip CCD processor to pre-render the data for compression, not so nice if it's used to process coded or compressed data, the loss of 1% of which would destroy it.

L1ma
21st May, 2012 @ 06:10 am PDT

No science. Nothing new. Simple things are more effective and fast.

Society is waiting for scientists to rediscover that most people don't need complicated mobile phones, book-reader tablets and other devices - TVs, PCs, cars etc.

Many people do need simple, reliable, and fast things. Not for playing but for use.

A new niche is awaiting manufacturers.

E.g., throw out the 1/3 of unnecessary things from Windows and you get a simple and fast medium for daily use. I have seen such a Windows XP: it starts in about 10 sec and opens internet access in about 5 sec. I would like to have all programs yet still 10 times faster.

Imants Vilks, Artificial Intelligence Foundation Latvia researcher

Imants
21st May, 2012 @ 10:06 am PDT

1. What are they allowing inaccuracy in? I am sure I would love to be the person flying in a plane that was declared problem-free because we accepted less-than-correct results. Or the person whose bank account shows as overdrawn because the accounting software generated an error because of a bad chip.

2. For higher speed, use multi-core processors. There are even motherboards out there that allow for plugging in multiple processors. With the right OS and compiler (ones that support parallel processing) and software written to take advantage of it, programs can be sped up 4-10 times now, with the CORRECT answer every time. The problem is that programs are not written to take advantage of parallel processors when they are available.

3. For lower heat you can use smaller die sizes. Computer builders are already doing this.

You may also remove the hardwired instruction coding from the chips for seldom-used operation codes and replace it with microcode in the OS. If a particular op-code comes back invalid, the OS could use its own instructions to perform the operation, effectively working as an emulator. Unfortunately, when these operations are called upon the system would run slower, but by definition these are instructions used rarely, so the overall speed should not be impacted.

4. @cachurro: Yes, there are many problems today that are essentially impossible for computers to answer correctly, so we accept approximations using heuristics; an example, as you pointed out, is the Traveling Salesman Problem. If you needed the perfect shortest route for, say, 10,000 cities, it would take 10,000 factorial paths to be computed, or a whole lot more power than Big Blue if you needed the answer this year. A close answer is good enough.

I would like to keep guesses and approximations out of my normal processing.

I do think that having an inexact core that could be accessed via an op-code would possibly be a reasonable and desirable thing, though. But it should ONLY be utilized when specifically requested by a program. If I misunderstood you, I apologize.

NatalieEGH
22nd May, 2012 @ 06:36 am PDT