
Researchers demonstrate self-repairing chip

By Ben Coxworth

March 16, 2011

The die for CRISP's self-repairing chip (Image: CRISP)


As chips continue to get smaller, the technological possibilities just get larger. One of the trade-offs of miniaturization, however, is that smaller things are often more fragile and less dependable. Anticipating a point at which chips will become too tiny to maintain their current level of resilience, a team of four companies and two universities in The Netherlands, Germany, and Finland has created what they say could be the solution – a chip that monitors its own performance and redirects tasks as needed.

"Because of the rapidly growing transistor density on chips, it has become a real challenge to ensure high system dependability," said Hans Kerkhoff of The Netherlands' University of Twente, and part of the CRISP (Cutting-edge Reconfigurable ICs for Stream Processing) consortium. "The solution is not to make non-degradable chips, it's to make architectures that can degrade while they keep functioning, which we call graceful degradation."

A block diagram of CRISP's self-repairing chip (Image: CRISP)

In order to make that graceful degradation possible, the CRISP chip incorporates multiple cores. Different tasks are assigned to different cores, by a built-in resource manager. The connections of those cores are continuously tested, and when a fault is detected, the task assigned to that core is simply reallocated to another one.
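For illustration only, here is a minimal software sketch of that scheme in Python. CRISP's mechanism is implemented in silicon, so this is a conceptual analogy rather than the consortium's actual design, and all names here (Core, ResourceManager, run_self_test) are hypothetical: a resource manager assigns tasks to cores, each core is periodically self-tested, and a task on a failing core is moved to a healthy spare.

```python
# Conceptual sketch (not CRISP's actual hardware design): a resource manager
# that reallocates a task when a core fails its self-test.
import random


class Core:
    def __init__(self, core_id):
        self.core_id = core_id
        self.healthy = True
        self.task = None

    def run_self_test(self):
        # Stand-in for the chip's built-in connection test; a core
        # randomly "degrades" here to simulate a detected fault.
        if random.random() < 0.02:
            self.healthy = False
        return self.healthy


class ResourceManager:
    def __init__(self, cores):
        self.cores = cores

    def assign(self, task):
        core = self._find_healthy_idle_core()
        if core is None:
            print(f"no healthy core left for task '{task}'")
            return None
        core.task = task
        return core

    def monitor(self):
        # One monitoring pass: test every busy core and move the task
        # of any core that fails its self-test to a healthy spare.
        for core in self.cores:
            if core.task is not None and not core.run_self_test():
                task, core.task = core.task, None
                replacement = self.assign(task)
                if replacement is not None:
                    print(f"core {core.core_id} failed; task '{task}' "
                          f"moved to core {replacement.core_id}")

    def _find_healthy_idle_core(self):
        for core in self.cores:
            if core.healthy and core.task is None:
                return core
        return None


if __name__ == "__main__":
    manager = ResourceManager([Core(i) for i in range(8)])
    for name in ("video-decode", "filter", "network"):
        manager.assign(name)
    for _ in range(50):  # polling loop standing in for continuous testing
        manager.monitor()
```

The point of the sketch is the division of labour: the cores only report pass or fail, while the resource manager owns the task-to-core mapping, which is what allows a failing core to be retired without interrupting the workload.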

Although the chip itself isn't actually any stronger, it can function at full capacity for a longer period of time.

CRISP's self-testing, self-repairing chip was recently demonstrated at the DATE 2011 (Design, Automation and Test in Europe) conference in Grenoble, France.

About the Author
Ben Coxworth – An experienced freelance writer, videographer and television producer, Ben has an interest in all forms of innovation that is particularly fanatical when it comes to human-powered transportation, film-making gear, environmentally friendly technologies and anything that's designed to go underwater. He lives in Edmonton, Alberta, where he spends a lot of time going over the handlebars of his mountain bike, hanging out in off-leash parks, and wishing the Pacific Ocean wasn't so far away.
3 Comments

Isn't it ironic that the solution to problems created by miniaturisation is to bloat the thing out with more cores and extra circuitry to monitor and switch? That then makes the thing bigger again, but without the robustness that might be attained by simply not miniaturising it as much, perhaps?

As for graceful degradation, am I the only one that sees a problem with that? So it lasts a bit longer. We want it to last forever. Especially if it's in an aircraft flight-control system.

I guess as long as it can issue a warning that it is degrading that could still be useful - effectively a backup system.

The claim of operating at full capacity for longer once degraded seems a bit of a contradiction to me though. While the spare cores aren't being used, that's not full capacity.

Adrien
16th March, 2011 @ 03:51 pm PDT

@Adrien

Yes, maybe adding extra circuitry brings complexity, but I believe that's the required component – the "intelligent component", embedded with some AI.

"I guess as long as it can issue a warning that it is degrading that could still be useful - effectively a backup system. "

That's because a hardware element can't know what's happening to it on its own – it has to rely on the "smart memory". A piece of hardware that's melting away can't give a warning by itself...

Josyula Krishna
17th March, 2011 @ 06:50 pm PDT

I think that by 'degradation' they mean that the chip would work OK all the time, but just gradually get slower and slower.

cachurro
18th May, 2012 @ 07:20 pm PDT

