
Boosting software speed by up to 20 percent

By Paul Ridden

April 6, 2010


Researchers from North Carolina State University have come up with a way to break up programs into different threads, resulting in a run-speed increase of up to 20 percent.

For some programs, the arrival of multi-core processors has made little difference to performance. Applications such as word processors and web browsers are unable to split their operations across a number of cores, and instead pile everything onto just one. Researchers from North Carolina State University have come up with a way to break such programs into separate threads, resulting in a run-speed increase of up to 20 percent.

Computer chips are having more and more processing cores squeezed onto them these days, which should translate into significant performance improvements. But applications in which one operation can only proceed once the result of another is known, such as word processors and web browsers, are renowned for resisting attempts to break up that flow chart-like sequence so the work can be spread across multiple cores for parallel processing.

A research team from North Carolina State University has developed a method for separating the memory management side of program operation and running it as a separate thread. Instead of a program cycling on a single central processing unit between performing a computation and allocating or releasing the storage space needed to hold the result, Dr. Yan Solihin and his team have taken the memory management step and had it run on a thread of its own.

With this approach, according to lead author of the research paper Devesh Tiwari: "the computational thread notifies the memory-management thread - effectively telling it to allocate data storage and to notify the computational thread of where the storage space is located. By the same token, when the computational thread no longer needs certain data, it informs the memory-management thread that the relevant storage space can be freed". The upshot is that both threads can run in parallel on different cores, allowing the program to work up to 20 percent more efficiently.
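The protocol Tiwari describes can be sketched in C++ with standard threading primitives. This is a minimal illustration of the idea rather than the paper's actual implementation: the class and struct names are hypothetical, and a single computational thread is assumed. The computational thread posts allocation and free requests to a queue; a dedicated memory-management thread services them on another core and hands back the location of the storage it allocated:

```cpp
#include <condition_variable>
#include <cstdlib>
#include <mutex>
#include <queue>
#include <thread>

// A request from the computational thread to the memory-management thread.
struct Request {
    bool allocate;  // true = allocate storage, false = free it
    size_t size;    // bytes requested (allocate only)
    void* ptr;      // block to release (free only)
};

class MemoryManagerThread {
public:
    MemoryManagerThread() : worker_(&MemoryManagerThread::run, this) {}

    ~MemoryManagerThread() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_all();
        worker_.join();
    }

    // Computational thread: request storage, then wait to be told where it is.
    void* allocate(size_t size) {
        { std::lock_guard<std::mutex> lk(m_);
          requests_.push({true, size, nullptr}); }
        cv_.notify_all();
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !results_.empty(); });
        void* p = results_.front();
        results_.pop();
        return p;
    }

    // Computational thread: tell the manager this storage can be freed,
    // without waiting for the free to actually happen.
    void release(void* ptr) {
        { std::lock_guard<std::mutex> lk(m_);
          requests_.push({false, 0, ptr}); }
        cv_.notify_all();
    }

private:
    // Memory-management thread: service requests as they arrive.
    void run() {
        std::unique_lock<std::mutex> lk(m_);
        for (;;) {
            cv_.wait(lk, [this] { return done_ || !requests_.empty(); });
            while (!requests_.empty()) {
                Request r = requests_.front();
                requests_.pop();
                lk.unlock();  // do the malloc/free work outside the lock
                if (r.allocate) {
                    void* p = std::malloc(r.size);
                    lk.lock();
                    results_.push(p);  // tell the computational thread where it is
                    cv_.notify_all();
                } else {
                    std::free(r.ptr);
                    lk.lock();
                }
            }
            if (done_) return;
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::queue<Request> requests_;  // allocate/free requests in
    std::queue<void*> results_;    // allocated pointers back out
    bool done_ = false;
    std::thread worker_;
};
```

Because `release` returns immediately, the computational thread can move straight on to its next calculation while the bookkeeping happens concurrently on another core, which is where the speedup would come from.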

The technique also opens up development opportunities for simultaneous application integrity or security checks that would otherwise adversely impact program, and possibly system, performance. The paper, "MMT: Exploiting Fine-Grained Parallelism in Dynamic Memory Management", is to be presented at the IEEE International Parallel and Distributed Processing Symposium in Atlanta on April 21.

About the Author
Paul Ridden While Paul is loath to reveal his age, he will admit to cutting his IT teeth on a TRS-80 (although he won't say which version). An obsessive fascination with computer technology blossomed from hobby into career before the desire for sunnier climes saw him wave a fond farewell to his native Blighty in favor of Bordeaux, France. He's now a dedicated newshound pursuing the latest bleeding edge tech for Gizmag.