MIT system can fix your software bugs on its own (by borrowing from other software)

CodePhage, a software tool from MIT, can reportedly fix a common type of computer software bug by borrowing from other software

New software being developed at MIT is proving able to autonomously repair software bugs by borrowing from other programs and across different programming languages, without requiring access to the source code. This could save developers thousands of hours of programming time and lead to much more stable software.

Bugs are the bane of the software developer's life. The fixes themselves are often trivial, typically involving changing only a few lines of code, but identifying exactly which lines need to be fixed can be very time-consuming and frustrating, particularly in larger projects.

But now, new software from MIT could take care of this, and more. The system, dubbed CodePhage, can fix bugs which have to do with variable checks, and could soon be expanded to fix many more types of mistakes. Remarkably, according to MIT researcher Stelios Sidiroglou-Douskos, the software can do this kind of dynamic code translation and transplant (dubbed "horizontal code transplant," from the analogous process in genetics) without needing access to the source code and across different programming languages, by analyzing the executable file directly.

How it works

As an example, let's say you've written a simple computer program that asks the user to input two numbers and outputs the first number divided by the second. Let's also say that, in your code, you forgot to check that the second number is not zero (a division by zero is mathematically undefined).
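To make this concrete, here is a minimal sketch of such a buggy "recipient" program. Python is used purely for illustration – the article doesn't name a language, and this is not the researchers' code (CodePhage itself operates on executables).

```python
# Minimal sketch of the buggy program described above: it divides two
# user-supplied numbers but never checks the divisor.

def divide(a: float, b: float) -> float:
    # Bug: no guard against b == 0, so divide(10, 0) raises an
    # unhandled ZeroDivisionError instead of failing gracefully.
    return a / b

if __name__ == "__main__":
    x = float(input("First number: "))
    y = float(input("Second number: "))
    print(divide(x, y))
```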

CodePhage starts with a buggy application and two inputs: one that triggers no errors ("safe input") and one that does ("unsafe input"). Using a large database of applications, it finds one that can read and correctly process both inputs. In our case, the system would search a vast repository for a function that can divide two numbers safely.
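In our hypothetical example, the two inputs and a suitable "donor" might look like the sketch below; the donor function and its error message are invented for illustration.

```python
# Hypothetical "donor" found in the repository: it performs the same
# division, but already contains the check the recipient lacks.

def donor_divide(a: float, b: float) -> float:
    if b == 0:  # the constraint missing from the buggy recipient
        raise ValueError("divisor must be non-zero")
    return a / b

safe_input = (10.0, 2.0)    # processed correctly by both programs
unsafe_input = (10.0, 0.0)  # crashes the recipient, handled by the donor
```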

"We have tons of source code available in open-source repositories, millions of projects, and a lot of these projects implement similar specifications," says Sidiroglou-Douskos. "Even though that might not be the core functionality of the program, they frequently have subcomponents that share functionality across a large number of projects."

The system differentiates between a "donor" program – the software from which the fix will be borrowed – and a "recipient" program – the bug-ridden code which the MIT system will attempt to fix.

The first step is to feed the "safe" input to the donor code and dynamically track which constraints are being imposed on the input variables. Then, the software does the same with the second, "unsafe" input and compares the two sets of constraints. The points of divergence between the two identify a constraint that was met by the safe input, but not by the unsafe one, and so is likely to be a security check that is missing from the recipient code.

In our example, the safe input would be the divisor being any non-zero number, and the unsafe input would be the divisor being zero. The MIT system would detect that the condition "divisor must be different from zero" is met by the safe input, but not by the unsafe one, correctly identifying that the check for this specific condition is missing from the recipient code and is the probable cause of the bug.
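One rough way to picture that comparison is the sketch below, which boils the donor's dynamic constraint tracking down to plain Python sets; the real system derives these constraints by instrumenting an executable, not from toy traces like this.

```python
# Simplified illustration of the constraint comparison. A real tracer
# records the branch conditions the donor evaluates while it runs; here
# we only note whether the "divisor != 0" check was satisfied.

def constraints_satisfied(divisor: float) -> set:
    satisfied = set()
    if divisor != 0:
        satisfied.add("divisor != 0")
    return satisfied

safe_trace = constraints_satisfied(2.0)    # {"divisor != 0"}
unsafe_trace = constraints_satisfied(0.0)  # empty set

# The divergence points to the check the recipient is probably missing.
missing_checks = safe_trace - unsafe_trace
print(missing_checks)  # {'divisor != 0'}
```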

At this point, CodePhage will take all the discrepancies in the input checks in the donor software and translate them into the programming language of the recipient software. The system will then try to add the new checks to the source code of the recipient program in different parts of the code, until the unsafe input is processed correctly (and the program still behaves as expected when checked against a test suite).
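In our toy example, the end result would be the recipient with the transplanted check guarding the division, re-validated against its tests. The names and the tiny test suite below are hypothetical.

```python
# Recipient after the transplant: the donor's check now guards the division.

def divide_patched(a: float, b: float) -> float:
    if b == 0:  # check translated and inserted from the donor
        raise ValueError("divisor must be non-zero")
    return a / b

def run_tests() -> bool:
    # The original behaviour must be preserved for the safe input...
    if divide_patched(10.0, 2.0) != 5.0:
        return False
    # ...and the unsafe input must now be rejected rather than crash.
    try:
        divide_patched(10.0, 0.0)
        return False
    except ValueError:
        return True

assert run_tests()
```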

Toward bug-free software?

"The longer-term vision is that you never have to write a piece of code that somebody else has written before," says MIT professor Martin Rinard, who was part of the study. "The system finds that piece of code and automatically puts it together with whatever pieces of code you need to make your program work."

Rinard and team say a developer could, for instance, reduce development and testing effort by omitting checks for illegal inputs, and then use their tool to automatically transfer in checks from more robust software, including closed-source, proprietary applications.

According to the researchers, in modern commercial software, security checks can take up 80 percent of the code – so the impact on programming time could, at least in theory, be quite substantial. What's more, while the system currently only handles variable checks, the researchers say the same techniques are designed to track, extract and insert any computation, as long as the system can correctly identify the values assigned to the variables in the donor software.

The MIT system could also be used to transfer checks between different versions of the same application, to help prevent newly released software patches and updates from introducing new bugs.

When tested on seven common open-source programs, CodePhage was reportedly able to patch up the vulnerable code every time, taking up to 10 minutes per repair. In future versions, the researchers hope to reduce this lag time as much as possible.

The advance was presented at the Association for Computing Machinery’s Programming Language Design and Implementation conference this month, and an open-access paper describing the system can be found online.

Source: MIT

8 comments
Kaiser Derden
Of course, if no one else has ever tried to code what you are trying to code, it's useless ...
Developers without a recipe will always code bugs ...
Better requirements = better software ... and good QA practices = even better software ...
Brian M
Not sure if this would really help – it's often bugs in the unique decision process. Yes, divide by zero is a classic example and one that should not exist, but it's dead easy to locate. It's the more application-specific ones – for example, do we trigger a patient-monitoring alarm if the temperature goes above 38°C or at 45°C? From a program's point of view neither is right or wrong. For the patient, well, that's a different matter!
How does the MIT bug diagnostic program know that this is not a monitoring system for making cakes!
Software and test engineers probably don't have to worry just yet about job security!
Don't think this approach will generally work, and it's possibly even more dangerous than the bugs.
martinkopplow
This goes way beyond requirements and QA.
It will probably still need some kind of underlying network or maybe even a rating system before it can go mainstream, though if implemented right, it might have an unforeseeable impact on future development, especially of open-source software. Looking back from the future one day, this could turn out to have been an important milestone on the way towards a universal software architecture.
mathcpat
They are solving the wrong problem – namely, coding in some language just like we have been doing for the past 40 years.
If MIT created a languageless software development environment, programmer productivity would increase by reducing or eliminating these types of bugs to begin with.
Lbrewer42
GREAT!
How do I run my Microsoft-made programs through it?
Douglas Bennett Rogers
Need to be able to flag acceptable errors, such as division by zero at poles.
artmez
Most bugs are more than ordinary typos. They are implementation (software) or requirements (system) design errors. Those "bugs" can be much more tragic since they pass all the tests with flying colors even though the design is flawed.
There can be tragic consequences when testers and programmers are completely independent, because the requirements can be interpreted differently by both. The best person to test a program is its programmer since there is no lapse in communication or understanding of how the requirement was implemented. Some of the safety standards "require" independence and these are applied to aircraft, automotive, medical, and other "safety critical" areas.
Requirements validation is the long pole in the tent.