Everyone knows that, unless you’re extraordinarily gifted, you need to crawl before you can walk. Turns out the same principle could also apply to robots. In a first-of-its-kind experiment conducted by University of Vermont (UVM) roboticist Josh Bongard, both simulated and physical robots were created that, like tadpoles becoming frogs, change their body forms while learning how to walk. He found that these evolving robots were able to learn more rapidly than ones with fixed body forms and that, in their final form, the changing robots had developed a more robust gait.
So far, engineers have been largely unsuccessful at creating robots that can continually perform simple, yet adaptable, behaviors in unstructured environments. This is why Bongard and other robotics experts have turned to computer programs to design robots and develop their behaviors, instead of trying to program the robots’ behavior directly.
Using a sophisticated computer simulation, Bongard unleashed a series of synthetic beasts that move about in a 3-dimensional space. A software routine called a genetic algorithm then breeds generations of the creatures, experimenting with various motions until each develops a slithering, shuffling, or walking gait – depending on its body plan – that allows it to reach a light source without tipping over.
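Bongard's simulation code isn't reproduced here, but the core of a genetic algorithm of this kind can be sketched in a few lines of Python. Everything below is illustrative: the genome encoding, population size, and fitness function are stand-ins (in the real work, fitness comes from the physics simulation scoring the robot's progress toward the light source).

```python
import random

random.seed(0)  # make the illustrative run repeatable

GENOME_LEN = 12   # one motion parameter per moving part (the robots have 12)
POP_SIZE = 20
GENERATIONS = 50

def fitness(genome):
    # Stand-in for the physics simulation: higher is better as the
    # parameters approach a made-up target gait. In Bongard's work,
    # fitness reflects reaching the light source without tipping over.
    target = [0.5] * GENOME_LEN
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome, scale=0.1):
    # Each offspring is a noisy copy of a surviving parent.
    return [g + random.gauss(0, scale) for g in genome]

def evolve():
    population = [[random.random() for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]  # keep the fittest half
        offspring = [mutate(random.choice(survivors))
                     for _ in range(POP_SIZE - len(survivors))]
        population = survivors + offspring
    return max(population, key=fitness)

best = evolve()
```

Over successive generations, random variation plus selection pushes the population toward better-scoring gaits, which is the essence of the approach regardless of how the fitness is actually computed.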
“The robots have 12 moving parts,” Bongard says. “They look like the simplified skeleton of a mammal: it’s got a jointed spine and then you have four sticks – the legs – sticking out.”
Some of the creatures begin flat to the ground, like tadpoles or snakes with legs; others have splayed legs, a bit like a lizard; and others remain on upright legs, like mammals, for the entire run. Bongard found that the generations of robots that progressed from slithering to splayed legs and, finally, to upright legs ultimately performed better, and discovered the desired behavior faster, than robots that started out in the upright position.
“The snake and reptilian robots are, in essence, training wheels,” says Bongard. “They allow evolution to find motion patterns quicker, because those kinds of robots can’t fall over. So evolution only has to solve the movement problem, not the balance problem, initially. Then, gradually, over time it’s able to tackle the balance problem after already solving the movement problem.”
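The "training wheels" schedule itself can be pictured as a simple function of evolutionary time. The function below is a hypothetical sketch, not the paper's actual parameterization: it just ramps the legs from flat (slithering, impossible to fall over) toward upright (mammal-like walking) over the course of a run.

```python
def leg_angle(generation, total_generations=100, flat=0.0, upright=90.0):
    """Hypothetical developmental schedule: legs start flat to the ground
    (snake-like bodies can't fall over) and are raised toward an upright,
    mammal-like posture as evolution proceeds. Angles are in degrees."""
    fraction = min(max(generation / total_generations, 0.0), 1.0)
    return flat + fraction * (upright - flat)
```

Called inside the evolutionary loop, a schedule like this means early generations only have to learn to move, while the balance problem is introduced gradually as the body is raised.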
The process mimics the way a human infant first learns to roll, then crawl and, finally, walk.
“Yes,” says Bongard. “We’re copying nature, we’re copying evolution, we’re copying neural science when we’re building artificial brains into these robots.”
“Realizing adaptive behavior in machines has to date focused on dynamic controllers, but static morphologies,” Bongard writes in his paper that appears in the Proceedings of the National Academy of Sciences. “This is an inheritance from traditional artificial intelligence in which computer programs were developed that had no body with which to affect, and be affected by, the world.”
“One thing that has been left out all this time is the obvious fact that in nature it’s not that the animal’s body stays fixed and its brain gets better over time,” he says. “In natural evolution, animals’ bodies and brains are evolving together all the time.”
Bongard says this hasn’t been done in robotics because it’s much easier to change a robot’s programming than it is to change its body.
Bongard also tested the idea in hardware, building a physical robot whose legs are gradually bent from a flat, splayed posture to an upright one by a brace. “While the brace is bending the legs, the controller is causing the robot to move around, so it’s able to move its legs and bend its spine,” he says. “It’s squirming around like a reptile flat on the ground, and then it gradually stands up until, at the end of this movement pattern, it’s walking like a coyote.”
Bongard's research at UVM, which is supported by the National Science Foundation, is part of a wider field called evolutionary robotics, which aims to produce capable robots as quickly and reliably as possible. He says that, while his robot is a simple prototype, it serves as a proof of concept.