Semiautonomous driving system takes over when drivers make mistakes


July 16, 2012

MIT researchers have developed a semiautonomous safety system which allows a human driver full control of a vehicle until it detects that the car is heading toward a hazard or obstacle, at which point it takes control and steers to safety (Image courtesy of Sterling Anderson)



We all like to think we're in control ... never more so than when we're behind the wheel of a car, but there are occasions when errors in judgement can lead to a gentle bump, or something far worse. MIT researchers have developed a semiautonomous collision avoidance system where the human driver has full control of the vehicle until the system detects that the car is headed for a collision or is too close to an obstacle for safety. When such a hazard is detected, the system will take control of the vehicle, bring it back within a calculated safe zone, and then hand control back over to the driver.

The so-called intelligent co-pilot system is the work of Sterling Anderson, a PhD student in MIT's Department of Mechanical Engineering, and Karl Iagnemma, principal research scientist at the Institute's Robotic Mobility Group. Rather than relying on path-based control, as in self-parking systems where the driver hands the vehicle full control to park safely, the system uses selective enforcement of constraints.

"This basis in constraints and corresponding fields of safe travel allows us to do something more than autonomous systems can do," Anderson told Gizmag. "Rather than simply control the vehicle autonomously (which, without a human in the loop is a much simpler proposition), our system is also capable of sharing control with the human driver. Additionally, our approach bases its control actions on threat - the perceived need for intervention - and allows us to tailor the mode and level of intervention to the performance and/or preference of the human driver."

Data gathered by onboard sensors (a front-facing camera and a laser rangefinder) is analyzed by a custom algorithm, which determines a safe zone within which the human driver has full navigational control of the vehicle. Should the semiautonomous safety system detect that the actions of the driver are about to take the vehicle outside of that zone, perhaps heading straight for an obstacle or hazard, it takes over and steers the vehicle back to safety. Once within the zone again, control is handed back to the driver.

Anderson and Iagnemma have put the system through more than 1,200 trials in Michigan since September 2011. Test drivers were seated in front of a computer monitor showing a forward-facing video feed streamed wirelessly from a heavily modified Kawasaki Mule 4010 out on an obstacle-laden test range. The utility vehicle was equipped with a Velodyne LIDAR, an inertial measurement unit (IMU), GPS, an onboard Linux PC for processing the sensor and positioning data, and steering/accelerator/braking actuators.

"Our Kalman filter combines the data provided by the GPS and IMU into a more accurate estimate of the vehicle's true position (gets us down to ~0.5 meters accuracy)," explained Anderson. "Note that because we use the laser to sense obstacles, the relative position of obstacles with respect to the vehicle is known with greater (~0.1 meter) precision. The controller identifies, evaluates, and selects one of the various path homotopies (or 'corridors') available in the environment, designs vehicle position constraints to bound it, combines those position constraints with known limits on the vehicle state and actuators (ie. steering limits, tire friction limits, etc.), and predicts an optimal escape trajectory. Basically, this trajectory tells us how close the vehicle will get to its limits if it is to remain within the safe corridor. We use this prediction to guide when, how, and how much the system intervenes."
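The sensor-fusion idea Anderson describes can be illustrated with a minimal one-dimensional Kalman filter: an acceleration-driven prediction (standing in for the IMU) is corrected by a noisy position fix (standing in for GPS), each weighted by its uncertainty. This is a sketch only; the variable names, noise values, and measurements below are illustrative assumptions, not the researchers' actual parameters or model.

```python
# Minimal 1-D sketch of GPS/IMU fusion via a Kalman filter.
# All numbers are illustrative, not taken from the MIT system.

def kalman_step(x, p, accel, gps_pos, dt=0.1,
                q=0.05,   # process noise variance (IMU drift), assumed
                r=2.0):   # GPS measurement noise variance, assumed
    """One predict/update cycle for position estimate x with variance p."""
    # Predict: integrate the IMU acceleration forward over dt.
    x_pred = x + 0.5 * accel * dt * dt
    p_pred = p + q
    # Update: correct with the GPS fix, weighted by the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (gps_pos - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0                            # initial guess and variance
for gps in [0.4, 0.9, 1.5, 2.2]:           # made-up noisy GPS fixes
    x, p = kalman_step(x, p, accel=1.0, gps_pos=gps)
# After a few cycles the estimate's variance drops below that of
# either the prior or the GPS measurement alone.
```

The same principle, extended to full vehicle state and properly tuned noise models, is what lets the fused estimate reach the ~0.5-meter accuracy Anderson quotes.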

Test drivers used a torque-enabled steering wheel and gas/brake pedals to navigate the vehicle over the obstacle course, occasionally receiving instructions from the researchers to head straight for an obstruction and let the system kick in and do its stuff. There were still a few collisions recorded, however.

"System failures that we've experienced to date reflect an experimental platform whose quirks we've identified and (believe we) know how to solve, but which we've largely relegated to later refinement," said Anderson. "In its current configuration and on a challenging obstacle course, the system reduces the occurrence of accidents by over 75 percent, while allowing the driver to decrease his/her course completion time by >30 percent. We believe we can reduce the collision rate to zero with the integration of a tactical-grade IMU (as opposed to the cheap one we're using currently). This will allow us to, for example, more accurately track and avoid obstacles that pass through the LIDAR's ~3 meter [9.8-foot] blind spot. Other changes to our obstacle detection approach (like simply lowering the LIDAR to reduce its blind spot) can also eliminate some of these failures."

Perhaps a manual override of some sort might be a good idea, so that drivers can take back complete control in the event of system failure. Interestingly, Anderson observed that test drivers who put complete faith in the system performed better than those who distrusted it. He also noted that drivers unaware the system is operating may attribute effective collision avoidance to their own good driving, which he acknowledged is not necessarily a good thing: novices in particular could build false confidence in their own weak abilities, leading to poor skill development.

Experts, too, may well find the system too controlling. Imagine a police officer unable to catch up with a fleeing suspect because the onboard system determines it unsafe to do so. To make the system more adaptable, the researchers have included tweaks to cater for different levels of driving experience.

"As written, our algorithm allows for adaptation to various levels of driver preference or performance," said Anderson. "For those who prefer smoother, safer rides at the expense of some control freedom, the system is more active. Those who need or prefer more freedom can dial back the level of intervention, reducing it to a late-stage backup that does not kick in until the very last minute."
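One way to picture this tunable intervention is as a threat-weighted blend of the driver's command and the controller's safe "escape" command. The sketch below is an illustration of that idea only, not the MIT algorithm: the function name, the blending law, and the `aggressiveness` knob are all assumptions made for this example.

```python
# Illustrative threat-scaled shared control (not the MIT controller).

def shared_control(driver_cmd, safe_cmd, threat, aggressiveness=1.0):
    """Blend driver and controller steering commands by perceived threat.

    threat:         0 = well inside the safe corridor, 1 = at its limits
    aggressiveness: 1 = intervene early and smoothly,
                    near 0 = last-minute backup only
    """
    # A smaller aggressiveness raises the exponent, so the controller's
    # share stays near zero until threat approaches 1 (late intervention).
    k = min(1.0, threat ** (1.0 / max(aggressiveness, 1e-6)))
    return (1 - k) * driver_cmd + k * safe_cmd

# At zero threat the driver's command passes through untouched;
# at maximum threat the controller's command takes over entirely.
hands_off = shared_control(1.0, -1.0, threat=0.0)
takeover = shared_control(1.0, -1.0, threat=1.0)
```

Dialing `aggressiveness` down reproduces the "late-stage backup" behavior Anderson describes: at moderate threat levels the output stays close to the driver's input, with the controller only dominating at the very last moment.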

They're also looking at the possibility of using the camera, accelerometer and gyro in a dash-mounted smartphone to provide the necessary feedback to the system.

The research was supported by the United States Army Research Office and the Defense Advanced Research Projects Agency. The experimental platform was developed in collaboration with Quantum Signal LLC with assistance from James Walker, Steven Peters and Sisir Karumanchi.

A paper entitled Constraint-Based Planning and Control for Safe, Semi-Autonomous Operation of Vehicles was presented at the Intelligent Vehicles Symposium in Spain last month.

Source: MIT

About the Author
Paul Ridden While Paul is loath to reveal his age, he will admit to cutting his IT teeth on a TRS-80 (although he won't say which version). An obsessive fascination with computer technology blossomed from hobby into career before the desire for sunnier climes saw him wave a fond farewell to his native Blighty in favor of Bordeaux, France. He's now a dedicated newshound pursuing the latest bleeding edge tech for Gizmag.

Either have the driver in complete control, or have complete autonomous function (whether full driving or just parking). Mixing the two will just lead to tragedy.


Right. If I had a nickel for every time a stability controller or lane-keeping system led to tragedy... wait...

I really like MIT's idea. Humans have proven they're good at decision making, yet bad under pressure. Computers are not as good at decision making, yet good under pressure. Combining the two seems like you could exploit some serious synergies.


Good idea for when it is human error causing the incident, when it is a fault with the car that caused it, I would query how much of a good idea it is to rely on the car to know how to react.


According to this US Department of Transportation study, 99% of the traffic crashes investigated were caused or contributed to by human error. For as much as people like to blame the vehicle (eg. the Toyota accelerator crisis), human error is almost always the source of the problem.


Observing an intersection in any major American city (and many large cities around the world) would prove that humans simply don't belong behind the wheel. The only reason traffic isn't completely anarchic is that people still have an instinct for self-preservation. Otherwise, most drivers break as many laws as they think they can get away with.


re; DaveG - People get used to not really driving while they're driving, and then get into a different car.


How will the system identify whether it's better to 1. hit the object (which suddenly ran onto the road) or 2. hit another car but avoid the object? And how will the system know whether the object is a child or a dog/chimp/pig, etc.?

Robert Silagadze

Have we considered the cost of the Velodyne LIDAR sensor? At about $75,000 a sensor, how many of you are willing to add one to your car? While I respect the research efforts, I think that academia must keep the biggest factor in mind when it comes to product improvements: affordability.


I've been saying for almost a decade that autonomous control is the way to go for cars if cars are to remain our primary vehicle for day to day travel. But I like driving. This idea is great if you ask me.


re; warren52nz

Fully autonomous cars will be great but encouraging drivers to give less of their attention to their driving while they are driving is a bad idea.


@Slowburn This isn't autonomous driving at all. It's accident or collision prevention. The driver never knows the system is even there until it prevents a collision or accident caused by driver error. When the driver goes outside the safe limits of operation (ie just before a crash), the system trims the steering or applies brakes to bring the vehicle back into safe operating limits.

As far as other comments suggesting this is too expensive to be in cars in the near future, MIT is very aware of that. They aren't submitting a Kickstarter project, they are developing technology and studying different methods of achieving goals. This isn't a Consumer Reports article, it's Gizmag.

Jay Lloyd

umm- Google's autonomous cars are already out, but same problem- the tech is too expensive. Legal in Nevada and California now, right? Red license plates with infinity symbols.

Funny- I trust the machines way more than a bunch of drunk driving or semi conscious, texting/cell phone talking yahoos.

I think it would be great if it caught you making too many mistakes, or ones bad enough, and would either take full command or pull over and call the cops.

Tyler Hall