
Scientists try to teach robots morality

By Dario Borghino

May 13, 2014


Researchers are exploring how they might create robots endowed with their own sense of morality (Photo: Shutterstock)

A group of researchers from Tufts University, Brown University and the Rensselaer Polytechnic Institute is collaborating with the US Navy in a multi-year effort to explore how to create robots endowed with their own sense of morality. If the project succeeds, it would produce an artificial intelligence able to autonomously assess a difficult situation and then make complex ethical decisions that can override the rigid instructions it was given.

Seventy-two years ago, science fiction writer Isaac Asimov introduced "three laws of robotics" that could guide the moral compass of a highly advanced artificial intelligence. Sadly, given that today's most advanced AIs are still rather brittle and clueless about the world around them, one could argue that we are nowhere near building robots that are even able to understand these rules, let alone apply them.

A team of researchers led by Prof. Matthias Scheutz at Tufts University is tackling this very difficult problem by trying to break down human moral competence into its basic components, developing a framework for human moral reasoning. Later on, the team will attempt to model this framework in an algorithm that could be embedded in an artificial intelligence. The infrastructure would allow the robot to override its instructions in the face of new evidence, and justify its actions to the humans who control it.
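The researchers have not published an implementation, but one way to picture the kind of infrastructure described above is as a layer that sits between a standing instruction and its execution, able to veto the instruction in light of new evidence and offer a human-readable justification. The sketch below is purely illustrative; the class and field names are hypothetical and are not taken from the Tufts project.

```python
# Hypothetical "override and justify" layer; illustrative only, not the Tufts system.
from dataclasses import dataclass


@dataclass
class Decision:
    proceed: bool        # carry out the original instruction?
    justification: str   # explanation the robot can offer its human operators


class MoralCompetenceLayer:
    """Re-evaluates a standing instruction whenever new evidence arrives."""

    def evaluate(self, instruction: str, evidence: dict) -> Decision:
        # Illustrative rule: fresh evidence of a human in danger outweighs
        # the standing instruction, and the robot must say why it deviated.
        if evidence.get("human_in_danger"):
            return Decision(
                proceed=False,
                justification=f"Suspending '{instruction}': a person nearby needs immediate assistance.",
            )
        return Decision(proceed=True, justification=f"No overriding concern; continuing '{instruction}'.")


layer = MoralCompetenceLayer()
print(layer.evaluate("deliver medication to field hospital", {"human_in_danger": True}))
```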

"Moral competence can be roughly thought about as the ability to learn, reason with, act upon, and talk about the laws and societal conventions on which humans tend to agree," says Scheutz. "The question is whether machines – or any other artificial system, for that matter – can emulate and exercise these abilities."

For instance, a robot medic could be ordered to transport urgently needed medication to a nearby facility, and encounter a person in critical condition along the way. The robot's "moral compass" would allow it to assess the situation and autonomously decide whether it should stop and assist the person or carry on with its original mission.

If Asimov's novels have taught us anything, it's that no rigid, pre-programmed set of rules can account for every possible scenario, as something unforeseeable is bound to happen sooner or later. Scheutz and colleagues agree, and have devised a two-step process to tackle the problem.

In their vision, all of the robot's decisions would first go through a preliminary ethical check using a system similar to those in the most advanced question-answering AIs, such as IBM's Watson. If more help is needed, then the robot will rely on the system that Scheutz and colleagues are developing, which tries to model the complexity of human morality.
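The article gives no implementation details, but the two-step flow can be sketched as a cheap preliminary check that escalates to a slower, richer moral-reasoning model only when the situation is ambiguous. Everything below is an assumption made for illustration; the function names and the crude cost/benefit weighing do not come from the researchers.

```python
# Hypothetical two-step ethical check; all names and thresholds are illustrative.
from typing import Optional


def quick_ethical_check(action: str, situation: dict) -> Optional[bool]:
    """Fast first pass, loosely analogous to a question-answering system's lookup.

    Returns True (clearly acceptable), False (clearly not), or None (needs deeper reasoning).
    """
    if situation.get("violates_known_rule"):
        return False
    if not situation.get("ethically_ambiguous"):
        return True
    return None  # hand off to the slower moral-reasoning model


def deep_moral_reasoning(action: str, situation: dict) -> bool:
    """Stand-in for the richer model of human moral competence being developed."""
    # e.g. weigh the urgency of the original mission against the harm of
    # ignoring a person in critical condition encountered along the way.
    return situation.get("benefit_of_acting", 0) >= situation.get("cost_of_deviating", 0)


def decide(action: str, situation: dict) -> bool:
    verdict = quick_ethical_check(action, situation)
    return verdict if verdict is not None else deep_moral_reasoning(action, situation)


# The robot-medic scenario from above: stop to help, or carry on with the delivery?
print(decide("stop and assist", {"ethically_ambiguous": True,
                                 "benefit_of_acting": 2,
                                 "cost_of_deviating": 1}))
```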

As the project is being developed in collaboration with the US Navy, the technology could find its first application in medical robots designed to assist soldiers on the battlefield.

Source: Tufts University

About the Author
Dario Borghino studied software engineering at the Polytechnic University of Turin. When he isn't writing for Gizmag, he is usually traveling the world on a whim, working on an AI-guided automated trading system, or chasing his dream to become the next European thumbwrestling champion.
15 Comments

The basics of morality are one thing. But to teach the AI the finer points of making a decision that is morally ambiguous for the greater good is another.

How do you tell the child you have to kill the animal so that you may nourish yourself on its flesh?

How do you justify collateral damage to kill a terrorist on the grounds that it will potentially save many other innocents from being sacrificed, when you have no numbers for how many?

How does an AI live with making an incorrect decision?

Nairda
13th May, 2014 @ 08:56 am PDT

@Nairda is spot on here. Once we move past the basic fundamentals, even humans are far from being able to understand human morality. People largely believe morality is something their religion provides, yet people often pick and choose which aspects of their religion they wish to follow, and those things have changed over time.

The difference between self-defense and murder, the difference between good guy and bad guy, etc. are highly complex topics that even humans do not agree on.

Look at even a simple example like mainstream news channels such as Fox, CNN and MSNBC. Even thinking humans guided by journalistic integrity serve largely as direct mouthpieces for one of the two political parties of their choice. People don't even formulate objective views on events; their opinions are almost completely handed down to them by media outlets or their political affiliation.

Complex morality is more tribal than fair. I don't think we give being simplistic enough credit.

Daishi
13th May, 2014 @ 11:48 am PDT

I also want to point out that the 3 laws themselves may be too complex. They are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

I confess to not having read Asimov's work, but from the movie I, Robot I believe the robots came to the conclusion that the best way to protect humans from harm was to take over the world and remove humans' authority to do things like declare war.

I think the "3 laws", which were intentionally flawed upon creation by a sci-fi author to allow for a robotic uprising, should probably not be the guiding principles we actually use. I think to prevent the scenario in I, Robot the first law must be shortened to:

1. A robot may not injure a human being.

I think getting creative with this point would just be used as justification to allow "our" robots to kill bad guys because "we" are obviously the good guys. This of course loses sight of the point that to someone somewhere we are all "them".

Daishi
13th May, 2014 @ 01:29 pm PDT

"Three billion human lives ended on August 29, 1997. The survivors of the nuclear fire called the war Judgment Day. They lived only to face a new nightmare – the war against the Machines."

- The Terminator

Robot AIs with morality smack way too much of Skynet for my tastes. Morality varies from culture to culture and is so malleable that I think it is unworkable. Depending on whose morality is used, there is no real difference between the morality of a country fighting for survival and that of an AI fighting for its survival by defending itself with robot AIs if it thinks it will be turned off.

Mike Ryan
13th May, 2014 @ 04:28 pm PDT

As has been hinted at by other comments, morality is wholly indefinable. While there are some moral codes that are global, there are many more that break down along lines of division from the national level all the way down to the family level.

I believe that this is the wrong time to aim for teaching robots morality. The first thing the researchers need to accomplish is the manufacture of a technology that can definitively learn in the same fashion as a human, i.e. learn from its mistakes and be relatively limitless in this function. Once that is accomplished the researchers will then, reasonably, be able to concentrate on imbuing that technology with traits that are distinctly human.

Rt1583
13th May, 2014 @ 08:31 pm PDT

@Mike Ryan

That's the first thing I thought about. AI should be a managed, soulless tool that parses text and speech, crunches numbers, and suggests solutions.

Bringing in ethics is downright dangerous.

cattleherder
13th May, 2014 @ 10:53 pm PDT

I prefer my machines to simply follow orders, and the person that gives the orders is responsible for the machine's actions.

Slowburn
14th May, 2014 @ 02:33 am PDT

It's the perfect time to address this.

There is a non-profit Mormon organization that is currently working on the artificial morals and ethics that any decent AI should have and will need.

Funny, the comments on here... standard comments and views from people, people that all these movies have been trying to ready for the advent of AI. Fear seems to be the common denominator.

Really useful morals and ethics can be defined, and will be instilled in the AIs. I have no fear of them.

Personally I will welcome the closest thing to intelligent consciousness, besides ourselves, we have seen since crawling from the primordial ooze.

We will finally be able to achieve our destiny with their help.

badmadman.dontstop
14th May, 2014 @ 03:07 am PDT

I'd sooner see us put effort into instilling consistent morality in humans. First things first.

Loving It All
14th May, 2014 @ 08:27 am PDT

Please define what level of morality is being used here: is it morality from one's own immediate environment, or is it the morality of society as a whole? And who exactly will be in charge of the 'witch hunts' which will no doubt arise from any morality ruling? I may just move back to Alaska after all.

YukonJack
14th May, 2014 @ 09:10 am PDT

Well, this is just a BAD idea. If we teach them to understand ethics and they see how bad we are at following them, we're doomed. Can you say "Skynet"?

Dave Andrews
14th May, 2014 @ 01:01 pm PDT

Still can't parse linguistic meanings very well, and now you're addressing parsing non-linguistic situations? One step at a time, guys.

Walt Stawicki
14th May, 2014 @ 02:03 pm PDT

Please, please, do not judge Asimov or his Robot novels by that awful movie starring Will Smith. All the movie I, Robot had in common with the book was the title.

theotherwill
14th May, 2014 @ 05:36 pm PDT

"Morality" is nothing more than a framework of constraints. Heuristic "morality" is slightly more complex but still eminently programmable.

The real problem is that different cultures and religions have different frameworks of morality.

No one likes to have another's morality imposed on them. Some people think it is their moral duty to impose their morality on others. Any "moral appliance" that incorporates such an algorithm is doomed.

nutcase
14th May, 2014 @ 06:58 pm PDT

badmadman.dontstop

"Personally I will welcome the closest thing to intelligent consciousness, besides ourselves, we have seen since crawling from the primordial ooze."

I missed that one! When did you see intelligence or ANYTHING NEW "crawling from the primordial ooze???"

Ra'anan
2nd September, 2014 @ 03:04 am PDT