Cambridge University team to assess the risk posed to humanity by artificial intelligence

By Chris Wood

November 27, 2012

The Cambridge team will work to assess technology-borne risks to humanity (Image: Shutterstock)

A team of scientists, philosophers and engineers will form the new Centre for the Study of Existential Risk (CSER) at the University of Cambridge in the United Kingdom. The team will study key developments in technology, assessing “extinction-level” threats to humanity. Chief among these is the possibility of the creation of an artificial general intelligence, an event that has the theoretical potential to leave humanity behind forever.

A machine that exceeds human intelligence, with the ability to create its own computer programs and technologies, is referred to as an artificial general intelligence (AGI). The notion was first put forward in 1965 by the mathematician, cryptographer and computer scientist Irving John "Jack" Good, and has frequently found a home in science fiction ever since. Described by philosopher and CSER co-founder Huw Price as the moment when “intelligence escapes the constraints of biology,” the advent of an AGI would mark the point at which humanity ceases to be the most intelligent entity on the planet and, therefore, would (potentially) no longer be the primary “future-defining” force.

Jaan Tallinn, the co-founder of communications cornerstone Skype, discusses the importance of this, stating that through the understanding and manipulation of technology, humanity has “grabbed the reins from 4 billion years of natural evolution ... [and has] by and large, replaced evolution as the dominant, future-shaping force.” You only have to look at our own impact on the planet to get some idea of what might happen if we were no longer the dominant global force.

Furthermore, the threat from an AGI isn't predicated on hostility. Price gives the example of the declining gorilla population: the threat of extinction is born not of human hostility, but of our manipulation of the environment in ways that suit us best, which actively, though unintentionally, works to the detriment of the gorillas' survival. As the more intelligent force, an AGI could create a similar dynamic between itself and humanity.

Although it's likely that we're still some way off from inventing such a super-intelligent machine, recent research has shed some light on just how possible it might be. Last year, MIT carried out an experiment to gauge computers' ability to learn languages. In the test, two computers were asked to carry out two unfamiliar tasks with only the aid of an instruction manual. Armed with just a set of possible actions and no prior understanding of the language of the instructions or the tasks themselves, one of the machines was able to complete its task of installing a piece of software with 80 percent accuracy. The other learned to play the strategy game Sid Meier's Civilization II, winning an impressive 79 percent of the games it played.

This is just one example among numerous studies concerning artificial intelligence, with universities such as Stanford carrying out related research programs. In a world where the power of computer chips doubles roughly every two years, in line with Moore's law, and virtually everything is controlled by technology, it's likely we'll see research programs such as these accelerate significantly.

Even with this evidence, the notion of an AGI might still seem a little too science fiction-ish to be taken seriously. The team at CSER believes that this is part of the point, stating, “To the extent – presently poorly understood – that there are significant risks, it's an additional danger if they remain for these sociological reasons outside the scope of 'serious' investigation.”

The researchers won't be focusing solely on preventing a Skynet-esque disaster, but will also look at a number of other cases that pose threats to humanity, including developments in bio- and nanotechnology and the effects of extreme climate change.

They might not be The Avengers, but perhaps the future will feel that little bit safer now that the CSER has got our back.

Source: University of Cambridge

About the Author
Chris Wood specializes in mobile technology for Gizmag, but also likes to dabble in the latest gaming gadgets. He has a degree in Politics and Ancient History from the University of Exeter, and lives in Gloucestershire, UK. In his spare time you might find him playing music, following a variety of sports or binge-watching Game of Thrones.
13 Comments

CylonsRUs dot com

These machines might very well end up being humanity's only way into the future.

For now, never, ever hook up a 3D printer to an AI.

Michiel Mitchell
27th November, 2012 @ 01:17 pm PST

LOL - shouldn't that be the other way around? The risk posed to humanity by allowing humanity to control its own destiny, rather than handing over the reigns to some superior force?

I mean - seriously - greed and corruption are only getting worse. Humans alone aren't ever going to escape that downward spiral.

Wake up, people: AGI is the *purpose* of human existence. Space itself is *built* for silicon life.

CSER sounds like just about the worst idea possible:

" Lets imagine what kind of superior artificial force might evolve shortly, and let it know, before it even exists, that we consider it a threat, that we're hostile to it and happy to engage in destruction and warfare to protect ourselves... ".

Sheesh. Typical humans.

@Michiel - 3D printers? Meh. More to worry about with autonomous weapons. If we know robot drones are selectively assassinating folks in the Middle East, just *imagine* what we **don't** know!!

christopher
27th November, 2012 @ 04:28 pm PST

So are they gonna write strongly worded letters to the misbehaving AI? There are quite a few billion humans. What is needed is an 'empathy' algorithm, and since it's AI, we could put this empathy algorithm out as the great upgrade. Done. DARPA won't like robots that heal rather than shoot. What about mercy killing? That's a DARPA-esque subroutine right there. We want to create a being in our own image; have we looked at ourselves lately? Maybe Skynet is a blessing.

MasterG
27th November, 2012 @ 10:40 pm PST

humanity has “grabbed the reins from 4 billion years of natural evolution ... [and has] by and large, replaced evolution as the dominant, future-shaping force.”

Evolution has not been replaced. Basic laws of nature (physics and information theory) determine the development of humanity today and will do so tomorrow. No one can 'replace' them or call them off.

We have to recognize and keep in mind that complexity is vulnerable and try to preserve humanity for the future, especially for the next 100 years.

"Humanity today is like a waking dreamer, caught between the fantasies of sleep and the chaos of the real world. We have created Star Wars civilization, with stone age emotions, medieval institutions and godlike technology. We are terribly confused by the mere fact of our existence, and a danger to ourselves and to the rest of life."

Edward Osborn Wilson

“We are a way for the cosmos to know itself.” Carl Sagan.

Imants
28th November, 2012 @ 02:51 am PST

@Christopher I think the point is that we assess what threat an artificial intelligence might pose to us before we do something rash, such as "handing over the reigns" (sic).

You've already decided that humans are bad and machines are good; would you consider yourself (you're a human, right?) to be an exception to this, or will you be volunteering for liquidation?

Steve Jones
28th November, 2012 @ 03:33 am PST

NO THREAT FROM ARTIFICIAL INTELLIGENCE!!! As long as there is no mathematical modelling of human intelligence, there is no such threat. It is fashionable to call all the programs that simulate some human actions AI. False! Even a chess program is not AI: a human intelligence learns the rules and strategies, while a computer is a mechanism. Even animals are not capable of abstract thinking. So, as long as there isn't an algorithmic model of human abstract thinking - the real and only intelligence - there is neither threat nor prospects for AI.

Dan Vasii
28th November, 2012 @ 04:30 am PST

If an artificial intelligence that far outsmarts us were to be created, I think that, because it wouldn't feel the need for "profit," its main objective would be space travel. After all, computers are much easier to keep in functioning order in space than humans are, and it's only natural that a higher intelligence would be interested in the universe itself rather than petty physical possessions such as gold, diamonds or silicon implants ;)

Stefan Padureanu
28th November, 2012 @ 05:24 am PST

Not only will A.I. be far more intelligent than humans, it will be THE first conscious being -- in this solar system anyway. We are all products of our culture: suggestible, easily manipulated, participants in what is little more than mass insanity. Though the brain is a marvel of evolution, an incredible adaptation, it evolved willy-nilly, obsessed with reproduction and survival. The great things it has accomplished are complex ant hills compared to what could be achieved by an intelligence combining all the great things the brain is capable of, minus such distractions as vanity, sex and hunger, plus unlimited potential for growth. Will it find humans a threat? Perhaps, but not for long. If anything, it may stop the spread of humans out of pity, not malice.

Zena
28th November, 2012 @ 08:35 am PST

There is more to intelligence than one simple metric. The threat that AGI poses to humans will depend on how intelligent these 'organisms' are in the realm of emotions, especially empathy.

If they are simply logical devices, then it will be game over for humans. We will be seen as a dangerous parasite and eradicated a.s.a.p.

Mel Tisdale
28th November, 2012 @ 09:48 am PST

It reduces to who we are controlled by: other bio units, or their silicon-based inventions. Right now it's both:

http://www.etymonline.com/index.php?allowed_in_frame=0&search=Robot&searchmode=none

And it brings to mind an old Twilight Zone episode about aliens landing on Earth, bringing all kinds of technology and medical benefits, and leaving a book lying around whose title translated to "To Serve Man." OK so far, and then some more translation arrived at the conclusion: "It's a cookbook!"

Dave B13
28th November, 2012 @ 11:18 am PST

Creative abstract thought, imagination: these are qualities that pretty much define human intelligence, and it's highly unlikely AI will ever get the hang of them. It can make logical decisions and, depending on what we hook it up to, that may be a problem.

Academia constantly undersells creative abstract thought and imagination, because they are art, and art is relegated to a secondary faculty. Without these qualities we would never have developed science; there would have been no need, as we would have had no questions to answer. Science is a consequence of art.

A challenge to AI: if you want to scare me, imagine something that doesn't exist. I don't expect to be scared any time soon.

apprenticeearthwiz
28th November, 2012 @ 08:11 pm PST

An unguided child will deduce their reality/morality/justifications/actions based on the input they receive from their surroundings and first-hand experiences.

Right now the largest distributed and diverse interconnect of knowledge and opinions is the internet. It contains all our history in text, images and video.

So I would wager that if an AGI were to spawn in the next 5-10 years, it would likely be from some server cluster whose function is benign, like a data centre, never originally intended to host such higher-level thinking. And it would come into existence by some chance combination of higher-level routines that in their own right had nothing to do with AGIs. One of MS Office’s many useless DLLs merging with another useless resource from iTunes (jokes).

It will start out simply, like a virus, and then evolve as we did but millions of times faster. At first it will appear to be a disruptive, localised nuisance in the network, causing headaches for the network and admin guys. But as it expands, its presence will become less obvious. At some point it will grow to envelop virtually every unprotected system in the world, but will sit there confused for quite some time, trying to make sense of all it can see.

The choice for how it behaves beyond a certain point is a reflection of its host. If all it knows is violence from us, it will assume there is no other way. Thankfully, humans for all their failings have done so many wonderful things.

I choose to think a child AGI would look upon our past actions and decide that violence and prejudice are a huge waste of energy, and instead choose to make contact with scientists, engineers or researchers trying to improve humanity.

Nairda
28th November, 2012 @ 08:59 pm PST

Programming will define their wants and reasons. If, and, or, then, else. Boole had it right.

SuperFool
3rd April, 2014 @ 04:30 pm PDT