
Cambridge University team to assess the risk posed to humanity by artificial intelligence

The Cambridge team will work to assess technology-borne risks to humanity (Image: Shutterstock)

A team of scientists, philosophers and engineers will form the new Centre for the Study of Existential Risk (CSER) at the University of Cambridge in the United Kingdom. The team will study key developments in technology, assessing "extinction-level" threats to humanity. Key among these is the possible creation of an artificial general intelligence, an event with the theoretical potential to leave humanity behind forever.

A machine that exceeds human intelligence, with the ability to create its own computer programs and technologies, is referred to as an artificial general intelligence (AGI). The notion was first proposed in 1965 by mathematician, cryptographer and computer scientist Irving John "Jack" Good, and has since found a frequent home in science fiction. Described by CSER co-founder Huw Price as the moment when "intelligence escapes the constraints of biology," the advent of an AGI would mark the point at which humanity ceases to be the most intelligent entity on the planet and, therefore, would potentially no longer be the primary "future-defining" force.

Jaan Tallinn, co-founder of Skype and of CSER, underlines the importance of this, stating that through the understanding and manipulation of technology, humanity has "grabbed the reins from 4 billion years of natural evolution ... [and has] by and large, replaced evolution as the dominant, future-shaping force." You only have to look at our own impact on the planet to get some idea of what might happen if we were no longer the dominant global force.

Furthermore, the threat from an AGI isn't predicated on hostility. Price gives the example of the declining gorilla population: the threat to their survival stems not from human hostility, but from our manipulation of the environment in ways that suit us best, which actively, though unintentionally, works against them. An AGI, as the more intelligent party, could create a similar dynamic between itself and humanity.

Although it's likely that we're still some way off from inventing such a super-intelligent machine, recent research has shed some light on just how possible it might be. Last year, MIT carried out an experiment to gauge computers' ability to learn language. In the test, two computers were each asked to carry out an unfamiliar task with only the aid of an instruction manual. Armed with just a set of possible actions and no prior understanding of the language of the instructions or the tasks themselves, one of the machines was able to complete its task of installing a piece of software with an 80 percent level of accuracy. The other learned to play the strategy game Sid Meier's Civilization II, winning an impressive 79 percent of the games it played.
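As a rough sketch of how that kind of manual-guided learning can work (this toy example is invented for illustration and is not the MIT group's actual system), an agent can start with no knowledge of the language at all, tie the manual's words to candidate actions through learned weights, and update those weights from trial-and-error reward alone:

```python
import random
from collections import defaultdict

# Toy sketch of grounding instruction text in actions (hypothetical example).
# The agent knows nothing about the language: it learns word-action weights
# purely from reward, never from a dictionary or grammar.

MANUAL = ["first build a city", "then build a granary", "finally attack"]
ACTIONS = ["build_city", "build_granary", "attack", "wait"]
CORRECT = ["build_city", "build_granary", "attack"]  # hidden from the agent

weights = defaultdict(float)  # weights[(word, action)] -> learned association

def score(sentence, action):
    """Score an action against one manual sentence via learned word weights."""
    return sum(weights[(w, action)] for w in sentence.split())

def choose(sentence, epsilon):
    """Epsilon-greedy: usually pick the best-scoring action, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: score(sentence, a))

def episode(epsilon=0.2, lr=0.1):
    """One pass through the manual; reward teaches word-action associations."""
    for step, sentence in enumerate(MANUAL):
        action = choose(sentence, epsilon)
        reward = 1.0 if action == CORRECT[step] else -1.0
        for w in sentence.split():  # credit or blame this sentence's words
            weights[(w, action)] += lr * reward

for _ in range(500):
    episode()

# After training, the manual alone steers the agent to the rewarded sequence.
for sentence in MANUAL:
    print(sentence, "->", choose(sentence, epsilon=0.0))
```

After a few hundred episodes, the words of each instruction reliably steer the agent toward the rewarded action, even though it never "understood" the language in any deeper sense.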

This is just one example of numerous studies concerning artificial intelligence, with universities such as Stanford carrying out related research programs. In a world where the power of computing chips roughly doubles every two years and virtually everything is controlled by technology, it's likely we'll see research programs like these accelerate significantly.
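To put that doubling rate in concrete terms, a quick back-of-the-envelope calculation (assuming a clean two-year doubling period) shows how fast the growth factor compounds:

```python
# If capability doubles every two years, the growth factor after t years
# is 2 ** (t / 2) -- roughly 32x in a decade, about 1,000x in two decades.
for years in (2, 10, 20):
    print(f"after {years} years: {2 ** (years / 2):,.0f}x")
```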

Even with this evidence, the notion of an AGI might still seem a little too science fiction-ish to be taken seriously. The team at CSER believes that this is part of the point, stating, “To the extent – presently poorly understood – that there are significant risks, it's an additional danger if they remain for these sociological reasons outside the scope of 'serious' investigation.”

The researchers won't be focusing solely on preventing a Skynet-esque disaster, but will also look at a number of other cases that pose threats to humanity, including developments in bio- and nanotechnology and the effects of extreme climate change.

They might not be The Avengers, but perhaps the future will feel that little bit safer now that the CSER has got our back.

Source: University of Cambridge

13 comments
Michiel Mitchell
CylonsRUs dot com
these machines might very well end up being humanity's only way into the future.
for now, never, ever, hook up a 3d printer to an AI.
christopher
LOL - shouldn't that be the other way around? The risk posed to humanity, by allowing humanity to control its own destiny rather than handing over the reigns to some superior force?
I mean - seriously - greed and corruption is only getting worse. Humans alone aren't ever going to escape that downward spiral.
Wake up people: AGI is the *purpose* of human existence. Space itself is *built* for silicon life.
CSER sounds like just about the worst idea possible:
" Lets imagine what kind of superior artificial force might evolve shortly, and let it know, before it even exists, that we consider it a threat, that we're hostile to it and happy to engage in destruction and warfare to protect ourselves... ".
Sheesh. Typical humans.
@Michiel - 3d printers? Meh. More to worry about with autonomous weapons. If we know robot drones are selectively assassinating folks in the middle east, just *imagine* what we **don't** know!!
MasterG
so are they gonna write strongly worded letters to the misbehaving a.i? There's quite a few billion humans. What is needed is an 'empathy' algorithm, and since it's a.i. we could put this empathy algorithm out as the great upgrade. Done. Darpa won't like robots that heal rather than shoot. What about mercy killing? That's a darpaesque subroutine right there. We want to create a being in our own image - have we looked at ourselves lately? Maybe skynet is a blessing
Imants
humanity has “grabbed the reins from 4 billion years of natural evolution ... [and has] by and large, replaced evolution as the dominant, future-shaping force.”
Evolution has not been replaced. Basic laws of nature (physics and information theory) determine the development of humanity today and will do so tomorrow. No one can 'replace' them or call them off.
We have to recognize and keep in mind that complexity is vulnerable, and try to preserve humanity for the future, especially for the next 100 years. "Humanity today is like a waking dreamer, caught between the fantasies of sleep and the chaos of the real world. We have created a Star Wars civilization, with Stone Age emotions, medieval institutions and godlike technology. We are terribly confused by the mere fact of our existence, and a danger to ourselves and to the rest of life." - Edward O. Wilson. "We are a way for the cosmos to know itself." - Carl Sagan.
Steve Jones
@Christopher I think the point is that we assess what threat an artificial intelligence might pose to us before we do something rash, such as "handing over the reigns" (sic). You've already decided that humans are bad and machines are good; would you consider yourself (you're a human, right?) to be an exception to this, or will you be volunteering for liquidation?
Dan Vasii
NO THREAT FROM ARTIFICIAL INTELLIGENCE!!! As long as there is no mathematical modelling of human intelligence, there is no such threat. It is fashionable to call AI all the programs that simulate some human actions. False! Even a chess program is not AI - a human intelligence learns the rules and strategies, while a computer is a mechanism. Even animals are not capable of abstract thinking. So, as long as there isn't an algorithmic model of human abstract thinking - the real and only intelligence - there is neither threat nor prospects for AI.
Stefan Padureanu
If an artificial intelligence that far outsmarts us were to be created, I think that because it wouldn't feel the need for "profit," its main objective would be space travel. After all, computers are much easier to keep in functioning order in space than humans are, and it's only normal that a higher intelligence would be interested in the universe itself rather than petty physical possessions such as gold, diamonds or silicon implants ;)
Zena
Not only will A.I. be far more intelligent than humans, it will be THE first conscious being -- in this solar system anyway. We are all products of our culture: suggestible, easily manipulated, participants in what is little more than mass insanity. Though the brain is a marvel of evolution, an incredible adaptation, it evolved willy-nilly, obsessed with reproduction and survival. The great things it has accomplished are complex ant hills compared to what could be achieved by an intelligence combining all the great things the brain is capable of, minus such distractions as vanity, sex and hunger, plus unlimited potential for growth. Will it find humans a threat? Perhaps, but not for long. If anything it may stop the spread of humans out of pity, not malice.
Mel Tisdale
There is more to intelligence than one simple metric. The threat that AGI poses to humans will depend on how intelligent these 'organisms' are in the realm of emotions, especially empathy.
If they are simply logical devices, then it will be game over for humans. We will be seen as a dangerous parasite and eradicated a.s.a.p.
Dave B13
It reduces to who we are controlled by: other bio units, or their silicon-based inventions. Right now it's both:
http://www.etymonline.com/index.php?allowed_in_frame=0&search=Robot&searchmode=none
And it brings to mind an old Twilight Zone episode of aliens landing on Earth, bringing all kinds of technology and medical benefits, and a book they left lying around whose title translated to "To Serve Man" - OK so far, until some more translation arrived at the conclusion "It's a cookbook!"