A team of scientists, philosophers and engineers will form the new Centre for the Study of Existential Risk (CSER) at the University of Cambridge in the United Kingdom. The team will study key developments in technology, assessing “extinction-level” threats to humanity. Key among these is the possibility of the creation of an artificial general intelligence, an event that has the theoretical potential to leave humanity behind forever.

A machine that exceeds human intelligence, with the ability to create its own computer programs and technologies, is referred to as an artificial general intelligence (AGI). The notion was originally proposed in 1965 by mathematician, cryptographer and computer scientist Irving John "Jack" Good, and it has frequently found a home in science fiction. Described by Huw Price as the moment when “intelligence escapes the constraints of biology,” the advent of an AGI would mark the point at which humanity ceases to be the most intelligent entity on the planet, and therefore would (potentially) no longer be the primary “future-defining” force.

Jaan Tallinn, the co-founder of communications cornerstone Skype, discusses the importance of this, stating that through the understanding and manipulation of technology, humanity has “grabbed the reins from 4 billion years of natural evolution ... [and has] by and large, replaced evolution as the dominant, future-shaping force.” You only have to look at our own impact on the planet to get some idea of what might happen if we were no longer the dominant global force.

Furthermore, the threat from an AGI isn't predicated on hostility. Price gives the example of the declining gorilla population. The threat of extinction is not born of human hostility, but of our manipulation of the environment in ways that suit us best, which actively, though unintentionally, works to the detriment of the gorillas' survival. As the more intelligent force, an AGI has the potential to create a similar dynamic between itself and humanity.

Although it's likely that we're still some way off from inventing this super-intelligent machine, recent research has shed some light on just how possible it might be. Last year, researchers at MIT carried out an experiment to gauge computers' ability to learn languages. In the test, two computers were asked to carry out two unfamiliar tasks with only the aid of the relevant instruction manual. Armed with just a set of possible actions and no prior understanding of the language of the instructions or the tasks themselves, one of the machines was able to complete its task of installing a piece of software with an 80 percent level of accuracy. The other learned to play the strategy game Sid Meier's Civilization II, winning an impressive 79 percent of the games it played.

This is just one example of numerous studies concerning artificial intelligence, with universities such as Stanford carrying out related research programs. In a world where the power of computing chips roughly doubles every two years and virtually everything is controlled by technology, it's likely we'll see research programs such as these accelerate significantly.

Even with this evidence, the notion of an AGI might still seem a little too much like science fiction to be taken seriously. The team at CSER believes that this is part of the point, stating, “To the extent – presently poorly understood – that there are significant risks, it's an additional danger if they remain for these sociological reasons outside the scope of 'serious' investigation.”

The researchers won't be focusing solely on preventing a Skynet-esque disaster, but will also look at a number of other cases that pose threats to humanity, including developments in bio- and nanotechnology and the effects of extreme climate change.

They might not be The Avengers, but perhaps the future will feel that little bit safer now that the CSER has got our back.

Source: University of Cambridge