UK University To Study Technology’s Risk To Humanity

November 26, 2012

redOrbit Staff & Wire Reports – Your Universe Online

Machines rising up to enslave humanity and take over the world is a popular theme in science fiction, but a new academic center in the works at Cambridge University will be dedicated to studying whether these kinds of technology-related threats could actually materialize.

According to Huffington Post reporter Sylvia Hui, the project, which has been dubbed the Centre for the Study of Existential Risk (CSER), has been co-founded by Cambridge philosophy professor Huw Price, Cambridge professor of cosmology and astrophysics Martin Rees, and Skype co-founder Jaan Tallinn.

The Centre, which is scheduled to open sometime next year, will examine the “unchecked and unabated” advances of technology in recent decades, as computers and machines have spread globally and become essential to a vast array of facets of life, including economics, healthcare and communication, the university said in a statement on Sunday.

“While few would deny the benefits humanity has received as a result of its engineering genius — from longer life to global networks — some are starting to question whether the acceleration of human technologies will result in the survival of man… or if in fact this is the very thing that will end us,” they continued, adding that the CSER would be built in order to “address these cases — from developments in bio and nanotechnology to extreme climate change and even artificial intelligence — in which technology might pose ‘extinction-level’ risks to our species.”

“At some point, this century or next, we may well be facing one of the major shifts in human history — perhaps even cosmic history — when intelligence escapes the constraints of biology,” Price explained. “Nature didn’t anticipate us, and we in our turn shouldn’t take AGI [artificial general intelligence] for granted. We need to take seriously the possibility that there might be a ‘Pandora’s box’ moment with AGI that, if missed, could be disastrous.”

The philosophy professor said that his interest in AGI began after an encounter with Tallinn, who in recent years has become an advocate for education about the potential ethical and safety implications of technology. Price said that Tallinn had become convinced that he was more likely to die as a result of an artificial intelligence accident than from a disease such as cancer or heart disease, and that his arguments both intrigued and impressed him.

“In the case of artificial intelligence, it seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology,” he told Hui, adding that we would no longer be “the smartest things around” and could possibly find ourselves at the mercy of “machines that are not malicious, but machines whose interests don’t include us.”

“It tends to be regarded as a flakey concern, but given that we don’t know how serious the risks are, that we don’t know the time scale, dismissing the concerns is dangerous. What we’re trying to do is to push it forward in the respectable scientific community,” Price added.

