May 4, 2014
Hawking Concerned Advanced AI Could Spell The End Of Mankind
redOrbit Staff & Wire Reports - Your Universe Online
One of the greatest thinkers in the world believes that artificial intelligence could be “the worst thing to happen to humanity,” and that the scenario depicted in the recently-released Johnny Depp film Transcendence should not be simply dismissed as a work of science fiction.
Writing in Thursday’s edition of the British newspaper The Independent, internationally recognized theoretical physicist Stephen Hawking said that ignoring the deeper lessons of the movie – in which Depp’s character has his consciousness uploaded into a quantum computer, only to grow more powerful and become virtually omniscient – would be “a mistake, and potentially our worst mistake in history.”
Advancements in artificial intelligence, including driverless vehicles and digital assistants such as Siri and Cortana, are often viewed as ways to make life easier for mankind, explained Daily Mail reporter Ellie Zolfagharifard.
However, Hawking expressed concern that they could ultimately lead to our downfall unless we prepare for the potential risks – such as how to respond to technology that gains the ability to think independently and adapt to its environment.
“The potential benefits are huge; everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone's list,” Hawking wrote. “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.”
One prime concern is the development of autonomous weapons systems capable of selecting and eliminating targets – weapons that the UN and Human Rights Watch have proposed banning via treaty. Such weaponized machines could grow into something straight out of the Terminator movies: becoming self-aware, constantly improving their own design and essentially becoming unstoppable, noted SlashGear’s Nate Swanner.
“One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand,” said Hawking. “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
Hawking, research director at Cambridge University’s Department of Applied Mathematics and Theoretical Physics, also seems dubious about those who claim to be experts in artificial intelligence, according to CNET writer Chris Matyszczyk.
In fact, the professor likened our preparation for dealing with advanced AI to answering an invasion threat from a superior extraterrestrial civilization with “call us when you get here – we’ll leave the lights on.” Likewise, Matyszczyk observed that “it often seems as if the commitment to engineering supersedes any threat the end product might have to humanity.”
“Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues,” Hawking concluded. “All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.”