Elon Musk: Artificial Intelligence Potentially More Dangerous To Humans Than Nuclear Weapons

redOrbit Staff & Wire Reports – Your Universe Online
SpaceX founder and Tesla Motors CEO Elon Musk has serious concerns about the safety of artificial intelligence, taking to social media to warn that AI could pose a greater threat to humanity than nuclear weapons.
As reported by Alyssa Newcomb of ABC News, Musk took to Twitter over the weekend to post that people need to be “super careful” with AI, which he described as “potentially more dangerous than nukes.”
He made the comments after reading (and recommending) the book “Superintelligence” by Swedish philosopher and Oxford professor Nick Bostrom, explained VentureBeat’s Tom Cheredar. The book, which explores what will happen if and when machines become more intelligent than humans, will be released in the US on September 1.
According to Rob Wile of Business Insider, in a blurb about the book, Bostrom’s colleague Martin Rees of Cambridge University said that “those disposed to dismiss an ‘AI takeover’ as science fiction may think again after reading this original and well-argued book.”
Wile added that Bostrom was asked at a recent conference whether people should be scared of new technology. He responded “Yes,” but added that humanity had to be “scared about the right things. There are huge existential threats, these are threats to the very survival of life on Earth, from machine intelligence – not the way it is today, but if we achieve this sort of super-intelligence in the future.”
Musk appears to agree with the author’s assessment, although it is interesting to note that in March he invested in California-based AI group Vicarious – a firm that hopes to design a “computer that thinks like a person… except it doesn’t have to eat or sleep,” co-founder Scott Phoenix said, according to Ellie Zolfagharifard of the UK newspaper the Daily Mail.
“I think there is potentially a dangerous outcome there,” Musk said previously in an interview with CNBC, according to Zolfagharifard. “There have been movies about this, you know, like Terminator. There are some scary outcomes. And we should try to make sure the outcomes are good, not bad.”
The Daily Mail writer added that Vicarious is currently trying to develop a program that mimics the brain’s neocortex – the outer layer of the cerebral hemispheres in mammals. The neocortex is approximately three millimeters thick and consists of six layers, which together support functions including sensory perception, spatial reasoning, conscious thought, and, in humans, language.
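Vicarious has not published the details of its neocortex-inspired software, so any concrete code can only gesture at the general idea. Purely as an illustration of what a layered hierarchy loosely echoing the cortex’s six-layer stack might look like computationally, here is a minimal Python sketch in which each level re-encodes the output of the level below it; every name, size, and transform in it is a hypothetical placeholder, not Vicarious’s design.

```python
# Illustrative sketch only: Vicarious's actual architecture is unpublished.
# This shows the generic idea of a six-level feed-forward hierarchy, loosely
# analogous to stacked cortical layers, where each level re-encodes the
# representation produced by the level beneath it.
import numpy as np

rng = np.random.default_rng(0)

class Layer:
    """One level of the hierarchy: a fixed random linear map plus a
    nonlinearity, standing in for a learned feature transform."""
    def __init__(self, n_in: int, n_out: int):
        self.w = rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)

    def forward(self, x: np.ndarray) -> np.ndarray:
        # ReLU keeps only the "active" features at this level
        return np.maximum(self.w @ x, 0.0)

def build_hierarchy(sizes):
    """Chain layers so each one consumes the output of the one below."""
    return [Layer(a, b) for a, b in zip(sizes, sizes[1:])]

# Six stacked levels, echoing the neocortex's six anatomical layers;
# the sizes are arbitrary placeholders.
hierarchy = build_hierarchy([64, 48, 32, 24, 16, 8, 4])

x = rng.standard_normal(64)   # a stand-in "sensory" input vector
for layer in hierarchy:
    x = layer.forward(x)      # progressively more abstract encoding
print(x)                      # top-level representation
```

The point of the sketch is only the structural analogy: information entering at the bottom is transformed into progressively more abstract representations as it rises through the stack.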
Back in May, internationally recognized theoretical physicist Stephen Hawking expressed similar concerns after viewing the Johnny Depp film Transcendence, in which Depp’s character has his consciousness uploaded into a quantum computer, only to grow more powerful and become virtually omniscient.
Hawking said the movie should not be dismissed as science fiction, and that ignoring the story’s deeper lessons would be “a mistake, and potentially our worst mistake in history.” While advances in AI such as driverless cars and digital assistants are often viewed as beneficial to mankind, Hawking warned that they could ultimately lead to our downfall unless we prepare for the potential risks posed by independently thinking technology.
“The potential benefits are huge; everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone’s list,” he wrote in a column for the British newspaper The Independent. “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.”