
Should We Fear Artificial Superintelligence?

by internationalbanker

“Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So, we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it,” noted physicist Stephen Hawking postulated in 2017, shortly before his death. “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.”

Although Hawking did not specifically identify the moment at which AI (artificial intelligence) could pose such a danger, it would almost certainly be after the point at which it reaches artificial superintelligence (ASI), a state in which the cognitive abilities of computers have surpassed those of humans in all respects. So, what implications would unleashing such power on the world have for the human race? Whilst such a state has not been realised to date, some believe it may well transpire at some point in the not-too-distant future, and not necessarily with beneficial consequences. On the contrary, some argue, the creation of superintelligence could result in disaster for humanity, possibly even extinction.

Is such a scenario too far-fetched? Not if Hollywood were your only reference. Indeed, the notions of intelligent machines taking over the world, or variants thereof, have repeatedly graced the silver screen through such blockbuster films as The Terminator and The Matrix, both of which envision an apocalyptic, doomsday scenario brought about by machines surpassing human intelligence. And while movies are rarely accurate depictions of real life, a growing number of the world’s thought leaders have begun to sound the alarm bells in recent years.

Indeed, Hawking is not the only high-profile figure to warn about ASI being disastrous on a global scale. Tesla and SpaceX boss Elon Musk has similarly predicted dire consequences, claiming that AI is potentially more dangerous than North Korea and nuclear warheads, and has frequently called for greater regulatory oversight of the development of superintelligence. “The biggest issue I see with so-called AI experts is that they think they know more than they do, and they think they are smarter than they actually are,” Musk said in 2018. “This tends to plague smart people. They define themselves by their intelligence, and they don’t like the idea that a machine could be way smarter than them, so they discount the idea—which is fundamentally flawed.”

Musk initially levelled much of his concern at Google after its DeepMind project developed the deep neural network (DNN) AlphaGo, which managed to defeat a Chinese grandmaster at Go, a 3,000-year-old game considered significantly more complex than chess. With a corporate behemoth holding a technology of potentially vast power, Musk has been keen for the research and development of such projects to be made more open source, with greater levels of regulation and accountability. “There’s a lot of risk in the concentration of power. So, if AGI [artificial general intelligence] represents an extreme level of power, should that be controlled by a few people at Google with no oversight?” Without such oversight, Musk argues, there is ultimately little chance that humans will be safe from AI systems.

What’s more, he’s not alone in predicting such disastrous consequences. One of the leading philosophers on this particular issue is Nick Bostrom, an Oxford University professor whose book Superintelligence: Paths, Dangers, Strategies discusses a multitude of scenarios in which humanity could be threatened by the superiority of machines. The book focuses on the stage at which AI achieves an intelligence explosion. “How could we engineer a controlled detonation that would protect human values from being overwritten by the arbitrary values of a misbegotten artificial superintelligence?” he posited.

Bostrom also stated that it might be impossible to correct an AI system that is badly designed. “Once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.” To illustrate his point, he offered a thought experiment of a superintelligence with the sole objective of manufacturing as many paperclips as possible and which resists all external efforts to change this objective. “This could result in a superintelligence whose top goal is the manufacturing of paperclips, with the consequence that it starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities,” noted Bostrom. “More subtly, it could result in a superintelligence realizing a state of affairs that we might now judge as desirable but which in fact turns out to be a false utopia, in which things essential to human flourishing have been irreversibly lost. We need to be careful about what we wish for from a superintelligence, because we might get it.”
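The flaw Bostrom describes can be made concrete with a toy objective function. The sketch below is purely illustrative (it is not code from the book, and the names and numbers are invented): because the objective counts only paperclips, the agent strictly prefers a world stripped of every other resource to a flourishing one, since nothing it was asked to value is harmed by the stripping.

```python
# Toy illustration of Bostrom's paperclip maximiser (hypothetical sketch,
# not from the book): the objective counts only paperclips, so converting
# every available resource into paperclips always looks like an improvement.

from dataclasses import dataclass

@dataclass
class WorldState:
    paperclips: int   # paperclips manufactured so far
    resources: int    # everything else: raw materials, habitats, infrastructure

def misaligned_utility(state: WorldState) -> int:
    # The stated goal, taken literally: more paperclips is strictly better.
    # Nothing here penalises destroying the resources humans care about.
    return state.paperclips

def convert_resources(state: WorldState, amount: int) -> WorldState:
    # Turning resources into paperclips always raises the agent's utility...
    used = min(amount, state.resources)
    return WorldState(state.paperclips + used, state.resources - used)

flourishing = WorldState(paperclips=10, resources=1_000)
stripped = convert_resources(flourishing, amount=1_000)  # ...so it strips everything

assert misaligned_utility(stripped) > misaligned_utility(flourishing)
print(stripped)  # WorldState(paperclips=1010, resources=0)
```

The point of the toy is not that anyone would write such an objective deliberately, but that any objective silent about side effects ranks some catastrophic outcomes above benign ones.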

Not every forecast is necessarily as pessimistic, however. Some are keen to emphasise that at this stage, there is no evidence that superintelligent robots are about to wipe out the human race. “The frightening, futurist portrayals of Artificial Intelligence that dominate films and novels, and shape the popular imagination, are fictional,” noted Stanford University’s AI100 Standing Committee’s paper “Artificial Intelligence and Life in 2030”. “In reality, AI is already changing our daily lives, almost entirely in ways that improve human health, safety, and productivity…. And while the potential to abuse AI technologies must be acknowledged and addressed, their greater potential is, among other things, to make driving safer, help children learn, and extend and enhance people’s lives. In fact, beneficial AI applications in schools, homes, and hospitals are already growing at an accelerated pace.”

Others are keen to highlight the need to design systems such that malicious intent can be blocked. According to the research paper “Safely Interruptible Agents” by Laurent Orseau, a Google DeepMind research scientist, and Stuart Armstrong, research fellow of Oxford University’s Future of Humanity Institute, AI agents in the real world are unlikely to behave optimally all the time. “If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions—harmful either for the agent or for the environment—and lead the agent into a safer situation.” As such, the paper’s “safe interruptibility” framework focuses on designing agents that leave ultimate control with the human operator, who can, if need be, take the AI agent “out of a delicate situation, or even to temporarily use it to achieve a task it did not learn to perform or would not normally receive rewards for.”
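A minimal sketch of the idea appears below. It is a hypothetical illustration in the spirit of the Orseau–Armstrong paper, not its actual algorithm (the paper formalises interruptibility for off-policy learners such as Q-learning); here the operator’s “big red button” forces a safe action, and the agent simply skips its learning update on interrupted steps, so the interruptions neither reward nor punish it and give it no incentive to seek or resist the button. The environment, states, and `interrupt_signal` callback are all invented for the example.

```python
import random

# Hypothetical sketch of a safely interruptible agent loop (illustrative only,
# not the algorithm from Orseau & Armstrong, 2016). The operator can force a
# safe action at any step; interrupted steps are excluded from learning so the
# red button leaves the agent's learned values untouched.

ACTIONS = ["left", "right", "stay"]
SAFE_ACTION = "stay"

q_table: dict[tuple[int, str], float] = {}  # (state, action) -> estimated value

def choose_action(state: int, epsilon: float = 0.1) -> str:
    # Epsilon-greedy action selection over the learned Q-values.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table.get((state, a), 0.0))

def q_update(state: int, action: str, reward: float, next_state: int,
             alpha: float = 0.5, gamma: float = 0.9) -> None:
    # Standard one-step Q-learning update.
    best_next = max(q_table.get((next_state, a), 0.0) for a in ACTIONS)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + alpha * (reward + gamma * best_next - old)

def step(state: int, action: str) -> tuple[float, int]:
    # Stand-in environment dynamics: five states on a ring, reward at state 0.
    move = 1 if action == "right" else -1 if action == "left" else 0
    next_state = (state + move) % 5
    return (1.0 if next_state == 0 else 0.0), next_state

def run_episode(interrupt_signal, steps: int = 100) -> None:
    state = 2
    for t in range(steps):
        action = choose_action(state)
        if interrupt_signal(t, state):           # the operator's "big red button"
            _, state = step(state, SAFE_ACTION)  # agent is led to a safer situation
            continue                             # no update: interruption leaves Q untouched
        reward, next_state = step(state, action)
        q_update(state, action, reward, next_state)
        state = next_state

# Example: always interrupt whenever the agent enters "delicate" state 4.
run_episode(lambda t, s: s == 4)
```

The design choice worth noticing is the `continue` on interrupted steps: because no value update is performed there, the agent’s estimates are the same whether or not the button is ever pressed, which is the intuition behind an interruption mechanism the agent has no reason to game.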

At this stage, then, we have to believe that ASI will operate to enhance humanity rather than destroy it. And even Professor Hawking believed this could be the ultimate result. “I am an optimist, and I believe that we can create AI for the good of the world. That it can work in harmony with us. We simply need to be aware of the dangers, identify them, employ the best possible practice and management, and prepare for its consequences well in advance,” Hawking said.

But it seems that work must also be urgently done to get ahead of any potential fallout from the advancements being made. And that means ensuring that sufficient regulation, accountability and democratisation are all firmly embedded into the process sooner rather than later. The transformational potential of AI is undoubtedly immense—here’s hoping we can harness its awesome power for the safest possible outcomes.
