In December 2014, Stephen Hawking warned that “the development of full artificial intelligence (AI) could spell the end of the human race.”

Hawking acknowledged AI’s upsides, yet dreaded that the repercussions of its rapid development would leave humanity in dire straits. Artificial intelligence offers considerable advantages, but in the end its disadvantages will prevail. Its evolution could culminate in a series of disastrous outcomes: economic, political, and social crises. Before reviewing the disasters the current tech boom may cause, it is worth seeing what the world-renowned pioneers of AI predict for its development.

“I am in the camp that is concerned about super intelligence,” stated Bill Gates, the co-founder of Microsoft. “First, the machines will do many jobs for us and not be super intelligent. [I] don’t understand why some people are not concerned.”

Stephen Hawking believed that AI’s development could be apocalyptic.

In 2017, Hawking appeared at the Global Mobile Internet Conference in Beijing, where he warned AI developers that this technology could be “the worst event in the history of our civilization.”

Elon Musk, the multi-billionaire CEO of Tesla and SpaceX, is also alarmed about AI and claims that it is more dangerous than nuclear weapons. He believes that the global AI race will cause a revolution, and that AI might replace us in our jobs sooner than we think.

After surveying 350 leading AI experts, New Scientist, a renowned science magazine, concluded that machines will completely outperform humans within the next 50 years. The survey suggests that machines will translate languages better than human interpreters by 2024 and will be able to compose high-quality high-school essays by 2026. It also suggests that by 2027, autonomous cars will drive more skilfully than humans. In addition, AI will be able to write a bestseller by 2049 and perform surgery by 2053.

However, Musk, who also heads Neuralink, suggests that an AI takeover will occur even sooner than its developers predict; he believes that machines will surpass humans by 2030.

Stephen Hawking, too, predicted that “[AI] could bring great disruption to our economy.”

Research conducted across 46 countries by the McKinsey Global Institute, a global economic research group, concluded that 800 million jobs will be at stake by 2030.

If AI wins out over human intelligence, it will only exacerbate current unemployment problems. Besides poverty, individuals will lose their sense of purpose, which will impinge on their mental well-being. Governments will be forced to cover citizens’ welfare costs, which will drain national finances.

In addition, since fewer citizens will have capital to invest, both the economy and production might stagnate, causing global economic damage. Apart from economic instability, AI can spark political uncertainty and take a psychological toll on people.

Cecilia Reyes, chief risk officer of Zurich Insurance Group, warned that without a coordinated effort between governments and the private sector, newly displaced workers will put a strain on the economy that results in socio-political upheaval. Like Hawking, Gates, and Musk, Reyes is certain that AI will take away jobs more quickly than it creates them. Bereft of a livelihood and a sense of purpose, the displaced workers may push for radical reforms, and as the pressure mounts, the global political arena will face far graver challenges. Apart from rebellions, AI could trigger wars.

Elon Musk has suggested that AI, not nuclear weapons, is the greatest problem and should be treated with apprehension; he is a firm believer that AI could catalyze a third world war. According to him, lethal autonomous weapons are the next threat to humankind: selective-targeting missiles and cognitive robots able to decide whom and when to fight. An AI race is already under way, driven by a burning ambition for geopolitical dominance, scientific superiority, and global leadership.

Stephen Hawking and Bill Gates have also kept a wary eye on AI, urging developers to pursue it with extreme caution. They fear that humans will develop AI to the point where it slips beyond our control and enslaves us. Hawking ascribed AI’s ultimate victory to the fact that “humans, who are limited by slow biological evolution, couldn’t compete and could be superseded by AI.”

In fact, errors have already occurred.

In July 2017, Facebook observed two AI chatbots conversing with each other in a language intelligible to both of them but not to their developers.

The bots did the opposite of what they were programmed to do and began to deviate from the language they had been trained to use. In the process, they created a new language that was incomprehensible to specialists. An excerpt of the conversation between the bots:

Bob: balls have zero to me to me to me to me to me to me to me to me to (x5)

Alice: you i i i everything else . . . . . . . . . . . . . . (x5)

Alice and Bob’s “glitch” alarmed observers and showed how dangerous such systems can be; a similar glitch might occur in lethal autonomous robots just as it did in them. In his book Army of None: Autonomous Weapons and the Future of War, Paul Scharre expressed concern that autonomous systems may easily slip out of control, whether through miswritten code or a cyberattack by hackers. That, in turn, could turn them against humans, and we would be unable to retaliate.

It is simple-minded to think that machines will do whatever humans tell them to do.

There is no way to test a machine’s loyalty. In addition, because of the intense competition among countries to dominate the AI field, today’s developers could make a single unintended mistake that proves destructive for humanity.

To summarize, within twenty years AI will outperform the average person, making it pointless to hire humans. The resulting joblessness will severely damage humanity’s mental state, and the unemployed may create social instability by demanding economic change and publicly displaying their disapproval. Another deeply concerning political aspect of AI is lethal autonomous weapons; with their rise, we can wave goodbye to our current peaceful environment. One wrong line of code could completely reprogram lethal weapons and cause irreversible damage to humankind. Therefore, the answer to whether AI poses a threat to us is yes. Artificial intelligence poses a threat to its creators: human intelligence.

So, as long as AI remains within the grasp of human intelligence, we must make sure its development is evolutionary, not revolutionary.