Are killer robots really going to take over the world? The answer is maybe

“The development of full artificial intelligence could spell the end of the human race,” the late physicist Stephen Hawking once warned. And he wasn’t alone in his concerns. Bill Gates, Elon Musk and Steve Wozniak, along with some of the most prominent researchers in the field, have expressed their unease at the prospect of unchecked AI development. So what are they worried about? And if they’re scared, does that mean we should be too?


Let’s start with the least likely scenario and work our way towards the most likely:

Sentient, human-like androids


Anyone who’s watched HBO’s Westworld or the classic Blade Runner knows that making a robot think and feel like a human is a very bad idea. A “super” AI of any other kind won’t *necessarily* want to eliminate us unless we get in its way, but give the full range of human emotions to what is essentially a superhuman we’ve created in order to exploit, and you’ve given it every reason any of us would have to go on a killing spree. Fortunately, this scenario is the least likely: while we have managed to create some convincingly human AI, we are a long way from programming human emotion, let alone creating something with a true sense of “self”.


Intelligence “explosion”


A slightly more likely scenario is one in which an AI learns how to improve or replicate itself, resulting in exponential self-improvement or replication and an enormous, uncontrollable advantage over humans (referred to as an “intelligence explosion”). One example of this in popular culture is Skynet, the AI that serves as the primary antagonist of the Terminator franchise. The actions of a superintelligent AI would not necessarily be detrimental to the human race; that depends on the core drives of its original code. It is possible, however, that it would come to see us as a threat to those drives, or to itself (just as Skynet does).
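
To get a feel for why “explosion” is the right word, consider a toy model (a purely hypothetical sketch; the 10% gain per cycle and the capability units below are invented for illustration). Once improvement compounds, the climb from human level to a thousand times human level takes barely longer than the climb to human level did:

```python
# Toy model of an "intelligence explosion" (purely illustrative:
# the growth rate and the "capability" units are made-up numbers).
HUMAN_BASELINE = 100.0   # arbitrary units of capability
GAIN_PER_CYCLE = 1.10    # each self-improvement cycle adds 10%

capability = 1.0         # the AI starts far below human level
cycle = 0
while capability < 1000 * HUMAN_BASELINE:   # stop at 1000x human level
    capability *= GAIN_PER_CYCLE
    cycle += 1
    # Report the single cycle on which the human baseline is crossed.
    if capability >= HUMAN_BASELINE > capability / GAIN_PER_CYCLE:
        print(f"cycle {cycle}: human level reached")

print(f"cycle {cycle}: {capability / HUMAN_BASELINE:,.0f}x human level")
```

In this run it takes 49 cycles to reach human level and only about 70 more to leave it a thousand times behind; it is the compounding, not any single breakthrough, that would make the scenario so hard to react to.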


This is still not a likely scenario for the near future; however, this possibility was the primary cause of Stephen Hawking’s concern about the future of AI. The physicist maintained that if researchers begin to consider and pre-empt it in their development of AI now, the threat will be significantly lower.


Lethal autonomous weapons


If you have seen the episode of Black Mirror featuring that terrifying robot dog, you know exactly what a takeover by weaponised robots looks like. And if that episode didn’t frighten you at all, maybe this will: fully autonomous weapons have already been used, with lethal force, in Israel and South Korea to take out “threats” on their borders. The US and Russia have both been developing their own weaponised autonomous drones and vehicles. And despite widespread public support for a ban on these weapons, changes to international law that would enact one have been blocked by all of the above countries, as well as Australia.


Weaponising AI goes against Asimov’s well-known “first law of robotics”: a robot may not injure a human being. Without this safeguard, we are vulnerable to a number of scenarios that don’t require robot superintelligence at all. Depending on how available and how easily manipulated these weaponised robots are, someone could deliberately program them to seek out and kill anyone and everyone. Alternatively, and more likely, an automated weapon could fail to distinguish enemy forces from its own side and attack indiscriminately. Finally, an error could corrupt or erase what were otherwise specific instructions about who to attack, with much the same result.
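
As a deliberately simplified sketch of that last failure mode (hypothetical code invented for this post, not any real weapons system), consider a naive targeting rule that quietly falls back to “engage anything that isn’t friendly” when its target list is lost:

```python
# Hypothetical targeting logic, invented purely to illustrate the
# failure mode above. Not based on any real system.
FRIENDLY_IDS = {"unit_7", "unit_12"}   # the weapon's own side

def should_engage(contact_id: str, valid_targets: set[str]) -> bool:
    """Naive rule: never engage friendlies, otherwise engage listed targets."""
    if contact_id in FRIENDLY_IDS:
        return False
    if not valid_targets:
        # Bug: if the target list is corrupted or erased, this branch
        # silently degrades into attacking anything that isn't friendly.
        return True
    return contact_id in valid_targets

print(should_engage("hostile_3", {"hostile_3"}))   # True: the intended target
print(should_engage("civilian_1", {"hostile_3"}))  # False: correctly ignored
print(should_engage("civilian_1", set()))          # True: directions lost, attacks anyway
```

A safer design would fail closed, refusing to engage anything when its instructions go missing; the worry is about systems that default the other way.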


This scenario, or at least one in which autonomous weapons are responsible for mass loss of human life, is unfortunately not as unlikely as the ones above, and will hopefully be prevented by an international ban on this kind of AI.


Economic takeover


Robots are potentially going to be responsible for a large part of the economy, with one widely cited Oxford study estimating that 47% of US jobs have a high probability of being automated in the coming decades. Some people are concerned this will lead to widespread unemployment and poverty, even more extreme wealth inequality between regions and countries, and an overall imbalance in global power due to unequal access to the technology. Others are more hopeful that automation will result in a shorter work week and higher living standards.


Of all the above scenarios, this is the most likely to actually happen. Fortunately, we don’t need to feel so hopeless about this one – I, for one, am going to stop using the self-serve checkouts at Woolworths so much. It’s the little things that count, right?

