Killer robots – existential threat or path to security?

Featured Image Source: https://commons.wikimedia.org/wiki/File:The_Pentagon_US_Department_of_Defense_building.jpg

A tiny robot flies onto the stage and, as part of a demonstration, blows a lethal hole in the head of a dummy. The presenter then shows a mock video of the same robots taking out men who, he assures the audience, ‘were all bad guys’. He paints a picture of a future where these drones ‘cannot be stopped’ in their mission of justice, and where for just $25 million you can buy enough of them to take out half a city – the bad half. Within weeks, the technology falls into the hands of terrorists and rogue nations: eleven US senators are killed, and an artificial intelligence (AI) arms race begins, eventually leading to mass casualties across the globe as the ‘slaughterbots’ are wielded by terrorists and small fringe groups alike. At the end, a protagonist states that ‘the weapons took away the expense, risk, and danger of waging war, and now we can’t afford to challenge anyone’.

This might sound like a particularly pessimistic science fiction film, the product of the Hollywood that brought us The Purge and 28 Days Later, but it is in fact the plot of a video from Ban Lethal Autonomous Weapons, a group dedicated to preventing the future the video depicts, and part of the Campaign to Stop Killer Robots. The video has been viewed by nearly three million people and ends with the warning that while the potential of AI to benefit humanity is ‘enormous’, allowing machines to choose to kill humans would be ‘devastating to our security and freedom’.

When the term ‘killer robot’ or ‘slaughterbot’ is used, most people, myself included, probably first think of the killer robots of science fiction – the Terminator, for example. This image, however, is far from the truth. In reality, ‘lethal autonomous weapons’ – weapons systems that can detect, select, and eliminate a target without a human operator – are still a long way from being realised, let alone the complex machines seen in films such as Iron Man and I, Robot. Currently, the closest things to them are ‘fire and forget’ precision-guided missiles, which require no human intervention after launch but, crucially, still require a human to select the target and fire the weapon.

Source: https://www.flickr.com/photos/31029865@N06/14810867549

Fears surrounding AI have recently been exacerbated by the US’s first artificial intelligence strategy, which depicts its embrace of the technology in exclusively positive terms, calling it an ‘opportunity to improve support for and protection of U.S. service members, safeguard our citizens, defend our allies and partners, and improve the affordability and speed of our operations.’ Even here, however, the emphasis falls far more on tracking and fighting wildfires and performing maintenance on helicopters than on making Skynet, the AI system from the Terminator films, a reality. Far from setting out to create such a system, the strategy proposes using AI advances in far more mundane situations, with the potential to save a significant number of lives in events such as the wildfires recently seen in California.

Despite this, the activities of campaign groups have only intensified in recent months, with Human Rights Watch spokesperson Mary Wareham recently becoming the latest voice to call for a ‘pre-emptive ban’ on the technology. The Campaign to Stop Killer Robots, of which Human Rights Watch forms a part, is now pressing for an international treaty to prevent the creation of ‘killer robots’, claiming that ‘it is a moral step too far for AI systems to kill without any human intervention’. The campaign seems to be capturing the public mood: a poll across 26 countries found that 61% of people oppose the development of this technology, up 5% from 2016. The aim is to introduce an international treaty banning the development, production, and use of fully autonomous weapons before they can become a reality, much as landmines were banned back in 1999.

But the pertinent question remains: should lethal autonomous weapons, now popularly envisioned as ‘killer robots’, be banned before they become a reality, or should we wait and see what the potential benefits of the technology could be? The overwhelming popular and expert opinion seems to be that the risks are too severe: the danger of the technology falling into the hands of rogue states or groups, and the moral problem of allowing machines to make life-or-death decisions, far outweigh any potential benefits.

This is not, however, an opinion held by all. Tom Rogan, writing in the Washington Examiner, has suggested that autonomous weapons platforms can ‘expand US war-fighting capabilities and remove the need for human personnel to conduct certain operations’, which, he feels, makes them a ‘must-have’ for the 21st-century Pentagon. Those who follow this line of thinking tend to cite two major arguments: first, that the history of warfare has shown we must ‘grasp any opportunity’ to increase operational efficiency while reducing risk to combatants; and second, that there is a military imperative for developing lethal autonomous weapons, as the major rivals to the US and the West, China and Russia, are unlikely to yield such an advantage to activist pressure. This latter argument is compelling, as Russia is the country most overtly blocking progress towards an international treaty banning the technology. The Russian military leader General Gerasimov has also stated that ‘robots will be one of the main features of future wars’ and that Russia ‘seeks to completely automate the battlefield’. Crucially, the Russian weapons manufacturer Kalashnikov (most famous for the AK-47) has already begun to develop battle robots, which have reportedly been deployed in Syria.

These arguments do indeed make lethal autonomous weapons, the ‘killer robots’ of popular imagination, seem rather attractive. Nevertheless, the world, and especially those countries with the capability to credibly pursue such technology, faces a moral dilemma. For the US in particular, there seem to be two options. First, it could ignore activist pressure and moral qualms and pursue lethal autonomous weapons, attempting to pre-empt Russia’s development of the technology and retain its hegemony in military power. This would be in line with the ‘third offset strategy’ of technological advancement and innovation, the continuation of which has been called into question in recent years. Second, the US could heed activist pressure and popular opinion, and instead throw its considerable influence behind a treaty banning the technology. True, this would risk Russia outflanking the US in this particular area, but it would free up room and resources to pursue technologies, such as laser-based weapons, that could give the US advantages elsewhere. This approach would also have the considerable advantage of lending credence to the US’s claims to moral leadership, which have suffered credibility problems during the Trump administration. Soft power has always been vital to US hegemony, and such a move could help to preserve it for the future.

To me, the latter approach seems by far the more sensible. The potential benefits of AI are great, but utilised in the wrong way, the technology could bear out the words of the late Stephen Hawking, who famously warned that AI could be the ‘worst mistake ever’, one leading to the ‘end of the human race’. A comparison with nuclear power – its initial promise versus its eventual use – may be almost a cliché at this point, but it remains apt: a technology initially intended to bring world peace instead caused suffering that continues today, sparked an arms race that drained national budgets, and created constant fears that it might fall into the wrong hands. If something decisive isn’t done in the next few years, it seems all too possible that we will face a very similar situation with autonomous weapons. Only this time, the technology would be able to think for itself.
