AUTONOMOUS WEAPONS: The Open Letter From AI & Robotics Researchers


The Future of Life Institute has presented an open letter signed by over 1,000 robotics and artificial intelligence (AI) researchers urging the United Nations to impose a ban on the development of weaponized AI with the capability to target and kill without meaningful human intervention. The letter was presented at the 2015 International Joint Conference on Artificial Intelligence (IJCAI), and is backed by the endorsements of a number of prominent scientists and industry leaders, including Stephen Hawking, Elon Musk, Steve Wozniak, and Noam Chomsky.

To some, armed and autonomous AI may seem a fanciful concept confined to the realm of video games and sci-fi. However, the chilling warning contained within the newly released open letter insists that the technology will be readily available within years, not decades, and that action must be taken now if we are to prevent the birth of a new paradigm of modern warfare. Consider the implications. According to the open letter, many now consider weaponized AI to be the third revolution in modern warfare, after gunpowder and nuclear arms. For the previous two, however, there have always been powerful disincentives to use the technology.


For rifles to be used in the field, you need a soldier to wield the weapon, and this in turn means putting a soldier's life at risk. With the nuclear revolution, you had to consider the costly and difficult nature of acquiring the materials and expertise required to make a bomb, not to mention the monstrous loss of life and international condemnation that would inevitably follow the deployment of such a weapon, and the threat of mutually assured destruction (MAD). These deterrent factors have resulted in only two nuclear weapons being detonated in conflict over the course of the nuclear era to date.

The true danger of an AI war machine is that it lacks these bars to conflict. AI could replace the need to risk a soldier's life in the field, and its deployment would not bring down the ire of the international community in the same way as the launch of an ICBM. Furthermore, according to the open letter, armed AI drones with the capacity to hunt and kill people independent of human command would be cheap and relatively easy to mass-produce. The technology would have the overall effect of making a military incursion less costly and more appealing, essentially lowering the threshold for conflict. Finally, taking the kill decision out of the hands of a human being by its nature removes the element of human compassion, along with a reasoning process that, at least for the foreseeable future, no machine can match.


Another chilling aspect of weaponized AI that the letter highlights is the potential for such military equipment to make its way into the hands of despots and warlords who wouldn't think twice about deploying the machines as a tool to suppress discontent, or even to perform ethnic cleansing. (By Anthony Wood from ) The original text of the letter is reproduced below:


“Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms. Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people. 
Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons. In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control” –  Note: For more information about Future Of Life Institute, to sign this letter or to see the list of signatories, just follow the link below.