We have made some amazing strides in tech over the past few decades, including huge advancements in robotics and artificial intelligence. Before we take these advancements too far, however, some of the world’s smartest people would like us to be really, really cautious before it’s too late.

An open letter, signed by over 1,000 people from the worlds of tech, space travel, computing, and mathematics, will be presented tomorrow in Buenos Aires at the International Joint Conference on Artificial Intelligence. The letter calls for a ban on offensive autonomous weapons, with the aim of stopping an AI arms race before it starts.

Signatories include Stephen Hawking, Elon Musk, Noam Chomsky, Steve Wozniak, and many more. The concern driving the initiative is that humans will one day hand their killing over entirely to machines, and that those weapons will inevitably end up in the wrong hands. Unfortunately for Musk and co., such machines may already exist, and the line in the sand the letter draws may already have been crossed. Militaries are constantly looking for new weapons and for ways to keep soldiers off the battlefield, so it wouldn't be surprising if this tech were already in use.

The Future of Life Institute, which helped coordinate the letter, is dedicated to developing “optimistic visions of the future,” but maybe that future has already slipped away.

Check out the letter in its entirety below:

“Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”