War has recurred throughout human history and, chances are, it will continue to do so. With this in mind, it’s important to ensure that when it does occur it is conducted as humanely as possible, which is why treaties such as the Geneva Conventions exist. Violating certain parts of these treaties, for example by using chemical or biological weapons, constitutes a war crime. With recent developments in artificial intelligence, a new version of the convention may be required. There have been two major revolutions in warfare so far, gunpowder and nuclear weapons, and the use of artificial intelligence is seen by many as the third. In an open letter to the United Nations, more than 100 leading robotics experts, including Elon Musk, Stephen Hawking, and the founder of Google’s DeepMind, have called for a ban on the use of AI in managing weapons systems. I spoke to Peter Clark, founder of Resurgo Genetics and an expert in machine learning…
- The letter aims to trigger a debate about having international legislation for AI weapons systems, much in the same way that we have for nuclear or chemical weapons.
- Current drones require a pilot (even one thousands of miles away) and therefore retain an element of human moral and ethical judgement, which makes them very different from a fully autonomous weapons system.
- One possible example of this technology could be a swarm of mini drones carrying small packets of explosives that could target individuals in a population.
- Techniques currently used to profile people’s online behaviour could easily be applied to such weapons systems to identify and eliminate people who oppose a particular ideology.
- The technologies being discussed are all available and could be assembled now into a system with catastrophic global consequences, which is why this letter is so important.
You can listen to the full interview with the Naked Scientists here.