Killer robots a step too far, say AI researchers

War has occurred throughout human history and, chances are, it will continue to do so. With this in mind, it’s important to ensure that when war does occur it is conducted as humanely as possible, which is why treaties such as the Geneva Convention exist. Violating certain parts of this treaty, for example by using chemical or biological weapons, constitutes a war crime. With recent developments in artificial intelligence, a new version of the convention may be required. There have been two major revolutions in warfare so far, gunpowder and nuclear weapons, and many see artificial intelligence as the third. In an open letter to the United Nations, more than 100 leading robotics experts, including Elon Musk, Stephen Hawking, and the founder of Google’s DeepMind, have called for a ban on the use of AI in managing weapons systems. I spoke to Peter Clark, founder of Resurgo Genetics and an expert in machine learning…

  • The letter aims to trigger a debate about international legislation for AI weapons systems, much like the legislation that already exists for nuclear and chemical weapons.
  • Current drones require a pilot (even if thousands of miles away) and therefore still retain an element of human moral and ethical judgement, which makes them very different from fully autonomous weapons systems.
  • One example of such technology would be a swarm of mini drones, each carrying a small explosive charge, that could target individuals within a population.
  • Techniques currently used to profile people’s online behaviour could easily be applied to such weapons systems to identify and eliminate people who oppose a particular ideology.
  • The technologies being discussed all exist today and could already be combined into a system with potentially catastrophic global consequences, which is why this letter is so important.

You can listen to the full interview for the Naked Scientists here.

Are Internet Filters Really Keeping Children Safe?

Internet filters that screen what can be accessed over the web are becoming commonplace in people’s homes. They block access to online content that might be unsuitable, such as pages containing blacklisted keywords, as well as games and videos. The aim is to protect children from exposure to inappropriate material, but a recent study suggests that these filters are not very effective. They may lull us into a false sense of security and could even have a negative effect by ‘over-blocking’ useful content. I spoke to Oxford University’s Victoria Nash…

  • The data did not suggest any strong connection between internet filters being installed at home and 12–15-year-olds’ exposure to ‘nasty’ experiences online.
  • Parents were asked whether they had filters installed at home, and the children were asked which, if any, of seven different types of “adverse experiences online” they had encountered.
  • A statistical analysis revealed no change in the likelihood that a child had a negative online experience when an internet filter was installed (a minimal sketch of this kind of analysis appears after this list).
  • 8% of children had been contacted by somebody they didn’t know who wanted to be their friend; this was more common for girls than boys.
  • Slightly less common experiences included seeing sexual content, being cheated out of money, and feeling under pressure to share information.
  • One possible explanation for the findings is that much of children’s internet use takes place outside the home, so a filter can only ever control part of their exposure.
  • Ultimately, Victoria believes that we need to do more to educate children and parents about safe use of the internet, and to help ‘build resilience’ in children so that they know how to stay safe if they encounter risky material or experiences online.
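As a rough illustration of the kind of statistical analysis described above, here is a minimal sketch in Python. It is not the study’s actual code or data: the simulated responses, the variable names, and the choice of a logistic regression are all assumptions made purely for illustration.

```python
# Minimal sketch of testing for an association between having a home internet
# filter and a child reporting an adverse online experience. All data below
# are simulated; the study's real dataset and modelling choices may differ.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(seed=42)
n = 1000  # hypothetical number of parent-child pairs surveyed

# 1 = filter installed at home, 0 = no filter (hypothetical parent responses).
filter_installed = rng.integers(0, 2, size=n)

# 1 = child reported at least one adverse experience (hypothetical responses),
# generated independently of the filter to mirror the study's null finding.
adverse_experience = rng.binomial(n=1, p=0.15, size=n)

# Logistic regression of adverse experience on filter presence.
X = sm.add_constant(filter_installed)
result = sm.Logit(adverse_experience, X).fit(disp=False)
print(result.summary())
# If the filter makes no difference, the coefficient on the filter term (x1)
# should be close to zero, with a confidence interval that spans zero.
```

In the printed summary, the filter term’s coefficient and p-value are what a finding of “no change in likelihood” refers to: a coefficient indistinguishable from zero means households with and without filters reported adverse experiences at similar rates.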

You can listen to the full interview for the Naked Scientists here.
