By Brian Wheeler, BBC Political reporter
30 November 2017
Rogue states and terrorists will get their hands on lethal artificial intelligence "in the very near future", a House of Lords committee has been told. Alvin Wilby, vice-president of research at French defence giant Thales, which supplies reconnaissance drones to the British Army, said the "genie is out of the bottle" with smart technology. And he raised the prospect of attacks by "swarms" of small drones that move around and select targets with only limited input from humans. "The technological challenge of scaling it up to swarms and things like that doesn't need any inventive step," he told the Lords Artificial Intelligence committee. "It's just a question of time and scale and I think that's an absolute certainty that we should worry about."
The US and Chinese military are testing swarming drones - dozens of cheap unmanned aircraft that can be used to overwhelm enemy targets or defend humans from attack. Noel Sharkey, emeritus professor of artificial intelligence and robotics at University of Sheffield, said he feared "very bad copies" of such weapons - without safeguards built-in to prevent indiscriminate killing - would fall into the hands of terrorist groups such as so-called Islamic State. This was as big a concern as "authoritarian dictators getting a hold of these, who won't be held back by their soldiers not wanting to kill the population," he told the Lords Artificial Intelligence committee. He said IS was already using drones as offensive weapons, although they were currently remote-controlled by human operators.
But the "arms race" in battlefield artificial intelligence meant smart drones and other systems that roamed around firing at will could soon be a reality. "I don't want to live in a world where war can happen in a few seconds accidentally and a lot of people die before anybody stops it", said Prof Sharkey, who is a spokesman for the Campaign to Stop Killer Robots. The only way to prevent this new arms race, he argued, was to "put new international restraints on it", something he was promoting at the United Nations as a member of the International Committee for Robot Arms Control. But Prof Wilby, whose company markets technology to combat drone attacks, said such a ban would be "misguided" and difficult to enforce.
He said there was already an international law of armed conflict, which was designed to ensure armed forces "use the minimum force necessary to achieve your objective, while creating the minimum risk of unintended consequences, civilian losses". The Lords committee, which is investigating the impact of artificial intelligence on business and society, was told that developments in AI were being driven by the private sector, in contrast to previous eras, when the military led the way in cutting edge technology. And this meant that it was more difficult to stop it falling into the wrong hands. Britain's armed forces do not use AI in offensive weapons, the committee was told, and the Ministry of Defence has said it has no intention of developing fully autonomous systems. But critics, such as Prof Sharkey, say the UK needs to spell out its commitment to banning AI weapons in law.
[Here I was thinking that such things are obvious – technology, especially that with military applications, will inevitably fall into the hands of bad guys and those who would do us harm. I mean, it’s not like it’s never happened before. Technology is the great leveller and the even greater enabler. Modern IT technology is even more of both. It is more easily dispersible (imagine for a moment how difficult it is to obtain the materials for a working nuke, now imagine how easy it would be to build your very own killer robot) and more easily usable by the average carrier of an AK-47 (no PhD required). The knowledge is already out there. The technology is already out there, either commercially available or stealable, and the know-how is available in universities across the world or (naturally) on-line. There is no going back. That Genie is out there and is having far too much fun to go back inside the bottle. A world-wide ban on such weapons would simply not work and is, frankly, already too late. Presently the West and other nation-states have the advantage given to all early adopters. No such advantage lasts forever. Incoming drone strikes and killer robot attacks are in our future. We need to know how to deal with them – now.]
2 comments:
These are fascinating and important issues. In terms of terrorism, I am not sure that these things will be more of a threat to life than old-fashioned guns, bombs or hijacking aircraft or land vehicles.
I agree that banning certain kinds of weapons will not work. In my opinion, the best we can do is a combination of security enhancements and diplomacy, as well as getting at the root causes of terrorism.
@ Brian: Technology is a force multiplier. Today a single, quickly trained person can destroy a tank with a shoulder-fired rocket. Tomorrow such a person with a drone, or who has programmed a 'killer robot', could kill hundreds of people. So I think this high-tech weaponry, which can be downloaded or built in someone's garage on weekends, is much more of a threat than 'old-fashioned guns'. It's something we're going to have to deal with and, to be honest, live with. The knowledge which produces these things (and much scarier things besides) is either already out there or soon will be. That's the core of the issue. You don't need a massive industrial infrastructure and thousands of people working in factories to produce the weapons of the future. You need the Internet, you need know-how and you need a bit of time and money. All of those things, even in the wilds beyond, in jungles and deserts across the world, are becoming more ubiquitous by the day. I think we are only beginning to see the extent of the 'interesting times' we all live in.