
Monday, August 28, 2017


Is 'killer robot' warfare closer than we think?

By Mark Smith for BBC News

25 August 2017

More than 100 of the world's top robotics experts wrote a letter to the United Nations recently calling for a ban on the development of "killer robots" and warning of a new arms race. But are their fears really justified?

Entire regiments of unmanned tanks; drones that can spot an insurgent in a crowd of civilians; and weapons controlled by computerised "brains" that learn like we do, are all among the "smart" tech being unleashed by an arms industry many believe is now entering a "third revolution in warfare".

"In every sphere of the battlefield - in the air, on the sea, under the sea or on the land - the military around the world are now demonstrating prototype autonomous weapons," says Toby Walsh, professor of artificial intelligence at the University of New South Wales in Sydney.

"New technologies like deep learning are helping drive this revolution. The tech space is clearly leading the charge, and the military is playing catch-up."

One reported breakthrough giving opponents of killer machines sleepless nights is Kalashnikov's "neural net" combat module. It features a 7.62mm machine gun and a camera attached to a computer system that its makers claim can make its own targeting judgements without any human control. According to Russia's state-run Tass news agency, it uses "neural network technologies that enable it to identify targets and make decisions".

Unlike a conventional computer, which uses pre-programmed instructions to tackle a specific but limited range of predictable possibilities, a neural network is designed to learn from previous examples and then adapt to circumstances it may not have encountered before. And it is this supposed ability to make its own decisions that is worrying to many.
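[To make the article's distinction concrete, here is a toy sketch in Python - entirely my own invention, nothing like any real weapons system. The first function is "pre-programmed instructions": a human anticipated every case in advance. The second is a single artificial neuron, the simplest building block of a neural network, which learns its rule from labelled examples instead. All the features, numbers and thresholds below are made up for illustration.]

# Pre-programmed instructions: a human enumerated the cases in advance.
def rule_based_is_threat(speed_mps, size_m):
    return speed_mps > 200 and size_m > 5

# A single artificial neuron ("perceptron") instead learns its rule
# from labelled examples. All training data here is invented.
training = [  # (speed m/s, size m) -> 1 = threat, 0 = not a threat
    ((300, 10), 1), ((250, 8), 1), ((280, 12), 1),
    ((30, 2), 0), ((50, 1), 0), ((20, 3), 0),
]

def features(speed_mps, size_m):
    return speed_mps / 400, size_m / 20    # scale inputs to roughly 0..1

w1, w2, bias = 0.0, 0.0, 0.0
for _ in range(100):                       # classic perceptron updates
    for (speed, size), label in training:
        x1, x2 = features(speed, size)
        guess = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
        error = label - guess              # -1, 0 or +1
        w1 += 0.1 * error * x1             # nudge the weights toward
        w2 += 0.1 * error * x2             # the labelled answer
        bias += 0.1 * error

def learned_is_threat(speed_mps, size_m):
    x1, x2 = features(speed_mps, size_m)
    return w1 * x1 + w2 * x2 + bias > 0

# The learned rule still answers for inputs unlike anything it was
# trained on - a confident "best guess", right or wrong.
print(learned_is_threat(120, 40))

[The point of the toy: nobody ever wrote the learned rule down, so the only way to know what the neuron will do with an unfamiliar input is to try it.]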

"If weapons are using neural networks and advanced artificial intelligence then we wouldn't necessarily know the basis on which they made the decision to attack - and that's very dangerous," says Andrew Nanson, chief technology officer at defence specialist Ultra Electronics. But he remains sceptical about some of the claims arms manufacturers are making. Automated defence systems can already make decisions based on an analysis of a threat - the shape, size, speed and trajectory of an incoming missile, for example - and choose an appropriate response much faster than humans can. But what happens when such systems encounter something they have no experience of, but are still given the freedom to act using a "best guess" approach? Mistakes could be disastrous - the killing of innocent civilians; the destruction of non-military targets; "friendly fire" attacks on your own side.

And this is what many experts fear - not that AI will become too smart, taking over the world like the Skynet supercomputer from the Terminator films, but that it's too stupid.

"The current problems are not with super-intelligent robots but with pretty dumb ones that cannot flexibly discriminate between civilian targets and military targets except in very narrowly contained settings," says Noel Sharkey, professor of artificial intelligence and robotics at Sheffield University.

Despite such concerns, Kalashnikov's latest products are not the only autonomous and semi-autonomous weapons being trialled in Russia. The Uran-9 is an unmanned ground combat vehicle that features a machine gun and a 30mm cannon, and can be remotely controlled at distances of up to 10km. And the diminutive Platform-M combat robot boasts automated targeting and can operate in extremes of heat and cold.

Meanwhile, the Armata T-14 "super tank" has an autonomous turret that designer Andrei Terlikov claims will pave the way for fully autonomous tanks on the battlefield. Manufacturer Uralvagonzavod did not respond to BBC requests for an interview, but Prof Sharkey - a member of the pressure group the Campaign to Stop Killer Robots - is wary of its potential.

"The T-14 is years ahead of the West, and the idea of thousands of autonomous T-14s sitting on the border with Europe does not bear thinking about," he says.

And it's not just Russia developing such weapons.

Last summer, the US Defense Advanced Research Projects Agency (Darpa) equipped an ordinary surveillance drone with advanced AI designed to discern between civilians and insurgents during a test over a replica Middle Eastern village in Massachusetts. And Samsung's SGR-A1 sentry gun, capable of firing autonomously, has been deployed along the South Korean side of the Korean Demilitarised Zone.

The UK's Taranis drone - roughly the size of a Red Arrows Hawk jet - is being developed by BAE Systems. It is designed to carry a myriad of weapons long distances and will have "elements" of full autonomy, BAE says. At sea, the USA's Sea Hunter autonomous warship is designed to operate for extended periods without a single crew member, and even to guide itself in and out of port.

All the Western arms manufacturers contacted by the BBC, including Boeing's Phantom Works, Northrop Grumman, Raytheon, BAE Systems, Lockheed Martin and General Dynamics, refused to co-operate with this feature - an indication, perhaps, of the controversial nature of this technology.

But could autonomous military technology also be used simply as support for human military operations? Roland Sonnenberg, head of defence at consultancy firm PricewaterhouseCoopers, says combat simulation, logistics, threat analysis and back-office functions are the more mundane - but equally important - aspects of warfare that robots and AI could perform.

"The benefits that AI has to offer are only useful if they can be applied effectively in the real world and will only be broadly adopted if companies, consumers and society trust the technology and take a responsible approach," he says.

And some argue that autonomous weapons could actually reduce the number of human casualties. But Elizabeth Quintana, senior research fellow at the Royal United Services Institute for Defence and Security Studies, disagrees. "Deploying robotic systems might be more attractive to politicians because there would be fewer body bags coming home. My view is that war is an inherently human activity and that if you wage war from a distance at another group or country, they will find a way to hurt you at home because that is the only way that they can retaliate." The prospect of autonomous weapons systems inadvertently leading to an escalation in domestic terrorism or cyber-warfare is perhaps another reason to treat this new tech with caution.

[I know I keep banging on about it and I'm probably boring some of you, but it needs to be said. Nations around the globe are actively building machines whose express design and mission is to kill human beings. Presently humans are still (largely) in the loop, but that's going to go the first time machine meets machine and the autonomous one beats the human-controlled one to the draw. Future wars between technologically advanced nations will be humans and machines against other humans and machines. Whether or not humans are ever completely removed from the battlefield is unknown at this point. It does raise the question of what exactly remains, though. If machines are fighting machines in some godforsaken desert somewhere, is that really warfare? Without people involved and soldiers dying on either side, what exactly is happening? That's also leaving aside the probability that advanced countries will be fighting less advanced ones who don't have robots. Their robots will almost exclusively be killing humans, with zero risk to the country using them. Again, is this actually war, or something more akin to slaughter or even, taken far enough, genocide? Are we sleepwalking into James Cameron's nightmare?]

2 comments:

Mudpuddle said...

it's inevitable, i believe... and it has to do with the way human brains operate in combination with their tribal instincts... just evolution at work, imo...

Brian Joseph said...

Like most things involving war, this is both interesting and disturbing.

In the short term, I am not really sure this will change the basic nature of war all that much. The issues related to these machines are remarkably similar to issues that have been intertwined with warfare for centuries past.

Until what Nick Bostrom calls "Hard Artificial Intelligence" comes - that is, true thinking machines - I tend to think things will go on as they have. When hard artificial intelligence comes, however, everything might change.