Automated Killer Robots ‘Threat to Humanity’: Expert
by Agence France Presse
Wednesday, February 27, 2008
Increasingly autonomous, gun-toting robots developed for warfare could easily fall into the hands of terrorists and may one day unleash a robot arms race, a top expert on artificial intelligence told AFP. “They pose a threat to humanity,” said University of Sheffield professor Noel Sharkey ahead of a keynote address Wednesday before Britain’s Royal United Services Institute.
Intelligent machines deployed on battlefields around the world — from mobile grenade launchers to rocket-firing drones — can already identify and lock onto targets without human help. There are more than 4,000 US military robots on the ground in Iraq, as well as unmanned aircraft that have clocked hundreds of thousands of flight hours. The first three armed combat robots fitted with large-caliber machine guns, manufactured by US arms maker Foster-Miller, were deployed to Iraq last summer and proved so successful that 80 more are on order, said Sharkey. But up to now, a human hand has always been required to push the button or pull the trigger. If we are not careful, he said, that could change.
Military leaders “are quite clear that they want autonomous robots as soon as possible, because they are more cost-effective and give a risk-free war,” he said. Several countries, led by the United States, have already invested heavily in robot warriors developed for use on the battlefield. South Korea and Israel both deploy armed robot border guards, while China, India, Russia and Britain have all increased the use of military robots. Washington plans to spend four billion dollars by 2010 on unmanned technology systems, with total spending expected to rise to 24 billion, according to the Department of Defense’s Unmanned Systems Roadmap 2007-2032, released in December.
James Canton, an expert on technology innovation and CEO of the Institute for Global Futures, predicts the deployment within a decade of detachments comprising 150 soldiers and 2,000 robots. The use of such devices by terrorists should be a serious concern, said Sharkey. Captured robots would not be difficult to reverse engineer, and could easily replace suicide bombers as the weapon of choice. “I don’t know why that has not happened already,” he said. But even more worrisome, he continued, is the subtle progression from the semi-autonomous military robots deployed today to fully independent killing machines. “I have worked in artificial intelligence for decades, and the idea of a robot making decisions about human termination terrifies me,” Sharkey said.
Ronald Arkin of Georgia Institute of Technology, who has worked closely with the US military on robotics, agrees that the shift towards autonomy will be gradual. But he is not convinced that robots don’t have a place on the front line. “Robotics systems may have the potential to out-perform humans from a perspective of the laws of war and the rules of engagement,” he told a conference on technology in warfare at Stanford University last month. The sensors of intelligent machines, he argued, may ultimately be better equipped to understand an environment and to process information. “And there are no emotions that can cloud judgement, such as anger,” he added. Nor is there any inherent right to self-defence.
For now, however, there remain several barriers to the creation and deployment of Terminator-like killing machines. Some are technical. Teaching a computer-driven machine — even an intelligent one — how to distinguish between civilians and combatants, or how to gauge a proportional response as mandated by the Geneva Conventions, is simply beyond the reach of artificial intelligence today. But even if technical barriers are overcome, the prospect of armies increasingly dependent on remotely-controlled or autonomous robots raises a host of ethical issues that have barely been addressed. Arkin points out that the US Department of Defense’s 230 billion dollar Future Combat Systems programme — the largest military contract in US history — provides for three classes of aerial and three land-based robotics systems. “But nowhere is there any consideration of the ethical implications of the weaponisation of these systems,” he said. For Sharkey, the best solution may be an outright ban on autonomous weapons systems. “We have to say where we want to draw the line and what we want to do — and then get an international agreement,” he said.
[Whilst we are nowhere near the future world portrayed in the Terminator movies, there is a clear path towards that world. Military forces all over the world — but especially in the West — are pouring billions of dollars into developing increasingly autonomous battlefield robots and arming them with increasingly lethal weapons. I think we all know where that could end. Developing intelligent machines whose sole purpose is to kill humans as efficiently as possible is not a wise thing to be doing. Let’s hope we don’t live to regret it.]
3 comments:
I for one welcome our new robotic overlords!
Seriously though, the state of AI is still in need of a major breakthrough of some sort before this happens - the AI out there still isn't that good. But remotely human-controlled machines used for warfare or policing would be useful. With no risk to life, problems with itchy trigger fingers could be eliminated, and the machine should hopefully be able to sustain some damage before it goes down. Also, the whole video/audio feed could be recorded to prevent abuse of the system.
AM said: Seriously though, the state of AI is still in need of a major breakthrough of some sort before this happens - the AI out there still isn't that good.
Very true. It'll probably be generations before we get half-decent AI.
AM said: With no risk to life, problems with itchy trigger fingers could be eliminated, and the machine should hopefully be able to sustain some damage before it goes down.
...and you'd trust a low-paid 'security consultant' being in charge of an armed robot during a riot? I wouldn't. If it's real enough to be useful and without any immediate consequences, don't you think that the number of "fire first" incidents will go up rather than down?
I'm not sure about using remote controlled machinery during riots, I was thinking more along the lines of urban warfare or any situation where the 'bad guys' could be mixed in with civilians.
In either case though, why would the operator be more likely to shoot first? Recording of the remote data would present a perfect facsimile of exactly what the operator saw and heard, meaning perfect accountability; and these robots should be able to withstand damage for long enough for the operator to ascertain who the actual bad guys are. There'd simply be no excuse for 'accidentally' shooting innocents in the heat of the moment.
Additionally, if the robot is attacked there wouldn't be any risk to the operator, which should reduce any impulse to simply shoot in panic. Unless you believe that most of these guys are out to simply kill people - even if so, there's the accountability angle.
If there were a scenario where an armed remote-controlled robot started firing live rounds into a crowd during a riot (they really ought to be firing tear gas or rubber bullets, and a machine operator should have the luxury of being able to take his time to aim), you can bet questions would be asked, and the knowledge that there was, literally, a full recording of everything that happened (probably right down to a black box for the machine) would certainly make such an occurrence unlikely. Unless, of course, we're talking about some sort of fascist, totalitarian government, but they'd probably slaughter people regardless of whether robots were being used or not.
So basically, no, I think first fire incidents would go down, not up.