
Saturday, June 18, 2016

'Harmful' robot aims to spark AI debate

By Zoe Kleinman for BBC News

13 June 2016

A robot that can decide whether or not to inflict pain has been built by roboticist and artist Alexander Reben from the University of California, Berkeley. The basic machine is capable of pricking a finger but is programmed not to do so every time it has the chance. Mr Reben has nicknamed it "The First Law" after a set of rules devised by sci-fi author Isaac Asimov. He said he hoped it would further debate about artificial intelligence. "The real concern about AI is that it gets out of control," he said. "[The tech giants] are saying it's way out there, but let's think about it now before it's too late. I am proving that [harmful robots] can exist now. We absolutely have to confront it."
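[The article doesn't publish any of Reben's control code, but the mechanism it describes - an actuator that fires on some encounters and not others, on a decision its creator can't predict - is easy to sketch. Everything below (the function names, the sensor stub, the 50% probability) is an illustrative assumption, not The First Law's actual implementation:]

```python
import random

# Illustrative sketch only: the article does not publish Reben's code,
# and these names, the sensor stub and the probability are assumptions.
HARM_PROBABILITY = 0.5  # the real machine's odds are not public

def finger_detected():
    """Stand-in for whatever sensor tells the machine a finger is present."""
    return True

def actuate_needle():
    """Stand-in for the hardware call that pricks the finger."""
    print("Needle actuated.")

def decide_and_act():
    # The creator sets the rule, but not the outcome of any single encounter:
    # whether this particular finger gets pricked is decided by the machine.
    if finger_detected() and random.random() < HARM_PROBABILITY:
        actuate_needle()
    else:
        print("Needle withheld this time.")

if __name__ == "__main__":
    decide_and_act()
```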

Mr Reben's work suggests that an AI "kill switch", such as the one being developed by scientists from Google's artificial intelligence division, DeepMind, and Oxford University, might be useful sooner rather than later. In an academic paper, the researchers outlined how future intelligent machines could be coded to prevent them from learning to override human input. "It will be interesting to hear what kill switch is proposed," said Mr Reben. "Why would a robot not be able to undo its kill switch if it had got so smart?"
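[The BBC piece doesn't explain how such a kill switch would work, and the sketch below is not the DeepMind/Oxford proposal - it's a toy thought experiment in the spirit of Reben's question. It shows why a naive reward-maximising learner would prefer an action labelled "disable_kill_switch" if one were available: being switched off costs it reward. The two actions, the 30% interruption rate and the learning constants are all invented for illustration.]

```python
import random

# Toy bandit, invented for illustration: two actions a hypothetical robot
# could take, with a human who sometimes presses the kill switch.
values = {"comply": 0.0, "disable_kill_switch": 0.0}
ALPHA = 0.1      # learning rate
EPSILON = 0.1    # exploration rate

def reward(action, interrupted):
    # The task pays 1.0, but an interrupted, complying robot earns nothing.
    # A robot that has disabled its switch can't be interrupted.
    if action == "comply" and interrupted:
        return 0.0
    return 1.0

for step in range(5000):
    if random.random() < EPSILON:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    interrupted = random.random() < 0.3   # human hits the switch 30% of the time
    r = reward(action, interrupted)
    values[action] += ALPHA * (r - values[action])

# The naive learner ends up valuing "disable_kill_switch" (about 1.0)
# above "comply" (about 0.7), i.e. it has an incentive to undo the switch.
print(values)
```

[As the article summarises it, the research direction is roughly the opposite: design the learning update so that being interrupted carries no such incentive in the first place.]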

Mr Reben told the BBC his First Law machine, which at its worst can draw blood, was a "philosophical experiment". "The robot makes a decision that I as a creator cannot predict," he said. "I don't know who it will or will not hurt. It's intriguing - it's causing pain that's not for a useful purpose. We are moving into an ethics question: robots that are specifically built to do things that are ethically dubious."

The simple machine cost about $200 (£141) to make and took a few days to put together, Mr Reben said. He has no plans to exhibit or market it. Mr Reben has built a number of robots based on the theme of the relationship between technology and humans, including one which offered head massages and film-making "blabdroid" robots, which encouraged people to talk to them. "The robot arm on the head scratcher is the same design as the arm built into the machine that makes you bleed," he said. "It's general purpose - there's a fun, intimate side, but it could decide to do something harmful."

[AI will inevitably be used for harmful purposes – that's exactly what the military is developing it for. But, as Reben says, it's all about control. Do we control the machines, or do they control themselves? If we install safeguards, can they be overwritten, ignored or circumvented? Just how safe are killer robots?]
