Saturday, November 14, 2009

Smart machines: What's the worst that could happen?

by MacGregor Campbell for New Scientist

27 July 2009

An invasion led by artificially intelligent machines. Conscious computers. A smartphone virus so smart that it can start mimicking you. You might think that such scenarios are laughably futuristic, but some of the world's leading artificial intelligence (AI) researchers are concerned enough about the potential impact of advances in AI that they have been discussing the risks over the past year. Now they have revealed their conclusions.

Until now, research in artificial intelligence has been mainly occupied with myriad basic challenges that have turned out to be very complex, such as teaching machines to distinguish between everyday objects. Human-level artificial intelligence or self-evolving machines were seen as long-term, abstract goals not yet ready for serious consideration.

Now, for the first time, a panel of 25 AI scientists, roboticists, and ethical and legal scholars has been convened to address these issues, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI) in Menlo Park, California. It looked at the feasibility and ramifications of seemingly far-fetched ideas, such as the possibility of the internet becoming self-aware. The panel drew inspiration from the 1975 Asilomar Conference on Recombinant DNA in California, in which over 140 biologists, physicians, and lawyers considered the possibilities and dangers of the then-emerging technology for creating DNA sequences that did not exist in nature. Delegates at that conference foresaw that genetic engineering would become widespread, even though practical applications – such as growing genetically modified crops – had not yet been developed.

Unlike recombinant DNA in 1975, however, AI is already out in the world. Robots like Roombas and Scoobas help with the mundane chores of vacuuming and mopping, while decision-making devices are assisting in complex, sometimes life-and-death situations. For example, Poseidon Technologies sells AI systems that help lifeguards identify when a person is drowning in a swimming pool, and Microsoft's Clearflow system helps drivers pick the best route by analysing traffic behaviour. At the moment such systems only advise or assist humans, but the AAAI panel warns that the day is not far off when machines could have far greater ability to make and execute decisions on their own, albeit within a narrow range of expertise. As such AI systems become more commonplace, what breakthroughs can we reasonably expect, and what effects will they have on society? What's more, what precautions should we be taking?
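[The article doesn't explain how Clearflow actually works – Microsoft's models are proprietary and learned from traffic data – but the general idea behind any route-advice system can be sketched as a shortest-path search over a road network whose edge costs reflect current travel times. Here's a minimal Python sketch over a made-up road network; the network and numbers are purely illustrative, not Clearflow's method:]

```python
import heapq

def best_route(graph, start, goal):
    """Dijkstra's shortest-path search over a road graph whose edge
    weights are expected travel times (minutes) under current traffic."""
    # Priority queue of (travel time so far, node, path taken).
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, cost in graph.get(node, {}).items():
            if neighbour not in seen:
                heapq.heappush(queue, (minutes + cost, neighbour, path + [neighbour]))
    return None  # no route found

# Hypothetical road network: edge weights already reflect traffic conditions.
roads = {
    "home":     {"highway": 10, "backroad": 18},
    "highway":  {"office": 25},  # congested
    "backroad": {"office": 9},   # clear
}
print(best_route(roads, "home", "office"))
# -> (27.0, ['home', 'backroad', 'office'])  # avoids the congested highway
```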

These are among the many questions that the panel tackled, under the chairmanship of Eric Horvitz, president of the AAAI and senior researcher with Microsoft Research. The group began meeting by phone and teleconference in mid-2008, then in February this year its members gathered for a weekend at Asilomar, a quiet retreat on California's Monterey Peninsula, to debate and seek consensus. They presented their initial findings at the International Joint Conference on Artificial Intelligence (IJCAI) in Pasadena, California, on 15 July.

Panel members told IJCAI that they unanimously agreed that creating human-level artificial intelligence – a system capable of expertise across a range of domains – is possible in principle, but disagreed as to when such a breakthrough might occur, with estimates varying wildly between 20 and 1000 years. Panel member Tom Dietterich of Oregon State University in Corvallis pointed out that much of today's AI research is not aimed at building a general human-level AI system, but rather focuses on "idiot savants": systems good at tasks in a very narrow range of applications, such as mathematics.

The panel discussed at length the idea of an AI "singularity" – a runaway chain reaction of machines capable of building ever-better machines. While admitting that it was theoretically possible, most members were sceptical that such an exponential AI explosion would occur in the foreseeable future, given the lack of projects today that could lead to systems capable of improving upon themselves. "Perhaps the singularity is not the biggest of our worries," said Dietterich.

A more realistic short-term concern is the possibility of malware that can mimic the digital behaviour of humans. According to the panel, identity thieves might feasibly plant a virus on a person's smartphone that would silently monitor their text messages, email, voice, diary and bank details. The virus could then use these to impersonate that individual with little or no external guidance from the thieves. Most researchers think that they could develop such a virus. "If we could do it, they could," said Tom Mitchell of Carnegie Mellon University in Pittsburgh, Pennsylvania, referring to organised crime syndicates.

Peter Szolovits, an AI researcher at the Massachusetts Institute of Technology, who was not on the panel, agrees that common everyday computer systems such as smartphones have layers of complexity that could lead to unintended consequences or allow malicious exploitation. "There are a few thousand lines of code running on my cell phone and I sure as hell haven't verified all of them," he says. "These are potentially powerful technologies that could be used in good ways and not so good ways," says Horvitz, who cautions that besides the threat posed by malware, we are close to creating systems so complex and opaque that we don't understand them.

Given such possibilities, "what's the responsibility of an AI researcher?" says Bart Selman of Cornell, co-chair of the panel. "We're starting to think about it." At least for now we can rest easy on one score. The panel concluded that the internet is not about to become self-aware.

[Well, at least they’re starting to think about the implications of AI. That’s a hopeful sign]

3 comments:

Thomas Fummo said...

I actually had a very lengthy discussion about this with a friend over lunch the other day.

Whilst he is convinced that one day machines will be as intelligent as man, or more so... I am skeptical.
I think it'll take a long time before we even come close to developing a computer capable of flexible thought (imagination, subconsciousness, dreaming, etc.)... so long that it's highly likely we'll either give up or run out of natural resources before we get there.

CyberKitten said...

I certainly don't think AI is going to happen anytime soon - probably not in my lifetime. However, what little I know about the field leads me to believe that maybe their approach is wrong, which is why they're having so much difficulty creating thinking machines.

A huge part of the problem is that we don't even know how *we* think, which would, I imagine, make it rather difficult to duplicate it in something else. A breakthrough in either area is not beyond imagination. So it would be good if we had some ideas in place for what to do with it once it arrives. It's certainly one of the areas that I wouldn't recommend *stumbling* into.

dbackdad said...

CK said, "A huge part of the problem is that we don't even know how *we* think, which would, I imagine, make it rather difficult to duplicate it in something else." - Exactly. The first big breakthrough for AI is not going to be on the computer side ... it's going to be on the human brain decoding side. Once we have that figured out, I think all the computer AI hurdles will be relatively straightforward.

I think ethics panels like these are extremely important as you cannot wait until something is here before deciding whether it should be.

Ironically, I picked up some used sci-fi books today and one non-fiction - Steven Pinker's How the Mind Works. :-)