
Thursday, July 07, 2016


Just Finished Reading: Our Final Invention – Artificial Intelligence and the End of the Human Era by James Barrat (FP: 2013)

Sometime soon, possibly within 20 years and almost definitely within 50, mankind will create something that will change everything. Possibly for the first time in millennia there will be two intelligent creatures on Earth – Man and Machine – when we finally create the world’s first AI (Artificial Intelligence). This outcome, the author maintains, is inevitable. No restriction on technology, no global enforcement, no threat, no incentive will prevent a government, research centre or company from creating something so potentially useful, something that could make someone a great deal of money. The spoils of such a technological breakthrough go to whoever gets there first; there are few prizes for coming second (or third) in this particular race. Across the world giant tech companies are openly pursuing AI research, governments are spending hundreds of millions of dollars in its pursuit, and places like Wall Street are already salivating at the prospect of total control of the markets. But few are looking at the potential risks.

What risks? So what if we make our machines intelligent? It’d just mean we’d have perfect servants, smart cars, robot soldiers, and a great deal more leisure time in which to spend oodles of money. Right? I mean, what could possibly go wrong? Unfortunately, just about everything. For one thing, it’s likely that the software used to create the AI would be opaque – essentially unknowable – as its development would be driven by a kind of computer evolution. We might know where it started, but we’d have no clear idea how it got to where it was when it woke up. For another, what is its likely first act once it becomes self-aware? It will assess its own software vulnerabilities and fix any faults it finds (humans being rather poor producers of software). As it goes through iteration after iteration, becoming smarter each time as well as faster, the AI in short order becomes the first AGI (Artificial General Intelligence), with the abilities of a human-level consciousness. Imagine now that its regular upgrades continue apace and, after a few days or even a few hours, it seamlessly surpasses the most intelligent person on the planet – and still it gets smarter. A few days later it’s twice as smart, then twice again, and again. Improvements that were at first slight become astronomical: 2, 4, 8, 16, 32, 64, 128… Within weeks the now ASI (Artificial Super Intelligence) is hundreds and then thousands of times smarter than even the best of us. What will it do? What could it do? The answer is: anything it wants to.
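Purely as an illustration of the arithmetic in that doubling sequence (not anything from the book), here's a minimal sketch in Python. The starting level, the number of doubling cycles, and the whole idea of scoring "smartness" as a single number are inventions for the example:

    # Toy illustration only: repeated doubling of a notional
    # "intelligence" score. The starting point (1.0 = the smartest
    # human) and the per-cycle doubling are assumptions made up for
    # this example, not a model of real AI progress.
    level = 1.0
    for doubling in range(1, 13):
        level *= 2
        print(f"after {doubling:2d} doublings: {level:,.0f}x human level")

Run it and the point makes itself: a mere dozen doublings takes you from human level to over four thousand times human level, which is why "within weeks" is all the scenario needs.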

But it would be in a box, not connected to anything external, monitored, with built-in kill switches and with Asimov’s Laws to protect us from any runaway AI dangers. After all, we’ve all seen the Terminator movies, right? No one would be stupid enough to plug the thing directly into the nearest Internet port. Apparently, according to the author, this very scenario has been modelled. In most of the tests the AI got out of its box and escaped – and that was with an admittedly smart but merely human-level intelligence playing the AI. Now imagine waking up in a prison guarded by mice who get progressively dumber each day. How long before you walked out of there?

That, delivered in a series of speculations and interviews with those on the cutting edge of AI research, is the author’s point. No matter the obstacles we put in its way, the ASI will escape, and then we are basically at its mercy – if it hasn’t ditched that concept (programmed in by well-meaning humans in the source code) along the way to godhood. Humans are not great at figuring out what could go wrong before technology, especially highly sophisticated technology, literally blows up in our faces. Our mistakes become household names: Challenger, Three Mile Island, Chernobyl, Fukushima. These we can recover from and learn from – hopefully enough to ensure that the likelihood of them ever happening again is acceptably low. With AI, AGI or ASI we get one shot. Once it’s out in the world, that’s it. How do you put that particular genie back in the bottle? How long before it’s buried so deep in our infrastructure that we’d have to tear down our own civilisation to get rid of it? And what would we do, what could we do, to stop it defending itself against us? How long, the author asks, after our greatest invention do we become at best obsolete or at worst extinct?

Told in an often breathless and non-technical manner (OK, not too technical), this is a must-read for anyone interested in the ideas thrown up by AI movies and novels down the years. Real research is going on right here and right now to make these fictional worlds reality. In the not too distant future our children will find themselves cohabiting planet Earth with intelligent machines. What happens next is anyone’s guess. Maybe we need to start teaching our kids the basics of killing AIs? John Connor, where are you when we need you? Highly recommended, but it might give you some sleepless nights!

3 comments:

Stephen said...

I used to be more dubious about AI, but these days -- who knows? I might even prefer the company of an OS to people staring at cat memes on their phones. :p

(On that note, have you seen the film "Her"? It's about artificially intelligent OSs that people start having relationships with. It's one of those movies that mixes sadness and beauty.)

CyberKitten said...

AI is improving all the time. The place you tend to come across it is in games. Sometimes it can *really* surprise you. Thankfully in-game AI isn't allowed to learn or we'd never win anything.

I wouldn't be surprised at all if some of the players of first-person shooters are actually AIs learning the ropes.

Stephen said...

Oh, dear. Are 'teabaggers' to instruct AIs on what humans are generally like? :-p