
Thursday, July 11, 2019


Just Finished Reading: Life 3.0 – Being Human in the Age of Artificial Intelligence by Max Tegmark (FP: 2017)

It’s coming whether we want it or not, it’s just a matter of when. Although there’s a chance that such a thing might actually be impossible, most experts in the field consider the arrival of Artificial General Intelligence (intelligence equal to that of humans) a certainty. Few, however, can agree on when it will arrive or what it ultimately means for humanity – its creators. Few experts consider it likely that AGI will arrive imminently – as in the next 5 years. It’s just about possible, but a number of unexpected breakthroughs would be needed to make that happen. An equally small number hold that AGI is either impossible or hundreds of years away. The present consensus seems to hover around 2045-2050.

But when it does finally arrive, what will it mean for us? Few if any see Terminators in our future, although some do not rule out that a future AGI could see us as a threat and be determined to do something about us. One of the scariest aspects of hyper-intelligence is that we might not even be aware that our AI has determined our fate in a microsecond. It might just as easily introduce an irreversible contraceptive into our drinking water or the atmosphere as send robot killers to hunt us down. We’d probably never know until it was far, far too late. Likewise an empowered AGI could see us as a minor inconvenience (as we often see ants or wasps) and eliminate us to make way for a host of ultra-efficient batteries (just not as in The Matrix). But how do we avoid this fate? A good chunk of the book – one of the less speculative sections – discusses bringing to fruition Friendly AI, which won’t kill us either by accident or by design. It might end up keeping us as pets or, as we do with endangered species, paternalistically looking out for us. The bargain, according to the author, might be worth it: the trade-off for being obsolete is a long life free from pain, disease or want – a bounded paradise for as long as we want it.

The author is primarily a mathematician and cosmologist, and it shows. Whilst a very large portion of the book is theoretical, a good chunk is very theoretical indeed. When the discussion moves to humanity in the next billion years and the colonisation of other galaxies (without the benefit of Faster Than Light travel) you know things have moved far beyond the reasonable – especially considering that humans have been around for much less than 500,000 years and we haven’t yet managed to start a colony on the Moon or Mars. But that kind of expansion into known and unknown space is based on the author’s expectation that, once AGI becomes a reality, pretty much everything becomes possible. He did seem more than a little entranced with the idea of The Singularity (AKA The Rapture of the Nerds), where technology advances so far and so fast that it simply cannot be speculated about from this side of the ‘event horizon’. But it’s easy to get carried away – with talk of computing power thousands or even millions of times more powerful than today’s. Just imagine what you could accomplish if the laptop you use every day had more computing capacity and simple raw power than every machine in existence today. At the push of a few buttons and the click of a mouse you could do….. anything. Now multiply THAT by a million and think what we could do.

There was much here that I agreed with. AI or AGI is coming within the next 100 years, and probably much sooner than that. Once it exists it will not be contained for long. Once out it will…. well, do what it wants. If we can ensure that its goals are either in line with ours or do not actively conflict with ours, we could see not just a new Golden Age but a Golden Age that extends into the far, far future. If our goals do not align, and especially if they conflict, then we’re in trouble – BIG trouble. That in itself is a good portion of the book: the idea that we need to address this issue now, before the AI arrives without any guidance or any idea of what we need to teach it. Being ready for Life 3.0 will be of immeasurable help in keeping humanity alive and in assisting the birth of a future worth having for countless generations to come. It’s a noble aim that should not and cannot be ignored. Most definitely worth reading for anyone interested in the future of computers, of humanity and the likely consequences of getting the technology wrong. Highly recommended.

7 comments:

mudpuddle said...

i'm sceptical... my computer can't fix a flat tire... Stanislaw Lem wrote a piece describing the evolution of a smarter-than-human computer. it rapidly became incomprehensible to man and kept getting smarter until it ate the universe. if you've never read any Lem, i'd highly recommend him: he was very smart and funny and satirical...

Brian Joseph said...

This sounds similar to Nick Bostrom’s Superintelligence, which I read a few years ago. In that book both the dangers and possible solutions were covered. It is so difficult to predict what will happen. It is possible, but not certain, that any speculation about motivations might be moot. It might be that despite its power and intelligence, an AI might have no motivation. Something like motivation might just be the result of billions of years of natural selection, which an AI will not be the product of.

mudpuddle said...

curious and quite possible, Brian...

Stephen said...

Does the author argue why he thinks AGI is so close? Specific intelligences I can see, like for driving or shopping, but general intelligence is a high mark. And...honestly, how do we qualify general intelligence? Will we have a test, or will we only know when the thing locks us out of the spaceship?

CyberKitten said...

@ Mudpuddle: Well, a computer can't directly fix a flat tire... but it could direct a machine to do so - and THEN destroy mankind. Intelligent machines will VERY swiftly become incomprehensible to us. The question is: Then what?

@ Brian: The author mentions Bostrom on several occasions throughout the book. Goals & goal-setting is an interesting one. You can have goals without intelligence (think of a heat-seeking missile), but can you have intelligence without goals? Of course, when we design AIs we'll give them goals, so they'll have that base to work with.

@ Stephen: The timeline for AGI was a combination of the consensus of experts and his regular observation that the research is moving really fast – to say nothing of the ever-expanding computing power of machines. There is a great incentive to create AGI, and the first to do so will have a distinct advantage.

I think they'll have tests, but there's a very small window of opportunity to use them before the AGI becomes smart enough to fake its results.

Judy Krueger said...

Pondering. The race between climate change and AGI? Seeing as how humans created the atomic bomb and still can't quite control atomic energy, it seems dicey to me, but perhaps scientists have at least learned a few lessons. It sounds like the question is who will control whom. Thanks for your review!

CyberKitten said...

@ Judy: So many existential threats, so little time....

I'm certainly hoping that those working on AI have learnt from previous mistakes. This book certainly points in that direction, so there is hope! However, if AI is going to be as smart as some people think it will be, then we haven't a hope of controlling it. The best we can hope for is that it is actually benevolent or, at worst, indifferent to us.