
Saturday, July 03, 2010

Picking our brains: Can we make a conscious machine?

by Celeste Biever for New Scientist

06 April 2010

CHALLENGES don't get much bigger than trying to create artificial consciousness. Some doubt whether it can be done - or whether it ever should be. Bolder researchers are not put off, though. "We have to consider machine consciousness as a grand challenge, like putting a man on the moon," says Antonio Chella of the University of Palermo in Italy, editor of the International Journal of Machine Consciousness. The journal was launched last year, a sign of the field's growing momentum. Another landmark is the "ConsScale", recently developed by Raúl Arrabales of the Carlos III University of Madrid in Spain to compare the intelligence of various software agents - and biological ones too.

Perhaps the closest a software bot has come so far is IDA, the Intelligent Distribution Agent built in 2003 by Stan Franklin at the University of Memphis in Tennessee. IDA assigns sailors in the US navy to new jobs when they finish a tour of duty, juggling naval policies, job requirements, changing costs and sailors' needs. Like people, IDA has "conscious" and "unconscious" levels of processing. At the unconscious level she deploys software agents to gather data and process information. These agents compete to enter IDA's "conscious" workspace, where they interact with each other and decisions get made. The updated Learning IDA, or LIDA, was completed this year. She learns from what reaches her consciousness and uses this to guide future decisions. LIDA also has the benefit of "emotions" - high-level goals that guide her decision-making.

Another advance emerged from designing robots able to maintain their function after being damaged. In 2006, Josh Bongard at the University of Vermont in Burlington designed a walking robot with a continuously updated internal model of itself. If the robot is damaged, this self-knowledge allows it to devise an alternative gait using its remaining abilities. Having an internal "imagined" model of ourselves is considered a key part of human sentience, so this takes the robot closer to self-awareness.
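[The "competition to enter consciousness" the article describes can be sketched in a few lines of Python. This is purely my own illustrative toy - not Franklin's actual code - and the codelet names and activation scores are invented for the example: unconscious mini-agents each carry a piece of information with an urgency score, and only the winner of the competition gets "broadcast" to the conscious workspace.]

```python
# Toy sketch of a global-workspace competition, loosely in the spirit of
# IDA/LIDA (illustrative only; names and scores are invented).

class Codelet:
    """An unconscious mini-agent carrying one piece of information."""
    def __init__(self, name, content, activation):
        self.name = name
        self.content = content
        self.activation = activation  # urgency/relevance score, 0.0 to 1.0

def conscious_broadcast(codelets):
    """The most activated codelet wins and is broadcast to the workspace."""
    winner = max(codelets, key=lambda c: c.activation)
    return winner.content

# Hypothetical codelets competing during a job-assignment decision.
codelets = [
    Codelet("policy-check", "sailor is due for sea duty", 0.4),
    Codelet("cost-monitor", "transfer cost is too high", 0.9),
    Codelet("preference", "sailor prefers San Diego", 0.6),
]

print(conscious_broadcast(codelets))  # prints "transfer cost is too high"
```

[Only the broadcast content influences the next decision - everything else stays "unconscious" - which is the point of the architecture.]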

Along with an internal model, the robot developed by Owen Holland's team at the University of Sussex, UK, is also anatomically human-like. "A robot with a body that is very close to a human's will develop cognition that is closer to the human variety," Holland claims. None of these approaches solves what many consider to be the "hard problem" of consciousness: subjective awareness. No one yet knows how to design the software for that. But as machines grow in sophistication, the hard problem may simply evaporate - either because awareness emerges spontaneously, or because we will simply assume it has emerged without knowing for sure. After all, when it comes to other humans, we can only assume they have subjective awareness too; we have no way of proving we are not the only self-aware individual in a world of unaware "zombies". While we may never know for sure whether a machine is experiencing consciousness or only appears to, building such a machine would revolutionise our understanding of the brain. "My real goal is to figure out how minds work," says Franklin. "You really don't know how something works until you can build it."

[I am not one of those people who believe that there is something unique or special about biological consciousness that cannot be replicated in a machine. If it can be done, and I can’t see why not, then it will be done – and along the way we might finally get a handle on our own consciousness. It’s a fascinating (and somewhat scary) idea - that we can create conscious beings like ourselves. It may feel like science fiction today but I doubt that it will remain so for long.]

3 comments:

dbackdad said...

I wholeheartedly believe that we will eventually create consciousness. It is all biology, chemistry, physics. There is no "soul" that is external to those things.

That is not to say that there are not going to be ethical and practical problems or that it's going to happen tomorrow. I think we're still a fair ways away ... probably at least 100 years.

wstachour said...

I tend to look at these things pragmatically: what does consciousness get us? What are the functions of emotions? Does the entity have to be able to idly contemplate itself to be conscious? (And so are animals conscious?) I suppose this is mostly philosophy, which is inherently beyond my pay grade.

Emotions are thought in some circles to be reconciliation programming between primitive and higher brain impulses, and I wonder whether, with machine intelligence, we'll get any closer than assigning levels of urgency to that programming as it meets its mandates--high levels of urgency may take on an emotional patina.

It's very interesting stuff.

CyberKitten said...

dbackdad said: It is all biology, chemistry, physics. There is no "soul" that is external to those things.

Agreed. There is nothing supernatural about it.

dbackdad said: That is not to say that there are not going to be ethical and practical problems or that it's going to happen tomorrow.

I can certainly see ethical issues surrounding how we treat our new creations. If they're conscious beings will they be covered by the Human Rights Convention?

dbackdad said: I think we're still a fair ways away ... probably at least 100 years.

I have a feeling that it will be before that. Say 50 years rather than 100.

wunelle said: I suppose this is mostly philosophy, which is inherently beyond my pay grade.

Oh, I don't think that philosophy is beyond anyone's pay grade. After all, it's simply the love of wisdom and the ability to think about things a bit more deeply....

Emotions do seem to be important. They'll probably need to be programmed into any potential self-aware AI. Interesting indeed.....