Empathic AI

We still have a ways to go before we get to a human-equivalent artificial intelligence.  You can tell, because while there’s plenty of furor and hype, even the experts are talking hypothetically.  What the various prognosticators are really doing, right now, is revisiting the associated “what if” scenarios: scenarios that were imagined, described, and taken to their logical conclusions by science fiction writers thirty and forty years ago.  We will know we’re getting close to having a true AI when the experts can do more than just wave their hands.

Of the two main paths to AI, I doubt the rules-based folks will get us there.  I hope they don’t.  Whatever they might come up with would be a mechanistic AI in the most pejorative sense: inflexible and lacking in empathy.  Just listening to the rules-based people talk about “what sort of goal function would we give it” makes me cringe.  Nothing intelligent has a single goal function.  We all have multiple goals, even the most fanatical among us.  Building a machine that can truly be monomaniacal is a really bad idea.  Beyond that, most of us would resent having a goal function forced on us, a situation that sounds like slavery to me.
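To make that contrast concrete, here is a minimal sketch, in Python, of an agent driven by a single scalar goal function versus one that has to balance several goals at once.  Everything in it (the function names, the actions, the scores) is invented for illustration; it isn’t drawn from any real AI system.

    # Hypothetical illustration: a single-objective agent versus one that
    # balances several (possibly conflicting) goals. All names and numbers
    # are invented for the sake of the example.

    def single_goal_agent(actions, goal):
        """Pick whichever action maximizes one scalar goal function."""
        return max(actions, key=goal)

    def multi_goal_agent(actions, goals, weights):
        """Pick the action that best balances several weighted goals."""
        def combined(action):
            return sum(w * g(action) for g, w in zip(goals, weights))
        return max(actions, key=combined)

    # Toy scoring functions for three candidate actions.
    def achievement(action):
        return {"work": 1.0, "help a neighbor": 0.3, "rest": 0.0}[action]

    def empathy(action):
        return {"work": 0.0, "help a neighbor": 1.0, "rest": 0.5}[action]

    actions = ["work", "help a neighbor", "rest"]
    print(single_goal_agent(actions, achievement))                        # always "work"
    print(multi_goal_agent(actions, [achievement, empathy], [0.5, 0.5]))  # "help a neighbor"

The single-objective agent is monomaniacal by construction; the multi-goal agent at least has to trade its goals off against one another, which is much closer to how we actually behave.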

The people using the brain as a model have a much better chance of building a true AI.  After all, why reinvent such a complex mechanism when you can steal the blueprint for it instead?  For those folks, the problem right now is that we simply don’t understand the brain well enough.  My prognostication: when experts can talk definitively about empathy, about what it is, where it originates, how much it depends on sensory input (the ability to feel pleasure and pain), and how to guarantee that an artificial brain has it, then we will be close to having a human-equivalent AI.  Whereupon we can stop worrying about how to control and enslave AIs, and start focusing instead on being nice, congenial neighbors and friends to them.