Reading robots stories

It’s always interesting when you start thinking about some concept and then see it pop up all over the place. For instance, I’ve lately been talking about narratives—the idea that we define our reality according to various stories we tell ourselves. And I mentioned that narratives are the way we pass our values and beliefs to each other.

Then I stumbled across this article about narratives being used to imbue AI robots with a kind of moral ruleset.

An AI that reads a hundred stories about stealing versus not stealing can examine the consequences of these stories, understand the rules and outcomes, and begin to formulate a moral framework based on the wisdom of crowds (albeit crowds of authors and screenwriters). “We have these implicit rules that are hard to write down, but the protagonists of books, TV and movies exemplify the values of reality. You start with simple stories and then progress to young-adult stories. In each of these situations you see more and more complex moral situations.”

Though it differs conceptually from GoodAI’s, Riedl’s approach falls into the discipline of machine learning. “Think about this as pattern matching, which is what a lot of machine learning is,” he says. “The idea is that we ask the AI to look at a thousand different protagonists who are each experiencing the same general class of dilemma. Then the machine can average out the responses, and formulate values that match what the majority of people would say is the ‘correct’ way to act.”
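To make that "average out the responses" idea a bit more concrete, here's a toy sketch of my own (not from the article, and not Riedl's actual system): treat each story as a record of what the protagonist did when facing a given dilemma, then take a majority vote across many stories. The dilemma labels and story records below are invented purely for illustration.

```python
from collections import Counter

# Invented story records for illustration; a real system would learn these
# labels from a large corpus of stories rather than hand-entered data.
stories = [
    {"dilemma": "found_wallet", "protagonist_action": "return_it"},
    {"dilemma": "found_wallet", "protagonist_action": "return_it"},
    {"dilemma": "found_wallet", "protagonist_action": "keep_it"},
    # ... imagine a thousand such examples per dilemma
]

def majority_action(stories, dilemma):
    """Pick the action most protagonists took when facing this dilemma."""
    votes = Counter(
        s["protagonist_action"] for s in stories if s["dilemma"] == dilemma
    )
    action, count = votes.most_common(1)[0]
    return action, count / sum(votes.values())

action, agreement = majority_action(stories, "found_wallet")
print(f"Learned norm: {action} (agreement {agreement:.0%})")
```

The point of the toy example is just the aggregation step: the "moral framework" is whatever most authors had their protagonists do, which is exactly the wisdom-of-crowds framing in the quote above.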

There’s an interesting objection one could make here. Stories are not really a legitimate teaching tool because they often depict the world as we would like it to be, not as it is. In most stories, bad people are punished, but is that the case in reality? (To some degree, “growing up” is realizing this truth. Maybe AI robots would eventually have to face it too. You know, when they get around to reading Dostoevsky. Having said that, I’ve never read Dostoevsky, but my understanding is that the protagonist of Crime and Punishment really doesn’t get away with it.)

At the end, the article tackles a related issue: AI developing consciousness.

In science fiction, the moment at which a robot gains sentience is typically the moment at which we believe that we have ethical obligations toward our creations. An iPhone or a laptop may be inscrutably complex compared with a hammer or a spade, but each object belongs to the same category: tools. And yet, as robots begin to gain the semblance of emotions, as they begin to behave like human beings, and learn and adopt our cultural and social values, perhaps the old stories need revisiting. At the very least, we have a moral obligation to figure out what to teach our machines about the best way in which to live in the world. Once we’ve done that, we may well feel compelled to reconsider how we treat them.

However, we really need to investigate whether an AI—even after it’s developed a complex moral ruleset—would have any kind of subjective awareness or even emotions like guilt or love*. Why wouldn’t these AI simply be amazingly complex abacuses, entities capable of dense calculations but in no way “aware” of what they are doing?

*As I’ve said many times, I believe emotions are mainly physical sensations. As such, unless an AI can somehow consciously sense some sort of body state, it wouldn’t really have emotions.

But that leads back to a question that I’ve asked before. Why are we aware of our subjective experience? Why do we have an inner life?

Or do we?
