Can AI be programmed to be moral?

There’s a blog called “Wait But Why” that does a good job of taking dense, philosophical topics and presenting them in an amusing but thought-provoking way. The author recently did a very long post on the kind of artificial intelligence technology that many think is around the corner (possibly only a few decades away.) He concedes that AI could radically reshape the course of history, possibly by bringing about a Utopia or possibly by causing humanity’s destruction.

In part II of the post the author gets into the more negative secenerios. A point he makes well is that AI wouldn’t end humanity because the AI is “evil” or out to get us, but rather that humanity’s extinction might just be a logical output of of badly coded commands. A simplistic example would be someone tells an AI to solve world hunger and the computer “thinks”, “Well, only living humans are hungry so if I kill them all, they can’t be hungry.” Again, a very simplistic example, but a much more convoluted one could occur and wipe us out.

So the challenge is that we have to program our morality into the AI. It needs to follow our moral rules. But this is interesting because, as I’ve pointed out before, we really don’t have a clear and concise ruleset. We think we do, the Golden Rule being a good example, but
the Golden Rule is really quite flawed
. It’s more true to say we have a general set of moral intuitions that we kind of follow sometimes. Hardly the sort of thing that can be fed to a computer program.

So maybe the great challenge for humanity right now is to create a purely logocal, objective set of moral rules. I personally suspect that is impossible,

Leave a Reply

Your email address will not be published. Required fields are marked *