“Moral Machine”: Help AI Choose Who Lives Or Dies

There will come a point in the life of a self-driving car when its AI is faced with a dilemma: choosing who lives or dies in an impending accident.

For example, say the self-driving car experiences sudden brake failure. Its only choices are to keep going in the lane it’s already in (resulting in the death of a male doctor) or to swerve and instead hit a pregnant woman and four cats. Mind you, the doctor was crossing illegally against a red light.

And that is just one of the scenarios offered in “Moral Machine,” from the MIT Media Lab. The website is designed to gather a human perspective on the moral decisions artificial intelligence might have to make. Users can take a series of “tests” featuring different potential car-accident scenarios, “judging,” basically, who should live and who should die.

[Image: one of the scenarios in “Moral Machine”]

I’m going to be honest with you: a lot of this stuff gets really creepy. Choices have to be made whether to hit a homeless person or a person who presumably has a home. Or to hit a homeless person or a cat. Or to hit a young person or an elderly person. Or to hit a man or a woman. Or to hit a “large woman” (!) or an athlete.

Do you see the problems here? Is a self-driving car’s AI going to be programmed—or, probably more specifically, going to learn/infer based on a substantial body of data—to hit the homeless person? Or the elderly person? Or…the “large person???”

[Image: Who is more expendable? The “woman,” the “female athlete,” or the “large woman”?]

Look: as far as I understand it (based on writing video games, at least), you have to either “program” a computer with a solution to every scenario it might encounter, or “randomize” its response, or (in the case of AI) give it a large pool of data from which to “learn” what to do.
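To make that concrete, here’s a minimal sketch in Python of what those three approaches could look like. Everything in it is made up for illustration; none of the scenario names or choices come from Moral Machine or any real self-driving system.

```python
import random

# 1) "Program": hand-code a response for every scenario you can anticipate.
RULES = {
    "brake_failure_pedestrian_ahead": "stay_in_lane",
    "brake_failure_lane_blocked": "swerve",
}

def rule_based(scenario):
    # Raises KeyError on any scenario nobody thought to enumerate --
    # which is exactly the weakness of this approach.
    return RULES[scenario]

# 2) "Randomize": pick a response arbitrarily.
def randomized(scenario):
    return random.choice(["stay_in_lane", "swerve"])

# 3) "Learn": infer a response from a pool of past judgments,
#    here by a simple majority vote over matching scenarios.
def learned(scenario, past_judgments):
    votes = [action for s, action in past_judgments if s == scenario]
    return max(set(votes), key=votes.count) if votes else "stay_in_lane"

if __name__ == "__main__":
    crowd = [
        ("brake_failure_pedestrian_ahead", "swerve"),
        ("brake_failure_pedestrian_ahead", "swerve"),
        ("brake_failure_pedestrian_ahead", "stay_in_lane"),
    ]
    print(rule_based("brake_failure_pedestrian_ahead"))      # stay_in_lane
    print(randomized("brake_failure_pedestrian_ahead"))      # either option
    print(learned("brake_failure_pedestrian_ahead", crowd))  # swerve
```

The point is just how different the failure modes are: the hand-coded rules break on anything unanticipated, the random choice is indefensible, and the “learned” option inherits whatever its data pool tells it.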

I can’t figure out whether the crowd-sourced data derived from Moral Machine will actually be used as a data pool for testing AI. According to the website, its goals are:

1) building a crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas, and 2) crowd-sourcing assembly and discussion of potential scenarios of moral consequence.

But if we ever do want to crowd-source morality and feed it to a computer in order for it to learn…I mean, yikes.

How many people pick the homeless person to hit? How many pick the “large woman” over the “athletic woman”?
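To put numbers on questions like that, here’s a toy tally in Python. The responses are entirely invented, but it shows how crowd picks would turn into the statistics a learning system inherits.

```python
from collections import Counter

# Entirely made-up responses: which pedestrian each respondent chose to spare.
responses = [
    "doctor", "doctor", "homeless person", "doctor",
    "athletic woman", "athletic woman", "large woman",
]

tally = Counter(responses)
total = sum(tally.values())
for person, count in tally.most_common():
    # A per-category "spared" rate is exactly the kind of statistic a model
    # trained on this pool would reproduce -- bias included.
    print(f"{person}: spared in {count}/{total} responses ({count / total:.0%})")
```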

[Image: Will you be hitting the male or the female doctor?]

And when this AI eventually has to make these decisions out on the road…will it be “biased” in some way? Or will it even be straight-up programmed to consider certain demographics or “types” of people more “expendable” than others?

This isn’t a knock on MIT or Moral Machine. Fact is, the moral decisions AI will have to make are an important topic, whether it’s self-driving cars or machines for the military/police. It has to be grappled with somehow. And this is a “first step.”

But…it’s something to really think about and discuss, especially while this artificial intelligence is still in development. And maybe that, in the end, is the whole point of “Moral Machine.” Hopefully.