Annotating Radiolab's "Driverless Dilemma"
00:00:40
I'm Jad Abumrad.
00:01:36
This is Radiolab. [laughs]
00:01:40
Okay, so we're gonna play you a little bit of tape first just to set up the—what we're gonna do today. About a month ago, we were doing the thing about the fake news.
00:03:13
Yeah, he was like, "You know, you guys want to talk about fake news, but that's not actually what's eating at me."
00:03:24
Quite bold!
00:04:43
Yes, but you know what? It's funny. One of the things that—I mean, we couldn't use that tape, initially at least.
00:06:20
But we kept thinking about it because it actually weirdly points us back to a story we did about a decade ago. The story of a moral problem that's about to get totally reimagined.
00:06:44
So what we thought we would do is we're—we're gonna play you the story as we did it then, sort of the full segment, and then we're gonna amend it on the back end. And by way of a disclaimer, this was at a moment in our development where there were just, like, way too many sound effects. It's just gratuitous.
00:07:07
No, I'm—I'm gonna apologize because there's just too much.
00:07:31
Just too much. And also, like, we—we talk about the MRI machine like it's this, like, amazing thing, when it was—it's sorta commonplace now. Anyhow, doesn't matter. We're gonna play it for you and then talk about it on the back end. This is—we start with a description of something called "the trolley problem." You ready?
00:07:44
All right. You're gonna hear some train tracks. Go there in your mind.
00:08:04
There are five workers on the tracks, working. They've got their backs turned to the trolley, which is coming in the distance.
00:08:24
They are repairing the tracks.
00:08:33
They don't see it. You can't shout to them.
00:08:45
And if you do nothing, here's what will happen: five workers will die.
00:09:00
No, you don't. But you have a choice. You can do A) nothing. Or B) it so happens, next to you is a lever. Pull the lever, and the trolley will jump onto some side tracks where there is only one person working.
00:09:12
Yeah, so there's your choice. Do you kill one man by pulling a lever, or do you kill five men by doing nothing?
00:09:17
Naturally. All right, here's part two. You're standing near some train tracks. Five guys are on the tracks, just as before. And there is the trolley coming.
00:09:18
Same five guys.
00:09:24
Yeah, yeah, exactly. However, I'm gonna make a couple changes. Now you're standing on a footbridge that passes over the tracks. You're looking down onto the tracks. There's no lever anywhere to be seen, except next to you, there is a guy.
00:09:26
A large guy, large individual standing next to you on the bridge, looking down with you over the tracks. And you realize, "Wait, I can save those five workers if I push this man, give him a little tap."
00:09:39
He'll land on the tracks and stop the ...
00:09:51
[laughs] Right.
00:10:05
But surely you realize that the math is the same.
00:10:31
Yeah.
00:10:37
All right, here's the thing. If you ask people these questions—and we did—starting with the first.
00:10:46
"Is it okay to kill one man to save five using a lever?" nine out of ten people will say ...
00:11:42
But if you ask them, "Is it okay to kill one man to save five by pushing the guy?" nine out of ten people will say ...
00:12:13
It is practically universal. And the thing is, if you ask people, "Why is it okay to murder"—because that's what it is—"murder a man with a lever and not okay to do it with your hands?" people don't really know.
00:12:53
Yeah?
00:14:27
Mm-hmm.
00:15:55
So when people answer yes to the lever question, there are—there are places in their brain which glow?
00:16:49
Even though the questions are basically the same?
00:17:19
Well, what does that mean? What does Josh make of this?
00:20:24
Do you buy this?
00:20:36
Yeah.
00:23:19
And Josh thinks there are times when these different moral positions that we have embedded inside of us, in our brains, can come into conflict. And in the original episode, we went into one more story. This one, you might call the "Crying baby dilemma."
00:26:17
Right.
00:26:44
Well, who breaks the tie? I mean, they had to answer something, right?
00:28:53
So in those cases when these dots above our eyebrows become active, what are they doing?
00:29:12
Okay, so that was the story we put together many, many, many years ago, about a decade ago. And at that point, the whole idea of thinking of morality as kind of purely a brain thing, it was relatively new. And certainly, the idea of philosophers working with MRI machines, it was super new. But now here we are, 10 years later, and some updates. First of all, Josh Greene ...
00:29:42
We talked to him again. He has started a family. He's switched labs from Princeton to Harvard. But that whole time, that interim decade, he has still been thinking and working on the trolley problem.
00:29:50
For years, he's been trying out different permutations of the scenario on people. Like, "Okay, instead of pushing the guy off the bridge with your hands, what if you did it, but not with your hands?"
00:29:59
And to cut to the chase, what Josh has found is that the basic results that we talked about ...
00:30:02
It's still the case that people would like to save the greatest number of lives, but not if it means pushing somebody with their own hands—or with a pole, for that matter. Now here's something kind of interesting. He and others have found that there are two groups that are more willing to push the guy off the bridge: Buddhist monks and psychopaths.
00:30:20
That would be the psychopaths, whereas the Buddhist monks presumably are really good at shushing their "inner chimp," as he called it, and just saying to themselves ...
00:30:28
So there's all kinds of interesting things you can say about the trolley problem as a thought experiment, but at the end of the day, it's just that. It's a thought experiment. What got us interested in revisiting it is that it seems like the thought experiment is about to get real.
00:30:32
That's coming up right after the break.
00:30:52
Jad, Robert. Radiolab. Okay, so where we left it is that the trolley problem is about to get real. Here's how Josh Greene put it.
00:31:49
Okay, so self-driving cars, unless you've been living under a muffler, they are coming. It's gonna be a little bit of an adjustment for some of us.
00:32:16
But what Josh meant when he said it's the trolley problem ...
00:32:30
... is basically this. Imagine this scenario ...
00:32:31
That suddenly is a real-world question.
00:32:37
Like, what, theoretically, should a car in this situation do?
00:32:58
So if it's between one driver and five pedestrians ...
00:33:20
But when you ask people, forget the theory ...
00:34:28
So there's your problem: people would sell a car—and an idea of moral reasoning—that they themselves wouldn't buy. And last fall, an exec at Mercedes-Benz face-planted right into the middle of this contradiction.
00:35:03
Okay, October 2016, the Paris Motor Show. You had something like a million people coming in over the course of a few days. All the major carmakers were there.
00:35:10
Everybody was debuting their new cars, and one of the big presenters in this whole affair was this guy ...
00:35:50
This is Christoph von Hugo, a senior safety manager at Mercedes-Benz. He was at the show demonstrating a prototype of a car that could sort of self-drive its way through traffic.
00:36:22
He's doing dozens and dozens of interviews throughout the show, and in one of those interviews—unfortunately, this one we don't have on tape—he was asked, "What would your driverless car do in a trolley problem-type dilemma, where maybe you have to choose between one or many?" And he answered, quote ...
00:36:37
If you know you can save one person, save that one person.
00:36:56
This is Michael Taylor, correspondent for Car and Driver magazine. He was the one Christoph von Hugo said that to.
00:37:10
This is producer Amanda Aronczyk.
00:37:36
I mean, all he's really doing is saying what's on people's minds, which is that ...
00:38:00
Who's gonna buy a car that chooses somebody else over them? Anyhow, he makes that comment, Michael prints it, and a kerfuffle ensues.
00:39:44
And those trade-offs could get really, really tricky and subtle. Because obviously, these cars have sensors.
00:39:51
This is Raj Rajkumar. He's a professor at Carnegie Mellon.
00:39:54
He is one of the guys who is writing the code that will go inside GM's driverless car. He says, yeah, the sensors on these cars at the moment ...
00:40:29
... pretty basic.
00:40:36
But he says, it won't be long before ...
00:40:48
Eventually they will be able to detect people of different sizes, shapes, and colors. Like, "Oh, that's a skinny person, that's a small person, tall person, Black person, white person. That's a little boy, that's a little girl."
00:40:50
So forget the basic moral math. Like, what does a car do if it has to decide, oh, do I save this boy or this girl? What about two girls versus one boy and an adult? How about a cat versus a dog? A 75-year-old guy in a suit versus that person over there who might be homeless? You can see where this is going. And it's conceivable that cars will know our medical records, and back at the car show ...
00:41:10
The Mercedes guy basically said that in a couple of years, the cars will be networked. They'll be talking to each other. So just imagine a scenario where, like, cars are about to get into accidents, and right at the decision point, they're, like, conferring. "Well, who do you have in your car?" "Me, I got a 70-year-old Wall Street guy, makes eight figures. How about you?" "Well, I'm a bus full of kids. Kids have more years left. You need to move." "Well, hold up. I see that your kids come from a poor neighborhood and have asthma, so I don't know."
00:41:26
[laughs] How does society decide? I mean, help me imagine that.
00:41:29
Raj told us that two things basically need to happen. First, we need to get these robocars on the road, get more experience with how they interact with us human drivers and how we interact with them. And second, there need to be, like, industry-wide summits.
00:41:45
This is Bill Ford Jr. of the Ford Motor Company, giving a speech in October of 2016 at the Economic Club of DC.
00:41:57
Because, like, what if the Tibetan cars make one decision and the American cars make another?
00:42:22
So far, Germany is the only country that we know of that has tackled this head-on.
00:42:58
They—the government has released a code of ethics that says, among other things, that self-driving cars are forbidden to discriminate between humans in almost any way—not on race, not on gender, not on age, nothing.
00:43:50
How we get to that globally accepted standard is anyone's guess. And what it will look like, whether it'll be, like, a coherent set of rules or, like, rife with the kind of contradictions we see in our own brain, that also remains to be seen. But one thing is clear.
00:44:20
Oh, there are cars coming ...
00:44:27
... with their questions.
00:44:46
Okay, we do need to caveat all this by saying that the moral dilemma we're talking about in the case of these driverless cars is gonna be super rare. Mostly what'll probably happen is that, like, the planeloads' worth of people who die every day from car accidents, well, that number is just gonna hit the floor. And so you have to balance the few cases where a car might make a decision you don't like against the massive number of lives saved.
00:45:44
Mm-hmm.
00:46:10
[laughs]
00:46:28
Premeditated, yeah.
00:46:42
Well, yeah, but in ...
00:46:49
In the particulars, in the particulars it feels dark. It's a little bit like when, you know, should you kill your own baby to save the village?
00:47:03
Like, in the particular instance of that one child it's dark. But against the backdrop of the lives saved, it's just a tiny pinprick of darkness. That's all it is.
00:47:50
And that human being needs to meditate like the monks to silence that feeling because the feeling in that case is just getting in the way!
00:48:12
See, we're right back where we started now. All right, we should go.
00:48:27
Yes. Oh, this piece was produced by Amanda Aronczyk with help from Bethel Habte. Special thanks to Iyad Rahwan, Edmond Awad and Sydney Levine from The Moral Machine Group, MIT. Also thanks to Sertac Karaman, Xin Xiang and Roborace for all their help. And I guess we should go now.
00:49:04
I'm Jad Abumrad.
00:49:10
[laughs]
00:49:18
I'm gonna rig up an autonomous vehicle to the bottom of your bed.
00:49:27
So you're gonna go to bed and suddenly find yourself on the highway driving you wherever I want.
00:49:38
Anyhow, okay, we should go.
00:49:45
I'm Jad Abumrad.
00:49:50
Thanks for listening.