AI Driving Dilemmas: A Crash Course in Ethics for Engineers

How do you make a car care?

It’s often said that the trolley problem – a thought experiment about whom a runaway vehicle should be allowed to kill – is a real-life conundrum for the makers of autonomous cars. But self-driving vehicles could raise much deeper and more complex moral questions.

Autonomous driving

The British government has promised to have fully self-driving cars on UK roads by the end of 2021. Driving our own cars is about to go the way of floppy discs and cassette tapes – right?

Well, there’s driving and driving. There are actually five levels of autonomy a car can have, sketched in code below.

Level one is driver assistance, which is already normal (it includes things like cruise control).

Level three includes things like automated lane-keeping but requires the driver to be ready to take back control in an emergency.

Level five is the dream. Herbie, Chitty Chitty Bang Bang, Pixar’s Cars. Empty cars competently driving themselves are a fundamentally different story – because unlike in the movies, cars don’t understand right from wrong. And the effort to teach them is raising profound moral dilemmas.
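For the record, the scale comes from the SAE J3016 standard, which also defines a level zero (no automation at all). Here it is as a minimal Python enum – a sketch for orientation, with levels two and four filling the gaps between the three described above:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """The SAE J3016 driving-automation levels, lightly abbreviated."""
    NO_AUTOMATION = 0           # the human does all the driving
    DRIVER_ASSISTANCE = 1       # e.g. cruise control
    PARTIAL_AUTOMATION = 2      # steering and speed together; driver supervises
    CONDITIONAL_AUTOMATION = 3  # e.g. automated lane-keeping; driver must be ready to take over
    HIGH_AUTOMATION = 4         # no driver needed, but only in a limited operating domain
    FULL_AUTOMATION = 5         # the dream: no driver, anywhere, in any conditions
```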

This is where the trolley problem inevitably comes up. It goes like this: a runaway vehicle is about to kill two or more people. You could redirect it onto a different path, where it would kill only one person – but you’d be directly responsible for that person’s death. What do you do?

The real problem here, and in much of the debate around AI vehicles, is that we assume there’s a right answer. If we could just agree on our moral code, we could program it into a car.

Unfortunately, the universal human moral code doesn’t exist. Researchers writing in the science journal Nature recently put versions of the trolley problem to people in 40 different countries. The result? 40 different culturally specific versions of moral intuition.

What’s more, these simplistic theoretical problems can’t settle real-world dilemmas, where driverless cars face high levels of uncertainty. Our cars won’t know for sure how many people they’ll kill if they swerve. Engineers can’t tell them the right and wrong of every situation in advance – the cars will need to work out context-specific solutions for themselves.
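To see why uncertainty changes the problem, here’s a minimal sketch of a probability-weighted choice. Everything in it is invented for illustration – the actions, the probabilities, and above all the decision rule, since minimising expected harm is just one contested moral theory among many:

```python
ACTIONS = {
    "brake_straight": [
        # (probability, people harmed) – hypothetical numbers
        (0.7, 0),  # most likely: the car stops in time
        (0.3, 2),  # otherwise: it hits the pedestrians ahead
    ],
    "swerve": [
        (0.5, 0),  # clears everyone
        (0.4, 1),  # hits the bystander on the verge
        (0.1, 3),  # loses control entirely
    ],
}

def expected_harm(outcomes):
    """Probability-weighted number of people harmed."""
    return sum(p * harmed for p, harmed in outcomes)

for action, outcomes in ACTIONS.items():
    print(f"{action}: expected harm = {expected_harm(outcomes):.2f}")

print("chosen:", min(ACTIONS, key=lambda a: expected_harm(ACTIONS[a])))
```

Nudge the probabilities a little and the “right” answer flips – which is exactly why the right and wrong of every situation can’t be written down in advance.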

Autonomous vehicles will instead use deep learning algorithms, learning how to handle moral dilemmas through exposure to thousands of scenarios. So in the example of deciding whether to swerve to avoid someone in the road, the car won’t be making one isolated decision but a sequence of connected decisions – how fast to approach, when to slow, which lane to hold – each one shaping the options available at the next.
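Here’s a toy sketch of that idea, with a hand-written rule standing in for the trained network; every state variable, action, and number is invented for illustration. The point is that whether the final “swerve or brake” emergency ever arises depends on the mundane decisions taken seconds earlier:

```python
from dataclasses import dataclass

@dataclass
class State:
    speed_mps: float               # current speed, metres per second
    distance_to_obstacle_m: float  # distance to the person in the road
    lane_clear: bool               # is the adjacent lane free?

def policy(state: State) -> str:
    """Stand-in for a learned policy mapping state -> action."""
    if state.distance_to_obstacle_m > 50:
        return "slow_down"         # an early, low-stakes decision...
    if state.lane_clear:
        return "change_lane"       # ...shapes which options exist later
    return "emergency_brake"

def step(state: State, action: str) -> State:
    """Toy transition model: each action determines the next state."""
    if action == "slow_down":
        return State(state.speed_mps * 0.8, state.distance_to_obstacle_m - 10, state.lane_clear)
    if action == "change_lane":
        return State(state.speed_mps, 1000.0, True)  # the person is no longer ahead
    return State(0.0, state.distance_to_obstacle_m, state.lane_clear)  # hard stop

state = State(speed_mps=20.0, distance_to_obstacle_m=80.0, lane_clear=False)
for t in range(5):
    action = policy(state)
    print(f"t={t}: {state.speed_mps:.1f} m/s, obstacle {state.distance_to_obstacle_m:.0f} m -> {action}")
    state = step(state, action)
```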

The idea of a car with a mind of its own has been a human dream for almost as long as there’ve been cars. But profound moral dilemmas remain to be solved before it becomes reality. These dilemmas are too important to be left exclusively to engineers and programmers; they’re the domain of philosophers and of all humanity.

