Throughout 2018, we’ve been engaged in conversations about what the future looks like not just for EVs, but for autonomous vehicles. The biggest question that remains unanswered is who will set the ground rules for the ethics that must be predetermined. Who should the vehicle focus on saving: the driver, a group of 20 teenage tourists, or a baby in a pram? These are decisions a human driver would ordinarily have to make in the moment, and they are often unavoidable and heavily scrutinised. To have the outcomes of these scenarios predetermined, however, feels far more uncomfortable.

To help explain this, we’ve found a fantastic article from The Guardian by David Edmonds, Senior Research Fellow at the Oxford Uehiro Centre for Practical Ethics.

Loved by some philosophers, loathed by others, the so-called trolley problem is the quintessential moral puzzle. A runaway train is heading towards five people tied to a track. You can change a signal, diverting the train down a spur, so saving five lives. Unfortunately, one person is on the spur, and would die. What should you do? Most people – young and old, rich and poor – believe you should divert the train.

But what if a runaway train is heading towards five people, again tied to the track, and you are standing on a footbridge overlooking it, next to an overweight man? Once again you can save five lives, but only by toppling the heavy-set man over the bridge: he will die, but he is large enough to slow the train to a stop. What should you do? This time, almost everyone agrees that you should not kill one person to save the five lives.

Yet the trolley problem and reality are now on their own collision course. That’s because of autonomous machines – in particular the driverless car, which may be on our streets within a decade or so. Imagine the car is faced with an unavoidable accident – it can swerve one way and hit a child, or another and plough into several adult pedestrians. What should it be programmed to do?

There are tough engineering challenges for driverless cars to overcome before they are allowed to operate. But ethical issues might turn out to be the bigger obstacle.

Above all, however, the driverless car will be directly life-saving. Around the world, more than 1 million people are killed in car accidents each year, most because of driver error. The driverless car won’t go to the pub, won’t get distracted by its mobile phone, won’t become drowsy at the wheel.

So what is to be done? The crucial point is to acknowledge that trolley dilemmas are going to be extremely rare. The driverless car won’t have sluggish, human reaction times. If something unexpected occurs on the road ahead, it will almost always be able to brake in time to avoid a collision.

But we still need to work out what to do in those unusual cases where an accident is unavoidable. What kind of morals should be programmed into the machine? What the original trolley problem shows is that most of us are not crude utilitarians – that is, we don’t believe that the best course of action is always to maximise happiness, or to save the maximum number of human lives. If we did, we would see no difference between diverting the train and pushing the man over the bridge. Rather, most of us have Kantian instincts – we object to humans (such as our overweight man) being used merely as a means to an end.

However, I think that when it comes to machines we will be more tolerant of their making utilitarian decisions. This is borne out by a recent study that asked what people would do in various iterations of the trolley problem. It suggests that, around the world, people believe the car should save as many lives as possible, though it also revealed some variation in how people in different countries, for example, weighed a young life against an elderly one.

Driverless cars are merely one example of the autonomous machines to which we will delegate ethical choices. When should the carebot call for help if a patient is not taking her pills? What degree of risk to civilian life is acceptable before the autonomous missile launches an attack? An added complication is machine-learning – as machines “learn” how to act from repeated actions, they may end up behaving in unforeseen ways. As a result, it will make less and less sense to hold humans culpable for machine action – hence, I predict, our growing preference for utilitarian solutions.

If anything, the driverless car is the least problematic of these new dilemmas. Of course, we need to establish who is responsible for the ethics with which a vehicle is programmed, and what the ethical algorithm will be. It would be farcical if passengers or manufacturers were free to choose their car’s morality for themselves, so that, for example, you would select a Kantian Mini Cooper while I opted for a utilitarian Ford Focus.

A much more straightforward approach would be for the government to insist upon a general “minimise loss of life” rule. We can then disregard the bizarre questions put by philosophers, variations of “would you rather save two pedestrians or a successful businessperson?”

More controversially, the “minimise loss of life” code should apply even when the lives at stake include those of the passengers inside the driverless car. Although people tend to be utilitarian when it comes to the lives of others – preferring to save two strangers rather than one – they are less keen on utilitarian ethics when their own lives are in the balance.

Initially, people will be reluctant to use a vehicle that would sacrifice its passengers to save a greater number of other people. If humans were rational, this initial resistance would be fleeting, since the prospect of such an eventuality occurring would be almost infinitesimally small, and overall we will all be much safer in the driverless-car environment than we are now, with fallible humans behind the wheel. In practice, because we are not entirely logical creatures, a public campaign may be required to help shift attitudes.

Technology is progressing at a much faster pace than the public conversation about how it should be regulated. Never before have human beings been able to subcontract ethical decisions – including the most serious, involving life and death. We need to work out quickly what ethics should be encoded into autonomous devices, and how machine ethics should be regulated. Otherwise we will delay the arrival of technologies – such as the driverless car – and hold up all the extraordinary benefits they promise.

David Edmonds is a senior research fellow at the Oxford Uehiro Centre for Practical Ethics and the author of Would You Kill The Fat Man?