
Can driverless cars be programmed to make ethical decisions?

By Bea Bray

Nov 8, 2017

A driverless car is capable, in theory, of sensing its environment and navigating it without human input. Currently, no completely driverless or autonomous cars are permitted on public roads unless there is a human present who can take over control in an emergency. It is predicted, however, that within the next 10 years this may change.

Autonomous cars use radar, lidar (laser light), GPS, and other sensor data to identify paths, obstacles, and signage. There is potential to decrease the number of road traffic accidents, as the possibility of human error is significantly reduced. For example, if cars were completely autonomous, there would no longer be the risk of drunk drivers or parents getting distracted by their children in the back seat. Cars would be able to provide a level of behavioural consistency which humans simply cannot match.
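To give a flavour of what "combining sensor data" might mean in practice, here is a minimal, purely illustrative sketch in which each sensor reports whether it detects an obstacle ahead and how much it is trusted. The sensor names, confidence weights, and threshold are hypothetical and are not drawn from any real vehicle system.

```python
# Toy "sensor fusion" sketch: each sensor contributes a detection (0 or 1)
# and a weight reflecting how reliable it is. All values are hypothetical.

def obstacle_ahead(readings, threshold=0.6):
    """Combine per-sensor detections into one weighted confidence score."""
    total_weight = sum(weight for _, weight in readings.values())
    if total_weight == 0:
        return False
    score = sum(detected * weight for detected, weight in readings.values())
    return (score / total_weight) >= threshold

readings = {
    "radar": (1, 0.9),
    "lidar": (1, 0.95),
    "camera": (0, 0.7),
    "gps_map": (0, 0.5),  # map data suggests no obstacle is expected here
}

if obstacle_ahead(readings):
    print("Brake: obstacle detected with sufficient confidence")
else:
    print("Continue: no obstacle detected")
```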

Issues arise, however, when we consider how these driverless cars would respond to certain ethical situations.

The ‘trolley problem’ in philosophy describes a situation in which a tram is hurtling down the tracks without any brakes. In this scenario, there are five people tied to the tracks in the path of the tram and one person tied to the adjacent track. If the driver were to divert the tram, he would kill only one person. Most people say they would divert the tram, killing one person to save five, rather than keep it on its original path.

This is based on the philosophical concept of utilitarianism, developed by Jeremy Bentham and John Stuart Mill. A utilitarian viewpoint would always choose the option that caused the least harm, thereby maximising overall happiness.
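As a rough illustration of that decision rule, the sketch below simply picks whichever action has the lowest total expected harm. The actions, the harm scores, and the very idea that ethics can be reduced to a single number are all simplifying assumptions made for illustration, not a real control policy.

```python
# Illustrative utilitarian decision rule: choose the action whose total
# expected harm is lowest. The scenario and harm scores are hypothetical
# and deliberately simplistic.

def utilitarian_choice(actions):
    """Return the action with the smallest summed harm across everyone affected."""
    return min(actions, key=lambda name: sum(actions[name]))

# Trolley-style scenario: one harm estimate (0 = unharmed, 1 = fatality) per person.
actions = {
    "stay_on_course": [1, 1, 1, 1, 1],  # five people on the current track
    "divert": [1],                      # one person on the adjacent track
}

print(utilitarian_choice(actions))  # -> "divert", the lesser total harm
```

The hard part, as the rest of the article argues, is not writing such a rule but agreeing on the harm estimates and on whether minimising a total is the right rule at all.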

But can this always be applied? If we changed this example, and someone had to be physically pushed in front of the tram to save the five lives, most people would not deem this to be acceptable.

How can we decide a universally ‘right’ answer to these problems and then programme it into a machine? If these problems are situational, and the correct action needs to be decided in the moment, then this could be difficult. Ron Arkin, a roboticist who works on the ethics of automated weapons, claims we need to “step up and take responsibility for the technology we are creating.” If we create driverless cars, we remove the human decision-maker from the loop, and so we must accept that there will be occasions when we may not agree with the moral programming of a machine.

The ‘trolley problem’ is not a common occurrence on roads. Arguably, returning to a utilitarian view, it is worth accepting these moral dilemmas if the overall happiness and safety of road users is increased: the reduction in accidents could justify whatever answers we settle on. Yet we always seem to conclude that autonomous cars are simply not capable of emulating the ethics of human beings. When examining the flaws in driverless cars, consider the situation where an emergency vehicle is behind the car. How would it know to ignore traffic rules and make way for the emergency services? It is questionable whether technology will ever advance enough to ensure that moral and ethical integrity remains intact.

Image: Emmanuel Maceda

 
