The Ethics of Driverless Cars (Trolley Problem on Wheels)
As autonomous vehicles become more capable, they force us to confront tough ethical questions. Perhaps the most infamous is the so-called “trolley problem” scenario applied to driverless cars – if an accident is unavoidable, how should the car’s AI decide whom or what to sacrifice? People have dubbed this the “trolley problem on wheels.” In this article, we’ll delve into the ethics of driverless cars, exploring how these machines might make moral decisions and who gets to program those decisions. We’ll discuss the trolley problem thought experiment, real-world responses (like Germany’s stance that all lives are equal), and the broader ethical and legal challenges in programming car algorithms to handle life-and-death situations.
The Trolley Problem and Autonomous Cars
The trolley problem is a classic ethical dilemma: imagine a runaway trolley heading towards five people on the tracks. You have the power to pull a lever to switch it to another track, but there’s one person on that alternate track. Do you do nothing and let five die, or pull the lever, intentionally killing one person to save five? There’s no “correct” answer; it pits utilitarian logic (minimize total harm by sacrificing one for many) against deontological ethics (it’s wrong to actively kill an innocent person, even to save others).
In the context of self-driving cars, consider a scenario: a child runs into the road, and the autonomous car cannot stop in time. It has two options: swerve into oncoming traffic (possibly harming its own passenger or another motorist), or continue straight and hit the child. This is akin to a trolley problem – the car might “choose” who gets injured based on how it’s programmed. People naturally wonder: how will the car decide? Save its occupants at all costs, or minimize total casualties even if that means sacrificing its passengers?
Researchers have actually surveyed public opinion on such scenarios. One famous study in 2016 (Bonnefon, Shariff, Rahwan) found a social dilemma: people generally approve of autonomous cars that are programmed to sacrifice their passenger if it means saving a crowd of pedestrians – in principle[110][111]. However, those same people also said they personally would prefer to ride in a car that protects them and their family at all costs[112]. In other words, “It’d be good if everyone else’s car was altruistic, but I want my car to prioritize me.” This poses a policy challenge: if we leave it to market forces, manufacturers might sell “self-protective” cars because that’s what consumers want, even if overall that’s worse for society.
Another global survey (the MIT Moral Machine experiment) gathered millions of decisions from people worldwide regarding AV moral dilemmas[113][114]. Some general preferences emerged:
- People leaned toward the car sparing more lives over fewer (sacrifice one to save many)[115][116].
- They preferred saving humans over animals[115][116].
- They tended to favor young lives over older ones[115][116] (though this varied by culture, with less emphasis on age in Eastern countries)[117][118].
However, do we really want cars making distinctions like young versus old? That feels deeply problematic: valuing one human life over another based on age (or any other category) veers into disturbing territory. Should an algorithm play God in that way? Most ethicists and regulators say no: a car should not discriminate based on personal characteristics like age or gender. Every human life should be treated as equal in the eyes of the AI.
This leads to real-world responses. Germany’s Ethics Commission on Automated Driving ruled explicitly in 2017 that any classification of people in unavoidable accident scenarios is forbidden[104][105]. The guidelines it proposed (which the government adopted) say that human life has priority over property and animals (hitting a trash bin is obviously better than hitting a person), but if a crash is unavoidable, the car must not make choices based on attributes like age, sex, or health[104][105]. All lives are considered equal. So in a trolley-style scenario pitting several people against one, a German-programmed AV could not simply count heads and then actively steer into the lone person. Instead, it would be expected to minimize overall harm in a more general sense – braking as hard as possible, and if a collision still occurs, it occurs, but not because the car steered into someone deemed less “valuable.”
One interpretation is that the car should aim for the action that does the least overall harm without considering personal features[104][105]. The Quartz article summarizing Germany’s position put it this way: the car will “choose to hit whichever person it determines it would hurt less, no matter age, race, or gender”[119][120]. That implies that if one side has a single pedestrian and the other has a group, hitting the one might count as “less harm” – but only if all else is equal. It forbids, say, choosing to hit an elderly person rather than a child on the grounds that the child has more years of life to lose. Germany took the stance that any such value judgment is unethical for a machine to make.
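To make that principle concrete, here is a minimal sketch in Python (invented names and weights, not Germany’s actual rule set or any manufacturer’s code) of how such a constraint can be built in structurally: the decision logic only ever sees physical quantities, so personal attributes cannot enter the comparison at all.

```python
# A minimal, hypothetical sketch of attribute-blind harm minimization.
# The data model deliberately carries no personal fields (age, sex, health),
# so the "least harm" comparison can only be about physics.
from dataclasses import dataclass

@dataclass(frozen=True)
class Maneuver:
    name: str                 # e.g. "full_brake", "swerve_left"
    impact_speed_kmh: float   # predicted speed at the moment of any collision
    hits_person: bool         # does the predicted path intersect a person?
    hits_property: bool       # does it intersect only objects (bins, barriers)?

def estimated_harm(m: Maneuver) -> float:
    """Physics-only harm estimate: people outweigh property, and higher
    impact speed means more harm. Illustrative weights, not calibrated."""
    person_weight = 1000.0 if m.hits_person else 0.0
    property_weight = 1.0 if m.hits_property else 0.0
    return (person_weight + property_weight) * max(m.impact_speed_kmh, 0.0)

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    return min(options, key=estimated_harm)

# Braking hard into a trash bin is preferred over any path that strikes a
# person, regardless of who that person is.
options = [
    Maneuver("full_brake_into_bin", impact_speed_kmh=15, hits_person=False, hits_property=True),
    Maneuver("swerve_toward_pedestrian", impact_speed_kmh=10, hits_person=True, hits_property=False),
]
print(choose_maneuver(options).name)  # -> full_brake_into_bin
```

The design point is structural rather than procedural: instead of trusting the algorithm not to weigh age, race, or gender, the model simply omits those fields, so they cannot influence the outcome.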
Programming Morality: Who Decides?
The trolley problem highlights a broader question: who should decide the ethical framework for autonomous cars? Is it engineers, car manufacturers, governments, or the owners?
If left to manufacturers, they might program whatever they think will protect them from liability or please customers. For example, a company might quietly prioritize occupant safety (since that’s their customer) even if it means greater risk to others. But that could lead to a tragedy or public backlash if, say, an AV swerved into a crowd to save its one passenger. On the other hand, if they program a utilitarian sacrifice of the passenger, would anyone buy that car knowingly? As mentioned, consumers might avoid cars that won’t protect them fully[112].
Some have suggested giving the user a choice – maybe a setting in the car’s ethics preferences (a morbid thought: “In an extreme scenario, do you want your car to favor your safety or pedestrian safety?”). However, allowing individuals to set their car’s ethics could be problematic. It’s unlikely regulators would allow a “selfish mode” that explicitly says “always protect me even if it kills others.” Also, imagine insurance or legal consequences – if an accident happens, did you choose the setting that resulted in someone’s death? That opens a can of worms.
Most likely, governments will need to set basic rules. We already see that in Germany’s guidelines. There may be international standards in the future. For instance, regulators could require that in an imminent crash, an AV must default to minimizing kinetic energy (i.e., brake as much as possible) and not swerve into known occupied spaces. Or they might say an AV should never deliberately choose to collide with one identifiable person over another – it should only make choices based on physics (where can the car reduce harm, e.g., hitting a guardrail vs a person).
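As a rough illustration of what such a regulatory default might look like in code (hypothetical names and values, not any actual standard), the rule ordering can be made explicit: first rule out deliberately steering into space known to be occupied, then brake as hard as possible among whatever remains.

```python
# Hypothetical sketch of a regulator-style default: never steer into space
# known to be occupied, and among the remaining trajectories prefer the one
# that sheds the most kinetic energy (i.e., brakes hardest).
from dataclasses import dataclass

@dataclass(frozen=True)
class Trajectory:
    label: str
    enters_occupied_space: bool  # would this path cross space known to be occupied?
    deceleration_mps2: float     # braking effort the trajectory applies

def default_emergency_choice(candidates: list[Trajectory]) -> Trajectory:
    # Rule 1: discard any trajectory that deliberately enters occupied space.
    safe = [t for t in candidates if not t.enters_occupied_space]
    pool = safe if safe else candidates  # if every path is occupied, fall through
    # Rule 2: among what's left, brake as hard as possible.
    return max(pool, key=lambda t: t.deceleration_mps2)

choice = default_emergency_choice([
    Trajectory("stay_in_lane_full_brake", enters_occupied_space=False, deceleration_mps2=9.0),
    Trajectory("swerve_into_oncoming_lane", enters_occupied_space=True, deceleration_mps2=4.0),
])
print(choice.label)  # -> stay_in_lane_full_brake
```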
There’s also the approach of randomness: Some ethicists say if a crash is truly unavoidable and there’s no non-harmful path, the car could act like a “trolley problem lottery” – essentially choose a path at random rather than having a bias. This avoids systematic discrimination (no group is always sacrificed). But telling victims “the car randomly decided to hit you” is cold comfort, though arguably fairer than “the car decided you were less important.”
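A toy sketch of that lottery idea follows, assuming some upstream, physics-based harm estimator has already scored each path without reference to who is standing where: when the remaining options are essentially tied, the tie-break is uniform randomness rather than any property of the people involved.

```python
# Toy illustration of the "trolley lottery": if every remaining path involves
# comparable, unavoidable harm, pick uniformly at random so that no person or
# group is systematically the one sacrificed. Purely illustrative.
import random

def lottery_choice(paths: list[str], harm_scores: dict[str, float], tolerance: float = 0.05) -> str:
    """Among paths whose estimated harm is within `tolerance` of the minimum,
    choose uniformly at random. The scores are assumed to come from a
    physics-based estimator that ignores personal characteristics."""
    best = min(harm_scores[p] for p in paths)
    tied = [p for p in paths if harm_scores[p] <= best * (1 + tolerance)]
    return random.choice(tied)

# Two nearly indistinguishable bad options -> coin flip, not a value judgment.
print(lottery_choice(["left", "right"], {"left": 1.00, "right": 1.02}))
```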
Transparency is another ethical aspect. Should the programming decisions be transparent to the public? If a company programs their car a certain way, do buyers or regulators get to know the logic? Many argue for transparency so that these decisions aren’t made in a black box. This could build trust – or it could spark controversy if people disagree with the chosen approach.
Accountability and Liability
Ethical behavior of AVs ties into legal liability. If an autonomous car injures someone in a situation that involved an algorithmic decision, who is responsible? With human drivers, if you swerved and hit someone, you could be held liable (unless it was truly unavoidable from your perspective). With an AV, the occupant wasn’t driving. It could be the manufacturer’s responsibility if the algorithm is deemed faulty or unethical. This may push companies to adopt very conservative behaviors to avoid any situation of choosing who to hit.
One fear companies have is being sued over a crash in which their car “decided” to kill someone. If a car is explicitly programmed such that in scenario X it will hit Person A rather than Person B, that looks premeditated when examined in court. Even if it was the “right” choice by the numbers, it is a legal nightmare. So many companies might prefer that their car always try to stop rather than “choose” a target – but physics might force a choice anyway. If a death occurs, the question becomes: did the car do what a reasonable human would have done (an odd comparison, since humans vary widely in those split-second judgments)?
Governments might end up granting some legal shields to encourage AV adoption (for instance, not treating an unavoidable-crash algorithm as admission of guilt in itself, provided it meets regulatory standards).
Interestingly, survey data show that if an AV did sacrifice its passenger to save others, people think that’s morally good, but they themselves don’t want to ride in one that might do that[112]. So there is a potential mismatch between societal good and individual preference.
Public trust in autonomous cars could hinge on them handling such dilemmas in a way people accept. If a single incident happens where an AV “decided” in a way that outrages people (say it swerved and killed a bystander to save its passenger), that could damage trust severely. Conversely, if an AV sacrifices its passenger (with or without consent) to save a busload of kids, that could also cause an outcry (why did the tech kill its owner? who gave it the right?).
Beyond the Trolley: Other Ethical Issues
While the trolley problem gets a lot of attention, there are other ethical facets:
Algorithmic Bias: The AI that drives cars is trained on data. If that data isn’t diverse, the car’s perception might be worse for certain groups. For example, studies have shown some image recognition systems struggled more to detect pedestrians with darker skin at night due to training data imbalance. That’s a serious ethical issue: AVs must be designed to protect all pedestrians equally. If there’s any bias in detection or response, that needs correction. As noted in some research, today’s systems can have trouble recognizing wheelchair users or people of certain heights if not properly trained – unacceptable from an ethical standpoint. (A minimal sketch of the kind of bias audit that can catch such gaps appears after this list.)
Privacy: As mentioned, AVs collect data – including video of public spaces. Ethically, there’s concern about constant surveillance. Who owns the video of you walking your dog that happened to be captured by a passing driverless car’s cameras? What if law enforcement or governments want access to AV sensor feeds to monitor citizens? This edges into ethics of privacy and consent.
Consent to Risk: When you drive, you implicitly accept risk and also agency in how you drive. When you get into an AV, you are trusting the car’s ethics and choices. Did you consent to how it might handle an extreme scenario? If the car is programmed to possibly sacrifice you, did you agree to that when buying the ticket, so to speak? Should there be an informed consent form – “by riding in this autonomous taxi, you agree that in rare emergency scenarios the vehicle may take actions to minimize overall harm that could result in your injury or death.” It sounds sci-fi, but it’s an ethical consideration.
Moral Hazard: Could widespread AVs make people more careless as pedestrians or cyclists, assuming the cars will always avoid them? For instance, if you know cars are programmed to never hit a jaywalker, maybe you jaywalk more, ironically creating more trolley-like dilemmas. This is a societal ethics issue – balancing AV behavior to be safe but not so exploitable that humans start behaving unsafely because they know the robot will yield.
Who gets priority on the road: Some have pondered if in the future, certain AVs (like public transport or emergency vehicles) might get algorithmic priority (or have V2V signals that make other cars yield). Is it ethical to program cars to always prioritize, say, avoiding school buses even if it means risking their own occupant? Or to always clear the way for an ambulance (that one seems ethical, but what about more controversial priorities like a VIP car?). There’s potential for ethical drift if not regulated: e.g., could a company program its fleet of delivery AVs to be aggressive in ways that disadvantage others (to meet delivery times)? Ensuring a fair, socially agreed hierarchy of road behavior is important.
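Returning to the algorithmic-bias item above, here is the audit sketch it refers to: a minimal, hypothetical example (made-up group labels and data) that computes pedestrian-detection recall per subgroup on a labeled evaluation set and flags large gaps. The grouping exists only to evaluate the perception system; it plays no role in driving decisions.

```python
# Minimal bias-audit sketch: per-group pedestrian-detection recall on a
# labeled evaluation set. Hypothetical labels and numbers.
from collections import defaultdict

def recall_by_group(samples):
    """samples: iterable of (group_label, was_detected) pairs,
    e.g. ("wheelchair_user", True)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, detected in samples:
        totals[group] += 1
        hits[group] += int(detected)
    return {g: hits[g] / totals[g] for g in totals}

eval_set = [
    ("adult", True), ("adult", True), ("adult", False),
    ("child", True), ("child", False),
    ("wheelchair_user", True), ("wheelchair_user", False),
]
print(recall_by_group(eval_set))
# A large recall gap between groups is a red flag that the training data or
# model needs work before deployment.
```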
The core ethical principle many agree on is that human life should be the priority, and that programming should strive to minimize harm without imparting biases about whose life is worth more. In practice, that often reduces to: do everything to avoid collisions (slow down, alert the person, etc.), and if a collision is inevitable, try to reduce the impact (maybe choose a path that slows the vehicle as much as possible or glances instead of full impact). If someone must be hit, it’s essentially determined by circumstances, not an explicit value calculus of individuals. This is somewhat how human drivers operate – they don’t typically weigh “3 vs 1” in an instant; they just attempt a maneuver that hopefully avoids all or at least reduces severity.
Germany’s rule of not discriminating by person characteristic[104] is a clear ethical line. Another approach, like in some US discussions, is to allow industry to voluntarily adopt ethics charters (for example, a consortium might agree on guidelines like “never optimize property over people, never target a specific individual to save another, etc.”).
In conclusion, the ethics of driverless cars are challenging but solvable with broad consensus and regulation. The trolley problem may be an eye-catching way to frame it, but in reality, the solution likely lies in programming vehicles to avoid having to make such direct choices as much as possible, and setting default principles (like don’t discriminate and try to minimize overall harm). This issue is as much about our values as it is about technology. Society will need to decide what we are comfortable with our autonomous agents doing in scenarios of moral conflict – essentially encoding our morals into machines.
As we progress, transparency about these decisions will be key. Manufacturers should be open about how their cars handle dilemmas, and regulators should provide oversight. Ethics boards and interdisciplinary experts (philosophers, engineers, legal scholars) are already being involved in many places to guide these decisions. It’s a fascinating convergence of philosophy and technology – one that, ultimately, forces us to reflect on our own values. After all, driverless cars will do exactly what we program them to. The responsibility is on us to program them wisely and ethically.