When you criticize past actions, it may also make sense to look at ongoing contemporary ethical clashes and ask oneself . . . Self, why am I tolerating this? Yes, I get really indignant about past wrongs, but what am I doing about current ethical disasters?
Truth is not always hard to find;
It is often staring you in the face.
The problem with truth is that it is hard to believe.
It is even harder to get other people to believe.
—Walter Darby Bannard
P.S. There are many topics not covered in this chapter, perhaps even a few of your favorite ethical screwups. The objective is to get you to think about what we are still doing wrong, not to provide an encyclopedic catalog of all wrongs.
Most academics do not advance their careers by overseeing endless games of Monopoly. Then again, Paul Piff is not an ordinary researcher, and there is a certain delicious irony that his studies of rigged capitalist games took place in Berkeley, California . . .
Take a game of Monopoly, but give a random person a clear initial advantage: more money, twice the turns, higher rents . . . and then watch how the person behaves. After a few minutes, the initial advantage is forgotten, taken for granted. The person begins to act more forcefully, more like a boss. There is no empathy for the less fortunate. Rents are enforced, in full. When the inevitable happens and the randomly chosen person wins . . . it is all self-explained in terms of brilliant strategy, tactics, specific key decisions . . . Never mentioned? The overwhelming initial advantage.
Piff and his colleagues found that having money and resources not only creates a sense of entitlement and inevitability but also leads many toward less ethical behaviors.1 This does not mean every upper-class person is unethical. But it does imply that, on average, having more money makes you more likely to act like an entitled narcissistic jerk. (And, yes . . . I just know that you are struggling to come up with even one contemporary example . . .)
Now compare Monopoly to immigration. You and I may or may not agree on immigration, its benefits and costs, its scope. Fair enough. There is ample room for debate. You get a lot of benefits from choosing your parents correctly and being born with the right papers. Where I do hope you agree with me is that systematically separating children from their parents and putting them up for untraceable adoption may not be seen, by future generations, as terribly smart, or just, or ethical. Countries have a right to control their borders; they do not have the right to be unspeakably cruel.
In 2018 the United States Border Patrol (part of Customs and Border Protection, sister agency to the aptly named ICE) began systematically separating parents from their children. All because they lacked a piece of paper with the right words and seals on it. So how in the world did we get to this point? Well, throughout the developed Western world, a large number of folks feel that they are falling behind in status and income, and men usually prefer to express anger rather than fear. Fear breeds cruelty. Then a few politicians show up with a box full of matches and hateful rhetoric . . .2
Aliens
Predators
Illegals
Invaders
Rapists
Animals
Killers
And those are just words used by POTUS in speeches and tweets. His followers are WAY more extreme. During the Republican primaries 16 percent of Trump’s primary supporters believed that whites are a superior race. John Kasich and Jeb Bush supporters? 4 percent.3
By using these kinds of words over and over again we belittle and dehumanize a whole class of humans because they were born in the wrong place. And we create a narrative, as occurred in Piff’s Monopoly games, where we justify our entitlement and advantages as GOD-given and based on our own hard work and ethics. A typical narrative: “We are different from THEM not because we were lucky enough to have parents who were born in the right place, had resources, and cared, and because we had school and career opportunities, and the right legal papers; NOOO, we are different because WE work harder and are tougher . . .”
Some seek to feel better and prop up their own insecure status by denigrating those who have so much less than they do. (Again, no contemporary example comes to mind, does it?) Others actively stoke a divisive dynamic for economic or political gain. Across time the name and definition of “Them” varies: “spics,” “Chinks,” “Japs,” “niggers,” “Eurofags,” “Polacks,” “Micks,” “redskins,” and a host of other denigrating labels. Eventually various groups of immigrants overcome most obstacles and become a part of the establishment to such an extent that the slurs, and eventually even the hyphenated labels—Irish, German, English—just go away and everyone in these once “other” groups gets treated and regarded as a core, unhyphenated American.
Turns out poor immigrants are not lazier and dumber. They usually work harder than you and me and are more resourceful; a mother who escapes murderous gangs, makes her way across multiple borders, and keeps her kids in school while doing two or three jobs . . . That is not an unusual story.
Instead of preaching e pluribus unum, xenophobic politicians often fall into the paradox of Schrödinger’s immigrant: one who is simultaneously lazy and stealing your job. Policies towards the undocumented have morphed from “we need to control our borders” into a far more brutal “we have to be ever more cruel, because otherwise they will stay and others will come as well.” In a particularly nasty twist, President Donald J. Trump, egged on by racist advisers, decided to use the full power of the US government to target the children of the undocumented. Apparently the underlying logic is “if you are horrid enough to their kids, it will scare them so much that they will stop coming.” Voltaire understood this logic and its consequences:
“Those who can make you believe absurdities
can make you commit atrocities.”4
US prosecutors, in court, argued that caged kids, sleeping on concrete floors, could be indefinitely detained and had no need for soap, toothbrushes, showers, hugs, or adult help.5 Unlike the 9/11 terrorists, the undocumented, including kids, could be jailed indefinitely with no legal representation. It is thus that three-year-olds, literally three-year-olds, came to sit in US courtrooms, in front of stern, black-robed judges. These kids did not have their parents with them. Nor were they entitled to legal representation. When prosecutors attacked them, they could not understand, because the headphones provided for translation were too large for their heads and fell off. When judges asked questions they could not understand, the kids got scared and crawled under the table. Some of these kids were deported. Others were put up for adoption and all traces of how to find them deliberately erased.
Attempting to establish control by taking the kids is not a new tactic. Chilean dictator Pinochet and the Argentine Junta would throw mothers out of airplanes and give their kids to army officers. Pol Pot, Stalin, and Mao all took away millions of kids and put them in camps to be “reeducated.” These were crimes against humanity. But, in the twenty-first century, one would not expect an OECD country to follow a deliberate policy of taking kids, making them disappear, and giving them to someone else. Those committing these crimes today may not face immediate prosecution, but they best not bet on immunity decades from now.
Even sadder is that so many believe the absurdities spouted by their “leaders” against “the others.” In attempting to justify why the rich get richer and so many have lousy job prospects and declining incomes, it is easy and expeditious to blame the weakest and most vulnerable. It is their fault. But as the internet meme says: they take ninety-nine cookies and then whisper in your ear, “Careful with that cookie; the immigrant is about to take it.”
Because of the multitude of followers who believe these absurdities, atrocities have become standard operating procedure, and not just against immigrants. In this brave new world, the Wyoming Valley West school thought it would be a good idea to “incent” parents who had not paid for their kids’ breakfast and lunch bills. So they sent home a letter explaining that either the relatively small debts be paid or else “the result may be your child being removed from your home and placed in foster care.” When confronted about the absolute cruelty of this policy, instead of apologizing and backing down, they doubled down, arguing that it was an effective and appropriate tactic. Then, when a wealthy donor offered to pay off all debts, the school leaders said that they could not accept the donation. It took a while for the Twitterverse to reach such a fury that the school eventually, reluctantly, withdrew the threat, and took the donor’s offer.6
When the policy is deliberate cruelty—make them hurt as much as possible—even the kids of those we most honor, those willing to give their lives for the country and who have a clear, honorable path to citizenship, are unprotected. Service members are having signed contracts rescinded to block their path to citizenship. Spouses and children of those serving in the armed forces who were killed in action before getting their citizenship are being deported.7 And ICE cares so little it does not even know how many veterans it has deported.
Given that far more than 130,000 immigrants have served honorably since 9/11 . . .
Perhaps these policies may not be in the best interests of attracting talent and defending the nation?8
The fundamental input into a knowledge economy is brains, but not necessarily homegrown. When you build big walls around your country people notice. Eventually this hurts economic growth. Over half of the most successful, fastest growing US companies are founded, or cofounded, by someone who came over as a student or an immigrant. Venture-backed companies generate about 11 percent of US jobs, create some of the largest companies, and produce about one-fifth of the GDP; foreign-born people are a fundamental resource. But when you tell “Them” that they are not welcome, they go elsewhere; this can happen really fast. In 2015, almost 680,000 students got visas to study in the States. By 2018 that number had fallen by 42 percent.9
Even if one were to build a HUUUGE wall to keep more immigrants from coming in, is it really a good idea to vilify and attack the largest minority already living in United States? A good idea to break into a house in the middle of the night and deport parents and kids who were brought here at age 2, while their US-born brothers and sisters are left behind?
Legal? Yep.
Smart? Ethical?
Sometimes the US versus THEM dynamic is so useful for a given political party, or gets so ingrained within a society, that the divisions do not go away; they fester and eventually rot the institutional legitimacy of the whole. If one spends hundreds of millions of dollars on social media campaigns designed to convince large parts of the population that “they” are different, that we-the-virtuous should never, ever associate with the likes of “those people,” well . . . sometimes one gets one’s wish.
If the “them”—the belittled and isolated—is a large enough group,
they don’t just go away:
They split off, taking part of the US with them.
As politicians and elites foment distinctions, as they treat the less fortunate and “different” as a separate entity, without the same rights and privileges of “our citizens,” the excluded build their own parallel narratives and communities. In these cases historic slights and insults are revived, relived, magnified, and lionized; traditional Welsh and Scottish flags fly, ancient heroes are revived, old languages reenter grammar schools. Hmmmm . . . I wonder why three-quarters of the flags, borders, and anthems around the UN did not exist eight decades ago? The rhetoric of exclusion and alienation has consequences: northern Italians, Corsicans, Walloons, southern Finns, Basques, Catalans, Gauls, Kurds, Mapuches, Puerto Ricans, Ukrainians, Latvians . . . Alienating, ostracizing, stereotyping, and targeting a large minority in one’s country rarely ends well.
The “immigrants infest us and are murderers and rapists” narrative may win an election or two, but it fundamentally undermines the core narrative of the nation, of “. . . Give me your tired, your poor,/your huddled masses yearning to breathe free.” And it is especially provocative given that the United States’ Hispanic roots and cultures go way back. There are just a few hints here and there as you drive around San Diego, Los Angeles, San Francisco, New Mexico, Colorado, Tejas . . . These are tough, resilient, brave populations. They are also very proud. In the poorest neighborhoods of Latin America and the United States, you see little girls coming out of shacks, wearing white dresses to go to school, with their hair carefully combed and a ribbon. US leaders are going after their pride and their kids; Hispanics will remember this treatment for a long, long time. Few would benefit if the largest single minority in the United States someday decided that they could not get a fair shake and be a part of the American Dream.
Countries are more fragile than they seem. Ethical and unethical conduct can have long-term consequences for borders. How you treat folks today will have consequences tomorrow. What we are allowing and tolerating today could tear the nation apart in the future. It is not only profoundly unethical to allow kids and the most vulnerable to be treated as we are treating them; it is shortsighted and stupid. It makes us all smaller and weaker.
So why do we tolerate it?
Once upon a time, war and conquest were crazy profitable. Spain became a global power because it was able to conquer, enslave, and extract 180 tons of South American gold plus sixteen thousand tons of silver. The Dutch East India Company (VOC) was so lucrative that it triggered a wild speculative bubble, eventually reaching an inflation-adjusted valuation of $7.6 trillion. (For comparison, Apple is worth about 11 percent of the VOC’s peak.) Other massive trade-speculative bubbles led to a $6.5 trillion valuation of France’s Mississippi Company, and a $4.3 trillion valuation of Britain’s South Sea Company.10 Conquest paid.
With these kinds of empire-making profits at stake, it made economic sense to take the risks of waging war. Your empire grew, prospered, and protected itself. But as destructive technologies spread, democratized, and decentralized, it becomes more expensive to play the Empire game. You can invade, even conquer, but it is ever harder to exploit your conquests. Rather than gushing oil or mining riches, the US’s Afghan and Iraqi adventures cost $6 trillion and 7,000 soldiers killed in action.11 (Never mind more than 244,000 Iraqi, Afghan, and Pakistani civilians.)12
Some might argue the United States is just not tough or cruel enough. In which case, one might want to revisit some of the horrors the USSR was willing to indulge in during their little Afghan intervention . . . and how that turned out. No matter how much force you use, it is ever harder, over the long term, to govern a people who do not want to be your subjects, which is why one sees such an increase in the number of nation-states over the last century.
Technology has dampened our appetite for war in at least three ways. First, we can destroy far more, but so can they. Modern networked economies are ever more vulnerable to disruptions by a very few. As the enemy acquires and deploys asymmetric warfare capabilities the maxim “never get into a fight with those who have less to lose than you do” becomes ever more relevant.
Second, in a knowledge-based economy, there is increasingly less incentive to try to hang on to most conquered lands. Try a thought experiment: Yes, Iraq has a whole bunch of oil. Now imagine that a majority of the Iraqis came to the US administration and said “we would like you, pretty please, to take over our country, govern it, make us three new stars in your flag . . .” Likely, the response would be NO WAY. However if a few uber-bright Iraqis want to come get PhDs and help build start-ups . . .
Third, our overall tolerance for in-your-face violence has decreased. Imagine you could now watch, on a big screen, in Technicolor, with full sound, a documentary on routine Aztec sacrifices, or one chronicling the usual Saturday afternoon torturing and burning of the heretics in European town squares . . . Might you feel just a tad queasy?
To the extent that wars make less and less economic sense, one would expect the use of violence to take over and hold another’s territory to decrease across time (which is exactly the trend these days). If war for economic gain is gradually phased out, will future generations really make a big distinction between the savagery of butchering hundreds with knives and axes and killing hundreds of thousands in the name of freedom and justice? Is it OK to watch, and allow, hundreds of thousands of Iraqis, Syrians, Yemenis, and Afghans to die violently, as they have since 2003?
But even though the economic incentive for war has decreased, it is not at all clear that we are out of the woods yet. We should not assume that there will never be another massive, great power war.13 Wars are still just too prevalent and popular as an instrument to enforce a government’s will. In its 241 years of existence, the United States has been at peace less than twenty years. If you were born after 2000, the United States has been fighting 100 percent of your life. France, since 1337, has been at peace for only 174 full years.14 And we best not underestimate the narcissism, lunacy, and incompetence of some current world leaders. Ours is a truly dangerous period.
The ethics of war have yet to catch up with the
implications of the technological transition.
The nature of war changed fundamentally at exactly 5:29 a.m. on July 16, 1945, with the first nuclear explosion. A few decades later, single individuals have the power to destroy all human life on Earth (literally!). What are the consequences if Putin, Trump, Macron, Xi, Johnson, Khan, Modi, Netanyahu, Kim Jong-un, and the many more that follow turn out not to be inspired, careful, ethical leaders? (I know, I know . . . hard to believe.) But even if each one were a wonderful person, would you still want to continue to bet on them to sustain our civilization?
In a touch of poetic irony, the beginning of our nuclear nightmare started in New Mexico’s Jornada del Muerto desert.
(Journey of the Dead Man Desert).
Russia has 6,490 nukes; US 6,185; France 300; China 290; UK 200; Pakistan 160; India 140; Israel 90; N. Korea 30 . . .15
And, because you can put cobalt-60 on a bomb, just a few can make the entire planet unlivable.
Turns out that just one nuke can ruin your whole day.
As increasingly destructive weapons spread, and as they get into the hands of those who do not care if they live or die, our regime of Mutual Assured Destruction (MAD) becomes quite unstable. Meanwhile, India took over part of disputed Kashmir and an increasingly fundamentalist Pakistan got a bit upset, the Iran accord collapsed, Russia and the US pulled out of the INF treaty . . . No wonder a gaggle of Nobel Prize winners and assorted scientists set the Doomsday Clock at two minutes to midnight. That is the closest it has been to Armageddon since 1953. (In 1991 it was set at seventeen minutes to midnight.)
CNN’s dark humor headline? “The Doomsday Clock Says It’s Almost the End of the World as We Know It. (And That’s Not Fine).”
If we survive, future generations will ask: why in the world did you allow THAT?
A reasonable follow-up question from future generations might be: why weren’t y’all working day and night to stop a critically unstable, catastrophically dangerous system? And our answers will sound a whole lot like the “logic” slave owners used to defend crimes against humanity.
Yes, there may be fewer individual conflicts, but nukes, cyberweapons, and bioweapons seriously up the stakes of war. As will future technologies. Should we someday acquire the means to blow up whole planets with a single spaceship, does life survive long-term absent peace? Sci-fi writers still portend massive, ongoing conflicts in the future (Star Trek, Star Wars, Men in Black . . .). How realistic is this? Chances are, if we don’t find a way to control our impulse to go to war, we are eventually toast. Is there hope? Yes. Somehow some of the most violent folks of all time, the Vikings, morphed into peace-loving Scandinavians. Long-term trends show homicides are collapsing globally (with specific national exceptions).
If current trends do hold up, and future generations do become far more peaceful, might they see us, our current behaviors, no matter how justified we think they may be, as absolutely savage? When great-grandkids see archival pictures of Yemen, Congo, Rwanda, Guatemala, El Salvador, and Mexico, do you think they will say, “it was fine for you to ignore some of these massacres because some of these conflicts were interstate while others were intrastate . . .”? Or, someday, might what we allow today look as shocking to them as pictures of German and Japanese concentration camps do to us?
We know we have a big problem. We know much of the killing is unjust and unethical. We know we are putting the planet at risk. But because we have not found a consistently effective, faster, better, cheaper alternative to conflict resolution, we continue to do what we know is almost always wrong.
A lack of war-halting ethics may just kill us all.
Library stacks are crammed with examples of corporate overreach, greed, and malfeasance. Most of these tales are true. So it is way easy, and often justified, to profile the corporate suits as Bad Guys—and regulators and NGOs as Good Guys. But let’s have a small reality check here. You interact with folks who work for companies big and small. Do you think the vast majority of these people are Bad Guys who wake up every morning thinking of new and different evil things to do to you and your family?
OK, OK, I hear you, so let’s exclude cable companies, telemarketing, and airline executives from this debate.
Are there situations where one could be too cautious?
One sees constantly ratcheting regulations, each of which sounds perfectly reasonable, until one really unpacks the implications. For instance, the European Union’s “Precautionary Principle” seems a touch bureaucratic but otherwise as sweet as mom’s apple pie: “When human activities may lead to morally unacceptable harm that is scientifically plausible but uncertain, actions shall be taken to avoid or diminish that harm. Morally unacceptable harm refers to harm to humans or the environment that is threatening to human life or health, or serious and effectively irreversible, or inequitable to present or future generations, or imposed without adequate consideration of the human rights of those affected.”16
One way to summarize the precautionary principle in civilian-speak might be: when you are introducing a new technology or newfangled product, first show us that it does not harm anyone and then we will approve it. Better safe than sorry. OK, seems sensible. Except . . . When electricity was a new technology, could you prove that no one would ever get electrocuted? Has anyone ever been seriously harmed using electricity? How about steel or other metals? Staircases, beds, and bathtubs? If recently discovered and introduced, would the FDA ever approve table salt?
Close to 250,000 people go to US emergency rooms every year because of bathroom accidents.
A male’s greatest chance of being hurt? #1: stairs and floors.
Followed by beds, mattresses, and pillows.17
And under the precautionary principle it is not enough for a new technology to be safe; you must also prove it promotes and defends equality and human rights.
Gee, might someone, sometime, misuse cell phones?
We often think of risk management in terms of blocking something, not acting, doing more studies. But being too cautious does not just increase costs and slow things down. Sometimes it kills people. One way to understand this better is through traffic jams around Washington, DC. Once upon a time, a long, long time ago, traffic actually used to move around the Beltway.
Now, like Congress, it is just gridlock.
One particular section of roadway backed up every morning. Same time. Same place. Traffic specialists looked at entrance and exit design, construction, animal crossings, everything. And they could not figure out what was wrong. It was driving engineers nuts. Eventually the local paper published an article on this riddle, arguing it was the fault of the infamous Left-Lane Bandits: those who drive slowly and never let anyone pass. Turns out there was this one driver who would get on, same time, same place, every day, and cruise over to the far left lane, whereupon he would drive exactly one MPH under the speed limit. No matter how much other drivers cursed, honked, tailgated, flashed their lights, it made no difference. This blessed man continued to do exactly the same thing. Every single day.
A few days after the initial newspaper article, a letter arrived at the paper that, in essence, said: My name is John Nestor. I am the driver. I have a right to drive, just under the speed limit, anywhere I wish. I am following the exact letter of the law. “And why should I inconvenience myself for someone who wants to speed?” As one might imagine, this response drove the good and notoriously patient citizens of DC a touch batty. Soon there emerged a verb for following the exact letter of the law and in the process mucking everything up: NESTORING.18
All humorous, except that . . .
John Nestor, MD, worked at the Food and Drug Administration (FDA), where he was in charge of approving lifesaving drugs. But the letter of the law said that, before being approved, a drug has to be proven safe. And there was just one minor issue: no drug is completely safe. (Remember those endless lists of potential side effects recited in every drug ad?) So dear old Dr. Nestor, who could not prove that any drug was safe, never approved one. And he blocked everything: “his whistleblowing makes the Mormon Tabernacle organ sound like a kazoo.”
The FDA’s bias is safety, first and foremost, but Nestor eventually got to be too much, even for a hyper-cautious regulatory agency. He got fired. Enter Ralph Nader. He sued to get Nestor reinstated, arguing that Nestor “had an unassailable record of protecting the public from harmful drugs.” So the FDA was forced to rehire Nestor, who never approved a drug during his entire taxpayer-funded career.19
Ironically, one of the areas he oversaw was new renal drugs.
He died of kidney failure.
Much of our drug-approval system is biased toward inaction. Academics want to make doubly sure. Then institutional review boards (IRBs) want to make triply sure. And the regulators demand quadruple assurance. Not surprisingly, in a few decades the cost of bringing a new drug to market went from tens of millions, to hundreds, to over a billion dollars in some cases.
If printed, the documents required to back up a drug trial would be delivered by several tractor trailers.
When you just focus on safety and ignore cost and time to market, you may seem like a Good Guy, there to protect the public. But Good Guys, by not acting, can kill. If you think only in terms of Right and Wrong, Good and Bad, Principled Regulators versus Evil Industry you might miss a truth:
Sometimes the “Good Guys” end up hurting you as much as the “Bad Guys.”
When there is no mechanism and will to drive medicines to get faster, better, cheaper, then new drugs get ever more expensive. A whole lot of medicines don’t get developed and never come to market, because it is harder and harder to meet the cost hurdles. That is one of the reasons you can vaccinate your dog against Lyme disease, but not your kid. The tragedy is that you could protect yourself and family against Lyme; there were two vaccines that were 76 to 92 percent effective. The FDA approved LYMErix in 1998. There were no major side effects. But anti-vaxxers and lack of profits killed both products within three years. So since then, despite three hundred thousand getting sick EVERY YEAR . . . crickets. No commercial venture is going to redevelop an off-patent medicine that prevents Lyme with a few cheap doses. And the folks who run the National Institutes of Health, ostensibly there to protect you? Well, here is their latest statement: in September 2018, NIAID’s advisory Council approved an FY 2020 concept entitled Targeted Prevention for Tick-Borne Illnesses. They then helpfully defined just what they mean by “concepts”: “Concepts represent early planning stages for program announcements, requests for applications, or solicitations for Council’s input. Council approval does not guarantee that a concept will become an initiative.”20 Oh, goody, makes me feel safer already . . .
Big pharma is far from blameless; R&D, and safety, can and should be improved. Pharma often abuses its position and price gouges. But there is a cost-time-to-actually-develop-a-medicine trade-off. Our “ethical-regulatory” bias does not prioritize time: think about it, study it, wait and see. One way to visualize what is happening to pharmaceutical costs is to imagine watching horses jump hurdles. Low hurdle, almost all make it. Higher, fewer. Olympic level, almost none. Drug development is slower, worse, more expensive. In inflation-adjusted numbers, a billion dollars bought you 40 to 50 drugs in the 1950s. After enormous investments in R&D and technology, things kept going the wrong way. By 2000 it cost a billion dollars per drug and took fourteen years from discovery to market.21 Treatments get a lot more expensive. Ever fewer R&D projects make the last hurdle, typically for only rich-country diseases. Drugs for the killer diseases that are endemic to poorer areas, like malaria and dengue, don’t clear financial hurdles. The entire drug system is operating on Eroom’s Law, the equivalent of Moore’s Law in reverse.
Tens of millions die.
All of which brings us to Medicine’s Missing Measure: one can easily measure the cost of mistakes, of acting too fast, with too little oversight. But it is far harder to measure the cost of not acting, of postponing, of something not being developed at all. The traditional ethicist’s plea to “be cautious first and foremost” sometimes can, and does, kill many.
P.S. Medicine is far from the only place where being excessively precautionary kills. Since high school we have all been confronted by a series of gotcha ethical scenarios like the following: You are at a railroad switch. An out-of-control train is barreling toward three folks and will surely kill them . . . You could intervene, throw the switch and kill only one innocent bystander . . . what would you do? The modern iteration of this debate is autonomous vehicles.
Opponents of robotic cars focus on the most extreme scenarios. An autonomous car is going down a winding street, comes around a corner, and finds a few children playing in the middle of the road. There are only two choices: drive ahead and kill the children, or drive off the cliff and kill the driver. So what would you program the vehicle to do? This is a one-in-ten-million-likelihood scenario, but, if autonomous cars become widespread, such things will happen. And they will be tragic. However, should that be the core of the debate? Should we stop autonomous cars until we find the RIGHT answer? After all, technology has already made cars faster, better, cheaper. And a whole lot safer: “In 1913, 33.38 people died for every 10,000 vehicles on the road. In 2017, the death rate was 1.47 per 10,000 vehicles, a 96 percent improvement.”22 So why bother introducing new, unproven, possibly dangerous, autonomous driving? Well . . . Globally, on average, 3,287 still die in automobile accidents . . . every single day. Automobile accidents are the number one cause of death for those between 15 and 44.23
Houston, we still have a problem.
The opposition to and questioning of autonomous vehicles thrives because of two widespread infections: The Lake Wobegon effect—where “all the children are above average.” And the Dunning-Kruger effect—I’m incompetent and yet think I am really good. Any evidence of this manifestation? Well, a mere four out of five folks, of all ages, rate themselves as above average drivers.24 Yet, somehow, in the United States there are nineteen crashes per minute, every day . . .25
Want to see Dunning-Kruger live and in action one day?
Come drive in Boston . . .
So the core question for autonomous vehicles is NOT: how do we make them foolproof? Rather it is how long should we wait to deploy them, even if they are not yet perfect? Every year driver-operated vehicles kill tens of thousands.26 Do we wait until driverless cars are twice as safe as the average driver? Five times safer? Ten times? Yes, Teslas sometimes do stupid things, even kill people. That will continue for a long time. But the core question is, what is the tolerance-crossover point? Once autonomous cars are better than the average driver, every year you postpone this decision, to answer more questions on the margin, to resolve all of the dilemmas, you kill more people.
The ethical thing would be to force autonomous cars as soon as we are almost certain that they can save more lives than the system we have in place today, at a reasonable cost. But because many ethicists go off and focus our attention on the extreme outlying cases, instead of the core problem of fundamental system safety, we forget how dangerous it is for our teenagers and grandparents to walk out the door with keys to a ton and a half of metal.