Postscript: Game Changers

We base most of our current ethical structures and beliefs on a single underlying assumption: humans, especially Western humans, and their beliefs, are the Apex. Interesting to speculate what types of things could completely upend our current perceived ethical order. Could we encounter, or create, a wholly different set of ethics?

One way to think about such a radical upheaval is to speculate on the consequences of a new dominant global power, a truly powerful and independent artificial intelligence (AI), a global pandemic, or even encounters with an alien civilization with completely different ethical parameters.

Universal Ethics? China . . .

My bias in this book has been toward US-Western ethics. But China, under almost any scenario, will be a core player in determining the ethics of future technologies. Having been the dominant empire, the greatest civilization, for centuries, China began a long decline once it turned away from technology and closed itself off from the world. The warnings were there; King George III sent along a series of inventions in 1793, but China ignored them. A few decades later, gunships arrived and began subjugating the country, stealing even its most prized astronomical instruments.

Back when the ancient observatory was built, China could rightly regard itself as the lone survivor of the great Bronze Age civilizations, a class that included the Babylonians, the Mycenaeans, and even the ancient Egyptians. . . . Chinese schoolchildren are still taught to think of this general period as the “century of humiliation,” the nadir of China’s long fall from its Ming-dynasty peak.1

The long-term lesson? Dominate science and technology = Dominate the world. So every few months another story breaks about an experiment done in China that surprises and scares many. In the quest to be first, many Chinese institutions are a touch less encumbered by institutional review boards, animal-rights activists, and individual-rights protections, earning the country the nickname “the Wild East of biology.”

First human-rabbit embryos

First CRISPR-edited monkeys

First CRISPR on nonviable human embryos

First CRISPR on viable human embryos

There are still huge obstacles to the emergence of a new and dominant Chinese ethics. For millennia, Chinese students were instructed to copy the words of the masters, a lesson that lingers; over one-third of Chinese journals publish plagiarized pieces.2 But there is tremendous momentum in its graduate students, universities, and researchers. The West ignores Chinese science at its peril. The country is already the second-largest investor in science research and is steadily gaining in highly cited papers.3

95 percent of papers using transgenic monkeys come from China.4

For many Chinese, science is the new, and dominant, substitute religion. With the fervor of converts, many publicly declare: “I believe in science.” Those who do not believe are declared ignorant heretics and ostracized from learned societies, tech jobs, top schools. But religions and ideologies, unmoored from ethics, can lead to horrid outcomes.

Same with technology. The most techno-literate societies in the world found this out during WWI. What was supposed to be a weeks-long rout became a grinding disaster that chewed up an entire generation. And, during the interwar years, even the nation that begat the Industrial Revolution began to seriously question its absolute belief in the scientific future.

How you feel today about technology, its promise and perils, reflects a far deeper belief system. In general, if you cannot wait for new discoveries, if they enthrall you more than they scare you, you tend to be quite optimistic. But temper your hopes with history; technology, divorced from ethics, is a bad recipe.

How we share and apply technology varies by region. China, Europe, and the United States have different ideas about what is ethically desirable. In general, the first prioritizes greater good for greatest number, community, stability, control. The second the individual’s right to privacy. The third biases in favor of tech behemoths and entrepreneurs.

Given the Chinese tendency toward the well-being of the collective, not toward the rights and protections of the individual . . . Should China become the co-dominant nation economically and culturally, one could see a radically different set of ethical priorities and directives spread globally.

Sci-fi author Daniel Suarez wrote a dystopian book about a society where everyone’s social currency was visible to everyone else. One’s standing and ability to do things depended on that ever-present and visible currency. It was not meant as an instruction manual. But apparently Beijing did not get the memo; in some parts of the city, you no longer need keys to enter public rental complexes; a camera scans your face. Easy access, unless you are trying to sublet your subsidized apartment to someone else. And you best correctly recycle into every one of twenty-six bins . . . you are under the watchful eyes of automated cameras. Do it wrong? You get an automated fine. Do it right? Free bus tickets. Throughout China almost everyone, everywhere, is subject to “do what we say and get a reward” or name-and-shame.5 All Chinese now get a “sincerity score,” a summary of how you do your job, if you break laws big and tiny, if you quarrel. Add to that baseline what you say or browse online, what you buy, who your friends are. High score . . . you can travel, get promoted, buy a house, get loans. Low score? Maybe you need to get re-educated, somewhere far away.6

Contrast this governmental concept of privacy with Europe’s far more restrictive laws, where one even has the right, after a while, “to be forgotten.”7 Don’t like people reading about your past shenanigans? Ask for your record to be erased. (But only inside the EU.) The United States falls somewhere in the middle; sometimes you can restrict or opt out. But even if you personally choose to prioritize privacy over community, the amount of “scraped” data available can easily profile, with increasing accuracy, your sexual and political orientation, financial standing, habits, and desires.

The average book club, pondering the questions raised in these chapters, may have very different viewpoints regarding the right answers, or even the right questions, depending on whether the discussion takes place on one continent versus another. As the world allows and breeds increasingly separate media-education-internets, one may also find a divergence on what is ethically acceptable and taught, especially between the West and the East. Is this the start of great ethical divides?

Artificial Intelligence

Vernor Vinge subtly opened a 1993 NASA conference with a simple idea . . . “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.”8

OK then. Nothing to see here, folks. Move along . . .

What Vinge was mapping was what we now call the singularity, a single point at which machine intelligence suddenly way outpaces human intelligence. Vinge saw four potential pathways: (1) biology could greatly upgrade the human brain (directed, fast evolution). (2) We could become symbiotic with machines (blend implants and interfaces with the brain). (3) Large computer networks could suddenly merge and become conscious, superintelligent entities, perhaps with a logic that we might not recognize or understand (Internet Alive). (4) We could create a single human intelligence in a machine and then the machine could be rapidly scaled (mirrored, scaled intelligence). But, no matter what path AI took, Vinge thought the resulting intelligence would be so incomprehensible to us that it would be like trying to explain Plato’s Republic to a mouse.9

Parts of these predictions, or even all of them, could eventually be right. But we may have to wait a while. Futurist Roy Amara, in what is now known as Amara’s Law, argued: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” Perhaps there is no better example of this than AI. Even computer visionaries like Marvin Minsky, who ran the AI lab at MIT, were too seduced by its short-term prospects; in 1970 he gave an interview to Life magazine that argued: “In from three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point the machine will begin to educate itself with fantastic speed. In a few months it will be at genius level and a few months after that its powers will be incalculable.”10 Many graduate students bet their careers on Minsky’s vision; 1973 and 1978 came and went. By the time I began interacting with this lab, in the late 1970s, almost all of his students were disillusioned and had jumped ship. AI then went through a couple of dark decades.

Machines still do quite stupid things; in 2016 Microsoft let loose an AI Twitter bot, named Tay. To get the bot conversant in Twitterese it was instructed to scrape conversations, looking for patterns of response, and then mirror human behaviors . . . What could possibly go wrong? It took less than a day for the bot to go from “humans are cool!” to “Hitler was right.”

While we now laugh at and ridicule Tay, we should also observe other learning patterns. Imagine walking into your favorite clothing store, and one particularly smart, careful, and observant manager follows you around, noting every place you pause, what you touch, what you take off a shelf, what you try on, and what you buy. Then, the next time you come in, the store is completely redesigned and rearranged to fit your desires and likes. Now imagine the same manager does this for every single customer. That is what now happens for each of Amazon’s millions of customers every day.11

Note that until very recently, intelligence was not THE KEY evolutionary advantage. One thing that kept intelligence from dominating the planet was the cost of providing enough calories; human brains are about 2 percent of our body weight, but they consume about a fifth of the body’s total energy (in most people . . . others, clearly not so much ;-).

It took a long time, and many failures, for this evolutionary bet on intelligence to pay off, and it almost did not. At least 32 of our predecessor species did not make it against creatures that invested in brawn instead of big brains. But once we learned how to hunt better and how to concentrate calories by cooking, we could be far more effective. So we became the dominant species. So far . . .

As the energy required to operate “intelligent” machines drops radically, one would expect to see other forms of intelligence evolve rapidly. In this context consider: “over the last six decades the energy demand for a fixed computational load halved every 18 months.”12 As the cost to run machines keeps dropping, our willingness to build them, feed them, power them, improve them, and scale them increases exponentially.

Had it been available, a terabyte of storage would have cost you $349 million in 1990.

Now you can buy a terabyte on a USB stick for $9.
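A back-of-the-envelope way to feel that compounding: a minimal sketch using only the two price points above, and assuming, purely for illustration, a smooth exponential decline over roughly thirty years (real prices only approximate this).

```python
# Rough rate of decline implied by the two terabyte price points quoted above.
# Assumes a smooth exponential drop over ~30 years -- an illustrative simplification.
import math

price_1990 = 349_000_000   # dollars per terabyte, 1990 (as quoted)
price_now = 9              # dollars per terabyte on a USB stick (as quoted)
years = 30                 # roughly 1990 to 2020

total_drop = price_1990 / price_now                 # ~39 million times cheaper
annual_factor = total_drop ** (1 / years)           # implied factor cheaper per year
halving_months = 12 * math.log(2) / math.log(annual_factor)

print(f"total drop: ~{total_drop:,.0f}x")
print(f"implied decline: ~{annual_factor:.1f}x cheaper per year")
print(f"price halves roughly every {halving_months:.0f} months")
```

On those assumed endpoints, the price of a terabyte halved roughly every fourteen months, broadly the same compounding shape as the energy-per-computation figure quoted above.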

Likely our relationship to AI will be similar to the way Hemingway described bankruptcy: “‘How did you go bankrupt?’ Two ways. Gradually, then suddenly.”13 A 2013 laptop had the same processing power as the most powerful computer on earth circa the mid-1990s.14 By 2015, $1,000 computers were beating mouse brains and were at about 1/1,000 of a human brain . . . “This doesn’t sound like much until you remember that we were at about a trillionth of human level in 1985, a billionth in 1995, and a millionth in 2005. Being at a thousandth in 2015 puts us right on pace to get to an affordable computer by 2025 that rivals the power of the brain.”15 Assume this is overoptimistic—that it takes a decade, or two, or three longer. No matter. We ignore compounding trends in processing speeds at our peril (a small extrapolation of these numbers follows the quote below). As Tim Urban explains it:

It takes decades for the first AI system to reach low-level general intelligence, but it finally happens. A computer is able to understand the world around it as well as a human four-year-old. Suddenly, within an hour of hitting that milestone, the system pumps out the grand theory of physics that unifies general relativity and quantum mechanics, something no human has been able to definitively do. 90 minutes after that, the AI has become an ASI, 170,000 times more intelligent than a human. . . . Superintelligence of that magnitude is not something we can remotely grasp, any more than a bumblebee can wrap its head around Keynesian Economics. In our world, smart means a 130 IQ and stupid means an 85 IQ—we don’t have a word for an IQ of 12,952.16
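The decade-by-decade numbers above (a trillionth, a billionth, a millionth, a thousandth) describe a clean thousand-fold gain every ten years. Here is a minimal sketch of that extrapolation, assuming (and it is only an assumption) that the factor simply keeps holding:

```python
# Extrapolating the ~1,000x-per-decade trend cited above: the fraction of a
# human brain's power that $1,000 of computer buys. Treating the factor as
# constant going forward is an assumption, not a prediction.
cited = {1985: 1e-12, 1995: 1e-9, 2005: 1e-6, 2015: 1e-3}  # figures from the text

year, fraction = 2015, cited[2015]
while fraction < 1:
    year += 10
    fraction *= 1_000      # one thousand-fold gain per decade, if the trend holds
    print(f"{year}: ~{fraction:g} of a human brain per $1,000")
# Prints "2025: ~1", i.e. rough parity, exactly the pace the quote describes,
# and exactly as fragile as the assumption that the curve keeps bending upward.
```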

Higher intelligence has consequences. MIT’s Edward Fredkin points out that “as these machines evolve and as some intelligent machines design others, and they get to be smarter and smarter, it gets to be fairly difficult to imagine how you can have a machine that’s millions of times smarter than the smartest person and yet is really our slave doing what we want.”17

The parameters and instructions we put in place now can be like the proverbial butterfly wings that eventually breed a hurricane. Today’s programming styles may compound, having long-lasting effects—a single programmer’s opinions and biases can spread globally, through viral products. For instance, Joe Friend, program manager for Microsoft Word, may have had more of an impact on how people read than any other single person; he decided the default fonts for Office 2007, throwing out Times New Roman for sans serif Calibri. Because most people simply stick with the default type, much communication was standardized and much character buried; “Much like handwriting, typefaces carry personality traits with them . . . each typeface has a distinct persona. A typeface can be confident, elegant, casual, bold, romantic, friendly, nostalgic, modern, delicate or sassy.”18 Doubt this thesis?

Ethics

Ethics

Ethics

Ethics

How and what we prioritize used to be a collective, messy process carried out by townsfolk, clergy, legislatures, kings. Tech companies try to sell you on the notion that math + machines = objective and ethical solutions, no need for civilians to oversee. But human decisions and biases underlie the design and output. If we acknowledge that the code underlying all tech comes with built-in human errors and biases, we can change and adapt algorithms as we learn. If we think they are purely “neutral and objective,” we will unquestioningly continue some very destructive compounding policies.

But as algorithms become more valuable, companies deliberately hide how decisions are made by machines, claiming “trade secrets.” Governments in turn claim “security,” and individuals demand “privacy.” So the bias is to privatize, to obfuscate. Billions of decisions are made and executed per second, with little human oversight or intervention. How data is classified, sorted, and weighted to produce a specific AI recommendation, the relationship between input and output, becomes a black box.19

Ulrich Beck argues that as we increase our dependence on algorithms, we may pile on risk; we do so without understanding just what we signed up for.20 Not understanding the logic and underpinnings of instantaneous, automated decisions can create destructive distortions, and we, clueless as to what just happened, react with fear, anger, and confusion when things go horribly wrong. Many machines and programs already interact in unpredictable ways: “at 2.32 pm on May 6, 2010, the S&P500 inexplicably dropped by over 8 percent and, just 36 minutes later, shot back up just as much . . .” Another crash, on August 24, 2015, triggered close to 1,300 circuit breakers to stop free falls.21

In more and more cases, we do not know how machines are deciding and what they will choose to do. Furthermore, those designing widespread AI may not be carefully calculating the long-term impact and biases of their creations. “AI development will go the same way as all industrial development. . . . There will be a race to the bottom for the cheapest, fastest, most cost effective solution, encapsulated in the phrase ‘damn the consequences, just ship it.’”22 And, even if all coders had the same moral underpinnings, which they most certainly do not, their varying levels of skill and creativity in programming would still lead to quite different outcomes.

So rather than unifying around a single moral code, automated algorithms are likely to reflect local biases and blind spots, creating a patchwork of hidden ethical outcomes; “the values of the author, wittingly or not, are frozen into the code, effectively institutionalizing those values.”23 When you tag, rank, and assess risk, you insert your own criteria as to what is important and what should be the consequence of your findings. Something as trivial and innocent as sorting by zip code could inadvertently overlap and tie into “profiles related to ethnicity, gender, sexual preference, and so on.”24
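A toy sketch of that zip-code mechanism: the data here is synthetic and the zip-to-group correlation is planted deliberately, purely to show how a model that never sees a protected attribute can still reproduce it.

```python
# Toy illustration: a "risk score" never sees the protected attribute, but a
# correlated "neutral" field (zip code) lets it reconstruct the bias anyway.
# All data is synthetic; the zip/group correlation is planted on purpose.
import random

random.seed(0)
ZIPS = ["10001", "10002", "10003", "10004"]

def person():
    zipcode = random.choice(ZIPS)
    # Planted correlation: group membership depends heavily on zip code.
    group = "A" if zipcode in ("10001", "10002") else "B"
    if random.random() < 0.1:          # a small amount of mixing
        group = "B" if group == "A" else "A"
    return zipcode, group

population = [person() for _ in range(10_000)]

# A score that only reads the zip code -- seemingly neutral.
risk_by_zip = {"10001": 0.9, "10002": 0.8, "10003": 0.2, "10004": 0.1}

for g in ("A", "B"):
    scores = [risk_by_zip[z] for z, grp in population if grp == g]
    print(f"group {g}: average risk score {sum(scores) / len(scores):.2f}")
# Group A ends up scored roughly 0.78 vs 0.22 for group B, even though group
# membership never appears in the scoring rule; the zip code did the profiling.
```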

Having machines that are constantly learning and evolving could end up creating a machine-based “ethical” logic, one divergent from that of its original human creators . . . and it is not clear if the ethics of machine AI, as programmed, as it evolves, as it becomes independent, includes humans.

Pandemics Have a Way of Focusing Your Ethics, Don’t They?

Speaking of game changers: in early 2020, all those abstract constructs that ethicists and preachers like to ponder became REAL: societies used to abundance and freedom suddenly faced second-by-second dilemmas on whom to keep alive. As all 545 beds at Elmhurst Hospital in Queens, New York, filled up with COVID-19 patients, only a few dozen respirators were available. The trolley thought experiment—the trolley is coming, and you can decide to throw the switch this way or that, killing X or Y—turned into “we have one ventilator and ten people need it.” In Europe, some hospitals began taking patients over age 65 with multiple co-morbidities off ventilators. A brutal reminder that ethics can evolve very rapidly as systems and societies face disasters.

If you ever questioned the basic premise of this book—that science may impact, and alter, the ethical choices we face—consider what the world would look like had we continued to invest in key technologies. Cutting early surveillance budgets and international cooperation-communication allowed a local pandemic to become a global catastrophe. Restricting the free flow of information and sidelining science and medical advice delayed COVID-19 responses in most nations and led to thousands of unnecessary deaths. Continuing to invest in expensive orphan-disease treatments, but not in preventive vaccines and antibiotics, led to a world where dogs get vaccinated against some coronaviruses years before humans do. We could have prevented much of the medical and financial destruction with a little more information and a touch more investment in prevention.

Antivaxx movements suddenly learned what it means to live in a vaccine-free world.

Having to spend months in quarantine opened time to reflect on one’s worth, family, friends, job. It also gave us a window into the suffering and heroism of others. Disasters leave a lot of rubble and deaths; they also provide some clarity on what ultimately matters. In hindsight, pandemics allow or force redesign and reconstruction. As Charles Eisenstein noted: “Covid-19 is like a rehab intervention that breaks the addictive hold of normality. To interrupt a habit is to make it visible; it is to turn it from a compulsion to a choice. When the crisis subsides, we might have occasion to ask whether we want to return to normal, or whether there might be something we’ve seen during this break in the routines that we want to bring into the future.”25 So, in terms of evolving ethics, here are a few, suddenly relevant, questions that will shape what we consider acceptable in the next decades:

The United States

After the crisis, do we evolve into a kinder, gentler, more compassionate America? One more willing to help neighbors? Do mass unemployment and economic distress lead to helping hands? Or do we end up in a more angry-divided-fearful “I got mine” America, one with ever-growing walls between states, counties, neighbors? The same questions are applicable to the European Union.

Walls

In facing the 2008 financial meltdown, there was enormous coordination among the United States, the EU, Japan, China, and many other countries. The same was true between cabinet members within countries. A striking contrast between the COVID-19 crisis and most previous global crises has so far been the lack of cooperation and coordination between countries and, often, between states and officials within the same government. Many leaders thought they could base their legitimacy on “it is the other’s fault.” “If only we can keep X out of our borders.” Fear of “the Other” may become the natural default for populations forced to make drastic economic adjustments. But that choice of letting the neighbor suffer on the other side of a wall may in turn accelerate health care crises, crime, terrorism, and wars. Pandemics do not respect walls; we let others get sick, hungry, and desperate at our peril.

World Order

If walls do continue to go up, one could easily see regional internet, financial, and trade systems. Suddenly the idea of different civilizations having very different ethical standards and priorities becomes more of a possibility, especially as China takes more of an international leadership role.

Debt

More than a few are now likely to agree that having a healthy society and access to lifesaving care is fundamental. But we are entering a decades-long period of excess debt. US deficits, pre-crisis, already exceeded a trillion dollars a year. We just added a few trillion more. The same is true in Europe. Debt loads are going to force extreme cutbacks at the municipal, state, and national levels. Taxes will likely rise. What and whom you favor as you cut budgets will shape nations going forward. Neither the United States nor the EU is likely to be able to keep what it already has in terms of social benefits. Same with pensions and civil service rolls . . . Unless military expenditures, tax breaks, and bias toward the wealthiest are radically revised.

Essential Workers

When the tide goes out, you quickly see who is not wearing a bathing suit. Those we often ignore: line cooks, delivery folks, drivers, hospital workers, garbage collectors . . . Turns out that they really are essential. While many of the white-collar folks were frantically managing from home, a whole lot of the service-industry workers, especially those who work with customers, were exposed to the virus daily. Relative paychecks and respect may rebalance. One might expect, as occurred with black soldiers coming back from World War II, that going back to “the way things were” may not be acceptable.

Dominant Companies

There has been much justified concern over how powerful certain companies have become. This crisis accentuated that. As most companies were melting down, Amazon announced that it was hiring 100,000 more people. Walmart added 150,000. Google, Netflix, the New York Times, and internet providers became the go-to resources. And, for most of us, they responded admirably. What do you do post-crisis? Break them up? Try to slow them down?

Concentrated Power

Measuring where everyone is, where they go, whom they associate with, in the name of health safety . . . A lot of the instruments deployed to trace health epidemics can eventually become devastating to liberty and privacy. Just as 9/11 unleashed extraordinary powers, COVID panic unleashed unthinkable legal devolution. For instance, the Hungarian Parliament voted, 72 percent to 28 percent, in favor of giving its Prime Minister state-of-emergency powers—with no time limit. He could thereafter rule by decree, suspend Parliament, postpone elections, jail anyone “out of quarantine” for eight years, and incarcerate anyone spreading “fake news” for five years.26

A More Religious World?

Many of the fearful, as they see what they consider “acts of God” and face an early demise, historically turn fundamentalist. This harks back to the old adage “there are no atheists in a foxhole.” Compound a generationally unprecedented pandemic with additional biblical plagues and you have the ingredients for a surge of radical religiosity in various regions. This may be particularly acute in nations suffering double and triple whammies; as the world was overrun by coronavirus, East Africa was also suffering a biblical plague of locusts. One reporter graphically described what this means:

In a single day, a swarm can travel nearly 100 miles and eat its own weight in leaves, seeds, fruits and vegetables—as much as 35,000 people would consume. A typical swarm can stretch over 30 square miles. . . . By January, the locust swarms had damaged 100% of Somalia’s staple crops of maize and sorghum, according to the Food and Agriculture Organization of the United Nations. In neighboring Kenya, up to 30% of pastureland has been lost. Farther west, locusts have gorged on crops in South Sudan, already reeling from years of civil war and widespread hunger. And they have laid new eggs in Ethiopia, Eritrea, Djibouti and Uganda.27

Faced with disease, destitution, hunger, and violence, many could turn toward fundamentalism. More may “interpret” events such as extreme earthquakes, tornadoes, pandemics, and other disasters through the lens of the Earth’s or of God’s displeasure. Compound this with increasing numbers of desperate climate refugees, and one could easily see the rise of apocalyptic preachers, just as occurred after the medieval plagues.

Poor Countries

In nations lacking the network of social services, where there is little welfare or health care support, the choice to shelter in place may run up against different ethical parameters. The elite would obviously want everyone to stay away, but if you live in a cardboard shack, with a dozen people, right next to your neighbor, and if your food is earned day by day, then hunger, other diseases, and violence may soon begin to stalk as aggressively as the disease itself. How and when do you measure the trade-offs, knowing that the mortality rate of COVID-19 is 1 to 4 percent?

A pandemic suddenly crystallized and made real many of the issues and conflicts raised in this book. Within months, we are likely to see a series of long-term technological solutions. But how we have acted in the meantime is likely to be remembered for a long time. People and societies will have reacted in different ways. As Susan Sontag observed, “10 percent of any population is cruel, no matter what, and 10 percent is merciful, no matter what, and the remaining 80 percent can be moved in either direction.”28

COVID-19 quickly cleaved societies and showed us, individual by individual, who is willing to walk into a hospital, day after day, with minimal protective equipment and who took advantage of the poor, weak, and dispossessed. Faced with a collapsing economy, some extreme pro-life voices suddenly turned survivalist: let grandma go, as well as the old, weak, mentally challenged, or sick. “Me first!” became the battle cry of hedge-fund managers who destroyed 401(k)s, retirements, and livelihoods by fear-mongering. Some financiers bragged that their bets—that the markets would collapse—had netted them, literally, billions of dollars. They used the media to instill panic in less sophisticated investors, and then, instead of showing any shame or remorse, they openly bragged about how smart they were, while destroying people’s investments.29

Most just sat at home, grumbled, ate, and watched movies. It will be this majority who, post-crisis, will lean one way or another, empowering either the “Me über alles” leaders or a coalition of compassionate conservatives and liberals. Ultimately, why does the outcome matter? Because we cannot face humanity’s existential threats if we act as we did, even as we saw the pandemic avalanche heading toward us. The pandemic is a small disruption compared to what we could suffer if we do not face issues like WMDs and climate change. Those are the ultimate ethical challenges to our survival.

Climate Change

Among the funny, yet sad or terrifying, memes that proliferated during the pandemic, one is particularly poignant. A couple of medieval knights guarding the city gates peer toward the far-off forest border and ask “what is that, over there?” Slowly the faraway dot emerges into focus as a running enemy knight. Still very far away. Beneath the runner one sees “January.” The enemy keeps running, but it is taking him forever to get across the open fields. Label below . . . “February.” Suddenly, when the attacker is quite close, the label is “March,” and in an instant he has killed one of the guards and entered the city. The other startled guard looks on, shocked.

With COVID-19 there was plenty of warning, plenty of time to sound the alarm, close the gates, seek reinforcements. But the leaders in country after country were incredulous for so long that they acted way too late. Once a pandemic takes root, mass casualties are seeded. In the United States, had governments acted seven days earlier, each infection would have meant 600 cases instead of 2,400.30
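The arithmetic behind that claim is plain exponential growth; here is a minimal sketch, where the roughly 3.5-day doubling time is inferred from the 600-versus-2,400 figure itself rather than from any epidemiological model.

```python
# Why seven days matters under exponential growth. The 600-vs-2,400 figure quoted
# above implies two doublings per week, i.e. a ~3.5-day doubling time; everything
# below is inferred from those two numbers, not from an epidemiological model.
import math

DOUBLING_TIME_DAYS = 3.5        # 2,400 / 600 = 4 = two doublings in seven days
TARGET_LATE_CASES = 2_400       # downstream cases per infection when acting "late"

def downstream_cases(days: float) -> float:
    """Cases seeded by one infection after `days` of unchecked doubling."""
    return 2 ** (days / DOUBLING_TIME_DAYS)

horizon = DOUBLING_TIME_DAYS * math.log2(TARGET_LATE_CASES)   # ~39 days of spread

print(f"acting on day 0      : ~{downstream_cases(horizon):,.0f} cases per infection")
print(f"acting 7 days earlier: ~{downstream_cases(horizon - 7):,.0f} cases per infection")
# Seven days earlier means two fewer doublings: a factor of four, whatever the horizon.
```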

But even hundreds of thousands of COVID-19 casualties pale in comparison to the impact of something like climate change. In this case, the knight is barely halfway across Siberia, so people, governments, think they have a lot of time, or claim that the knight is fake news. But every day the knight walks a little faster. Then he runs. Then he flies. It is a compounding system. Once climate change begins to irreversibly tip in Antarctica, Greenland, the great ocean currents, then all bets are off. This is not an issue that can be stopped with a few months of quarantine. Every year that passes without action compounds the problem. We may get to the point where it could take centuries to address our lack of action. Then millennia. By then, no matter what you do, it may be too late for much of humankind.

The pandemic gave us a warning: reset and re-prioritize. To address what is, alongside weapons of mass destruction, perhaps the single greatest ethical battle humans have ever fought. Have we learned? Will we act?

SETI: First Contact

As Stephen Hawking so gracefully put it: “The human race is just a chemical scum on a moderate-sized planet, orbiting around an average star, in the outer suburb of one among a hundred billion galaxies. We are so insignificant that I can’t believe the whole universe exists for our benefit.”31

Ever more powerful telescopes make human significance ever smaller. Galileo’s telescope took us out of our nice, comfortable cradle at the center of the universe, where a stern, but fair, white-bearded GOD looked upon, judged, and minded every facet and action of HIS creation. Observing other planets upended religions; it took a while for theologians to make sense of a far larger universe, one in which Earth was but a bit player.

Galileo was eventually pardoned by the Vatican . . . a mere 350 years later.

Each telescope launched makes us smaller. Current telescopes, those that have the power to image a single car headlight on the moon, take us across time and across much of the universe. Only once we had this kind of power did we begin to see minute shadows transit across stars light years away, confirming that other planets were common.

First confirmed exoplanets? 1988.

As of 2020 . . . over 4,135 confirmed, 5,047 candidates.32

The telescopes going up in the 2020s are powerful enough to observe a single lit candle on the moon. In exoplanetary terms, that implies that we will confirm tens or hundreds of thousands of new planets, and, in some instances, these devices might allow us to image the atmospheres of planets as they pass on either side of their stars. As we learn more about other planets, there are two possibilities . . . either we never find any other life (outcome A), or we discover that we are not alone (outcome B).

For me, outcome A has a different consequence. It is ABSOLUTELY TERRIFYING. It means we, and we alone, are responsible for ALL LIFE IN THE UNIVERSE. If we drive a cute creature, ourselves, or the planet to extinction, that is it. There won’t be any cute Na’vi, ETs, little green men, Vulcans, Aliens, Jedi, and assorted friends and foes to take our place. One supernova, solar flare, black hole, or deranged leader and that is all life wrote.

An obvious corollary is that we MUST DIVERSIFY LIFE NOW. Knowing how common extinction is on Earth, and across the universe, we have no greater ethical responsibility than to immediately try to spread life as far and as fast as we can. (Unless, of course, you worship rocks and other inanimate stuff, in which case it is fine if all life in the universe goes kaput.)

The chances that we would have found the only other photosynthetic life in the universe are infinitesimal.

This is a calendar-changing event. There is a before and after we discovered life elsewhere; a date remembered by all cultures, across time. It resets humanity and its place in the universe far more radically than the discovery that we were not the only planet, the only solar system, the only galaxy . . .

For some the notion of encountering an alien civilization is terrifying. They assume the ethics of this civilization would mirror our destructive nature:

No civilization should ever announce its presence to the cosmos . . . Any other civilization that learns of its existence will perceive it as a threat to expand—as all civilizations do, eliminating their competitors until they encounter one with superior technology and are themselves eliminated. This grim cosmic outlook is called “dark-forest theory.” . . . It assumes every civilization in the universe is a hunter hiding in a moonless woodland, listening for the first rustlings of a rival.34

That of course assumes “they” would care about who we are and what we do. We think we are relevant. We all search for meaning; we all want to believe in a greater something—a greater cause, a greater purpose. It has been so since the beginning of time as we imbued meaning into rocks, water, planets, animals, and other assorted gods. Having a common, external, potential enemy might engender a global sense of purpose/common mission/dread/possibility. At this point, all of humanity is all in. It is us versus . . . WHAT? HOW MANY? HOW SMART? We might see gaggles of apocalyptic preachers, as well as an explosion of space and defense research.

On the other hand, should we ever be able to communicate peacefully with new life-forms, we might discover very different technologies as well as novel ethical laws and parameters—ones we cannot begin to imagine, having been locked into a minute, quasi-mono-cultured, ecosystem. The beliefs and practices of The Others may be the ultimate game changer: a definitive example of how ethics can evolve over time and across technologies.