Macquarie Island lies about halfway between New Zealand and Antarctica. As one of the few islands in the region where animals can breed, it serves as a precious rest and breeding stop for migratory birds. It is also a protected wilderness, uninhabited by humans other than visiting rangers and researchers. Because of these factors—its remoteness, its unique habitat, and its lack of human beings—the island is home to many rare species, especially seabirds, such as the blue petrel, which lopes across the water to gain speed before it takes off. (The bird is supposedly named for Saint Peter, in honor of the apostle’s trusting walk across the water to Jesus.) Huge populations of penguins and seals occupy the island.
Macquarie Island, in short, is a conservationist’s paradise. Or it would have been, if it hadn’t been ruined in the 19th and 20th centuries by hunters and traders, who sailed to the island repeatedly to capture penguins and seals for the oil rendered from their blubber, which could be burned as fuel. Even as the sailors decimated the island’s native species, they brought alien species with them: Rabbits served as food, and mice and rats were accidental stowaways. They brought cats to kill the rodents—and also to provide some company (since clubbing seals all day can be lonely work). These new species had no natural predators on the island, so they treated its native flora and fauna as an endless all-you-can-eat buffet.
By the 1960s, conservationists were ready to take aim at the rabbits, whose nonstop grazing and tunneling had caused severe erosion and disrupted the mating habits of seabirds, who like to burrow to breed. Experiments had been run to see whether various control measures would kill off the rabbits. One pathogen, the myxoma virus, looked promising, but it failed to spread on its own, so the conservationists concluded that they needed a vector: a carrier to move the virus from rabbit to rabbit. In 1968, they started capturing thousands of fleas in Tasmania, transporting them to Macquarie Island, and releasing them in the rabbit burrows. As the rabbits came in and out of the burrows, the fleas would hop on board.
After about 10 years of this flea-seeding, all the island’s rabbits were lousy with them, and in 1978, the deadly myxoma virus was introduced. How do you introduce a virus, you ask? You walk around at night with flashlights and low-powered air rifles, shooting a bunch of rabbits in the bum with cotton-wool pellets soaked with the virus. The fleas took it from there, spreading the virus from rabbit to rabbit. By 1988, over 100,000 rabbits had died, reducing the total population to under 20,000.
Meanwhile, the cats were running out of rabbits to eat. They began to dine on the rare seabirds. So conservationists targeted the cats: Park rangers started shooting them, and by 2000, all cats had been eradicated from the island. Then the rabbit population began to rebound, partly because the rabbits had developed resistance to the virus and partly because the cats that once ate them were gone. (It didn’t help that the lab producing the rabbit-killing virus had stopped making it.)
The conservationists decided: We’ve got to scale this thing up. They launched a plan to kill all the island’s rabbits, mice, and rats. They started by dropping poison bait out of planes, but about 1,000 native birds were killed along with the pests. The conservationists recalibrated, hatching a more ambitious, multipronged plan: poison bait, shooting, hunting with dogs, and a particularly lethal pathogen called rabbit hemorrhagic disease virus, delivered via laced carrots.
This onslaught worked. By 2014, the last rabbit, mouse, and rat had been eliminated—and of course the cats were long gone. The native species began to rebound. The effort was hailed as a success, nearly 50 years after it had begun. However, the island is now being plagued by invasive weeds. Turns out that the weeds were being held at bay by the nibbling force of thousands of rabbits. Now conservationists are making plans to study and combat the weeds. The war continues.
Of all the stories I researched in writing this book, this is the one that perplexed me the most. I’ve spent hours trying to make sense of it. Is this the story of an epic fiasco? Or of a stunning conservation victory? Is it a parable about the consequences of “playing God,” or is it an inspirational tale about persisting and adapting in the face of failure? Is it a cartoon of downstream activity—constantly reacting to new problems as they emerge—or is it a classic long-term upstream intervention to prevent the extinction of native species?
I couldn’t even navigate my way through the morality of it: Is it okay to slaughter an island’s worth of animals? Should mankind really be in the business of selecting which species survive and which die? (If you leaned indignantly toward no, are you prepared to doom to extinction a beautiful species of petrel for the sake of preserving thousands of rats that, let’s remember, are only on the island in the first place because of some blubber-greedy sailors? [And if you sympathize with the petrel over the rats, then maybe we should question whether our moral judgments might be shaded by a species’ cuteness? Imagine if the sailors had brought not rabbits and rats but Labradoodles. One fears the petrels would be in big trouble.I])
Systems are complicated. When you kill the rabbits, the cats start feasting on the seabirds. When you kill the cats, the rabbits start overpopulating. When you kill both, the invasive weeds run rampant. Upstream interventions tinker with complex systems, and as such, we should expect reactions and consequences beyond the immediate scope of our work. In “shaping the water,” we will create ripple effects. Always. How can we ensure that, in our quest to make the world better, we don’t unwittingly do harm?
“As you think about a system, spend part of your time from a vantage point that lets you see the whole system, not just the problem that may have drawn you to focus on the system to begin with,” wrote Donella Meadows in an essay. Meadows was a biophysicist and systems thinker whose work I’ll draw on several times in this chapter. She continued, “And realize that, especially in the short term, changes for the good of the whole may sometimes seem to be counter to the interests of a part of the system.”
Here’s a painful illustration of Meadows’s point: In July 2009, a young Google engineer walking through Central Park was struck by a falling oak branch and suffered brain injuries and paralysis. It seemed like a tragic fluke. Except that, later, the comptroller of New York City, Scott Stringer, started analyzing the claims paid by the city to settle lawsuits, and he discovered an unexpectedly large number of settlements resulting from falling branches. (One was the engineer’s lawsuit, which had settled for $11.5 million.) Curious, Stringer investigated further and discovered that the city’s tree-pruning budget had been cut in previous years in an effort to save money. “Whatever money we thought we were saving on the maintenance side, we were paying out on the lawsuit side,” said David Saltonstall, the NYC assistant comptroller for policy.
Stringer’s office created a program called ClaimStat—its name was inspired by CompStat, the NYPD’s data-driven crime-tracking program—which he announced in 2014 as a “new, data-driven tool that will help to identify costly trouble areas before they become multi-million dollar cases.” His team mapped and indexed the roughly 30,000 annual claims made against the city, hunting for patterns. They found, for instance, that the city had paid out $20 million in settlements over a period of years due to injuries to children on playgrounds. ClaimStat revealed that one swing on a Brooklyn playground was responsible for multiple lawsuits—it was hung too low, and five children broke their legs on it in 2013. “All someone needed to do was go out and raise the swing six inches, and the big problem would have been eliminated,” said Saltonstall. “But nobody thought to do that.… When you start to aggregate it, you see what the causes are, and that the fixes are generally not that complicated.”
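To make the aggregation concrete, here is a minimal sketch of the kind of pattern-hunting ClaimStat performs. The claims, field names, and payouts below are invented for illustration; the real system indexes roughly 30,000 claims a year across many categories.

```python
# A minimal sketch of ClaimStat-style pattern-hunting: group claims by
# location and cause, then flag repeat payouts. Claims, fields, and amounts
# are invented; the real system indexes ~30,000 claims per year.
from collections import Counter

claims = [
    {"site": "Playground A, Brooklyn", "cause": "low swing", "payout": 450_000},
    {"site": "Playground A, Brooklyn", "cause": "low swing", "payout": 310_000},
    {"site": "Central Park",           "cause": "falling branch", "payout": 11_500_000},
    {"site": "Playground A, Brooklyn", "cause": "low swing", "payout": 275_000},
]

hotspots = Counter((c["site"], c["cause"]) for c in claims)
for (site, cause), n in hotspots.most_common():
    if n > 1:  # repeat claims from one hazard are a pattern, not a fluke
        total = sum(c["payout"] for c in claims
                    if c["site"] == site and c["cause"] == cause)
        print(f"{site} / {cause}: {n} claims, ${total:,} paid")
```

The insight isn’t in any single claim; it emerges only when the claims are aggregated, which is exactly the linkage that was invisible before Stringer’s team compiled the data.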
This is what Meadows meant about the interests of the “part” and the “whole” diverging. You can save money by cutting the pruning budget, and that’s good for the parks department. But then you end up paying claims, in amounts far greater than the cuts, to innocent people who got hurt by falling branches. This linkage, though, was invisible to the people involved. It was only when Stringer’s team began to compile and study the data that the pattern became apparent.
In planning upstream interventions, we’ve got to look outside the lines of our own work. Zoom out and pan from side to side. Are we intervening at the right level of the system? And what are the second-order effects of our efforts: If we try to eliminate X (an invasive species or a drug or a process or a product), what will fill the void? If we invest more time and energy in a particular problem, what will receive less focus as a result, and how might that inattention affect the system as a whole?
The Macquarie Island example might have led you to believe that tinkering with ecosystems is too complex to be feasible. But with the right kind of systems thinking, it can work. The international organization Island Conservation, whose mission statement is “to prevent extinctions by removing invasive species from islands,” has succeeded many times in ridding islands of rats, cats, goats, and other intruders. As a result, endangered species—often ones that exist nowhere else—have been saved. The organization’s tools include sophisticated forms of cost-benefit analysis and conservation models such as a food web, which is essentially an org chart of who eats whom on an island. The food web makes it easier to envision the second-order effects of removing one species from the food chain. “Islands are systems,” said Nick Holmes, who was the director of science at Island Conservation for eight years. “If you move things around within the system, there are consequences beyond the direct.… If there are goats on an island plus invasive plants, and you remove the goats, will you get an increase in invasive plants?” Holmes said that they use an extensive set of questions about indirect impacts to assess new projects.II
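To see why a food web helps with second-order thinking, picture it as a directed graph: an edge from A to B means “A eats B.” Here’s a toy sketch in Python; the species and links are simplified stand-ins, not Island Conservation’s actual model.

```python
# A toy food web as a directed graph: an edge A -> B means "A eats B."
# Species and links are simplified stand-ins for illustration only.
food_web = {
    "cat":    ["rabbit", "rat", "seabird"],
    "rat":    ["seabird_eggs", "seeds"],
    "rabbit": ["native_plants", "invasive_weeds"],
}

def effects_of_removing(web, species):
    """Return (predators that lose a food source and may switch prey,
    prey released from predation and likely to increase)."""
    predators = [p for p, prey in web.items() if species in prey]
    released = web.get(species, [])
    return predators, released

predators, released = effects_of_removing(food_web, "rabbit")
print("May switch to other prey:", predators)        # ['cat']
print("Released from predation/grazing:", released)  # ['native_plants', 'invasive_weeds']
```

Even this toy version encodes the Macquarie lesson: remove the rabbits and the cats switch to seabirds; remove the rabbits’ grazing and the invasive weeds are released.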
When we fail to anticipate second-order consequences, it’s an invitation to disaster, as the “cobra effect” makes clear. The cobra effect occurs when an attempted solution to a problem makes the problem worse. The name derives from an episode during the UK’s colonial rule of India, when a British administrator was worried by the prevalence of cobras in Delhi. He thought: I’ll use the power of incentives to solve this problem! A bounty on cobras was declared: Bring in a dead cobra, get some cash. “And he expected this would solve the problem,” said Vikas Mehrotra, a finance professor, on the Freakonomics podcast. “But the population in Delhi, at least some of it, responded by farming cobras. And all of the sudden, the administration was getting too many cobra skins. And they decided the scheme wasn’t as smart as initially it appeared, and they rescinded the scheme. But by then, the cobra farmers had this little population of cobras to deal with. And what do you do if there’s no market? You just release them.” The effort to reduce the number of cobras yielded more cobras.
Other examples of the cobra effect are more subtle. Amantha Imber, an organizational psychologist and founder of the Australian innovation firm Inventium, had an unfortunate brush with it. In 2014, her 15-person team was ready to move into a new office space in Melbourne. Imber had spent about $100,000 renovating it, and the results were stunning: a hip open-office plan with two long, custom-made wooden desks, bathed in light from windows stretching up to 12-foot-tall ceilings, with patches of graffiti on the walls. When clients came in, it nailed their conception of what an innovation firm should look like. It was perfect. Except when it came to working.
“I would get to the end of the work day and think to myself, I haven’t really done any work today, I’ve just spent the day dipping in and out of email, in meetings, being interrupted by coworkers,” said Imber. She started doing her real work at nights or on the weekends.
Imber and her team thought that the open space would encourage face-to-face collaboration, but it backfired. “I’m not going to start face-to-face conversations because everyone else is going to be privy to it,” she said. And when people did talk, it interrupted every single person in the room, making it impossible to do deep, focused work. Imber started working from cafés in the morning, and she gave her colleagues permission to do the same. As a result, these days there are usually only two or three people in the office at any given time.
A 2018 study by Harvard scholars Ethan Bernstein and Stephen Turban backs up Imber’s experience. They studied two Fortune 500 companies that were preparing to transition teams of employees to an open-office floorplan. Before and after the move, many staffers volunteered to wear “sociometric badges,” which captured their movements and logged how often they talked and to whom. (Their conversations were not recorded, just the fact that they were talking.) The goal was to answer the most basic question about open floorplans: Do they boost face-to-face (F2F) interactions?
The answer was almost laughably clear: F2F interactions plunged by about 70% in both companies. Meanwhile, email and messaging activity spiked. When people were placed closer together so that they’d talk more, they talked less. The cobra strikes again.
What can be confusing, in situations like these, is that we must untangle contradictory strands of common sense. On one hand, you think: Of course, moving people closer together will lead them to collaborate more! That’s just basic sociology. On the other hand: No, look at subways or airplanes—when people are crammed in together, they find ways to retain some privacy through headphones or books or deeply unwelcoming glances. How can you know in advance which strand of common sense to trust?
We usually won’t. As a result, we must experiment. “Remember, always, that everything you know, and everything everyone knows, is only a model,” said Donella Meadows, the systems thinker. “Get your model out there where it can be shot at. Invite others to challenge your assumptions and add their own.… The thing to do, when you don’t know, is not to bluff and not to freeze, but to learn. The way you learn is by experiment—or, as Buckminster Fuller put it, by trial and error, error, error.”
Looking back on the open-office miscue, Imber said she wishes she had tried some experiments with her staff in the State Library Victoria in Melbourne. The library has many different kinds of environments, ranging from open, collaborative spaces to more solitary ones. Had the team sampled some of those different areas, observing how they affected the group’s productivity and happiness, that experience might have helped them design an office that served them better.
For experimentation to succeed, we need prompt and reliable feedback. Consider navigation as an analogy: To travel somewhere new we need almost constant feedback about our location; we follow the arrow on a compass or the blue dot on Google Maps. Yet that kind of feedback is often missing from upstream interventions. Think of the open-office situation: How would you know whether collaboration was increasing or not? Most employers don’t have “sociometric badges” to log conversations. Maybe you’d add a question to the annual employee survey, asking for people’s feedback on the transition. But that kind of infrequent, point-in-time feedback isn’t enough to navigate. It’s like driving a car with no windows and, once every hour or so, getting beamed a photo of the outside environment. You’d never arrive at your destination, and given the risks, you’d be crazy to try.
“The first thing I would say is you just need to be aware that whatever the plan you have is, it’s going to be wrong,” said Andy Hackbarth, a former RAND Corporation researcher who also helped design measurement systems for Medicare and Medicaid. I had asked him what advice he’d give to people who were designing systems to make the world better. “The only way you’re going to know it’s wrong is by having these feedback mechanisms and these measurement systems in place.”
Hackbarth’s point is that we don’t succeed by foreseeing the future accurately. We succeed by ensuring that we’ll have the feedback we need to navigate. To be clear, there absolutely are some consequences we can and should foresee. If we don’t anticipate that removing the goats on an island might make the invasive weeds run wild, then that’s a clear failure of systems thinking. But we can’t foresee everything; we will inevitably be mistaken about some of the consequences of our work. And if we aren’t collecting feedback, we won’t know how we’re wrong and we won’t have the ability to change course.
Soon after I talked to Hackbarth, I had another conversation that reinforced his point. I was talking to a physical therapist who works with women who are recovering from mastectomies. The surgeries often cause them muscular pain and movement difficulties. But something she said struck me: “As soon as a woman takes her shirt off for therapy, I can tell which surgeon did the work. Because the scars are so different.” One surgical oncologist in particular has a knack for “beautiful” scars, she said, while another consistently leaves unsightly scars.
I felt a bit sad for that less-proficient surgeon (and more sad for his patients). He might well retire never knowing that he could have done more to help women. You could blame the PT for not sharing her observations, but think about it: What would happen if you approached your boss’s boss, unsolicited, with a critique of her work? This is a systems problem. There’s an open loop in the system: The insight from physical therapists is never getting fed back to the surgeons.
Feedback loops spur improvement. And where those loops are missing, they can be created. Imagine if, in the mastectomy situation, photos of surgical scars were taken automatically at patients’ follow-up visits, and those photos were sent back to the surgeons along with a comparison set of their peers’ work. (Or even more radically, imagine if the comparison set was shared with patients before their procedures, as an input into their choice of surgeon.III)
Think of all the natural feedback loops involved in, say, selling cars: You’ve got data on sales and customer satisfaction and quality and market share, and beyond that are external assessments to keep you honest, ranging from customer reviews to Consumer Reports analyses to J. D. Power studies. Over time, these inputs almost force companies to make better cars. It’s genuinely difficult to buy a poorly made car these days, especially now that the Pontiac Aztek is gone. But imagine if almost all of these sources of feedback were missing—if you just made cars every day and hoped for the best. That’s, in essence, the way our education system works.
Yes, standardized test scores are a key source of feedback, but what changes are made in response to that feedback? If a disproportionate number of eighth graders score poorly on linear equations, for example, do the seventh- and eighth-grade teachers subsequently meet and redesign their approach to the subject for the next semester? (Even if they did, that would still be just one point of feedback per year!) Imagine if, instead, teachers had data at their fingertips every day: What if teachers could instantly see which students haven’t participated in the last few classes? (And which have hogged too much airtime?) What if they knew, based on the previous night’s homework, which concepts the students were struggling with the most? What if they knew, based on school-wide data, which of their colleagues has the best way of teaching a particular lesson? All teachers have some intuition about these things, and some star teachers engineer their own systems for tracking them, seeking constantly to improve. But improvement shouldn’t require heroism! Online marketing messages don’t get better because of heroics—they get better because the feedback is so quick and targeted that you almost can’t escape improvement.
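As a thought experiment, the daily classroom feedback imagined above could start as a few simple queries over participation logs. A hypothetical sketch, with an invented data model:

```python
# A hypothetical sketch of daily teacher feedback; the data model is
# invented. Flag students who haven't spoken in the last N classes and
# students who dominate airtime.
participation = {  # student -> spoken-turn counts per class, most recent last
    "Ana": [3, 2, 4], "Ben": [0, 0, 0], "Chloe": [9, 11, 8], "Dev": [1, 0, 2],
}

RECENT = 3  # how many recent classes to consider
quiet = [s for s, turns in participation.items() if sum(turns[-RECENT:]) == 0]
total = sum(sum(turns[-RECENT:]) for turns in participation.values())
dominant = [s for s, turns in participation.items()
            if sum(turns[-RECENT:]) > 0.5 * total]  # >50% of all airtime

print("Haven't participated recently:", quiet)  # ['Ben']
print("Hogging airtime:", dominant)             # ['Chloe']
```

The point isn’t the particular thresholds; it’s that a loop this crude would still deliver feedback daily rather than yearly.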
In short, if we want to make the education system better, we could try to concoct the perfect intervention—the new curriculum, the new model—and hope for the best. Or we could settle for a pretty good solution that’s equipped with so many built-in feedback loops that it can’t help but get better over time. The second option is the one that systems thinkers would endorse.
How do you build a feedback loop? Let’s take a simple example from the business world: the staff meeting. Staff meetings are a great example of a human endeavor—like fistfighting and potty training—that never improves. We get a lot of practice in meetings, but as Michael Jordan said, “You can practice shooting eight hours a day, but if your technique is wrong, then all you become is very good at shooting the wrong way.”
One business created a feedback loop for meetings. The owners of Summit CPA Group, a 40-person accounting firm based in Fort Wayne, Indiana, decided in 2013 to let everyone work remotely. It was a popular decision, but it had consequences: Because the employees no longer encountered each other in person, online meetings became their primary means of contact.
At first, the meetings were problematic in familiar ways. “What happens is you get certain people that will talk forever and dominate the entire conversation,” said Jody Grunden, the cofounder of Summit. “You’ve got certain people that won’t say a word, and then you got people in between.” Worse, the people who dominated the conversation tended to be the complainers and the critics. The firm actually started losing CPAs because they found the interactions so negative.
So the firm made some changes. They had a facilitator run the meetings, using a new structured agenda that included a segment in which every participant shared something positive from the previous week. It sounds a bit corny, and at first some people tried to pass their turn, but pretty soon it became the norm. The bright-spots focus changed the tone and, better yet, provided a venue for learning: They started sharing advice on everything from handling tough clients to making reports simpler. Beyond the structured agenda, though, they added a feedback loop: At the end of every meeting, every attendee verbally scored the meeting from 1 to 5, and anyone whose score was an outlier was quickly asked what had made the meeting unusually helpful or unhelpful. When people complained about something—a discussion going on too long, a problem not being resolved—those issues got addressed. As a result, the meetings steadily got better, because now they had a closed loop. The virtual meetings at this accounting firm now consistently score 4.9 out of 5.0. (Whereas Ben Affleck’s movie The Accountant scores 7.3 out of 10 on IMDb, the equivalent of 3.65 out of 5.0. He needed a feedback loop, apparently.)
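Summit’s loop is simple enough to sketch. The mechanics below are my reconstruction from the description above; the one-point outlier threshold, in particular, is an assumption, not the firm’s actual rule.

```python
# A sketch of a Summit-style closed loop: collect 1-5 scores at the end of
# each meeting, then ask outliers to explain while the memory is fresh.
# The 1.0-point outlier threshold is an assumption.
from statistics import mean

def close_the_loop(scores, threshold=1.0):
    """Average the 1-5 meeting scores and flag attendees whose score
    sits far from the average, so they can be asked to explain."""
    avg = mean(scores.values())
    outliers = [who for who, s in scores.items() if abs(s - avg) >= threshold]
    return avg, outliers

scores = {"Jody": 5, "Priya": 5, "Sam": 3, "Lee": 5}
avg, outliers = close_the_loop(scores)
print(f"Meeting scored {avg:.1f} out of 5")
for who in outliers:
    print(f"Ask {who}: what made this meeting unusually helpful or unhelpful?")
```

The design choice worth noticing is the immediacy: complaints surface at the end of the very meeting they concern, not in an annual survey months later.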
We started with the question: How do we avoid doing harm? We’ve seen that wise leaders try to anticipate second-order effects beyond their immediate work. (Examples: food webs at Island Conservation and ClaimStat’s data patterns in NYC.) We’ve seen, too, that we can never anticipate everything, so we need to rely on careful experimentation guided by feedback loops.
Based on these ideas, we can formulate some questions to guide a decision about whether or not to stage an upstream intervention. Has an intervention been tried before that’s similar to the one we’re contemplating (so that we can learn from its results and second-order effects)? Is our intervention “trial-able”—can we experiment in a small way first, so that the negative consequences would be limited if our ideas are wrong? Can we create closed feedback loops so that we can improve quickly? Is it easy to reverse or undo our intervention if it turns out we’ve unwittingly done harm?
If the answer to any of these questions is no, we should think very carefully before proceeding. To state the obvious, there’s a vast difference between an “experiment” where some colleagues try out open-office seating in the Melbourne library and an “experiment” where scientists tinker with a species using gene-editing tools. Please do not mistake this chapter’s emphasis on experimentation for the ethos of “move fast and break things.”
Upstream work hinges on humility, because complexity can mount quickly even in seemingly simple interventions. Let’s take a final example that should be an easy one: trying to cut back on single-use plastic bags. Environmentalists consider these bags a leverage point, because even though they make up only a tiny fraction of the overall waste stream, they do disproportionate harm. They’re lightweight and aerodynamic, so they end up blowing into waterways or storm drains. They endanger marine wildlife and befoul beaches. And frankly, they’re symbolic of an unsustainable mind-set: Factories are manufacturing plastic products that may not degrade for hundreds of years—an estimated 100 billion bags are used annually in the US alone—all for the sake of making it easier for customers to schlep their purchases home, at which point the bags are immediately considered trash. So this should be a no-brainer: Let’s get rid of these bags.
Systems thinking demands that we start with the question: What are the likely second-order effects? What will fill the void left by plastic bags if they’re banned? Customers will (a) use more paper bags, (b) bring reusable bags, or (c) go without bags.
Here’s where we reach our first surprise: While paper bags and reusable bags are far better than plastic ones from the perspective of keeping waterways clean, they are worse in other ways. They require far more energy to produce and ship than plastic bags do, which means they increase carbon emissions. A UK Environment Agency study calculated the “per use” effects of different bags on climate change and concluded that you’d need to use a paper bag 3 times and a cotton reusable bag 131 times to match the per-use climate impact of a single plastic bag. Not to mention that manufacturing paper and reusable bags causes more air and water pollution than manufacturing plastic ones, and they are much harder to recycle. So now we’re forced to grapple with part/whole confusion: If protecting waterways and marine life, specifically, is our goal, then a plastic bag ban is a great idea. But if making the whole environment better is the goal, then it’s less clear. There are competing effects to consider.
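The per-use arithmetic behind that comparison is worth making explicit. A small sketch follows; the impact figures are normalized stand-ins, and only the break-even ratios (3 and 131) come from the study.

```python
# The per-use logic behind the UK Environment Agency comparison: a bag's
# climate impact per use is its manufacturing impact divided by the number
# of uses. Impacts are normalized so one single-use plastic bag = 1.0;
# the paper and cotton figures are the study's break-even ratios.
PLASTIC_PER_USE = 1.0
manufacturing_impact = {"paper": 3.0, "cotton": 131.0}

def per_use_impact(bag, uses):
    """A bag's climate impact per use: manufacturing impact spread over uses."""
    return manufacturing_impact[bag] / uses

for bag, impact in manufacturing_impact.items():
    breakeven = impact / PLASTIC_PER_USE
    print(f"A {bag} bag breaks even with plastic after {breakeven:.0f} uses; "
          f"used only once, it's {per_use_impact(bag, 1):.0f}x worse.")
```

In other words, the reusable bag is only “better” if people actually reuse it enough times, which is itself an empirical question about behavior, not materials.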
Another twist is that we’ve got to be very careful how we design the ban. In 2014, Chicago passed a law banning stores from offering thin, single-use plastic bags at checkout. So what did the stores do? They offered thicker plastic bags at checkout. The retailers’ supposed rationale was that customers could reuse these plastic bags, but of course most didn’t. That’s the cobra effect again: Trying to rid the environment of plastic led to more plastic.
Experimentation leads to learning, which leads to better experiments. California voters passed a statewide ban in 2016, without the thicker-plastic loophole. One effect of the ban, though, was that sales of small and medium plastic trash bags shot up. (Presumably there were people who reused their grocery store plastic bags as trash bags at home—or for picking up dog poop—so in their absence they had to start buying alternatives.) A study by economist Rebecca Taylor found that 28.5% of the reduction in plastic caused by the ban had been nullified by this shift toward other bags. Still, that’s 28.5%, not 100%. The ban had significantly reduced single-use plastics. (And notice that in order to assess this issue at all, someone had to be carefully tracking the sales of substitute products, thus creating a source of feedback.)
Then there were truly unanticipated consequences. Some people attributed a deadly 2017 hepatitis A outbreak in San Diego to the lack of plastic bags. Why? Homeless people had been in the habit of using the bags to dispose of their own waste. When the bags became less plentiful, the alternatives turned out to be less sanitary.
I wonder if you’re feeling now the way I was feeling when I first started trawling through this research: overwhelmed and dispirited, with a spritz of annoyance. What hope do we have of solving the hardest problems facing us when even plastic bag policies create a blizzard of complexity?
It was Donella Meadows’s quote—about the need “not to bluff and not to freeze but to learn”—that pulled me out of my wallow. Because her point is: It’s hard, but we’re learning. As a society, we’re learning. Think of all the ingredients required even to analyze a policy like the plastic bag ban: the computer systems, the data collection, the network infrastructure, not to mention the ecosystem of smart people who know how to structure experiments that can shed light on city- and state-wide policies. This infrastructure of evidence has existed for a mere blip in human history. When it comes to upstream thinking, we’re just starting to get in the game.
In 2016, Chicago scrapped the plastic bag ban that had led to the cobra effect. The city council replaced it with a 7-cent tax on all paper and plastic checkout bags that started in early 2017. And you know what? It’s working pretty well. A research team led by economist Tatiana Homonoff collected data from several large grocery stores. Before the tax, about 8 out of 10 customers used a paper or plastic bag. After the tax, that dropped to roughly 5 out of 10. What did the other 3 people do? Half the time they brought their own bag and half the time they carried out their purchases without a bag. And for those 5 customers who kept using bags, ka-ching, their voluntary tax payments provided the city with extra money to serve citizens.
Chicago’s leaders tried an experiment by banning lightweight plastic bags; it failed at first, but they knew why it failed, so they tried a different experiment, which worked better, and hopefully no city on earth has to repeat the dumb version of the ban again. It’s slow and tedious and frustrating, but we’re collectively getting smarter about systems. Donella Meadows deserves the last word: “Systems can’t be controlled, but they can be designed and redesigned. We can’t surge forward with certainty into a world of no surprises, but we can expect surprises and learn from them and even profit from them.… We can’t control systems or figure them out. But we can dance with them!”
I. At one point, desperate for insight, I sent a pleading email to Peter Singer, one of the world’s leading moral philosophers and the author of the book Animal Liberation. What did he make of the Macquarie Island intervention? He replied, “I’m not willing to say that we should let species go extinct rather than kill introduced animals, but if there is extreme suffering (e.g., the deaths of millions of rabbits in Australia because of the introduced virus myxomatosis) then I am doubtful that we ought to do that.” He added that “we should develop non-lethal methods of population control, or if that isn’t possible, find lethal methods that result in a quick and painless death.” I quickly embraced Singer’s stance as my own, in hopes of keeping at bay any more cognitive dissonance.
II. I should add, to be fair, that Holmes is not skeptical about the Macquarie Island intervention in the way that I am. Don’t want it to seem like he’s throwing his conservation colleagues under the bus here.
III. Some nuance here: First, plastic surgeons often do show off photos to patients. This physical therapist’s experience is with the work of surgical oncologists, who typically handle the mastectomies (removal of the breast) but not the reconstructions. Second, all of the previous chapter’s concerns about measurement apply here. Obviously in this situation we’re not optimizing for subtle scars. We’re optimizing for a woman’s healthy recovery from cancer. The hypothesis here is that the right system might allow us to achieve both the health outcomes and the aesthetic ones.