chapter two

GOVERNMENT

I ONCE WROTE a book called Good Business: Your World Needs You, which was based on a simple argument: business runs the world. The world needs changing. Let’s use business to change the world.

Looking back, with the benefit of my experience since then, working with some of the world’s biggest companies to try to solve social and environmental problems and then working in politics and government to do the same, I can see that I was—how shall I put this?—wrong.

Business does not run the world. In the end, government sets the rules by which business operates, and rightly so. Government is—at least notionally—accountable to people in a way that businesses are not. Businesses clearly make a huge contribution to society, both positive and negative, and they should try to improve it. But if we really want to change the world—if we want to make it more human—then we must use the mechanism designed for that purpose: government. As we saw in Chapter 1, we need to change the way we choose our governments by fixing our broken political system and getting more people involved. But we also have to change government itself: the priorities of those within it, the way policy is made and implemented, and the way the whole thing is structured.

MORE HUMAN PRIORITIES FOR GOVERNMENT

Sir John Cowperthwaite was the financial secretary of Hong Kong in the 1960s and the man widely credited with creating the conditions for its phenomenal economic success. When asked what advice he would give to a poor country trying to get richer, he said, “They should abolish the Office of National Statistics.” Cowperthwaite believed that the collection of economic data simply encouraged governments and bureaucrats to needlessly and destructively interfere in the economy, so he refused to collect it. This infuriated his masters in London. When they sent a delegation to persuade him to change his approach, Cowperthwaite literally sent them straight back on the next plane.1 I love that story for its wonderfully British rebelliousness. But that rather eccentric episode from the 1960s also highlights the big problem with government today: its prioritization of numbers over people. Of course it sounds obvious, trite even, to argue that government should ‘put people first.’ And yet it doesn’t happen. Look at any government’s priorities: the focus of federal, state, or even local government bureaucrats, the things that drive big decisions—it’s actually about numbers, about economics.

The political world’s focus on economics was most famously captured by Bill Clinton’s 1992 election campaign strategist James Carville when he scrawled, “The economy, stupid” on the office walls of Clinton’s headquarters. In government, money reigns supreme, and I don’t just mean lobbying and political donations. The official calendar is dominated by economic events—job statistics, quarterly growth numbers, inflation, interest rates, what’s happening in the stock market. In government it really is the economy, stupid.

You can understand why. Economic growth pays for progress in other areas—the human things that really matter. On an individual or family level the more money you have, the better life tends to be. Money doesn’t buy happiness, but it can certainly eliminate many of the things that make us unhappy. Money provides security and opportunity. It is often those who have never experienced what it’s like to have little or no money who say it doesn’t matter. Try telling that to the mother of three who’s working two jobs to feed her children and still can’t afford to buy diapers.

So I don’t have any problem with the idea that the government should concern itself with how to improve society’s economic circumstances. Political leaders could have many worse aims, after all, than making their people more prosperous. Prosperity generally has brought us longer, healthier lives, with better education, more art and leisure, greater civic and political participation—all good things. Economic growth has been a decent enough proxy for these important human outcomes.

My argument is different: it is not about challenging the value of economic growth but about the way our government goes about it. We’ve made a narrowly defined representation of the economy—economic indicators—the priority while forgetting that ‘the economy’ represents something deeper: our actual lives as they are lived. Basing everything on numbers pushes government policy toward mechanical, bureaucratic systems rather than more organic approaches that put people first. Take a so-called economic issue like jobs. It’s just as much—if not more—a social, human issue. Of course you can fiddle around with interest rates and taxes, and that will have some effect on unemployment rates. But the effects are far less certain or predictable than economic policymakers let on. The wiser and more self-aware of them know that there is very little they can do to ‘improve the economy’ or ‘create jobs’ in a general sense. If you want to see more jobs created and more people able to take them, the things that really work are specific, human things.

Help people get the skills they need. Help them gain the confidence and support to become entrepreneurs. Help make sure children are brought up in a way that gives them the character and the capacity to learn, to train, to hold down a job. None of the social policies required to bring about these ends are captured in the job numbers or the growth statistics. But they are critical to achieving the economic results we seek. Issues that look economic on the surface are human issues at their heart. Social policy is economic policy, just more effective.

We know from the latest developments in neuroscience and evidence from long-term research that the conditions in which children grow up, especially in the first few years of life, have a crucial impact on the rest of their lives: whether they will work or be on welfare, a contributor to society or a cost. But government doesn’t prioritize spending on the causes of social problems; it wastes money on the symptoms. This flaw is systemic. It’s not because politicians and public officials are stupid or useless or malevolent; quite the opposite: they are mostly highly intelligent, dedicated, and public spirited. The problem is a system that forces every decision into a framework that is literally inhuman. It’s one of the reasons citizens feel that so little of any real substance seems to change, regardless of what party or politician is in power.

To design a more human world, we must change the priorities of government. We have to start from the notion that certain things must be in place for people to lead a decent life, that government’s overarching priority should be to lay the infrastructure, in its broadest sense, that will allow that to come about. Of course, there’s physical infrastructure—roads and railways, energy grids, and utilities. But we need human infrastructure as well—education and health services, early childhood education, mental health, relationship and family support. Infrastructure’s problem, though, is that it tends to need expensive upfront expenditure before the benefits flow. As a society, we already make a certain commitment to this idea. We subsidize our children’s education, for instance, because we think of it not as a cost but as an investment that will enable productive lives later on.

Now we need to adopt that same mindset across the board. This isn’t new. In Britain the Victorians thought like this 150 years ago. As economist Diane Coyle points out, so-called Victorian values aren’t just prudish or conservative; they “also speak of hard work, self-improvement, and above all self-sacrifice for the future.”2 The Victorians had the forethought to build tangible investments in their communities like railways, canals, sewers, town halls, libraries, concert halls, and modern hospitals. Meanwhile, they strengthened society through the police, trade unions, mutual insurers, learned societies, and the nursing profession. At the same time in the United States, we saw similar investments in public libraries, hospitals, schools, and colleges. President Franklin Roosevelt marshaled such forethought to invest in great public works projects as a means both to spur the economy out of the Great Depression in the short term and to undergird long-term growth in the future. President Dwight Eisenhower channeled this impulse in the 1950s with the Interstate Highway System—while in the states, governors like Pat Brown of California established the three-tiered higher education system. This “sense of stewardship,” as Coyle described it in the context of the Victorians, is what can fundamentally define success or failure for a society.

Today we have shirked our duties as stewards. Public universities are in a perpetual crisis (California’s have faced years of budget deficits and tuition hikes). According to the American Society of Civil Engineers, the United States must spend some $3.6 trillion by 2020 to meet demand for and maintain essential infrastructure like roads, dams, schools, levees, bridges, railways, and water and energy systems.3 America is the world’s leading economy, yet the Internet speed in our technology hub, San Francisco, is no faster than Mexico City’s;4 our trains are no faster than Japanese trains in the 1960s,5 and one of our airports, New York’s La Guardia, made Vice President Joe Biden think he was in “some Third World country.”6 We have chosen the tyranny of a narrow short-term economic calculus over America’s long-term future.

This approach to governing is flawed in another way: it causes decision-makers to focus too much on the symptoms of problems instead of their causes. Government massively underinvests in infrastructure—both physical and human—because government accounts treat it as a cost with no benefits. Consequently government, whether federal, state, or local, applies its energy to the symptoms of social problems (welfare, crime, etc.) rather than their causes (e.g., what happens in early childhood). Art Rolnick, former vice president at the Minneapolis Federal Reserve Bank, has dedicated his career to this point. In Minneapolis, state and local government provided nearly a billion dollars over the last decade to subsidize new professional sports stadiums7—spending that tends to have low and sometimes even negative effects on economic development. Meanwhile early childhood education, which has been shown to generate high returns on public investment over time, has hardly been funded.8 Thankfully Rolnick and his allies fought for and won an unprecedented $250 million from the Minnesota state legislature last year for early childhood development, from prenatal care to preschool (see Chapter 8). No other state has made such commitments.9

One way to think about it, as Oxford economist Dieter Helm suggests, is to distinguish between spending money on assets and spending money on liabilities.10 Infrastructure is an asset. Yes, it costs money initially, but we can predict (and later record) its ‘dividends,’ such as the economic activity a new fiber-optic network produces or the money an early childhood intervention saves later on. That money goes in the ‘plus’ side of the ledger and, in any sensible accounting system (like the one used by businesses), can be offset against the spending. Eventually it will pay for itself. But in government accounting, infrastructure spending isn’t treated any differently from day-to-day spending. Tackling unemployment by building a high-speed broadband network that would help in the creation of new jobs is counted as no different from handing out the same amount of money in welfare payments. Tackling crime through parenting programs that could prevent offending in the first place is counted in the same ledger as keeping criminals in jail. Long-term human needs come second to the short-term imperative of the numbers; sensible, long-term policies are rejected because their dividends come down the road. The numbers rule, in a totally irrational way, and we end up with budget-based priorities instead of priorities-based budgets.

The issue really centers on a process called scoring. When government, at any level, does almost anything, it is ‘scored’ by a nonpartisan budget office that assesses that action’s fiscal consequences: How much will it cost? Will it make money that offsets some or all of its costs? Will it lead to savings in one part of government but more spending in another?

Currently the standard practice is static scoring, which counts only a policy’s direct fiscal impact at the point of implementation and ignores its knock-on effects on the wider economy. That isn’t inaccurate so much as incomplete, and it is often misleading. If you spend money or cut taxes, it will have some impact beyond the initial cost. Say a tax cut or an infrastructure project stimulates economic growth; that growth will result in greater tax revenue, offsetting the initial cost. Yet static scoring fails to consider that eventuality, so the tax cut and the investment both look much more expensive than they really are.11

But there’s another type of scoring: dynamic. Unlike static scoring, dynamic scoring considers the fiscal effects of legislation over time. Let’s say a tax cut stimulates economic growth enough to bring in new tax revenues or a spending increase boosts the economy. Projections of those effects can be applied to offset the initial cost of the policies. Dynamic scoring helps us take a long-term view. And because infrastructure usually takes a long time to pay off, the long-term view is more human.
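To make the contrast concrete, here is a toy sketch of the two scoring methods. All the numbers are hypothetical, chosen purely for illustration; the growth boost, tax rate, and ten-year window are assumptions, not real budget-office figures.

```python
# Toy comparison of static vs. dynamic scoring for a hypothetical
# $100M infrastructure project. All figures are illustrative.

def static_score(cost):
    """Static scoring: count only the upfront outlay."""
    return -cost

def dynamic_score(cost, annual_boost, tax_rate, years):
    """Dynamic scoring: offset the outlay with the tax revenue
    projected from the extra economic activity it generates."""
    feedback = annual_boost * tax_rate * years
    return -cost + feedback

cost = 100.0       # upfront project cost, $M
annual_boost = 30.0  # assumed extra economic activity per year, $M
tax_rate = 0.2     # share of that activity returned as tax revenue
years = 10         # scoring window

print(static_score(cost))                              # -100.0
print(dynamic_score(cost, annual_boost, tax_rate, years))  # -40.0
```

Under static scoring the project looks like a $100 million hole in the budget; under dynamic scoring, with these assumed feedback effects, most of the cost is offset over the decade. Change the assumptions and the numbers change too, which is exactly why the choice of scoring method matters so much.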

After winning congressional elections in 2014, Republicans did introduce dynamic scoring for some measures—namely legislation that affects tax bills. Republicans—including presidential candidate Jeb Bush—rightly argue that dynamic scoring demonstrates the long-term benefits of tax cuts. But that’s only half the story. Yes, tax cuts can pay for themselves,12 and it is absolutely right to show that. But so can spending on infrastructure; political partisanship, however, has gotten in the way of applying dynamic scoring to it. Dynamic scoring needn’t be used on all legislation (as some experts point out, it would be uninformative and simply too cumbersome in many cases13), but it should be used on significant revenue and infrastructure spending projects, whether that’s physical infrastructure like roads and bridges or human infrastructure like early education and family support. “If dynamic scoring is truly about reflecting the on-the-ground impact of government action, it must be applied to both sides of the ledger: spending and revenue,” writes Democratic congressman John Delaney. To use different accounting systems for different parts of the budget is “intellectually dishonest.”14

I wanted to tackle this—frankly quite geeky—question of government accounting systems and their impact on policy priorities because it’s a good example of something we will see over and over in this book. To really change things, we need to understand the underlying causes of problems, and more often than not they are structural and systemic. In election campaigns the candidates battle over this tax cut or that spending commitment. These aren’t unimportant questions, but they fail to get at the fundamental factors that seem to leave many massive social and economic problems permanently unsolved.

In the 2016 election the animating issue on the left is inequality; Bernie Sanders and Hillary Clinton discuss it constantly. Yet instead of talking about the causes of inequality, they’re debating how to treat its symptoms. Whether you agree or disagree with their solutions—taxing the rich, new rules for banks, wage hikes, and so on—the sad truth is that they are all, basically, tactical fixes. They might make a difference in the short term, but they won’t solve the problem of inequality in the long term. We need to address the structural problems of capitalism that lead to growing inequality, not just tax more here or regulate more there. We need to invest in early family support, to address inherited cultural disadvantages. And we need to overhaul the economic regimes that have prevented people from escaping poverty and achieving upward mobility. But in our current political reality, many of these options are simply off the table. They don’t all cost money, but the type of investment required to make some of them viable is dismissed out of hand as too expensive—and thus impossible to even contemplate.

It’s the same kind of thing on the right. Republican candidates talk about how we need growth and jobs, proposing lower taxes and less regulation as a means to get there. But these responses are just as superficial: way out of kilter with the scale of what’s needed to really address the problem, which is the chronic underperformance of the US economy for decades, dramatically illustrated by languishing productivity growth.15 America’s economy is not suffering because of any particular regulation or tax (although I would be the first to agree that many of both could usefully be cut or abolished), but because of underinvestment in the infrastructure that underpins it. Internet speeds, bridges, mass transit, energy: all are areas in which badly needed investment would pay untold economic dividends. Again, though, the scale of investment required is impossible given a system that refuses to see such investments as fiscally sound.

Avoiding proper analysis of how to actually solve our problems—rather than just managing their symptoms—has become rampant in our politics: all the time, no matter the party. Our election campaigns might entertain us, even occasionally inspire us, but they are irresponsible. By failing to countenance bold, systemic policy changes, our shallow and insubstantial political debates prevent leaders from, well, leading. They don’t talk about the long-term, underlying structural problems—or the solutions—and so have no real mandate to act for the long term when in office. Our politicians once understood the need to build for the future; today the system of government accounting has defeated them. If we want government to have more human priorities, we will need to change that system.

A MORE HUMAN WAY TO DESIGN AND IMPLEMENT POLICY

In 2000 then UK prime minister Tony Blair floated a new idea to clean up antisocial behavior on British streets. He proposed that offenders be made to pay an on-the-spot fine of £100 (about $150); if they didn’t have the money, they’d be taken to an ATM to withdraw it. The policy was never implemented. One of the main reasons was that the ‘yobs’ (British slang for young folks up to no good) targeted by this policy did not usually have £100 in cash ‘on the spot,’ or a bank account to withdraw it from; often they did not even have £100 to their names.16

Why did Blair make incorrect assumptions about those committing street crime? Come to think of it, how did anyone in any government anywhere conceive that dopey new tax, that waste-of-money antipoverty program, that white-elephant urban-regeneration plan, and so on? It’s easy to blame ‘useless politicians,’ ‘partisan aides,’ and ‘incompetent bureaucrats.’ But that’s simplistic; we need to go deeper. One reason for the failure of so many government programs over the years—and the dissatisfaction so many people feel with the public services they receive—is the fact that the people the programs or services are designed to help are too often an afterthought. The lives of average people—especially the poor or those interacting with social services—are simply unknown to policymakers. Blair and his advisers’ daily reality was so far removed from that of the people who made trouble on the streets that no matter how right the theory, the policy could never work in practice.

Don’t think that politicians don’t care about people or the effects of their policies—they do, most of them, sincerely and deeply. It’s just that by the time politicians are in their government offices, trying to find slots in their overfilled schedules to actually think about policy and its implementation, they’re unable to do so on a human scale, in a human way.

It’s not just the design of policies; it’s the delivery too. Consider the New Enterprise Allowance (NEA), another well-intentioned UK government idea, this time from the administration of David Cameron. It was aimed at helping people get off unemployment benefits—and was in fact a policy I helped create and fought hard to introduce. The idea was that some unemployed people might want to start their own businesses, so if we could get each of them a loan and some advice from a mentor, they’d be on their way. The NEA loan would be administered through Jobcentre Plus offices, the British equivalent of work-placement offices, which were already in most communities. When the program was launched, the initial uptake was disappointing and the Treasury officials in charge of its funding wanted to cancel the whole thing. Rohan Silva (my closest friend and former colleague in government, whom you will meet a lot in these pages) was curious about why the program, loosely based on a similar one that had been extremely successful in the 1980s, wasn’t working. On a hunch that perhaps it wasn’t being pitched in the right way, he asked the officials running the policy what the staff at Jobcentre Plus offices were trained to say to potential candidates and what their reasons would be for granting or turning down a loan if the conversation got that far.

The officials had no idea. They had never given it a moment’s thought. They just assumed that because the opportunity was available, anyone who wanted it would sign up. A low level of sign-ups must have meant the policy was defective. The government was about to cancel a potentially effective program without any sense of how it was being implemented where it really mattered: the human point of contact between two people.

THE D.SCHOOL AT STANFORD

In 2012 I came to Stanford University, where I taught a number of courses, including some at its Institute of Design, or d.school. This required me to reflect on how we did things in government—what we did well, what we did badly, and what we could have done differently. That reflection has been profound. I now realize why so many policies fail, why so much money is wasted, why so many promises are never delivered, why this happened in our administration—and in every government. It has to do with a mindset, an attitude, an approach in which policymaking is much more about theory than practice, where the people making the policy and the people implementing it make no real effort to understand, in detail, the lives of the people whom the policy is for. I have no hesitation in saying to my students that the single biggest improvement we could bring to policymaking in government is to make it more human, to put people at the center of the process. That may sound platitudinous. But I mean it in a very precise way, based on what I learned at Stanford about the process of human-centered design—or, as it is known at the d.school, design thinking.

Formally established in 2004 as the Hasso Plattner Institute for Design at Stanford, the d.school is the academic home of design thinking. In the late 1980s the software usability engineer Donald Norman put forward his vision of user-centered design, and by the late 1990s and early 2000s designers at the Silicon Valley design firm IDEO (the company responsible for Apple’s first mouse) began to realize that they could apply user- (or human-) centered design (by now also called design thinking) not just to objects, products, and software but much more broadly. By the 2000s design thinking was being applied to entire business models, not to mention consumer experiences. By the time the d.school was founded, it was clear that because innovators in almost any discipline create new things—whether business ideas, healthcare interventions, education policies, or consumer services—and because the act of creating is an act of design, graduate students from any university program would benefit from a design-thinking education. Stanford’s d.school was established to give them that opportunity. It’s now run by Sarah Stein Greenberg, my brilliant coteacher and the person who more than anyone else has helped me understand what design thinking is all about and how it could help improve policymaking and implementation.

At the d.school students are guided through a process that, though inevitably messier in practice, can be explained in a handful of straightforward steps:

       Empathize with the user

       Define the problem

       Generate ideas

       Prototype solutions

       Test the prototypes

       . . . keep testing, adapting, and improving.

Empathy is not a word you hear very much in government. But to understand a problem and imagine a solution requires an understanding of the people affected. This is an act of empathy, and human-centered designers put themselves in the shoes of those they are designing for. Empathy requires close, highly detailed observation of people in the context in which they’ll be using the product or service in question. This borrows from the anthropological practice of ethnography, which assumes that observations and open-ended interviews reveal more about a person’s beliefs, needs, emotions, and desires than opinion surveys and market research ever could. In the design-thinking process taught at the d.school this means talking to ‘users’ before doing anything else, which really means listening to users. Politicians, of course, will say they listen to their constituents all the time, either directly or through polls and focus groups. But this is different. The kind of empathy work required for design thinking is about deeply understanding the life of the person you’re designing for, forcing yourself to be open-minded rather than selling your own ideas.

The first stage of the designers’ process includes observing users in their day-to-day routines and immersing themselves in the users’ environment for days, even weeks on end. The acclaimed urban theorist and writer Jane Jacobs captures precisely this sentiment in her 1961 masterpiece, The Death and Life of Great American Cities. She writes that the only corrective to ineffective, top-down city planning—generally by ‘visionary,’ egotistical architects and politicians—is to base policy on “true descriptions of reality drawn not from how it ought to be, but from how it is.”17

These true descriptions of reality are the basis for step two: defining the problem. This might seem straightforward, but it’s surprising how frequently policymakers can be found solving the wrong problem—a superficial one, a symptom rather than a cause—or a problem perceived in one way by the outside world but totally differently by those actually experiencing it. A great example comes from a student project that began at the d.school as part of a course called Entrepreneurial Design for Extreme Affordability. In “Extreme,” as the course is popularly known, teams of Stanford graduate students come together for over six months to work on different problems facing the world’s poor. Each group, which deliberately mixes up students from any field—law, medicine, business, history, computer science—is paired with an international NGO to solve a specific problem in an extremely affordable way.

One team was tasked with developing a lower-cost baby incubator to be piloted in rural Nepal (nearly 1 million children die each year globally from complications due to premature birth18). The team of four first met in January 2007 when the course began. They spent two and a half months researching incubators as well as infant mortality and the medical requirements of premature babies. It is customary with Extreme for one or two of the students on each team to visit their partners on the ground. For the incubator team Linus Liang, a computer-science student, traveled seven thousand miles to Nepal along with two other student groups working on similar health-related problems.

“We were very ambitious—we landed, got to the hotel, and went to the hospital that day,” Linus recalls. “We did exactly what design thinking instructs—lots of interviews, observations, etcetera. We talked to at least twenty doctors and then went to the neonatal intensive care unit and observed all the different doctors and nurses, the amount of noise—everything.”

Linus discovered one odd thing. There were incubators everywhere, but they sat empty. “There’s this hospital with nothing, no resources—there’s mold growing on the walls,” Linus recalls, but “they had all these donated incubators. Some [of the instruction labels] were in Japanese, some were in German—there were all these different languages. And all of them were pretty damn good.” So Linus asked the next question: given that Nepal had all these incubators, why did it have such a high infant mortality rate? It turned out that many of the country’s premature births happened in rural villages, far away from the hospital and the life-saving incubators. The babies were dead on arrival.

It was clear that the d.school team needed to go deeper to understand the problem. So they got on a bus and went to a village outside Kathmandu. “It was kind of horrible what they had out there. There was no infrastructure.” In the village they visited it became quickly apparent that the team’s initial idea—a cheaper incubator for hospitals—wasn’t going to get them anywhere. There was no running water, no local hospitals—just shacks with barely trained ‘doctors’ who did their best to serve the villagers’ medical needs. The incubators they used were simple wooden boxes with light bulbs in them, most of which were burnt out anyway, some for as long as four years. They told Linus that they had no money to buy new bulbs, but in any case there was nowhere to buy them, nor electricity to power them.

“You slowly learn all the constraints. And then we started designing for that when we came back,” Linus explains. “That’s when we really understood the need.” Back at Stanford the team used its findings to develop a new point of view. They were no longer looking to design a cheaper, simpler incubator for local hospitals: “We wanted to design for the mothers who have no actual healthcare system, no water, no electricity, no transport, no money.” The team redefined the problem they were trying to solve based on the empathy they had gained. And it was at this point that they could start to generate ideas for what a solution might look like.19

This might sound familiar—who hasn’t sat through a ‘brainstorming’ session to come up with something clever or creative? But this step in the design-thinking process, like the others, is usually given short shrift by politicians and policymakers, many of whom already have solutions in mind that they’re trying to advance, based often on ideology rather than empathy with the people they’re trying to help. One of my favorite teaching moments at the d.school is our demonstration of a bad brainstorm versus a good brainstorm. Take it from me—most of the ones you see in government are bad. The cardinal rule for getting good results? Defer judgment. Whether you’re alone or with a group of colleagues, the best way to kill potentially good ideas is to point out their flaws at the moment you’re trying to come up with them. That’s not to say that “there’s no such thing as a bad idea.” Of course there is. But the time for evaluating ideas is not when you’re trying to generate them.

For me, the part of the design-thinking process that offers the greatest contrast with how things are done in government is the final stage: prototyping and testing an idea. The key is to embrace experimentation, testing a concept with cheap and rough prototypes before investing more in its development. This is a world away from how government does things. Yes, there are pilot programs—but these typically cost many millions of dollars and are launched with great fanfare. The incentive is to prove that they work, not to find out whether they do. Prototype testing is not piloting. For example, rather than building a website—still a costly exercise—you could literally sketch it out on pieces of paper, put it in front of people, and get feedback on how they would use it based simply on asking them to point at boxes they would ‘click’ on and why. The methodology of rapid and low-cost prototyping and testing taught at the d.school is now the basic modus operandi for every tech firm in Silicon Valley, from the biggest names to the smallest start-ups.

And that’s exactly what the d.school team working on the incubator problem did next. Based on insights gained from Nepal, they shifted their focus from the hospital and the clinician to the village and the mother. The team reframed the problem from “building a cheaper incubator” to “keeping premature babies warm.” The result was a wrap like a sleeping bag, warmed by a special heated insert, all costing about 1 percent of the price of an incubator. When the course was finished, the team set themselves up as a company (called Embrace) and moved to India, determined to bring their concept to a stage at which it could actually be deployed. They continued testing new versions, each time gaining fresh insight from the mothers whom they watched using the prototypes. Small tweaks came from observation: a plastic window over the chest, allowing doctors to see at a glance if the baby was breathing, or a simple OK/Not OK indicator in place of the previous numerical temperature gauge that the mothers didn’t trust (digital displays tend to malfunction so often that there’s an automatic suspicion of them). From its humble origins as a student project, Embrace has now expanded its operations across Asia, Latin America, and Africa and has gone on to save fifty thousand lives—and counting.20

DESIGNING POLICY FOR WHO WE ARE, NOT FOR WHO WE OUGHT TO BE

It’s amazing what you can discover when you listen to people. But a more human approach to policymaking doesn’t just mean paying attention to details on the ground; it’s about understanding behavior too. Over the last thirty years psychologists, neuroscientists, and economists have systematically cataloged ways in which we consistently fail to live up to traditional expectations of how humans ‘ought’ to behave. This means that we now have a rigorous body of work, rather than just our hunches, that we can use to enhance the effectiveness of public policy.

For example, economics assumes that we value money objectively—that a dollar gained causes as much pleasure as the pain from a dollar that’s lost. But in the late 1970s psychologists discovered that we tend to be much more averse to a loss than we are keen to have the equivalent gain. This work was so disruptive to traditional economics that it earned Daniel Kahneman the Nobel Prize in 2002 (his longtime collaborator Amos Tversky had died six years earlier).21 An interesting application of the principle was an experiment in a Chicago school district. For many years school administrators around the world have tried to incentivize better teacher performance through bonuses awarded at the end of the year if certain targets are met. In Chicago Heights a group of University of Chicago academics tried an approach based on this behavioral concept of loss aversion. They gave the teachers their bonus at the beginning of the year and told them that at the end of the year, all or part of it would be taken away if they didn’t meet specified goals. Two control groups were also established: one in which teachers were given no performance incentive and one involving a traditional end-of-year bonus. The results were spectacular: the performance of the students taught by teachers in the two control groups was about the same, suggesting that the traditional performance bonus doesn’t really make much difference. But in the group where the teachers were given an upfront bonus with the threat that it would be lost, the students’ results improved two to three times more than those of the traditional bonus group.22

More human policy design would also recognize that we often neglect to do things that are in our own best interest—and that we even say we want to do—like put money in our pensions, exercise, and remember to go to doctors’ appointments. But government policy has been designed and implemented for decades based on the assumption that people will behave with perfect foresight and self-control. Consider government’s approach to fighting poverty: programs of the left traditionally consist of giving money, jobs, food, and other “free stuff”—as critics like to call it—to alleviate material deprivation, while programs of the right traditionally try to improve economic incentives so that poor people can ‘pull themselves up by their bootstraps.’ Both approaches, as the New York Times columnist David Brooks has pointed out, errantly “treat individuals as if they were abstractions . . . shaped by economic structures alone, and not by character and behavior.”23 The simple truth is that our actions don’t always match our intentions. Human things like forgetfulness get in the way.

People also tend to do what they think other people are doing. One of my favorite examples comes from an experiment that psychologist Robert Cialdini carried out in Arizona’s Petrified Forest National Park. He posted signs drawing attention to the high incidence of looting pieces of ancient wood. They backfired: letting people know that theft was frequent—normal even—actually increased theft.24 Traditional efforts by governments to change people’s behavior so often make this classic mistake—highlighting the negative behavior they want to discourage rather than promoting as a social norm the positive behavior they want to encourage.

IN HIS 2015 State of the Union Address, President Obama highlighted a growing problem: many people are desperate for jobs and the skills they require but cannot afford the college education that would provide both. To that end Obama announced the expansion of a model tried out in Tennessee and Chicago: “that two years of college becomes as free and universal in America as high school is today.”25

The idea of free community college sounds attractive, of course. It will likely boost enrollment and help those who can’t afford college courses. But what about those who enroll but don’t graduate? In other words, what about more than half of them?26 College affordability is no doubt part of the problem. But it’s only a part—and may not even be the biggest part. For many, it’s the seemingly insignificant barriers that defeat them.

A lost transcript, an unpaid fee, the difficulty of filling out a financial aid form—the maze of paperwork that students must navigate would be daunting for the best of us. Rich students—or students at rich universities, like Stanford—have layers of advisers, academic support, and material resources to help them through the pitfalls of college, but students at community and state colleges often don’t. Even worse, many of those students don’t come from backgrounds that prepare them for academic culture or the soul-crushing challenge of navigating college bureaucracies.

So although the policy of making community college free is undoubtedly well intentioned, it may not be so well designed. Instead of asking, “How can we make community college less expensive?” officials might have asked, “What can we do to help community college students succeed?” Free tuition is a reasonable answer to the first question. It isn’t, necessarily, to the second.

That’s because enrolling and paying for community college is relatively easy: signing up is straightforward, and poor students’ tuition is often covered by Pell Grants and other government aid. “The important task,” as again, David Brooks points out, “is to help students graduate.”27 That’s where a human-centered approach might pay off.

In the summer of 2014, months before President Obama’s State of the Union proposal, Howard Schultz, chairman and CEO of Starbucks, announced that his company would work with Arizona State University to help Starbucks employees—many of whom have already earned quite a few credits—finish college. Two years of tuition would be absolutely free; those with more than two years to go would get a significant discount until they reached that point.28 Although on the surface the two plans seem quite similar, Schultz’s plan had a crucial difference from Obama’s: it was designed in a more human way.

Starbucks assigns students an individual counselor who frequently calls to help students plan courses and, more importantly, to help them overcome practical barriers like signing up for classes or finding lost transcripts. Mary Hamm, a forty-nine-year-old barista in Virginia, had always wanted to go to college and jumped at the opportunity to attend online through Starbucks. But when she got her first quiz back, she was crestfallen at her seven-out-of-ten result. She felt like a failure. Her counselor asked whether she was allowed to use her notes on the test: it turned out that she was. But she hadn’t known that, and so had inadvertently handicapped herself. It turns out that after several decades out of the classroom you can forget some of the skills essential to simply being a good student, and the presence of a counselor can be a simple, practical part of the solution. It also helps boost confidence. Students like Hamm “often have doubts about themselves being college-ready,” says Dave Jarrat, an executive at InsideTrack, the firm that provides the counseling support for the program. “That manifests itself when they get into college, take a quiz, and get a C and say, ‘See? I’m not college material.’ Then they drop out.”29 The counselors’ goal is not only to provide their students with helpful tools and point them to valuable resources but also to prepare them for the reality that they will have setbacks to overcome.

The Starbucks plan is a great example of human-centered design. It addresses real student needs, however trivial they may sound, and welcomes feedback so the program can continuously improve. When some students dropped out of the program, they were asked why; after telling their advisers that tuition was still too expensive, Starbucks made it cheaper.30 It’s still unclear how successful that fix will be. The Starbucks program certainly won’t be the model for getting every aspiring college student on track, but it shows the value of empathizing with ‘users’ and adjusting big bureaucracies to their needs. That simple but profound change of perspective can improve the chances of any policy—and make the difference between a waste of money and a successful investment.

We need a more human approach at least in part as an antidote to policymakers’ increasing overreliance on data. I’m not against using data as a policymaking tool; it can help us understand a problem’s scope and distribution, as well as its trend over time. Data can help us see whether existing policies are having the desired effect or if concepts that worked well in one place are effective in others. Perhaps data’s most crucial function is helping us understand what has generally worked or failed before. But when it comes to the design of specific programs or the way public services are actually delivered, it’s dangerous to make assumptions based on data, as if all people and situations were the same. Here’s David Brooks again, on efforts to combat teenage pregnancy: “A pregnancy . . . isn’t just a piece of data in a set. It came about after a unique blend of longings and experiences. Maybe a young woman just wanted to feel like an adult; maybe she had some desire for arduous love, maybe she was just absent-minded, or loved danger, or couldn’t resist her boyfriend, or saw no possible upside for her future anyway. In each case the ingredients will be different. Only careful case-by-case storytelling can uncover and respect the delirious iconoclasm of how life is actually lived.”31

POLICING: AN URGENT NEED FOR A MORE HUMAN APPROACH

At the height of the crack-cocaine epidemic in the 1990s, New York City police commissioner Bill Bratton instituted a program called CompStat in an effort to radically realign the department around a clear principle of accountability for crime reduction. Police officials were required to attend weekly meetings at which they discussed the key data of the week, with statistical trends analyzed across departments and over time. Although it was largely credited with reducing the specific measured crimes (muggings, murders, etc.), the program precluded tactics that might have been more effective in the long term but were less amenable to measurement, such as trust-building in communities. Events like the 2014 riots and overmilitarized police response in Ferguson, Missouri, or the brutal chokehold arrest and subsequent death of Eric Garner, an unarmed petty criminal in New York, are partly the result of such a targets-driven culture. Bratton, who advocates CompStat around the world, himself acknowledges that too much attention has been placed on “the numbers of stops, summonses and arrests” and not enough on “collaborative problem-solving with the community.”32 Jim Bueermann, president of the Police Foundation and a former police chief, admits, “If you ask a traffic officer how many tickets he wrote today, their emphasis is on writing tickets to meet a number, as opposed to a desired outcome, which is safer streets.”33

This data obsession, and the behavior it produces, is one reason why so many people in America are losing trust in the police. In Ferguson the public—in this case the African American minority—has been egregiously abused by the city government through its police and court system. A US Department of Justice (DOJ) report found that “Ferguson law enforcement practices disproportionately harm Ferguson’s African-American residents” and that there was “substantial evidence that this harm stems in part from intentional discrimination in violation of the Constitution.”34

The DOJ report is instructive for laying bare the insidious effects of a targets-based policing culture. The NYPD might have made errors with CompStat, but those errors were at least made with the intent to reduce crime and make New York a safer city. In Ferguson the police prioritized “productivity”—read: writing citations—and residents were seen “less as constituents to be protected than as potential offenders and sources of revenue.” The municipal court, meanwhile, failed in its essential function to adjudicate legal disputes. “Instead,” the report found, “the court primarily uses its judicial authority as the means to compel payment of fines and fees that advance the City’s financial interests.” In doing so, the court’s practices “violate[d] the Fourteenth Amendment [of the US Constitution, which guarantees “equal protection” under the law] . . . and impose[d] unnecessary harm, overwhelmingly on African-American individuals, and run counter to public safety.”35 The court mired citizens in red tape, issuing warrants over missed court appearances and unpaid tickets (some nine thousand in 2013 alone, in a town of twenty-one thousand), and erecting “unnecessary barriers to resolving municipal violation[s],” including “fail[ure] to provide clear and accurate information” about fines and court procedures.36

Justice may have been carried out in Ferguson according to the letter of the law, but hardly according to its spirit. There, targets undermined the effectiveness of the very services they were supposed to help improve. Take this example of a man who ran afoul of the Ferguson police:

In the summer of 2012, a 32-year-old African-American man sat in his car cooling off after playing basketball in a Ferguson public park. An officer pulled up behind the man’s car, blocking him in, and demanded the man’s Social Security number and identification. Without any cause, the officer accused the man of being a pedophile, referring to the presence of children in the park, and ordered the man out of his car for a pat-down, although the officer had no reason to believe the man was armed. The officer also asked to search the man’s car. The man objected, citing his constitutional rights. In response, the officer arrested the man, reportedly at gunpoint, charging him with eight violations of Ferguson’s municipal code. One charge, Making a False Declaration, was for initially providing the short form of his first name (e.g., “Mike” instead of “Michael”), and an address which, although legitimate, was different from the one on his driver’s license. Another charge was for not wearing a seat belt, even though he was seated in a parked car. The officer also charged the man both with having an expired operator’s license, and with having no operator’s license in his possession. The man told us that, because of these charges, he lost his job as a contractor with the federal government that he had held for years.37

Did racism cause the criminal justice system to abuse targets? Or did a targets culture lead to racist abuse? Probably both. Certainly there was truly horrific racism and corruption in Ferguson, and it is a fact that the African American population disproportionately bore the abuse of the criminal justice system. But the Ferguson scandal was also the consequence of how the system operated. Ferguson’s police force was judged not by how safe the streets were but by how many tickets its officers could write, and the courts were judged not by how just their proceedings were but by how many fines they could levy. Racism may have led Ferguson’s system to target African Americans, the town’s most vulnerable and least powerful demographic, but any system set up to privilege citations and the revenue they produce over due process is destined to produce corrupt and unfair outcomes. Targets disguised those injustices as the results of legitimate law enforcement.

Although the spark in Ferguson was, in retrospect, wrong (Darren Wilson, the police officer who shot Michael Brown, prompting the initial investigation, was ultimately cleared), the flame of the Brown-related riots illuminated the injustices present in the criminal justice system: an often authoritarian police culture that is overmilitarized, underscrutinized, and at odds with the public safety objectives with which the police are entrusted. In the short time since the Brown shooting, Americans, especially white Americans, have become increasingly aware of police misconduct, especially misconduct that disproportionately hurts black Americans. A steady stream of deaths of African Americans at the hands of the police has given rise to the Black Lives Matter movement, most notably: Eric Garner, who died in New York after police put him in a chokehold for illegally selling cigarettes; twelve-year-old Tamir Rice, who was shot by police without warning in a Cleveland park for holding an airsoft gun; Freddie Gray, who sustained fatal injuries while being transported by Baltimore police; Sandra Bland, who died in her jail cell (ruled a suicide by the coroner) after being arrested over a traffic infraction; and Samuel DuBose, who was shot by University of Cincinnati police during a traffic stop.

Police officers across the country have an inestimably difficult job, and we place a great deal of trust in them to carry out their duties. The vast majority of them are not simply good, honest, and decent officers; they are brave heroes for putting themselves in harm’s way to keep us safe. Something is wrong, however, when they blindly follow targets; when they unfairly target the poor, the black, and the male; or when they bark out orders to people as if they were animals. When police officers put targets before people, rules before relationships, and revenue before community, they are not acting in the best interests of the people—whether they be black, Hispanic, Asian, or white.

We urgently need a culture of more human policing. As long as police officers work under a siege mindset in which they fear leaving their cars—or only do so when armed to the teeth with military gear—a disconnect will exist between the police and the communities they are sworn to protect. Police forces are not invading armies; they do not need tanks. Police officers are not soldiers; they do not need full-body armor and assault rifles. Instead, they need a more human approach in which they are integrated with their communities. They should heed the words of Sir Robert Peel, the British statesman who started London’s Metropolitan Police, the world’s first modern police force: “The police are the public and the public are the police.”38

Overreliance on targets, on data, is certainly not the only factor contributing to the police becoming more authoritarian; indeed, data is a valuable tool to understand the policing needs of a community. But we have seen what happens when the police see citizens as data points: their actions become beholden to statistics, not good police work. It leads to a cycle in which the police dehumanize people and people dehumanize the police. In communities of color, where longstanding historical grievances and the legacies of racism persist, already tenuous relationships between people and the police are pushed to a breaking point at which abuse and civil unrest become all but inevitable. There are many steps to be taken to resolve the crisis between America’s police and so many of its citizens. But the first is straightforward: let’s not confuse measurable actions like stops, citations, fines, and arrests with the true goals of safer streets and citizens.

“GO OUT INTO THE REAL WORLD”

In the end, numbers are just not that helpful in actually designing or implementing policy in government and in public services like the police. For example, knowing whether a family is above or below the poverty line tells us nothing about why that’s the case. And if we don’t know why a family is in poverty, we can’t know how to design programs to help that family out. No statistics or set of data can substitute for the intimate, nuanced knowledge that a policymaker should internalize by going out and experiencing the complexity of the world where the policy will have its effect.

In government in the UK we tried to move things forward. In 2008, while we were still in opposition, Rohan ordered ten copies of Nudge by Richard Thaler and Cass Sunstein, a book that sets out the case, in theory and with practical examples, for using behavioral economics to improve people’s ability to make decisions. The book promotes a sort of ‘libertarian paternalism’: steer people in the desired direction without outright compulsion. David Cameron picked up a copy from the pile on Rohan’s desk and shortly thereafter started referring to it eagerly. Within weeks of becoming prime minister Cameron gave the go-ahead to set up a ‘nudge’ unit of our own. Under the direction of social psychologist and Cabinet Office veteran David Halpern, the Behavioural Insights Team worked with government departments to help improve policy design, now replicated in the US federal government in the shape of the White House Social and Behavioral Sciences Team. Nudges aren’t just cute tricks to tack on to existing policies. One of the most successful—and oft-quoted—nudges instituted by the Cameron government was to require large companies to auto-enroll employees in their pension plans, increasing pension savings dramatically.39

Design thinking, too, has started to make inroads in government. After Rohan’s experience of the New Enterprise Allowance brought home to him how detached officials were from the actual people government policies were trying to help, he decided something had to be done to bring the users of government programs into the way programs were designed. “I was so frustrated by the officials’ answers to these basic questions about the way this policy was presented to the people who might benefit from it. What is the script? What posters are up? What materials do people get if they’re interested?” he recounts. “Just imagine the individual lives that could have been changed, the amazing businesses started, the jobs created, if there had just been this small change, if there had just been a moment’s thought given to the actual human interaction. Why is there such a dislocation between policymakers and reality? And this is just a small policy in the overall scheme of things. Think about all the policies right across government and how much more effective they could be.”40

Rohan initiated a process to bring design thinking into the heart of government. He met with experts at the Royal College of Art, Jeremy Myerson and Anthony Dunne, as well as Tom Hulme from the UK office of IDEO. From that came the idea to run a design-thinking course for top civil servants. “I wanted to devise a systemic response to this systemic problem,” Rohan says. So in November 2012 he convened a couple of hundred of the UK’s top civil servants at Number 10 Downing Street for a design-thinking workshop. The venue, the State Dining Room, emphasized the importance of the enterprise: “I chose that room very specifically because it’s the grandest room at Number 10,” Rohan recalls. “Civil servants are a pretty cautious bunch, and Whitehall is not a particularly amenable place to new ideas, so I wanted them to understand that this was serious.” Professors from the Royal College of Art’s new Service Design program, who led the day-long crash course, made good use of the space, covering it in Post-its (the global medium of design thinking, I’ve discovered since teaching at the d.school). Cabinet Office minister Francis Maude kicked it off; Cabinet secretary Jeremy Heywood and head of the Civil Service Bob Kerslake weighed in to make closing remarks. “The day was bookended by the people in charge,” Rohan explains. “I did everything I could to frame this as legitimate and mainstream.”41

In the United States, design thinking is even starting to reach the most bureaucratic of agencies. The Office of Personnel Management (OPM) is in charge of more than 2 million employees across the entire federal government—hardly an agency that is naturally at the forefront of anything. But with wheelable whiteboards, oddly shaped furniture, Post-it Notes, and Sharpies galore, the basement beneath its Washington, DC offices looks more like what you’d find in Stanford’s d.school than a typical government office building. This is the OPM’s Innovation Lab. “It’s very different, believe me, from OPM’s typical work,” says Sydney Heimbrock, the executive in charge of OPM’s innovation practice. Here, OPM staff use the space to teach design courses and workshops for government employees and to teach other agencies, like the US Department of Agriculture (USDA), how to build similar programs.

Among its many other responsibilities, the USDA is in charge of the nation’s Food and Nutrition Service Program, which distributes free or reduced-cost school lunches to low-income students. Unfortunately, far fewer students are enrolled in the program than qualify for it, meaning that many go hungry or undernourished every day. For ten years, despite trying every approach they could think of, USDA officials couldn’t get to the root of the problem.

Enter OPM’s Innovation Lab team. With their colleagues at the USDA, they embarked on the design-thinking process, including visits to schools and time spent talking to administrators and school lunch workers. The most valuable insights they gained were in many ways the most banal, things that nobody would think warranted the attention of top-level staff, and yet made a big difference. For example, it became quickly apparent that simply redesigning the application form could have a huge impact on enrollment. The box for the family name wasn’t large enough for very long names, which many low-income immigrant families tend to have. Challenges with low literacy made it difficult for some parents to complete the form. It was also more complicated than it had to be: people regularly filled it out incorrectly or found its complexity too much of a barrier. Through the design-thinking process, the USDA learned and experimented enough to be in a position to implement changes on a wide scale. “It’s an easy way to quickly address challenges to the program,” explains Stephanie Wade, the Innovation Lab’s director. “It’s a really good example of the government working quickly to be better for the people.”

Of course all this—the jargon, the Post-it Notes, the brainstorming—is easy to mock or portray as the latest management fad. In fact, however, it’s much more than that; it’s a fundamental reorientation of policymaking, from a focus on bureaucratic needs and priorities to the real lives of the real people government is supposed to serve.

But let’s not get carried away. Although there have been steps forward—for example, Prime Minister David Cameron commissioned intensive ‘mystery shopping’ research so he could learn how policies were actually being implemented in the real world; Megan Smith, the US chief technology officer and a Google veteran, brought together teams of engineers and health experts at a day-long ‘hackathon’ to design a better Ebola suit42—the vast majority of government policymaking and implementation is still mired in a bureaucratic mindset. The number of politicians, policymakers, and other public officials who have even heard of human-centered design or behavioral economics, let alone experienced it, is minuscule. And although OPM has been somewhat successful, some of those trying to infuse design into government have run into trouble. When Karen Courington, a former congressional aide who took part in a class I taught at the d.school, tried to infuse the Capitol with a bit of design-thinking spirit, she met fierce resistance. After asking someone in charge whether she could set up a nonpartisan room with whiteboards where staffers could meet and brainstorm policy ideas—perhaps with a few Post-its!—he told her, “No, there’ll never be a whiteboard in the Capitol as long as I’m here.”43

So although having specialist units, like the Innovation Lab or nudge units, is a promising start, the big benefits in terms of a much higher hit rate of policies, programs, and services that actually work will most quickly be realized if the principles of design thinking are integrated into the everyday habits of every government employee at every level. That means government officials regularly spending time with their ‘users’—that is, citizens. Of course, elected officials and civil servants are busy, but it’s really just a question of priorities. Instead of spending half their time in meetings being briefed on policy problems, they need to go out and actually experience them. Before designing a new parenting program while in government, I went along to parenting classes and talked to the parents about their experiences. Before implementing one of our government’s biggest and most ambitious domestic policy initiatives—National Citizen Service, a kind of universal, nonmilitary personal development and community service program for teenagers—we spent years prototyping and testing the model.

A MORE HUMAN WAY OF ORGANIZING GOVERNMENT

But there’s still a problem: scale. Too often the people making decisions are too far from the people affected by them. Government is too big, too distant, too complex, whether it’s in Washington, state capitals, or county administrative centers, all of which can incubate their fair share of bureaucratic inanity. In Britain the problem is centralization: to an absurd and counterproductive degree, things are run from the center in London. But in the United States the problem is different. Size and centralization are part of the problem, but with cities, counties, states, and the federal tier, American government is one of the most fragmented in the world. More often its problem is complexity: layers of government that do the same task and ossified bureaucracies that are Kafkaesque in scope. Red tape has gotten so bad, in fact, that one respected scholar, Charles Murray, literally—and rather exhilaratingly, in my view—suggests breaking rules in the spirit of regulatory civil disobedience as the most effective approach to reform.44

Accountability in government is fuzzy, lying with endless obscure bits of the bureaucracy with multiple overlapping responsibilities that are not clearly defined, least of all to the average citizen. Adhering to regulation or interacting with government is a complicated obstacle course rather than a straightforward endeavor. In his 2011 State of the Union Address, President Obama colorfully illustrated the point:

We can’t win the future with a government of the past. We live and do business in the Information Age, but the last major reorganization of the government happened in the age of black-and-white TV. There are 12 different agencies that deal with exports. There are at least five different agencies that deal with housing policy. Then there’s my favorite example: the Interior Department is in charge of salmon while they’re in freshwater, but the Commerce Department handles them when they’re in salt-water. I hear it gets even more complicated once they’re smoked.45

The hubris of big, bureaucratic government has grown to the point at which its self-inflicted catastrophes are now clearly undermining support for government itself. It used to be said that centralized government is more efficient. Occasionally, yes. There are obviously certain public policy objectives that are best handled at the national level. Localizing our foreign policy doesn’t make much sense. But many problems that beset government arise precisely because they are handled at the wrong scale. This is most obviously the case when it comes to the contracting out of enormous parts of the state to external organizations, often in the private sector. But the problem is not the contracting out: it’s the size of the contracts.

In The End of Big, Nicco Mele identifies “nerd disease,” in which technical experts make things needlessly complicated in order to justify their own maintenance services. He describes consulting for a business that had an online staff in one building and another team in a warehouse across the street shipping orders. The company wanted to integrate the computer systems for the two sides of the business, and the technical staff were advising the CEO to spend $1 million on a contract with an external supplier. Instead, Mele suggested something different: simply have someone walk between the buildings twice a day with the orders.46

As expensive as nerd disease is when things go according to plan, it is even worse when they go wrong. The ramifications tend to be massive. Instead of a small, local failure, it’s a national one that affects everyone in the system, often all at once. And because bureaucrats and contractors are reluctant to inform superiors of problems (to protect themselves politically), those problems often become apparent only when it’s too late. Consequently, big government contracts end up mired in waste, failure, fraud, and abuse.47

In America the implementation of the Affordable Care Act offers a cautionary tale. It all started with nerd disease. In order to take advantage of the healthcare market created by Obamacare, users had to go online to ‘exchanges’ to browse and apply for different insurance policies. But first the government had to build the website, which, given that it was meant to reform an industry the size of the entire French economy, unsurprisingly didn’t go according to plan. There were several delays in launching the system, and once enrollment started, substantial problems with the interface plagued prospective users as they tried to navigate the site. The administration had underestimated the website’s complexity from the start, hiring just one official with experience in healthcare IT, Todd Park of Castlight Health. Instead of managing it directly, the administration shuffled responsibility to the Centers for Medicare and Medicaid Services in Maryland, which didn’t even have a permanent director because his appointment was blocked in Congress.

By taking on such a vast and wide-ranging initiative at once, the president and his staff knew that they were playing a bold political game with high stakes; Republicans would look for any glitch to try to torpedo the whole project. Consequently the pressure to keep everything on schedule was immense. For fear of confirming the criticisms of Obamacare’s political opponents, problems went unreported to the White House; as a result, those with the ability to push back major deadlines or make significant changes were unaware of problems until it was too late to fix them. Meanwhile the Health and Human Services Secretary, Kathleen Sebelius, was, according to one commentator, “out of her depth” and unable to properly oversee such a vast project.48 In trying to do too much too quickly and with too little capacity, the Obama administration was left reeling. But honestly, how surprising is it that a single contract to manage one-sixth of the US economy should go wrong?

The trouble with bureaucracy running amok in this way is that it has the unfortunate tendency to keep running amok. During the twentieth century, as we turned to the state to organize vast new areas of activity—education, health, welfare—we tolerated bureaucracy in order to achieve scale and efficiency. But once the ball started rolling, bureaucracy begot more bureaucracy; government grew larger and more distant. To this day governments follow the same basic formula, choosing distance and centralization for the sake of assumed—but rarely demonstrated—efficiency, all at the expense of accountability.

MVG: MINIMUM VIABLE GOVERNMENT

In Silicon Valley there is an overused cliché about when to launch your startup that nevertheless contains an element of wisdom: minimum viable product, or “MVP.” It means you should get your business idea out in front of real customers as quickly as possible. If you wait until your product is perfect before you launch it, you’re too late. Hence, MVP: the smallest, simplest version of your idea that might possibly work. When it comes to public policy our rule should be MVG—minimum viable government. To avoid the pitfalls of big, bureaucratic, inhuman government, we should always aim to decentralize power. Ideally, this would be directly to people themselves so they can make as many of the decisions that affect their lives as possible. That is typically how the richest and most powerful members of society live, and we should want those same freedoms for everybody. Power—and budgets—should be devolved to the level that is just large enough to be practical.

That’s why I’ve always been a huge fan of directly elected mayors, a form of government that, despite being commonplace in America and many other parts of the world, is rare in the UK. That’s a shame, because a directly elected local leader is recognizable, accountable, and responsive. Mayoral government is more human. It is also more pragmatic and often nonpartisan—mayors are doers. They might nominally identify with one party, but their typical concerns are fixing street lights, not bloviating about political controversies. They focus on getting things done because they literally have to live with the results. Making the buses punctual, the parks clean, and the city center thrive might not inspire intellectual debate in newspaper opinion pages, but these are the stuff of everyday life: they’re what matters to most people.

It’s not enough simply to have a mayor, though. Mayors must have real power to get things done. And sadly that’s not the case everywhere. In San Francisco, for example, the mayor has very little power to really lead change: responsibility for crucial parts of daily life in the city is split between the mayor, the school district, and even distant Sacramento. The Board of Supervisors exerts a powerful and generally unhelpful check on positive action, and mayors in San Francisco have shied away from taking control of public schools. All of this is completely counterproductive: fragmented public policy responsibility prevents the mayor from implementing the fundamental reforms this astonishingly divided and poorly run city sorely needs. San Francisco puts massive wealth and technological know-how next to entrenched poverty, inadequate social services, and almost comically shambolic public transportation, to name just a few of many egregious local government failures.

As well as having the power to get the basic things done, mayors must also be able to experiment. The decentralization of power is the best way to generate and test new ideas, and in the United States, experimentalism is seen as a critical part of localism. Governors at the state level and mayors at the municipal level pioneer fresh approaches and policies that get adopted elsewhere in America and sometimes the world over. At least in this respect San Francisco provides a positive example in the shape of former mayor Gavin Newsom. In 2004 he presided over the first gay marriage in America. It prompted years of public debate, referenda, and legal wrangling, but the result is one of the biggest social changes in history, with the legalization of gay marriage sweeping across the United States and the world. That incredible, liberating revolution began in earnest with the conviction of an assertive mayor. President Obama’s healthcare plan was based on the reforms of Mitt Romney, Massachusetts’s former governor. Angel Taveras, the mayor of Providence, Rhode Island, won the Mayor’s Challenge (a prize established by Mike Bloomberg) for his initiative Providence Talks, which helps poor children close the ‘word gap’ (poor kids hear millions fewer words during childhood than their more affluent peers, hampering cognitive development).49 In areas as diverse as civil rights, education, the legalization of medical and recreational marijuana, environmental protection, assisted suicide, food safety, the regulation of drones . . . in America change often starts at the bottom and trickles upward. It’s even true in Japan, where the most recent advance in gay rights has come from a district in central Tokyo, Shibuya Ward, which announced in February 2015 that it would give same-sex couples the same legal rights as married heterosexual couples, a major step forward in a country that, although tolerant of homosexuality, grants gay couples relatively few rights.50

For all the problems of US government and politics, America has at least one thing right: for the best results, policy innovation cannot happen in a vacuum, and it cannot all come from the top. This role of local jurisdictions as “laboratories of democracy,” articulated by Supreme Court Justice Louis Brandeis and based on the Tenth Amendment of the US Constitution, is deeply American, and we should encourage even more creativity, innovation, and risk taking at the local level.

POSITIVE DEVIANTS

Sometimes, however, the most human thing government can do is realize that people can solve their own problems. One particular expression of this idea holds great potential: the power of positive deviance.

In 1990 Jerry and Monique Sternin, workers for the charity Save the Children, moved to Vietnam to set up a program to fight child malnutrition in poor rural villages. While conducting surveys to understand the scope of the issue, they grew curious about the handful of children who, despite coming from families as poor as all the others, were perfectly healthy—the positive deviants. What were these kids doing differently? If they could discover behaviors that enabled even the most materially deprived parents to raise healthy children, the implications would be tremendous. They found that all the parents of the positive deviants collected tiny shell pieces from crabs, snails, and shrimp in the rice paddy fields and added them to their children’s diet along with the greens from sweet potato tops. None of the other families did. Both these ingredients, though free and available to anyone for the taking, were commonly considered inappropriate, if not dangerous, for children and so were generally excluded from their diets. In addition, in five of the six positive deviant homes, parents were unusually hygienic. Another difference concerned feeding: most parents fed their kids twice a day (before and after work in the fields), but parents of the positive deviants instructed caregivers to feed their children regularly throughout the day. Though total daily calories were no different, because of the small stomach size of children under three, those eating multiple meals were absorbing up to twice the nutrition from the same amount of food.

These findings enabled aid workers to set up homegrown educational programs that passed on the lessons to other parents. The beauty is that the solution required no more resources than the villagers already had. The answers to the problem were right there in the community with the people who lived there.51 Imagine if we designed our domestic policy programs with a ‘positive deviance’ approach. Imagine if policymakers went to the very populations they were trying to help to find not just problems but solutions as well. Near where I live, in Oakland, California, a project called the Family Independence Initiative (FII) is doing just that.

Part of the problem of poverty, according to FII’s founder Mauricio Lim Miller, is that government defines poor people by their problems, neglecting—and even inhibiting—what they’re already doing well. So at FII, staff are explicitly forbidden to advise families or give them ideas. For ‘helpful’ people (as social workers tend to be), this is a very difficult—sometimes impossible—impulse to control. But stepping back is essential, as it creates a vacuum that families fill with their own ideas. “The best and most culturally relevant solutions are embedded in community,” Lim Miller explains, “and people build and strengthen their social networks when they look to friends and neighbors who have successfully faced similar challenges.”52 FII provides income support on the condition that its clients record and take steps toward achieving basic goals on issues like income, debt, health, education, and relationships and that families meet monthly to discuss their goals. Staff are allowed only to ask open-ended questions like: “What do you think should be done?” and “Do you know of anyone who successfully did what you want to do? Can you ask them for help?”53

According to Lim Miller, “When we respect families to lead their own change and give them access to resources the way middle- and upper-class people access them, they begin to transform their lives.” It wasn’t easy: families used to the traditional (literal) give and take of welfare programs were looking for direction from FII staff. After they received none, they started to share goals, then ideas. Sometimes it was clear that families were on their way to making a mistake.

Jorge and Maria-Elena were a recent immigrant couple, refugees from El Salvador’s civil war. After seven months of meetings, in which Jorge and Maria-Elena had mostly talked about their health and their children’s education, they declared they were going to buy a home. A Spanish-speaking real estate broker had promised them they could buy a house in their neighborhood, and friends would help them meet the down payment. It took all the self-control Lim Miller and his staff could muster to stop themselves from ‘saving’ the couple from what was—obviously to them—not going to be a rosy situation. But rather than intervene, Lim Miller told his staff to track them and simply accept that people had to make their own mistakes. The house was purchased, and the broker made his commission. But Jorge and Maria-Elena were now saddled with a mortgage that was 65 percent of their income. Losing the house seemed all but inevitable. Lim Miller felt bad, and his staff was upset they had let this happen.

But that’s when the couple surprised them. “We had assumed the family was clueless,” Lim Miller recalls, “but at some point they had recognized that they were in over their heads. They had included a refinance clause in their mortgage.”54 With the help of their friends, they renovated the house, increasing its value enough to refinance and bring their mortgage down to 40 percent of their income. “From that point on,” says Lim Miller, “I promised myself that I would try to avoid underestimating people’s ability to solve their own problems.” (The couple still own their home, by the way.)

What makes this story interesting is the ripple effect that an organic, human success story has on others in the community. Now Jorge and Maria-Elena’s friends knew what to do—and what not to do—to buy a house. Two months after the refinancing, the couple’s friends started saving more because they wanted to buy homes of their own. Within eighteen months the four other families in their FII cohort had purchased homes, but without using the predatory broker. By helping families find other families who have succeeded in overcoming problems associated with poverty, Lim Miller is building a self-perpetuating platform from which the working poor walk away not just with lessons learned but with inspiration.

The results are clear: household incomes of the initial twenty-five families increased by nearly a third after two years. Moreover, 40 percent had bought new homes within three years.55 A year after the program’s payments ceased, household incomes were still rising (now 40 percent above baseline).56 After the program extended to San Francisco, the numbers continued to impress: among families there, household income increased by an average of 20 percent in two years, half of school-age children showed improvements at school, and three in five households reduced their debts. The initiative has since expanded to Hawaii and Boston, where incomes increased 13 percent within the first year of operation.57 “When you come into a community that is vulnerable with professionals with power and preset ideas, it is overpowering to families and it can hold them back,” according to Lim Miller. “But the focus on need undermines our ability to see their strengths—and their ability to see their own strengths.”58

Lim Miller found that when people—even if just at the neighborhood level—are given a bit of responsibility over their lives, a bit of power, they wield it well. It’s a lesson we need to apply throughout our system of government. We need to find ways to give power back to the people who can use it most effectively.

WE’VE GROWN USED to government that doesn’t seem to respond to our needs. It seems to worry about the problems of someone else, somewhere else. When it does touch us, it seems endlessly bureaucratic. Government is not and should not be the solution for every problem. But where it is necessary, there is no reason for it to be so aloof and unresponsive. The imperative to shake it up is urgent. Those who serve in the public sector have an awesome responsibility; most are genuinely public spirited and well intentioned. Let’s unshackle them from the structures that inhibit them. Let’s change the accounting structures so we design policy for the long term, solving problems at their root causes. Let’s change the working structures so we design policy based on real people’s needs. And let’s decentralize power, bring it closer to people, and put it directly in people’s hands when we can.

If we do all this, we will certainly make government more human. We might even make it more popular.