Ten years after the term was coined, the behavioral insights field is at a pivotal point. Its first decade saw the approach attract significant resources, generate attention globally, and expand rapidly into a dynamic ecosystem. Robust evaluation means we know it has produced results. But a clear-eyed look around shows that governments still rely mainly on economists for their core policy decisions; every day, people experience services that are far from easy, attractive, social, or timely; and experimentation often feeds into only marginal decisions.1
After this period of rapid expansion, some reflection is needed. The approach has proven to be more than a fad, but the movement is still in flux; its legacy is unclear. To fulfill its potential and have enduring impact, those applying behavioral insights must do three main things over the coming years: consolidate, prioritize, and normalize.
Consolidating is about developing more consistency in the way behavioral insights are applied, confirming the most reliable evidence and theories through replication, and determining how findings vary across cultures and subgroups. Prioritizing is about identifying and pursuing the most valuable new directions for behavioral insights, in terms of both new techniques and new applications. Normalizing is about how to integrate the behavioral insights approach into standard practices for organizations, so it can endure even if attention fades—even if we eventually stop talking about “behavioral insights” as a distinct idea.
We explore these three actions throughout this chapter. In broad terms, we start by discussing more straightforward technical issues about how we can upgrade and optimize existing practices in the future. We then move on to more challenging questions about the tensions inherent in the behavioral insights approach, tough choices its practitioners will have to make, and what will be required to ensure it survives.
The preceding chapters have shown that the behavioral insights approach has developed mainly through practical application, rather than by following a clear blueprint. Moreover, the growth of these practical applications has been rapid: We have not always had the time to reflect on what is being learned. As a result, the label of “behavioral insights” has been attached to a great variety of activities.2 Nudging is an obvious example. Some have argued that the definition of nudge set out by Thaler and Sunstein has gaps and inconsistencies, and there may be truth in that claim.3 But Thaler and Sunstein are also clear about what nudging is not: taxation or bans, for instance. Yet the term “nudge” has become so popular that it does get applied to exactly these kinds of interventions.4 In the same way, “behavioral insights” has been applied to many activities that have little relation to the evidence and principles we have set out in the earlier chapters—and to some initiatives that actively contradict them.
To a certain extent, we should be relaxed about these developments. Some of them represent valuable adaptations to new challenges (and we propose further adaptations in what follows). Since the behavioral insights approach is a pragmatic one, practitioners can leave academics to police the boundaries of concepts like nudging. But there are also pressing reasons to sharpen the way we talk about behavioral insights. The very factors that led to behavioral insights being promoted successfully—intriguing experiments and accessible summaries—also mean that it can be easy to gain a superficial knowledge of the topic while lacking real expertise. The result can be free riders who apply the term “behavioral insights” indiscriminately (often to poor-quality work). Over time, these moves dilute the overall credibility of the approach and confuse those who want to understand or apply it.
One response is to set out a clearer, tighter definition of behavioral insights. Others can then refer to this definition and use it as a guide to shape practices or call out those who are abusing the term. A stronger set of practices can also form the basis for greater professionalization—and we are already seeing the birth of associations that are planning to create standards and certifications. Of course, we are not saying that every application can or should conform with an idealized approach. But we do think there is a pressing need to provide a coherent account of what that ideal is—which was one of our main motivations for writing this book. Over the longer term, we must move beyond lists of individual heuristics and biases, and focus instead on their relationships with one another and how they fit with the dual-process theories that provide the foundations for behavioral insights.
As well as solidifying the scope and approach of behavioral insights, we need a discerning look at the underpinning evidence. Which findings should we be using or discarding? The good news is that this work is already being taken forward in a few different ways. One approach is to combine the data from existing studies and work out which concepts or applications seem to have the biggest effect (known as a meta-analysis). As more behavioral insights studies are published, more of these syntheses are emerging. For example, a recent study of nudges in general found many studies of defaults that had a large effect, but fewer studies and weaker effects for precommitment strategies.5 Another study looked just at nudges to change food consumption.6 It found that nudges that focused on shaping the environment (e.g., placement of food) had larger effects than those that focused on shaping opinions (e.g., nutrition labeling).
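For readers who want to see the mechanics, here is a minimal sketch in Python of the inverse-variance pooling that underlies a random-effects meta-analysis (the DerSimonian-Laird method). The effect sizes and standard errors are hypothetical placeholders, not figures from the studies cited above.

```python
# Minimal sketch: inverse-variance random-effects meta-analysis
# (DerSimonian-Laird). Effect sizes and standard errors below are
# illustrative placeholders, not data from any cited study.
import numpy as np

def random_effects_pool(effects, std_errs):
    """Pool study-level effect sizes, allowing for between-study variance."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(std_errs, dtype=float) ** 2

    # Fixed-effect weights and pooled estimate
    w = 1.0 / variances
    fixed = np.sum(w * effects) / np.sum(w)

    # DerSimonian-Laird estimate of between-study variance (tau^2)
    q = np.sum(w * (effects - fixed) ** 2)
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)

    # Random-effects weights incorporate tau^2
    w_star = 1.0 / (variances + tau2)
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# Hypothetical default-option studies: Cohen's d and standard errors
pooled, se, tau2 = random_effects_pool(
    effects=[0.62, 0.45, 0.70, 0.38], std_errs=[0.10, 0.12, 0.15, 0.09]
)
print(f"pooled d = {pooled:.2f} (SE {se:.2f}), tau^2 = {tau2:.3f}")
```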
The issue with relying on existing studies, as we noted in chapter 5, is that the reliability of some of these studies has been questioned. Several concepts that attracted much attention have since been found to have shaky foundations, including priming, ego depletion, and choice overload. Therefore, ongoing attempts at replicating results are helping us to understand which concepts should be avoided. People applying behavioral insights should both pay attention to these emerging findings and respond the right way. For behavioral insights to have credibility over the long term, we need to be ready to admit that some of our findings may have been one-time results. More than anyone else, behavioral scientists should know that unwelcome information is vulnerable to confirmation bias and cognitive dissonance. Our main commitment should be to “what works,” since the overriding goal is to have an impact on issues in the real world, rather than to maintain theories for their own sake.
Of course, simply thinking in terms of “what works” is not enough. The behavioral insights approach now also has a pressing need to work out what works for whom, when, and in what contexts. One way of answering this question is to get more variety in our data. We already discussed the need to involve non-WEIRD study participants in chapter 5, and more studies are collecting data on cross-cultural variations in psychology.
The other approach is to look for the variety in the data we are already collecting. In chapter 5 we touched briefly on differential effects: when interventions work better for some groups than others. Looking at subgroups within the data we collect on trials can help. While running too many tests on subgroups can lead to spurious results, done properly this can create new, robust findings about how groups respond differently to behavioral insights interventions, allowing greater targeting and better results. As computing power and statistical packages have improved, new data science tools have emerged that can help us do this even more effectively. Predictive analytics, for example, offers powerful new ways of detecting patterns in large amounts of data. These techniques were recently applied to the results of an intervention in Mexico that tried to increase savings rates by sending different SMS reminders. The analysis revealed that the reminder that performed best overall actually made women under the age of 29 much less likely to contribute than the other messages did. In contrast, individuals aged 29 to 41 increased their savings after receiving the same message.7
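As an illustration of the underlying logic (not the Mexico study's actual analysis), the sketch below tests a single, prespecified treatment-by-subgroup interaction. The data and variable names are hypothetical.

```python
# Minimal sketch: testing whether a treatment effect differs by subgroup,
# using one prespecified treatment x subgroup interaction. The DataFrame
# and column names are hypothetical, not the Mexico study's actual data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "saved":    [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1],  # outcome
    "treated":  [1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0],  # got the SMS
    "under_29": [1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0],  # subgroup flag
})

# The coefficient on treated:under_29 estimates how much the treatment
# effect differs for the under-29 group relative to everyone else.
model = smf.ols("saved ~ treated * under_29", data=df).fit()
print(model.summary().tables[1])
```

Prespecifying which subgroups to test, as here, is what separates a robust differential-effects finding from the spurious results that come from trawling the data.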
As the name suggests, predictive analytics can also help make predictions: It can identify groups that are particularly likely to, say, default on a loan, so they can be offered preemptive help. In our own work, we found that applying these techniques to basic, publicly available information allowed us to reliably predict which medical providers were performing inadequately, such that 95 percent of inadequate providers could be identified by inspecting just 20 percent of providers. The vision is that predictive analytics can allow quicker, more sophisticated identification of groups who behave in distinct ways, while behavioral insights can deal with the “final mile”: designing the most effective interventions to influence those behaviors.
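A minimal sketch of that risk-ranking logic might look like the following. The dataset, features, and model choice are hypothetical stand-ins, not the model we actually used.

```python
# Minimal sketch of risk-ranked inspection targeting: train a classifier
# on past inspection outcomes, rank unseen providers by predicted risk,
# and see what share of truly inadequate providers fall in the top 20%.
# Data, features, and model choice here are all hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for provider records; ~15% are "inadequate" (class 1)
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.85], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]

# Inspect only the riskiest 20% of providers: what share of the truly
# inadequate ones does that inspection budget catch?
cutoff = np.quantile(risk, 0.80)
caught = y_test[risk >= cutoff].sum() / y_test.sum()
print(f"Top-20% inspections catch {caught:.0%} of inadequate providers")
```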
Currently, though, we just do not have this level of precision in matching nudges to groups and situations. At least, this level of knowledge is not public—it may be possessed by private companies. Behavioral scientists can draw on the best available evidence, but this still means that there is an element of guesswork or professional intuition in selecting which concepts should be applied to which issues or groups. The problem is that the judgments of behavioral scientists are just as vulnerable to bias as anyone else’s. Hindsight bias is likely to be a particular issue. This is where, after an event, people think that the outcome was much more predictable than it actually was before the event occurred (“I knew it all along!”). When behavioral scientists are running experiments, the danger is that unexpected results swiftly become obvious in retrospect, and people forget how uncertain or mistaken they were beforehand.
One effective way of combating this bias when applying behavioral insights is to force yourself and colleagues—the more, the better—to make a prediction (e.g., “I predict Intervention A will perform best”). Then force those who made the predictions to revisit them once the results are known—perhaps by scheduling a delayed email. Comparing predictions with results has several advantages: it helps you to place the results in context and appreciate how surprising—or not—they are; it provides feedback on how knowledge of the area is advancing (or not); and it helps ensure that our judgments of our abilities are well calibrated. We think prediction rounds should become a standard part of a behavioral insights approach.
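In practice, a prediction round needs little more than a record of each forecast and a score once the outcome is known. Here is a minimal sketch using the Brier score; the names and probabilities are hypothetical.

```python
# Minimal sketch of a prediction round: colleagues record a probability
# that Intervention A beats control before results are known, then the
# predictions are scored afterward. Names and numbers are hypothetical.
predictions = {
    "Aisha": 0.80,   # "I'm 80% sure A will win"
    "Ben":   0.55,
    "Chloe": 0.30,
}
outcome = 1  # 1 if Intervention A did win, 0 otherwise

for name, p in predictions.items():
    brier = (p - outcome) ** 2  # 0 = perfect foresight, 1 = worst possible
    print(f"{name}: predicted {p:.0%}, Brier score {brier:.2f}")
```

Tracking these scores over many trials shows whether a team's confidence is well calibrated, which is precisely the feedback that hindsight bias otherwise erases.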
Correcting for biases in selecting interventions can only mitigate the problem, not solve it. The question of what works for whom and when can only be answered by combining more data of better quality, new methods such as predictive analytics, continued testing of interventions, and improved delivery systems (since the risk of errors increases with more complex targeting). A final note of caution, though: beyond the question of whether we can target interventions at particular groups lies the question of whether we should. Many countries have laws that guarantee individuals equal treatment from the public and private sectors, regardless of their characteristics. If we find, as in the Mexico savings study, that women under 29 respond differently from the general population, it may not be appropriate or acceptable to target them on this basis. What seems to be more acceptable is targeting people based on their previous behavior—for example, if someone has repeatedly been late with their taxes in the past. But the line of acceptability is blurry, particularly as companies are now using predictive analytics to increase consumption in ways that may not benefit consumers.8
As well as securing reliable and nuanced knowledge about what behavioral insights can do, we need to determine the priorities for pushing the boundaries of the field. One need is to broaden its methods and perspectives, in order to give a fuller account of what drives behavior. Some historical context is required to understand why this need exists. When behavioral insights first came on the scene, much of the research on behavior provided to governments and businesses was based on the premise that people gave accurate accounts of what they did, why they did it, and what they would do in the future. A typical project might be to run a focus group, discuss with participants why they act a particular way, and then ask them how they would react to new options (e.g., a revised message or process). The responses would be taken as the basis for policy or commercial recommendations.
As we have seen earlier, findings from behavioral science suggest that this kind of analysis is flawed. These findings led early proponents of behavioral insights to shape their new approach in reaction to this usual way of doing things. The impact of identity, society, and culture as drivers of behavior was displaced by a focus on dual-process theories. There was less asking people about their motivations and reflections, and more observation of their actions. And behavior was installed as the main outcome measure of value, rather than shifts in attitudes or beliefs.
These are generalizations, of course—but the behavioral insights perspective was defined in opposition to these common practices in order to show what was new. Now that it has succeeded, the approach needs to broaden out, loop back, and incorporate elements it previously neglected (as well as new ones). This is already happening: applied behavioral science teams are now much more multidisciplinary than their forerunners. In Canada, for example, the federal team sits within the Impact and Innovation Unit, a group that also prioritizes approaches such as design and co-creation.
It is also increasingly common to pair quantitative approaches, such as randomized controlled trials, with qualitative approaches that can illuminate why something does or does not work.9 Although a good behavioral insights project will be built on an evidence-based theory of change (see chapter 4), sometimes we cannot be sure that the results were produced for exactly the reasons we hypothesized. Conducting qualitative research means we can build a richer understanding: Was the intervention implemented correctly? Did some aspect cause participants unanticipated problems? Even if the outcome was achieved, were there hidden emotional or economic costs for participants?
Take, for example, a project the Behavioural Insights Team worked on with the International Rescue Committee. The goal was to reduce the use of violence against children by teachers in a refugee camp in western Tanzania. On the scoping trips we used a range of methods to identify the drivers of the issue, working with social scientists who had deep expertise in qualitative research, experts in postconflict and displacement trauma, and—of course—culture and language interpreters. As we moved into the solutions phase we worked with designers to develop interventions that worked in context and, since we knew that teachers would need peer-led support, we used network mapping approaches to identify the most influential individuals within the peer group. We also sought advice from experts in cognitive behavioral therapy, since a key part of the intervention was to help teachers identify the triggers for their habitual violent reactions. These collaborations produced a more powerful and nuanced intervention.
At one level, this broadening out can simply be seen as a case of behavioral insights adopting new techniques. But we think that a more fundamental change could happen. There is a criticism that the approach can be rather mechanistic in practice: experts apply a theory to a practical problem, a change is made, and prespecified outcomes of behavior are measured. While this is a powerful process, it can also be quite linear and static, with one active party nudging and a passive one being nudged. There can be limited opportunities for people to provide feedback on interventions to improve them. In the eyes of critics, this setup creates a “psychocracy” of control.10 Therefore, we think a priority is for behavioral insights to incorporate new thinking on more reflexive, dynamic, and nuanced forms of change. Two promising areas are human-centered design and network analysis.
In its most basic definition, design is about how elements are arranged to achieve a particular purpose. Design always deals with things that are tangible, experienced, and present, rather than with abstractions or theories. Objects or services can be designed to create certain feelings or thoughts, or to invite certain behaviors. In this sense, the use of behavioral insights has always incorporated aspects of design: it is concerned with practical arrangements (hence the term “choice architecture”) and is sensitive to how the wording of a letter or layout of a waiting room can have a big impact.
However, the behavioral insights approach to design has been quite top-down: principles of behavior have been used to embed certain ways of acting into the local environment or context. Designing from principles in this way can be successful (see the iMac or iPod), but it has also been criticized for not paying enough attention to users. In response, there has been increasing interest in “human-centered design” (or the closely related concept of “design thinking”), which focuses more on trying to understand a user’s needs, map their experiences, and actively prototype solutions with them. Behavioral insights projects could draw on human-centered design more in three main ways.
First, human-centered design focuses on exploring people’s needs and goals, rather than starting from a target behavior (as in chapter 4). Of course, when dealing with policy there are instances where active steps must be taken to prevent people from fulfilling their goals (e.g., a desire to commit certain crimes). But we think the goal for behavioral insights is to pay more attention to people’s needs, while also using its tools to understand the strategies people are using to try to fulfill these needs.
A very simple example concerns “desire lines.” These are the informal paths, created by footfall erosion, that show the shortest or most desirable routes taken by people—which may not be the ones provided by an official design. Some designers have started to see these paths as an opportunity, rather than a nuisance. For example, universities such as the University of California, Berkeley, and Virginia Tech (Virginia Polytechnic Institute and State University) have waited to see which routes people were taking to cross grass areas, before paving the routes.
The same principle applies in more complex situations. For example, policymakers in the UK have recently become concerned about capacity problems in the country’s hospital emergency departments. These problems are perceived to be caused by people visiting emergency facilities with only minor ailments. The system planners have created alternative options (like “urgent care centers”) that deal with such ailments in a more efficient way. However, they are not very popular. In this situation, policymakers could use behavioral science to persuade or nudge people to use the “correct” facilities. But a more human-centered approach would look at the “desire lines,” and see that attending emergency departments is completely reasonable. People are confused by the role of the urgent care centers, which are also not prominent and not always open (unlike the emergency department). Therefore, they are using a heuristic for navigating the system—“go to the emergency department”—which is working for them, and which means behavior will be difficult to change. This insight points instead toward adapting the service so people don’t have to change their behavior—perhaps by co-locating the emergency and nonemergency care. In other words, there could be a greater recognition of people’s agency and more attempts to design around existing behaviors, rather than attempting to change them.
Second, human-centered design places greater emphasis on people’s own interpretations of their beliefs, feelings, and behaviors. The behavioral insights approach does emphasize the need for immersive techniques like in-depth observation, as well as trying to use services oneself. But it has tended to be more skeptical of self-reports, given the automatic nature of behavior and the prevalence of cognitive blind spots. (However, one of the most famous books on human-centered design admits that “people themselves are often unaware of their true needs, even unaware of the difficulties they are encountering.”11) We think there is room to give more weight to how people view their own experience, and to broaden out from the focus on revealed behavior.
Finally, human-centered design encourages active participation from users (and staff members). We noted before that there are some applications of behavioral insights that work by disrupting the Automatic System, making people pause and engage their Reflective System. For example, the Becoming a Man program for crime reduction appears to work by “helping youth slow down and reflect whether their automatic thoughts and behaviors are well suited to the situation they are in, and whether the situation could be construed differently.”12 The program, based in Chicago, reduced arrest rates by 28–35 percent and increased graduation rates by 12–19 percent.
The obvious next priority is to combine these kinds of approaches with human-centered design to help people design or redesign their own environments. This relates to Thaler and Sunstein’s vision that nudging can be used widely by “workplaces, corporate boards, universities, religious organizations, clubs, and even families.”13 But it also taps into the facet of behavioral science that claims individuals usually use heuristics effectively and can be empowered to use them better.14
Of course, there are many environments that individuals struggle to change on their own, indicating that a change in politics or policy is needed. As we discussed in the last chapter, there is a particular case for using the tools of deliberative democracy to discuss policies that draw on behavioral science. But this kind of engagement has wider potential benefits than just debating or approving certain policies—it is also important for a healthy democracy and civic agency.15 And, despite what critics allege, the use of behavioral insights can actually help build that agency.
At the most basic level, behavioral insights can be used to nudge people to take part in civic activities in the first place. Although this nudge may be operating on the Automatic System, the goal is to ensure that someone takes part in an activity that engages their Reflective System. Then, behavioral insights can be used to design better deliberative mechanisms. Many of these activities take place in groups, but behavioral science shows that groups are vulnerable to issues like group polarization, availability cascades, and self-censorship.16 We can’t just assume that good reasoning prevails in deliberative settings—but evidence-based design makes it more likely.17
Finally, behavioral insights can spark wider engagement. Let’s return to the example of food consumption from chapter 1. If people actively engage with the evidence that our eating is heavily influenced by features in our immediate environment, they may start requesting policies that address those forces. Indeed, there were signs that this happened in the deliberative forum on obesity in Victoria, mentioned in the preceding chapter, which produced fairly radical proposals. Behavioral insights could have an additional role here, by finding new ways to nudge governments or legislators themselves—and there is evidence that these attempts can actually produce results.18
This potential is not restricted to the public sector: People could also try to nudge companies. For example, the UK saw the creation of a Fair Tax Mark, which was awarded by a not-for-profit organization to businesses that did not practice any form of tax avoidance. The Fair Tax Mark is a classic nudge, in that it provides a prominent signal to consumers that can shape both consumer and business behavior, without mandating change. But this nudge was created through self-organization to achieve a particular policy goal.
Human-centered design can go a long way toward engaging behavioral insights with human agency more deeply, moving it further away from a mechanistic worldview, and opening up new frontiers. We think that network analysis holds similar promise. Network analysis concerns how behaviors spread through interactions. The behavioral insights approach has not neglected this question, owing to its roots in social psychology, but it needs to incorporate more of the latest evidence on the topic. While there are many varieties of network analysis, we think that a complex adaptive systems (CAS) approach could be particularly useful. In short, a CAS is a dynamic network of many agents who each act according to individual strategies or routines and have many connections with each other. They are constantly both acting and reacting to what others are doing, while also adapting to the environment they find themselves in. Because actors are so interrelated, changes are not linear or straightforward: Small changes can cascade into big consequences; equally, major efforts can produce little apparent change.19
An important point is that coherent behavior can emerge from these interactions—the system as a whole can produce something more than the sum of its parts. While we accept that this is true for things like markets, forests, or cities, we are only beginning to understand just how pervasive network effects are. For example, a recent study showed how partisan policy divisions may actually be produced by chance, not prior conviction.20 This US-focused experiment recruited participants who identified as either Democrats or Republicans. Participants were placed into one of ten online “worlds,” in which they were asked whether they agreed or disagreed with twenty statements. These twenty statements were about public issues, but had been constructed so that they did not tap into preexisting partisan fault lines. Rather than asking about abortion or gun rights, the questions asked whether participants agreed with statements like “the current lottery based juror system should be replaced with full-time licensed professional jurors” or “social media sites have a positive influence on people’s daily lives.”
The twist was that in eight of the worlds the participants were told whether mostly Democrats or Republicans were supporting the measure. The political alignment of others on the issues was made visible. But in two of the worlds, participants were not told about the views of others—they were just asked whether they agreed with the statements. The results were striking. When people were not aware of the views of others, there were hardly any differences between Democrats and Republicans in terms of support for the measures. When people could see how others were rating the statements, a strong partisan divide opened up—people aligned with their party.
But here’s the fascinating part. The topics that fell into the “Democrat” or “Republican” camps varied greatly between the eight different worlds. Sometimes Republicans ended up supporting a new juror system, and sometimes they opposed it. Rather than views reflecting some preexisting ideological stance, partisan alignment seems to be created by “a tipping process that might just as easily have tipped the other way.” The same system effects that lead certain songs or books to become unexpectedly popular21 also drive some apparently fundamental and intractable problems in society.
Network analysis (and CAS in particular) can help the behavioral insights approach by making it less static, individualistic, and mechanistic. It offers new ways of understanding the impact of interventions that may not be captured by the linear process of RCTs. For example, an experiment in Austria showed that mailing different messages to 50,000 potential television license fee evaders increased their compliance.22 This is a classic intervention along the lines of our example in chapter 4.
But the authors dug deeper and did follow-up analyses. Using location data, they mapped who lived near whom. They found that a household’s compliance increased if its neighbors in the same network received a letter—even if the household did not receive one itself.23 In other words, the behavioral effects of the intervention were spreading through social networks, as people told their neighbors about it. In fact, these spillover effects were about as large as the impact of receiving the letter itself: a 5 to 7 percentage point increase in compliance.
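The underlying logic of such a spillover check can be sketched simply: among households that received no letter, compare compliance for those with treated neighbors against those without. The data here are hypothetical, and this is a simplification of the idea rather than the study's estimator.

```python
# Minimal sketch of a spillover check: among households that received no
# letter, compare compliance for those with at least one treated
# neighbor against those with none. The DataFrame is hypothetical and
# this is a simplified illustration, not the study's actual estimator.
import pandas as pd

df = pd.DataFrame({
    "got_letter":        [0, 0, 0, 0, 0, 0, 1, 1],
    "treated_neighbors": [2, 0, 1, 0, 3, 0, 1, 0],
    "complied":          [1, 0, 1, 0, 1, 1, 1, 1],
})

untreated = df[df["got_letter"] == 0]
exposed  = untreated[untreated["treated_neighbors"] > 0]["complied"].mean()
isolated = untreated[untreated["treated_neighbors"] == 0]["complied"].mean()
print(f"spillover estimate: {exposed - isolated:+.2f} "
      "(compliance gap among untreated households)")
```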
The behavioral insights approach needs more of this kind of analysis. Many RCTs would not measure these kinds of spillovers and would focus only on the individuals who received an intervention. In fact, often the whole point of an RCT is to protect a control group against this kind of “contamination.” Instead, we need to draw more on network analysis to understand how behaviors spread—and how this varies according to different types of behaviors—in order to maximize the impact of interventions. We need “nudge plus networks.”24
There is a final, broader way that CAS can help. As we noted earlier, behavioral insights projects often take a fairly top-down and linear approach to designing and implementing solutions. However, in complex adaptive systems there may not be a straightforward link between causes and effects. As Herbert Simon observed in 1969, this means we need a different and less directive approach: “When we come to the design of systems as complex as cities, or buildings, or economies, we must give up the aim of creating systems that will optimize some hypothesized utility function.” Instead, we need to understand how we can create the conditions so that individuals and organizations can interact in a way that means the desired behaviors emerge from the system indirectly.
We have made a start. There is increasing interest in behaviorally informed regulation that focuses more on reshaping the “rules of the game” for public good, rather than trying to target specific behaviors.25 But the truth is that many behavioral scientists join policymakers in suffering from an “illusion of control.”26 This is where we overestimate how much direct impact a policy or intervention will have over events. We are particularly likely to do this if we are dealing with complex adaptive systems where change is not linear—so there is a pressing need to understand how behaviors emerge in those situations.
We have written about the need to consolidate the knowledge base for the behavioral insights approach, and the new methods that the approach should prioritize. We want to end by returning to the question at the start of the chapter: how can the behavioral insights approach fulfill its potential?
One obvious route is for decision makers to apply behavioral insights to a range of new problems. The development of driverless vehicles, for example, may not seem to have an obvious connection to human behavior. However, governments and developers around the world are coming to a different conclusion. When Singapore’s Ministry of Transport set up its Committee on Autonomous Road Transport, for example, it invited our colleague Rory Gallagher to be a member.
Rory’s role was to identify (and propose solutions to) challenges that might arise as a result of the pesky humans. For example, the illusion of control might lead drivers to think they have more sway over whether they get from A to B safely than they actually do, which could lead them to intervene in counterproductive ways. Similarly, one of the most famous examples of superiority bias—the tendency to think oneself better than average—is in self-assessment of driving skills.27 If most people think they are better than average at driving, then will they be willing to relinquish control to a computer when the rubber, literally, hits the road? We think that the growth of behavioral insights means that decision makers are less likely to see these kinds of problems as purely technological ones.
But not all problems have the same importance—and the criticism we outlined in chapter 5 is that the behavioral insights approach has not had enough impact on the big issues in “upstream” strategic decision making, even though it could contribute much there. To generate that impact, we need to recognize that the solution here is not a technical one that improves the methods of behavioral insights. Instead, we need to see this as a political issue.
Research shows that, in reality, behavioral insights teams and experts are acting as “knowledge brokers” and entrepreneurs.28 The successful ones recognize the often chaotic nature of turning ideas into practice (whether in the private or public sector), and look for windows of opportunity where they can prove their value. They are looking to develop networks, positions, and tactics that establish their authority and credibility among decision makers. Thus, the real need is to develop the skills of knowledge brokering in order to get the opportunities to influence “upstream” decision making.
The challenge is that this need creates tension between the three pillars of the behavioral insights approach that we outlined in chapter 1: the commitment to pragmatism gets pitted against the commitment to evidence and evaluation. Always inflexibly pushing for a randomized trial may result in behavioral experts being shut out of upstream decision making. So may failing to recognize that a comprehensive evidence review is not always possible, and that policymaking has to incorporate both evidence and political values. On the other hand, adapting behavioral insights principles too radically cuts against the need for coherence that we discussed earlier in this chapter.
Therefore, the way to build behavioral insights into upstream decision making is to create a productive balance between rigor and pragmatism. Pragmatism ensures the seat at the table, while rigor gives a better chance of successful outcomes. The longer-term goal is that having such a seat at the table will start to improve the way policy and strategy are made in general. The truth is that there are many biases that affect policymaking itself, and we already have proposals that could mitigate them—the key is to get the opportunity to implement them.29
This brings us to our final point. Over the past ten years, there has been increasing interest in the idea of behavioral insights. However, at some point, this interest may start to ebb and attention may go elsewhere. Other ideas may come along, with their own compelling claims. People may stop deciding to “bring in” behavioral experts, as Rory was for autonomous vehicles. If this is true, then the immediate priority should be to integrate behavioral insights into the standard way that policy is made or organizations are run. That would mean that the practices are resilient to people no longer asking for “behavioral” solutions.
In fact, you could argue that this is the ultimate goal of the behavioral insights approach. The idea of a “behavioral” solution or approach should become meaningless, since the principles will have become absorbed into standard ways of working. Since, as we said in chapter 1, most policies or services concern behavior, then this is just about improving the way that central function is performed—it is not some kind of optional extra. Rather than talking about “behavioral public policy,” we would just refer to public policy (or corporate strategy) done better. In a sense, stopping talking about behavioral insights may actually be a sign that its true promise has been fulfilled. Until then, there is more to do.