Two

OPTIMIZING


IN THE EARLY 2000s, Stockholm’s traffic congestion was getting out of control.

Commute times had exploded. Delays and frustrations were mounting. The Swedish capital’s productivity ground to a halt during rush hours. One obvious way to resolve the challenge was to boost capacity by building another bridge. This strategy had worked before—Stockholm already had dozens of bridges—and, after all, it was the “Venice of the North.” Instead, Stockholm’s city officials paused for reflection and then hired an unusual group of consultants: engineers from IBM.

For IBM, the project was more like planning a rescue mission than preparing a traffic angioplasty to unclog Stockholm’s arteries. To tackle the unknown, the IBM crew started installing sensors around the city to monitor traffic. IBM deployed 430,000 transponders to accrue data and collected 850,000 photographs. Using this information, IBM produced a total systems model by mathematically analyzing all traffic nodes and seemingly unrelated bottlenecks. The result of this detailed effort convinced Stockholm officials that they shouldn’t build any new bridges or roads. Instead, they should start charging commuters who wanted to use existing bridges and highways during peak hours.

The results of congestion pricing were astonishing. During the trial run of this new scheme in 2006, traffic congestion in Stockholm decreased between 20 and 25 percent. The average wait time for commuters plummeted by one-third—and in some cases by almost one-half. Public transportation regained popularity. The scheme helped remove a hundred thousand cars from the road. Levels of carbon and other particulate emissions plunged. In 2007, Stockholm voters passed a referendum making the camera-based toll system permanent. Sweden’s successful experiment garnered attention. Cities in Asia, Europe, and North America began to contemplate sophisticated congestion-pricing approaches.

* * *

TRAFFIC JAMS are like leaky buckets: with more supply, they only get worse. In addition, the carrying capacities of roads are fixed, so handling extra cars during peak hours presents a nearly insurmountable challenge.

A recent urban mobility report from the Texas A&M Transportation Institute noted that the fifty-six billion pounds of annual carbon emissions in big cities of the United States during peak hours is “equivalent to the liftoff weight of over 12,400 Space Shuttles with all fuel tanks full.” The nearly three billion gallons of fuel consumed to create those emissions is “enough to fill four New Orleans Superdomes.”

On an individual level, these figures become dramatic. The personal cost to the average commuter has more than doubled, as has wasted fuel, over the last thirty years. Commuters, the report noted, “spent an extra 38 hours traveling in 2011, up from 16 hours in 1982.” This equates to nearly five lost workdays spent in traffic.

“Today we have innumerable road sensors and cameras on the ground that automatically upload data so information can be shared and analyzed in nearly real-time,” writes Naveen Lamba, the global industry lead for IBM’s Intelligent Transportation products. The sensors and transponders that IBM used to support its analyses proved indispensable in mapping traffic flow. “Once data is five to seven minutes old, it’s too late to make any changes that will reduce congestion,” Lamba adds. “Once a commuter is stuck in gridlock, it’s too late to find an alternate route.” Forecasting traffic demand is an added challenge; even real-time data are often insufficient.

Building our way out of traffic congestion is not always a viable option. “We have to learn to get more productivity out of existing assets using technology,” Lamba declares. In Stockholm, IBM took a modular approach in trying to understand every piece of the system that may be directly or indirectly contributing to the traffic jam. The outcome was to create a new electronic infrastructure: car tags linked to an electronic or convenience store account for payment. This approach influenced public behavior and the very social process of commuting. The revenue that came from the tolls could be applied to the upkeep of a city’s highway system and other activities. Peak pricing, in this case, was not a point solution but a platform solution that simultaneously tackled a number of other challenges. A leaky bucket became an ocean of opportunities.

A solution that doesn’t work in one setting may be transformational in another. Unlike Stockholm, a village in Africa might benefit from an additional road or a bridge to increase public access to services and opportunities. With a decent road, people who never thought about getting a vehicle might choose to buy one. A road means more mobility, and more mobility means more commerce.

Traffic congestion is a function of human behavior. It comes in the form of latent preferences wired in each of us—how we choose to travel from one place to another. As a result, public behavior plays a pivotal role in the success or failure of infrastructure design projects or policies. By and large, that’s because traffic, like any social arrangement, is a complex system composed of multiple systems that interact with one another with no main controller. Their collective effects are by nature nonlinear, often leading to unpredictable behavior called emergence. Even an infinitesimal change (a single orange traffic cone) can lead to an unpredicted impact (a freeway traffic jam) across a system of systems in which roads are only one part.

On this topic, Vinton Cerf, the coinventor of the Internet, offers an astute perspective. One day Cerf was trying to pour peppercorns into a grinder using a funnel. “A few peppercorns got through and then they got stuck. If I had dropped them in one at a time, there would have been no problem,” Cerf observes. “But because I poured several of them into the funnel, the emergent property in this case was congestion.”

This basic understanding of the complex, large-scale effects (such as behavior change) arising from simple rules (peak pricing) is a helpful notion for optimization. “The thing is that you can’t create congestion with one peppercorn,” Cerf adds. “And the most interesting part is that there isn’t anything about a peppercorn that will explain to you much about its congestive properties, other than maybe the fact that it’s due to friction.”

* * *

ANYONE CAN CLAIM to optimize something in words, but in practice it’s a different story. Optimization is akin to attending a gym and committing to repetitions for strength training. How can we get the best results out of a workout in the shortest period? How can we continually make something better?

Optimization has two basic components. The first is an objective: maximizing or minimizing an outcome variable that is usually a function of something else. Gribeauval’s optimization objective was to inflict maximum losses on his enemy, with a broader goal of winning the war. The second is a constraint: the limitations to which the objective is subjected. Operations researchers, who use models and study ways to improve efficiencies, would consider Gribeauval’s goal a classic “weapon-target assignment problem” for which they would develop an algorithm. With limited time and resources, how could Gribeauval find the right set of tools—or combinations of tools—and position them optimally to achieve his objective?
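To make those two components concrete, here is a minimal, hypothetical sketch in Python of a weapon-target assignment problem of the sort operations researchers describe. The damage values, the two batteries, and the three targets are invented for illustration, and the sketch ignores real-world subtleties such as diminishing returns when several batteries fire on the same target.

```python
from itertools import product

# Hypothetical expected damage if a given battery fires on a given target.
# Rows are batteries, columns are targets; the numbers are purely illustrative.
expected_damage = [
    [0.6, 0.3, 0.5],  # battery A
    [0.4, 0.7, 0.2],  # battery B
]
targets = range(3)

best_assignment, best_value = None, -1.0

# Objective: maximize total expected damage.
# Constraint: each battery fires on exactly one target.
for assignment in product(targets, repeat=len(expected_damage)):
    value = sum(expected_damage[battery][target]
                for battery, target in enumerate(assignment))
    if value > best_value:
        best_assignment, best_value = assignment, value

print("Battery-to-target assignment:", best_assignment)  # (0, 1)
print("Total expected damage:", round(best_value, 2))    # 1.3
```

Even this toy version shows the structure: an objective to push as high as possible, and constraints that rule out most of the imaginable assignments.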

Engineers use a variety of modeling techniques to arrive at approximate representations of reality that, by their nature, are not exact. Models are of two basic types: implicit and explicit. In implicit models, “assumptions are hidden, internal consistency is untested, their logical consequences are unknown, and their relation to data is unknown,” as Joshua Epstein, a professor at Johns Hopkins University, describes. In this regard, “when you close your eyes and imagine an epidemic spreading, or any other social dynamic, you are running some model or other. It is just an implicit model that you haven’t written down.” In explicit models, the assumptions, empirical caveats, and equations are clearly presented for analysis and verification. With one set of assumptions, “this sort of thing happens. When you alter the assumptions, that is what happens,” Epstein adds.

Among the many benefits of modeling, Epstein highlights, are to “demonstrate trade-offs and suggest efficiencies,” or even to “reveal the apparently simple to be complex, [and the complex to be simple].” Models expose areas where more data are needed and reveal what work needs to be done. Collecting data on road use patterns from all corners of Stockholm strengthened IBM’s model and its eventual decision to recommend congestion pricing.

There are no perfect models for optimization. Every model is limited by its assumptions and criticized for reducing reality to simple equations. “Simple models can be invaluable without being ‘right,’ in an engineering sense,” says Epstein. “Indeed, by such lights, all the best models are wrong. But they are fruitfully wrong. They are illuminating abstractions.” But the primary purpose of using models to support optimization is to develop a structure that makes constraints and trade-offs clear.

While models are valuable, they also mislead at times. A familiar fallacy among engineers is to assume that a model working well at one level will also function the same way at a different scale. Not necessarily. In fact, emergent properties in complex systems are almost always a function of scaling. Construction engineer John Kuprenas and architect Matthew Frederick derive this insight from the Victorian astronomer Sir Robert Ball:

An imaginary team of engineers sought to build a “super-horse” that would be twice as tall as a normal horse. When they created it, they discovered it to be a troubled, inefficient beast. Not only was it two times the height of a normal horse, it was twice as wide and twice as long, resulting in an overall mass eight times greater than normal. But the cross-sectional area of its veins and arteries was only four times that of a normal horse, calling for its heart to work twice as hard. The surface area of its feet was four times that of a normal horse, but each foot had to support twice the weight per unit of surface area compared to a normal horse. Ultimately, the sickly animal had to be put down.
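The square-cube arithmetic behind the super-horse can be checked in a few lines. The sketch below assumes nothing more than simple geometric similarity—areas grow with the square of the scale factor, volumes and mass with the cube—and reproduces the numbers in the passage.

```python
def scaled_horse(scale: float) -> dict:
    """How key quantities change when every linear dimension is multiplied by `scale`.

    Assumes simple geometric similarity: areas scale with scale**2,
    volumes (and therefore mass) with scale**3. Purely illustrative.
    """
    return {
        "height": scale,                                 # linear: 2x
        "mass": scale ** 3,                              # volume-driven: 8x
        "artery_cross_section": scale ** 2,              # area-driven: 4x
        "foot_area": scale ** 2,                         # area-driven: 4x
        "load_per_foot_area": scale ** 3 / scale ** 2,   # 8x weight on 4x area = 2x
    }

print(scaled_horse(2.0))
# {'height': 2.0, 'mass': 8.0, 'artery_cross_section': 4.0,
#  'foot_area': 4.0, 'load_per_foot_area': 2.0}
```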

Models are support systems. They are a means to aid decisions; they are not the final decisions themselves. By illuminating the pluses and minuses surrounding the final objective, good models provide a reality check for optimization. In IBM’s case, the primary objective was to minimize traffic congestion in Stockholm, which turned out to be a function of automobile usage during peak hours. The constraints included fixed road capacity, the local government’s budget, and people’s latent preferences. A natural starting point to fully understand and optimize such a complex system was to build a model.

* * *

IN THE EARLY 1940s, the U.S. Post Office Department faced a crisis. A large number of postal workers had left the department to serve in the military during the Second World War. Annual mail volume was skyrocketing (it reached forty-five billion pieces by 1950), thanks in large part to the explosive growth in direct-mail advertising over the preceding two decades. How could the department optimize postal delivery around the country?

Pressures relating to cost, efficiency, accuracy, and delivery schedule—and perhaps the institution’s future—led the postal department to take an engineering approach. The fascinating result is one of the great strengths of today’s U.S. postal system, and it has benefited the rest of the world.

The system’s designers segmented the United States into “zones,” each with a distinct five-digit identification number. And so it was that in 1963, following two decades of research and engineering, the postal service announced its implementation of the Zone Improvement Plan code. The ZIP code established a whole new system for connecting senders and recipients of mail.

In a process emblematic of modular systems thinking, the developers of the ZIP code divided the country into ten sections numbered 0 to 9. They started on the East Coast, assigning Maine the number 0, and moved across the country westward. ZIP codes in New York and some of its neighboring states started with 1; those in the Washington, DC, area began with 2; the West Coast states with 9; and so on. Other numbers in the code further parsed these zones according to central post office hubs and the nearest post office in a particular region.
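The nesting the designers built into those five digits can be read off mechanically. Here is a small Python sketch of that hierarchy; the handful of first-digit regions is drawn from the examples above, while the real assignment tables, of course, cover every state and sorting facility in far more detail.

```python
# Illustrative first-digit regions taken from the examples in the text;
# the actual tables enumerate every state and territory.
REGION_BY_FIRST_DIGIT = {
    "0": "Northeast (e.g., Maine)",
    "1": "New York and some neighboring states",
    "2": "Washington, DC, area",
    "9": "West Coast states",
}

def parse_zip(zip_code: str) -> dict:
    """Split a five-digit ZIP code into its nested zones.

    Digit 1    -> broad national region, 0 in the East to 9 in the West
    Digits 1-3 -> routing to a central post office hub for that area
    Digits 4-5 -> the nearest local post office or delivery area
    """
    if len(zip_code) != 5 or not zip_code.isdigit():
        raise ValueError("expected a five-digit ZIP code")
    return {
        "region": REGION_BY_FIRST_DIGIT.get(zip_code[0], "another region"),
        "central_hub_prefix": zip_code[:3],
        "local_office_code": zip_code,
    }

print(parse_zip("20500"))  # 20500: the White House's own ZIP code
```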

Specialized machinery was developed to help sort mail for each zone. It took time to refine the accuracy, since the process included a human element. An operator had to key the ZIP code for each envelope or package into a sorter machine. The keying process led to typos and errors. A letter, for example, that was supposed to go to Chemult, Oregon, could end up being routed to Custer, South Dakota, and subsequently rerouted to the postal hub in Denver, Colorado.

To twenty-first-century ears this system might sound inefficient, but in the 1960s, ZIP codes were “revolutionary because of the idea that the mails were being processed based on a number code,” says Nancy Pope, a technology historian at the Smithsonian National Postal Museum. The ZIP codes were also helpful in streamlining mail sent to U.S. cities with common names, like Greenville or Salem or Springfield.

Before mechanization, postal staff members were hand-sorting the pieces. In that situation, “even if you’re really good at it, you’re not going to do more than sixty letters a minute,” Pope says. “I mean, that would be topping out as the best sorter on record for the postal service.” On average, most workers managed twenty to thirty pieces a minute, and because these processes were manual, errors were possible. With the advent of automation, the game completely changed. Machines were able to process up to two thousand pieces a minute, if not more, and therefore a system like the ZIP code laid the groundwork for improved efficiency throughout the postal enterprise.

Federal buildings—the U.S. Capitol, the White House, and the Pentagon, for example—were assigned their own special ZIP codes. Other countries soon began to adapt the ZIP code idea to create their own versions of numeric or alphanumeric postal index codes. ZIP codes became an iconic engineering solution—and an integral part of commerce—that made dramatic improvements in the efficiency of the postal service, even as it reduced costs and errors by integrating new postal technologies. The development of the ZIP code was the result of master planning—a long-term strategy that’s typical of many successful (and unsuccessful) large-scale engineering, architectural, and military projects. Sometimes a deliberate, carefully planned deconstruction is required for the creative reconstruction of a system.

Not everyone was thrilled about the implementation of ZIP codes. People didn’t like having to remember five numbers. The three-digit area codes for telephone numbers had also come into effect. And businesses had started to require Social Security numbers for income tax purposes. It all appeared to be a numeric conspiracy—even a communist plot. A systems optimization concept like ZIP codes required a massive national campaign to persuade people to embrace it, and that included a cartoon character called Mr. Zip. Musical legend Ethel Merman lent her brassy voice for a promotional song: “Welcome to Zip Code . . . learn it today. Send your mail out . . . the five-digit way.”

The impact of ZIP codes extends far beyond postal applications. Online enterprises now routinely capitalize on the twentieth-century postal engineering infrastructure to collect demographic, behavioral, and other information about their customers. These codes have become a requisite feature for megaprojects such as the census, direct-mail campaigns, targeted micromarketing applications—what some praise as “recommender systems” and others critique as “consumer espionage”—and for authorization at gas station pumps and supermarkets. In the United Kingdom, for example, the term “postcode lottery” describes the inequality in the delivery and quality of health care and other public services—the idea that where people live may define the standard of services they can expect to receive.

Engineering, as it should be clear by now, is not only about technology—that is, replacing manual labor with machinery. In equal or more parts, it’s about strategy. The development of ZIP codes—similar to how IBM saw a structure in the traffic mess and used it to change public behavior—was a simple yet profound strategy in optimization. It helped solve a practical problem more than a technical one.

Scholars and practitioners have used a variety of terms to argue about the distinction between technical problems and practical problems. Examples include “problems” and “messes”; “tame problems” and “wicked problems”; “high grounds” and “swamps”; “hard problems” and “soft problems.” These terminologies signal a basic split. In the first case of each example, something well defined needs to be solved. In the second case, the issue being tackled cannot be readily solved using only equations or analytics but also requires an understanding of human and other factors, which often contribute to emergent properties. Both ZIP codes and congestion pricing are examples in which engineers blended technical and social factors in practice.

We’ll now see how a major Internet products firm applied this sort of optimization to mapping and cataloguing our world.

* * *

GOOGLE’S GOAL is ambitious: organizing the world’s information. The company’s New York City operations are housed in a 1930s-era former Port Authority building in the Chelsea neighborhood. Google’s primary-color theme radiates the possibility that it could also be a day care center for grown-ups. Past the clickety-clack of keyboards, and the abundance of free food in the pantries, sits the office of Alfred Spector, vice president of research and special initiatives. He likes to use Google Maps to monitor traffic intensity and plan his trips. “I’ve only missed the train out of Grand Central to Pelham at most three times in the last six years,” Spector says confidently.

Spector and his colleagues operate with the belief that every piece of information has an expiring window of opportunity. The trick is in seizing the data at the right time and in the right context so that the information becomes useful. Near-real-time technologies like Google Maps are governed by the concept of continuous optimization. “We get very effective traffic data now in New York with red, dark red, green, yellow indicators; it’s quite realistic,” Spector says. “We might be reducing peak traffic on New York City roads by directing people to better approaches.”

The challenge of redirecting traffic in response to congestion or an incident is not new. Operations researchers recognize it as a resource reallocation problem that surfaces especially during an emergency: use the evacuation routes for people to disperse efficiently, and the ingress routes for responders to enter the affected area. Google’s innovation has been to direct the power of information to users so that they can make data-aided, adaptive choices.

Spector’s colleagues have written that, when trying to build something new, like Google Maps, “instead of debating at length about the best way to do something,” they “prefer to get going immediately and then iterate and refine the approach.” This is to help support Google’s primary mission: “solve really large problems.” Consider this fundamental challenge: there are about 50 million miles of paved and unpaved roads in 195 countries. “Driving all these roads once would be equivalent to circumnavigating the globe 1,250 times—even for Google, this type of scale can be daunting,” the project engineers write.

Google’s engineers began their project by acquiring visual data from around the world—thanks to recent developments in street-level panoramic imagery and user-provided photographs. The next step was to develop a large-scale systems model that “includes detailed knowledge about one-way streets and turn restrictions (such as no right turn or no U-turn),” the engineers explain. Using this information, Google then converted the position of the sensor embedded in its camera—nowadays also in our phones—into accurate road-positioning data through a method called pose optimization. The process was driven not by a single algorithm, but by a group of interconnected tools.

Google engineers branched out to auction algorithms—typically used for determining the best offer on an item with concurrent bidders—to predict the real-time traffic demand for commuters simultaneously interested in taking the same route. They used image-processing techniques for creating “depth maps” to encode 3-D data on distance, orientation, and other local information such as roads, sidewalks, buildings, and construction work. They relied on remote sensing and pixel-level analyses of satellite images to help generate multiple views of any location—whether the Eiffel Tower or an abandoned mining town in the Alaskan wilderness. The engineers collectively made the best of these tools—and they continue to do so with several others—to create a better value for the users of Google Maps.

“The idea of driving along every street in the world taking pictures of all the buildings and roadsides seemed outlandish at first,” the engineers add, “but analysis showed that it was within reach of an organized effort at an affordable scale, over a period of years.” From Spector’s viewpoint, it was a basic question of cost-effectiveness. Google Maps started out as an engineering trade-off concerning efficient logistics—that is, could the mapping be done?—but then was followed up by an economic argument on the potential market for the application.

“It would turn out to be feasible,” Spector says.

* * *

BEING DATA DRIVEN is a precondition for optimization. This idea has influenced every industrial sector. In the telecommunications industry, for example, in recent years the “volumes that are going over our mobile data networks are up 25,000 percent and they are doubling every year still,” notes Randall Stephenson, the CEO of AT&T Corporation. In the airline industry, a Boeing aircraft flying from London to New York pumps out 10 terabytes of operational data every thirty minutes during the trip.

Being data driven is also only one part of optimization. Understanding user needs is another critical component. Consider this scenario from Norman Augustine, retired CEO of Lockheed Martin Corporation: Suppose you did a survey and asked passengers what they would like in a new airplane. Say you found out that they would like to get to their destinations sooner. For an aerodynamics expert, it may become an issue of the airplane needing to fly faster. For a systems engineer the approach is different.

Using modular thinking, a systems engineer would break down the entire travel process into its component parts. Flying in the airplane is one part of the system among many. There’s the getting to the airport part, finding a parking spot part, navigating the terminal part, ticketing part, processing the baggage part, waiting for security part, waiting to board part, boarding part, and getting to the final destination part. All these parts—and several more—contribute to the speed, efficiency, and performance of the entire system. The systems engineer can try to optimize individual parts while paying attention to trade-offs and constraints. With modular thinking, the solutions might come down to how to get through security faster, how to improve the boarding process, and how to retrieve luggage quickly.
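As a back-of-the-envelope illustration of that modular view, the sketch below adds up invented stage times for a door-to-door trip. The point is not the numbers, which are made up, but that the flight itself is only a fraction of the total, so the biggest gains may lie in the slowest ground-side stages.

```python
# Invented, illustrative stage times (in minutes) for a door-to-door trip.
journey_minutes = {
    "drive to the airport": 45,
    "park and reach the terminal": 20,
    "ticketing and bag drop": 15,
    "security line": 35,
    "wait to board": 40,
    "flight": 90,
    "retrieve luggage": 25,
    "reach the final destination": 30,
}

total = sum(journey_minutes.values())
print(f"Door-to-door time: {total} minutes")
print(f"Share spent in the air: {journey_minutes['flight'] / total:.0%}")

# A systems view attacks the slowest stages first, not just the airplane.
for stage, minutes in sorted(journey_minutes.items(), key=lambda kv: -kv[1])[:3]:
    print(f"  {stage}: {minutes} minutes")
```

With these made-up numbers, only about 30 percent of the trip is spent flying, which is why a systems engineer looks at security lines and baggage claim before redesigning the wing.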

Things get more complicated when you add in Mother Nature, the ultimate system of systems. In the case of aviation, for example, weather is a huge, unpredictable factor in optimization. Similarly, in the early days, a lack of measurements and data forced engineers constructing pipes and sewers to make immediate assumptions and judgments. “If you want to build a tunnel, you have to deal with a geological medium that’s constantly changing and interacting with other systems,” says Wayne Clough, a geotechnical engineer who is secretary of the Smithsonian Institution and a former president of the Georgia Institute of Technology. “You need to have a strong systems approach that will allow you to adapt to changing environments.” Modern technologies enable extraordinary amounts of data collection about nature. But using that information for any form of optimization will always be a challenge.

We can try our best using our technologies, but ultimately Mother Nature wins.

* * *

I WENT TO business school while simultaneously working on a doctorate in biomedical engineering. My goal was to start a medical-device company. One brisk morning in early 2008, it all changed.

I was reading articles in the Financial Times and other business magazines online. They were analyzing the rickety state of the U.S. economy. Each article offered its own diagnosis and prescription different from all the rest. Those news items corroded my confidence. Why? As a freshly minted MBA, I had no idea what they were talking about. They were unlike anything we discussed in economics or finance class. They defied everything I thought I knew. I felt as if I had reached a state of intellectual renunciation. I needed to unlearn in order to relearn the basics.

Later that morning, I went to my lab to do clinical studies. On one of my study subjects, I rigged up sensors to monitor the cardiovascular effects of a noninvasive technology that we were developing. Our research was centered on stimulating the calf muscle pump to improve lower-leg circulation. More than three-fourths of the blood volume in the human body is below the chest. For the blood to flow back to the heart effectively—on every heartbeat, against gravity—the veins need to be compressed by the contractions of skeletal-muscle fibers. That’s why the calf muscles are also called the “second heart” and their inadequacy is implicated in many chronic health disorders.

As I was monitoring the peaks and dips, and the drifts and shifts of the beat-to-beat blood pressure data, they reminded me of the fluctuations in stock prices. My understanding of the physiology of the human system began to converge with my lack of understanding of the way the financial system worked. I had an epiphany. Instead of stimulating the calf muscle pump, I thought I should start stimulating the economy.

Some web searches later, I decided to apply for an economic policy fellowship at the National Academy of Sciences. I didn’t even tell my PhD adviser. Two of my mentors, inured to my crazy ideas, offered to write recommendation letters. Everything seemed intuitive—until I had a panic attack after I submitted the application. The abruptness of my decision kept me restless. I convinced myself that I had a vanishingly small chance of being selected as a fellow. In the days that followed, life trended back to normal.

A few weeks later, I was short-listed as a finalist for an interview. Soon after, I was selected for the fellowship. In the fall of 2008—at the height of the U.S. economic crisis and a historic presidential election—I took a semester off from school and went to Washington, DC. It was a turning point in my life. As someone with a business degree and practically no working knowledge of the economy, I was receiving a ground-level introduction to the intricacies and malaise of economic policies at play. I was fortunate to work for an advisory board chaired by an influential economist who was a former treasury secretary.

Debates during an executive board meeting introduced me to the head-spinning issues of the day. It was as if I had been plunked down at the main control panel of a space shuttle’s command module. Topics of discussion included finding the right blend of trade policy, fiscal policy, monetary policy, corporate incentives, federal research support, and several other options needed to keep the economy humming. The experience in Washington made it clear that my education had been disconnected from the fragilities of the real world. It was bewildering.

After all, who was I to judge? I had just floated from the fresh waters of engineering to the salty waters of public policy.

* * *

ENGINEERS AND ECONOMISTS hail from different disciplines, but both professions are rooted in rationality and quantitative rigor. From new products to new policies, engineering and economics routinely rely on the principles of optimization—again, achieving a desired objective under a set of limitations. Harvard economist Gregory Mankiw has argued that “the subfield of macroeconomics was born not as a science but more as a type of engineering,” considering that the originally practical orientation of economics seems to have changed over time.

At least two concepts of optimization overlap between economics and engineering. The first is utility maximization. If we go back to the example of congestion pricing, there was hardly anything new about the idea of charging people for using public utilities. If something is in short supply, you can charge more for it. But in applying the principles of utility maximization, the IBM engineers effectively reduced traffic congestion mainly through behavioral shifts, with modest change to the existing infrastructure. Perhaps the same reasoning could be applied to Gribeauval, whose goal was to maximize the utility and impact of his modular cannons.
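As a caricature of utility maximization at work in congestion pricing, the sketch below has each commuter compare driving at peak (travel time plus toll) against transit (a longer trip with a fixed fare) and choose whichever costs less in combined time and money. The values of time, travel times, toll levels, and fare are all invented for illustration.

```python
# Each commuter picks whichever option yields the higher (less negative) utility.
# Utility = -(value of time x minutes) - out-of-pocket cost. All numbers invented.

def chooses_to_drive(value_of_time: float, toll: float) -> bool:
    drive_at_peak = -(value_of_time * 40) - toll   # 40-minute drive plus the toll
    take_transit = -(value_of_time * 50) - 2.00    # 50-minute train, $2.00 fare
    return drive_at_peak > take_transit

commuters = [0.10, 0.20, 0.35, 0.50, 0.80]  # value of time in dollars per minute

for toll in (0.0, 4.0, 6.0):
    drivers = sum(chooses_to_drive(v, toll) for v in commuters)
    print(f"toll = ${toll:.2f}: {drivers} of {len(commuters)} commuters drive at peak")
```

With no toll, all five invented commuters drive; at a $4.00 toll only three do, and at $6.00 only two—behavioral shift, not new pavement, is what clears the road.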

The second concept is mechanism design—which is considered “the ‘engineering’ side of economic theory,” as economist Eric Maskin put it in his 2007 Nobel Prize lecture. How can we design a “preferred” mechanism to achieve a broad social objective? In the case of IBM, traffic reduction was achieved by people’s eventual buy-in to the digital tolls. Even more, with Google Maps’ traffic prediction technology, people may plan to use a different mode of transportation in advance—which could have significant time and revenue ramifications. Ironically, engineering—the very profession that created the automobiles that produce congestion—was also an “invisible hand” that drove economic transactions and, by way of new technologies, reduced the costs and effects of traffic.

The main difference between the economic way of thinking (which is largely theoretical) and the engineering way of thinking perhaps resides in how ideas are implemented. The British economist John Maynard Keynes once stated, “If economists could manage to get themselves thought of as humble, competent people on a level with dentists, that would be splendid.” Although Keynes was stressing the need for a practical mind-set among his colleagues (to a certain degree), we might concede that filling a molar cavity is not in the same league as reducing the federal debt or deficit.

In the world of economic policy, several engineers have gone largely unnoticed for their implementation of utility maximization and mechanism design. Marcel Boiteux, a preeminent French engineer, presented a formula for pricing a service when demand is highest. His optimization challenge was to minimize peak-time electricity consumption. Just as roads have capacity constraints during peak hours, so do power plants. Their production capacity is fixed, and demand could be managed if people consumed less power during peak times. The situation opened up possibilities for an engineer to think like an economist by giving people the incentives to curb their power usage at peak times.
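Boiteux’s actual formulas are more subtle, but the logic of peak-load pricing can be sketched with a toy model: a fixed plant capacity and an invented, linear demand response to a peak surcharge. The capacity, demand curve, and price step below are assumptions made purely for illustration.

```python
# Invented linear demand curve: peak demand (in megawatts) falls as the price rises.
# This is a caricature of peak-load pricing, not Boiteux's own formula.

CAPACITY_MW = 800       # fixed plant capacity during the peak hour
BASE_DEMAND_MW = 1000   # demand at a surcharge of zero
DEMAND_SLOPE = 40       # megawatts shed per cent added to the peak price

def peak_demand(surcharge_cents: float) -> float:
    return max(BASE_DEMAND_MW - DEMAND_SLOPE * surcharge_cents, 0.0)

# Smallest peak surcharge (in cents per kWh) that keeps demand within capacity.
surcharge = 0.0
while peak_demand(surcharge) > CAPACITY_MW:
    surcharge += 0.5

print(f"Peak surcharge: {surcharge:.1f} cents/kWh -> "
      f"peak demand {peak_demand(surcharge):.0f} MW within {CAPACITY_MW} MW capacity")
```

The engineer’s constraint (a plant that cannot grow during the rush hour) becomes the economist’s price signal: charge just enough at the peak that demand stays inside the fixed capacity.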

“This shift from ‘engineer to economist’ introduced new ways of reasoning, based on the search for an economic optimum,” writes Alain Beltran, a historian of the French power industry. This type of thinking started radiating outward as decision makers everywhere quickly realized that peak-hour consumption of valuable resources was ubiquitous. “It’s just all over the place from really simple things, like why do some restaurants charge you more for dinner than lunch, even though it’s the same meal, probably prepared by the same chef,” says Charles Phelps, an economist at the University of Rochester. “It’s also about running a parking garage efficiently. You don’t want to charge too much for parking on the weekend because the garage is mostly empty anyway.”

The airline industry offers a fabulous example of peak-load pricing. At any given time, airplane capacity is fixed. The carriers can’t really add more planes as needed for peak-hour service on Monday mornings. In contrast, they have too many underused airplanes on Saturday nights. Business travelers may have less flexibility and are willing to pay more than other customers. All these factors create differences in price elasticity, and with them the need for differential pricing for the same service.

If you look carefully during your next visit to the supermarket—or your favorite online store—you’ll find that differential pricing is pervasive. Think of razor blades. On their own they are expensive, but when you buy a razor, you may sometimes get two or three free blades as part of the package. Hairdressers charge more for women than for men. Theme parks charge less per ride for a package of rides than for a single ride. Concert halls charge more for a single ticket than the per-concert cost of a season ticket. These price discriminations depend on time, convenience, and factors other than the cost of the service itself. When we grudgingly pay the premium to air carriers to get tickets for a summer vacation, it’s easy to feel like we’re being exploited. We’re not. It’s a simple rule of optimization to prime our behavior.