“When I first began my experience as a consumer,” Christine Frederick told a congressional hearing in 1910, “I thought that the best way to do my family marketing was to ask the dealer his price and then Jew him down.” Her anti-Semitic comment notwithstanding, Frederick was a famous household efficiency expert and consulting editor to the Ladies’ Home Journal. She learned, she said, that bargaining “was the attitude of all consumers in this country 50 years ago [in 1860]. This condition existed because at that time all dealers and merchants overpriced their articles, and the shrewd buyer was the only one who could get the best trade or bargain, after hours of talk and discussion.” Price negotiating, she continued, “is still extant in some European countries and almost all Asiatic countries, where I myself have been first asked two dollars a yard for cloth and finally secured it at fifty cents. But [the American department-store merchants] John Wanamaker and Marshall Field saw the fallacy of these methods as affecting more important lines of merchandise. The reason why today 99% of all merchandise is no longer sold after these Asiatic methods is because common sense has demonstrated that the one price plan is more honest and more efficient for all concerned.”1
The 1910s marked a period in retailing when it was not hard to see how the old world of the previous decades was passing, but difficult to understand the new world that was being born. People knew things were changing dramatically, and tensions ran high. Christine Frederick was on Capitol Hill to support a bid by packaged-goods manufacturers to set their products’ prices so that grocers could not lower them. Stores and brand name makers were struggling for new ways to exert primary influence over the shopper’s purchases. All sides realized price would be only part of a solution that would encourage repeat purchases and satisfactory profits. Producers failed in their attempt to dictate the price by law, but they tried other forms of leverage over grocery stores and other retailers. The retailers, for their part, realized they had lost some influence over customers by abandoning “Asiatic methods” that involved one-to-one give-and-take. They tried to establish new ways to control their relationships with customers in an era when the opportunities of face-to-face personalization were hard to find, if not lost.
To arrive at an understanding of the transformation of today’s retail spaces, we first have to explore approaches to customer relationships, what they meant for the formation of retailing in the twentieth century, and the legacy they created for retailing in the twenty-first. I’ll focus here on the development of two pace-setting shopping environments of the twentieth century, the department store and the supermarket. The time span cuts across three periods: the age of peddlers and small store merchants before the mid-nineteenth century; the rise, growth, and travails of department stores and supermarkets from the mid-nineteenth century to around 1995; and the birth of new forms and norms of selling with the rise of the Web, the smartphone, and new forms of industrial competition. Traditional pre-nineteenth-century selling techniques centered on prejudice and discrimination, with peddlers and small merchants determining which shoppers were “winners” and which were “losers” in the course of negotiating sales based on such measures as the purchaser’s race, gender, ethnicity, and income. In the mid-nineteenth century, a new breed of merchants tried to encourage customers’ return visits—what sellers typically meant by loyalty—in a populist way. They spread the word that a democratic era of shopping had arrived, but strong impulses toward discrimination remained. Despite their dominant egalitarian rhetoric, the largest, most populist-sounding merchants accompanied these professed democratic ways with inequitable practices. Discrimination showed up in many elements of twentieth-century department store and grocery life: pricing, purchasing, packaging, branding, display, promotion, location, architecture, payment systems, and labor.
While major retailers were willing to slug it out over the lowest prices in the short term, they feared that perpetually guaranteeing the lowest possible prices was a recipe for insolvency. Instead, they tried to organize their operations in such a way that they could profitably offer beautiful surroundings, periodic discounts, and other populist incentives that would encourage loyalty among broad populations of shoppers. Ironically, though, attempts by the supermarket industry to reorganize during the painful economic conditions of the 1970s ultimately led it and, soon, all retailers to adopt a technology that could make their businesses more profitable by returning to the explicit discriminatory mindset of the peddler era. In the twenty-first century the new approach would make surveillance of shoppers a habit, and data-driven personalization a must.
Christine Frederick was correct that bargaining was a fact of life not too many decades before her congressional hearing. In fact, this form of retailing in the early United States was centuries old. Whether informal transactions or those conducted in more structured settings such as outdoor markets and small stores, buyers and sellers had seemingly always tussled over the price and quality of merchandise. As far back as biblical times buyers and sellers both inevitably took into account the characteristics and cues of the other as each side tried to gain the upper hand in the bargaining. The nature and the cost of the merchandise was therefore often tailored to the particular transaction. Historian Claire Holleran remarks that in ancient Rome haggling over price was an everyday activity. She also points out that how and where an individual negotiated exemplified that person’s status: “The act and the space in which shopping took place were central to the construction of particular lifestyles and identities.” That was as true in early modern England and Renaissance Italy as it was in old Rome. In Renaissance Italy the wealthy tended not to negotiate for goods in public. While they might examine luxury items in public, they would retire to their home with the merchant to negotiate the price.2 But for most people the bargaining tended to be far less refined, as they typically purchased goods from screaming hawkers and peddlers trudging through the street. If they were lucky they might manage a quiet conversation about price and quality with a peddler moving from house to house in the village.
Bargaining is, of course, a two-way street. Just as the buyer cautiously had to pursue the best deal, the seller had to survive on his trading skills. As a reminder of the original prices they paid for the goods, as well as to recognize the goods if stolen, peddlers marked the back or bottom of their wares with unique symbols. They also followed a number of other strategies to maximize their returns on investment. Historian Laurence Fontaine, who studied the notebooks of peddlers in Europe from the seventeenth through the nineteenth centuries, found they covered a limited territory and so regularly returned to the same villages. Consequently, they developed ongoing relationships with individual customers and kept track of the deals made with them. They would also keep track of those customers’ friends and relatives, as they might learn of the deals and could expect to pay similar amounts for similar goods. In one notebook from mid-nineteenth-century France, a peddler noted the women who bought his goods but also linked them to the more important aspects of the town: the men to whom they were related (for example, “Roubi’s mother,” or “Mourau’s wife”). He recorded the occupations of those men and their family connections, as well as how they were addressed in the village and how they presented themselves during his conversation with them.
By researching the networks of their customers, notes Fontaine, peddlers slowly and “almost surreptitiously” became part of the village culture. Sellers and buyers negotiated prices “within the context of a personal relationship, and through the manoeuverings of trading and bargaining.”3 These bonds allowed the peddler to tamp down the inevitable tensions on both sides of the negotiation and to encourage customer loyalty built on protection: a reputation for honest representation of his goods’ quality. The peddler could also build loyalty by implying to particular customers that he was giving them and their family especially low prices. Knowledge about the customers came in handy, too, when allowing sales on credit. Judging from Fontaine’s peddler account books, a large percentage of the buyers did not pay in full when making a purchase. By allowing personalized deferred payments, peddlers solidified their relationships with customers. Each return to the village was an opportunity to recover a small amount of money they were owed as well as to offer a few more goods on credit—a built-in way to encourage loyalty. The peddler’s challenge was to collect enough of the debt to enable him to repay his creditors while retaining enough to feed his family. This didn’t always happen; peddlers often had to borrow money in their own villages with the expectation of recouping the cash during their upcoming travels.4
The peddling business migrated to North America with the flood of immigrants who poured in during the eighteenth and nineteenth centuries. Some peddlers who managed to amass a bit of cash settled in communities and established small dry goods emporia or grocery stores. Most of this group continued a version of the peddler tradition that was then called “personal service,” whereby sellers helped customers choose from goods that were behind a counter or in a storage area. Writing about Chicago grocery stores at the turn of the twentieth century, historian Tracey Deutsch notes that customers developed “more personal relationships with their grocers than they might have with transient peddlers or nonlocal market stall sellers.”5 In addition to selling groceries, these merchants offered a range of other services including food preparation advice, letter writing or translating for non-English-speaking customers, and offering information about neighborhood activities. Grocers also learned much about their patrons’ family situations, a topic that often came up when the customer asked for credit, as many did.
As with the peddlers, the relationship between shopkeeper and customer could be fraught for both sides. Deutsch points out that traditional approaches that led to “close personal relationships” and reasons for loyalty “did not guarantee smooth interactions in grocery stores, just as they did not in other retail formats.”6 Along with Fontaine, she found that advancing credit was crucial in the sellers’ attempts to win customers, even as the sellers worried about bringing in enough money and the buyers worried about terms and their ability to pay. Pricing issues also caused angst. Customers typically assumed that store owners would give trusted clerks the code to the pricing symbols so they could bargain knowledgeably. Despite efforts by the sellers to put customers at ease, shoppers worried whether they were paying more than other customers. Ethnic customers suspected that grocers who were of a different ethnic background overcharged them and sold them inferior products. African-Americans, who typically had little choice but to frequent stores owned by whites, were especially suspicious that the lack of transparency of price and quality caused them to receive poor treatment.7
During the mid and late nineteenth century, these strains opened the door for ambitious merchants to offer alternative techniques to make customers feel more comfortable and thereby gain their loyalty. The key change was posted pricing, which started as a stand-alone selling idea but became a crucial part of the new U.S. retail institution. Through much of the 1800s dry goods store owners had followed the peddlers’ routine of personalizing charges for individual customers. Although Christine Frederick and others credited John Wanamaker and Marshall Field with making the change, the idea actually long preceded them. As far back as the seventeenth century, Quakers believed it was morally abhorrent to charge different prices for the same item, and Quaker merchants Potter Palmer (an early partner of Marshall Field), Rowland Macy, and the founders of Strawbridge & Clothier followed this philosophy in the early nineteenth century.8 By the 1840s standardized pricing as a means to encourage customer trust was evident. An undated business card of A. T. Stewart and Company, possibly from as early as 1827 or as late as 1841, described the firm’s prices as “regular and uniform.” Adam Gimbel, founder of the iconic department store, guaranteed fixed prices at his Vincennes, Indiana, trading post in 1840. Several New York stores were advertising one price as early as 1842.9
For the merchant, part of the attraction of fixed prices was that clerks didn’t have to be taught how to bargain. This model became increasingly important beginning in the 1860s, as U.S. cities saw the growth of multistory, multidepartment emporia employing many people. A famous precedent was Aristide Boucicaut’s Le Bon Marché in Paris. The store’s name means cheap, or a bargain. Boucicaut founded it in 1838 to sell piece goods in a poor neighborhood but branched out into various types of women’s clothing and beyond; by the early 1850s it had morphed into what we call today a department store. Le Bon Marché marked all of its goods with fixed prices and even offered a money-back guarantee. These policies influenced American entrepreneurs, who moved from selling single stock products such as bolts of calico (the original “dry goods”) to a wide range of merchandise types in distinctively separate departments. The post–Civil War era was ripe for this retailing model. The growing urban population and an improving standard of living meant concentrated numbers of people with money to spend. The wide circulation of newspapers, handbills, and billboards in dense neighborhoods enabled the stores to advertise to large numbers of people. Mass transit in the form of horse-drawn trolleys could bring those residents to central shopping districts. At the same time, rabid competition among many small stores selling the same limited range of products yielded virtually no profits. As retailing historian Robert Hendrickson notes, “Farsighted merchants cast about for new things to sell, realizing that if they wanted to grow, they would have to offer a fuller line of merchandise.”10
The model these forward-thinking store owners had instituted in the mid-nineteenth century, especially regarding their approaches to gaining steady customers, fed into an entirely new approach that would develop in the following decades. Historians of late nineteenth-century America agree the era was transformative when it came to selling goods to the American masses. William Leach describes what was happening as “the democratization of desire” and traces it to the 1880s. This was a time of rapid industrialization and the creation of a new form of market capitalism based around industries that focused on “the production of more and more commodities.” By the late 1890s so much merchandise was flowing out of factories and into stores that businessmen feared overproduction, which might push prices so low they would drive manufacturers and retailers out of business. In addition, while real incomes of many Americans were growing, wealth was lopsided; a small number of captains of industry controlled much of the nation’s assets. Advocates of the new market capitalism may have wanted to draw public attention away from criticism that the far-reaching resources of a relative few were hindering democracy by diminishing the political power of the many. To do that, they reimagined democracy as the right of each American to dream about achieving personal happiness by buying things. “This highly individualistic conception of democracy,” writes Leach, “emphasized self-pleasure and self-fulfillment over community or civic well-being. . . . The concept had two sides. First, it stressed the diffusion of comfort and prosperity not merely as part of the American experience as heretofore, but instead as its centerpiece. . . . Second, the new conception included the democratizing of desire, or, more precisely, equal rights to desire the same goods and to enter the same world of comfort and luxury.”11
The key, merchants learned, was to follow a strategy of low markup and rapid turnover that contemporaries attributed to Boucicaut. The idea was to be proactive about low-cost selling: purchase large quantities of goods for cash (thereby getting the supplier’s lowest price) and sell them quickly at low margins. The incoming profits would finance the purchase of yet more goods for cash that also would be unloaded quickly. The challenge was huge, however. Sellers realized that a store’s “turn” of its products was key. Success would come only if product turns could take place continually and on a large scale. These conditions pointed to a store with several departments and many different items.12 They also indicated the need for a continual flow of large numbers of people. To achieve this merchants built on the trend of posted low prices, even though they recognized that competition primarily over price could lead to unsustainably low profit margins, even severe losses. The emerging department stores therefore went out of their way to cultivate loyalty through egalitarian versions of protection and privilege.
Protection came in the guise of the customer’s right to touch the goods as well as guaranteed satisfaction—the stores accepted returns without argument or penalty. Department stores offered a new, egalitarian form of privilege by creating a beautiful environment to which all shoppers would want to return and in which they could feel special. The no-haggle returns and the marvelous surroundings were geared particularly toward women, whom the new merchants saw as their primary customers. Historians point out that the nineteenth century in both the United States and Europe was marked by a Victorian sensibility regarding women in public places. “Disreputable women were associated with the immorality of public life in the city,” writes historian Mica Nava. “Respectable and virtuous women were connected to the home.”13 Such concerns didn’t mean middle-class and wealthier women fully shunned urban streets without escorts. Chroniclers of the era have pointed to the large numbers of women who moved freely around the landscape to carry out philanthropic work.14 Barbara Olsen argues that in some areas “American women by the mid-1850s could now shop alone, make their own purchase decisions from expanding product categories, and spend their own money earned outside the home.”15 Moreover, Nava notes, the “Victorian ideal of separate spheres” unraveled rather quickly in the urban landscape around the turn of the twentieth century as women engaged in the public world of education and the suffrage movement. These changes also tied into the loosening of Victorian sexual mores. Relatively mainstream notions around “choice of partner, courtship patterns, and independence of movement” mixed with more radical ideas about “‘free love’ and sexual pleasure as an entitlement for women as well as men.” Women’s magazines and, later, movies spread these ideas to the distaff population at large. 
None of these changes came easily; literary and other depictions of the “proper female sphere” suggest the social tensions. But by the early twentieth century increasing numbers of American women were “making themselves at home in the maelstrom of public life . . . however contradictory, painful and uneven the process.”16 Department stores helped the process along by positioning themselves as oases of populist privilege—and havens of safety—in a nonfeminine urban world.
From their inception the stores “provided a particularly welcoming space for women.”17 To elicit the sense of special comfort, they outdid one another with elegance and showmanship in order to attract crowds. Again, Bon Marché had pointed the way. A. T. Stewart’s 1848 Marble Drygoods Palace at Broadway and Chambers Street in Manhattan, though not yet a department store, demonstrated how to draw crowds in a U.S. urban center. The block-long edifice became an instant tourist attraction. The canny Stewart lured visitors inside not just to ogle the architecture but to attend fashion shows, scan and purchase the high-quality European women’s merchandise, and scoop up the great deals he offered through “fire sales” of distressed and damaged goods. Competitors followed suit, first via the elegant Ladies’ Mile that grew on Broadway around Stewart’s and later uptown in fabulously appointed stores such as Macy’s, Siegel-Cooper, Gimbels, and Bergdorf Goodman. Among the other emporia that arose in this tradition were Abraham & Straus in Brooklyn; John Wanamaker and Strawbridge & Clothier in Philadelphia; Marshall Field and Carson Pirie Scott in Chicago; L. S. Ayres in Indianapolis; Lazarus in Cincinnati; and Dayton’s in Minneapolis. Designers and architects appointed the edifices with large display windows, carved wood, polished stone, imposing mirrors, fancy elevators, and streamlined escalators. The stores were among the first public spaces to use central heating and electric lighting both for illumination and to make artistic impressions. Department managers laid out goods in gorgeous displays, and the mundane administrative functions were situated in areas the public would not encounter.
The French author Emile Zola described these developments (also taking place in Europe) as agents for “democratizing luxury.” He also saw them as enticements for the broad middle class to spend large, often too large, amounts of money.18 The edifices were typically situated on major thoroughfares easily accessible by mass transit. In New York City and Chicago, some of the stores even maneuvered to have subway or elevated line stops at their locations. While the lavishness drew tourists, the real intention behind all the glitter was to build a loyal clientele, which for merchants meant people who would continually return to buy items. Accounts of the period indicate that such customers would visit the same store frequently, even several times a week.19
With the influx of crowds of shoppers, the new mass-marketing entrepreneurs understood they were no longer able to tailor prices to the individual.20 Although those shoppers who considered themselves winners in such give-and-take transactions might balk, the retailers were counting on a much larger population being happy with the new process. Store management spread the word that loyalty-generating practices were to be enacted in a populist, routinized manner. A. T. Stewart supposedly boiled down the notion to a blunt statement to his clerks: “Never cheat a customer even if you can.” He also instructed them on the importance of gifts for customers who paid cash rather than wanting credit. “If she pays the full figure [does not ask for credit], present her with a yard of dress braid, a card of buttons, a pair of shoestrings. You must make her happy so she will come back again.”21 John Wanamaker, who set the bar high for pursuing customer loyalty, laid the general precept out in a more philosophical tone. “The chief profit a wise man makes on his sales,” he wrote in 1918 in a memorandum book, “is not in dollars and cents but in serving his customers.”22 Shoppers could rest assured that a posted price (Wanamaker himself supposedly invented the price tag) along with a satisfaction guarantee meant they were not being taken for a ride.
But Wanamaker and other department-store entrepreneurs also aimed to create a service atmosphere that would cause their targeted (women) customers to see the stores as multifaceted environments that surrounded them with a sense of feeling special. To that end, clerks and floor workers were trained to be attentive to customer needs.23 Installment purchasing and free delivery were among the widely available services. Most dramatically, the stores made actual shopping only one part of the retailing experience. Executives also attempted to cultivate loyalty by attracting people for reasons beyond shopping. For example, one could simply rendezvous with friends at a beautiful fountain or impressive sculpture in the center of the store. The stores presented concerts on-site; Wanamaker’s turn-of-the-century store had one of the largest pipe organs in the world. The merchants offered special toy exhibits for children, particularly around Christmas time. They offered dining by establishing restaurants in the stores. Or, instead of going anywhere near the store, customers could phone in an order from home. Wanamaker was a leader here, too: as early as 1900 the company employed hundreds of phone operators to take customer orders day and night.24
Like department stores, grocers developed an emphasis on loyalty through egalitarian access as well as populist protection and privilege, though in somewhat different ways and, when it came to architectural opulence, on a slower timeline. The early leader was the Great Atlantic and Pacific Tea Company. Founded in 1859 with a small number of New York tea and coffee stores, by 1880 it had expanded to some one hundred branches. It also added sugar to its merchandise, a decision that began its march toward becoming the nation’s first grocery chain. Apart from fixed low prices, A&P units were not much different from the neighborhood grocery stores with which they competed. They offered everyone discounts and gifts for continued patronage. They also advanced customer credit and allowed free delivery, activities that were expected in the grocery business at the turn of the twentieth century. But in the face of a national debate about high food prices during the 1912 presidential campaign, some of the company’s outlets abandoned those inducements. Instead, they attempted to gain loyalty by trumpeting efficiency-driven cost reductions. The idea was to keep grocery prices as low as possible. Implementation meant reducing the staff to a manager and a clerk who retrieved the goods for customers. A&P’s populist spin was that the no-frills style protected its customers from the scourge of high prices. The strategy proved highly successful, leading A&P and other grocery chains to expand the concept widely. It was a rather Spartan paradigm; an A&P store was no John Wanamaker.
The next couple of decades saw two additional iterations of the no-frills grocery store—the self-service market and the supermarket. The first innovation came from Clarence Saunders’ Memphis, Tennessee, Piggly Wiggly store in 1916. The place was rather small compared to most groceries of the day. His idea was to have the customers pick up the goods in the store themselves (instead of asking a clerk for assistance) and then pay at a central checkout area. Saunders knew this strategy would enable him to cut down on labor costs. He was also convinced customers would buy more if they could see and touch all the merchandise themselves. He therefore created aisles that facilitated customers’ handling of the goods, and he provided baskets in which shoppers could collect the items they wanted to buy.25 As with department stores, the public spin on the setup was one of democratic privilege. As early as 1922, the Piggly Wiggly chain boasted that its self-service model “fosters the spirit of independence—the soul of democratic institutions, teaching men, women, and children to do for themselves.”26
The idea caught on. Kroger, Grand Union, Acme, and the many other growing chains adopted this model, and eventually A&P followed suit as well. The low labor costs associated with the self-service model encouraged the expansion of grocery chains, and the larger ones regionalized. Another factor influencing the chains’ growth was their continuing power struggle with their suppliers over issues that had brought Christine Frederick to Capitol Hill earlier in the century. The stakes were high. Many grocers preferred to choose the manufacturers whose products they put on their shelves, because they could bargain over wholesale prices and get good margins. Some manufacturers, however, tried to force grocers to carry their goods on their terms by going over the heads of grocers to their customers to position the manufacturers’ products as special “brands.” Packaged goods companies such as Sapolio Soap, Pepsodent, Kellogg, and Procter & Gamble launched an avalanche of advertising in popular newspapers and magazines to exhort people to buy items with unique names, trademarks, and package designs. The firms assured shoppers they would protect them from the uncertain quality of counterparts in barrels and bins and even in packages with unfamiliar personas and provenance. A Toasted Corn Flakes ad in Munsey’s Magazine of the early 1900s noted that “the package of the genuine bears this signature [W. K. Kellogg].” It quoted an imperious young woman telling a merchant who was offering her flakes from a different company, “Excuse me—I know what I want, and I want what I asked for—TOASTED CORN FLAKES—Good day.”27
Manufacturers were pushing the right loyalty button. This was an era when grocery shoppers were looking for protection, as newspapers in the late nineteenth and early twentieth centuries were rife with stories of unscrupulous sellers putting dirty, dangerous, and even lethal materials into foods. People likely also remembered the “swill milk” scandal that affected New York and Brooklyn in the 1850s and 1860s. Diseased cows that had been fed distillery refuse were then literally milked to death. Frank Leslie’s Illustrated Weekly cover in 1858 featured a drawing of a dairy worker getting milk out of a sick cow on a hoist. (The liquid, the caption said, “made babies Tipsey and often sick.”)28 Ads that exhorted shoppers not to accept imitations played on these worries and effectively forced even large grocery chains to carry branded goods in the name of customer protection. Procter & Gamble warned grocers that they had better stock their new vegetable shortening Crisco—which, unlike the commonly used lard, would not spoil through bad refrigeration—because the advertising and public relations campaign the company was launching would be so extensive that their customers would come clamoring for it.
In addition to offering protection as a way to build customer loyalty, manufacturers adopted campaigns and packaging that pointed shoppers to incentives included inside the box or package. They also offered discounts or gifts in exchange for proofs of purchase collected from the product packaging.
The brand manufacturers’ growing clout had a critical implication for the grocery industry, which sometimes was forced to accept smaller margins (and profits) on brand name products. But whether an item carried a brand name or not, the stores were committed to low prices because of competition and the need to encourage loyalty. Therefore, with such tight margins they had to sell very large numbers of goods to achieve acceptable profitability. During the 1920s many chains addressed the challenge by aggressively expanding their number of stores, which enabled them to purchase products from manufacturers at the lowest possible wholesale prices. They then implemented bouts of predatory retail pricing (undercutting the prices charged by competitors) on those lower-cost items, along with offering gifts with purchases, to encourage shopper loyalty and eliminate rivals in particular market areas, thereby gaining more leverage over suppliers. Once a store drove out competitors it could then increase prices a bit on the contested products.29 Chains that successfully followed this strategy raised their market share: in 1920 grocery chains accounted for less than 3 percent of grocery sales, but by 1935 five chains captured about 25 percent.30
The chains’ growing power notwithstanding, the Great Depression of the 1930s saw the emergence of a second new model for the grocery store, the supermarket. Originally spelled as two words—“super market”—it was conceived by entrepreneurs from outside the grocery establishment who pushed the low-price, no-frills paradigm to a new extreme in those hard economic times. The formula consisted of a massive store (often located in an inexpensive abandoned warehouse or factory), local advertising and promotional ballyhoo about products to be sold below cost, and huge numbers of customers noisily and chaotically hunting for bargains. One of the first of these “cavernous, ungainly stores” (as Tracey Deutsch describes them) was started by former Kroger executive Michael Cullen.31 Called King Kullen, the store was located in an abandoned Long Island garage and used empty ginger ale cases for display tables.32 While Cullen controlled all the merchandising in his outlets, some versions of the Depression supermarket leased space to different types of grocery retailers under the umbrella of a single name. Yet another iteration saw butchers, bakery owners, and other specialized grocery retailers join together to share the costs under one moniker, such as Big Bear stores, which opened in New York and New Jersey in the early 1930s.33 Although these early supermarkets did not serve nearly as many people as the major grocery companies did, they often caused quite a stir in the neighborhoods they served. Twenty thousand people entered the Family Market Basket on its opening day on Chicago’s North Side; this supermarket was 50 percent larger than most of the bigger chain store units. The owner of another Chicago supermarket boasted that his opening-day crowds were so large he had to call the police.34
Executives at the established chains observed the phenomenon warily. Kroger executives worried about the dangers of appealing to loyalty through low prices instead of other aspects of protection and privilege. “The early supermarkets were bare-bones bargaining houses that emphasized price,” Kroger company chroniclers noted, “while Kroger felt it owed its growth and strength to its insistence on quality”—presumably through its service and its company brands.35 Although they were used to competing on price, sometimes very intensely, chain executives nevertheless believed that the ongoing extreme low-price promises, enormous size, and chaotic nature of supermarkets made them unstable ventures. They insisted long-term value lay in linking low, but not lowest, prices with a populist stress on a stylish experience and decorum in their stores. “The super-store in its present form may prove to be a ‘depression baby,’” one industry person sniped.36
In fact, by the late 1930s supermarket owners began to modulate their emphasis on lowest price and shift to other forms of competition such as decoration with “a feminine touch,” wheeled shopping carts instead of baskets, high-quality meats, and a new emphasis on service.37 This new wave of classy treatment, available to all, seemed to result in higher profits in the face of the stores’ traditionally low margins. Kroger, A&P, Jewel, and other grocery chains began to construct expansive stores with large parking lots to encourage shoppers to fill their trunks with groceries. The pace and scale of construction accelerated after World War II, as members of the American middle class and upper-middle class began to relocate to the suburbs. Stores averaged nine thousand square feet by the late 1940s (far larger than the chain outlets of a decade earlier) and twenty-two thousand square feet by 1957.38 They contained a vast range of advertised goods as well as store brands in an array of departments that sold fresh meats, salads, fish, and baked goods, as well as health and beauty supplies. Many of these areas were self-service, in keeping with the overall mindset from decades earlier.
The structures themselves looked quite different from the A&Ps, Piggly Wigglys, and Big Bears of earlier in the century. In a 1951 article Collier’s, a popular weekly magazine, celebrated the change with language that evoked the air of privilege suffusing the great department stores of the era. “Low prices are no longer the supermarket’s only attraction,” the article noted. The transformation was “the prodigious issue of a marriage between brilliant showmanship and the world’s most modern distribution techniques.” The result, it said, was a new world of efficiency linked to indulgence: “In a supermarket the housewife buys her groceries (and a growing variety of other things) faster, because of self-service, and cheaper, because volume sales enabled the stores to keep prices at a minimum.” The article went on to enthuse: “Nothing that could conceivably lure the housewife has been left undone by the supermarket operators. Their grocery stores are the world’s most beautiful. They’ve gone into color therapy to rest the shopper’s eyes; installed benches to rest her feet; put up playgrounds and nurseries to care for her children; invented basket carts with fingertip control; revolutionized a packaging industry to make her mouth water; put on grand openings worthy of Hollywood premieres. They’ve completely made over the nation’s greatest business—food—to attract more and more of her interest and her dollars.”39
Like the grocery chains, many department stores also relocated to the suburbs. While the branches did not typically match the glory of the downtown edifices (which sometimes suffered because of suburban mall competition), their executives continued to stress that the recipe for shopper loyalty had to involve alluring architecture, display, and service.40 Some chains, such as Sears and J.C. Penney, aimed for the mass market with low prices. Others—for example, Macy’s and Lord & Taylor—presented somewhat higher priced goods. Still other department stores, such as Saks Fifth Avenue and Neiman Marcus, presented a wealthy image with very expensive items. A 1974 overview of department stores marveled at the Woodfield, Illinois, shopping center, then the largest in the world. Along with Sears, Marshall Field’s, J.C. Penney, and Lord & Taylor, it had two hundred smaller stores, many of which were themselves part of chains. The authors of this write-up concluded that an attractive location was a good formula for nonprice competition: “It is a seven-day operation with shoppers driving many miles for the Sunday ‘champagne brunch’ served by [Marshall] Field’s Seven Arches restaurant, one of more than 30 eating places in the big development.”41
For two decades after World War II media reports referred to the purchase of food and everyday necessities in supermarkets as well as the purchase of clothes and home goods in great urban department stores as a wonderful sensory experience. Both retailing sectors understood the competitive usefulness of linking material desires with the very essence of American democracy. Both arenas ultimately aimed for shopper loyalty that was not primarily tied to price, and both echoed the rhetoric and the architecture of democratic access to privilege. They continued to promote self-service as a democratic, loyalty-inducing plus. They claimed that the very depersonalization of the new forms reduced the tension that had been at the core of merchant-shopper relations in the peddler model. Unlike the earlier model, in which the customer often felt scrutinized by the grocer—who might be judging purchases, offering substandard products from under the counter, or charging more because of ethnicity, race, or perceived low income—they promoted the idea that twentieth-century shopping furthered the democratic ideal of allowing (to quote William Leach) “everybody—children as well as adults, men and women, black and white—[to] have the same right as individuals to desire, long for, and wish for whatever they pleased.”42 During the 1950s and 1960s, government and retailing officials pushed this equality-through-self-service-and-materialism argument as part of the defense of capitalism against communism in the Cold War, claiming that “because supermarkets lowered food prices, celebrated freedom of choice, and made customers feel that they were being treated equally, they reduced the appeal of communism and showcased the real value of American capitalism and free enterprise.”43 When Soviet leader Nikita Khrushchev came to the United States in 1959, American officials made sure that he visited a supermarket so they could present it as a symbol of their nation’s superiority.44
The mid-twentieth-century buoyant rhetoric notwithstanding, critics have noted impulses toward discrimination coursing through the new retailing model. While department stores “eagerly accepted all dollars,” in the words of historian Susan Porter Benson, they early on divided their clientele into two broad groups: the more affluent “carriage trade” and the poorer “mass,” or “shawl,” trade.45 Stores declared that class divisions were simply the natural consequence of fundamentally different shopper desires. Articles in the retailing press contended that the shawl group saw fancy fixtures and other upscale appointments as indicators the establishment would charge them higher prices. In contrast to this group, noted a turn-of-the-twentieth-century commentator, “People of culture and refinement dislike crowds and crushes in stores [and want] to trade at a store where there is plenty of room and an abundance of air, with surroundings of an elegant, not to say aesthetic character.”46 Department-store architects and designers claimed to be taking to heart the differences between the two groups with the creation of bargain-price departments which sold lower-quality goods, located usually in store basements. As Porter Benson notes, “The bargain-section strategy allowed department stores to pursue simultaneously their strategy of ‘trading up,’ or seeking an even wealthier clientele, and their goal of increasing sales volume.”47
This basic bias showed up on several levels, especially before World War II. Outdoor windows sometimes signaled the customer bifurcation: front windows might show prestige goods, while side windows might contain sale merchandise.48 Inside, store personnel were taught to reflect a desire for an elite version of personalization in the midst of populist extravaganzas. People who clearly had a lot of disposable income received special consideration. Salespeople and doormen were encouraged to greet high-spending customers by name, for example, and even phone them when the store received new items that might be of interest. Preferential service could include free delivery. Well-off customers had their special requests dealt with more carefully and their returns accepted more graciously than those of other customers.

Another side to personalization involved matching clerks with customer type. Personnel directors used a range of stereotypes—including age, race, and birth status—to assign salespeople (typically women) to different parts of the store based on the anticipated customers in each section. For example, older and native-born women were sent to higher-price departments, while younger and immigrant women worked in areas that sold less-expensive goods. Capping all of these prejudices was the allowance of a charge account to only the most select customers. During the first half of the twentieth century, charge accounts were the province of stores, not national credit firms such as Visa and MasterCard. To lessen the risk of default, and because they understood that well-off charge customers spent far more than cash ones, department stores made it clear this was a special privilege granted “only [to] wives of ‘substantial citizens’ and not those of you [from] working-class families.”49
The discriminatory aura pervaded supermarkets as well. As early as 1926, chain grocers worried that they were reaching only working-class and lower-middle-class shoppers because they emphasized price over services such as helping customers retrieve items from around the store and stationing a clerk in the produce section to help customers choose, weigh, and wrap their fruits and vegetables. Christine Frederick, who didn’t hide her anti-immigrant and anti-Semitic sentiments, stoked these concerns by reprimanding the chains as off-putting to busy women because of their abandonment of personal help and their long checkout lines. Industry observers warned that “low prices were not enough to keep customers because women—at least the sort of women so coveted by chains—wanted more.”50 Consequently, grocery chains began to move somewhat away from their vaunted standardization and toward at least partially retrieving the service component. Sometimes this meant hiring multilingual clerks to work in stores in immigrant neighborhoods. More commonly, though, it meant ensuring that stores in more desirable neighborhoods had service components. Class distinctions in service became even starker during the Depression, when supermarkets in working-class neighborhoods often offered lower grades of meat, cheaper brands of goods, and fewer services than they did in middle-class and upper-class neighborhoods. Chain store management also preferred to build new stores in wealthier neighborhoods, a practice that took on special vigor after World War II.
By the 1960s critics were citing studies showing that supermarkets in economically distressed (often African-American) neighborhoods were dirtier, more limited in variety, and higher in price than those in more well-off districts, and that they stocked substandard foods. The customers and clerks who shopped or worked at supermarkets in poor neighborhoods were certainly aware of the discriminatory patterns, but these conditions were rarely publicized except in the wake of major events such as the rioting that occurred during the Depression and World War II, and following the assassination of Martin Luther King Jr. in 1968. The latter insurrection served as an ironic counterpoint to the Cold War trumpeting of consumer democracy just a few years earlier. These angry outbursts led to short-term attempts by merchants and government officials to explore causes and offer solutions, but the policy suggestions tended to be rather bland. Tracey Deutsch notes that even reformers “treated the facts (that lower quality was being sold in poor neighborhoods in lesser quantities and at higher prices) as intransigent and inevitable.” She adds that an article in the New Republic arrived at similar conclusions about supermarket inequalities but doubted that government interventions could be effective. The article did offer several ideas for addressing the problem, including placing home economists in local stores to provide advice and busing shoppers to better stores in better neighborhoods.51
Supermarket operators’ response to the obvious inequalities was to acknowledge them not as consequences of their prejudicial discrimination—a practice they publicly condemned—but as unfortunate results of larger economic problems or as evils of discrimination unrelated to retailing. In 1968 a Philadelphia woman who had heard the claim that chain supermarkets sold goods of lower quality in black neighborhoods confronted the president of the National Association of Food Chains. According to a Business Week article, the woman said that the official “and his colleagues deny everything—and then they explain why it happens. Suburban stores, they say, are bigger than ghetto stores, with more parking space, roomier aisles, better displays, and greater store traffic. The implication to experienced merchandisers is plain: fresher product, greater variety. . . . By contrast, land in ghetto areas is often costly, and parking space is in short supply. The result: fewer stores, cramped, crowded, and offering few products.”52 Yet as retailing indignities continued to simmer within disadvantaged communities, the public relations arms of retailing firms continued their campaigns “to celebrate [the supermarkets’] hard work on behalf of women.”53
Rather than actively confronting socially corrosive prejudices, supermarket and department-store management throughout the twentieth century focused on a different form of discrimination in the midst of their populist pursuits: they needed to find ways to keep their most profitable shoppers coming back. Posted pricing, low markups, and an impressive interior had rapidly become generic elements of the twentieth-century retailing environment. Yet identifying these most profitable shoppers had become extremely difficult. For one thing, the stores now had so many customers it was hard to identify the ones who kept coming back. For another, loyalty and profitability didn’t necessarily go hand in hand. A store could lose a lot of money enticing customers to be loyal; for example, earlier in the century department stores had lists of wealthy people who took advantage of their customer status by returning very expensive merchandise frequently, insisting on special deliveries, and piling up lots of credit. Just as frustrating, short of reviewing sales receipts, stores had difficulty determining what even good customers expected. Learning and responding to customer desires was an impressionistic project in the early twentieth century. Even in big stores, upper management and product buyers relied on department managers and salespeople for their opinions regarding what their desired customers wanted.
During this time many in the academic community and some businesspeople began proposing that retailers adopt a “scientific” approach to the issue. By the 1920s, stores were collecting a wide range of information from customer surveys,54 and in 1933 market research pioneer Arthur Nielsen made deals with a representative sample of stores to audit the products they sold, and with the results of his study he launched his drug and retail store index; a year later he debuted an index of department store and food sales.55 Historian Sarah Igo has noted, though, that the quantitative projects tended to measure what individuals actually purchased, not “what they desired.”56 Some stores did give their customers surveys to fill out about what they wanted from the store, and a few department stores recruited customers as advisers on issues concerning service and merchandise.57 But overall retailers found it frustrating to study customer desires using the tools of science. A Filene’s department-store executive sneered that retailers were “merchandis[ing] on opinions not on facts.”58 In her exploration of pre–World War II department-store management Susan Porter Benson concludes that “they had only the foggiest and most impressionistic sense of who their best customers were and what they wanted of the store.”59 And Bill Bishop, a longtime supermarket industry observer and founder of the Willard Bishop consulting firm, reflected in a 2014 interview that before 1990 supermarket managers strategized about their customers “more from gut feeling than anything.”60
Rather than focusing on a “scientific” fix for ascertaining what customers want, executives turned to efficiency efforts in logistical operations and dealings with suppliers. University of Wisconsin professor Paul Nystrom wrote approvingly in his 1916 text on retail selling and store management that “business magazines have teemed with articles upon efficiency and scientific management.”61 Merchant Edward Filene led the way for department stores, developing a much-copied “scientific” model plan in the 1910s in which a store’s sales patterns were assessed to determine the range of items in the store and the amount of stock for each.62 Supermarkets, too, learned that making internal operations more efficient and putting price pressures on suppliers could widen margins more effectively than trying to build loyalty. They also discovered that profits could be made by charging manufacturers for special displays of their products and for in-store advertisements. These efforts at maximizing revenue made financial results respectable despite razor-thin margins on many of the products sold.
Store employees tended to see such efficiency efforts as being in opposition to the goal of loyalty because they believed their attention was then shifted away from customers and toward paperwork and backroom operations.63 Management clearly felt a need for both. They insisted money saved through efficiency could fund loyalty-generating activities. And many executives believed that, like their efficiency programs, the best loyalty programs would be those that could be evaluated quantitatively. But measuring loyalty proved extremely difficult. For example, it was virtually impossible to calculate the extent to which high-profile, expensive interior designs and architecture brought in customers; their utility was simply taken for granted. Likewise, another loyalty effort, the signal events department stores staged for their cities, such as the Thanksgiving Day parades put on in New York by Macy’s and in Philadelphia by Gimbel’s, seemed to make good business sense, yet by their very nature they could not be analyzed in terms of specific sales and loyalty-building.
Consequently, throughout the twentieth century merchants pursued other methods whose results could be evaluated at least to some extent by norms of “scientific selling.” The most enduring of these was advertising, which in the first half of the century meant marketing to wide publics. Merchants saw two main benefits: the possibility of building loyalty by circulating positive paid messages about the store in local newspapers, and the ability to check the effect of an advertisement on sales of the featured item. Although today it may be hard to understand how retailers could connect specific sales to their newspaper advertising expenditures, in the first part of the twentieth century retailers used a more direct-marketing approach. For example, when John Wanamaker advertised in Philadelphia papers that a special deal on umbrellas would take place the day after the ad appeared, he and his copywriter, John Powers, believed that umbrella receipts on the day of the sale would indicate the success of the ad. Comparing the number of umbrellas sold on the day of the sale with the number sold on a typical day would give the merchant a sense of the power of the ad and of the newspaper. Placing the ad in various other local newspapers might help distinguish which factor exerted the greater influence on the sale, the ad itself or the vehicle in which it ran. And an examination of the receipts from those who bought umbrellas could indicate whether the advertised sale also resulted in sales of undiscounted merchandise. This approach continued into the 1940s, as stores anticipated that once shoppers made the trip to the store to purchase the sale item, they might then purchase other things as well, and at full price.64 Still, the conclusions would hardly be definitive because of the many variables that could lead shoppers to make purchases.
And the anonymous receipts in this generally cash-only era wouldn’t signify whether ad-related purchases truly led to the best kind of loyalty—repeat purchases even when the customer encountered no discounted merchandise.
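The sale-day comparison Wanamaker relied on amounts to a simple before-and-after calculation. A minimal sketch, with purely invented umbrella counts standing in for a store's actual figures:

```python
# A sketch, with invented figures, of the sale-day comparison described
# above: units sold on the advertised day versus a typical day.

typical_day_umbrellas = 40   # hypothetical baseline for an ordinary day
sale_day_umbrellas = 180     # hypothetical sales on the day of the advertised deal

extra_sold = sale_day_umbrellas - typical_day_umbrellas
lift_ratio = sale_day_umbrellas / typical_day_umbrellas

print(f"Extra umbrellas attributable to the ad: {extra_sold}")
print(f"Sale-day volume was {lift_ratio:.1f}x a typical day's")
```

As the passage notes, even this tidy arithmetic could not isolate the ad from the newspaper that carried it, or from the weather and other variables that move umbrella sales.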
Grocery chains faced a similar dilemma. They also advertised special sales on particular products and could attribute at least some of any resulting revenue uptick to these circulars and newspaper announcements.65 Discount coupons that were incorporated in an ad or sent to people’s homes could be directly traced to the publication in which they had been printed if they carried a corresponding code. If they did, certain broad relationships between the advertising and resulting sales were quantifiable, such as determining the percentage of redeemed coupons for a particular product in a particular neighborhood. But there was much the numbers couldn’t tell, such as whether the coupons encouraged repeat shopping and therefore the loyalty that retailers prized so highly. Meanwhile, brand goods manufacturers such as Procter & Gamble broadly circulated their own money-off coupons to encourage loyalty not to any particular chain but to the products. Sometimes, in fact, manufacturers circulated the coupons to get shoppers to demand grocers carry items that they otherwise didn’t stock.66 This activity stoked the long-simmering tension between the makers and the sellers, as did the payment amounts manufacturers offered stores for their trouble of collecting and returning the coupons; stores perennially complained the reimbursement was inadequate.
The trading stamp program offered the one loyalty vehicle that merchants knew could help them quantify repeat shopping. Trading stamps were introduced as a way to encourage a customer to make repeated shopping trips to the same store. The customer would be awarded stamps based on the total amount spent—typically one stamp for every ten cents paid. After accumulating a certain number of stamps the customer would receive a gift. Originating in the nineteenth century, trading stamps were different from discount coupons because sellers distributed them upon purchase. The first stores to dole out the stamps—beginning in England possibly during the 1880s and in the United States during the 1890s—were small department stores for exclusive use in their establishments.67 They were different from small gifts that customers sometimes received with their purchase—say, a baker’s dozen or the small thread A. T. Stewart encouraged his clerks to distribute—because they were individually worthless. This was controlled loyalty; customers both received, and were rewarded for, their stamps from the same store, and so repeat visits were easy to monitor. Tracking the amount of time it took for a customer to earn a reward would enable the merchant to note the success of the loyalty program across its customer base. The establishment could also note the value of a particular customer.
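The arithmetic behind such a store-run program is simple. In this sketch the one-stamp-per-ten-cents rate follows the description above, while the booklet size and the shopper's spending are hypothetical figures, not drawn from any actual program:

```python
# Sketch of store-run trading stamp arithmetic. The one-stamp-per-dime
# rate matches the description above; the booklet size and the shopper's
# weekly spending are hypothetical.

BOOK_SIZE = 1200  # stamps needed to fill one booklet (hypothetical figure)

def stamps_earned(purchase_dollars: float) -> int:
    """One stamp for every full ten cents spent."""
    return int(round(purchase_dollars * 100)) // 10

per_trip = stamps_earned(6.00)                  # a hypothetical $6.00 weekly trip
trips_to_fill_book = -(-BOOK_SIZE // per_trip)  # ceiling division

print(per_trip, "stamps per trip;", trips_to_fill_book, "trips to a reward")
```

Because the store both issued and redeemed the stamps, counting a customer's trips to a filled booklet was exactly the kind of repeat-visit measure the merchant could monitor.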
But this store-specific program didn’t last. The situation began to get muddled in 1896 when the Sperry and Hutchinson Company became a wholesaler of what it called Green Stamps. The idea was simple and had a populist spin that invited everyone into the game: Sperry and Hutchinson sold large numbers of gummed stamps to retailers for a tiny fee per stamp. The retailer would then award customers with one stamp for every ten cents they spent. (Early on, to discourage credit sales, customers would receive stamps only if they paid promptly in cash.)68 The customers were to paste the stamps into a specially designed booklet, which, when filled, they could redeem for various products (“premiums”) at an S&H center. S&H would offer exclusivity to a particular type of merchant—a grocer, a plumber, an electrician—in designated localities. For example, a 1910 newspaper ad for Davidson’s Cash Store in Phoenix, Arizona Territory, extolled the firm’s “price and quality” and noted that “they are the only people in Phoenix who give S. & H. Green stamps with hardware.”69 S&H’s pitch to these merchants was that stamp-collecting customers would give priority to retailers who offered the stamps. That may be, but because the booklets were likely filled with stamps awarded by any number of different vendors, tracking customer loyalty to a particular store via Green Stamps was impossible.
That didn’t matter to shoppers, of course, who liked accumulating the same stamps from different establishments. The tiny revenue S&H made per stamp ballooned into a huge cash flow, which translated into substantial profits in two ways. First, S&H profited from the difference between the price it paid for a premium and the amount in stamps a customer had to redeem for the item. Second, S&H earned considerable profits in the form of interest from the large cash reserves it held in the fairly lengthy period between the time that stamps were sold to stores and when customers completed the process of collecting, saving, and redeeming them.70
S&H’s success encouraged other companies to enter the trading stamp business, mostly with a regional focus. One fierce competitor was Gold Bond stamps, known at the time for offering a mink coat among the premiums for which stamps could be redeemed. But while loads of consumers gathered stamps eagerly, others found the activity preposterous. Some economists scoffed that the stamps simply encouraged price inflation, arguing that stores raised their prices to pay for the stamps. Some state legislators screamed that the stamp companies were actually anti-democratic. They said the programs were taking advantage of naïve consumers, who didn’t realize that the value of gifts amounted to only about 2 percent of what they had spent. One anti-stamper described such programs as “prostitutions at their best and economic insanity at their worst.” Dozens of states tried to ban trading stamps outright or impose taxes that would force them out of business.71 Yet most of these initiatives failed, and the stamps endured—but not because retailing executives liked them. Retailers tended to see them as albatrosses and tried to find excuses that would be acceptable to their customers for getting rid of them. To them, store coupons or discounts were far more useful because the executives could control the timing, nature, and amount of the offers, and these tactics would provide them with at least some quantitative measure of success. With stamp programs, all stores knew was the total number they gave out, though occasionally they could quantify the success of special promotions, such as offering double stamps for a short period.
Trading stamps experienced rapid growth up until 1915, with department stores, mail-order houses, and many other sellers including grocery chains joining in. With the start of World War I the business slumped, and this downturn lasted well into the Great Depression. Department stores experienced cash flow problems during the war and were among the first to stop using them, while grocery chains canceled their programs as they converted their units into economy outlets. Consequently, stamp programs between the two world wars became the preserve of small stores wanting as many loyalty arrows as possible in their competitive quivers. But the competition that accompanied the fast growth of supermarkets during the 1950s led many supermarket chains, which by then seemed interchangeable as well as impersonal, to offer trading stamps as a way to stand out from the others. One retail economist wrote that, in the 1950s, “the loss of individuality reinforced the supermarkets’ image of a formal business, carrying the same brands as the other supermarkets, offering the same conveniences, and charging just as much. In less than two decades the supermarket had become a comfortable and commonplace store to the shopper; an enmeshed and exposed firm to the merchant.”72
Some supermarkets, including Kroger and A&P, held out for a time, concerned that offering stamps could reduce their margins and therefore limit their flexibility to time other premium and discount programs. But most ultimately concluded that stamps were an unstoppable rage, as the president of Kroger reflected to the Wall Street Journal: “We fought them by cutting prices; we gave away hosiery, dishes, and dolls. We used every gimmick known—and still the stamps stores took sales away from us. We couldn’t fight them, so we joined them.”73 A&P, whose president called stamps “a drag on civilization,” began dispersing them at some stores as well.74
The trading stamp craze continued through much of the 1960s. Berkshire Hathaway, the chief investment vehicle of Warren Buffett, began investing in Blue Chip Stamps in 1970 when the company had sales of $126 million, with sixty billion stamps licked. “When I was told that even certain brothels and mortuaries gave stamps to their patrons, I felt I finally found a sure thing,” Buffett recalled in 2007. But Buffett was buying in at the end of a long ride. “From the day Charlie [his partner] and I stepped into the Blue Chip picture, the business went straight downhill,” he acknowledged.75 Discount stores and inflation were the main reasons that merchants began to jettison the sticky things. Big-box chains such as Shoppers’ City, Target, and Kmart sprang up on a large scale in the 1960s and started diverting the profits from department stores and supermarkets. They challenged shopper loyalty to stamps by aiming price-cutting efforts at the most popular redemption center items and setting up grocery departments with lower prices than those offered by supermarkets.76 The discount model also undoubtedly benefited from widespread concern regarding rapidly rising food prices. Various factors contributed to this dramatic increase, among them steadily increasing petroleum prices, a decrease in world grain production that caused feed grain and therefore meat prices to skyrocket, and a devaluation of the dollar. The impact was dramatic. The consumer price index rose 16 percent from 1967 through 1970, and 27 percent from 1970 to 1974. Food prices rose even more—15 percent from 1967 through 1970 and 41 percent from 1970 to 1974. Wholesale prices of farm products rose higher yet—an astonishing 69 percent from 1970 to 1974.77 The United States had never before experienced such high rates of inflation during peacetime.78
In this tumultuous environment, the profits of retail food chains as a percentage of return on sales plummeted from an average of 1.2 percent in 1963–70 to an average of 0.7 percent in 1972–74.79 Struggling to keep their companies afloat, executives ceased giving out trading stamps. Even companies that ran their own programs, such as Grand Union, stopped them during the 1970s.80 They decided instead to emphasize discounts and coupons that retailers could control closely, and in amounts that would encourage customer visits. In the early 1970s, though, the overriding concern for retailers was less loyalty than it was dismally low profitability. While discount coupons also allowed merchants to track results at least a bit more precisely than stamps did, this amounted to small consolation at a time of gravely bad earnings. In such an environment it’s hardly surprising that when supermarket executives were presented with an invention aimed at improving managerial efficiency—which also might potentially quantify shopper loyalty as never before—they leaped at the chance.
The new technology was the Universal Product Code scanning system: the small rectangle of black and white bars that gets swiped across a scanner connected to the checkout register—a feature that we take for granted on virtually every package we buy today. Incorporated into the bars is a unique code for the particular brand and type of item. The checkout scanner reads the product code—say on a sixteen-inch DiGiorno Rising Crust frozen pizza—and looks up the pizza’s current price from a database in an on-site or central computer. Almost instantaneously, information regarding the sale and the store from which it was purchased is transmitted to the corporate office. Corporate buyers can then determine when they need to replenish the stock for that specific pizza in that specific store. As a result of this development, companies now had immediate, precise knowledge of what was (or wasn’t) selling and when, and in which stores.
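The scan-lookup-transmit sequence described above can be sketched in a few lines of code. This is an illustrative model only, not any retailer’s actual system: the product codes, item names, prices, and function names below are invented for the example.

```python
# Minimal sketch of a point-of-sale UPC lookup, as described in the text.
# PRICE_DB stands in for the store's on-site or central price database;
# sales_log stands in for the sales feed relayed to the corporate office.
# All entries and prices here are hypothetical.

from datetime import datetime

PRICE_DB = {
    "036000291452": ("Rising Crust Frozen Pizza 16in", 6.99),
    "012345678905": ("Store Brand Cola 2L", 1.49),
}

sales_log = []  # each scan appends a record destined for headquarters

def scan(upc: str) -> float:
    """Look up the item's current price by its code and record the sale."""
    name, price = PRICE_DB[upc]
    sales_log.append({
        "upc": upc,
        "item": name,
        "price": price,
        "time": datetime.now().isoformat(),
    })
    return price

total = scan("036000291452") + scan("012345678905")
print(f"Total: ${total:.2f}")  # prints "Total: $8.48"
```

Because the register holds no prices itself, repricing an item means updating one database row rather than re-stamping every package on the shelf, which is exactly the efficiency the passage describes.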
Despite its evident benefits, the UPC system was not instituted until the mid-1970s even though the basic technology had existed for decades. In 1948 Bernard Silver, a graduate student at the Drexel Institute of Technology (now Drexel University), overheard a local supermarket executive imploring a dean to develop an efficient means for creating codes for product data. The dean demurred, but Silver took up the challenge with another graduate student, Norman Woodland, and later that same year they conceived and patented a version of the modern barcode. They used light from a very hot bulb to reflect off printed lines and create patterns that could be read by a special sound-on-film tube. As one writer notes, “It worked, but it was too big, it was too hard, computers were still enormous and expensive, and lasers hadn’t been invented yet.”81 A further stumbling block was the lack of an industry system to designate numbers for products. As two business economists note, “The UPC system would have been prohibitively expensive, perhaps technically impossible, to implement a decade earlier than it was.” They added, though, that it was the supermarket industry’s search for efficiency in the midst of an unprecedented financial challenge that pushed the project toward success. “The UPC was shaped as much by the challenging and volatile conditions of the food sector as it was by the forces of technology.”82
Between the 1940s and the 1970s some manufacturers and retailers tried to establish their own product coding systems, but each was incompatible with any other system and so they were effectively useless. Amid grave concerns about rising inflation in the early 1970s, a group of grocery industry trade associations banded together to pursue a universal coding system, forming the Uniform Grocery Product Code Council, a committee of supermarket and packaged-goods executives aided by the consulting firm McKinsey & Company. IBM was chosen to develop the technology, and National Cash Register developed the actual scanner. The first UPC-marked item scanned at a retail checkout took place at a Marsh supermarket in Troy, Ohio, on June 26, 1974, at 8:01 a.m.: a ten-pack of Wrigley’s Juicy Fruit chewing gum.83
There were doubters, but the project moved ahead rather quickly. By 1976, 75 percent of the items in a typical supermarket carried a UPC symbol, though the installation of scanners in supermarkets proceeded more slowly. The code was soon considered to be “firmly established in the food industry.”84 At the same time, Kmart, Walmart, and other grocery-stocked big-box merchants joined the move to scanners. Moreover, nongrocery manufacturers were now approaching the Code Council for UPC symbols to place on their products. By 1982, food and beverage manufacturers no longer constituted most new UPC registrations.85
The Wrigley’s gum crossing the scanner signified the beginning of a revolution throughout retailing. Stores adopted the device slowly at first (in 1976 Business Week published an article titled “The Supermarket Scanner That Failed”), but through the 1980s, and spurred by the big-box merchants, supermarkets made the scanners standard features at checkout. The executives who switched over to this system had efficiency in mind primarily, and over time they certainly achieved that goal. In addition to speeding the checkout process, the scanners enabled stores to order goods more efficiently because they could immediately identify which products sold poorly and which sold well. And stores no longer had to stamp or otherwise label each individual item with pricing information for a clerk to read and enter into a cash register at checkout; instead they could affix a single price label for an item to the adjacent shelf so that customers would know the cost. Finally, because they now could have virtually instantaneous knowledge of the goods in their stores and how they were selling, retailers could stock far more products than in the past and with fewer stock-keeping woes—and they did.
More important, perhaps, the scanning system upended the power relationship between retailers and manufacturers. For the first time in a century, chain retailers now had leverage over manufacturers when it came to information about the items they sold. Neither Nielsen nor IRI (another retail auditing company) could hope to provide anywhere near the level of competitive detail that retailers’ computers were now accumulating. With the new technology, store executives could know with unprecedented speed how a manufacturer’s new product was selling, or whether a brand’s advertising or couponing program seemed to be working—and all this information could be further broken down by store, by the extent of the success of the item or program, and by specific durations. These were important bits of information with which stores could bargain with manufacturers for product discounts, promotional funds, and “slotting fees”—payments for including a manufacturer’s product on the retailer’s shelves.
The scanner system also offered opportunities for efficiency to intersect with loyalty. Grocery executives recognized at the outset that the system could encourage shopper loyalty, reasoning that customers would appreciate getting through the checkout line more quickly. And although the idea wasn’t implemented at the time, the executives saw that the checkout registers could also be used to print recipes related to specific purchases—reflecting an interest in using personalization as a loyalty motivator in ways that were previously impossible. From there it was no great leap to view the barcode scanner as a logical platform that retailers could use to track the purchases of all customers. Trading stamps seemed to offer that potential in the early twentieth century, but ultimately they weren’t relevant to the mass-market, populist impulses of the period. In the late twentieth century the pressures on retailers pointed to a very different, nonpopulist mindset: a high-tech version of the discriminatory peddler era. As the 1970s drew to a close, the idea of tracking sales of individual shoppers by computer and profiling people based on what they bought had yet to take off. But the institutional challenges of the next decade would certainly move retailers in that direction.