The Postwar Landscape of Mass Consumption
Around 1950, two American families gathered for kitchen-table chats to chart the future. On the Lower East Side of Manhattan, Cele Roberts, pregnant with her first child, discussed with her husband the ideal location for raising a family. They had visited friends in some of the new houses appearing on the city’s edges and swooned at the gleaming kitchen appliances the homes held—double stainless-steel sinks, new GE stoves and ovens, and Bendix washing machines. In 1949, the Roberts family purchased a house in Levittown, New York, funded by a Veterans’ Administration mortgage and Mr. Roberts’s income as a substitute teacher. Compared to their cramped, cockroach-ridden one-room apartment, the eight-hundred-square-foot house felt spacious. The Robertses’ surroundings also differed from the “hurly-burly” of the city streets of their previous home. Mrs. Roberts could peer out her picture window and track the growth of newly planted fruit trees and beyond them a “long grassy hillock.”1 At about the same time, in another household, an Illinois farmer named Dale Beall shared his sketches of a remodeled dairy barn with his wife Veleanor and their daughters. Like the modern kitchen of Levittown floorplans, the new milking parlor and bulk tank cooler in the Beall barn plan carried hopes of a brighter future. By streamlining the architecture in which their cows were fed, housed, and milked, the Beall family aimed to bring the old family farmstead into a new era. “Fifty-six,” Mr. Beall told his daughters, “is the year to fix.”2
Although the Roberts and Beall families rebuilt their lives during the same period, their stories rarely intersect in historical works. Those who moved to suburban neighborhoods often perceived their new surroundings as empty spaces, blank slates on which they could rewrite their lives.3 Others adhered to an idealized pastoral view of the rural landscape as an unchanging place. Both these ideas masked the ways in which the nation’s rural landscapes were changing and, with them, the lives of those whose incomes derived from farming. Historians and geographers have offered an admirably detailed picture of how supermarkets, strip malls, and suburban tracts of ranch homes remade the built environment of the postwar United States. But as the Bealls’ farm plans suggest, the building blitz in the suburbs was paralleled by a grand reconstruction of the nation’s barnyards. The logic of mass consumption that altered the sites where Americans lived and purchased food and other goods simultaneously transformed the geography of food distribution and the agricultural infrastructure.4
Tank trucks, traveling on the same interstate highways that facilitated suburban development, made it possible to haul milk once deemed too expensive to transport, bringing it to thirsty urban and suburban consumers at lower cost.5 Thus, geographically isolated farmers who had once sold milk for manufacturing purposes (to cheese factories or creameries) could sell milk on the Grade A market. But to sell their milk for drinking, outer-ring dairy farmers had to remodel their dairy barns and milk houses to meet the more stringent health standards required for fluid milk. When the Beall family added a milking parlor and bulk cooler, their remodeling job was part of this broader effort. Once farm families remade their barnyards, their milk could be channeled into a greater variety of foods and reach a greater number of consumers; it might end up at the nearby plant that made award-winning cheese, but it could also be served pasteurized in a glass on the kitchen table of a suburban development like Levittown. By 1959, the milk of farm families who supplied Wisconsin’s Consolidated Badger Cooperative was reaching consumers at all of the Carvel ice cream stores in Milwaukee, as soft-serve in Dairy Queen and Tastee-Freez stores in Michigan and Wisconsin, and in custard stands throughout Wisconsin.
Milk processors welcomed the greater volume of milk that could be diverted to whichever product promised the greatest reward, but the realignment of the milk commodity chain had mixed effects. As their sales expanded geographically, milk cooperatives like Wisconsin’s Consolidated Badger counted fewer members. Many farm families who believed they might reap greater profits by remodeling their barns to meet Grade A standards found that the cost of improvements outweighed any premiums paid for improved milk. Farmers who had long produced Grade A milk faced competition from distant counterparts that drove milk prices lower, even as housing developments on the urban periphery nudged property tax bills upward. Consumers could obtain a broader array of dairy products at the supermarket, but they traded some control over their diets to food manufacturers. The rise of a food economy of mass consumption and production was neither smooth nor foreordained; it was fraught with discord and dislocation.6
Both newly suburban residents and the farm families that stocked their refrigerators made personal decisions about how fully to engage with the postwar food economy. Not all farm families remodeled their barns to stay in the dairy business. Not all urban residents aspired to a suburban existence where they could fill their refrigerators and freezers with foods purchased from the nearest supermarket. As changes in how foods were marketed, packaged, and transported redrew the geography of food production and consumption, however, farm families and consumers could hardly avoid the impact of these changes on how they sold their product or bought their food. Through the story of ice cream and its ingredients, this chapter examines how individual farm families, milk inspectors, and consumers navigated the broadly changing postwar landscape of dairy production and mass consumption.
Ice cream showed up in nearly every iconic postwar locale. Baskin-Robbins and Carvel were both founded in the immediate postwar era, and by 1954, 40 percent of all incorporated towns with populations over 2,500 had a Dairy Queen.7 Vendors like Good Humor sold their wares on leafy suburban avenues, beckoning residents and especially children outdoors for a frozen treat. According to architectural historian Dolores Hayden, developers seeking to attract buyers to the suburbs even used a house’s location on “the regular route of the Good Humor Man” as a selling point.8 Ice cream appealed across racial and class lines; features and ads for ice cream desserts appeared in Ebony magazine as well as in mainstream journals.9 With the widespread adoption of home freezers, more Americans in the 1950s and early 1960s could take home a half-gallon or even a full gallon of ice cream from the supermarket.
The site and character of ice cream consumption in the postwar era differed from prewar patterns. From the 1910s through the early 1940s, Americans usually purchased ice cream from confectioneries, drug stores, and retail counters, since most Americans lacked home freezers to keep ice cream cold. Eaten outside the home, ice cream was intertwined with patterns of community life and social relations. Rural residents caught up with neighbors in town over a malt or milkshake on Saturday nights. Children chased the local ice cream man down public streets. In the lyrics of tunes like “Over a Chocolate Sundae on a Saturday Night” and “Won’t You Have an Ice Cream Soda with Me?” romances blossomed at the soda fountain counter.10
When Americans of the interwar period did serve ice cream at home, most purchased only as much as they planned to eat, since they had no way to store it for future use. In 1940, only about half of all American homes had electric refrigerators.11 Thus, ice cream was a food of picnics, banquets, birthdays, and special events. Women could purchase ice cream pies for special guests, turkey-shaped desserts at Thanksgiving, or animal-molded sundaes for a birthday party.12 At summertime picnics, families made their own ice cream. One New York farm family celebrated the Fourth of July in 1943 with an ice cream social.13 Similarly, making ice cream together was the main event at a 1941 Maine family reunion. Men dug out a block of ice and pounded it into small pieces with an axe, while women mixed a custard of cream, eggs, and sugar. Finally, everyone took turns cranking the makeshift “freezer” in the yard.14 The labor involved in making ice cream marked it as a food of summertime gatherings, much anticipated precisely because they were so infrequent and unusual.
After World War II, however, ice cream consumption changed dramatically, hitting an all-time high of 5.1 gallons per capita in 1946.15 Not only did Americans eat more ice cream, they also purchased it in new places. Three factors were especially important in these changes: the adoption of home freezers, the demographic shift of the baby boom, and the rise of supermarket shopping. First, more consumers purchased refrigerators with freezer compartments in the 1950s. Not only were refrigerators readily available, but postwar prosperity and installment-plan purchasing made them affordable. In the five years after World War II, consumer spending on household appliances rose 240 percent. In 1950 alone, Americans purchased three million refrigerators, many of which came with freezer compartments.16 Once American households had home freezers, it became possible for their residents to purchase ice cream on a more regular basis and in larger quantities.17 Whereas in 1941 half-gallon and gallon packages made up less than 1 percent of the ice cream market, by 1955 nearly 30 percent of the nation’s ice cream was purchased in half-gallons and gallons.18 The debut of the television set further stimulated the growing home market for ice cream.19 Ice cream and appliance makers fully recognized the mutually beneficial elements of their relationship; in a 1950 ad campaign, freezer and refrigerator maker Kelvinator sought to boost sales by promoting ice cream during June dairy month.20
The postwar baby boom also played a key role in changing the patterns of ice cream consumption. In the late 1950s, ice cream manufacturers began selling ice cream sandwiches and other novelties in multipacks, capitalizing on the growing size of the typical American family. The crowds of kids flocking to the nearby ice cream stand or chasing the ice cream man down the street were a ripe market. As the decade progressed, ice cream makers appealed to children in increasingly targeted ways. Eskimo Pie rewarded children who sent in ice cream wrappers with premium gifts, like roller skates, a Joe DiMaggio baseball, and pen and pencil sets.21 Carvel Ice Cream Company Stores courted kids by creating “Freezy the Clown,” staging ice cream eating contests at new branches, and developing a comic book series starring Freezy Carvel, Super Space Man.22 Due to its ease of preparation, ice cream was well suited to an age in which women still shouldered the primary responsibility of making meals and caring for children but joined the paid labor force in greater numbers than ever before.23
Figure 4.1 Ice cream takes on a patriotic appeal in this image by the Olsen Publishing Company. Used to boost troop morale during World War II, ice cream reached record levels of consumption in the late 1940s. Wisconsin Historical Society, WHi 93317.
As home freezers facilitated the ice cream binge and the baby boom increased consumer demand, supermarkets emerged as a key site for ice cream purchase. In 1952, though the nation’s 15,383 supermarkets made up less than 4 percent of grocery stores nationwide, they conducted over 40 percent of the grocery business.24 The food store came within 1 percentage point of the drug store as the place where Americans bought ice cream in 1950.25 Buying ice cream in a supermarket was fundamentally different from purchasing it from a confectioner or drug store. The heart of the supermarket sale was impulse buying; sales managers arranged products on the shelf in attractive displays in hopes of sparking unplanned purchases.26 Consumers adjusted slowly to the idea that ice cream could be purchased on impulse and kept in the freezer for unplanned events. A 1954 marketing survey of 193 housewives in Philadelphia and Pittsburgh determined that women over forty, in particular, were more likely to purchase ice cream for one meal only. But the same study revealed that younger women, especially those with children, purchased “extra quantity for storage in refrigerators.”27 Products like Eskimo Pie’s multipacks were specifically designed for supermarket distribution. To ensure that its growing line of ice cream bars appeared in large stores, Eskimo Pie even designed its own merchandising cabinet.28
Access to such an abundance of sweetened, frozen foods marked a radical break from the rationing of cream and sugar during the war years. But the dizzying array of frozen desserts in the supermarket aisles and flowing from fast-food shake machines could also be disorienting. In 1954, the executive director of the Detroit Dairy Council, having interviewed one hundred consumers, told the Michigan Association of Ice Cream Manufacturers that consumers felt less sure about what, exactly, filled the cones and cartons in the freezer case. She explained, “In interviews and contacts with consumers, there permeated the feeling that ice cream manufacturers were selling something called ice cream that contained substances other than milk, cream, sugar, and flavoring.”29 The same technologies and economy that made it possible for ice cream to become a food of all seasons, incorporated into the diets of all classes, led some to wonder whether such a product might be cheapened in the process of mass access. How good could ice cream be if manufactured on a mass scale?
The sources of consumers’ questions about ice cream reflected broader concerns about food in the 1950s. Many consumers’ biggest concern was value. Inflationary pressures in the 1950s hiked food prices, and food buyers wanted to get their money’s worth for the goods that filled their grocery carts. Historically, demands for low prices centered on staple foods like milk and meat, but as Americans came to think of a wider variety of foods as needed items, they expected value to extend to goods once thought of as luxuries—including ice cream. Value entailed quality as well as cost. Consumers were especially loath to pay high prices for substandard products, and many felt that quality suffered as the assortment of frozen treats grew. As one Washington, D.C., consumer stated, “The stuff [ice cream] sold today bears little resemblance, either in looks, taste, quality, and in all probability, ingredients, to that manufactured several years ago.”30 Paying inflated prices for poor-quality ice cream left a bitter taste in consumers’ mouths.
If an acute sense of value put consumers on edge, the changing process of ice cream manufacture intensified their concerns. In the 1950s, food scientists concocted new recipes for ice cream mixes. To the core ingredients of cream, sweetener, and flavoring, ice cream makers added stabilizers such as gelatin, Irish moss (carrageenan), and guar seed gum. Before the war, ice cream makers used stabilizers to help their product withstand inadequate refrigeration. Newly formulated mixes enabled ice cream to travel longer distances while maintaining its texture: stabilizers slowed the formation of ice crystals and preserved the product’s creaminess as it traveled from factory floor to supermarket freezer.31 Manufacturers’ desire to distribute a low-cost product also led some to rethink the basic components of ice cream: milk, cream, and sugar. Food manufacturers in Texas began to replace cream with vegetable fats, and these “vegetable fat frozen desserts” reached as far north as Chicago by 1952.32 As ice cream makers utilized a greater variety of ingredients to provide cheap abundance year-round, consumers’ suspicions about what, exactly, was in their ice cream cartons only intensified.
The variability in ice cream recipes led states to revise regulations defining ice cream, and the resulting laws varied widely. In Arkansas, ice cream could be sweetened only with sugar, but in neighboring Missouri ice cream makers could use dextrose, lactose, corn sugar, maple syrup, honey, brown sugar, malt syrup, and molasses (except blackstrap) to sweeten their product. Alabama allowed nondairy fats—such as vegetable and animal fats—to be mixed with sweeteners and sold as “mellorine” or labeled “imitation” ice cream, while Wisconsin prohibited the manufacture or sale of any iced dessert that used vegetable fats and resembled ice cream. The patchwork of state laws reflected local agricultural geography. Arkansas sugar growers and Wisconsin dairy farmers benefited from standards that restricted ingredients manufactured by other industries.33
But as food manufacturers, rather than farmers, flexed their muscle to shape food policy, they sought a new set of federal standards to govern ice cream’s contents. The supermarkets that sold the bulk of the nation’s ice cream by the late 1950s and the food companies that supplied them were part of a nationwide distribution system. When food manufacturers sent their goods across state lines, their products crossed into new regions of legal authority. Unifying food standards under a federal umbrella would give food manufacturers even greater flexibility in obtaining raw materials and selling their final products. Food processors argued that uniform regulations would allow the logic of supply and demand, not state regulation, to drive their sales.34
To this end, in 1958, the federal Food and Drug Administration (FDA) issued new standards requiring any ice cream to contain at least 10 percent milk fat and 20 percent total milk solids (milk fat plus nonfat milk solids). The standard also allowed ice cream makers to use some ingredients as stabilizers, as long as these substances had been proven safe and did not exceed half of 1 percent of the finished ice cream.35 In announcing the federal ice cream standards, FDA regulators noted that the new rules would ensure that consumers were getting the product they paid for—without excessive amounts of air, water, or deleterious substances mixed in.
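Expressed schematically, with $m$ standing for the weight of the finished product (the notation is mine, not the FDA’s), the 1958 thresholds amount to:

$$\frac{m_{\text{milk fat}}}{m} \ge 0.10, \qquad \frac{m_{\text{milk solids}}}{m} \ge 0.20, \qquad \frac{m_{\text{stabilizer}}}{m} \le 0.005$$

A hundred-pound batch, in other words, needed at least ten pounds of milk fat and twenty pounds of total milk solids, and could carry no more than half a pound of stabilizer.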
Although the FDA’s alleged aim was to make sure consumers obtained “what they expect in these dairy foods,” the establishment of federal standards for ice cream made it more difficult for consumers to assess the product’s contents. Once the FDA established a federal standard of identity for a food product, its manufacturers were no longer required to list the ingredients it contained, so long as it met the guidelines set forth by the federal standard. A carton of ice cream might be churned entirely from cream or blended from a combination of dairy foods (like skim milk, casein, dried buttermilk, or whey), but as long as the end result contained 10 percent milk fat and 20 percent milk solids, it constituted ice cream under the federal standard. The standard thus set a basic floor for what defined ice cream and asked consumers to trust food manufacturers and regulators to look after its contents.36
Many consumers, however, preferred to maintain the authority to evaluate the contents of their foods. They clamored for ice cream cartons to carry comprehensive lists of ingredients. As one man explained, “When buying a can of salmon, the label tells me that SALT has been added. When buying canned fruits, the labels proclaim the addition of SYRUP. Even packaged candy-coated popcorn holds a slip stating the ingredients … but not ice cream!”37 Some consumers wanted ingredient lists because they were anxious to know what they were feeding their children.38 Others wanted to avoid gelatin or hog-derived ingredients for religious reasons.39 Some wanted to select ice cream with the highest butterfat content, while others sought low butterfat desserts.40 Consumers felt that they had few clues to distinguish between the ever-widening array of items in the freezer case. “When I pay from $0.59 to $1.59 per half gallon,” stated Theodora Burke, of Utica, New York, “I would like to know what the difference is.”41 Without ingredient lists, consumers were more likely to believe scare stories about ice cream’s contents, which claimed that modern ice cream contained the same ingredients as antifreeze, rubber cement, and oil paints.42
The intense desire to know what was in ice cream was about more than changing manufacturing processes and the legal guidelines of its creation. It was also about the insecurities that home cooks felt as they furnished family tables with premade items rather than meals cooked from scratch. To women balancing a host of responsibilities, scooping out a dish of ice cream or handing out ice cream bars was appealing because it required less time than baking pies, cakes, or cookies. Yet ice cream carried none of the cachet of a home-baked tart. In a time that glorified domesticity, many women felt that the foods they served reflected their worth. The dessert course, in particular, was often viewed as the best test of a woman’s skill in the kitchen. A woman who could bake a tasty pie or an elegant cake earned admiration. But the respect conferred by a well-baked pie crust was difficult to attain when serving a manufactured product. Whereas a home baker could certify the quality of a cake’s contents and knew the process of its creation, the housewife who served store-bought ice cream could offer few such assurances. One could buy ice cream and remake it at home—enhancing it with delectable sauces, decorating ice cream cones as clown hats, or layering ice cream into frozen pie crusts—but such tasks took just as much time and skill as baking. Being fastidious about the act of food purchase—that is, certifying that a product’s contents were wholesome and becoming an avid reader of ingredient labels—was a way to adapt homemaking skills to a consumer age.43
Consumers’ demands for more information about the contents of ice cream also reveal how central product labels and ingredient lists had become to food buying by the 1950s. Initially sites of brand promotion, food labels became further entrenched as manufacturers used them to advertise foods’ nutritional contents in the 1920s and 1930s. As food manufacturers’ lexicon became more difficult to decipher, consumers demanded a right to information about food contents, leading to the creation of government-enforced quality standards for food labeling in the 1930s.44 By demanding government quality standards, consumer advocates in the 1930s aimed to empower food consumers to make informed choices about their foods. But as practiced in the 1950s, the FDA’s standard of identity ruling provided consumers with less evidence, not more, about the ingredients within a carton of ice cream. While American women willingly traded away the labor of making dessert to ice cream manufacturers, they wanted to maintain some control over the quality of the foods they served in their kitchens.
The questions that ice cream inspired consumers to ask demonstrate the ambiguities of mass consumption in the postwar era. On one hand, the techniques of mass production, such as long-distance transportation and innovations in food science, made it possible for a greater number of consumers to experience ice cream as an everyday treat. The broad array of ice cream choices—from soft-serve cones to Klondike Bars, hand-packed pints to gallons of factory-churned vanilla—signaled the height of a consumer-driven economy. But, on the other hand, behind this delectable array of choice lingered a sense of uncertainty. What shortcuts made such cheap abundance possible? Could a woman be a “good” cook in an age of manufactured foods? Consumers’ desire to know what was in ice cream usually led them to contemplate the process of its manufacture and especially to wonder about the additives or nondairy fats it contained. Rarely did they consider the processes by which the “natural” ingredients—like sugar or cream—were produced. Had they done so, consumers would have found that trends toward uniform standards extended beyond the ice cream recipe to the farms on which cows produced milk. There, farm families, like housewives, struggled to balance the promises of mass production with maintaining individual control over their livelihoods.
Postwar transportation technologies and political changes radically altered the geography of dairy farming. Before the war, the seasonality of milk production, transportation costs, and economic and health regulations created two zones of milk production. Farmers located in greatest proximity to the nation’s urban areas provided city dwellers with Grade A milk for drinking. These Grade A producers earned a premium for their milk, because it had to be produced under more stringent health regulations and shipped to market quickly. By contrast, farmers located at a distance from urban markets largely sent their milk to local manufacturing plants, such as creameries or cheese factories. These Grade B producers did not have to invest in the equipment to meet Grade A standards and were paid less for their product as a result. While surplus milk from Grade A farms could be made into cheese or butter, milk from a manufacturing-grade farm could not meet the sanitary standards required to be bottled and sold as fluid milk. Outer-ring Grade B dairy producers became newly able to supply the fluid milk market in the postwar era, so long as they improved their farms to meet Grade A production requirements. By flattening the transportation barriers that had separated manufactured from fluid milk, the new postwar food economy made milk interchangeable.
The changing geography of milk transportation affected every link in the commodity chain of milk production—those who shipped milk, inspected it, and manufactured it into dairy foods. Small milk-processing plants that had once churned butter or crafted cheese grew into national distributors of a wide variety of dairy products to supermarkets and fast-food restaurants. According to a 1960 study of dairy manufacturing cooperatives in Minnesota, Iowa, and Wisconsin, half of the cooperatives surveyed added facilities to their plants to accommodate Grade A milk between 1951 and 1956.45 One such cooperative, Wisconsin’s Consolidated Badger Cooperative, purchased smaller cheese and ice cream companies and closed some smaller butter factories, directing milk from an ever-expanding region to larger manufacturing plants. Pennsylvania’s Wawa Dairy Company purchased nearby Brookmeade Dairies in 1950, and by 1957, Wawa’s milk trucks began carrying ice cream.46 By acquiring the facilities to process fluid milk, cheese, cream, ice cream mix, and dairy byproducts like powdered skim milk, dairy manufacturers could sell the dairy products in highest demand for maximum returns. The goal of these plants was not simply to provide a market for local farm families to sell their milk, but rather to obtain milk from a greater region and sell it nationally.
Milk inspection also witnessed great changes. The transport of milk across greater distances required municipal milk inspectors to rely more heavily upon counterparts from other cities or states to guarantee milk’s safety, for they could not travel to increasingly distant locales to carry out inspections. In 1948, for the first time, the New York City Board of Health approved cream for ice cream manufacture to enter the city from outside its own milkshed, that is, from outside the geographic area that supplied the city’s fluid milk. Some inspectors worried that interstate agreements allowed inexperienced inspectors to take over the tasks of long-serving municipal officials, and they insisted on retaining control over the milk and food of their localities. But states that exported vast quantities of milk, like Wisconsin and Vermont, were especially eager to set uniform standards of milk inspection, because interstate and national agreements would enable their states’ milk to reach consumers without being stymied by local regulators.47
Those on the dairy farm felt the effects of the changing dairy economy most profoundly. As before World War II, the kinds of farm families engaged in dairying varied widely in the immediate postwar era. Over three million farms reported milk cows in the 1950 agricultural census, but only a fraction sold milk commercially. (Nearly half of the farms reporting dairy animals had just one or two cows.) On farms that sold milk, herd sizes remained small, for dairying was often combined with general farming. In Wisconsin, for instance, most farmers had herds of ten to nineteen milk cows. Only California and New York boasted a significant number of herds of fifty or more milking animals. Even in California, the state with the largest average herd sizes, fewer than 10 percent of the state’s dairy farms had more than fifty cows.48 Since dairying continued to be mixed with other farm pursuits, most farms had buildings fitted to accommodate generalized farming, not specialized dairying. Depression economics and wartime equipment shortages discouraged new construction in the 1930s and 1940s. A 1947 survey of Illinois dairies, for instance, found that most farm buildings had been constructed forty to seventy years earlier. Such buildings were physically sound but ill-equipped to house larger herds of dairy cows or equipment specifically designed for dairying.49
Rather than spend the money on farm improvements to enter the Grade A market after World War II, many farm families left the dairying business. Farm income slumped with the rising costs of machinery. Some dairy farmers also faced higher property taxes as suburban neighborhoods encroached on their fields and as tax assessors recalculated the value of their farms to incorporate new buildings.50 Nearly half of the commercial dairy farms in the central plain region of New York in 1953 were no longer in the business by 1963.51 Minnesota lost 29 percent of its dairy farms between 1962 and 1968.52 Many of those who left dairying stayed on their farms but shifted to raising beef cattle or cash crops, such as corn, soybeans, or sugar beets—agricultural operations that required less intensive labor or less expensive machinery.
Families who sought to continue dairying rebuilt outmoded barns and reconstructed the environment of dairy farming to achieve mass milk production in the postwar era. They rethought nearly every element of farm management and the built environment. New state and national standards for milk quality, such as the 1959 United States Public Health Service Milk Ordinance and Code, encouraged some remodeling, because the laws defined milk’s purity partly by the architecture in which it was produced.53 Some farmers scaled up to accommodate larger herds and, they hoped, a bigger milk check. Others recalibrated the mix of land devoted to croplands and permanent pasture. The impact of the economy of mass consumption, then, was evident not only in the broader array of products in the supermarket aisle, but also in the ways that farmers organized their barnyards, housed their cows, and grew crops to feed them.
Just as home freezers remade the suburban kitchen, the bulk tank cooler transformed the milk house. Until the early 1950s, most farmers chilled milk by pouring it into ten-gallon cans and immersing the cans in cold water in a cement tank. When milk haulers came to pick up the milk, the farmer and the truck driver would hoist the full cans onto the hauler’s truck in exchange for empty cans that would be filled the next day. During the 1950s, dairy farmers began utilizing bulk milk tanks, mechanical refrigerators capable of chilling hundreds of gallons of milk. Rather than heave heavy cans of milk, haulers servicing bulk tanks hooked a special hose to farmers’ tanks and pumped the milk into the truck’s stainless-steel receiving vat, making the chores of milk cooling, transportation, and handling more efficient.54 Most dairy farmers needed to build new milk houses separate from the barn to accommodate tank truck deliveries.
In 1953, only 6,200 dairy farms, or roughly 1 percent of the commercial dairy farms in the United States, had bulk milk cooling tanks. Farm people in regions where large herd sizes prevailed adopted bulk milk tanks quickly. The milk of an average-sized California dairy herd of forty-four cows, each of which yielded 8,000 pounds of milk per year, would have filled eighty or ninety cans each week—too many to heave and hoist onto a milk truck.55 Efficient for farmers with large herds, bulk tanks were costly for the smallest dairy farmers. At $1,500 to $2,500, the purchase of a bulk tank was a considerable expense. As farmer Herbert Leon of Aurora, Indiana, asked Senator Clyde Wheeler in 1956, “The talk now is bulk coolers which cost $2415. I only sold $2886 worth of milk last year how could I buy one and eat?”56 The costs of installation, such as remaking the milk house door to accommodate a tank or installing pressurized water to sufficiently clean it, added to farmers’ expenses.57 Since the bulk tank was a new technology, there were few used models for economizing farmers to purchase.58
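A back-of-the-envelope conversion shows the scale involved (assuming milk weighs about 8.6 pounds per gallon, so a full ten-gallon can held roughly 86 pounds):

$$\frac{44 \text{ cows} \times 8{,}000 \text{ lb/year}}{52 \text{ weeks}} \approx 6{,}770 \text{ lb per week}, \qquad \frac{6{,}770 \text{ lb}}{86 \text{ lb per can}} \approx 79 \text{ cans}$$

Since a full can weighed well over a hundred pounds with its metal shell, moving a week’s output meant some eighty heavy lifts; the bulk tank’s hose-and-pump transfer eliminated them.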
Figure 4.2 The bulk tank cooler required significant investment from farm families and indicated a farm’s specialization in dairying. It was a source of pride for those who invested in the technology in the 1950s and 1960s. Consolidated Badger Cooperative Creamery Records, Wisconsin Historical Society, Green Bay Area Research Center.
The bulk milk tank was just one element of the farm geared to mass milk production. Farm management specialists in the 1950s also promoted plans to redesign two parts of the dairy farm that had remained unchanged since the turn of the twentieth century: the stanchion barn and the pasture. Their aim was to help farmers boost production and raise more cows on the same acreage without additional labor costs. National trends to mass production thus transformed not only the networks of milk transportation but also the layout of dairy farmsteads themselves.
One change promoted in the 1950s was the shift from stanchion barns to loose housing. Until the late 1940s, most dairy cows were housed in barns in which each cow stood confined to an individual stall or stanchion. Farmers milked, fed, and bedded cows in their stanchions. The stall barn made it easy to locate specific cows: if the veterinarian came to administer medicine or a buyer came to purchase an animal, a farmer knew where to find it. Stanchion barns, though, were expensive to construct and costly to remodel. As the average size of cows increased, farmers found the stalls too cramped or short to accommodate cows’ larger bodies. Engineers believed that by altering the layout of the stall barn they could improve cow comfort, reduce the costs of improvements, and relieve laborious barn tasks, such as feeding and manure removal.59
In the 1950s, engineers experimented with a different kind of dairy barn: the pen barn or loose-housing system. In these barns, used widely in the South before World War II, spaces were arranged by function: one section of the barn was used for feeding, another for loafing and bedding, and a third as a milking parlor. Adult cows were allowed to roam freely, rather than being fixed in place.60 Whereas in a stall barn farmers provided feed to cows in the feed alley adjacent to their stalls, in a pen barn cows ate from self-feeding bunks that held hay and silage. In a stall barn, each cow had a separate drinking cup, but in a pen barn, cows drank from a common water tank in the feeding area. Farm engineers believed that loose housing would save labor and reduce barn construction costs. Pen barns were easier to remodel than stanchion barns, so they could accommodate fluctuations in herd size. The reduced costs in providing loose housing made it the favorite of farm engineers. The conclusions of one agricultural economics student in 1961 summed up their view: “If new dairy buildings are necessary, stanchion barns should not be built. There is always a loose-housing alternative that gives lower cost.”61
Changes to the task of milking in a loose-housing system were especially dramatic.62 In a stanchion barn, a farmer moved from cow to cow during milking, usually with a milking machine in tow. Farmers with loose-housing operations milked cows in a central location, called a milking room or milking parlor. Twice a day, cows filed into the cement-floored room in small groups and proceeded into individual milking stalls. The stalls, located above a pit where the milker stood, allowed farmers to work at eye-level with cows’ udders. Thus, farmers could inspect cows for signs of disease or injury and milk cows without stooping or squatting. As the milker washed cows’ udders and hooked them to the milking machine, cows ate concentrated feed or grains. In most modern milking parlors, pipelines carried the milk directly to the bulk tank. When all the cows were through milking, they exited the parlor, and the next batch of cows entered.63
Even farm families reluctant to adopt loose housing wholesale built milking parlors and installed pipeline milkers.64 Dale Beall was among them; he maintained a stall barn but proudly displayed a new milking parlor by late 1956. Milking parlors appealed to farm families because they cut the time required to milk cattle. For farm owners who hired laborers to perform milking tasks, the time saved with the use of a milking parlor was paramount. In 1963, Max Foster, of Modesto, California, credited the construction of a new milking parlor with increasing the average number of cows each employee milked from sixty to one hundred animals. For Foster’s operation, the main distinction was not just that parlor machines channeled milk to the cooler more efficiently, but also that the new machines had automated feeders that provided each cow with a grain ration in proportion to the amount of milk she produced. By streamlining milking time, the milking parlor diminished the wages that Foster owed to workers.65 Hank Bennett, who farmed in Sixes, Oregon, also noted the changes that the construction of a milking parlor brought to the task of milking. In his newly constructed milk room, one man could milk and feed 150 cows, thanks to machines that massaged cows’ udders to stimulate milk letdown and push-button-operated conveyor belts that delivered roughage at milking time. Cows, not just laborers, faced adjustment to the new system. After operating his remodeled milk room for a few weeks, Bennett reported, “The heifers are now coming in with their milk let down, but many of the old cows are only partially ready. It will probably take several months to get some of the old girls trained.”66
A final change to the local geography of the dairy farm in the 1950s and 1960s concerned how and what cows were fed. Cows’ rations continued to be made up of a mix of roughage (hay, silage) and concentrates (grains or root crops), but the ways cows received their roughage began to change. For the first half of the twentieth century, one of the most consistent components of dairy cows’ roughage ration was pasture grass. Just as consumers greeted the first warm day of spring with an ice cream cone, farm families marked spring by turning cows out to pasture. Elihu Gifford even recorded in his diary each May the precise day when he first turned cows out to pasture.67 Grazing restored cows’ health, gave a rich yellow hue to their milk, and increased the flow of milk they provided. When pastures dried up or were covered with snow, cows ate the roughage part of their ration as hay and silage.
Dairy farm families did not abandon grass feeding for corn in the 1950s and 1960s, contrary to the impression created by recent agricultural writers and grass-fed meat enthusiasts.68 In fact, many dairy farmers became more reliant on the roughage part of the ration after 1945.69 But the ways farmers utilized pasturelands, and the ways cows encountered grass, did change. At the very moment that urbanites were drawn to the suburbs by visions of cows grazing in verdant pastures, farmers began to feed cows grass in the barnyard rather than turning them out to graze. Some farmers provided the grass chopped, others ensiled it, and some fed it as hay pellets or wafers. These changes to the form in which cows received pasture rations required changes to the physical layout of farms and to farm laborers’ everyday tasks.
One strategy to intensify the use of grass involved reallocating the kinds of lands devoted to pasture. Some farmers used electric fences to confine cows to a particular area of pasture each day, thus preventing cows from wasting grass in a larger area. Farmers gave grass a chance to grow back before cows grazed on it again.70 Other farm families removed cows from upland areas that previously served as permanent pasturelands. Grazing in upland woodlands exposed cattle to dangers, like poisonous plants, and rarely offered adequate roughage. In the 1950s, state soil conservation programs provided incentives to farmers for taking cows out of woodlands, since grazing also compacted soils and harmed young trees. Wisconsin’s Forest Crop Law, for instance, reduced property taxes on woodlands from which cows were removed and thus offset the cost of building fences to keep them out.71 As cows were restricted from forest uplands, land once utilized for growing cash crops was designated for pasturage. In New Hampshire, farmers enrolled in the Green Pastures program fenced off woodlands and planted a mix of ladino clover, alfalfa, and brome grass on fields once devoted to wheat and potatoes.72 New Hampshire’s Green Pastures program was largely an effort to boost dairy yields without increasing acreage, so that eastern dairy farmers could compete with milk imported from western dairy farms. The changing national geography of milk distribution, then, instigated transformations to the layout of New Hampshire farms.
A new, more capital-intensive way of providing roughage for cattle was to cut and haul grass to cows that remained in the barnyard. Farmers chopped grass with mechanical choppers in the field, then provided it to cows from self-feeding bunks. Those who chose this alternative believed hauling and chopping pasturage bested grazing, despite the twice-daily labor involved in cutting grass. Mechanical choppers cut grass more systematically than cows, which trampled some grasses as they grazed. Feeding chopped grass on a schedule, farmers claimed, made milk production more consistent. Others turned to providing pasturage in the barnyard because they found it difficult to get cows to and from pasture, especially when pastures lay across busy highways.73 Generally, it was the efficiency afforded by cutting grass that prompted farmers to shift to this method of feeding. One Ohio farmer told Hoard’s Dairyman in 1958 that feeding cows machine-cut pasture grass instead of grazing them enabled him to raise forty more cows on the same acreage. California farmer Max Foster said it helped him increase his herd by four hundred animals on only slightly larger acreage.74
The biggest change to grass feeding in the postwar era was its preservation as silage. Farmers who utilized this practice stored the mix of alfalfa, clover, and brome grass in glass-lined silos instead of drying the first cutting of grass as hay. Whereas hay needed to cure in the fields for days before being stored, silage could be preserved quickly, a fact with implications for cows’ nutrition and farmers’ hay-making process. Nutritionally, leaves remained on alfalfa stored as silage, boosting its protein content. More important to farm laborers, storing grass as silage diminished the importance of making hay “while the sun shines.” Whereas wet weather could leave hay moldy or slow its curing time, farmers might still be able to make silage in advance of a storm. Cut just before seeding, alfalfa and other grasses destined for silage would be chopped and then stored in the silo to ferment. Once stored, silage could be fed even when pastures dried up or were too muddy for cows to graze. Thus, ensiling grass gave farmers assurance of a ready feed supply no matter the weather. But such assurance came at a price; glass-lined silos, field choppers, and the automated feeding systems associated with them cost thousands of dollars, money that farm families hoped to offset with greater milk yields.75
Hauling pasturage to cows in the barn or feeding grass silage diminished the seasonality of cows’ ration and cows’ role in procuring it. Just as home freezers made ice cream less of a seasonal treat, grass silage made dairy cows’ diets less variable throughout the year. At first, a seasonal pattern continued to shape the timing of grass fed as silage. Those who ensiled grass began feeding it to cows in the fall, as pastures dried up.76 By the 1960s, farmers began using stored grass year-round, replacing cows’ time on pasture with silage and hay rations in the feedlot. Because the practice was still novel in the late 1950s, farm people worried about how the shift away from pasture grazing might affect cows’ health. But Hoard’s Dairyman reassured readers that cows on the Wisconsin Experiment Station farm that had no access to pasture and were fed stored silage and hay exclusively “produced very well and seemed normal in every respect.”77
Yet such a feeding strategy was abnormal in some respects, both to cows and to those who tended them. Farmers who fed cows grass silage in the feedlot reduced the initiative and control that cows themselves had over what they ate. Cows encountered grasses blended together in the feed trough, including those they did not relish on the pasture. Indeed, one of the benefits often cited by advocates of hay chopping and barnyard feeding was that cows could not be so selective about which grasses they consumed. Like a crafty housewife who mixed vegetables or cheap meats into casseroles for her family, farmers who carefully blended rations cut costs and reduced cows’ ability to choose what they ate.
Feedlot feeding also altered the character of farm work, a fact noted even by N. N. Allen, the agricultural expert who promoted feedlot feeding to skeptical Hoard’s Dairyman readers in 1958. He wrote, “I’ll never forget the mornings as a kid when I went out before sunrise to bring in the cows, barefooted and pants legs rolled up. The eastern sky was a blaze of color. … Birds were serenading from all sides, and I almost hoped the cows would be at the farthest corner of the pasture so it would last longer. What does lot-feeding offer as a substitute for that?”78 Allen offered his comments as an aside, and yet his memories pointed to a fundamental change in the character of dairy work. Making grass feeding more efficient to maximize the number of cows milked per acre left little room for experiencing birds and skies in a pastoral scene.
The pastoral vision for which Allen nostalgically pined was the same idealized landscape that drew many suburbanites to the countryside in the 1950s and 1960s. But what these newcomers rarely understood was that the place they encountered was a place of modernization and mass production, not a landscape locked into a static past. Some of the cows grazing on the pasture, viewed by onlookers as pastoral symbols of a more relaxed life, were experiencing their own daily grind, commuting to the milking parlor and punching the clock twice a day at milking time. The economy of mass production that helped bring about the construction of Levittown simultaneously remade dairy farms and changed the lives of those who inhabited them.79
The built environment of agriculture, then, offers clues to the consequences of a mass consumption economy for nature that are just as revealing as the strip malls and split-level homes on the city’s fringes. To enable their milk to be marketed nationally, farmers rebuilt milking rooms, dairy barns, and feed storage structures. Simultaneously, they remade the working landscape of the farm, removing cows from woodlands and turning land once devoted to cash crops into regularly harvested hay fields. The nature to which city dwellers fled was not just one of timeless respite, but one touched profoundly by technological change and economic modernization.
By the late 1950s, although consumer spending continued to drive the national economy, some Americans began to express misgivings about a public policy premised upon consumerism. Postwar critiques of the affluent society often focused on the excesses of suburban life. Betty Friedan illuminated the quiet desperation of suburban women living unfulfilling lives. Vance Packard’s The Waste Makers took aim at the flamboyant tail fins on cars parked in cul-de-sacs.80 Problems sparked by excess were never those of suburbanites alone. Though often framed as a problem of postwar America’s identity or spirit, the emphasis on abundance had material consequences for the nation’s rural landscape and the bodies of its citizens.
Even staple foods like milk, and the farmers who produced it, were touched by crises of too-much in the 1950s and 1960s. One such problem was a surplus of milk itself. Farmers adopted techniques like artificial insemination and devised new feed rations because they believed that high-producing cows would yield a bigger milk check. But as milk yields increased, dairy products accumulated and prices remained low. In the pages of farm journals, debates raged over whether the milk surplus was the fault of Grade A or Grade B producers and how the surplus might be reduced, but the underlying message was clear: production gains were not translating into greater farm income.81
A second crisis of abundance facing the dairy industry was manure. Before the postwar era, most farmers handled the manure produced on their farms by returning it to cropland, thereby improving soil fertility and crop yields. As chapter 2 details, manure was one of the products of dairy cows most valued by farmers like Elihu Gifford. But as farmers altered their feeding strategies to raise more cows on the same acreage, the volume of manure their herds produced increased. Further, as farmers shifted to feeding cows in the feedlot rather than grazing them on pasture, the manure cows released became concentrated in a small area. In the barnlot, growing piles of animal waste attracted flies and generated odors. Confinement livestock operations commonly produced more nutrients than the surrounding soil could absorb. Crop farmers found chemical fertilizers cheaper and easier to use than animal manure, reducing their desire to use the excess animal waste as fertilizer. Selling dairy manure to crop farmers required multiple steps of processing and transportation, an unattractive prospect for livestock raisers and one that boosted the price of composted manure relative to chemical alternatives.82
As dairy farmers retooled barns and milking parlors, then, they also incorporated new equipment to handle manure. Many replaced the hand labor of shoveling and hauling manure with timed systems that diluted manure with water, channeled it into a storage tank, and then distributed the effluent through irrigation lines on surrounding fields.83 Optimally, the construction of manure lagoons allowed farmers to store manure until it could be spread safely on fields, while shielding neighbors from the stench of animal sewage.84 Newly installed manure systems often presented problems in practice. California farmer Ernest Greenough found that the slop washed out of the barns on his dairy farm was too thick to flow through irrigation pipes and that the watery, foul mixture tended to pool outside corrals during the summer, drawing flies.85
The environmental and public health risks posed by excess manure surpassed the nuisance of flies and foul vapors. In northern dairy districts, soils froze during the winter, preventing manure spread on fields from being absorbed. Manure applied during the winter could thus be carried quickly by heavy spring rains into waterways, increasing streams’ particulate matter and reducing the amount of oxygen available to aquatic plants and animals. Even when farmers applied nitrogen-rich animal waste at the appropriate time of year, spreading it too heavily meant that harvested crops could not take up all the nitrogen it carried. Excess nitrates contaminated groundwater. Already by the late 1960s, studies revealed that 42 percent of rural wells in Missouri contained nitrates at levels above the recommended tolerance for infants. Wells in proximity to livestock feedlots were more likely to carry nitrates.86 Manure lagoons could also pose problems to the farmers who operated them. Farmers who lacked adequate ventilation as they removed decomposed sludge from manure lagoons risked exposure to toxic gases like ammonia, methane, and carbon dioxide.87 Problems with manure remained largely an industry concern through the 1960s. Only as environmental pressures and state regulation intensified in the late 1960s and early 1970s did many farm families utilizing confined feeding develop plans to control waste.88
A more public problem of abundance facing the postwar dairy industry was new research on heart disease. Heart disease became the nation’s leading cause of death in 1921, but as rates of the disease increased between 1940 and 1955, it captured public interest.89 Attention to the disease only increased in September 1955, when President Eisenhower suffered a heart attack.90 Researchers differed on what caused heart disease, some attributing it to rising occupational stresses, others believing that sedentary lifestyles were the largest contributing factor.91 But a third explanation, the one that provoked the most concern for the dairy industry, deemed the American diet, and especially its fat content, responsible for the deadly condition.
In the mid-1950s, Dr. Ancel Keys concluded that the number of calories from dietary fat was the most meaningful factor distinguishing rates of heart disease in the United States from those in other nations.92 The arteries of heart disease sufferers also pointed physicians to the dangers of dietary fats. Oily deposits lined the arteries of heart disease patients, narrowing and occluding them. Since the American diet was rich in cholesterol, and cholesterol was found in the arteries of heart disease sufferers, cholesterol-rich fatty foods seemed suspect. By 1954, specialists attending the World Congress of Cardiology asserted that “high-fat diets, which are characteristic of rich nations, may be the scourge of Western civilization.”93
The health concerns raised by heart disease in the 1950s were very different from those associated with milk in the first half of the twentieth century. For the first time, Americans questioned milk’s healthfulness not because of its propensity to spoil or carry communicable diseases, but because its chemical makeup seemed detrimental to human health. The new focus on milk’s fat content broke radically from the nutritional science promoted in the 1920s and 1930s. During the interwar period, milk’s fat content had been the product’s key benefit, for its butterfat carried vitamin A. By the late 1950s, as scientists linked saturated fats to higher blood cholesterol, milk’s richness seemed a liability.
But the connection between dietary fat and heart disease remained disputed in the 1950s and 1960s, and physicians were reluctant to endorse a low-fat or low-cholesterol diet as a means to halt the threat of heart disease.94 In 1960, when the American Heart Association first issued a statement that reducing the diet’s fat content might help to prevent heart disease by reducing blood cholesterol and controlling weight, it did so cautiously. The association advised a shift from saturated to unsaturated fats and a reduction of fat calories, but included a caveat that “there is as yet no final proof that heart attacks or strokes will be prevented by such measures.”95 In 1961, the American Medical Association’s Council on Foods and Nutrition called dietary recommendations “premature.” In October 1962, an American Medical Association committee went further, declaring that the “anti-fat, anti-cholesterol fad is not just folish [sic] and futile,” but risky.96
Already on the defensive because of plummeting butter sales and rising farm costs, dairy industry officials feared that reports linking saturated fats to heart disease would spell doom for the industry.97 They exaggerated disagreements among health specialists over the relationship between dietary fat and heart disease. American Dairy Association advertisements issued in 1962 called physicians’ recommendations to reduce saturated fats “a highly experimental treatment.”98 In January 1963, the board of directors of the National Dairy Council announced that its members would undertake weight-control diets rich in milk and dairy products; in so doing, the council members believed they were waging “an all-out assault on ‘freak’ and fad reducing diets.”99
As dairy industry officials attempted to discredit the link between heart disease and dietary fats, ice cream makers came to view the weight-conscious sector of the population as a growing market. Analysts urged the industry to “get ice cream on reducing diets,” stressing that because ice cream had fewer calories than many pies and cakes, it could be considered “slenderizing.”100 By 1956, ice cream companies had begun manufacturing lower-fat dietetic desserts, ice milk, and ice creams made with artificial sweeteners.101 Dietetic ice creams never made up a large proportion of the ice cream marketed for the home, but almost all of the soft-serve ice cream sold was ice milk.102
As the dairy industry attempted to shield itself from any connection to the rising prevalence of heart disease, government agencies walked a tricky line. Long promoters of dairy products as healthy foods, the departments of public health and agriculture hesitated to turn their backs on the heralded product. But as more and more studies tied heart disease to dietary fat, that cautious stance became increasingly difficult to uphold. On January 19, 1962, CBS television aired a documentary called The Fat American, which invited leading physicians and psychiatrists to comment on the causes and consequences of excess weight. Secretary of Agriculture Orville Freeman also appeared on the program. Dismissing the studies about cholesterol as “a scare,” Freeman cautioned that a dairy surplus would accumulate if concerns about dietary fats prevailed. His position was roundly criticized by a reviewer in the New York Times, who wrote, “Economic loyalty to the farm bloc may be all very well, but it hardly would seem to take precedence over what is basically a medical matter.”103 A caller who protested to the secretary’s office voiced similar criticism: he “said he was dismayed to hear a Cabinet officer brush aside, seemingly as inconsequential, the very substantial body of medical evidence regarding cholesterol and fatty diets. He had inferred from what you said that you were more concerned about the butter industry than the nation’s health.”104
Two features made Freeman’s position, with its dismissal of the research linking dietary fats to heart disease as “a scare,” increasingly difficult to sustain by the mid-1960s. First, whereas in the 1950s and early 1960s the American Heart Association and the American Medical Association had been at odds over the prudence of recommending dietary changes to reduce the risks of coronary disease, by 1965 both groups endorsed reductions in fat intake for young men vulnerable to heart disease. As new evidence from the Framingham Heart Study revealed clearer links between blood cholesterol levels and heart disease, more physicians became convinced of the merits of dietary changes to reduce cholesterol.105 It thus became harder for the dairy industry to portray physicians’ findings about heart disease as “preliminary” and their dietary recommendations as “quack diets.”
The second feature that made it difficult for Freeman to uphold the importance of dairy consumption was that structural changes in dairying during the 1950s and early 1960s had altered public perceptions of dairy farmers. By the 1950s, the favored place given to dairy products during World War II seemed anachronistic. Some bristled at the idea of granting price supports to farmers who seemed more like successful businessmen than working laborers. As farm people adopted new technologies and modernized their farms, retailers and advertisers of dairy foods, like the Good Humor Man, replaced dairy farm families as the cultural icons associated with dairy’s life-giving properties. And while policymakers in states with many dairy farmers, such as Minnesota, New York, and Wisconsin, were alarmed by the very real economic strains facing dairy farm families, such problems seemed secondary to the Cold War, consumer culture, and civil rights on the national agenda.
To dairy farm families and milk plant managers accustomed to a favored spot in the popular imagination, the shift away from thinking of milk as healthy and natural came as quite a shock. At the 1962 annual meeting of the Consolidated Badger Cooperative, George Rupple, the cooperative’s manager, commented on the dramatic change, saying, “Today we are hearing lots of new words—cholesterol, unsaturated fats, etc. These things are affecting our lives and our business. I never thought ‘nature’s most perfect food’ would be used as the target of health associations, faddists, and profit-minded publicists. Some people have forgotten that milk is nature’s own method of sustaining life.”106
Strangely, though, even as butterfat shifted from one of dairy’s virtues to its biggest vice, ice cream retained much of its appeal. When people sat down to a bowl of ice cream or bought a cone, they did so to escape everyday pressures, whatever dietary transgressions the indulgence entailed. Even the president’s own heart disease specialist, Dr. Paul Dudley White, was known to drink milkshakes for lunch and to eat ice cream.107 Photographers snapped his most famous heart disease patient eating a Good Humor Bar.108 Ice cream’s continued success also stemmed from another feature unique to the food. Unlike milk or butter, ice cream had rarely been eaten solely for its healthful qualities; Americans ate it as a treat and to celebrate social occasions. Because it was seldom conceived of as a health food (despite industry promotions to that effect), ice cream’s reputation was less tarnished by news about heart disease and cholesterol than that of other dairy foods.
The resilience of the ice cream trade in the face of new concerns about heart disease did not mean, however, that the dairy industry would emerge unscathed from postwar criticism. In fact, by the early 1960s, research linking dietary fats to heart disease was but one of the challenges facing the industry. Even more troubling was new evidence that substances created by modern society, especially antibiotics, radioactive fallout, and pesticides, were tainting the milk children drank. Unlike ice cream, fluid milk had long been billed as a healthy food and was considered by most Americans essential to children’s diets. If Americans turned to ice cream for good humor, they sought out milk for its nutritious, life-giving qualities. How would milk drinkers respond to the new discoveries of environmental hazards in milk? And how would dairy farm families protect the ideas of healthfulness associated with their food?