Chapter 2 Clicks and Cookies

The May 17, 1993, issue of Adweek featured an essay urging the advertising industry to colonize the internet. Its author, business writer and researcher Michael Schrage, was sure major advertisers would turn the internet into their next great vehicle. Writing at a time when media buyers saw the internet as both primitive and anticommercial, Schrage adopted a tone that was both admonitory and celebratory. He began by invoking the hot media properties of the day to flag the superiority of the internet: “A uniquely American network that's growing faster than John Malone's TCI ever did; that's even more global than Ted Turner's CNN; and is light years more interactive than Barry Diller's QVC. It's called Internet.” He continued:

Advertisers and agencies take note: It has the potential to become the next great mass/personal medium. No one really “owns” it; no one really “manages” it. Nonetheless, over the past five years the Internet has exploded into a multimedia phenomenon that deserves the serious attention of anyone who wants to understand what the future has to look like.

… Virtually every major university, corporation and government agency in the world is on it. Now that it's being privatized, the Internet is rapidly opening up to the tens of millions of personal computers. Forget Prodigy, CompuServe and America Online. What the world's telecommunications networks are to telephones, Internet is becoming for personal computers.

Pick a media metaphor, any metaphor—direct mail, telemarketing, broadcasting, narrowcasting, interactive multimedia—and the Internet is flexible enough to handle it. Want to put the L. L. Bean catalogue online? And let people place custom orders with their American Express cards? Why not advertising-sponsored electronic mail? Technically, the Internet could do it today. Want to do a direct electronic mailing? Electronic classifieds? Interactive animated brochures? It's already happening.1

Schrage noted that talk radio already existed on the internet. (“The neat aspect is that you can capture only the snippets of the show that you want and file them into your PC for retrieval at your leisure. No plans yet to get Rush Limbaugh.”) He foresaw that video would follow radio and “the Internet may well be a driving force in making tomorrow's PCs even more TV-like.” He argued that in view of the heavy downloading of pornographic pictures, the future was bright: “Where ‘adult’ services begin, mainstream programming is sure to follow.” Schrage ended his essay with a prediction about media buying: “That's why the Internet is like the early days of radio. The technology is there and accessible; more people are logging on; big ideas are just waiting to be born. It's exciting. My bet? By the end of 1996, at least two major advertising agencies will be designing ads and recommending Internet buys for at least a dozen of their Fortune 1000 clients.”2

Nearly two decades later, it's clear that Schrage's predictions about advertisers’ ability to reach people through the internet, and even his forecast that agencies would be recommending internet buys, were remarkably accurate. But what he failed to foresee, or at least to note, in 1993 was that Madison Avenue's movement into this new world would be quite rocky and uncertain. A key reason for this was the struggle to create media-buying criteria to legitimate the new medium. What audience characteristics should advertisers demand that website owners deliver? What kinds of measurements (“metrics”) should advertisers require of websites to prove that they reached the right audiences in a profitable manner? In 1993, systematic measurements of internet audiences simply didn't exist. Media buyers initially had little clue how to evaluate their advertising spending in the new online world.

Not that it was a top-of-mind issue for the major media-buying agencies and their clients. During the 1990s, these clients saw online work as low-budget and experimental, a probe into a future in which the effects of advertising could be far more measurable than with traditional media. As for media buyers, many were in the throes of developing the freestanding planning-and-buying profit centers described in Chapter 1. In so doing, one of their primary goals was to improve the metrics available to them regarding audiences for television, magazines, radio, newspapers, and other traditional media. To these media-buying agencies the internet was a marginal consideration. They competed in that space with small start-up firms that were often devoted to both online creative work and buying.

Nevertheless, when media buyers did place persuasive messages into the digital space, they employed the hardball, quantitative approaches to efficiency and measurement that they were working to establish throughout the media system. From the beginning of their involvement, they saw an audience member's click—a concept Michael Schrage didn't invoke—as the central proof of value. To them the click was the attribute that distinguished the internet. With traditional media, agency practitioners had to use general circulation figures and program ratings to make decisions about where to place their advertisements. They had no way to know, for example, exactly how many people would really see a magazine ad or watch a television commercial. The click was compelling for media buyers because they saw it helping them implement the increasing measurability—read quantification—of advertising that was part of the additional value they claimed when talking to clients about their new stand-alone business models. Further, though more subtle, the click was a tangible audience action that media buyers and advertisers could use as a vehicle to ease their historical anxiety over whether people notice their persuasive messages or, even more, care about them. At the very least, they hoped, it would finally eliminate the legendary client complaint: “I know that half of my advertising works, but I don't know which half.”

Despite the availability of the click, media buyers and their clients oscillated between enthusiasm and pessimism. The story of the Web's first few years is one of ambivalence regarding its role for advertisers. On one side was optimism that an ability to track responses through clicks would make the “on line” space the great new place to reach out to audiences. On the other side was a continual worry that ways to think about clicks, and techniques to measure them, were flawed and raised more questions about accountability than answers.

The concerns led to indecisiveness about measuring audiences on websites and continual changing of the definitions for success. The indecision was reinforced by practices within the advertising industry that separated key media-buying decision makers from those involved with the Web. It was exacerbated by a desire by interactive media buyers and their clients to push down online ad prices rather than give the new medium slack. And it was furthered by continual comparison of the Web to television, to the former's detriment. Major advertisers saw television as the consummate branding medium—a major way to create likable personalities for their products. They suspected that the Web, at the time visible only on a computer with a small screen, would never rise to the status of a serious branding vehicle. Because of the clicks it might be useful for rather basic forms of direct marketing. For a time the nation's largest advertiser, Procter and Gamble, and its internet-buying firm, media.com, advocated this position.

Yet the creators and distributors of content on the internet—they would be called internet “publishers”—believed the rhetoric claiming that the internet would become the new big medium. They doggedly kept trying to find favor in the eyes of leading advertisers. They had little choice. Having rather quickly decided that their audiences wouldn't pay for their online content, they looked to media planners and their clients for long-term survival. The publishers became sellers of a “banner” form of advertising—essentially a rectangular box positioned either vertically or horizontally across the Web page—on which a site visitor could click. Websites used all sorts of click-related metrics to make Web audiences stand out to advertisers and to argue that they delivered valuable returns on investment. They tried new ways of counting clicks. They tried new ad formats and technologies to spike clicks on ads. They tried to find ways to understand why visitors went to their site so that marketers could target among them efficiently. They joined up with firms that charged advertisers for people's clicks on ads around the Web, traced what visitors were doing across many sites, and drew conclusions about them so marketers could select among them efficiently. And when media buyers and major marketers still grumbled that the results were not what they wanted, the Web publishers and their partners in click metrics went back to learn even more detailed information about their audiences.

A major irony of this one-step forward, half-step backward period for Web marketing was that advertisers turned the very phenomenon that was supposed to make the internet their best vehicle—its interactivity—into a liability. Web publishers simply could not persuade buyers that the Web met their new bar for acceptable quantitative measurement. Many buyers scoffed that internet metrics lent less insight into target audiences than did the measurement schemes of traditional media. Some advertising executives, in fact, dismissed Web audiences as unattractive precisely because of their interactive habits. One result of that irony was a determination by Web publishers to show Madison Avenue it was wrong. They and their technology allies placed a key proposition permanently at the heart of the new system: help media buyers identify measurable ways to know, target, and consider the impact of commercial messages on audiences as never before, even in the face of vocal public worries about privacy invasion.

In 1994, the start of the Web's first commercial decade, most companies considered the internet out-of-bounds for advertising campaigns. In those years it was primarily a text-based medium; pictures and sounds could be downloaded and opened only through special software. The internet was therefore not amenable to the large display ads and audiovisual commercials that were used in traditional media to build brand images. More important, to marketers internet users seemed fiercely protective of what an Advertising Age writer in 1993 called a “culture … which is loath to advertising.”3 The “usenet” discussion groups that populated this world had created strong norms against sending obvious and persistent sales messages. People who went against the norms were “flamed”—subjected to barrages of angry replies. Nevertheless, in this environment advertisers still found ways to reach internet users. The key, said those doing it, lay in subtlety. They often used the euphemism information provider and emphasized the helpfulness of the activity.4 But most firms wanting to reach people “on line” tended to do so via Prodigy, a dial-up information and entertainment service founded in 1984 by CBS, computer manufacturer IBM, and retailer Sears, Roebuck and Company. Its revenues came from subscriptions and advertising. In the early 1990s, Toyota, for example, had a Prodigy area where Toyota owners could communicate with one another and the company. Sara Lee Corp.’s L'Eggs hosiery offered catalog shopping on Prodigy. Straightforward ad banners touted companies of various sizes and types; advertising, often with low-resolution graphics, was said to take up as much as one-third of Prodigy's available viewing area.

To people in the advertising industry of the early 1990s, Prodigy represented one facet of what they called “interactive.” Used in this way—as a noun rather than an adjective—interactive referred to a raft of new technologies that had arrived in the 1970s and 1980s and had the ability to go beyond the one-way flow of traditional media to encourage potential customers to learn more about a company. These included CD-ROM, computer-driven store kiosks, interactive television, and online services such as Prodigy and interactive TV versions called videotex. A smattering of major marketers found it appealing to reach out to people they figured were well-heeled, highly educated consumers who adopted these tools. The firms were joined by a number of big ad agencies, often those that counted such marketers as clients and that considered the area small but possibly promising; in any event, both the agencies and the clients could be portrayed as cutting-edge. Ogilvy & Mather's interactive division, Ogilvy Direct, had the deepest roots. The agency started this division in 1981 to create marketing material for Time Inc.’s new teletext enterprise (teletext was a technology for broadcasting printed information, such as news reports and sports scores, to television sets equipped with a special decoder). Although Time discontinued the venture in 1983, Ogilvy Direct survived. Led by Martin Nisenholtz, it developed into an operation with forty employees that in 1994 had ongoing projects on one or another interactive medium with Equitable, Forbes, House of Seagram, Kraft, General Foods, Campbell Soup, Hewlett-Packard, and American Express. Ogilvy's size and breadth were unusual, though. In 1994 Advertising Age rated major ad agencies in terms of their digital expertise and found none that came close to Ogilvy and many that seemed primarily to pay lip service to the area.

Nisenholtz acknowledged the skepticism of even his own account executives toward suggesting to their clients that they spend media-buying money on vehicles they might not understand. “Account-management people only want to work with you if they trust you,” he told a reporter in 1994. “They're not going to open their kimonos and allow you access to their clients unless they know you're going to deliver.”5 Promoters of interactive businesses offered another reason for the agency establishment's skepticism and reluctance to buy space on the new technologies. A DDB Needham account executive put it bluntly in a letter to Adweek that same year: “Agencies have resisted all forms of new media,” wrote Jonathan Anastas, because they feared “erosion of commissions, expensive capital investments during a recession, loss of creative control and fewer awards. Repeatedly, clients have asked for ‘new thinking’ only to get media plans loaded with expensive TV campaigns and four-color spreads.” He added, “Shame on any agency executive, creative or media professional who hasn't explored Internet at least once.”6

Interest in the medium that most in the ad world had considered off-limits had already begun to bubble up, however. Michael Schrage's 1993 Adweek piece was an example; while organized as a set of predictions, the essay was also an exhortation. About a year later an Advertising Age editorial echoed it, arguing that “with anywhere from 10 million to 30 million relatively well-educated, affluent and cutting-edge online users, the Internet would seem to be a heavenly place for advertising.”7 Advertising Age didn't say where it got its numbers; the Forrester Research internet-consulting firm in 1995 estimated that only 250,000 to 500,000 consumers were dialing into the Web from home.8 The inconsistent audience claims underscored the invisibility of people in the Web space. Optimism abounded, though. Forrester predicted the number would reach twenty million by 2000.9 For its part, Advertising Age enjoined marketers to abandon their notion that Web users eschewed commercialism while urging them to retain the assumption that the Web audience was elite: The magazine acknowledged that marketers exposed to tales of flaming by “a cyberspace community peopled by academics and intellectuals” might view the Web as “advertising hell.” But it noted that “the truth is the Internet is already flooded with commercial traffic and transactions. It is becoming more and more a mass medium whose real-world users are accustomed to having their information subsidized by advertisers.” The editorial advised its readers on the importance of respectfully opening “a clear dialogue between the ‘net community and the ad community.”10

The thrust of the editorial was seconded and extended a month later in a widely heralded speech by Procter and Gamble's CEO, Edwin Artzt, to the American Association of Advertising Agencies. P&G was the largest U.S. advertiser and historically a powerful force in the media system; the company had pioneered the soap opera on radio and television, for example. Artzt said he still believed in the importance of broadcast TV to reach huge numbers of people at the same time to sell products such as “four hundred million boxes of Tide.” Yet he felt it important to consider a variety of methods beyond the major TV networks to get the “broad reach” the firm needed. Procter and Gamble had already begun to use customer segmentation and target-marketing techniques. What worried P&G's chairman primarily was not that new technologies would encourage more targeted advertising. Rather, it was the “chilling thought” that emerging technologies were giving people the opportunity to escape from advertising's grasp altogether.11 He noted that, instead, the personal computer could be “a formidable future vehicle for advertising and even programming.” He reminded his audience that the advertising industry had worked in the past to shape the media to its needs. “We may not get another opportunity like this in our lifetime,” he said. “Let's grab all this new technology in our teeth once again and turn it into a bonanza for advertising.”12

Artzt didn't mention the internet specifically; he gave equal weight to CDs, online services, and other devices, some presumably not yet imagined. Rather quickly, though, mainstream marketers and their agencies came to see the Web as the embodiment of where Artzt was telling them to go and take control.13 Executives at P&G, in particular, took his words as a cue to encourage experimentation with the new medium.14 A crucial technical development that helped the process along was the creation of the “browser,” a program that managed the interface between the user and the internet—especially Mosaic in 1993 (a month before Schrage's piece appeared in Adweek) and Netscape Navigator (a more user-friendly browser) in fall 1994, a few months after the Advertising Age editorial and Artzt speech. These programs made it easy to see graphics as well as text; previously people who wanted to download pictures had to combine files in viewers such as LView. The browsers introduced the idea of a “Web site” to the general public, where photos, drawings, and ads could be seen immediately. Clicking on them as well as on highlighted sentences would activate hyperlinks that transported the user to other parts of the site or the Web. Such links represented a new way of accessing knowledge, including information from marketers.

Web publishers themselves were certainly interested in getting marketers’ business, in large part because the overwhelming majority of websites didn't charge visitors to enter. The logic behind that was clear. In the mid- and late-1990s, publishers were in a race to show advertisers who had the most users, and if they wanted that kind of scale they couldn't charge a fee.15 Yet even in 1995, an analyst for Forrester contended that eventually the free-access model of websites would have to give way to fee-based services. “This can't go on indefinitely the way it is,” she said. “You will probably see different models developing, and we think there will certainly be a ‘bundle’ level, where customers pay for a set bundle of content.”16

The bundling didn't happen, but over the next few years several high-profile Web publishers did try to charge access fees using various approaches, and they failed painfully. One tactic was to require payment for what people were used to buying off-line; for example, some daily newspapers took this route, believing the pundits who claimed that news was “the perfect commodity for making money on the Internet.”17 Quickly, though, they concluded that people would not subscribe to online editions. Also attempted was a mixed subscription model—free and “premium.” ESPNNet, the online version of the sports TV network, tried this and found that it couldn't coax many visitors to pay $4.95 per month for the extras. Some publishers adopted yet another strategy: give the product away free until people seem to be hooked on it, and then start charging them. Observers concluded that this didn't work, either. The Wall Street Journal became known as the most successful example of this approach, though industry experts didn't consider it much of a success. The paper allowed free access to the site until the site reached 650,000 registered users in September 1996. It then asked subscribers to pay $29 a year for the online version, while it charged nonsubscribers $49. Online readership immediately plummeted to 30,000 and then rose again to 50,000—still less than 10 percent of the initial number—by the end of the year. It wasn't a great model for publishers, most of which did not possess the Journal’s reputation and role as a professional tool. The poster child for an even more dismal result was Slate, Microsoft's sophisticated Web-only culture magazine. It threw in the subscription towel in February 1997 after switching only a month earlier from free access to a $19.95 annual subscription rate.
The magazine's editor, Michael Kinsley, wrote a letter to his readers titled “Slate Chickens Out.” Kinsley admitted what many people had already predicted: If he started charging anything for Slate, no one would read it.18

This was a dispiriting development for publishers and editors who hoped that the internet would signal a new venue for both subscriptions and advertising. In 1997, a columnist for Canada's Globe and Mail newspaper asked why people wouldn't pay Web publishers. Some analysts, he said, “suggest that once users had become accustomed to free content they became unwilling to give it up.” Other observers, he added, point to demographic factors. “The Web audience—largely white and middle-class—is too demanding and too critical to pay for content,” he wrote, without elaborating on precisely what he meant.19 Both explanations fit with a historical view: Free or virtually free content was a feature of so much North American media throughout the twentieth century. Magazines and newspapers charged readers between 20 percent and 50 percent of the cost of producing each issue, with the rest (including a profit margin) paid by advertisers. And U.S. television and radio stations were free to their listeners; marketers picked up the entire tab. The underlying message to generations of Americans was that content should be presented cheaply or at no charge. So with a new medium in which content was still often rather simply presented and many sites were indeed free, internet users saw no reason to pay anything. Websites, eager for visitors and hoping to sell space to advertisers, capitulated rather quickly. Early on, they did ask people to register to use the sites, requesting information that would entice advertisers. But many sites found that internet users shied away from doing that, and that others lied when registering. Many sites stopped even that imposition on visitors.

Some publishing sites turned to selling things to make money. A few began to sell their promotion of other sites via “hot links”—hypertext connections in their site that sent users to another site.20 For most, though, traditional forms of advertising seemed the logical place to turn. Sometimes an advertiser would pay for a message stating that it was sponsoring a site or a page on a site. Then, on October 26, 1994, the popular technology magazine Wired began to sell pictorial banners in large quantities on its new website, HotWired.21 A 1996 Advertising Age article claimed that this was the first application of banner advertising on the Web. Clicking on the ad would activate a link to the advertiser's own website. HotWired tempted visitors to interact with the ads by asking the question “Have you ever clicked here?” Advertising Age said that the gimmick “was very effective in attracting the user's attention.”22

Despite superficial similarities to ads in other media, the clickable ads represented an unprecedented format. Like print ads, Web commercial messages appeared in various sizes; as opposed to column inches, the dimensions of Web ads were measured in terms of the computer screen's pixels. At the same time, Web ads, like TV ads, were wedded to time; yet while the duration of television commercials was set by the networks and agencies buying them, the practical duration of an internet ad depended on the person staring at the computer. A click on the address box, the back button, or a different ad, and a company's ad might well disappear.

Other Web publishers followed with clickable banners. Generally, they sold the ads by the cost-per-thousand (CPM, or cost per mille) model that was standard for newspapers, magazines, and other traditional media. According to that standard, when a media agency agreed to pay a certain amount to place an ad, the agency evaluated the cost in terms of the price to reach one thousand people. So, for example, if a magazine ad costs $50,000 to reach ten million people, the CPM is $5. Assuming all other conditions are equal, that is a more efficient buy than a magazine ad that costs $50,000 and reaches five million people, because in this case the cost per thousand is $10.
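The CPM comparison comes down to simple division. A minimal Python sketch, using illustrative figures of the kind just described (the function name and numbers are hypothetical, chosen only to show the arithmetic):

```python
def cpm(cost_dollars: float, audience_size: int) -> float:
    """Cost per thousand (CPM): the price paid to reach 1,000 people."""
    return cost_dollars / (audience_size / 1000)

# Two hypothetical $50,000 magazine buys with different reach:
buy_a = cpm(50_000, 10_000_000)  # reaches ten million readers
buy_b = cpm(50_000, 5_000_000)   # reaches five million readers
print(buy_a, buy_b)  # 5.0 10.0 -> the first buy is twice as efficient
```

All else being equal, the lower CPM marks the more efficient purchase, which is how media agencies ranked placements across newspapers, magazines, and, eventually, websites.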

Because it was impossible in the traditional model to determine whether an audience member saw the content or the ad, CPM was based simply on general circulation—the number of copies of newspapers sold, for example, or the Nielsen rating of a television program. A website, by contrast, would charge a price based on the advertising impressions served to an actual person. Websites defined an impression as an advertisement that was sent to an individual who had clicked on a site's page.23 Because at the time it wasn't possible to determine whether clicks were by the same or different individuals, sites judged every click separately. Costs per thousand impressions varied greatly depending on the presumed nature of a site's visitors. Sites that continued to require registration could often charge higher prices than nonregistering sites because they offered greater visibility of the audience. An Advertising Age reporter found that “web CPMs at major sites generally range from $10 to $80.”24 If accurate, the numbers were generally higher than typical broadcast network rates, which at the time hovered between $5 and $15. Of course, websites charging even $70 CPMs would earn far less money than the TV networks because online audiences were so much smaller.

In adopting the cost-per-thousand model, Web publishers recognized the importance of being able to tell potential advertisers as much as possible about site visitors who might click on a banner ad. Since sites accepted that people didn't want to register (or lied when they did), they decided to obtain this information surreptitiously, and a new business came into being. In September 1995, Adweek noted that “a scant year-and-a-half after the World Wide Web was born, companies are scrambling to offer data-crunching systems that can tell Web site proprietors and the advertisers that have begun to enrich them how many people are logging onto their sites, who they are, where they're coming from, what they're doing once they get there, and how long they stick around.”25 In other words, publishers were hiring companies with systems that could analyze the click traffic coming into their websites.

The leader in this area was the Internet Profiles Corporation (I/Pro), which had Nielsen as a partner and counted Hearst, Netscape, and Playboy among its fifty customers in 1995.26 Its I/COUNT service let website proprietors monitor their own sites’ usage, delivering such data as number of visits, pages viewed within each site, and basic data about users’ geographic location and system configuration. Its I/AUDIT service collected these data, analyzed them independently, and then delivered a monthly or quarterly report that guaranteed to advertisers the accuracy of the traffic data.27 To Ariel Poler, I/Pro's founder, it was this process that distinguished the new medium. “There's no question that the Web is the most measurable of all media by far,” he enthused in 1995.28

To substantiate this claim, Rick Boyce, HotWired's vice president for advertising, enthusiastically told a trade-press reporter what he and his advertisers could learn from an I/Pro report about a HotWired page with an AT&T banner across the top. “In a particular week, this page was served 11,955 times,” he noted. “Of the 11,955 views, 576 clicked on that banner and went through. So you start to get at an ad effectiveness-type measure, which is really critical—what we're able to see is ad wearout. It's not at all uncommon to see what I call the ‘click rate’ decline over time.” Boyce was clearly emphasizing the dual value of his site—as a point to display the ad and as a place where the visitor could click on the ad to learn more. The latter action, he suggested, was the responsibility of the advertiser—in this case, AT&T—which needed to bring the attitude of a direct marketer to this new form of display advertising. AT&T needed to realize that as the click rate starts to dip, it should adjust its message in an attempt to reinvigorate the click numbers. “AT&T has been on our site for 40 weeks and has not changed once,” he said disapprovingly. “If they'd gone into this with a mind-set that was in touch with what's happening in the content area, they could really have maximized their investment. Agencies need to begin retraining themselves to a different model. The old model is, the ad's done, let's go on to the next project. In this medium, the ad's never done. It can never be done.”29
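Boyce's “click rate” is likewise a simple ratio of click-throughs to ad views. A short sketch using the figures he reported (the function name is an illustrative invention; the numbers come from his account):

```python
def click_rate(clicks: int, impressions: int) -> float:
    """Click-through rate: clicks as a percentage of times the ad was served."""
    return 100 * clicks / impressions

# Boyce's figures: the page carrying the AT&T banner was served
# 11,955 times in one week, and 576 of those views clicked through.
rate = click_rate(576, 11_955)
print(f"{rate:.1f}%")  # 4.8%
```

Computing this ratio week after week is what allowed Boyce to observe “ad wearout”: the gradual decline in click rate as an unchanged banner grew stale to repeat visitors.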

Boyce's enthusiasm notwithstanding, many advertisers were not convinced that they could learn enough about visitors to publishing websites to justify spending lots of money on display ads. Two problems stood out in particular. One was simply the possibility of fraudulent data. As Adweek noted, “Web-auditing today is roughly equivalent to ABC, Fox and the USA Network each sending their own ratings points to Nielsen directly, and asking the ad industry to trust them to report their audience honestly.” Concern about this difficulty led to discussions of independent entities such as the Audit Bureau of Circulations taking over the audit side of I/Pro's business, but nothing had yet happened. (Eventually the bureau did offer audits.) The second problem was that neither websites nor I/Pro in September 1995 had the ability to track individuals. I/Pro could infer individual “sessions” by their streams of click patterns but couldn't be sure if the clicks represented particular visitors. Moreover, Poler admitted, in the absence of visitors’ registering, “we can estimate how many visitors came to the site, but we can't tell if it's a repeat visitor versus a new visitor. The technology doesn't let you get 100 percent accurate numbers. There's no standard yet for keeping track of a visitor throughout his or her session.”30

These sorts of concerns soured some media planners on the idea that display ads were the way to use the Web. Erica Gruen, the head of strategic media resources at the Saatchi & Saatchi agency, stated flatly in 1995 that on publisher websites “you're going to know a great deal less than you can know about users in other media.” Her preference was to see the Web as totally different from the traditional media space. The goal should not be to advertise on websites with independent content but to create websites for advertisers that would gauge their success not through audits but through the building of relationships with customers and potential customers. She offered the example of a loyalty program instituted by Toyota Interactive on Prodigy. The site offered bulletin boards, chat sessions, and product information to members who accessed the service by typing in their vehicle I.D. number. A year later, she said, the company had tracked sales of five hundred new Toyotas to customers who'd participated in the program. “More and more,” she asserted, “I'm only going to use my Internet presence to strengthen relationships with consumers—and that's not something that's measured in numbers of users, but in terms of quality of measurements.”31

How much emphasis to place on advertisers’ websites versus display ads was a topic of much discussion among media planners and their clients. Their hesitancy about Web advertising was reflected in data the Jupiter Communications internet-consulting firm released in mid-1996. Jupiter found that the Web business had gone from virtually zero in 1994 to $71.6 million in the first half of 1996. It expected Web spending to zoom past $300 million by the end of that year.32 From fall 1994 through July 1996, forty-six of the top one hundred advertisers purchased ad space on the Web. Yet Jupiter found that only eleven of the top media advertisers in traditional media ranked among the top fifty Web spenders during July 1996. Telecommunication and computer companies trying to sell internet-related services or wares showed up on the list, but a lot of the spenders were simply Web companies trying to draw visitors from other websites to their own sites so that they could sell ads at high prices. These Web companies were using money they received from venture capitalists, who hoped the sites could ratchet up sales and then go public, providing them with huge windfalls. Nontech marketers appeared in abundance, but they were spending rather little cash.33 Advertising practitioners realized that with such a profile of corporate patrons, the Web didn't have a chance to get the really big money to compete with television, radio, magazines, and newspapers. “Web publishers know,” an Advertising Age reporter wrote, “that for Web advertising to succeed the number of big marketers participating must increase and the amount spent by Web-centric companies must go down.”34

In late 1995, I/Pro's Ariel Poler thought he had a solution for increasing major marketers’ interest in display advertising: a universal registration system that would track a person across all websites he or she visited and report it to marketers who paid I/Pro. Called I/CODE, it would, according to Poler, be a “name-tag system.” As he described his plan to Adweek, every user logging on to an I/CODE-compatible website would have to enter a user name and password. I/Pro software would then be able to track every page he or she logged on to. And, he suggested, if the user could be convinced, just once, to fill out a questionnaire providing basic personal data—i.e., sex, age, income level—then I/Pro would be able to provide advertisers with detailed demographic information to accompany their Web buys. “I sure hope that the standard will be something like I/CODE,” he told Adweek. “But if I/CODE doesn't fly, we'll have to find other mechanisms that do.”35

As it turned out, I/CODE went nowhere. Critics doubted that people would sign up, but Poler later recalled that tests with Playboy magazine's website attracted a million registrants. He said that the sites were reluctant to sign up with I/CODE because they feared that Poler rather than they would own the data.36 As he suggested, though, the marketing logic of the day was demanding this kind of information. The coming years would move in I/CODE's direction incrementally and without going to the trouble of getting visitors’ permission.

The “cookie” marked the beginning of this shift. Ultimately it would do more to shape advertising—and social attention—on the Web than any other invention apart from the browser itself. Its profound significance seems not to have been apparent at the time to Lou Montulli, who created the cookie while working for Netscape Communications in 1994 to solve a marketing problem. Montulli was charged with devising a better “shopping cart,” which enables a website to keep track of the different items that a customer sets aside for purchase. Without a way to identify the customer, every click to put an item in the cart would appear to the online store as if it were originating from a different individual. Consequently, a person would not be able to buy more than one item at a time. Previous attempts to solve this problem involved storing information about the ongoing transaction in the Web address, or uniform resource locator (URL), of an individual's browser, which could be read by the store to differentiate between what each individual customer was buying. These methods didn't work very well, so technicians in the server division asked Montulli to come up with something better.37 His concept, which he refined with the help of John Giannandrea, another Netscape employee, was a small text file—a “cookie”—that a website could place on a visitor's computer. The file would have an identification code for the visitor and other codes detailing the person's clicks during the visit. The next time the person used the same computer to access the website, the browser would send the cookie back, allowing the site to recognize the returning machine. By decoding the information the site would learn where the user of that computer had clicked previously, what had been purchased, and even what had been placed in the shopping cart if the shopper had decided not to click through to give payment information and complete the purchase.
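The shopping-cart logic behind Montulli's design can be seen in miniature. The sketch below is purely illustrative, not Netscape's code: the cookie name `visitor_id`, the item names, and the dictionary standing in for the store's server-side records are all invented for the example. It shows the essential exchange the text describes: the site issues an identifying cookie on a browser's first request, the browser returns it on later requests, and the store keys the cart to that identifier.

```python
import uuid

carts = {}  # server-side state: cart contents keyed by the cookie's visitor ID


def handle_request(cookie_header, add_item=None):
    """Simulate one browser-to-store exchange; returns (cookie_header, cart)."""
    if cookie_header and cookie_header.startswith("visitor_id="):
        # Returning browser: the cookie identifies the shopper.
        visitor_id = cookie_header.split("=", 1)[1]
    else:
        # First visit: mint an ID and (in real HTTP) send it via Set-Cookie.
        visitor_id = uuid.uuid4().hex
    cart = carts.setdefault(visitor_id, [])
    if add_item:
        cart.append(add_item)
    return f"visitor_id={visitor_id}", list(cart)


# Without the returned cookie, the second click would look like a brand-new
# shopper; with it, both items land in the same cart.
cookie, cart = handle_request(None, add_item="sweater")
cookie, cart = handle_request(cookie, add_item="scarf")
```

In the real protocol the identifier traveled in `Set-Cookie` and `Cookie` HTTP headers rather than function arguments, but the bookkeeping is the same.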

Montulli named his invention a cookie because the concept seemed similar to what computer programmers called “magic cookies”: packets of data a program receives and sends out again unchanged. By itself the cookie could not distinguish between different people using the same computer; Montulli and Giannandrea made a conscious decision to have the cookie work without asking the computer user to accept or contribute information to it. Consequently, this seamless approach had an ominous downside: by not requiring the computer user's permission to accept the cookie, the two programmers were legitimating the trend toward lack of openness and inserting it into the center of the consumer's digital transactions with marketers.

Netscape installed cookie-placement capability into its Navigator browser in late 1994. Microsoft incorporated it into its first Internet Explorer browser, released in 1995. The head of Microsoft's browser efforts, Michael Wallent, suggested that to compete with Netscape Navigator his company needed Web publishers and advertisers to conform to Internet Explorer's specifications; otherwise large numbers of internet users would not adopt this browser. If online companies were to support Explorer, he recalled in 2001, Explorer needed to support cookies: “I don't think anyone ever thought that cookies were anything that could be excluded in the browser and have that browser become a success in the marketplace.”38

Most directly, the cookie allowed websites to quietly determine the number of separate individuals entering various parts of their domains and clicking on their ads. Yet the major advertisers that were experimenting with the internet in 1996 made it clear that knowing this was not enough. “We're not buying banners right now because there's no adequate measurement factor out there,” said Mike Perugi, communications specialist for Dodge Division in October 1996.39 Underscore the word adequate; entrepreneurs certainly kept trying to find measurements that they hoped would satisfy traditional marketers. Server-side measurement firms such as I/Pro could make visible certain types of information about a visit, but not much about the specific characteristics of the audience. To fill that void, audience-side companies emerged to bid for media buyers’ research money. The audience-side group aimed to create a ratings system that was similar to Nielsen's television and Arbitron's radio ratings. So, for example, PC Meter (later called Media Metrix), a subsidiary of the research firm NPD, systematically audited the computers of a panel of ten thousand U.S. households to determine what sites they visited. It charged ad agencies and websites an annual subscription to its findings starting at $50,000. Another such company, @plan, polled forty thousand Web users with partner Gallup Organization and charged $65,000 annually for access to its data. For the more frugal buyers, advertisers, and Web publishers, the research company Relevant Knowledge said it could do the job for $10,000 annually with a five-thousand-person panel. As had become typical with the arrival of new Web metrics, those involved in Web ratings depicted the activity as a major advance that would finally bring major marketers to treat the Web as an established rather than a still-experimental advertising vehicle.
“There is an absolute crying need for third-party neutral information that will help advance this new medium,” said Matt Wright, CEO of @plan, when describing his new firm's anticipated contributions toward that end. Jeff Levy, CEO of Relevant Knowledge, was even more direct. “The large package-goods companies are not going on the Web right now because there is no reliable standard [for measuring Web usage]. If we can give those people the tools to understand the Web, they will spend a lot more than $300 million or even $5 billion to chase 44.7 million individuals,” he said, referring to Relevant Knowledge's projected number of individual Web users online in the United States in 1997.40

And yet big marketers still weren't overly enthused. 3M Co.’s Buf Puf skin-care product brand, which launched a website for teens in April 1996, received that sort of audience quantification through its ad agency, Martin/Williams Advertising of Minneapolis. “We've been able to track not only the ‘hits’ but what the audience is seeking beyond the home page,” said Buf Puf's brand supervisor. “We're interested in the overall number of user sessions, which we calculate has been in the range of 1,000 daily.” Buf Puf, like many website owners, invited visitors to supply information about themselves, including e-mail addresses, and their reaction to the material posted on the website. The brand manager said that this information was more valuable to Buf Puf than was the actual number of visitors. “Having numbers is great, but it's not telling us much,” she told Advertising Age later that year.41

Part of the problem was that media buyers and their clients didn't know what to make of the value of clicks, or even user sessions (clicks that cookies showed as being from one individual's visit). “What does it mean when 1,000 people visit your site?” asked Farris Khan, interactive consumer marketing coordinator at General Motors Corp.’s Saturn division. “Is it better or worse than 100,000 people seeing your TV commercial?”42 It was not an idle philosophical issue. Apart from comparisons to traditional media, it led to questions about whom to target with websites (for example, should the target be potential customers or brand loyalists), the relative value of a website with many pages versus a website that allowed visitors to obtain the desired information quickly and leave, and the utility of a website versus a banner ad. Those ads, too, were raising major questions. Media buyers and chagrined website publishers were finding that after an initial flurry of high activity the number of clicks on banner ads dropped dramatically. In 1996 I/Pro reported only a 2.1 percent average click-through rate. Moreover, a five-hundred-person survey by Market Facts for Advertising Age found that 42 percent of online users said they never look at ad banners.43

It was, in fact, the low percentage of clicks along with the dearth of information about people who saw Web ads without clicking that led Procter & Gamble to a decision in April 1996 that shook Web publishers. Together with its interactive agency media.com (a subsidiary of Grey Advertising's MediaCom), P&G used audience-side ratings to solicit proposals from major websites to place banner ads for sites built around its products.44 The catch was that P&G would accept only those proposals that jettisoned the standard cost-per-thousand model and agreed to base advertising fees on the number of times an ad was clicked and the visitor sent on to a P&G site. Media.com executives had urged their client to take this conservative buying route; they figured they could, given P&G's clout.45 They knew that lots of people would see the ads and not click, so they would get a residual display audience without cost. But in justifying the move the buyers complained that site costs per thousand were outrageous in view of the lack of sufficiently detailed data about visitors and the lack of interactivity with them that was supposed to be the attraction to marketers of the online environment. “If you can't articulate a message with sufficient depth and create awareness, we will maintain that it's not what the medium's about,” said Alec Gerster, the head of media buying at Grey. “Then I shouldn't be spending money there, I can go and buy outdoor.” He added: “You know if someone's entering the [sponsor's] site, you're probably getting pretty good value. It's someone who's interested, who wants to find out more.”46

P&G announced it had made deals with Yahoo! and a few other big Web names. Yet executives representing many site publishers initially registered shock and put up lots of resistance. AOL refused the deal. Because it was the largest internet service provider, its leaders had the advantage of believing (correctly) that its ability to supply P&G with millions of customers gave it clout. As an internet service provider (ISP) with “walled gardens”—websites exclusively for its users—AOL also knew more about its audience than other sites knew about theirs. Still, one of its executives repudiated the P&G approach, which other marketers tried to copy, on principle. “What we are telling advertisers is that with a click-through standard, if TV advertising had developed on an inquiry-only basis, [the medium] would have elevated into nothing but infomercials,” and in the process would have alienated much of the audience.47 An Advertising Age editorial agreed, noting “the dirty secret of Web publishing: Banners just don't get clicked. On many Web pages, they're mere billboards. And if they're not clicked, they're not interactive.” P&G, the editorial said, was right to see the internet as “all about interactivity. Viewing it as a place to put a pretty picture is the wrong attitude entirely.” But the editorial suggested that P&G and other marketers find new ways to succeed rather than make demands that would ensure their failure. Calling the internet “the biggest and brightest ad medium to come along in years,” the editorial reminded its readers that “this is a fledgling economy… . Birds with clipped wings will never really be able to fly.”48 It was a wise exhortation that marketers and their agencies never quite followed.

P&G backed off from its click-through demand. Instead, in a quiet compromise implemented by other firms as well, the packaged-goods giant sometimes pursued deals that combined clicks with CPMs. One version, called a hybrid purchase, paid mostly on a CPM basis but added certain click-through expectations to the deal. Another, dubbed performance CPMs, involved negotiating a lower CPM price sweetened by revenues based on actions (sales, website visits) generated through clicks. P&G and other major marketers also began to emphasize new ways to encourage consumers to click on the ads. They were prodded by publishers, who claimed that they could not be blamed if Web users weren't clicking on what most observers agreed were uncreative banners.
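The two pricing compromises reduce to simple arithmetic. The sketch below uses hypothetical figures chosen only to show the shape of each deal; none of the dollar amounts or counts come from the text.

```python
def hybrid_purchase(impressions, cpm, clicks, per_click_fee):
    """Hybrid purchase: paid mostly on a CPM basis, plus a payment
    tied to the clicks the placement actually delivers."""
    return impressions / 1000 * cpm + clicks * per_click_fee


def performance_cpm(impressions, discounted_cpm, actions, per_action_fee):
    """Performance CPM: a lower negotiated CPM, sweetened by revenue
    from actions (sales, site visits) generated through clicks."""
    return impressions / 1000 * discounted_cpm + actions * per_action_fee


# Hypothetical deal: 1,000,000 impressions. A straight $30 CPM would cost
# $30,000; each variant shifts part of the publisher's revenue onto
# clicks or click-driven actions instead.
hybrid = hybrid_purchase(1_000_000, 25.00, 20_000, 0.25)       # 25,000 + 5,000
performance = performance_cpm(1_000_000, 15.00, 10_000, 1.00)  # 15,000 + 10,000
```

The publisher bears more risk as the guaranteed CPM portion shrinks, which is why sites resisted P&G's original clicks-only demand.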

Advertising agencies that focused on creative campaigns for the Web reveled in the term “beyond the banner” as an indicator of the new focus. It referred to new forms of ad units that transcended the static company logo and message in a banner ad. Some were interstitials, ads that covered the entire screen after a click while a new Web page was loading. A short-lived company called PointCast began to push news, information, and advertising through a downloadable screen saver. Its ads were animated and invited users to click to the sponsor's website. A reporter commented that “even if they don't jump to the marketer site, they still have an interactive experience with the brand.”49 A more long-lasting tactic was to create intermediate websites so that a person who clicked would not be afraid that this act would spirit him or her from the site on which the ad sat. Those banner ads carried Flash animations or Java applets—small interactive modules—that users could play without leaving the original website. AT&T's ad agency bought spots for these sorts of ads across the Web in order to invite users to its 1996 Olympics site. Similarly, in ads for GM's Oldsmobile division on Packet.com, when a user passed the cursor over the site's navigation bar, applets popped up with information about the car.50 Sun Microsystems tried to be even more alluring with its banner. It featured both a marketing message and a picture of the director of the company's science office. If you clicked on the marketing message, you would go to Sun's site; if you clicked on the picture you would hear an interview between the science-office director and the popular (“web-hot”) writer Howard Rheingold.51

Pizza Hut's 1996 Web promotion, created by BBDO and overseen by its director of media services, stood out in the category of using an ad to collect e-mail addresses and begin the kind of relationship with customers that Erica Gruen had described as the real utility of the Web. Targeting college-age men, the company eschewed driving them to its own website in favor of a deal with ESPN SportsZone's online home. Along with regular banner advertising on the site, Pizza Hut created three games—Baseball Challenge, Pigskin Pick'em, and NCAA Tournament Challenge—each requiring participants to register, which included providing an e-mail address. A couple of months after launching the games, Pizza Hut sent e-mails to 327,000 players, thanking them for participating in the games and offering them online discount coupons for pizza. Pizza Hut's agency measured the results in clicks but made clear that they were far more efficient and meaningful than standard banner click-throughs. Said BBDO vice president Frances Laufer:

[W]ithin three days after the e-mails were sent out, 14.4 percent of the recipients, or 47,115 users, had visited the coupon page. Of those, nearly 40 percent completed a form stating that they wanted more information or offers from Pizza Hut. It's too early to tell how many actually tried the “totally new pizza.”

Within 10 days of the blanket e-mail, 79,000 SportsZone regulars, or 24 percent of the original group, had visited the coupon page. This can be compared to the pull from 1.5 million guaranteed impressions the Pizza Hut banners receive per month, which have about a 2 percent click-through rate. In effect, the coupon initiative drew nearly three times the responses to a banner ad. The cost to achieve this? Roughly $500,000 a year for the entire package, which includes development fees for the games, banners and maintenance.52

“We went after the user by giving them an experience that was relevant to them,” explained Laufer. “There's a game they can play every week. That was the strategy—to brand ourself [sic] and leverage the Internet. It's not just a Pizza Hut ad there. Otherwise we're not making use of the medium for what it could be.”53
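The arithmetic behind Laufer's comparison can be reproduced from the figures she cites; the short calculation below uses only the numbers quoted above.

```python
emails_sent = 327_000            # players e-mailed after the games launched
visitors_3_days = 47_115         # coupon-page visitors within three days
visitors_10_days = 79_000        # coupon-page visitors within ten days
banner_impressions = 1_500_000   # guaranteed banner impressions per month
banner_ctr = 0.02                # roughly 2 percent click-through rate

# "14.4 percent of the recipients" and "24 percent of the original group":
pct_3_days = visitors_3_days / emails_sent * 100    # about 14.4
pct_10_days = visitors_10_days / emails_sent * 100  # about 24.2

# The banners yield about 30,000 clicks a month, so the coupon initiative
# drew roughly 2.6 times as many responses, Laufer's "nearly three times."
banner_clicks = banner_impressions * banner_ctr
ratio = visitors_10_days / banner_clicks
```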

Results such as Pizza Hut's helped focus attention on new ways to think of relevant interactive measures to get visitors to click on ads and identify themselves. It didn't solve the problem accountability-oriented media buyers had with even websites such as ESPN SportsZone. The Web was supposed to be “the most measurable of media,” and yet most websites knew little about who was coming to them, while at the same time many insisted on CPM charges that were at best on par with traditional media and at worst substantially higher. Those prices were floated even though the sites knew almost nothing about whether people were attending to the ads unless they clicked on them. Ironically, despite the hoopla about the new medium, marketers believed that they obtained more information about audiences from traditional media than from internet sites. “What advertisers ultimately seek,” wrote a reporter in 1996, “are the kinds of in-depth demographic and psychographic information about Web page users that television, magazines, radio and newspapers provide about their viewers and subscribers.”54

The pressure to present data to help advertisers account for costs increased as major advertising agencies became involved. In 1996 advertisers spent $300 million to advertise online, according to Jupiter Communications.55 The amount was apparently enough, and the direction sufficiently upward, that in 1997 a number of major advertising companies launched digital buying units after years of hesitation. Previously, the buying department that handled traditional media had also handled Web work. Web enthusiasts saw the creation of separate departments as real progress toward management's understanding that the new world needed special handling. “As clients require more capabilities, we're listening to what they want,” said Rishad Tobaccowala, president of Giant Step, advertising agency Leo Burnett's new interactive subsidiary. “It makes a lot of sense for the interactive media experts to be here rather than at Burnett.”56 Similarly, Grey Interactive spun off its media.com division as a separate entity available to solicit clients for strategy, planning, and buying in the interactive space. David Dowling, director of the new unit, said that clients were demanding more online media expertise. “Media continues to increase in importance in (the online) area,” he said. Marketers that built websites were looking for creative ways to promote them, he said, adding, “We can benefit from being a separate entity. There's a lot of opportunity in this area.”57

One practical need buyers and publishers noted was standardized ad sizes and shapes so that agencies would not have to spend substantial time sizing ads repeatedly to fit sites’ varied specifications. Web publishers worried that the difficulty of doing this may have deterred media buyers from certain sites. Two new industry groups that represented publisher interests—the Coalition for Advertising Supported Information & Entertainment (CASIE) and the Internet Advertising Bureau (IAB)—were working separately on guidelines; eventually they would co-release a set of eight standardized banners. But the real pressure on the Web-specific media buyers was to show their clients—and presumably traditional media buyers in their agencies—that they were pushing for the data they needed to serve their clients. Joe Philport, an executive at Competitive Media Reporting, which assessed national advertisers’ media buys, suggested that the Web world was trying his clients’ patience. “We know that advertisers aren't going to wait forever … to know how their Web site ranks [with audiences] compared to the competition.”58

Entrepreneurs looked to the cookie as the natural vehicle to learn more and more about what people were doing without having them knowingly raise their hands. Cookie inventor Montulli confessed to mixed emotions about this development. He and co-inventor Giannandrea originally had taken steps to limit the information sent back to the website. Montulli recalled that he considered and rejected an idea for creating a single identification number that a person's browser would use in all Web explorations. That would have made the cookie a universal tracking mechanism, a phenomenon he wanted to avoid.59 To make sure companies that had not inserted the cookie in the browser could not update or alter it, Netscape Navigator 2.0 also baked a “same origin policy” into the cookie structure. This meant that while any party could note the existence of a cookie in a visitor's browser, only the site that created the cookie could read or change it.60
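The same-origin rule amounts to a cookie jar that files each cookie under the domain that set it. The class and names below are invented for illustration; the point is only the access rule the text describes, in which a cookie's creator can read it back and every other site gets nothing.

```python
class CookieJar:
    """Toy model of the same-origin rule Navigator 2.0 baked into cookies."""

    def __init__(self):
        self._by_domain = {}  # setting domain -> {cookie name: value}

    def set_cookie(self, domain, name, value):
        self._by_domain.setdefault(domain, {})[name] = value

    def cookies_for(self, domain):
        # Only the domain that created a cookie can read or change it;
        # requests to any other domain come back empty.
        return dict(self._by_domain.get(domain, {}))


jar = CookieJar()
jar.set_cookie("store.example.com", "visitor_id", "abc123")
mine = jar.cookies_for("store.example.com")    # the setting site reads it back
theirs = jar.cookies_for("rival.example.net")  # a different origin sees nothing
```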

Montulli and Giannandrea did decide to design the cookie so its creator could detect it across all the sites the creator controlled. Savvy marketing entrepreneurs quickly realized that if they received permission to place cookies across sites, they could note what individuals did after they went to one site. If a cookie were detected at one of the related sites, the marketers could serve an ad to that individual's screen in sync not only with the topic of the current website but with those visited previously. Data about what the cookie owner learned about the individual could be added to the cookie (or stored on a server and linked to the cookie), and revenues could be shared with the participating sites. The challenge was to incorporate many sites to create an ad network. Technologists called cookies in these networks third-party cookies because they could be controlled by an entity separate from the website on which they appeared or from the advertiser. The advantage of a wide-ranging network to an advertiser was the high likelihood of noticing a computer with a related cookie.61
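How a single third-party cookie lets a network recognize one visitor across many member sites can be sketched as follows. All names here are hypothetical; the mechanism is that the ad itself is fetched from the network's own domain, so the network's cookie rides along with every ad request no matter which publisher's page the visitor is on.

```python
class AdNetwork:
    """Toy third-party tracker: every ad request carries the network's own
    cookie, so one browsing profile accumulates across all member sites."""

    def __init__(self):
        self.profiles = {}  # network cookie value -> member sites visited
        self._counter = 0

    def serve_ad(self, network_cookie, member_site):
        if network_cookie is None:
            # First sighting of this browser: mint a network-wide ID.
            self._counter += 1
            network_cookie = f"net-{self._counter}"
        self.profiles.setdefault(network_cookie, []).append(member_site)
        return network_cookie  # browser stores it under the network's domain


network = AdNetwork()
cookie = network.serve_ad(None, "golf-weekly.example")
cookie = network.serve_ad(cookie, "travel-deals.example")
# The network now knows that one visitor browsed both golf and travel
# sites, something neither publisher could learn on its own.
```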

By the fall of 1996 and early 1997 advertising networks had emerged “in a big way,” according to a trade writer of the time. Part of the reason was that the Forrester Research consulting firm endorsed such networks as an easy and efficient way for Web publishers to sell advertising.62 Networks would share their revenues with the sites on which they served the ads. In fact, the idea of aggregating sites to sell ads was so popular that even content providers such as Starwave, CNET, and Yahoo! gave advertisers the ability to buy ad space across their domains and, through cookies, infer whether the ads were going to new or repeat visitors. None of these accounts mentioned whether the people followed would want that to happen. Some observers saw the practice of linking sites to sell commercial messages as merely an extension of ad representation in traditional media. Adweek called it one of the early signs of new media “convergence,” noting that the television advertising representation firms Petry and Katz were involved in packaging groups of sites for clients just as they packaged stations for television commercials. Petry noted that it could serve ads to a bit more than sixty sites in May 1997; Katz had fourteen affiliates, including Netfind, which was America Online's search service, and AOL.com itself.63

It quickly became apparent that a major problem for these aggregators was scale. In traditional media, buyers for major advertisers were accustomed to buying huge numbers of people by contacting just a few media firms. With the internet, audiences were scattered throughout the Web so that a few websites delivered a relatively small number of people. Media buyers, though, wanted both the simplicity of reaching huge numbers with a few buys as well as the ability to take advantage of the targeting of interests that the Web presumably allowed. As a result, while Petry, Katz, and other networks concentrated on small numbers of sites that major advertisers knew and found credible, a growing number of other networks prided themselves on delivering to advertising agencies and their clients huge numbers of people across thousands of sites. Executives at internet groups in major agencies as well as in fledgling internet advertising agencies such as Modem Media acknowledged that the chief benefit of such operations was saving the time of staff members who would otherwise have to contact many websites individually. But Adweek suggested in May 1997 that issues of measurement and credibility plagued them, and that buyers who used these operations were fooling themselves. “The ad networks are like young adolescents,” the article stated. “They have grown to considerable size physically, representing dozens or even thousands of sites, but they haven't evolved intellectually to the same extent. They have yet to assure buyers that their targeting works, for one thing, and no model has presented itself as the ultimate advertising option—one with extensive reach and desirable demographics and quality sites.”64

The article presented the Commonwealth Network, Cliqnow, and DoubleClick as examples. Cliqnow aggregated its affiliate websites into five “networks”—on travel, golf, college, financial, and children's topics—and sold ads to advertisers based on the affinity of the topics to the audience they were pursuing. Media buyers, though, said that the differentiation among categories wasn't granular enough for their needs. Commonwealth touted the information it collected through registration of users of its 3,400 small-to-medium website affiliates. Media buyers worried that they didn't know exactly which sites were in the network and that sites they never heard of could post content that wouldn't go well with their clients’ ads. DoubleClick was the most elite of the three, a network with popular sites like the AltaVista search engine and the travel booking site Travelocity; it reported more than five hundred million deliverable impressions per month and claimed high-end internet demographics judging by the sites’ registration data. “But DoubleClick has its own hurdles,” noted Adweek. “Its boast of whiz-bang technology that zips advertisers’ messages across the screens of their most desirable eyeballs isn't a claim every buyer believes.” Buyers were concerned that DoubleClick still could not guarantee specific types of users and was making assertions about users that couldn't be verified. For its part, DoubleClick insisted it was promising only that ads would be technically well served to a great group of sites. Kevin O'Connor, the firm's CEO, saw the answer to accountability in the impending sophisticated use of cookies. He said that based on DoubleClick's so-far limited use of cookies, it could follow individuals’ sequence of clicks (their “clickstreams”) and the number of pages opened, and through that infer a few categories of users that advertisers might want to consider.
For example, “Someone who signs on to their ISP at 10 am on Fridays is likely to be a home-office type.” O'Connor said that more solid attributions about users would await full deployment of its cookie technology.65

Many internet advertising network and buying executives shared this expectation that cookies would soon solve the problem of audience identification and verification. Consequently, they were aghast when a working group from the Internet Society's Internet Engineering Task Force identified third-party cookies as a considerable privacy threat. Founded in 1992 by internet pioneers Vint Cerf, Bob Kahn, and Lyman Chapin, the nonprofit organization aimed to provide direction in internet-related standards, education, and policy. The Task Force advised that cookies “should be shut off unless someone decides they're willing to accept them.”66 Opponents of this opt-in proposal were quite happy with the approach of the then-current Netscape browser (with a dominant 70 percent of the market) and Microsoft's Internet Explorer (with most of the rest). Both allowed users to change their “cookie preferences” manually to show an alert when a site was trying to deliver a cookie, but they could not stop cookies. The online marketing executives noted that building the engineers’ restrictions into the browser would affect not only ad networks, which dealt with thousands of sites. It would also prevent Web publishers with multiple sites, such as CNET with its News.com and Download.com, from using cookies across their firm's own domains to follow and serve ads to visitors without their permission. To the cookie supporters that restriction seemed so obviously wrong as to invalidate the whole opt-in proposal.

The fight over third-party cookies was the first time that accountability to marketers on the Web publicly came into direct conflict with the Web's users. Jonathan Rosenberg, CNET's executive vice president of technology, argued it was actually a false choice. He contended that the Internet Society's engineers were conceiving trouble where none existed. “It seems to me it's an extreme reaction from a bunch of people who are saying … ‘We're going to convince you there is a privacy problem on the Web,’” he told Advertising Age in 1997.67 Sue Doyle, director of marketing at AdSmart, an ad network, went right to the heart of the matter: the future of Web advertising. “What concerns us is the tone of the proposal, which is that advertising is not good for us, so we want to avoid it,” she exclaimed. “That begs the question, how is the Web going to be funded?”68

When she said this in March 1997, internet ad network executives were already drafting counterproposals and lobbying Netscape and Microsoft directly against accepting the engineers’ cookie proposal. By early May, Netscape decided to bow to its commercial constituencies while nodding to the engineers’ concerns. The company announced that the next version of the Netscape Navigator browser would still accept all types of cookies. That would enable ad networks and publishers to continue using them to deliver targeted ads and content to internet users. “We are not planning on making any changes at all to the basic function of Navigator with regard to cookies,” Lou Montulli told Advertising Age.69 Montulli, who had been a member of the IETF working group and who was feeling strong pressure from its members, did signal a compromise: the next version of Navigator, 4.0, would offer the ability to reject all cookies outright as well as to reject only certain types of cookies.70 “We simply will be adding the ability for the users to make changes to cookie acceptance policies, if they wish,” he noted.71 Advertising Age’s reporter suggested that members of the online ad industry, who he said had been in a “panic” about the proposed changes, should feel relieved. He played down the coming ability to stop cookies, noting that “because the vast majority of Web users never bother to change their cookie preferences, the effect on companies that use cookies as targeting tools will be minimal.” The magazine's headline was “Advertisers Win One in Debate over ‘Cookies.’”72

As it turned out, the intra-industry squabble marked only the beginning of a public debate over the right of site visitors to know what companies were learning about them. Publisher, ad network, and advertising executives stressed that cookies “aren't able to grab an email address” or to probe an individual's computer. Concern escalated nonetheless, fueled by a 1996 article in MacWeek about the ways cookies, in combination with JavaScript, a scripting language for Web browsers, could be used on the Netscape browser for the Apple Macintosh computer to “retrieve a user's email address, real name and activity from the Netscape cache file, which documents a user's movement on the Web.”73 Netscape acknowledged the problem and said it was taking steps to remedy it and make the cookie more secure. Web publishers and marketers alike kept repeating that cookies were anonymous, so personal privacy was not at issue. But such incidents and the mere presence of cookies led privacy advocates to worry that the new medium might threaten its users with the theft of personal information. That same year saw a report from the advocacy group Center for Media Education titled “Web of Deceit,” about how marketers were using their websites to pull personal information from youngsters about themselves and family members.74 The storm it created among legislators led in 1998 to the Children's Online Privacy Protection Act (COPPA), which prohibited websites from collecting personally identifiable information from children younger than thirteen years of age without their parents’ permission. Industry assurances of self-regulation, however, halted government attempts to require disclosure of the marketing activities that were going on behind the screen.

A lot was certainly going on. The arguments in Washington didn't stop a great rush by Web publishers and third-party ad networks in the late 1990s to profile individual Web users in order to attract money to reach them. Whereas in the past ad buyers had relied on the subject matter of a site to infer user interests—much as they did with traditional television programs or magazines—now they increasingly had the opportunity to get more detailed information by analyzing users' clicking habits across different sites, or on different pages of the same sites. Cross-site clicking was typically the province of the ad networks, and competition spurred them toward creative attempts to describe people in ways advertisers would like. DoubleClick, for example, came up with a way to use cookies to observe a visitor's behavior, serve an ad based on that pattern, and then retarget the same person with an ad for the same product at another venue in its 3,800-site network. Another approach that got press attention was the AdSmart network's collaboration with Engage Technologies. AdSmart classified every page of the more than ninety sites in its network into about 450 content categories. Tracking more than ten million visitors to the AdSmart network via cross-site cookie files, Engage inferred their demographic and personality characteristics and stored those data, linked to the cookies, on its computers. Moreover, using AdSmart's page categorization, Engage could compile detailed profiles of the visitors’ interests based on the pages they viewed, which could also be stored in the database. The collected data could then be used to serve relevant ads to individuals when (and if) the cookie in their computer indicated that they were visiting an AdSmart-network site. “We can target users on what we call ‘first-time relevance,’” said Paul Schaut, CEO of Engage.
“With our system, from the first moment a visitor shows up at a sports site, we already know he's a baseball fan and can start serving relevant ads immediately.”75
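The “first-time relevance” scheme the text describes can be sketched in a few lines of code: a network keys inferred interest profiles to a cookie ID, so any member site can serve a relevant ad the moment a known cookie reappears. This is a minimal illustration, not Engage's actual system; all names, categories, and data here are invented.

```python
# A minimal sketch of cookie-keyed interest profiling. Hypothetical names
# and categories throughout; Engage's real system is not public.

PROFILE_DB = {}  # cookie_id -> set of inferred interest categories

def record_page_view(cookie_id, page_categories):
    """Update a visitor's profile from the categories of a viewed page."""
    PROFILE_DB.setdefault(cookie_id, set()).update(page_categories)

def choose_ad(cookie_id, available_ads):
    """Pick the ad whose target categories best overlap the stored profile."""
    profile = PROFILE_DB.get(cookie_id, set())
    return max(available_ads,
               key=lambda ad: len(profile & ad["targets"]),
               default=None)

# A visitor browses baseball pages on one network site ...
record_page_view("cookie-123", {"sports", "baseball"})

# ... and on first arrival at another member site, gets a relevant ad.
ads = [{"name": "minivan", "targets": {"family", "autos"}},
       {"name": "baseball glove", "targets": {"sports", "baseball"}}]
print(choose_ad("cookie-123", ads)["name"])  # baseball glove
```

The key point of the design is that the profile lives on the network's side, indexed by the cookie, so no single member site needs to have seen the visitor before.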

While tracking people across websites made use of cookies, ad networks’ desire to track what people did on particular pages of a specific site required them to create a different technology. The challenge was that their ads were not stored on the same computer servers as the pages of the websites onto which the ads were served. When all the elements of a website are stored together on the same computer server, the controller of the website can easily follow a visitor across pages: the server records the internet address of the user's computer (its internet protocol, or IP, address) with every page it requests. The website's employees can then retrieve that information from the server's activity records—its “log files.”
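The log-file technique just described can be made concrete with a short sketch: given the server's own access log, the site operator groups requests by IP address to reconstruct one visitor's path across pages. The log lines below follow the common Apache log format of the era, but the entries themselves are invented.

```python
# A sketch of following one visitor across a site's pages using only the
# server's log files. Sample entries are invented; the format is the
# common Apache access-log layout.
import re

LOG_LINES = [
    '203.0.113.7 - - [12/Mar/1998:10:01:13 -0500] "GET /index.html HTTP/1.0" 200 5120',
    '198.51.100.2 - - [12/Mar/1998:10:01:20 -0500] "GET /sports.html HTTP/1.0" 200 3044',
    '203.0.113.7 - - [12/Mar/1998:10:02:41 -0500] "GET /sports/baseball.html HTTP/1.0" 200 2890',
]

# Capture the requesting IP address and the requested page from each line.
LOG_PATTERN = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "GET (\S+)')

def pages_visited_by(ip, log_lines):
    """Return, in order, the pages the given IP address requested."""
    pages = []
    for line in log_lines:
        match = LOG_PATTERN.match(line)
        if match and match.group(1) == ip:
            pages.append(match.group(2))
    return pages

print(pages_visited_by("203.0.113.7", LOG_LINES))
# ['/index.html', '/sports/baseball.html']
```

This is exactly why the arrangement broke down for ad networks: the log belongs to whoever runs the server, and a third party serving only the ad never sees it.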

The situation becomes more complex, though, when an ad network that serves an ad wants to know the specific website page on which a visitor sees the ad, because the network doesn't have access to the site's server logs. Instead, it uses a technology called a Web bug. As described by Richard Smith of the Electronic Frontier Foundation, a digital consumer rights advocacy group, a Web bug in the late 1990s was a small, invisible graphic, typically only one pixel by one pixel in size.76 When the user's browser loaded the page, it had to request the image from the ad network's server that stored it, and that request identified the page on which the ad would appear. In that way, the ad network would know which pages the visitor had browsed, and that information could be connected to that person's cookie ID and stored either in the cookie or on the network's computers.
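A simplified simulation of the Web-bug mechanism may help here: the browser's request for the invisible image carries a header naming the page being viewed (the “Referer” header), and the ad network's cookie identifies the visitor. The handler below only mimics what the ad server would record on each pixel request; the names and the sample request are illustrative.

```python
# A sketch of the Web-bug mechanism: one request for a 1x1 image tells the
# ad network both who is browsing (the cookie) and where (the referer).
# Names and data are invented for illustration.

BROWSING_LOG = []  # (cookie_id, page) pairs the ad network accumulates

def serve_pixel(request_headers):
    """Record who (cookie) viewed which page (referer); return the image."""
    cookie = request_headers.get("Cookie", "no-cookie")
    page = request_headers.get("Referer", "unknown-page")
    BROWSING_LOG.append((cookie, page))
    # In reality, the full bytes of a 1x1 transparent GIF would go here.
    return b"GIF89a"

# The browser loads a page containing something like
# <img src="http://adnet.example/pixel.gif" width=1 height=1>
# and sends along its cookie and the address of the page it is rendering:
serve_pixel({"Cookie": "id=cookie-123",
             "Referer": "http://news.example/sports/baseball.html"})
print(BROWSING_LOG)
# [('id=cookie-123', 'http://news.example/sports/baseball.html')]
```

The trick, in other words, is that the network never needs the publisher's logs at all: every page carrying its pixel reports itself to the network's own server.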

In fact, an ad didn't have to accompany a Web bug. A marketing firm might simply purchase permission from a website to place an invisible graphic on its pages that would not download a commercial message but would instead trigger the gathering of information about the visitor while remaining invisible to him. The purpose was not to advertise but to learn what pages particular audience members were visiting on sites. A similar Web bug placed in e-mail messages that used graphics allowed advertisers to note whether and when the messages were opened. Together with cookie developments, server files and Web bugs gave publishers and ad networks an increasing number of ways to count the people coming to their domains and to profile them in ways that might attract advertisers. And, following the precedent the cookie had set, those activities were hidden from the people whose movements were being recorded. Certainly, Web bugs did not have to be invisible. Smith asked, “Why are web bugs invisible on a page?” He answered: “To hide the fact that monitoring is taking place.”77

And yet, despite the growth of cookies, third-party advertising firms, Web bugs, and server-side audits, Advertising Age found in August 1998 that “some [major] advertisers are still hesitant” about buying into the Web. The trade magazine added that “lack of accurate measurement and difficulty tracking return on investment are cited as the biggest barriers to buying online media in a survey conducted by the Association of National Advertisers earlier this year.”78 Jed Breger, media director for Webnet Marketing, whose clients included Cablevision and Network Solutions, stated that “unsophisticated and sometimes even inaccurate measurement systems have been a real hindrance to many marketers and [have] kept them at very stable advertising levels.”79 David Dowling, president of Grey Interactive's media.com, which handled planning and buying for P&G, agreed. “Advertisers will be excited to spend more money online when the medium proves it is accountable,” he said.80 To some commentators, the problem was that many advertisers were unfamiliar with the new techniques to audit the display of ads and the audiences who viewed them. A larger problem, marketers and their agency counterparts agreed, involved a lack of consistency in the measurements that companies were offering. When marketers compared the visitor numbers presented by the ratings firms, the figures often didn't mesh. They also had difficulty making sense of how those data meshed with the data that companies tracking the sites themselves were reporting. “A big part of the confusion out there now is that it's hard for people to make sense of both site-centric and audience-centric data,” noted Mary Ann Packo, president of Media Metrix. “We are trying to marry traditional audience data like unduplicated reach and demographics with more site-centric measures like how many pages are viewed or clickstreams.”81

CASIE and the IAB were trying to push Web publishers toward industrywide measurement standards.82 Publishers and technology firms were also trying to encourage media buyers to follow realistic best practices regarding internet ads. Advertising Age, Business Marketing, chip maker Intel, and the search engine website AltaVista were among the groups funding a “Camp Interactive” convocation in Beaver Creek, Colorado, where more than three hundred media-buying executives learned ways to think about evaluating websites, generating realistic research-based campaign ideas, and pitching these ideas to clients.83 Procter & Gamble had the even more ambitious goal of remaking the Web into a packaged-goods marketer's dream. Since Ed Artzt's acclaimed speech in 1994, P&G executives had puzzled over the best ways to think about the Web's role in their marketing. P&G's insistence on click-through measurement had turned into a public-relations mess. Moreover, the pay-per-click (PPC) model seemed to be focused on a vision of the Web that didn't fit with the company's emerging strategy of differentially targeting particular consumer segments with messages that would enhance brands and persuade at the same time.

P&G's advertising expenditures reflected the firm's lack of confidence in the medium so far. Four years into the presence of the Web as a commercial medium, the company still pitched a relative pittance its way. In the fourth quarter of its 1998 fiscal year, P&G spent $3 million on U.S. internet advertising out of an annual $3 billion worldwide advertising budget. P&G watchers pointed out that its approach to television had taken quite a different trajectory. Its commitment to the medium jumped from 2 percent of its ad spending in 1950 to 61 percent five years later.84

To Denis Beausejour, the company's vice president of advertising, the issue was very much how to configure the Web so it could conceivably replace conventional television, P&G's advertising mainstay. Beausejour was intensely interested in applying the Web to product awareness. Placing products in front of consumers was a crucial activity for P&G divisions. Their customers were continually aware of the competition among companies for their attention regarding cleaning products, health care items, cosmetics, diapers, and other typical purchases. The promise of the Web was that a TV-like ad on a site could stir emotions that would reinforce branding while encouraging clicks that would lead people to learn more and leave their e-mail addresses for coupons and other ways P&G could address them. The reality in the late 1990s was different. Not only were ads not TV-like, the then-key measure of interactivity, click-through rates, were, in the words of an expert, “miserable”—less than one half of one percent. Fixing both problems through creative-enhancing technologies was Beausejour's purpose for a two-day summit at P&G's Cincinnati headquarters in 1998 that brought together senior executives from leading technology companies, advertising agencies, and marketers, including AT&T Corp., Coca-Cola Co., Euro RSCG Worldwide, Levi Strauss & Co., McDonald's Corp., and even P&G archrival Unilever. Sessions of the summit discussed the problems of Americans’ dial-up connection to the Web and ways to change that system; the future of computer-chip manufacturing and the Web's ability to process audiovisual materials; and the need for an industry association to foster collaboration among its various sectors in order to speed up the Web's development as an advertising medium.85

It became clear there, though, that the central connection most Americans had to the Web, America Online, had no intention of abandoning its lucrative dial-up service to encourage a much less profitable broadband service. The firm had no broadband infrastructure of its own, and tying into broadband providers to boost customer speeds would cost it far more money than it spent on linking to standard phone lines.86 If it and other dial-up service providers encouraged their members to switch to broadband, their profit margins would plummet. That awareness seems to have dampened marketers’ enthusiasm about the Web beating television with branding commercials anytime soon. Rather than utopian expectations, efficiency and pragmatism reigned. In a move that startled the trade, P&G's buying agency, media.com, told website executives that it would purchase ad space on the basis of a flat $5 cost per thousand ad impressions served onto Web pages. The company implied that the amount was similar to what the packaged-goods giant paid television broadcasters for a thousand viewers of a program (the cost per mille, or CPM) around which an ad appeared. The research firm eMarketer found in 1999 that the asking CPM for a thirty-second prime-time commercial was $12. This was at a time when sites were earning an average $36 CPM for Web buys, according to the measurement firm AdKnowledge.87

Many in an advertising industry mired in a business recession at the turn of the millennium seemed to agree that Web publishing did not deserve a lot of attention from brand advertisers. In 2001 an article in the Industry Standard, a richly produced, short-lived print magazine subtitled “The Newsmagazine of the Internet Economy,” noted that “about 12 percent of media consumption is on the Internet, yet it accounts for 3 percent or less of overall U.S. ad dollars.” One reason, the article stated, was that the segregation of internet marketing from other areas of media spending was making it less likely that traditional media planners would integrate Web advertising into a client's campaign. Traditional planners, “creatives” (copywriters and art directors), and clients didn't mind the omission, according to the article, because “traditional advertisers, it turns out, never really bought into this new medium. They were sold the Web on the basis of fear, convinced that if they didn't jump on the bandwagon, they would go the way of all dinosaurs… . These extravagant promises are not easily forgotten—or forgiven.” And, noted the article, “As the world waits for broadband and various forms of ‘rich media’ that promise to make Internet ads akin to TV commercials, many of the marketers holding the purse strings on big campaigns remain unconvinced.”88 The co-CEO and chief creative officer of the large Omnicom-owned BBDO advertising agency expressed the Web's utility in a way that might make online publishers wince: “Today the Net is fine for a discount offer. But nobody has figured out how to build brands.”89

The spread of broadband allowed for vivid commercial possibilities by the late 2000s. Nevertheless, publishers found that the media buyers for P&G and other large marketers still expected Web CPMs to be cheap. They encouraged competition among internet vehicles to keep it that way, no matter the technology or the website publishers’ expenses. Helping them was a search engine called Google. Even though major advertisers didn't consider it a builder of brand images, they joined with thousands of small marketers that saw it as a practical, efficient, and more measurable way than display advertising to lead consumers to clients’ products. In response, Web publishers joined a phalanx of firms in the belief that the path to lucrative display advertising would come through giving sponsors more and more information about individual anonymous site visitors. The future of advertising, publishing, and audience targeting on the Web would be shaped by these developments.