9
GAINING AN EDGE

Beyond market research

Over the past 50 years, market research has become an unhelpful distraction to business. The implicit belief is that a perfect judgment can be made whenever a decision needs to be taken, so long as the right people are asked the right questions in the right way. Over this period many of the initiatives that the research process has informed have gone on to be successful, although I suspect often not for the reasons the research suggested. Such “successes” have been enough to justify the wishful thought that people usually know what they think and why they do what they do; a thought that most of us would prefer to believe is true of ourselves.

Market research is a relatively new invention. Before it occurred to someone that you could just ask people what they thought and what they wanted and do whatever they said, some other process was required. As long ago as the 1920s, Claude Hopkins wrote Scientific Advertising. In it he explained that expertise in advertising should be developed by learning the principles and proving them by repeated tests, comparing one way with another, and studying the results:

One ad is compared to another, one method with another. Headlines, settings, sizes, arguments and pictures are compared. To reduce the cost of results even one percent means much in some mail order advertising. One must know what is best.

In lines where direct returns are impossible we compare one town with another. Scores of methods may be compared in this way, measured by cost of sales.1

As the title of his book suggests, Hopkins believed in a scientific approach. But he did that because it gave him the license to suggest things that his clients thought were ridiculous and to show that his creativity, understanding of people, and belief in the benefits of advertising were justified. When he was presented with a dud soap brand called Palmolive, he recalled from his Bible-studying days that olive oil had been used by the wealthy as a beauty treatment. His clients thought that his ad, depicting Cleopatra’s skin being rubbed with oil, was bizarre, but he tested the campaign and it succeeded. Hopkins had invented beauty advertising.

What we can add to the process is an understanding of human psychology. This understanding is continually evolving, but it’s important to recognize that people themselves are more or less a constant; it is the context that shifts. As Hopkins said:

Human nature is perpetual. In most respects, it is the same today as in the time of Caesar. So the principles of psychology are fixed and enduring.

Ultimately, success will be determined not by how thoroughly organizations research their customers, but by how astutely they are able to understand the response to what they are currently doing and how quickly they can evaluate and implement alternatives. The classic case of Avis Rent a Car’s “We Try Harder” campaign is an example that encapsulates all of the elements of the research dilemma. The campaign acknowledged the company’s number two position in the market and people in focus groups hated it, but the confidence of agency chief Bill Bernbach and the willingness of Avis CEO Robert Townsend to try it led to the company challenging Hertz for the number one position for the first time. Two years later Avis dropped the campaign: once people had tried Avis they didn’t always like what they experienced. Hertz counter-attacked with a campaign that told customers it was number one for a reason (a powerful social proof message). As one commentator put it, “People didn’t care how hard Avis tried, they only cared how effective Avis was.” It didn’t help that, while the slogan became something of a cultural sensation, comedians picked up on the campaign and jokes about “number twos” became associated with Avis.

So in one short period, during which the market share of the two companies shifted by as much as 10 percentage points, research was unable to predict how people would respond to the campaign in reality, nor that the campaign would ultimately be unsuccessful because of a competitor’s response, nor the greater impact on customers’ perceptions when operational mistakes happened after people had been so sensitized to the company’s customer service efforts. Everybody was right, everybody was wrong.

The AFECT criteria:
How much faith can you have in any consumer insight?

One of my reasons for writing this book is that consumer research reaches and affects so many people in business. Whether it is the informal feedback solicited by someone with a small business or the employee of a larger company sitting in a research debrief, it can be difficult to reconcile the feeling that what you’re hearing isn’t right with the fact that an apparently well-intentioned market researcher has gathered the information in an established and professional manner from people who you believe are your customers. Hopefully, I will have explained this lack of congruence for both groups.

However, the desire for reassurance is such that undoubtedly readers of this book will find themselves in situations where they or their organization still want something to lessen the feeling of risk and responsibility in decisions where trials aren’t possible. How confident should you be that what you’re hearing in a research debrief is something you should take to heart and act on? When should you make a stand for the feeling you have that an alternative course of action is preferable to the one recommended by the research agency?

Traditionally, the issue of confidence in research findings has been the domain of statistics. As I said in the introduction, I have no issue with statistical methods; in my view they are pure concepts that are no less valid than basic arithmetic. Granted, they are open to abuse. Just as language can be used selectively so that the truth isn’t told but nor is a lie, the selective application of statistical methods is rife with the potential to mislead. But in consumer research the fundamental issue really has nothing to do with the likelihood that if the survey were repeated the same answer would be obtained most of the time, give or take a few percentage points each way. The real question is whether the process has a chance of soliciting reliable information in the first place. As a result, the old chestnut of asking “How many people did we ask?” is not particularly relevant; at the very least, it should be the last basis for doubting the data rather than the first (and only) one.

So when that sinking feeling strikes you in a research debrief, how are you to decide whether it’s because your preconceived notions have been legitimately confounded by consumers, or because the research process is flawed? Is it you or the researcher who’s bad at their job?

Fortunately, by evaluating five aspects of the research process behind the “insights” being offered, you can gauge how much faith you should have in the conclusions. Consideration of the AFECT criteria will show how confident you can be about what research is telling you.

1 AFECT: Analysis of behavioral data

The first, and most important, question to consider is whether what you are being asked to believe is an analysis of consumer behavior or not. Is it information about what consumers do (or have done), or is it consumers’ opinions about themselves? Chapters 1, 3, 4, and 7 of this book provide copious examples of why, if it’s the latter, you have strong grounds for skepticism.

As I hope I’ve demonstrated, sales data and behavioral observation should inspire the most confidence. Where it is impossible to gather such data, ensuring that the research is derived from a behavioral focus – rather than from soliciting conscious attitudes and feelings – offers the best prospect of identifying unconscious associations and emotions. The alternative of conscious introspection normally required of the research interview process is best avoided. Even when the nature of a project demands that the research process looks into the future, I would argue that the only reliable insights will come from an analysis of current consumer behavior. Recognizing that the futurology required by many marketing projects is divorced from the process of consumer investigation is, at least, more honest, as well as affording the opportunity to learn from one’s mistakes.

2 AFECT: Frame of mind

Where consumer evidence is gathered covertly from observing the relevant retail environment, the consumer mindset takes care of itself. However, when research is conducted overtly or remotely from the consumer environment, it is more likely that the mindset of the respondent will be at odds with the real one than that it will happen to coincide with it.

Where research has been conducted without reference to the way consumers behave when interacting with the product, service, or communication, it should merit no greater confidence than were it to have been obtained by interviewing an irrelevant target audience. When there is evidence that the research has encouraged an artificial mindset – for instance by making an experience that is usually unconscious and fun, conscious and analytical, or one that is a source of anxiety, calm and considered – it should merit no greater confidence than if it had asked questions about the wrong subject!

3 AFECT: Environment

Another question to consider is the context of the research. If behavioral data isn’t available, at least research conducted in the appropriate consumer environment will have the contextual influences present. Just as importantly, it won’t have an entirely different set of environmental influences, created by virtue of the research having taken place elsewhere.

In the case of products, have the price, packaging, and competing products been included? Have unrelated products that would normally be available around the subject of the research been present? The more research becomes a process scrutinizing one aspect of the total consumer experience, the less likely it is to be able to reflect reality and real consumer responses.

4 AFECT: Covert study

Whatever the basis for the information, behavioral or otherwise, it is important to consider how apparent the focus of the research was to the consumers concerned. Where the subject of research is apparent, it dramatically increases the likelihood of influencing the response obtained. Putting the subject matter of research into the path of respondents creates a heightened sense of self-awareness that is likely to change how people behave.

While concealing the specific target of research among other alternatives is beneficial, for example testing alternative packaging designs from different brands, it is far better to conceal the nature of the research entirely by promoting it as being about something else altogether. For example, you could invite people to take part in a general discussion about newspapers while testing reactions to new packaging for a drink by having a selection of the products available, inviting people back the following day, and seeing if they select the same product and, if so, how quickly.

5 AFECT: Timeframe

Tempting as it is to believe that a detailed, in-depth, considered response is more dependable than a brief reaction, a process that turns a consumer experience that takes place in just a few seconds into a 90-minute discussion or 10-minute question-and-answer session should not persuade you. On the contrary, any time you believe the unconscious mind is involved, a quick response (that is, the one that takes place in the first second or so) is much more dependable. Consumer reality should determine the research process, not the amount of justification felt to be required.

The AFECT criteria provide a means of gauging the extent to which consumer research findings are an artificial by-product of the research process or an accurate reflection of consumer reality. They are a good tool to use when considering whether an investment in research is likely to be beneficial.
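Read as a checklist, the five criteria lend themselves to a rough scoring exercise. The sketch below is illustrative only: the criterion names come from this chapter, but the one-point-per-criterion scoring scheme and the function name are assumptions made for the sake of the example, not a formal method proposed here.

```python
# Illustrative sketch: rating a proposed research design against the five
# AFECT criteria. Criterion names are from the chapter; the scoring scheme
# (met = 1, partially met = 0.5, not met = 0) is an assumption.

AFECT = [
    "Analysis of behavioral data",
    "Frame of mind",
    "Environment",
    "Covert study",
    "Timeframe",
]

SCORES = {"yes": 1.0, "partial": 0.5, "no": 0.0}

def afect_confidence(ratings):
    """Return a rough 0-1 confidence figure from per-criterion ratings."""
    if set(ratings) != set(AFECT):
        raise ValueError("rate all five criteria")
    return sum(SCORES[ratings[c]] for c in AFECT) / len(AFECT)

# A fully covert in-store trial meets every criterion:
print(afect_confidence({c: "yes" for c in AFECT}))  # 1.0

# An overt online survey meets none of them:
print(afect_confidence({c: "no" for c in AFECT}))   # 0.0
```

The point of such a tally is not precision but discipline: it forces each criterion to be considered before any weight is placed on the findings.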

I have just completed a project testing new in-store communication and price tickets. My client had developed fresh information about how various products could make life easier for customers and was keen to learn the extent to which this information helped people choose the right product for them. Many companies would have tested the communication in in-depth interviews or focus groups, and asked target consumers to discuss how useful they thought the information was; this approach would even have afforded the option of testing alternatives. For a moment let’s assume this approach was used. How reliable might the research be?

A Would it be an analysis of behavioral data? No.

F Would consumers be in a realistic frame of mind? It is possible to get consumers into a realistic mindset, but most research of this type invites either a critical (Parent ego state) or balanced rational (Adult) mindset through the nature of the questioning exchange and the moderator’s style.

E How real would the environment (or context) be? It would be extremely difficult (and expensive) to recreate the store environment.

C Would the focus of the research be covert? No, it would be overt.

T Would the timeframe given for response match the timeframe consumers would usually use? Almost certainly not. The length of the interview would be considerably longer than the time it took to read the sign, but it might be possible to record an initial reaction and use it to measure the impact of the communication (although what would the rest of the interview be spent doing?).

I would suggest that such an approach should inspire a very low level of “psychological confidence.” Consumers would be engaging artificially with the communication and the risk of their processing it in a way other than that in which they normally would is considerable. Not least, as you will see in a moment, this approach entirely disregards the most critical component of all.

Instead of conducting an artificial piece of research, my client opted to conduct a live trial and asked me to help evaluate the impact of the new communication. During the day I spent watching customers, it became apparent that no one shopping in the store looked at the new communication for more than a fraction of a second – nowhere near long enough to process consciously what their eyes were scanning. I deduced that they were unconsciously filtering out what was there, regarding it as irrelevant, and consequently it would have no opportunity to help them select a product. To confirm this, I intercepted customers and, with their backs to the display, asked them what was on the communication. Some people guessed incorrectly (providing a suggestion of what their unconscious might have been hoping to see), some couldn’t recall anything, and some were totally unaware of the sign. I could advise my client with complete certainty that there was no point in pursuing the communication as intended and provide some clues as to what might work better based on what customers had inadvertently revealed when guessing.

A Was it an analysis of behavioral data? Yes.

F Were consumers in a realistic frame of mind? Yes, each customer’s mindset was purely a by-product of his or her own experience.

E How real was the environment? The environment was completely authentic.

C Was the focus of the research covert? Yes.

T Did the timeframe given for response match the timeframe consumers would usually use? Yes, it was determined by consumers themselves (and measurable in tenths of a second).

My clients could have complete confidence in the accuracy of what I was reporting. Even though only one store was used in the test, the results were so clear-cut that any sales variations could confidently be attributed to external factors.

To illustrate this qualitative scale further, it is interesting to compare how well various hypothetical examples of consumer research perform. At one extreme, consider a live trial of a new pack design. A small run of sample packs is produced and stocked in a suitably typical retail outlet, and success is gauged via sales, covert behavioral observation of consumers buying the product (e.g., time spent considering, whether the new pack is touched or not, and so on), possibly supplemented by exit interviews to confirm or clarify what’s been observed, and to identify what was and wasn’t influential.

A Is it an analysis of behavioral data? Yes.

F Were consumers in the right frame of mind? Each customer’s mindset was purely a by-product of his or her own experience, both prior to and during the interaction with the store, fixture, and product.

E How real was the environment? The only element changed was the substitution of the new pack being tested.

C Was the focus of the research covert? Yes.

T Did the timeframe given for response match the timeframe consumers would usually use? Yes. Consumers determined how long they spent at the fixture without being aware that they were involved in a research process of any kind.

Given that the research process has stayed out of the consumer experience, one can feel very confident about the likely performance of the new pack in the market as a whole. If the pack has performed poorly (in contrast with the immediate prior sales of the existing pack in the same store and a separate control store selling the original pack over the time of the test), there will be evidence of whether this is because it was selected and rejected from the behavioral observation, an understanding of whether this was conscious or unconscious from the subsequent interview, and, if it was conscious, the reasons for it. Granted, consumers won’t be providing the brief for the redesign, but unless they provided the brief for the original design, what’s happened to turn them into experts all of a sudden?

Alternatively, the manufacturer decides to solicit consumer opinion through an internet survey. An online research company sends an email out to thousands of people who have subscribed, asking them to complete a short survey in exchange for payment or entry into a prize draw for a few thousand pounds. Participants sit at their computer and answer a series of questions about how they buy motor oil, before being shown some pack designs to rate, along with a 500-word summary of the product’s positioning, and are then asked to rate some statements about their attitude to the product.

A Is it an analysis of behavioral data? No.

F Were consumers in the right frame of mind? Probably not. Respondents are sitting in front of their computers, probably at home, probably in the evening, probably in the midst of internet-related leisure activity. They are not taking part in the survey because they need motor oil. The experience is one of heightened consciousness, answering questions on a computer in a way that has far more in common with taking a test than it does the purchase experience concerned.

E How real was the environment? Unless most sales happen online, and even then unless products are purchased after a detailed question-and-answer session, one would have to say that the environment has no similarity to the retail one in which purchase decisions will actually be made.

C Was the focus of the research covert? No, it was overt. From the outset respondents have been told that they are taking part in a survey on motor oil. The nature of the questionnaire makes it impossible for them to be under any illusion about what is being researched.

T Did the timeframe given for response match the timeframe consumers would usually use? No. Respondents are reading statements and summaries and their reaction to the pack design is the result of a relatively long run-up of questions relating to the purchase of motor oil.

While this internet survey will produce a wealth of data relatively inexpensively, there is no reason to feel confident that the data will reflect the way in which consumers will respond when encountering the product for real. Since none of the five conditions has been met, there is a very high likelihood that consumers have presented how they would like to believe they make decisions, and what they think would make a product appealing, but there is no way of knowing how much conscious invention has taken place. Rather, given the shift in mindset and the nature of the questions, there are considerable grounds to ignore the results entirely.

As a further alternative approach, say the manufacturer is unable to secure the cooperation of a retailer, or isn’t prepared to make the investment in producing a limited run of the new packs, and wants to obtain a consumer perspective before taking one of the designs it has developed further. After observing customers in-store to identify the typical behavior and mindset of a motor oil consumer, people who purchase motor oil are recruited for individual interviews (although that product is concealed among many other products they are asked about). Mock displays are created (either virtually or physically) to simulate visually as much of the store as possible, within which the new design is substituted. Each respondent is directed into the appropriate frame of mind and asked to make a number of product purchases from the simulated display, including one for motor oil, and told that they will be asked to pay for their product, which they’ll receive at the end of the process. Their behavior, any questions they ask, and the choices they make are analyzed. Subsequently questions may be asked to confirm or clarify the choices made, before their money is returned to them.

A Is it an analysis of behavioral data? Partially. Behavior has been observed and then simulated, rather than attitudes or opinions solicited.

F Were consumers in a realistic frame of mind? Yes. Observation was used to identify customers’ mindset, and this mindset was recreated in the simulated shopping experience.

E How real was the environment? It was a simulation. Some of the contextual information was available, but the environment was different.

C Was the focus of the research covert? Primarily covert. The product of interest wasn’t identified and was concealed with several other products.

T Did the timeframe given for response match the timeframe consumers would usually use? Yes. Respondents are making a purchase decision rather than answering a series of questions.

In this case the research has met three of the conditions and partially met the other two. While not providing the reassurance of a live test, it does take account of the role of the unconscious and the potential impact of contextual elements by simulating a purchase. By including price information and requiring respondents to make a physical payment, it also does all it can to make the decision as risk sensitive as it would be in reality. Note, too, that it has avoided inviting an artificial conscious analysis of the new product by isolating it, not making it the overt focus of the experience, and not asking questions that can change how and what people think.

A final check, which can be used to assess the likelihood that information obtained from consumers is reliable, is whether the learning is congruent with that from the experiments conducted by consumer and social psychologists. Is it in line with learning about how the unconscious drives responses, or with research on how people are influenced? If the answer is yes, the research has at the very least coincided with behavioral traits identified independently elsewhere.

Value for money

It might surprise you to hear that I don’t think traditional “asking people what they want and listening to their answers” research is entirely futile. My issue is with research that, duplicitously or otherwise, is relied on to provide an ultimately unjustifiable sense of reassurance about a decision that is being taken or, worse still, to inform an organization’s strategy. Every now and then it is inevitable that someone will, in response to a question, say something that triggers a good idea, constructive change, or worthwhile action. But the operative word is someone. If human beings were routinely capable of such accurate introspection, psychoanalysts could be replaced by a two-line computer program that asked patients what their problem was and told them to do whatever they thought best to resolve it. Such sparks of astute observation or innovation are not and cannot be a dependable consequence of a conscious interviewing process, and so the only benefit of asking more people is an increased chance of encountering one person who does have such insight. Of course, whether a person’s comment is valuable or not is a qualitative judgment; its value is in triggering an association or reinforcing a prejudice in the mind of the decision maker.

While asking one question is inherently problematic, asking several makes it much more likely that the questioning process will influence the answers obtained, thus the case for large-scale, long-interview market research is extremely dubious. And yet this is exactly how the market research industry has typically defined the value of its offer.

When considered in this way, the approach and value of such research are brought into focus. Is it really necessary to speak to a large number of people? Is a “trained” moderator required to ask the questions? Is a detailed report of what everyone who was interviewed said likely to be helpful? Most importantly of all, how much should be spent on such a process? Wouldn’t it be better for whoever is tasked with the decision to put themselves among the people of interest to them and let unconscious and conscious stimuli, allied to whatever expertise has put them in the decision-making position they occupy, trigger in them the feeling of what they should do?

Skilled observation, particularly where it brings an informed understanding of consumer psychology, can provide genuine insights into how and why people are behaving as they do and what might be done to influence them. However, just as a mechanic is most useful if he can listen to your car and diagnose its fault rather than taking the entire vehicle to bits to inspect each part, the value of such a service is not in its scale but in its ability to find the problems and provide appropriate solutions.

Gaining a competitive edge

In the future, the companies that gain an edge over their competitors will be those that, intuitively or through application, best understand the complex interplay that exists between their customers’ unconscious and conscious minds. The understanding that is emerging from social psychology and neuroscience provides the insights that help explain why customers behave as they do, and why what seems logical or is endorsed by customers in an abstract context may not succeed in reality.

Scientific understanding about how the brain works is developing swiftly, but we are a long way off being able to read minds or predict what people will choose to do with accuracy. Designers have always known it is better to create an attractive retail space to sell products. Comprehending how apparently peripheral elements such as color, smell, and texture can dramatically shift how products are perceived helps bring some scientific knowledge to this process.

With the impact of the associative nature of the human mind and the role of unconscious filtering becoming better understood, there is an opportunity for organizations to get more in tune with their customers and be more effective at marketing to them. Historically, marketing has dealt in terms of consumer “needs,” whereas what matters more when it comes to consumer behavior is how unconscious associations are managed, unconscious fears overcome, and uncomfortable confusion avoided.

The nature of unconscious misattribution, whereby a feeling created by one thing is projected onto another, is such that nothing may in fact be something. From studying the way in which people are influenced it is easy to see how, very often, success is achieved without any tangible, consciously appraisable benefit. A few years ago I was involved in a product launch for a new pizza for Pizza Hut. The concept that had been developed was for a product with larger toppings: the meat pieces were going to be chunkier, the vegetables more thickly sliced, and the more attractive red onion would replace white. The concept products were prepared and presented to a group of senior managers and directors and everyone agreed that the resulting pizza looked more appetizing. Over the next few weeks the product development team went to work on sourcing the necessary ingredients and establishing the final cost of the product.

When the product was presented to the board for approval along with the cost, the chief executive became nervous. The new ingredients were significantly more expensive and there was no plan to increase the price of the pizza. The members of the product team were sent away to see what they could do. Following a series of meetings where revised products with cheaper ingredients were presented and discussed, the board eventually reached a point where it was comfortable with launching the product. Unfortunately, by this stage, as the launch deadline loomed, what little objectivity might have been present at the beginning of the process was gone, and the toppings had been reduced so much that, had anyone thought to put the existing version alongside the new one, they would have needed a micrometer to spot the difference in topping dimensions.

The launch went ahead, and the company announced its “new” product to the nation. Within a few days of the launch, several of us were called to a crisis meeting with the chief executive. Conveniently forgetting his involvement in the move to reduce the cost of the toppings, he demanded to know why the restaurant managers were saying that the new pizza looked no different from its predecessor.

If the company had conducted research in its standard comparative way, it is hard to imagine that it could have concluded anything but that the cost-managed “new” pizza was the same as the current one. In this situation, it would almost certainly not have launched the product. However, excited by celebrity-based advertising and promotional activity, people wanted to buy it and the launch was a success. The company had inadvertently conducted a successful live trial of an initiative that research would have rejected.

Developing cost-effective yet meaningful live tests should be a much higher priority than reaching for the researcher’s clipboard or convening a focus group, and it demands an appreciation of the subtle elements that often influence consumer behavior. Many of the most interesting experiments in social psychology utilize a test-and-control approach whereby, unbeknown to the participants, a variable is altered and participants’ reactions are observed. Through this kind of approach it is possible to identify, for example, that a simple change in the wording of a sign can dramatically alter the proportion of people who conform to a request, be it to keep a doctor’s appointment or to reuse their hotel bath towel, or that a well-phrased apology can have a more powerful impact on how customers feel about being let down than putting money in their hand.2
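The test-and-control logic described above can be sketched in a few lines. In the sketch below the store counts are invented for illustration, and the two-proportion z-test is one conventional way of checking such a comparison; the chapter itself prescribes no particular statistic.

```python
# Illustrative sketch of a test-and-control comparison: one sign wording is
# varied in a test store while a matched control store keeps the original,
# and the difference in response rates is checked. All figures are invented.
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-proportion z-test; returns (z, two-sided p-value)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF, via the error function
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical counts: 132 of 400 shoppers respond to the reworded sign,
# against 96 of 400 for the original wording in the control store.
z, p = two_proportion_z(132, 400, 96, 400)
print(round(z, 2), round(p, 4))  # roughly z ≈ 2.8, p ≈ 0.005
```

Because the participants never know a variable has been altered, the mindset, environment, and timeframe all remain authentic, which is precisely what gives this kind of result its weight.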

One business that has very successfully embraced the benefits of leveraging live data is the world’s biggest fashion retailer, Inditex (which owns brands including Zara, Bershka, and Massimo Dutti). It carefully monitors sales of new lines and captures unsolicited feedback from its stores, to the extent that around half of its clothing collections evolve and adapt during each season. In essence, every day of the business is a live test in more than 4,000 stores across 73 countries and the company is obsessive about learning from every moment: not just which garments are selling, but which colors, sizes, and shapes. With its marketing, design, and manufacturing tuned to respond and adapt to the feedback it captures, successful ranges can be continued, promoted more prominently, and expanded, and those that aren’t working can be swiftly withdrawn and replaced without the burden of excessive stock. In addition to the unparalleled speed of feedback that this approach provides, it also engages employees as experts in their business of connecting with customers, rather than outsourcing this role to market research organizations. It’s easy to understand why Inditex’s chief operating officer believes that the store managers appreciate being able to contribute in this way and perform better as a consequence.3

We are at the dawn of an exciting time for understanding consumers. Developments in social psychology, neural imaging, and a number of technologies that covertly track the movements of shoppers are providing new insights into what people do and why. But technology will also tempt people into gathering customers’ opinions swiftly at the expense of accuracy, either because it appeals to our vain notion of conscious will, or because it panders to a desire to place convenience above accuracy.

Ultimately, the prize for organizations that are willing to remove their dependency on traditional approaches to market research is considerable. By recognizing that consumers aren’t well placed to tell us how they do or will behave, and developing alternative approaches to evaluating and testing, we can place consumers much closer to the “heart of a business” than they are at present. The benefit of divorcing oneself from superstition is the opportunity to take responsibility for one’s own success and to learn the lessons from failures. Just as you got promoted not because of the “current planetary energy at play” but because you did something well, a new product deserves to be launched not because consumers approved it in focus groups, but because someone saw the opportunity for it.

Of course, where research is used as a crutch to give a sense of risk minimization (however unfounded), moving ahead without it may not feel comfortable. Nevertheless, as I have explained, it is not a question of all or nothing; rather, it’s a matter of reappraising what can and can’t be validated with consumers and recognizing that the key to their “thoughts” lies in studying what they actually do, not what they say when they’re invited to think about it.

Arguably, no company illustrates the benefit of this approach more than Apple, which has recognized the important distinction between needing to be able to connect with and relate to your customers and the futility of attempting to consolidate these people into representative data. Few could doubt Apple’s ability to create products that really resonate with consumers although, as Steve Jobs told Fortune, “We do no market research.” It is a company that employs people who are just like the people they want to sell to, and they develop the products and services that they find really exciting themselves, then take them to market with the enthusiasm and confidence they genuinely feel for what they have created.4