Peter Fader is the Frances and Pei-Yuan Chia Professor of Marketing at the Wharton School of the University of Pennsylvania. His expertise centers on the analysis of behavioral data to understand and forecast customer shopping and purchasing activities. Much of Fader’s research highlights the common behavioral patterns that exist across seemingly different domains. His work has been published in (and he serves on the editorial boards of) a number of leading journals in marketing, statistics and the management sciences.
PERFORM: The Internet has created an explosion of data about consumer habits and preferences. What are companies doing right or wrong in applying this data to their marketing efforts?
FADER: One thing they’re doing wrong is trying to collect more data. Companies have more data but not necessarily more knowledge than they did even 30, 40 years ago, when data was a trickle compared with today’s fire hose. Back then, data was hard to come by, and companies had to utilize what they had much more extensively. And then they’d sit around and say, “Boy oh boy, if we had data on this or that, here’s what we would do with it.” And that process caused them to think more deeply about what their customers did or the results of their marketing actions, and it caused them to try to stretch the data.
Today you get all this data coming out of the fire hose, and the best you can do is come up with a top-line summary and move on. Companies are data rich but knowledge poor, and they keep thinking that fancy software is going to do all the knowledge creation for them, but it hasn’t really happened that way.
PERFORM: So it’s really that they don’t have the people power?
FADER: Exactly right. If you pick on the broadcasting and advertising industries, they used to hire people with graduate degrees to make more sense of this. But when times get tough, who are the first ones to go? It’s those geeks and nerds in the back room. These staffing issues are a very big part of the problem.
To go one step further, look at advertising agencies today. They rarely hire M.B.A. students, much less Ph.D. students, anymore. That used to be pretty routine. There were good business skills going into advertising. Now it’s the creative types – they’re good at what they do, but they have their limits. They keep thinking it’s a whole new world and the old rules don’t apply. Actually, the basic patterns are largely the same now as they’ve been, well, forever, as far as customers doing something and then doing it again on a repeat basis.
PERFORM: You’ve been quoted as saying that all metrics are not created equal. Which ones are the most relevant and insightful for corporate marketers?
FADER: I’m interested in forward-looking metrics. Take an example. A lot of companies are looking at metrics like, how many of our current orders come from repeat customers? That’s a classic backward-looking metric. You might find that 90 percent are repeat customers, but is that good or bad? What would that number be for Crest toothpaste, 95 percent? That doesn’t necessarily mean it’s a big, healthy, growing business. The point isn’t that Crest is bad. It might be that you have chased away most of your customers, and the few who have stuck around are placing a lot of the orders. That doesn’t mean you’re doing well. That’s a backward-looking metric.
A forward-looking metric in that same domain would be, “Of all the stuff we sell this quarter, how many people will come back and buy again next quarter?” If that number is 90 percent, that would be huge. A lot of people would say it’s the same thing; it’s just quarter-on-quarter buying. But in one case you’re looking backwards, and in the other you’re looking forward, and it makes all the difference in the world. In general, I want metrics that ask, “Based on what we’ve seen so far, what do we anticipate next?” A big hot topic these days is companies setting up dashboards. The basic concept is sound, I can’t knock it, but so many of the metrics on the dashboard are purely backward-looking; they tell you what was going on but aren’t necessarily a good indication of what we can expect to happen in the future.
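The distinction Fader draws can be sketched in a few lines of Python. The customer names and quarterly buyer sets below are hypothetical, purely for illustration:

```python
# Hypothetical sets of customers who bought in each quarter.
q1_buyers = {"ann", "bob", "cara", "dan", "eve"}
q2_buyers = {"ann", "bob", "cara", "frank"}

# Backward-looking: what share of this quarter's buyers had bought before?
repeat_share = len(q2_buyers & q1_buyers) / len(q2_buyers)

# Forward-looking: what share of last quarter's buyers came back this quarter?
comeback_rate = len(q1_buyers & q2_buyers) / len(q1_buyers)

print(f"backward-looking repeat share: {repeat_share:.0%}")   # 75%
print(f"forward-looking comeback rate: {comeback_rate:.0%}")  # 60%
```

The two numbers use the same raw purchase records; only the direction of the question changes, which is exactly the point.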
Another issue gets back to the whole fire hose thing. People look at data that’s excessively aggregated and gives very little indication of what’s going on below the surface. The only way to really make sense of it is to assume that all customers are the same or you consider only the “average” customer. It turns out that customers in any industry are incredibly different from each other, and there’s as much action in how they vary from each other as there is in what their average tendencies are.
PERFORM: But the Internet is supposed to give us so much information about individual behaviors.
FADER: That’s right. For example, a company will come to me and boast that their online conversion rate is 20 percent. A pretty high number, but to me that number doesn’t mean anything. Does it mean that 20 percent of our customers always buy when they come to the website, and 80 percent never do? Or does it mean that every person has a propensity to buy 20 percent of the time? Or is it somewhere in between? It’s very important to understand the spread across customers.
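Fader’s two extreme readings of a 20 percent conversion rate imply very different futures, which a quick calculation makes concrete. The five-visit horizon here is an arbitrary illustrative assumption:

```python
# Both hypothetical populations convert on 20% of visits, on average.
n_visits = 5

# Extreme A: 20% of customers always buy, 80% never do.
# P(a random customer buys at least once in n visits) is still just 20%.
p_any_purchase_a = 0.20 * 1.0 + 0.80 * 0.0

# Extreme B: every customer buys on 20% of visits, independently.
p_any_purchase_b = 1 - (1 - 0.20) ** n_visits

print(f"P(buys at least once in {n_visits} visits):")
print(f"  all-or-nothing population:     {p_any_purchase_a:.0%}")  # 20%
print(f"  uniform-propensity population: {p_any_purchase_b:.0%}")  # 67%
```

Same average conversion rate, wildly different expectations for repeat buying — the spread across customers, not the average, carries the information.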
When it comes down to it, what’s marketing all about? It’s all about recognizing and capitalizing on differences among customers. It’s segmentation. So if you just tell me about the average customer, that’s not giving me any information about the nature of the segments, and so I really don’t know what to do with the data. That’s a classic example of what we’re seeing: many, many more metrics, aggregated up to a level where they become basically useless.
PERFORM: What are common misconceptions about using data mining and business intelligence to improve customer relationships and support marketing campaigns?
FADER: I have a love-hate relationship with data mining. For the things that it’s good at, it’s great. But let’s make clear what it can and can’t do. Data mining is superb for classification tasks. The classic example would be credit scoring – we get a new application and want to figure out if this person is a good credit risk or a bad one. So we look at past customers and dozens of characteristics, and we find out that this new person matches in certain respects and, therefore, goes into bin A versus bin B. Data mining is good for looking at patterns and saying this thing is an A or a B. But it’s not good at longitudinal tasks. It’s OK for determining that this customer is at risk of churning during a certain period, but what if you turn that question on its head and ask when this customer is likely to churn? It might not be this period or the next; it might be three and a half years from now. Data mining tends to fall apart for those “when” questions.
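One way to see the difference: a classifier yields a yes/no churn flag for a single period, while a timing model spreads probability over all future periods. The sketch below assumes, purely for illustration, a constant 10 percent per-period churn hazard (a simple geometric model — not necessarily the model Fader himself would fit):

```python
# Illustrative constant per-period churn hazard (assumed, not from the interview).
hazard = 0.10

def p_churn_in_period(t: int) -> float:
    """Probability the customer survives t-1 periods, then churns in period t."""
    return (1 - hazard) ** (t - 1) * hazard

# A "when" question gets a whole distribution, not a single yes/no answer.
for t in (1, 2, 14):  # 14 quarters is roughly three and a half years
    print(f"P(churns in period {t:2d}) = {p_churn_in_period(t):.3f}")

# Under this model the expected churn period is 1 / hazard = 10 periods.
```

A classification model answers only the t = 1 row; the timing model answers every row at once.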
PERFORM: How can marketers collect data that will generate the type of forward-looking metrics you discussed earlier?
FADER: The data’s fine; it’s a matter of using it better. Data doesn’t kill companies; people do. You don’t have to use all the numbers out there. The trick is picking just a few numbers that tell the story most effectively, not just in this situation but across other situations.
People come to me boasting about the size of their data warehouse. They have 600 measures collected on every customer and every transaction, but I’m thinking to myself that 597 of them are probably useless. I’d rather not clutter up servers or, worse yet, people’s minds with data that’s not going to do much. I don’t want to have measures unless they lead to specific, forward-looking questions that I can ask and have relevant answers for, not “nice to know” sorts of things.
PERFORM: How does multichannel marketing make that job even more complicated?
FADER: It’s a blessing and a curse. It gives us more opportunities to see what a customer’s about, which is nice, but it’s also a great example of this tendency to collect all this data across each of the channels, rather than coming up with appropriate summaries that give a clearer picture of what’s going on in each channel as well as a raw picture of the customer. “Multichannel” is a word that didn’t exist a few years ago, and so many people are rushing into this whole notion that if we can touch a customer in multiple ways, then all of a sudden they’ll be a better customer. I’m not so sure I agree with that.
PERFORM: It’s true that there are all these new online tools emerging, and marketers feel a lot of pressure to stay up-to-date and use those tools in their daily jobs. What worthwhile technologies are being overlooked?
FADER: That’s a good question. Some very good methods were invented back in the 1960s under the frequently maligned rubric of direct marketing. No one wants to be a direct marketer; it makes you think of Ginsu knives and things like that. But those people were smart. If they only had sparse data available, they would make use of it. And they would constantly run experiments, so they knew what to manipulate, how to present this manipulated set of factors to the customer in an unobtrusive way, how to read the results and how to act on them.
The companies I admire most are the ones that still embody those principles, firms like Capital One and Harrah’s, and Tesco from the U.K. It’s not that they have more data or bigger computers, but they ask questions the right way. And they pay homage to the classic old direct marketers. One of the biggest disappointments of the dot-com era is how many companies didn’t see themselves as direct marketers. If you’re on the Internet, it’s all direct marketing, but they think that’s old and dated.
PERFORM: You sound like a Luddite!
FADER: No, a Luddite is someone who rejects new stuff, saying the sky is falling and so on. I’m totally into the new stuff. If you give me a new source of data, I’ll capitalize on it. But I’m a traditionalist, which means that most of what we need to know has been learned already, in terms of techniques and basic customer behavior patterns. I’m into new gadgets and always interested in reading about cool new data sources, but I worry about people who are getting their hands on them who haven’t learned to walk before they can run.
PERFORM: When you see a company, for example, establishing a presence in Second Life, is there any value in that? Is it just about raising awareness and having more visibility?
FADER: I’m into raising awareness and creating visibility. That’s mass marketing, which I’m a great fan of, and again, this makes me sound like a Luddite. Too many people are into this notion of one-to-one marketing. In most contexts, one-to-one marketing is inefficient, if not impossible. Coca-Cola cannot have a one-to-one relationship with its customers, and if it wanted to, it would be too expensive. The best approach is to use careful mass-marketing programs and run experiments to find out what ads work under what circumstances and so on, but not necessarily trying to cater to the needs of every individual, which raises two problems. One is the inefficiency, and the other is the amount of randomness that surrounds each individual.
Today’s hope is that if we collect all this data, we can figure out what this customer over here is going to do next and we can, therefore, tweak our pricing and our product offering to be just right for them. But then they go and do something crazy and unexpected. The primary tenet in the way I look at customer data is to acknowledge and embrace all the randomness around actual behavior. When I’m scanning the shelf deciding what brand of orange juice to buy, a zillion factors are driving that decision, but only a few of them are measurable or observable. A lot of them are seemingly minor, temporary things, such as: “Is one of my kids sick?” or “Am I starting a new diet?” – issues that you just can’t collect as data.
PERFORM: What do you think are some of the best ways to measure the success of a marketing campaign?
FADER: The first thing, which I think a lot of people would agree with, is you have to look at incremental impact. One thing that drives me bananas is when someone’s reporting on a campaign and they talk about how many units they sold during that campaign. That doesn’t do me any good because it raises the question, “How many would you have sold in the absence of the campaign?” You need to have baselines for what would happen under a business-as-usual scenario, some known expectations before you start manipulating things.
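The incremental-impact arithmetic is simple once a baseline exists. The figures below are hypothetical; the baseline would in practice come from a holdout group or a pre-campaign forecast:

```python
# Hypothetical campaign results versus a business-as-usual baseline.
units_during_campaign = 12_000
baseline_units = 10_500  # expected sales with no campaign (e.g., from a holdout group)

incremental_units = units_during_campaign - baseline_units
lift = incremental_units / baseline_units

print(f"incremental units: {incremental_units}")  # 1500
print(f"lift over baseline: {lift:.1%}")          # 14.3%
```

Reporting the raw 12,000 units says nothing; the 1,500 incremental units are what the campaign actually bought.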
The second thing is what I alluded to earlier – running systematic experiments. If you’re going to do a particular promotional campaign, vary different characteristics so you can differentially assess how well things are working. I mentioned Capital One earlier. At any given time, they’re running about 40,000 different experiments, and I’m not exaggerating. Everything from, are they using real stamps or a postage meter to is there a return address, and do they emphasize the interest rate or the annual fee? They’re manipulating every possible fact. So while you might think, “It’s just another piece of junk mail,” it’s actually a very, very carefully designed step in this broader experiment. They run it and read the results and then go back and do it again. Too often, people see it as a one-shot deal, like, “This promotion is different from all the other ones we did, so we’re going to get certain results and then we’re going to move on to the next one.”
But you don’t learn what’s effective that way. You have to be much more programmatic about designing promotional campaigns and then making inferences from them. You have to view each one as just a step along the way. At most companies, they say, “That’s nice, yes, we’d love to learn, but we have a budget to meet, and we’re on a tight schedule, and we don’t have time to sit around and manipulate factors and things like that.” That’s the usual reaction, and that’s a terrible mistake.