Monday, January 30, 2012

Is Marketing Really "Data-Driven?" Pt. 1

No matter what you call it, the clear trend in marketing today is towards a model that depends on digitally collected consumer data to inform both online and offline media strategy.  Terms like “data-driven” and “fast-moving data” are bandied about, conjuring up an image of an agile, precise campaign that links brands to individuals rather than demographics.  Marketers know that the shift from art to science is already in progress, and I wholeheartedly agree with this approach.
The problem is that there is a danger in a job only half done, and at times I fear that we as an industry talk about “data-driven marketing” like experts without bringing any real rigor to the approach.  Additionally, using digital data to inform traditional media, both in planning and creative, will return misleading results if the same statistical approach isn’t applied to those channels.  No matter how cleverly you apply your digital learnings to traditional performance, if the metrics by which we measure TV are inaccurate, or not properly tied to business goals, then we risk painting a picture that is different but no more insightful.
To truly claim a data-driven approach, you need to collect data systematically at every step of the marketing process and analyze it methodically, adhering to sound statistical procedure.  Just as importantly, you need to know what data to gather, and how it helps you to achieve your goal.
Let’s start by looking at an example of how this can affect measurement at every level of a campaign.  Starting with the broadest question: what is the goal of marketing?  To increase the client’s sales or the services they provide.  How is that measured?  Brand loyalty?  Market share?  Sales in dollars?  Profit?  Units sold?  The first thing an advertising agency has to do (ideally) is identify what the client’s goals are, and frankly, the media agency should be the one that determines the goal, since doing so is part of the marketing process.
Why is that?  Let’s look at the list of client goals I mentioned above, all of which at first blush appear to be perfectly normal, reasonable ways to judge a marketing agency, but all of which have issues from a statistical and/or business standpoint.
Brand Loyalty: This is probably the worst measure for a number of reasons, over and above the fact that it is a vague concept.  Anything that is survey or panel based can be looked at, but the methodology and sampling issues make it less scientific. 
Market Share:  Better than brand loyalty, but because the information has to come from a number of outside sources, gathering this data is ponderous, and more importantly there is a long time lag in reporting.
Sales (dollars): On its own this number is somewhat useful because it is an absolute 1-to-1 value, but it really should be adjusted to account for the market environment of the client’s particular category, rather than taken raw.
Profit: Terrible.  I don’t think anyone would actually measure a company’s marketing success based on profits, but it is something clients think about, and it is a useful illustration of the kind of stat you don’t want.  Too many uncontrolled variables go into profit and revenue numbers.  If a company sells more product but the cost of raw materials rises as well, that cost increase shouldn’t be factored into any measure of advertising.
Sales (Units): This is probably the best way to measure overall advertising success over a long period of time, once again normalizing the number to broader market conditions.  By using sales in units you remove some of the variables around pricing and the competitive environment (aside from the ones that can be adjusted for).
The key to any good statistical measure of success or ability is removing as many uncontrolled variables as possible, and not crediting/blaming advertising for things it can’t control.
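Since normalizing to market conditions is easier to describe than to do, here is a minimal sketch of the kind of adjustment I have in mind. The function and the numbers are purely hypothetical, and a real analysis would need to control for far more than category growth:

# Minimal sketch: normalizing a brand's unit sales against its category,
# so that advertising isn't credited or blamed for market-wide swings.
# All numbers are made up for illustration.

def category_adjusted_growth(brand_units_before, brand_units_after,
                             category_units_before, category_units_after):
    """Return brand unit-sales growth relative to the category's growth."""
    brand_growth = brand_units_after / brand_units_before - 1.0
    category_growth = category_units_after / category_units_before - 1.0
    # Growth over and above what the whole category did in the same period.
    return brand_growth - category_growth

# The brand grew 10%, but the category grew 6%: only about 4 points of growth
# are even candidates for being attributed to the campaign.
print(category_adjusted_growth(100_000, 110_000, 2_000_000, 2_120_000))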
Since I often use baseball examples to illustrate statistics and how to use them, the perfect analogue here is using ‘wins’ to judge a pitcher.  Conventionally, people looked at wins to determine how good a pitcher is, but that number is quickly falling out of favor because it has very little statistical relevance to how well a pitcher actually performs.  Think about it: if a pitcher gives up 5 runs but his team scores 8, he gets a win.  If another pitcher gives up 2 runs but his team is shut out, he gets a loss.  Who did a better job?
The lesson is not that we shouldn’t measure things and use the data as much as possible, but that not all stats are created equal, and that we need to make sure that what we are collecting is telling us what we think it is.  Right now I would say that marketing is getting good at amassing data, but still extremely infantile in terms of manipulating it properly.  We are still at the stage of evaluating pitchers based on wins, as it were.
Some of this also comes down to assumptions: how many of them are traditionally held marketing beliefs that we take for granted, despite never having seen empirical evidence for them?  Every marketer should be a gadfly.  Poke holes in theories or justifications that don't make sense.  If you see a test that doesn't account for uncontrolled variables in the results, point it out.  If a conversation is centered around an idea that everyone accepts but no one has proved, ask why.
Next up, it might be worth looking at TV, and the relationship between digital/social and offline, in order to challenge some of the preconceived notions.

Monday, January 23, 2012

We Need to Make Digital Measurement Easier


[Editor’s Note:  Sorry for the long layoff, I am going to be better about posting starting today]
To be clear, I don’t mean that we need to make it easier for us digital marketers, but that we need to make it easier for the brand representatives that we have to report to.
When I go through plans and recaps for marketing programs, the problem becomes very clear.  People who have been dealing with traditional marketers for a long time expect just a few things from TV, print, and radio: Reach and Frequency.  These are estimates, and they are provided by the people selling the media, so they don’t need to be calculated by those buying it.
It’s simple, and clean, though it does nothing to tell you about the effectiveness of the channel after the fact.  How many people saw the ad, and how often.  Move on.
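For what it's worth, the math behind those two numbers is as simple as it sounds. Using the standard definitions (reach is the share of the target audience exposed at least once, frequency is the average number of exposures per person reached), a toy calculation with invented numbers looks like this:

# Toy reach/frequency math using the standard definitions.
# All figures are invented for illustration.

target_population = 10_000_000      # people in the target demo
unique_people_reached = 4_000_000
total_impressions = 12_000_000

reach_pct = unique_people_reached / target_population * 100   # 40.0
frequency = total_impressions / unique_people_reached         # 3.0
grps = reach_pct * frequency                                  # 120 gross rating points

print(f"Reach: {reach_pct:.1f}%  Frequency: {frequency:.1f}  GRPs: {grps:.0f}")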
Sometimes you will see a brand lift study built into a buy, which basically just consists of polling some consumers to see how the ad made them feel (with varying degrees of scientific rigour).
Then we get to digital, and suddenly the performance metrics increase exponentially.  The breadth and depth of data that we have available to us in the digital space is both a blessing and a curse in that sense.
First of all, we subdivide “digital” into myriad channels of increasing specificity.  There is display, search, social, in-text, and more.  Each of these sub-channels has multiple ad unit types, and in turn, each ad type has multiple statistics that can be tracked.
(For instance, display ads can be static or interactive units, and static units can be further broken down by size: standard banners, skyscrapers, etc.  So you have reach in terms of unique users, then interaction rates, time spent in the ad unit for rich media, click-through rate, video plays in unit, and more.)
You can measure attributes of the ads themselves, like click through rate, impressions, cost per impression/click, etc., and you can also measure on-site actions and behavior, like conversions, bounce rate, time on site, and so on.
We haven’t even talked about the social metrics, like Facebook likes, tweets, +1s, ‘conversations’, and additional followers/friends.
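To make the contrast with "reach and frequency, move on" concrete, here is a small sketch of just a handful of the standard calculations above, computed from raw campaign counts. The figures are placeholders:

# A few standard digital calculations from raw (made-up) campaign counts.
impressions = 2_500_000
clicks = 5_000
conversions = 150
media_cost = 12_500.00          # dollars
sessions = 4_800
single_page_sessions = 2_400

ctr = clicks / impressions * 100               # click-through rate, %
cpm = media_cost / impressions * 1000          # cost per thousand impressions
cpc = media_cost / clicks                      # cost per click
conversion_rate = conversions / clicks * 100   # post-click conversion rate, %
bounce_rate = single_page_sessions / sessions * 100

print(f"CTR {ctr:.2f}%  CPM ${cpm:.2f}  CPC ${cpc:.2f}  "
      f"CVR {conversion_rate:.1f}%  Bounce {bounce_rate:.1f}%")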
The upside of all of this is that the data obviously gives us visibility and optimization options that traditional marketers can only dream of.  The downside is that, unlike traditional offline media channels, we are actually held to performance standards, and moreover, the people we report to get lost in all of these metrics.
Traditional media channels don’t provide brands with much in the way of data or measurement options, and maybe the answer is that they should be forced to come up with better ways to justify their value.  More likely however, we as digital marketers need to find ways to simplify our reporting.
This may mean actually giving brands less raw data, and it’s possible that Pandora’s box has been opened and it is too late.  However, I think the only workable outcome is the creation of a weighted composite number: an equation that takes a variety of metrics across digital channels into account and pegs the result to an index.  The million-dollar problem is figuring out how to do it, but you can bet that I will be working on it, as I am sure others are.
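To be clear about what I mean by a weighted composite pegged to an index, here is a minimal sketch. The metrics, weights, and benchmarks below are all hypothetical; choosing them in a statistically defensible way is exactly the million-dollar problem:

# Minimal sketch of a weighted composite digital score, pegged to an index of 100.
# Metrics, weights, and benchmarks are hypothetical placeholders.

CAMPAIGN  = {"ctr": 0.25, "conversion_rate": 3.2, "engagement_rate": 1.8, "social_actions_per_1k": 4.0}
BENCHMARK = {"ctr": 0.20, "conversion_rate": 2.5, "engagement_rate": 2.0, "social_actions_per_1k": 5.0}
WEIGHTS   = {"ctr": 0.2,  "conversion_rate": 0.4, "engagement_rate": 0.2, "social_actions_per_1k": 0.2}

def composite_index(campaign, benchmark, weights):
    """Weighted average of each metric indexed to its benchmark (benchmark = 100)."""
    score = 0.0
    for metric, weight in weights.items():
        indexed = campaign[metric] / benchmark[metric] * 100
        score += weight * indexed
    return score

# A score above 100 means the campaign beat the (hypothetical) benchmark on a weighted basis.
print(f"Composite index: {composite_index(CAMPAIGN, BENCHMARK, WEIGHTS):.1f}")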
Expect a 'part two' of this entry in the future.