Showing posts with label Paid Search.

Wednesday, April 13, 2016

Multi-Channel Attribution and Understanding Interaction

I'm no cosmologist, but this post relies on a concept well known to astrophysicists, who have more in common with today's marketers than they might care to admit. So what links marketing analytics to one of the coolest and most 'pure' sciences known to man?

I'll give you a hint: it has to do with such awesome topics as black holes, distant planets, and dark matter.

The answer? It has to do with measuring the impact of things that we can't see directly but that still make their presence felt. This is common practice for scientists who study the universe, and yet not nearly common enough among marketers and the people who evaluate media spend and results. Like physics, marketing analytics has progressed in stages, but we have the advantage of entering a much more mature field, and can thus avoid the mistakes of earlier times.

Marketing analytics over the years, and the assumptions each stage created:

  • Overall Business Results (i.e. revenue): if good, marketing is working!
  • Reach/Audience Measures (i.e. GRPs/TRPs): more eyeballs = better marketing!
  • Last-click Attribution (i.e. click conversions): put more money into paid search!
  • Path-based Attribution (i.e. weighted conversions): I can track a linear path to purchase!
  • Model-based Attribution (i.e. beta coefficients): marketing is a complex web of influences!

So what does this last one mean, and how does it relate to space? When trying to find objects in the distant regions of the cosmos, scientists often rely on indirect means of locating and measuring their targets, because they can't be observed directly. For instance, we can't see planets orbiting distant stars even with our best telescopes. However, based on things like the bend in light emitted from a star, and the composition of gases detected, we can 'know' that a planet of a certain size and density is in orbit, because it affects the measurements we would expect to get from that star in the absence of such a planet. Similarly, we can't see black holes, but we can detect the radiation signature created when gases under their immense gravitational force give off X-rays.

This is basically what a good media mix/attribution model attempts to do, and it's why regression models can work so well. You are trying to isolate the effect of a particular marketing channel or effort, not in a vacuum, but in the overall context of the consumer environment. The first white papers I remember seeing on this were mainly about measuring brand lift from exposure to TV or display ads, but those were usually simple linear regression problems connecting a single predictor variable to a response, or done as chi-square style hypothesis tests. Outside of a controlled experiment, that method simply won't give you an accurate picture of a marketing ecosystem that takes the whole customer journey into account.

As a marketer, you've surely been asked at some point "what's the ROI of x channel?" or "How many sales did x advertisement drive?" And perhaps, once upon a time, you would have been content to pull a quick conversion number out of your web analytics platform and call it a day. However, any company that does things this way isn't just going to get a completely incorrect (and therefore useless) answer; it isn't even asking the right question.

Modern marketing models tell us that channels can't be evaluated in isolation; at best, you can make a substantially accurate attempt to isolate a specific channel's contribution to overall marketing outcomes within a particular holistic context.

Why does that last part matter? Because even if you build a great, highly predictive model out of clean data, all of the 'contribution' measurement you are doing depends on the other variables.

So for example, if you determine that PPC is responsible for 15% of all conversions, Facebook for 9%, and email for 6%, and then back into an ROI value based on the cost of each channel and the value of the conversions, you still have to be very careful with what you do with that information. The nature of many common predictive modeling methods is such that if your boss says, "Well, based on your model PPC has the best ROI and Facebook has the worst, so take the Facebook budget and put it into PPC," you have no reason to think that your results will improve, or change in the way you assume.
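To make the "backing into ROI" arithmetic concrete, here is a minimal sketch. Every number (conversion counts, values, shares, spends) is invented for illustration, and `channel_roi` is a hypothetical helper, not part of any real attribution package:

```python
def channel_roi(share, spend, total_conversions, value_per_conversion):
    """ROI of one channel, given its modeled share of all conversions."""
    revenue = total_conversions * share * value_per_conversion
    return (revenue - spend) / spend

# Say a model attributes 15% of 10,000 conversions (worth $50 each)
# to PPC at $40k spend, 9% to Facebook at $30k, 6% to email at $5k:
for name, share, spend in [("PPC", 0.15, 40_000),
                           ("Facebook", 0.09, 30_000),
                           ("email", 0.06, 5_000)]:
    print(f"{name}: ROI {channel_roi(share, spend, 10_000, 50.0):.0%}")
```

Note that each of these ROI figures is conditional on the channel mix that produced the shares; the point of the rest of this post is that you can't treat them as fixed properties of the channels.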

Why not? Because hidden interaction between channels is built into the models: some of the value that PPC provides in your initial model (as well as any error term) is based on the levels of Facebook activity measured during your sample period.
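Here is a minimal sketch of that interaction effect, using entirely invented weekly spend data and an ordinary least squares fit with an interaction term; the numbers and the numpy-based approach are illustrative assumptions, not a description of any particular attribution tool:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 104  # two years of hypothetical weekly data
ppc = rng.uniform(10, 50, n)  # weekly PPC spend, $k (invented)
fb = rng.uniform(5, 30, n)    # weekly Facebook spend, $k (invented)

# Generate conversions from a process that includes a PPC x Facebook
# interaction, plus noise -- the "hidden interactivity" in question.
conversions = 100 + 8*ppc + 4*fb + 0.5*ppc*fb + rng.normal(0, 10, n)

# Fit conversions ~ 1 + ppc + fb + ppc*fb by least squares.
X = np.column_stack([np.ones(n), ppc, fb, ppc*fb])
beta, *_ = np.linalg.lstsq(X, conversions, rcond=None)
b0, b_ppc, b_fb, b_int = beta

# The marginal value of an extra $1k of PPC depends on Facebook spend:
print(f"per-$k PPC effect at fb=$5k:  {b_ppc + 5*b_int:.1f} conversions")
print(f"per-$k PPC effect at fb=$30k: {b_ppc + 30*b_int:.1f} conversions")
```

The same dollar of PPC is "worth" more when Facebook spend is high, so a contribution estimate made at one mix does not transfer to a different mix.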

It's a subtle distinction, but an important one. If you truly want to have an accurate understanding of the real world that your marketing takes place in, be ready to do a few things:
  1. Ask slightly different questions; look at overall marketing ROI with the current channel mix, and how each channel contributes, taking into account interaction
  2. Use that information to make incremental changes to your budget allocations and marketing strategies, while continuously updating your models to make sure they still predict out-of-sample data accurately
  3. If you are testing something across channels or running a new campaign, try adding it as a binary categorical variable to your model, or a split in your decision tree
Just remember, ROI is a top-level metric, and shouldn't necessarily be applied at the channel level the way people are used to. Say this to your boss: "The marketing ROI, given our current/recent marketing mix, is xxxxxxx, with relative attribution between the channels being yyyyyyy. Knowing that, I would recommend increasing/decreasing investment in channel (variable) A for a few weeks, which according to the model would increase conversions by Z, and then seeing if that prediction is accurate." Re-run the model, check assumptions, rinse, repeat.
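The predict-then-verify loop described above can be sketched as follows. All of the coefficients are invented, and `predict_conversions` is a hypothetical stand-in for whatever fitted model you actually have; it includes an interaction term and a 0/1 campaign flag, matching steps 2 and 3:

```python
# Hypothetical fitted coefficients for weekly conversions: intercept,
# PPC spend ($k), Facebook spend ($k), PPC x Facebook interaction,
# and a 0/1 flag for a cross-channel campaign. None come from a real model.
def predict_conversions(ppc, fb, campaign, beta):
    b0, b_ppc, b_fb, b_int, b_camp = beta
    return b0 + b_ppc*ppc + b_fb*fb + b_int*ppc*fb + b_camp*campaign

beta = (100.0, 8.0, 4.0, 0.5, 60.0)  # invented coefficients

# Predict the lift from a small, incremental budget change, then
# compare the prediction against observed results before reallocating.
current = predict_conversions(ppc=30, fb=20, campaign=0, beta=beta)
proposed = predict_conversions(ppc=35, fb=20, campaign=0, beta=beta)
print(f"predicted lift from +$5k of PPC: {proposed - current:.0f} conversions")
```

If the observed lift comes in well off the prediction, that is your cue to re-fit and re-check assumptions rather than push the budget shift further.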


Monday, January 30, 2012

Is Marketing Really "Data-Driven?" Pt. 1

               No matter what you call it, the clear trend in marketing today is towards a model that depends on consumer data collected digitally to inform both online and offline media strategy.  Terms like “data-driven” and “fast-moving data” are bandied about, conjuring up an image of an agile, precise campaign that links brands to individuals, rather than demographics.  Marketers know that the shift from art to science is already in progress, and I should say that I wholeheartedly agree with this approach.
                The problem is that there is danger in a job only half done, and at times I fear that we as an industry talk about “data-driven marketing” like experts when there is no rigor to the approach.  Additionally, using digital data to inform traditional media, in both planning and creative, will return misleading results when the same statistical approach isn’t applied to those channels.  No matter how cleverly you apply your digital learnings to traditional performance, if the metrics by which we measure TV are inaccurate, or not properly tied to business goals, then we risk just painting a picture that is different but no more insightful.
                To truly claim a data-driven approach, you need to collect data at every step of the marketing process systematically, and analyze it methodically, adhering to sound statistical procedure.  Just as importantly, you need to know what data to gather, and how it helps you to achieve your goal.
                Let’s start looking at an example of how this can affect measurement at every level of a campaign.  Starting with the broadest, what is the goal of marketing?  To increase the client’s sales or services provided.  How is that measured?  Brand loyalty?  Market share?  Sales in dollars?  Profit?  Units sold?  The first thing that an advertising agency has to do (ideally) is identify what the client goals are, and frankly, the media agency should be the one that determines the goal, as it is part of the marketing process.
                Why is that?  Let’s look at the list of client goals that I mentioned above, all of which at first blush appear to be totally normal, reasonable ways to judge a marketing agency, but all of which have some issues from a statistical and/or business standpoint.
Brand Loyalty: This is probably the worst measure, for a number of reasons over and above the fact that it is a vague concept.  Survey- or panel-based measures can be examined, but methodology and sampling issues make them less scientific.
Market Share:  Better than brand loyalty, but the information has to come from a number of outside sources, which makes gathering it ponderous; more importantly, there is a long reporting lag.
Sales (dollars): On its own this number is somewhat useful because it is an absolute 1-to-1 value, but it really should be adjusted to account for the market environment of the client’s particular category, rather than taken raw.
Profit: Terrible.  I don’t think that anyone would actually measure a company’s marketing success based on profits, but it is something that clients think about, and a useful illustration of what stats you don’t want.  Too many uncontrolled variables go into profit and revenue numbers.  If a company sells more product but the cost of raw materials increases as well, that increase shouldn’t be factored into any measure of advertising.
Sales (Units): This is probably the best way to measure overall advertising success over a long period of time, once again normalizing the number to broader market conditions.  Using sales in units removes some of the variables around pricing and the competitive environment (aside from the ones that can be adjusted for).
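One simple form the normalization mentioned under Sales (Units) could take is comparing the client’s unit growth to the category’s unit growth, so that a rising or falling market isn’t credited to (or blamed on) the advertising.  The numbers and the `market_adjusted_growth` helper here are invented for illustration:

```python
def market_adjusted_growth(client_now, client_before,
                           category_now, category_before):
    """Client unit-sales growth minus category unit-sales growth."""
    client_growth = client_now / client_before - 1
    category_growth = category_now / category_before - 1
    return client_growth - category_growth

# Client grew units 10% while the whole category grew 6%:
lift = market_adjusted_growth(110, 100, 1_060_000, 1_000_000)
print(f"market-adjusted unit growth: {lift:.1%}")
```

This is the crudest possible adjustment; the point is only that the raw unit number should never be read without the market context next to it.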
The key to any good statistical measure of success or ability is removing as many uncontrolled variables as possible, and not crediting/blaming advertising for things it can’t control.
Since I often use baseball examples for statistics and how to use them, the perfect analogue here is using ‘wins’ to judge a pitcher.  Conventionally, people looked at wins to determine how good a pitcher is, but that number is quickly falling out of favor, because it has very little statistical relevance to how well a pitcher performs.  Think about it: if a pitcher gives up 5 runs but his team scores 8, he gets a win.  If another pitcher gives up 2 runs but his team is shut out, he gets a loss.  Who did a better job?
The lesson is not that we shouldn’t measure things and use the data as much as possible, but that not all stats are created equal, and that we need to make sure that what we are collecting is telling us what we think it is.  Right now I would say that marketing is getting good at amassing data, but still in its infancy when it comes to manipulating it properly.  We are still at the stage of evaluating pitchers by wins, as it were.
Some of this also comes down to assumptions: how many of them are traditionally held marketing beliefs that we take for granted, despite never having seen empirical evidence for them?  Every marketer should be a gadfly.  Poke holes in theories or justifications that don't make sense.  If you see a test that doesn't account for uncontrolled variables in the results, point it out.  If a conversation centers on an idea that everyone accepts but no one has proved, ask why.
Next up, it might be worth looking at TV, and the relationship between digital/social and offline, in order to challenge some of the preconceived notions.

Monday, January 23, 2012

We Need to Make Digital Measurement Easier


[Editor’s Note:  Sorry for the long layoff, I am going to be better about posting starting today]
To be clear, I don’t mean that we need to make it easier for us digital marketers, but that we need to make it easier for the brand representatives that we have to report to.
When I go through plans and recaps for marketing programs, the problem becomes very clear.  People who have been dealing with traditional marketers for a long time expect just a few things from TV, print, and radio: Reach and Frequency.  These are estimates, and they are provided by the people selling the media, so they don’t need to be calculated by those buying it.
It’s simple, and clean, though it does nothing to tell you about the effectiveness of the channel after the fact.  How many people saw the ad, and how often.  Move on.
Sometimes you will see a brand lift study built into a buy, which basically just consists of polling some consumers to see how the ad made them feel (with varying degrees of scientific rigour).
Then we get to digital, and suddenly the performance metrics increase exponentially.  The breadth and depth of data that we have available to us in the digital space is both a blessing and a curse in that sense.
First of all, we subdivide “digital” into myriad channels of increasing specificity.  There is display, search, social, in-text, and more.  Each of these sub-channels has multiple ad unit types, and in turn, each ad type has multiple statistics that can be tracked.
(For instance, display ads can be static or interactive units, and static units can be further broken down by size into standard banners, skyscrapers, etc.  So you have reach in terms of unique users, then interaction rates, time spent in the ad unit for rich media, click-through rate, video plays in unit, and more.)
You can measure attributes of the ads themselves, like click through rate, impressions, cost per impression/click, etc., and you can also measure on-site actions and behavior, like conversions, bounce rate, time on site, and so on.
We haven’t even talked about the social metrics like Facebook likes, tweets, +1s, ‘conversations,’ and additional followers/friends.
The upside of all of this is that the data gives us visibility and optimization options that traditional marketers can only dream of.  The downside is that, unlike traditional offline media channels, we are actually held to performance standards, and moreover, the people we report to get lost in all of these metrics.
Traditional media channels don’t provide brands with much in the way of data or measurement options, and maybe the answer is that they should be forced to come up with better ways to justify their value.  More likely however, we as digital marketers need to find ways to simplify our reporting.
This may mean actually giving brands less raw data, and it’s possible that Pandora’s box has been opened and it is too late.  However, I think the only workable outcome is the creation of a weighted composite number, based on an equation that takes into account a variety of metrics across digital channels, pegged to an index.  The million-dollar problem is figuring out how to do it, but you can bet that I will be working on it, as I am sure others are.
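As a sketch of what such a composite number might look like: a weighted average of each metric’s ratio to a baseline period, pegged so that the baseline scores 100.  Every weight, metric, and value below is invented for illustration; the real problem is choosing the weights:

```python
def composite_index(metrics, baseline, weights):
    """Weighted average of metric ratios vs. a baseline period, x100."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1
    return 100 * sum(weights[k] * metrics[k] / baseline[k] for k in weights)

weights = {"ctr": 0.3, "conv_rate": 0.5, "social_actions": 0.2}
baseline = {"ctr": 0.020, "conv_rate": 0.030, "social_actions": 500}
this_month = {"ctr": 0.022, "conv_rate": 0.033, "social_actions": 450}

print(f"composite: {composite_index(this_month, baseline, weights):.1f}")
```

A brand representative then tracks one number against 100 rather than a dozen raw metrics, while the detail stays available underneath.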
Expect a 'part two' of this entry in the future.

Friday, November 11, 2011

Google Is Not Shaking Things Up as Much as We Thought

::NEWS FLASH::

I just received this emergency telegram from my high-level sources at Google:

"Matthew

AdWords blog was unclear -(stop)-
Rumors abound in POVs and the blogosphere about reduced ad inventory -(stop)- 
We plan to show the same number of 1st page ads as before -(stop)-
If 6 ads would appear in the right-side rail, they will not move to the bottom -(stop)-
CPCs should therefore not increase -(stop)-

Please stop calling us every ten minutes -(stop)-
Seriously stop -(stop)-

-Deep Throat"

So my previous take on this, and a number of POVs that agencies have already made public and sent to clients, appear to have been a little hasty.

The Google employee I spoke to seemed unaware of the industry's first take on this change, but seemed to understand how this could be misconstrued given the brevity of the announcement on the AdWords blog, as well as their general lack of transparency on exactly how and why ads would be shifting around.

Scooped!

Thursday, November 10, 2011

Crowd-source This Contest! Update 2

So far, we have 3 comments/entries.  Here are some highlights:

1.) Geotargeting (but to where?)
2.) 3 pieces of creative per ad group
3.) Small kw list divided into ad groups of no more than 3-10 kws
4.) Exact match only
5.) Lower max bids as history creates CTR/quality score
6.) Bid to position 4
7.) Social integration
8.) Viral "real world" marketing
9.) Writing on $100 bill with a sharpie (this might be illegal)
10.) Gravy trains.

This is all good stuff, but we are still 7 entries away from anyone winning anything.  Do you disagree with these people?  Wouldn't you do it differently because you are smarter than them? 

Prove it.  Leave a comment with how you would advertise this blog with just $100 worth of AdWords and a ton of moxie, and you could win the Amazon gift certificate or whatever you people vote for.

Oh yeah, vote in the poll for what the prize should be.  As mentioned in the original contest post, value will increase with the number of entries.

Wednesday, November 9, 2011

Google is Shaking Things Up (Again)

What's missing in this picture? (Hint: paid ads on the right side)

The big news for advertisers to come from Google lately (and yes, I am ignoring the organic algorithm change for right now) is that they are changing where paid ads are going to appear on the search results page.  For all of the claims that Google is a power-hungry, money grubbing monster, I have to say that as an advertiser, I don’t generally see that being the case.  In fact, I am normally frustrated by things they do in order to improve the user experience because they run contrary to what I would like as a marketer.  This latest decision, to move paid ads from the right-hand rail to the bottom of the page is no different.
The first issue that we have to consider is user behavior.  Thanks to eye-tracking technology (they plop users down in a chair, tell them to navigate the search page, and have a camera mounted on top of the monitor to track their eye movement as they look around the page; not clear if they hold the lids open Clockwork Orange-style), studies have been done telling us how people actually view the SERP (search engine results page).  Combined with click data on the various links, we have a pretty good idea of how users see and interact with the search engine results, and as Google has changed the layout and result types that show up, user behavior has changed as well:
Basically, what we have seen in the past is that most people tend to view the page top-to-bottom, and only occasionally do their eyes wander over to the margin, unless it contains a relevant part of the answer to their query (like a map or a video).  Thus, Google feels that the paid ads that don’t make it into the prime top-of-page slots will actually be better served at the end of the organic results, given the natural progression of users through the listings.
I know that for organic results there is some basis for this, as listings in positions 9-10 often actually have a better click through rate than those in positions 7-8.  The idea is that we tend to gloss over the middle results, paying the most attention at the beginning, and then at the end when we are forced to decide to either click on something on the first page, or make the (increasingly rare) decision to see if page 2 of the results will have something more to our liking.
Google thinks that CTR will increase at the bottom of the page compared to the right rail, but we will see.
The other issue is a purely mathematical one for advertisers.  Where you once had 6-8 paid search results on a page, it will not be uncommon to only have 4 results, which is what I tend to see now when I get these search result layouts.  Two on top, two on bottom.  Now, if there are fewer first-page ads available, then simple supply & demand tells us that competition, and thus cost per click, is going to go up. 
Google is not saying that 2nd page ads are going to see improved performance, so this is strictly a cut in inventory.  Everyone wants to be on the first page, so get ready to see your CPCs go up (and presumably CPA as well, all other things being equal). 
Now, I have already heard the argument that this is Google’s way to make more money from ad revenue.  Higher CPC = more fees for Google, right?  Probably not.  Think about how much CPCs would have to increase in order to make up for the revenue lost by having total first-page ad inventory drop by as much as 50%.  If anything, Google is most likely costing itself money, because people who find that their bids have moved their ads to the second page are just as likely to pause the keywords as they are to increase what they will pay for them.
I don’t love this change as an advertiser, but it is once again going to be hard to argue that it isn’t better for the user.

Friday, September 30, 2011

Beware Broad Match + Dynamic Keyword Insertion

We all know by now that dynamic keyword insertion can be a great tool.  Users are often more likely to click on an ad if it contains the wording that they used in their query, because it suggests a high level of relevance (of course, that relevance only goes as deep as the ad headline, not to the landing page). Recently however, I got a reminder that the technology on its own can lead to some unintended consequences if not carefully monitored.

Shortly after a client released a new promotional offer which we supported in paid search, an ad from a competitor appeared offering the same product.  Of course, this led to all kinds of fears about corporate sabotage and bid wars, especially once we discovered that the competitor had no such product on its site anywhere.

To make a long story short, the query was for "category term + offer item" and the competitor was bidding on "category term" on broad match with dynamic keyword insertion, leading to an ad headline that made it look like an ad for a specific promotional product (this is hard when I can't give any actual information out).

The result is that right now that competitor is providing a terrible user experience, driving users to their site who will be frustrated by what essentially amounts to false advertising/bait and switch tactics.  Even worse, they are paying money to create a bunch of irritated consumers.  Worst of all, they apparently haven't realised that they are doing it, and it has been two weeks.

Example:  You have a company that makes gun racks.  You come up with a boss new gun rack for mounting in that little back window of a pick up truck, so you run on the keyword "pick-up gun rack" with an ad that talks about your product, and leads to your website.  Perfect.

Meanwhile, Chevrolet is bidding on the keyword "pick-up" on broad match, and using dynamic keyword insertion without any negatives.  So when someone types "pick-up gun rack" into Google, they see an ad like:

Pick-Up Gun Rack
Find Out About Big Savings on the 2012 Chevy Silverado Today!
www.Chevrolet.com/Silverado

Users will see this, go to the website, and then realize that they have been duped!  Chevy didn't start making gun racks, they just forgot to have negative keywords around their broad category terms.  They aren't tricking users on purpose, they just didn't consider the risk of dynamic keyword insertion.

As paid search marketers, we need to consider all possible outcomes when using any automated technology like dynamic keyword insertion.  It's easy to become complacent about things like this, but this episode highlights how often we need to pull search query reports to scan for trending queries that need negatives added around them.  I can say for sure that at least one company hasn't done it for a couple of weeks, and it is costing them money and goodwill.
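As an illustration of that query-report scan, here is a rough sketch that flags queries matched by a broad keyword but containing none of our product vocabulary, as candidates for negative keywords.  The queries, product terms, and the `negative_candidates` helper are all made up:

```python
broad_keyword = "pick-up"
product_terms = {"truck", "silverado", "lease", "financing"}  # invented

search_queries = [
    "pick-up truck lease deals",
    "pick-up gun rack",          # matched broadly, irrelevant to us
    "pick-up silverado financing",
    "pick-up bed liner",
]

def negative_candidates(queries, product_terms):
    """Queries with no overlap with our product vocabulary."""
    flagged = []
    for q in queries:
        words = set(q.lower().split()) - {broad_keyword}
        if not words & product_terms:
            flagged.append(q)
    return flagged

for q in negative_candidates(search_queries, product_terms):
    print("consider negatives for:", q)
```

A real scan would work from the engine's search query report and weight by impressions and spend, but even this crude filter would have caught the gun-rack queries within a day instead of two weeks.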

I will be more careful in the future.

Tuesday, August 30, 2011

Blogging is for Self-Important People

It turns out that when I was creating content on PPC advertising and other aspects of search engine marketing (SEM), having it viewed by a handful of people on the internal message board at work was somehow unsatisfying.

Instead of an audience of 4, I want an audience of 104.

This is part sounding board, part professional observation, and all UGC (get ready for a lot of lingo).  Largely though, this is a test, because I am curious about how the integration of social media works from the ground up, and starting an entirely new entity is the only way to test that.  Follow the name "SEMiotic5" on Twitter to get basically the same content, but shorter! (Is that desirable?  Let's find out!).

Working in paid search marketing, I need data, and I need feedback.  Leave comments, interact with the site, ask questions; if there is content you would like to see, tell me about it.  Tell me about yourself, your habits, and how you engage with media and advertising, not just in search but across the internet.  What sort of content matters to you?  Do you respond to polls on blogs?  Would you like polls on this blog?  Would you like a poll about responding to polls on this blog?

That said, I realize that this has the potential to be very boring, so I will make sure to spice it up with posts that just barely have any relation to marketing every once in a while.