Friday, April 8, 2016

Own Your Data (Or at least house a copy)

There is a common trope among data analysts that 80% of your time is spent collecting, cleaning, and organizing your data, and just 20% is spent on analyzing or modeling. The exact numbers may vary, but if you work with data much you have probably heard something like that, and found it to be more or less true, if exaggerated. As society has moved to collecting exponentially more information, we have of course seen a proliferation in the types, formats, and structures of that information, and thus in the technologies we use to store it. Moreover, very often you want to build a model that incorporates data someone else is holding, stored according to their own methods, which may or may not mesh well with your own.

For something as seemingly simple as a marketing channel attribution model, you might be starting with a flat file containing upwards of 50-100 variables, pulled from 10+ sources. I recently went through this process to update two years' worth of weekly data, and it no joke took days and days of data prep just to get a CSV that could be imported into R or SAS for the actual analysis and modeling steps. Facebook, Twitter, AdWords, Bing, LinkedIn, YouTube, Google Analytics, a whole host of display providers... the list goes on and on. All of them using different date formats, calling similar or identical variables by different names, limiting data exports to 30 or 90 days at a time, etc.
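
For the curious, the wrangling itself is nothing exotic, just endless. Here is a rough sketch in R of what harmonizing two of those exports looks like; the file names and column names are made up for illustration, since every platform labels things differently:

```r
# A minimal sketch of the per-source cleanup; file names and column names
# are invented stand-ins, not any platform's real export format.
library(readr)
library(dplyr)
library(lubridate)

# One export uses month/day/year dates and its own column labels...
facebook <- read_csv("facebook_2014_2016.csv") %>%
  transmute(week    = floor_date(mdy(report_date), unit = "week"),
            channel = "facebook",
            spend   = amount_spent,
            clicks  = link_clicks)

# ...another uses year-month-day and different names for the same ideas
adwords <- read_csv("adwords_2014_2016.csv") %>%
  transmute(week    = floor_date(ymd(Day), unit = "week"),
            channel = "adwords",
            spend   = Cost,
            clicks  = Clicks)

# Stack the harmonized sources into one weekly table for modeling
channel_data <- bind_rows(facebook, adwords) %>%
  group_by(week, channel) %>%
  summarise(spend = sum(spend), clicks = sum(clicks))
```

Now multiply that by ten-plus sources, in 30- or 90-day chunks, and the "days and days" part makes more sense.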

Obviously, it's worth the effort for a big one-time study, but what about actually building a model for production? What about wanting to update your data set periodically and make sure the coefficients haven't changed too much? When dealing with in-house data (customer behavior, revenue forecasting, lead scoring, etc.), we often get spoiled by our databases, because we can just bang out a SQL query to return whatever information we want, in whatever shape we want. Plus, most tools like Tableau or R will plug right into a database, so you don't even have to transfer files manually.
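
For comparison, here is roughly what that spoiled in-house experience looks like from R; the connection details, table, and query are placeholders rather than our actual setup:

```r
# Roughly what "just bang out a SQL query" looks like from R.
# Host, database, credentials, and table names are placeholders.
library(DBI)
library(RPostgres)

con <- dbConnect(RPostgres::Postgres(),
                 host     = "warehouse.example.com",
                 dbname   = "analytics",
                 user     = Sys.getenv("DB_USER"),
                 password = Sys.getenv("DB_PASS"))

# Pull weekly revenue in exactly the shape the model wants
weekly_revenue <- dbGetQuery(con, "
  SELECT date_trunc('week', order_date) AS week,
         SUM(revenue)                   AS revenue
  FROM   orders
  GROUP  BY 1
  ORDER  BY 1")

dbDisconnect(con)
```

No exports, no date gymnastics, no stitching files together. That is the experience I want for the external data too.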

At the end of the day, it quickly became apparent to me that having elements of our data, from CRM to Social to Advertising, live in an environment that I can't query or code against is just not compatible with solving the kinds of problems we want to solve. So of course, the next call I made was to our superstar data pipeline architect, an all-around genius who was building services for the Dev team running off our many AWS instances. I asked him to start thinking about how we should implement a data warehouse and connections to all of these sources if I could hook up the APIs, and of course he said he had not only already thought of it, but started building it. Turns out, he had a Redshift database up and was running Hadoop MapReduce jobs to populate it from our internal MongoDB!
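
I won't pretend to reproduce his MapReduce jobs here, but for a sense of what the tail end of a pipeline like that tends to look like, a Redshift load usually boils down to a COPY from staged files. Everything in this sketch (cluster, bucket, table, IAM role) is invented for illustration, and the target table is assumed to already exist:

```r
# Not his actual pipeline -- just the kind of COPY that lands staged JSON
# exports in Redshift. Cluster, bucket, table, and role are all made up.
library(DBI)
library(RPostgres)   # Redshift speaks the Postgres wire protocol

rs <- dbConnect(RPostgres::Postgres(),
                host     = "ourcluster.redshift.amazonaws.com",
                port     = 5439,
                dbname   = "warehouse",
                user     = Sys.getenv("RS_USER"),
                password = Sys.getenv("RS_PASS"))

# Load a day's worth of staged MongoDB exports into a staging table
dbExecute(rs, "
  COPY staging.mongo_events
  FROM 's3://our-data-lake/mongo-exports/2016-04-08/'
  CREDENTIALS 'aws_iam_role=arn:aws:iam::123456789012:role/RedshiftCopy'
  FORMAT AS JSON 'auto'")

dbDisconnect(rs)
```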

So with that box checked, I started listing out the API calls we would want and all of the fields we should pull in, and figuring out the hookups to the third-party access points. Of course, since we have an agency partner for a lot of our paid media, that became the biggest remaining roadblock to my data heaven. I scheduled a call with the rep in charge of their display & programmatic trade desk unit, just so we could chat about the best way to hook up and siphon off all of our daily ad traffic data from their proprietary system. After some back and forth, we finally arrived at a mostly satisfying strategy (with a few gaps due to how they calculate costs and the risk of exposing other clients' data to us), but here is the kicker:

As we were trying to figure this out, he said that we were the first client to even ask about this.

I was so worried about being late to the game that it didn't even occur to me that we would have to blaze this trail for them.

The takeaway? In an age of virtually limitless cheap cloud storage, and DevOps tools to automate API calls and database jobs, there is no reason data analysts shouldn't have consistent access to a large, refreshing data lake (pun fully intended). The old model created a problem where we spend too much time gathering and pre-processing data, but the same technological advances that threaten to compound the problem can also solve it. JSON, SQL tables, unstructured text, and every other kind of data can live together, extracted, blended, and loaded from HDFS into a temporary cloud instance as needed.
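
To make that concrete, here is the shape of the kind of automated job I have in mind: hit an API, flatten the JSON, and append it to the warehouse on a schedule. The endpoint, fields, and table names are hypothetical stand-ins, not any real ad platform's API:

```r
# Sketch of a scheduled pull: API -> data frame -> warehouse table.
# The endpoint, query parameters, and table are hypothetical examples.
library(httr)
library(jsonlite)
library(DBI)
library(RPostgres)

# Pull last week's metrics from a (made-up) reporting endpoint
resp <- GET("https://api.example-adplatform.com/v1/report",
            query = list(start_date = "2016-04-01",
                         end_date   = "2016-04-07",
                         metrics    = "impressions,clicks,spend"),
            add_headers(Authorization = paste("Bearer", Sys.getenv("AD_API_TOKEN"))))
stop_for_status(resp)

rows <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))$data

# Append the new rows to the warehouse
con <- dbConnect(RPostgres::Postgres(),
                 host = "ourcluster.redshift.amazonaws.com", port = 5439,
                 dbname = "warehouse",
                 user = Sys.getenv("RS_USER"), password = Sys.getenv("RS_PASS"))
dbWriteTable(con, "ad_platform_daily", rows, append = TRUE)
dbDisconnect(con)
```

Drop something like that into a scheduled job and the "refreshing" part of the data lake mostly takes care of itself.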

The old 80/20 time split existed, and still exists, because doing the right thing is harder and takes more up-front work, but I'm pretty excited to take this journey and see how much time it saves over the long run.

(Famous last words before a 6 month project that ultimately fails to deliver on expectations; hope springs eternal)

What do you think? Have you tried to pull outside data into your own warehouse structure? Already solved this issue, or run into problems along the way? Share your experience in the comments!
