The Problems with "Attribution Studies"
26 November 2019 • tvsquared
Meg Coyle, MarCom Director

During a panel session at a recent industry event, the term “attribution study” came up several times. It wasn’t mentioned in the context of an industry-wide study – à la the informative ones that groups like the VAB and ThinkBox release. It was brought up to describe the basis of TV attribution solutions from several U.S. vendors. Cut to me in the crowd looking like the human version of the hand-to-forehead emoji.

Attribution studies are hand-crafted analyses of a ton of data, typically presented via PPT or written report. As you can imagine, an actual software platform to turn that data into insights quickly is nowhere in sight. Not to mince words, they are problematic and poor representations of the overall TV attribution market, which is exploding with innovation and adoption across the advertising ecosystem.

These are my two biggest issues with attribution studies:

  • They are not scalable: Anything built manually is inherently unscalable, making attribution studies a poor choice for media owners, network agencies and large brands, where scale is everything.
  • They are slow: The time to insights with attribution studies is nothing to write home about. The TV industry has moved on from waiting months at a time to gauge campaign performance. While attribution studies are not quite that slow, they are by no means fast, and that’s a little too close to the old way of doing things for my liking. Anything slower than always-on is using old data, and basing campaign planning on out-of-date information is not the way to go.

So, what’s the answer? It’s simple, really. The most accurate TV attribution has to be an always-on reflection of 100% of the market – and thousands of advertisers worldwide are doing this today. Anything less and you’ll fall behind.