
The Dirty Little Secret of Marketing Mix Models: Not All Models Are Objectively True

Assess How Data-Driven Your Recommendations Really Are  


By Jon Lorenzini, Vice President of Marketing Science and Product Marketing


[Image: red, yellow, and green light indicators. Caption: How strong is the data feeding your model?]

Recently, a CMO colleague shared a story about a marketing mix model. The data science team presented her with a model and spend recommendations, but intuitively, she felt some numbers just didn’t make sense. Not to worry, the lead modeler assured her – and proceeded to make changes to fit the CMO’s worldview.

Problem solved? Not quite – more like a problem revealed. What the CMO encountered was what I call the Marketing Mix Model (MMM)’s “Dirty Little Secret.” Models are often showcased as purely scientific algorithms – but behind the curtain (or, in the CMO’s case, in front of it), they can be the product of subjective judgment calls.

Ultimately, what’s driving this Dirty Little Secret tends to come down to one problem: insufficient high-quality data to produce a clear-cut signal. The result is low confidence in the output, and models that can be spun and tweaked any which way.


Weak Signals: Do Your Marketing Mix Models Know Enough?

Should you be worried about low-quality data—resulting in models that just don’t “know enough”? If the channels you’re studying or data sources you’re working with fit the descriptions below, there might be cause for concern. 


  • New channels to you. It takes time to get enough data to evaluate the impact. If you’ve only been running a search campaign for a few days or investing in TV for one week, there’s likely not enough data to work with. 

  • New channels on the market. Setting aside experimental dollars for the latest shiny object can be a smart practice. But with a short track record and few reliable benchmarks to start from, the ROAS data you get from hot new channels can be unreliable, which is why the newest channels often don’t lend themselves to MMM.

  • A channel with flat or even spend. One marketer I know spent $300 a day on Snapchat for two months and then upped it to $375 daily for the next month. I challenge any modeler to glean meaningful insights from that investment. MMM is an analysis of the impact of varying spend levels, which means you need to see real spend variability in order to produce a solid model (a quick variability check appears in the sketch after this list).

  • A channel highly correlated with other channels. TV ads often drive search activity, and OOH can lead to mobile visits. If you’re looking at these or other similarly correlated channels in a vacuum, you may only be getting part of the picture in terms of investment value (the sketch after this list includes a simple correlation check).

  • Campaign audiences are too narrow. To be sure, narrowly targeted campaigns can be highly efficient. But at the same time, tiny audiences can hit ad saturation quickly: once you’ve reached, say, most of the sourdough fanatics in Oakland, customers simply stop engaging. If your channels are getting poor data signals, make sure you’ve cast your audiences broadly enough to sustain continuous engagement and data input.

  • Data from low-traffic seasons. Travel ads may be less likely to work in Q1, and retail ads may face a tougher sell in March. If you’re only measuring ad results in the off-season, your results may be a product of seasonality, not channel impact.
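To make those last two pitfalls easier to spot in practice, here is a minimal diagnostic sketch. It assumes daily spend sits in a pandas DataFrame with one column per channel; the function names and toy numbers are illustrative, not part of LiftLab’s product or any particular MMM.

```python
# A minimal, illustrative sketch (not LiftLab's methodology): two quick
# health checks on daily spend data before trusting an MMM's output.
import pandas as pd

def spend_variability(spend: pd.DataFrame) -> pd.Series:
    """Coefficient of variation (std / mean) per channel.
    Very low values, like the $300-a-day Snapchat example, mean the
    model has almost no spend variation to learn from."""
    return spend.std() / spend.mean()

def channel_correlation(spend: pd.DataFrame) -> pd.DataFrame:
    """Pairwise correlation between channel spend series. Highly
    correlated pairs (e.g., TV and search) are hard for a model to
    disentangle when each channel is analyzed in a vacuum."""
    return spend.corr()

# Toy example: one nearly flat channel, two channels moving in lockstep.
dates = pd.date_range("2024-01-01", periods=90, freq="D")
spend = pd.DataFrame(
    {
        "snapchat": [300.0] * 60 + [375.0] * 30,       # nearly flat spend
        "tv": [1000.0 + 30 * i for i in range(90)],
        "search": [500.0 + 25 * i for i in range(90)],
    },
    index=dates,
)
print(spend_variability(spend).round(3))    # snapchat is lowest by far
print(channel_correlation(spend).round(2))  # tv and search near 1.0
```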


The Path to Smarter, More Agile MMM: Improve the Data Signal through Experimentation 

If the conditions above describe the data sources you’re working with, your MMM isn’t a lost cause – far from it. Your best step forward is to improve the data signal by conducting experiments and digging deeper if problems persist. 


On the experimentation side, you’ll want to conduct multiple types of testing: causal experimentation that isolates direct cause and effect, and correlative experimentation that tracks indirect associations.


  • Causal Experimentation: Matched Markets. For causal experimentation, we highly recommend Matched Market Testing, which varies media mixes across markets that are demographically similar but geographically far apart. Among other advantages, geographic distance helps ensure that each market is truly being served a separate media mix (a toy pairing routine appears in the first sketch after this list).

  • Correlative Experimentation: Varying Spend. For correlative data, we recommend varying spend and watching what happens. How are results impacted when you significantly increase spend in a given channel? What if you lower spend, or turn off that channel entirely? (A simple spend-tier readout appears in the second sketch after this list.)
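As a concrete illustration of matched-market pairing, here is a minimal sketch. The column names, the 500 km separation threshold, and the greedy pairing are all assumptions made for illustration; they are not LiftLab’s matching method.

```python
# A minimal sketch of matched-market pairing: demographically similar,
# geographically distant. Illustrative assumptions throughout.
import numpy as np
import pandas as pd

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp, dl = p2 - p1, np.radians(lon2 - lon1)
    h = np.sin(dp / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
    return 2 * 6371.0 * np.arcsin(np.sqrt(h))

def match_markets(markets: pd.DataFrame, features: list, min_km: float = 500.0):
    """Greedily pair markets that are close in (standardized) demographic
    space but at least `min_km` apart, so each half of the pair can be
    served a genuinely separate media mix. Assumes `markets` has the
    demographic columns in `features` plus "lat" and "lon" in degrees."""
    z = (markets[features] - markets[features].mean()) / markets[features].std()
    pairs, used = [], set()
    for a in markets.index:
        if a in used:
            continue
        best, best_d = None, np.inf
        for b in markets.index:
            if b == a or b in used:
                continue
            km = haversine_km(markets.at[a, "lat"], markets.at[a, "lon"],
                              markets.at[b, "lat"], markets.at[b, "lon"])
            if km < min_km:
                continue  # too close: media could bleed across markets
            d = np.linalg.norm(z.loc[a] - z.loc[b])  # demographic distance
            if d < best_d:
                best, best_d = b, d
        if best is not None:
            pairs.append((a, best))
            used.update({a, best})
    return pairs
```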
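And for the correlative side, a sketch of a simple spend-tier readout. It assumes a DataFrame with daily “spend” and “conversions” columns for one channel across a test that stepped spend up, down, and off; again, the names and logic are illustrative.

```python
# A minimal sketch of a correlative spend-variation readout.
import pandas as pd

def spend_tier_readout(df: pd.DataFrame, n_tiers: int = 3) -> pd.DataFrame:
    """Bucket days into spend tiers (zero-spend days get their own "off"
    tier) and compare average conversions per tier. A channel with real
    impact should show conversions moving with the tiers."""
    out = df.copy()
    active = out["spend"] > 0
    out["tier"] = "off"
    out.loc[active, "tier"] = pd.cut(
        out.loc[active, "spend"], bins=n_tiers,
        labels=[f"tier_{i + 1}" for i in range(n_tiers)],
    )
    return out.groupby("tier")["conversions"].agg(["mean", "count"])
```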

If you’ve run these experiments and worked them into your new models, and the results still feel off, you may want to dig deeper into data quality. Here are some key questions to ask:


  • How long has the channel been running? As mentioned above, the longer you’ve run ads on a channel – and the longer the channel has been around – the more data you’ll have to learn from.

  • How large is your investment? Large investments can deliver more results and more data overall. They also make a channel easier to parse out than comparatively small “rounding error” investments that can get lost in the noise.

  • How consistent are your campaigns? Activating too many distinct campaigns and tactics within a channel can produce chaotic or conflicting data—making channel measurement a moving target.

  • Is seasonality at play? If you’re measuring spend during peak or off seasons, it can be hard to differentiate the signals: what’s the impact of the ads, and what’s the background influence of the calendar? (A quick decomposition check follows this list.)
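On that last question, here is a minimal sketch of a seasonality sanity check, using statsmodels’ seasonal_decompose on toy daily sales data. The series and the share metric are invented for illustration; this is not a LiftLab feature.

```python
# A minimal sketch of a seasonality sanity check. Decomposes a toy daily
# sales series into trend, yearly seasonality, and residual.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Toy data: upward trend + strong yearly cycle + noise.
dates = pd.date_range("2022-01-01", periods=3 * 365, freq="D")
t = np.arange(len(dates))
rng = np.random.default_rng(0)
sales = pd.Series(
    1000 + 0.5 * t + 200 * np.sin(2 * np.pi * t / 365)
    + rng.normal(0, 30, len(t)),
    index=dates,
)

# If the seasonal component is large relative to the swings you attribute
# to media, your measured "channel impact" may really be the calendar.
parts = seasonal_decompose(sales, model="additive", period=365)
seasonal_share = parts.seasonal.abs().mean() / sales.mean()
print(f"Seasonal swing vs. mean sales: {seasonal_share:.1%}")
```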

Answering these questions will help you set up your campaigns for better experiments, more meaningful data inputs, and ultimately for authoritative models you can confidently trust. You’re ensuring your marketing mix is driven by the surest source of impactful marketing insight: science.


Learn if Your Model Gets an A+

This month, LiftLab formally launched the Channel Signal Strength Score, a new product feature that offers a “report card” highlighting the strength and health of the signals in your LiftLab Agile Mix Model (AMM). With a Channel Signal Strength Score in play, you’ll know exactly when to run more experiments to enhance your LiftLab AMM, or when to dive deeper through the paths laid out above.


For more information, follow LiftLab. Or, to get a smarter modeling conversation started right away, schedule time with our team now.

