Everyone Is Talking About MMM. That Is the Problem.
Marketing Mix Modeling is having a moment. Open any marketing newsletter, attend any growth conference, and you will hear the same thing: MMM is back, it is affordable now, it is the answer to the post-cookie measurement problem.
Here is what those conversations keep leaving out. MMM alone is not the answer. It never was. Brands that invest in MMM as their single measurement solution are trading one blind spot for a slightly different one.
The real answer is triangulation. A measurement stack that uses five methods together, each designed to answer a different question, each covering what the others cannot see. No single tool does this on its own. The brands getting measurement right know exactly which method to reach for at which moment, and they have the platform to run all five from the same data foundation.
Here is why that matters, and what it looks like in practice.
Why MMM alone leaves you with blind spots
MMM is a macro tool. It looks at historical spend data over time and estimates how much each channel contributed to revenue. The modern open-source frameworks, Google's Meridian and Meta's Robyn, have brought the cost down so that mid-market brands can now run credible models. That is genuinely good news.
But MMM has hard limits that the hype cycle glosses over:
- It cannot optimize at the placement level. MMM will tell you that TV is working. It will not tell you that your primetime NBC buy outperformed your late-night cable rotation by three to one. It cannot tell you which podcast show drove results versus which one generated noise. It sees channels, not individual placements.
- It needs your spend to vary over time. If you run TV and social at the same budget every single week, the model cannot separate their effects. It needs on and off patterns to work. This is a real operational constraint that most media plans do not naturally create, and the sketch after this list shows why.
- It was built for scheduled, concentrated media. A TV spot airs at 9pm on Tuesday and the whole market sees it at once. That is what MMM was designed to detect. Programmatic ads dripped across millions of auctions around the clock, podcast impressions scattered across a six-month back catalog, algorithmically delivered social content: none of these has a concentrated moment of exposure for MMM to pick up cleanly.
- It measures correlation, not cause. MMM can show you that spend and revenue moved together. It cannot prove that the spend caused the revenue. Without testing to validate it, you are working with a sophisticated hypothesis.
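To make the flighting point concrete, here is a minimal sketch, not a real MMM: a toy regression with invented spend, revenue, and effect numbers. When both channels run at a flat weekly budget, the design matrix is rank-deficient and the per-channel effects cannot be recovered; once spend varies, they can.

```python
# A toy illustration (not a real MMM) of why flat spend defeats channel separation.
# All spend, revenue, and effect numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
weeks = 104

def fit(tv_spend, social_spend):
    """OLS of simulated revenue on [intercept, tv, social] with known true effects."""
    revenue = 50_000 + 3.0 * tv_spend + 5.0 * social_spend + rng.normal(0, 2_000, weeks)
    X = np.column_stack([np.ones(weeks), tv_spend, social_spend])
    coef, _, rank, _ = np.linalg.lstsq(X, revenue, rcond=None)
    return coef, rank

# Case 1: both channels at the same budget every week. The spend columns are
# perfectly collinear with the intercept, so the design matrix has rank 1
# instead of 3 and the per-channel effects are unidentifiable.
flat_coef, flat_rank = fit(np.full(weeks, 10_000.0), np.full(weeks, 10_000.0))

# Case 2: spend varies week to week (on/off flighting). Full rank, and the
# fitted coefficients land near the true values of 3.0 and 5.0.
varied_coef, varied_rank = fit(
    rng.uniform(0, 20_000, weeks), rng.uniform(0, 20_000, weeks)
)

print("flat spend   rank:", flat_rank, "coefficients:", np.round(flat_coef, 2))
print("varied spend rank:", varied_rank, "coefficients:", np.round(varied_coef, 2))
```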
The five methods, and what each one is actually for
A complete measurement stack has five components. They are not competing methods. Each one answers a question the others cannot.
- MMM — Strategic budget allocation. Best for annual and quarterly decisions about how much to spend across channels. Also the only method that can pick up signals from channels where people never click anything, like linear TV, radio, and billboards. Honest limit: Cannot optimize within a channel. Cannot prove causality on its own.
- MTA / CAPI — Daily campaign performance. Multi-touch attribution tracks the individual user journey across touchpoints. Server-side implementations like Meta's Conversions API (CAPI) and Google's Enhanced Conversions have recovered a lot of the signal lost after Apple's iOS privacy changes. Best for paid search, email, and retargeting: channels where users click something you can track. Honest limit: Over-credits the platform running the attribution. Cannot answer whether the sale was incremental.
- Geo holdout testing — Proving incrementality for contained media. Splits comparable markets into exposed and holdout groups, runs media in the exposed markets only, and measures the true revenue difference (a minimal sketch of that comparison follows this list). This is the closest thing to a controlled experiment in marketing. It answers the question MMM cannot: would those customers have bought anyway? Honest limit: Defeated by social and programmatic advertising, where the algorithm delivers across cookie pools that do not respect geographic lines. Spillover and shared audiences contaminate the test.
- Spike lift analysis — TV and streaming placement optimization. Matches individual ad airings to branded search spikes in the 15 to 20 minute window right after the spot airs (a sketch of that windowing logic also follows this list). When someone sees your ad and immediately searches for your brand, that search is the cleanest signal you have that the creative landed. Spike lift tells you which programs, which time slots, and which networks drive the strongest response, at a granularity MMM will never reach. CoreMedia Systems, now part of Simpli.fi, is the specialist tool here, providing direct response analytics and spot-level attribution for linear TV and CTV. Honest limit: Only works when there is a known, specific moment of exposure. Programmatic, social, and podcast DAI (dynamic ad insertion) do not have that, so spike lift does not apply.
- HDYHAU — What the customer actually remembers. A post-purchase survey asking "How did you hear about us?" This is underrated by performance marketers, but it captures something no model can: the consumer's own memory of what moved them to buy. It is the most reliable signal available for channels that leave no trackable footprint: podcast host reads, influencer content, word of mouth, PR. These channels (sometimes called "dark channels" because they are invisible to click-based tracking) drive real behavior but generate nothing for your attribution platform to detect. Honest limit: Only motivated customers complete surveys. Response rates vary. Needs to run consistently over time to be statistically useful. Fairing is the specialist platform here, with native Shopify integration and compatibility with the major attribution platforms including Rockerbox and Northbeam, making it easy to get HDYHAU data flowing into whichever measurement stack you are running.
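For the geo holdout bullet above, here is a minimal sketch of the core comparison, using invented market revenue figures and a hypothetical media budget. A real test also needs careful market matching and a significance check; this only shows the arithmetic.

```python
# A minimal sketch of the geo holdout arithmetic with invented market revenue
# and a hypothetical media budget. A real test also needs careful market
# matching and a significance check; this only shows the core comparison.
exposed = {"Denver": 212_000, "Austin": 188_000, "Columbus": 176_000}      # media on
holdout = {"Portland": 181_000, "Nashville": 179_000, "Raleigh": 168_000}  # media off

exposed_total = sum(exposed.values())
holdout_total = sum(holdout.values())

# Counterfactual: what the exposed markets would have done if they had behaved
# like the holdout group (holdout average scaled to the number of exposed markets).
counterfactual = holdout_total / len(holdout) * len(exposed)
incremental_revenue = exposed_total - counterfactual
lift_pct = incremental_revenue / counterfactual * 100

media_cost = 25_000  # hypothetical spend in the exposed markets during the test
print(f"Incremental revenue: ${incremental_revenue:,.0f} ({lift_pct:.1f}% lift)")
print(f"Incremental ROAS: {incremental_revenue / media_cost:.2f}")
```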
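And for the spike lift bullet, a sketch of the post-air windowing logic. The airings, networks, and minute-level search counts are all invented; a production system would also adjust for trend and overlapping spots.

```python
# A sketch of spike lift windowing: compare branded-search volume in the 20
# minutes after each airing to the 20 minutes before it. Airings, networks,
# and search counts are invented for illustration.
from datetime import datetime, timedelta
import random

random.seed(1)

# Hypothetical minute-level branded-search counts for one evening.
start = datetime(2025, 3, 4, 20, 0)
searches = {start + timedelta(minutes=m): random.randint(8, 14) for m in range(180)}

# Hypothetical airings log: (timestamp, network, program).
airings = [
    (datetime(2025, 3, 4, 20, 12), "NBC", "Primetime drama"),
    (datetime(2025, 3, 4, 21, 47), "Cable", "Late-night rerun"),
]

# Inject a visible post-air spike for the first airing so the example has a signal.
for m in range(2, 12):
    searches[airings[0][0] + timedelta(minutes=m)] += 25

def window_sum(center, start_min, end_min):
    """Total branded searches in [center + start_min, center + end_min) minutes."""
    return sum(searches.get(center + timedelta(minutes=m), 0)
               for m in range(start_min, end_min))

for aired_at, network, program in airings:
    baseline = window_sum(aired_at, -20, 0)   # 20 minutes before the spot
    response = window_sum(aired_at, 0, 20)    # 20 minutes after the spot
    print(f"{network:5s} {program:18s} baseline={baseline:3d} "
          f"post-air={response:3d} attributed searches={response - baseline:3d}")
```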
The channels where the measurement problem is most serious
When you look at where the money actually goes and map it against these five methods, the picture is uncomfortable. The fastest-growing, highest-spend channels are the hardest to measure independently.
- Paid social (Meta, TikTok, Snap; $83B in 2025, up 20% year over year): Geo holdout is defeated by algorithmic delivery and content that spreads across networks regardless of geography. Platform-native lift tests are methodologically sound, but the platform is grading its own homework. There is no reliable independent measurement method for prospecting on these channels. That is $83 billion largely operating on platform-reported numbers.
- Retail media (Amazon and Walmart; $54B, up 20%): Amazon shows you a closed loop: someone saw your sponsored product, searched, and bought. The attribution chain within the platform is real. What it cannot tell you is whether that customer would have found you through organic search anyway, or whether you are now paying for placement you used to earn on merit. Brands that run controlled tests, pausing spend on their strongest organic products for a defined period, consistently find they were cannibalizing themselves (the sketch after this list walks through that arithmetic). The platform dashboard looks like accountability. The incrementality question is untouched.
- Programmatic display and digital video ($85B combined): Geo holdout is defeated by national cookie pools. Continuous drip delivery defeats spike lift. MMM signal is weak because there is no on and off pattern. This is $85 billion with no clean independent path to understanding true impact.
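Here is a sketch of the retail media pause-test arithmetic referenced above, with hypothetical unit and spend figures. It assumes the two periods have comparable demand, which is exactly what a real test controls for.

```python
# A sketch of the pause-test arithmetic: compare total units (paid + organic)
# before and during a spend pause. All figures are hypothetical.
before = {"paid_units": 1_200, "organic_units": 2_800, "ad_spend": 30_000}
during_pause = {"paid_units": 0, "organic_units": 3_750, "ad_spend": 0}

total_before = before["paid_units"] + before["organic_units"]              # 4,000
total_paused = during_pause["paid_units"] + during_pause["organic_units"]  # 3,750

truly_incremental = total_before - total_paused           # units actually lost: 250
cannibalized = before["paid_units"] - truly_incremental   # organic demand the ads absorbed: 950

print(f"Units the ads appeared to drive: {before['paid_units']:,}")
print(f"Units actually lost when ads paused: {truly_incremental:,}")
print(f"Units that were cannibalized organic demand: {cannibalized:,}")
print(f"True cost per incremental unit: ${before['ad_spend'] / truly_incremental:,.0f}")
```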
Every channel mapped against all five methods
2025 US ad spend, measurement signal quality, and honest best method for each channel
| Channel | 2025 Spend | YoY | MMM | MTA / CAPI | Geo holdout | HDYHAU | Spike lift | Honest best method |
|---|---|---|---|---|---|---|---|---|
| Performance digital — search well measured, rest problematic | | | | | | | | |
| Paid search (Google / Bing) | — | +14% | Med | High | None | None | None | MTA / CAPI — intent is deterministic, attribution is clean |
| Retail media (AMZN / WMT) | $54B | +20% | Low | Walled garden | None | None | None | No reliable method — cannibalization of organic placement unknown |
| Display / DSP (DV360 / Trade Desk) | — | +8% | Low | Med | Defeated | None | None | No reliable method — cookie pools defeat geo, delivery is continuous |
| Digital video (YouTube / online video) | — | +17% | Low | Med | Defeated | Low | None | No reliable method — algorithmic delivery defeats geo isolation |
| Paid social — largest measurement blind spot | | | | | | | | |
| Social media (Meta / TikTok / Snap) | $83B | +20% | Med | Walled garden | Defeated | Med | None | No reliable independent method — platforms hide incrementality by design |
| Television — best measured traditional medium | | | | | | | | |
| Linear TV (NBC / CBS / Cable) | — | -19% | High | None | High | Med | High | MMM (budget) + geo holdout (incrementality) + spike lift (placement) |
| CTV / streaming TV (Hulu / Peacock / Disney+) | — | +18% | Med | Low | Med | Low | Med | Spike lift via ACR data + MMM for budget allocation |
| Direct mail — underrated measurement story | | | | | | | | |
| Direct mail (catalogs / inserts / EDDM) | — | flat | Med | None | High | Med | None | Geo holdout — hard physical boundary, no spillover possible |
| Audio — mixed picture | | | | | | | | |
| Linear radio (iHeart / Cumulus / local) | — | -5% | High | None | Med | Med | Med | MMM for budget + spike lift for daypart optimization |
| Podcast (Spotify / Apple / iHeart) | — | +16% | Med | Med | High | High | None | Geo holdout (baked-in) + HDYHAU + Podscribe MTA (DAI) |
| Out-of-home — geo is the only answer | | | | | | | | |
| OOH / DOOH (Lamar / Clear Channel / Outfront) | — | +5% | Med | None | High | Med | None | Geo holdout — location-fixed media, clean market isolation |
Sources: Winterberry Group, PwC, eMarketer, Marketing Charts 2025. "Walled garden" = platform reports attribution within its own system but cannot answer whether the sale was incremental. "Defeated" = geographic or spillover contamination makes geo holdout structurally unreliable for that channel. Linear TV trend reflects continued structural decline ex-political spend.
How a triangulated stack works in practice
Running five methods independently across disconnected tools is not practical for most marketing teams. The stack only works when the data foundation is unified: consistent inputs feeding all five methods from a single source. Here is how the methods divide the workload:
- MMM runs quarterly for strategic budget allocation. It tells you how much to spend on TV versus social versus search versus podcast at the channel level.
- Spike lift runs continuously for TV and premium streaming. It tells you which programs and time slots drive the strongest response, giving your media buyer the signal to optimize the actual buy.
- Geo holdout runs periodically for linear TV, direct mail, and OOH. One well-designed holdout test per channel per year also gives your MMM model a real calibration anchor, making both methods more reliable.
- MTA and CAPI run daily for paid search, email, and retargeting, the channels where the user journey is trackable and deterministic attribution works.
- HDYHAU runs at checkout continuously, weighted against conversion volume and tracked over time to detect when podcast, influencer, or word of mouth is moving the needle in ways no platform can see (a minimal sketch of that weighting follows below).
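A sketch of the HDYHAU weighting step, with invented response counts and order totals. It assumes non-responders mention channels at roughly the same rate as responders, which is the bias the "honest limit" above warns about.

```python
# A sketch of weighting HDYHAU responses against conversion volume, so a
# channel's survey share reflects estimated orders rather than raw responses.
# Response counts and order totals are invented for illustration.
responses = {  # raw "How did you hear about us?" answers this month
    "Podcast": 140,
    "Instagram": 210,
    "Friend / word of mouth": 95,
    "Google search": 180,
    "TV": 60,
}
survey_responses = sum(responses.values())   # 685 completed surveys
total_orders = 4_200                         # all orders in the same period
response_rate = survey_responses / total_orders

# Scale each answer's share of completed surveys up to estimated orders,
# assuming non-responders behave like responders.
estimated_orders = {
    channel: round(count / survey_responses * total_orders)
    for channel, count in responses.items()
}

for channel, orders in sorted(estimated_orders.items(), key=lambda kv: -kv[1]):
    print(f"{channel:24s} ~{orders:5,d} orders ({orders / total_orders:5.1%} of volume)")
print(f"Survey response rate: {response_rate:.0%}")
```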
The real value shows up when methods agree and when they disagree. When MMM says TV is working and spike lift confirms the specific placements, you have both strategic and tactical validation. When MMM says social is efficient but almost no one mentions it in post-purchase surveys, you have a hypothesis worth testing. Disagreement between methods is not a problem. It is a signal.
Platforms building toward this
A handful of platforms have built toward unified measurement. None covers all five methods perfectly, but the best ones are closing the gap fast.
Rockerbox is the most complete unified platform for mid-market and enterprise brands. It combines MTA, MMM, and incrementality testing on a single data foundation, with 100+ integrations covering digital and offline channels including TV, direct mail, and podcast. Importantly, Rockerbox will accept spike lift attribution data from a third-party supplier and incorporate it directly into the model. In practice this means pairing it with a TV attribution platform like CoreMedia Systems (now part of Simpli.fi) for spot-level linear TV and CTV attribution, giving you a genuinely unified view across all five measurement methods in one place. The platform is built to show where methods agree and where they diverge. Acquired by DoubleVerify in 2025, which raises questions about future direction worth monitoring.
Another option is built around proving incrementality through structured experiments, with MMM and attribution layered in. It is purpose-built for brands where proving causal impact to the CFO is the primary goal and carries strong enterprise customer ratings, but it is less suited for teams that need fast daily tactical optimization.
A third option is strong for DTC and ecommerce brands with substantial paid digital budgets, with creative-level attribution granularity that media buyers actually use day to day. It expanded into MMM in 2025 and continues to mature its offline measurement capabilities, but it is less equipped for brands with heavy offline media mixes that include TV, OOH, and podcast.
CoreMedia Systems, now part of Simpli.fi, is the specialist tool for TV and CTV spike lift. It provides direct response analytics and spot-level attribution for linear TV, CTV, and radio, matching individual airings to response data in near real time, and gives media buyers the placement-level signal to move beyond ratings-based buying toward outcome-based decisions. It works directly with Rockerbox, feeding spot-level attribution data into the unified model so spike lift does not sit in a separate silo.
Fairing is the best purpose-built post-purchase survey platform available and the natural choice for the HDYHAU layer of the stack. It integrates natively with Shopify and delivers category-leading response rates of 40 to 80 percent. Survey data syncs to your data warehouse, Klaviyo, and the major attribution platforms including Rockerbox and Northbeam, making it compatible with whichever unified measurement stack you are running. For DTC and ecommerce brands, it is the cleanest way to capture what customers actually remember about how they found you, covering the dark channels that no attribution model can see on its own.
Google's Meridian and Meta's Robyn are the path for brands with in-house data science resources: open-source MMM frameworks that can be run without a vendor. Meridian in particular is designed to ingest geo holdout results as calibration inputs, making the model more accurate over time. The tradeoff is the analyst resources required to build, run, and interpret them.
The bottom line
The measurement problem in 2026 is not that good tools do not exist. It is that the channels eating the most budget are structurally resistant to independent measurement, and the tools best suited to answer the real question (did this spend actually cause incremental revenue?) are being applied to the wrong channels or used in isolation.
The brands solving this build a stack where:
- MMM sets strategic allocation and picks up signals from channels where people never click anything
- Spike lift optimizes placement within TV and streaming at a granularity MMM cannot reach
- Geo holdout provides causal validation for the channels where it works cleanly
- MTA and CAPI handle the deterministic, click-based channels with daily precision
- HDYHAU captures what no model can, the consumer's own account of what moved them
And they run it all from a unified data foundation. Rockerbox is worth calling out specifically here: it will accept spike lift attribution data from a supplier like CoreMedia Systems (now part of Simpli.fi) and incorporate it directly into the model, which means you can get close to all five methods feeding a single platform rather than managing five disconnected dashboards producing five different answers.
For pre-IPO brands, the measurement story you tell investors matters as much as your growth numbers. A CMO who can speak to triangulated incrementality ("here is what MMM shows, here is what our geo holdout confirmed, here is what customers tell us they remember") is having a fundamentally different conversation than one built on platform screenshots.
Everyone is talking about MMM. The brands getting ahead are building the stack that makes MMM mean something.