What Incrementality Testing Measures
Incrementality testing answers the most fundamental question in advertising: did this campaign actually cause conversions, or would those conversions have happened anyway? Attribution systems tell you which touchpoint gets credit for a conversion, but they cannot distinguish between users who converted because of an ad and users who would have converted regardless. Incrementality testing makes that distinction through controlled experimentation.
The concept is straightforward. You divide your potential audience into two groups: a test group that sees your ads and a control group that does not. After a defined period, you compare conversion rates between the groups. The difference, the incremental lift, represents the true causal impact of your advertising. If the test group converts at 5% and the control group converts at 3%, your ads drove a 2 percentage point incremental lift, meaning 40% of the test group's conversions were truly incremental.
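The arithmetic behind that example can be sketched in a few lines (the group sizes below are hypothetical, chosen to reproduce the 5% and 3% rates):

```python
def incremental_lift(test_conversions, test_size, control_conversions, control_size):
    """Return lift in percentage points and the share of test-group conversions that are incremental."""
    test_rate = test_conversions / test_size
    control_rate = control_conversions / control_size
    lift_pp = (test_rate - control_rate) * 100
    incremental_share = (test_rate - control_rate) / test_rate
    return lift_pp, incremental_share

# Matches the example above: 5% test vs 3% control conversion rate.
lift, share = incremental_lift(5_000, 100_000, 3_000, 100_000)
print(f"Lift: {lift:.1f} pp, incremental share: {share:.0%}")  # Lift: 2.0 pp, incremental share: 40%
```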
This distinction matters enormously for budget allocation. Without incrementality data, you might allocate significant budget to a retargeting campaign that shows a strong ROAS in attribution reports but is actually reaching users who were already going to convert. Incrementality testing reveals these over-attributed channels and redirects budget toward campaigns that genuinely drive new conversions.
Methods for Running Incrementality Tests
There are several established methods for measuring incrementality, each with different trade-offs between accuracy, cost, and operational complexity. The most rigorous approach is a randomized controlled trial (RCT) where users are randomly assigned to test and control groups at the individual level. The test group sees your ads while the control group sees either no ads or public service announcements. This method provides the cleanest causal measurement but requires cooperation from ad platforms and sacrifices potential conversions from the control group.
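One way to implement the random split at the individual level is deterministic hashing, so a user always lands in the same group across sessions. This is a minimal sketch, not any platform's actual mechanism; the salt string and the 10% control fraction are illustrative choices:

```python
import hashlib

def assign_group(user_id: str, control_fraction: float = 0.10, salt: str = "incr_test_1") -> str:
    """Hash the salted user_id into [0, 1) so assignment is random but stable across sessions."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:12], 16) / 16**12  # maps the hash prefix uniformly into [0, 1)
    return "control" if bucket < control_fraction else "test"

print(assign_group("user_42"))  # the same user always gets the same assignment
```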
Geo-based testing is the most practical method for most mobile growth teams. You select matched pairs of geographic regions (cities or DMAs) with similar demographics and historical conversion patterns. One region in each pair receives your advertising while the other serves as a control. After the test period, you compare conversion rates between test and control regions. This method works within privacy frameworks because it relies on aggregate regional data rather than user-level tracking.
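The comparison across matched pairs reduces to simple aggregate arithmetic. In this sketch the region names and conversion rates are made up for illustration:

```python
# Hypothetical matched pairs: (test region, test rate, control region, control rate).
pairs = [
    ("Denver", 0.048, "Kansas City", 0.031),
    ("Austin", 0.052, "Nashville",   0.035),
    ("Tampa",  0.044, "Orlando",     0.030),
]

# Per-pair lift in percentage points, then the average across pairs.
lifts_pp = [100 * (t_rate - c_rate) for _, t_rate, _, c_rate in pairs]
avg_lift_pp = sum(lifts_pp) / len(lifts_pp)
print(f"Average lift across pairs: {avg_lift_pp:.2f} pp")  # Average lift across pairs: 1.60 pp
```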
Ghost ads or intent-to-treat methods offer a middle ground. The ad platform identifies users who would have seen your ad based on targeting criteria and auction dynamics, but shows them a placeholder instead. This creates a control group of users who match your targeting but were not exposed to your creative. The method avoids the revenue loss of pausing campaigns entirely while still providing a valid counterfactual.
Designing a Valid Incrementality Test
A poorly designed incrementality test produces misleading results that can be worse than no test at all. Several design principles are essential for valid results. First, ensure your test and control groups are truly comparable. For geo tests, match regions on population size, demographics, historical conversion rates, and seasonality patterns. Use pre-test data to verify that matched pairs behave similarly before the test begins.
Sample size and test duration are critical. Calculate the required sample size based on your baseline conversion rate, the minimum detectable effect you care about, and your desired statistical confidence level. Running a test that is too short or too small produces results that look definitive but are actually noise. Most mobile incrementality tests need at least two weeks and often four weeks to reach reliable conclusions.
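The required sample size can be approximated with the standard two-proportion z-test power formula. This is a sketch, not a substitute for a proper power analysis tool; the 3% baseline and 0.5 pp minimum detectable lift are illustrative inputs:

```python
import math
from statistics import NormalDist

def sample_size_per_group(p_control, min_detectable_lift, alpha=0.05, power=0.8):
    """Approximate per-group n for a two-sided, two-proportion z-test."""
    p_test = p_control + min_detectable_lift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_power = NormalDist().inv_cdf(power)          # critical value for the desired power
    variance = p_control * (1 - p_control) + p_test * (1 - p_test)
    return math.ceil((z_alpha + z_power) ** 2 * variance / min_detectable_lift ** 2)

# e.g. 3% baseline conversion rate, detecting a 0.5 pp absolute lift
print(sample_size_per_group(0.03, 0.005))  # roughly 20,000 users per group
```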
Control for external factors that could confound your results. If you are running a geo test and a competitor launches a major campaign in one of your control regions during the test period, your results will be skewed. Monitor for confounding events and be prepared to extend or restart tests when they occur. Document all external factors that might influence results so you can account for them in your analysis.
Interpreting Incrementality Results
Raw incrementality results need careful interpretation before they become actionable. The primary metric is incremental lift, the percentage increase in conversions attributable to your advertising. But lift alone does not tell you whether the campaign is profitable. You need to calculate the incremental cost per acquisition (iCPA) by dividing your total ad spend by the number of incremental conversions, not total attributed conversions.
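The difference between attributed CPA and iCPA can be stark. With illustrative numbers (a $50,000 spend and the 40% incremental share from the earlier example):

```python
spend = 50_000
attributed = 4_000
incremental_share = 0.40  # measured by the lift test

attributed_cpa = spend / attributed              # $12.50 -- looks cheap
icpa = spend / (attributed * incremental_share)  # $31.25 -- the true incremental cost
print(f"Attributed CPA: ${attributed_cpa:.2f}, iCPA: ${icpa:.2f}")
```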
Linkrunner's attribution data complements incrementality testing by providing the granular campaign performance data needed to design tests and interpret results. When you know which campaigns, networks, and creatives drive the most attributed conversions, you can prioritize incrementality tests on the highest-spend channels where the stakes are greatest. The combination of attribution for day-to-day optimization and incrementality for strategic validation creates a measurement framework that is both actionable and accurate.
Compare your incremental ROAS to your attributed ROAS for each channel. Channels where incremental ROAS is close to attributed ROAS are genuinely driving value. Channels where incremental ROAS is significantly lower than attributed ROAS are over-credited by your attribution model; they are taking credit for conversions that would have happened organically. This gap analysis is one of the most valuable outputs of incrementality testing because it directly informs budget reallocation decisions.
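A gap analysis of this kind might look like the sketch below. The per-channel numbers are hypothetical, and the 30% gap cutoff is an illustrative threshold, not a standard:

```python
channels = {
    "paid_search": {"attributed_roas": 3.2, "incremental_roas": 2.9},
    "retargeting": {"attributed_roas": 6.5, "incremental_roas": 1.8},
    "paid_social": {"attributed_roas": 2.4, "incremental_roas": 2.1},
}

for name, m in channels.items():
    # Share of attributed value that incrementality testing could not confirm.
    gap = 1 - m["incremental_roas"] / m["attributed_roas"]
    verdict = "over-credited" if gap > 0.30 else "genuinely incremental"
    print(f"{name}: {gap:.0%} of attributed value is non-incremental ({verdict})")
```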
Building an Incrementality Testing Program
Incrementality testing should not be a one-time exercise but an ongoing program that continuously validates and refines your marketing strategy. Establish a testing calendar that cycles through your major channels and campaigns over the course of a quarter. Prioritize tests by spend: test your highest-spend channels first, because the potential for budget reallocation is greatest.
Build institutional knowledge from each test. Document the methodology, results, and decisions made for every incrementality test. Over time, you will develop a clear picture of which channels consistently deliver incremental value and which rely on attribution inflation. This knowledge base informs not just budget allocation but also negotiation with ad networks and strategic planning for new channel expansion.
Integrate incrementality insights with your attribution model. If incrementality testing consistently shows that a particular channel delivers 60% of its attributed conversions incrementally, apply that factor to your ongoing attribution data for more accurate day-to-day optimization. This calibrated attribution approach gives you the speed of real-time attribution data with the accuracy of periodic incrementality validation, creating a measurement system that is both responsive and trustworthy.
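Applying that calibration is a one-line scaling step. The 0.6 factor below is the 60% figure from the example above; the daily counts are made up:

```python
# Measured by incrementality testing: the channel delivers 60% of its
# attributed conversions incrementally (the example factor from the text).
INCREMENTALITY_FACTOR = 0.6

def calibrated_conversions(attributed_conversions, factor=INCREMENTALITY_FACTOR):
    """Scale attributed conversions by the measured incrementality factor."""
    return attributed_conversions * factor

daily_attributed = [120, 135, 98, 142]
calibrated = [calibrated_conversions(c) for c in daily_attributed]
print(calibrated)
```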
