What is Return on Ad Spend (ROAS)? Complete Guide for 2026

ROAS measures the revenue generated for every dollar spent on advertising. Learn how to calculate, benchmark, and optimize ROAS for mobile app campaigns.

How ROAS Is Calculated

Return on Ad Spend is the ratio of revenue generated to advertising cost, expressed as a multiple or percentage. The formula is straightforward: divide attributed revenue by ad spend. A campaign that cost $5,000 and generated $15,000 in revenue has a ROAS of 3.0x, meaning every dollar spent returned three dollars in revenue.
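The calculation and the worked example above can be sketched in a few lines of Python (the function name is illustrative, not a standard API):

```python
def roas(revenue: float, ad_spend: float) -> float:
    """Return on Ad Spend as a multiple: attributed revenue per dollar of spend."""
    if ad_spend <= 0:
        raise ValueError("ad_spend must be positive")
    return revenue / ad_spend

# The example from the text: $5,000 spent, $15,000 in attributed revenue.
print(roas(15_000, 5_000))  # 3.0, i.e. 3.0x; as a percentage, multiply by 100 (300%)
```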

The simplicity of the formula masks significant complexity in how the inputs are measured. Revenue attribution requires connecting each dollar of in-app revenue back to the campaign that acquired the user who generated it. This depends on accurate install attribution, reliable post-install event tracking, and correct revenue values attached to conversion events. A single broken link in this chain (a misconfigured SDK, a missing postback, or an incorrect revenue parameter) can distort ROAS calculations across your entire portfolio.

Time horizon is the other critical variable. Day 0 ROAS measures revenue generated on the install day itself. Day 7 ROAS includes the first week of revenue. Day 30, Day 90, and Day 365 ROAS extend the measurement window further. For subscription apps, Day 0 ROAS might be near zero (free trial period), while Day 90 ROAS could be 2.0x or higher as trial users convert to paid. The time horizon you use for optimization decisions should match your business model's revenue realization pattern.
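A minimal sketch of windowed (Day-N) ROAS, assuming you have a cohort's spend and a list of dated revenue events; all names, dates, and amounts below are invented for illustration:

```python
from datetime import date

def day_n_roas(ad_spend, install_date, revenue_events, window_days):
    """Revenue realized within window_days of install, divided by ad spend.
    revenue_events: iterable of (event_date, amount) for the install cohort."""
    window_revenue = sum(
        amount
        for event_date, amount in revenue_events
        if 0 <= (event_date - install_date).days <= window_days
    )
    return window_revenue / ad_spend

# Hypothetical subscription cohort: no install-day revenue (free trial),
# with trial-to-paid conversions landing weeks later.
cohort = date(2026, 1, 15)
events = [(date(2026, 3, 10), 9_000.0)]
print(day_n_roas(5_000, cohort, events, 0))   # Day 0 ROAS: 0.0
print(day_n_roas(5_000, cohort, events, 90))  # Day 90 ROAS: 1.8
```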

ROAS Benchmarks and Targets

Setting appropriate ROAS targets requires understanding your unit economics, not copying industry benchmarks. A subscription app with 80% gross margins can be profitable at 1.5x ROAS because the revenue is high-margin and recurring. An e-commerce app with 30% margins needs 3.5x+ ROAS just to break even on the ad spend, before accounting for operational costs.
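The break-even logic follows directly from gross margin: ad spend is recovered only when gross profit (revenue times margin) covers it, so the minimum ROAS is the reciprocal of the margin. A quick check of the figures above:

```python
def break_even_roas(gross_margin: float) -> float:
    """Minimum ROAS at which gross profit covers the ad spend.
    gross_margin: fraction of revenue retained after direct costs."""
    return 1.0 / gross_margin

print(break_even_roas(0.80))  # subscription app at 80% margin: 1.25x
print(break_even_roas(0.30))  # e-commerce app at 30% margin: ~3.33x
```

This is consistent with the article's numbers: 1.5x ROAS clears the 1.25x break-even for the subscription app, while the e-commerce app needs 3.5x+ to clear roughly 3.33x.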

For mobile gaming, ROAS targets vary dramatically by genre. Hyper-casual games with ad-based monetization often target 0.8-1.0x Day 7 ROAS, relying on the long tail of ad revenue to reach profitability over 30-90 days. Mid-core games with in-app purchase monetization might target 0.3-0.5x Day 7 ROAS because their revenue curves are back-loaded: whales make large purchases weeks or months after install. Setting a Day 7 ROAS target that is too aggressive for a back-loaded monetization model will cause you to kill profitable campaigns prematurely.

The relationship between ROAS targets and scale is inverse. As you increase spend, you exhaust the highest-value audience segments first and must reach broader, lower-intent audiences. This naturally compresses ROAS. A campaign delivering 5.0x ROAS at $10,000/month might deliver 2.5x ROAS at $100,000/month. Growth teams must find the spend level where marginal ROAS meets their profitability threshold: the point where the next dollar spent still generates acceptable returns.
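One way to locate that spend level is to compare marginal ROAS (extra revenue per extra dollar) between spend tiers. The tiers below are hypothetical numbers, not benchmarks:

```python
# (monthly_spend, attributed_revenue) at increasing spend levels.
tiers = [(10_000, 50_000), (50_000, 175_000), (100_000, 250_000)]

def max_profitable_spend(tiers, threshold):
    """Highest spend tier whose marginal ROAS still meets the threshold."""
    best = tiers[0][0]
    for (s0, r0), (s1, r1) in zip(tiers, tiers[1:]):
        marginal_roas = (r1 - r0) / (s1 - s0)
        if marginal_roas >= threshold:
            best = s1
        else:
            break
    return best

# 10k -> 50k adds $125k revenue on $40k spend (3.125x marginal);
# 50k -> 100k adds $75k on $50k (1.5x marginal), below a 2.0x threshold.
print(max_profitable_spend(tiers, 2.0))  # 50000
```

Note that blended ROAS at $100k/month (2.5x) would still look acceptable; it is the marginal read that flags the last $50k as unprofitable.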

Measuring ROAS Across Channels

Accurate cross-channel ROAS measurement requires a unified attribution system that deduplicates installs and connects revenue events to their true acquisition source. Without this, each ad network reports its own ROAS based on its own attribution claims, and the numbers do not add up: total attributed revenue across all networks exceeds your actual revenue because multiple networks claim credit for the same users.
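The over-counting problem is easiest to see in a toy example where two networks both claim credit for the same user; all names and amounts below are made up:

```python
# Each network's self-reported attribution claims for one install cohort.
network_claims = {
    "meta":   {"user_1", "user_2", "user_3"},
    "tiktok": {"user_2", "user_4"},  # user_2 is claimed by both networks
}
revenue_by_user = {"user_1": 20, "user_2": 30, "user_3": 10, "user_4": 15}

# Summing per-network reports double-counts user_2's revenue.
claimed = sum(revenue_by_user[u] for users in network_claims.values() for u in users)

# Deduplicated: each user counted once, regardless of how many networks claim them.
actual = sum(revenue_by_user[u] for u in set().union(*network_claims.values()))

print(claimed, actual)  # 105 75 -- claimed revenue exceeds actual revenue
```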

Linkrunner solves this by providing a single source of truth for attribution and revenue data across all channels. Every install is attributed once, to one source, using consistent methodology. Revenue events flow through the same system, ensuring that ROAS calculations are based on deduplicated, accurate data. Growth teams can compare ROAS across Meta, Google, TikTok, Apple Search Ads, and any other network in a single dashboard, with confidence that the numbers are apples-to-apples.

Channel-level ROAS comparison often reveals surprising insights. A network with high CPI might deliver superior ROAS because its users monetize at higher rates. Conversely, a network with rock-bottom CPI might show poor ROAS because its users never convert to paying customers. These insights are invisible when you evaluate channels on CPI alone, which is why ROAS should be the primary efficiency metric for any app with direct monetization. CPI is an input cost; ROAS is the output that determines whether that cost was justified.

Optimizing ROAS at Scale

ROAS optimization operates at multiple levels: network allocation, campaign structure, audience targeting, and creative performance. The highest-leverage optimization is usually network-level budget reallocation, shifting spend from low-ROAS networks to high-ROAS networks. This sounds obvious but requires the accurate cross-channel measurement described above, which many teams lack.

Within a network, campaign structure significantly impacts ROAS. Consolidating campaigns to give the network's algorithm more data typically improves performance. Running 50 campaigns with $100/day each gives the algorithm far less signal than running 5 campaigns with $1,000/day each. The algorithm needs conversion volume to optimize effectively, and fragmented campaign structures starve it of data.

Creative optimization is the most sustainable ROAS lever because it improves performance without requiring additional spend. Testing new creative concepts, formats, and messaging systematically, and measuring their impact on ROAS rather than just click-through rate, identifies the assets that attract high-value users. A creative with a lower click-through rate but higher ROAS is more valuable than one that drives clicks from users who never monetize. Build your creative testing framework around ROAS as the primary success metric, with CTR and CPI as secondary indicators.

ROAS in a Privacy-Constrained Environment

Privacy changes have made ROAS measurement more challenging but not less important. On iOS, SKAN provides campaign-level conversion data but with limited granularity and delayed reporting. You can calculate approximate ROAS using SKAN conversion values mapped to revenue ranges, but the precision is lower than traditional user-level measurement. Android's Privacy Sandbox will introduce similar constraints over time.
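A sketch of how revenue can be estimated from SKAN conversion values mapped to ranges, using range midpoints; the bucket boundaries below are invented for illustration and are not Apple's schema:

```python
# Hypothetical mapping of SKAN conversion values to revenue ranges (low, high).
REVENUE_BUCKETS = {
    0: (0.00, 0.00),
    1: (0.01, 4.99),
    2: (5.00, 9.99),
    3: (10.00, 24.99),
}

def estimated_revenue(conversion_values):
    """Approximate cohort revenue using the midpoint of each value's range."""
    return sum(
        (lo + hi) / 2
        for cv in conversion_values
        for lo, hi in [REVENUE_BUCKETS.get(cv, (0.0, 0.0))]
    )

# ROAS from aggregated postbacks is an estimate, not an exact figure.
postbacks = [0, 1, 2, 3, 3]
print(round(estimated_revenue(postbacks) / 100, 2))  # estimated ROAS on $100 spend: 0.45
```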

The practical impact is that ROAS measurement is becoming probabilistic rather than deterministic. Instead of knowing exactly which campaign generated $15,000 in revenue, you are working with modeled estimates based on aggregated signals. This requires growth teams to build comfort with uncertainty ranges rather than point estimates. A campaign's true ROAS might be 2.5x ± 0.5x rather than exactly 2.8x. Decisions should account for this uncertainty: do not kill a campaign based on a ROAS estimate that falls within the margin of error.
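A simple decision rule that respects the margin of error might look like the following; the rule and thresholds are illustrative, not a prescribed methodology:

```python
def should_pause(roas_estimate, margin_of_error, target):
    """Pause only when even the optimistic end of the range misses the target."""
    return roas_estimate + margin_of_error < target

# True ROAS modeled as 2.5x +/- 0.5x against a 2.8x target: keep running,
# because the target falls inside the uncertainty range.
print(should_pause(2.5, 0.5, 2.8))  # False
print(should_pause(2.5, 0.5, 3.5))  # True: even the optimistic 3.0x would miss
```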

Incrementality testing provides a valuable complement to attribution-based ROAS in this environment. By running controlled experiments that measure the true incremental revenue generated by a campaign, you can validate whether your attributed ROAS reflects reality. If a campaign shows 3.0x attributed ROAS but incrementality testing reveals only 1.5x incremental ROAS, half of the attributed revenue was coming from users who would have converted organically. This calibration is increasingly important as attribution accuracy decreases under privacy restrictions, and teams that invest in incrementality measurement will make better budget decisions than those relying solely on attributed ROAS.
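The calibration described above reduces to a simple ratio. A sketch using the article's numbers; applying the factor to other campaigns' attributed reads is an assumption that only holds where the campaigns are comparable:

```python
def incrementality_factor(incremental_roas, attributed_roas):
    """Fraction of attributed revenue that a lift test confirms as incremental."""
    return incremental_roas / attributed_roas

def calibrated_roas(attributed_roas, factor):
    """Scale an attributed ROAS read by the measured incrementality factor."""
    return attributed_roas * factor

# The article's example: 3.0x attributed, 1.5x incremental.
factor = incrementality_factor(1.5, 3.0)
print(factor)                        # 0.5: half the attributed revenue was organic
print(calibrated_roas(4.0, factor))  # 2.0: a 4.0x attributed read, calibrated
```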
