What Aggregate Attribution Is
Aggregate attribution is a measurement approach that reports campaign performance using grouped, anonymized data rather than tracking individual user journeys. Instead of saying "User 12345 clicked Ad A and installed the app 3 hours later," aggregate attribution says "Campaign A drove approximately 500 installs this week." The individual user is never identified, but the campaign's overall impact is measured.
This approach has become the dominant attribution paradigm on iOS since Apple introduced SKAdNetwork (SKAN) and required App Tracking Transparency consent for user-level tracking. With the majority of iOS users opting out of tracking, aggregate attribution through SKAN is often the only attribution signal available for those users. Google's Privacy Sandbox is bringing similar aggregate approaches to Android, making this the future of mobile attribution across both platforms.
The shift to aggregate attribution represents a fundamental change in how growth teams operate. Decisions that were previously made based on precise, user-level data must now be made based on statistical estimates with inherent uncertainty. This is not a degradation of measurement; it is a different measurement paradigm that requires different analytical approaches, different optimization strategies, and different expectations about data precision.
How Aggregate Attribution Works
The mechanics of aggregate attribution vary by framework, but the core principles are consistent. The attribution system collects conversion signals (installs, post-install events, revenue) and groups them by campaign, time period, and other dimensions before reporting. Individual user identifiers are either never collected or are stripped before the data leaves the device.
In Apple's SKAN framework, the process works as follows: when a user installs an app from an ad, the device registers the install with Apple's attribution system. The app can update a conversion value that encodes post-install behavior. After a timer expires, Apple sends a postback to the ad network with the campaign ID and conversion value, but without any user identifier or precise timestamp. The data is further protected by privacy thresholds that suppress postbacks for campaigns below a minimum install volume.
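The flow above can be sketched as a small simulation. This is a minimal sketch, not Apple's implementation: real SKAN postbacks are signed JSON payloads sent by Apple, and the field names, the threshold value, and the `build_postbacks` helper here are illustrative assumptions.

```python
# Illustrative privacy threshold; Apple's actual crowd-anonymity tiers differ
# and are not documented as a single fixed number.
PRIVACY_THRESHOLD = 25

def build_postbacks(installs):
    """Turn install records into anonymized postbacks: one per install,
    carrying only a campaign ID and a coarse conversion value. Conversion
    values are suppressed for campaigns below the install-volume threshold."""
    counts = {}
    for install in installs:
        counts[install["campaign_id"]] = counts.get(install["campaign_id"], 0) + 1

    postbacks = []
    for install in installs:
        campaign = install["campaign_id"]
        postbacks.append({
            "campaign_id": campaign,
            # No user ID, no precise timestamp; value suppressed below threshold.
            "conversion_value": (
                install["conversion_value"]
                if counts[campaign] >= PRIVACY_THRESHOLD else None
            ),
        })
    return postbacks
```

Note that even in this toy version, the ad network learns campaign totals but can never link a postback back to a specific user.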
Google's Privacy Sandbox Attribution Reporting API follows a similar philosophy with different implementation details. It supports both event-level reports with limited data and summary reports with aggregate data. The summary reports use differential privacy, adding calibrated noise to the reported totals, so that individual users cannot be re-identified while statistical accuracy is preserved at the campaign level.
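The core idea behind noisy summary reports can be sketched in a few lines. This is a textbook Laplace-mechanism illustration, not the Attribution Reporting API itself; the `epsilon` and `sensitivity` parameters and the `noisy_summary` helper are assumptions for the sketch.

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via inverse-transform sampling."""
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def noisy_summary(true_counts, epsilon=1.0, sensitivity=1.0):
    """Add calibrated Laplace noise to per-campaign totals so that any single
    user's contribution is masked, while large totals stay nearly accurate."""
    scale = sensitivity / epsilon
    return {campaign: count + laplace_noise(scale)
            for campaign, count in true_counts.items()}
```

The key property is that the noise magnitude is fixed by the privacy budget, so a campaign with 5,000 installs is barely affected while a campaign with 5 installs is effectively unreadable, which is exactly the trade-off the text describes.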
Optimizing with Aggregate Data
Campaign optimization with aggregate attribution requires a shift in methodology but not a reduction in rigor. The key is working at the right level of granularity and using statistical techniques that account for the noise and delay inherent in aggregate data.
Focus optimization on the variables you can measure reliably: campaign-level performance, creative variant performance, and geo-level performance. SKAN provides campaign IDs that let you compare performance across campaigns and ad groups. Creative-level analysis is possible when you map SKAN campaign IDs to specific creative variants. Geo-level analysis uses regional install and revenue data that does not require user-level tracking.
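A campaign-level rollup like the one described above can be sketched as follows, assuming postbacks have already been mapped to your campaign structure; the record shape and the `campaign_report` helper are illustrative.

```python
def campaign_report(postbacks, spend_by_campaign):
    """Roll decoded postbacks up to campaign level and compute cost per
    install, the granularity at which aggregate data is reliable."""
    installs = {}
    for pb in postbacks:
        installs[pb["campaign_id"]] = installs.get(pb["campaign_id"], 0) + 1
    return {
        campaign: {
            "installs": count,
            "cpi": spend_by_campaign[campaign] / count,
        }
        for campaign, count in installs.items()
    }
```

The same pattern extends to creative variants or geos: swap the grouping key for whatever dimension your SKAN campaign IDs are mapped to.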
Linkrunner supports aggregate attribution workflows by integrating with SKAN and Privacy Sandbox to capture and decode conversion data across both platforms. The platform maps SKAN postbacks to your campaign structure, decodes conversion values into meaningful event data, and presents aggregate performance metrics alongside any available user-level attribution from opted-in users. This unified view lets your team optimize across both data types without maintaining separate workflows for aggregate and user-level attribution.
Combining Aggregate Attribution with Other Methods
Aggregate attribution is most powerful when combined with complementary measurement approaches. No single method provides a complete picture of marketing performance in the privacy era. The most effective measurement frameworks layer multiple approaches, each compensating for the others' limitations.
Incrementality testing validates whether the conversions reported by aggregate attribution are truly incremental. SKAN might report that a campaign drove 1,000 installs, but incrementality testing reveals whether those users would have installed anyway. This combination prevents over-investment in channels that appear effective in attribution reports but are actually capturing organic demand.
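The arithmetic behind a simple holdout-based incrementality check looks like this; the function name and the geo/audience-holdout framing are a sketch, not a specific vendor's methodology.

```python
def incremental_fraction(treatment_installs, treatment_users,
                         control_installs, control_users):
    """Estimate what fraction of measured installs are truly incremental:
    compare install rates between an exposed group and a holdout that
    saw no ads, then express the rate difference as a share of the
    treatment rate."""
    rate_treatment = treatment_installs / treatment_users
    rate_control = control_installs / control_users
    return (rate_treatment - rate_control) / rate_treatment
```

For example, if the exposed group installs at 1.0% and the holdout at 0.4%, only 60% of the attributed installs are incremental: of the 1,000 installs SKAN reports, roughly 600 would not have happened without the campaign.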
Media mix modeling provides the strategic layer that aggregate attribution cannot. While SKAN tells you how many installs each campaign drove this week, MMM tells you the optimal budget allocation across channels for the next quarter. MMM uses the same aggregate inputs that power aggregate attribution (spend and conversion totals by channel and time period) but applies statistical modeling to extract strategic insights about channel efficiency and diminishing returns.
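The diminishing-returns idea at the heart of MMM can be illustrated with a simple saturation curve. This is a sketch of the concept only; real MMMs fit such curves (and adstock, seasonality, and more) to historical data, and the curve parameters here are made up.

```python
def hill_response(spend, v_max, half_saturation):
    """Saturating response curve: conversions grow with spend but flatten
    as spend approaches saturation. v_max is the ceiling; half_saturation
    is the spend level at which half the ceiling is reached."""
    return v_max * spend / (spend + half_saturation)

def marginal_return(spend, v_max, half_saturation, delta=1.0):
    """Approximate extra conversions from one more unit of spend."""
    return (hill_response(spend + delta, v_max, half_saturation)
            - hill_response(spend, v_max, half_saturation)) / delta
```

Comparing marginal returns across channels at their current spend levels is how an MMM turns aggregate totals into a reallocation recommendation: move budget from the channel with the lower marginal return to the one with the higher.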
First-party data fills the personalization gap. Aggregate attribution cannot tell you which specific users came from which campaigns, but authenticated users who create accounts provide first-party signals that enable personalization, retention analysis, and lifetime value calculation. Encouraging account creation early in the user journey recovers much of the analytical capability lost when user-level attribution is unavailable.
Preparing for an Aggregate-First Future
The transition to aggregate attribution is not complete, but the direction is irreversible. Growth teams that build their measurement and optimization infrastructure around aggregate data now will be better positioned than those waiting for the old paradigm to return. Several practical steps accelerate this transition.
Redesign your conversion value schema to maximize the information extracted from SKAN's limited bits. Map your most important post-install events to conversion value ranges that capture the signals most relevant to your optimization decisions. Test different schemas and measure which encoding provides the most actionable data for your specific business model and campaign structure.
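A conversion value schema is ultimately a bit-packing exercise. The sketch below packs three signals into SKAN's 6-bit fine conversion value (0 to 63); the specific bit allocation, the event choices, and the helper names are illustrative assumptions, and the right split depends on your business model.

```python
def encode_conversion_value(revenue_tier, completed_onboarding, retention_bucket):
    """Pack three post-install signals into one 6-bit conversion value:
    bits 0-2: revenue tier (0-7), bit 3: onboarding-complete flag,
    bits 4-5: early-retention bucket (0-3)."""
    assert 0 <= revenue_tier <= 7 and 0 <= retention_bucket <= 3
    return (revenue_tier
            | (int(completed_onboarding) << 3)
            | (retention_bucket << 4))

def decode_conversion_value(cv):
    """Invert the encoding when a postback arrives."""
    return {
        "revenue_tier": cv & 0b111,
        "completed_onboarding": bool(cv & 0b1000),
        "retention_bucket": (cv >> 4) & 0b11,
    }
```

Because the decode step runs on postbacks, not on devices, you can test competing schemas offline against historical event data before committing bits to one.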
Build analytical workflows that account for data delay and noise. SKAN postbacks arrive 24 to 48 hours after install, and Privacy Sandbox reports have similar latency. Your optimization cadence must accommodate this delay: daily bid adjustments based on yesterday's data are not possible with aggregate attribution. Shift to weekly optimization cycles that allow sufficient data accumulation for statistically meaningful analysis.
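A simple way to operationalize "wait for enough data" is a volume gate before acting on a window's numbers. This is a rough heuristic of my own construction, not a standard SKAN practice: it treats weekly install counts as approximately Poisson, so the relative standard error shrinks like 1/sqrt(n).

```python
import math

def ready_to_optimize(installs, max_rel_error=0.10):
    """Return True once a window has accumulated enough installs that the
    relative standard error (approximately 1/sqrt(n) under a Poisson
    assumption) is below the tolerance for a bid or budget decision."""
    if installs == 0:
        return False
    return 1 / math.sqrt(installs) <= max_rel_error
```

Under this rule a window needs about 100 installs before a 10%-precision decision is defensible, which is one concrete reason weekly cycles beat daily ones for low-volume campaigns.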
Invest in server-side measurement infrastructure. As client-side tracking becomes more restricted, server-side events become the most reliable source of conversion data. Implement server-to-server postbacks for key events, use conversion APIs provided by ad platforms, and ensure your backend can generate the conversion signals that feed both aggregate attribution frameworks and your own analytics. The teams with the strongest server-side measurement infrastructure will have the most complete and accurate data in an aggregate-first world.
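A minimal server-to-server event might look like the sketch below, which builds a signed payload using an HMAC shared secret. The field names, header name, and signing scheme are illustrative assumptions; each ad platform's conversion API defines its own payload format and authentication.

```python
import hashlib
import hmac
import json

def build_s2s_event(shared_secret, event):
    """Build a signed server-to-server conversion event payload. Signing
    the canonical JSON body lets the receiver verify the event was not
    tampered with in transit."""
    body = json.dumps({
        "event_name": event["name"],
        "value": event.get("value", 0),
        "timestamp": int(event["timestamp"]),
    }, sort_keys=True)
    signature = hmac.new(shared_secret.encode(), body.encode(),
                         hashlib.sha256).hexdigest()
    return {"body": body, "headers": {"X-Signature": signature}}
```

Because the event originates from your backend rather than the client, it survives ad blockers and client-side tracking restrictions, which is the reliability argument the paragraph above makes.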
