8 Reasons Most UA Strategies Fail as They Scale
Your ROAS is like an iceberg: the biggest killers sit below the waterline, invisible until you go looking. UA (user acquisition) teams need to dig beneath the surface to protect it. These are my takeaways from a podcast with Nilay Patel (Head of Product, Kohort) and Bruno Sista (Head of Analytics, Kohort), two of the sharpest minds in predictive modeling, signal engineering, and all the messy realities of post-ATT user acquisition.
1. Giving paid networks credit for organics
Many studios misattribute organic installs to paid channels when they analyze performance, inflating metrics that crumble at scale. Without strong IP, viral loops, or social features, teams in effect reward paid channels for installs they didn't create, paying premium CPIs for ghost contributions.
The fix? Use regression analysis to separate baseline organics (app store features, brand) from incremental organics that paid channels actually generate. Stop crediting networks for users who would have come anyway.
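As a minimal sketch of what that regression can look like - assuming daily paid installs per channel plus total daily organic installs, with made-up numbers standing in for your MMP and store-console exports:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative daily data; in practice this comes from your MMP / BI exports.
rng = np.random.default_rng(42)
days = 180
paid = rng.poisson(lam=[800, 400], size=(days, 2))   # columns: e.g. Meta, AppLovin
baseline = 300                                        # featuring, brand, word of mouth
true_uplift = np.array([0.15, 0.05])                  # organics each paid install drags in
organic = baseline + paid @ true_uplift + rng.normal(0, 25, days)

# Regress total organics on paid installs per channel.
model = LinearRegression().fit(paid, organic)
print(f"Estimated baseline organics/day: {model.intercept_:.0f}")
for name, coef in zip(["Meta", "AppLovin"], model.coef_):
    print(f"Incremental organics per {name} paid install: {coef:.2f}")
```

The intercept approximates your baseline; the per-channel coefficients are the incremental organics a paid install actually pulls in - the only part a network should get credit for.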
2. ROAS Curves ≠ Universal Laws
Assuming ROAS curves behave consistently across geos, platforms, and campaign types is an expensive mistake. You're most vulnerable exactly when you have the least data: launching on new networks or in new geos. Don't just apply ratios from one channel to a new one.
For example, a team whose curves were fit on SDK network data will catastrophically overestimate returns on rewarded networks, where ROAS peaks much earlier and then flatlines. Add seasonal factors like Black Friday, and your confidence collapses.
Accurate prediction requires stable traffic data, clear confidence intervals, and humility about operating outside your training set. Otherwise, you're simply gambling.
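To make the trap concrete, here's a toy calculation - the numbers are invented to show the shape problem, not to serve as benchmarks:

```python
# Two hypothetical payback curves (ROAS as a fraction of spend).
sdk_d7, sdk_d90 = 0.10, 0.45            # SDK network: keeps compounding after D7
rewarded_d7, rewarded_d90 = 0.12, 0.16  # rewarded network: peaks early, flatlines

# Naively reusing the SDK network's D7->D90 multiplier on rewarded traffic:
sdk_multiplier = sdk_d90 / sdk_d7                    # 4.5x
naive_forecast = rewarded_d7 * sdk_multiplier        # 0.54 predicted D90 ROAS
print(f"Forecast {naive_forecast:.2f} vs actual {rewarded_d90:.2f} "
      f"({naive_forecast / rewarded_d90:.1f}x overestimate)")
```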
3. Gross vs Net ROAS: The Silent Misalignment
Here's a typical scenario: UA teams send gross revenue signals to ad networks while CFOs operate in net revenue.
When you tell Meta to hit 12% ROAS at day seven, are you asking for gross or net returns? Your MMP doesn't do any conversion by default - IAP (in-app purchase) revenue is usually reported gross, IAA (in-app advertising) revenue is always net, and networks have no idea which one you actually need. The difference? App store fees and VAT can reduce returns by 30-40%. So Meta reporting 100% ROAS does not equal a happy CFO. Even worse, you may not even be able to tell your CFO what your net ROAS will be.
The solutions are painful: convert all MMP integrations to net revenue (which breaks campaigns during the transition) or build a data layer that tracks true net ROAS. Without alignment, you'll hit your gross revenue targets while banking roughly 63 cents of net revenue on the dollar. And just to be clear, that's a miss.
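For a concrete (and simplified) version of the math - assuming a 30% store fee and roughly 10% VAT, both of which vary by store, geo, and revenue-share tier:

```python
# Illustrative gross-to-net conversion for IAP revenue.
gross_roas = 1.00    # what the network reports and optimizes toward
vat_rate = 0.10      # sales tax / VAT deducted before the store split
store_fee = 0.30     # app store platform fee

net_roas = gross_roas * (1 - vat_rate) * (1 - store_fee)
print(f"Net ROAS: {net_roas:.0%}")   # ~63% - the "63 cents on the dollar" above
```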
4. Connecting LTV, CPI, Revenue, and Spend
Higher spending doesn't just raise CPI; it changes the CPI-LTV relationship unpredictably. As you scale beyond core audiences, CPI rises - everyone knows this. But LTV can also change: higher CPIs can mean higher-quality users, or you may start reaching users outside your core audience who retain worse. This creates an optimization nightmare in which controlling spend and profit turns user quality into a market-driven wild card.
Kohort's analysis of $6 billion in UA spend shows that elasticity varies dramatically across IAP/IAA apps, geos, and campaign types. The iron triangle of marketing: you can't control spend, profit margins, and acquisition costs simultaneously. Pick two, negotiate the third.
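A stripped-down sketch of why the triangle binds. It treats LTV as fixed by the market, which is exactly the simplification that the elasticity findings above break - but it shows how fixing two corners pins the third:

```python
def required_cpi(ltv: float, target_margin: float) -> float:
    """Margin = profit / spend = (installs * LTV - spend) / spend = LTV / CPI - 1,
    so a target margin implies a maximum CPI you can pay."""
    return ltv / (1 + target_margin)

ltv = 3.00  # illustrative blended LTV at your payback horizon
for margin in (0.20, 0.50):
    print(f"Target margin {margin:.0%} -> max CPI ${required_cpi(ltv, margin):.2f}")
# If the market won't sell installs at that CPI at your desired spend level,
# one of the three - spend, margin, or CPI - has to give.
```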
5. Signal Engineering: The New UA Meta
Most teams set one global day-seven ROAS target across all campaigns - Meta, AppLovin, and rewarded networks alike - ignoring that each has radically different user behaviors and conversion curves. This one-size-fits-all approach does not scale, because every developer is chasing the same traffic - users who are likely to make a purchase - with the same targets.
Accessing traffic that no one else is looking at requires UA and product teams to identify the ideal user journey, then translate those insights into custom MMP events. If users who log in on day three become your best cohorts, instrument that path. The organizational trap? Product teams A/B test features and accidentally destroy UA signals, while UA managers aren't invited to product meetings.
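Here's a rough sketch of the analysis step, with hypothetical table and column names, that tells you whether an early event (like the day-three login) is worth wiring up as a custom MMP conversion event:

```python
import pandas as pd

# Hypothetical exports: per-user events and per-user D30 revenue.
events = pd.read_csv("events.csv", parse_dates=["install_date", "event_date"])
revenue = pd.read_csv("revenue.csv")   # columns: user_id, d30_revenue

events["day_n"] = (events["event_date"] - events["install_date"]).dt.days
d3_login_users = events.query("event_name == 'login' and day_n == 3")["user_id"].unique()

revenue["hit_signal"] = revenue["user_id"].isin(d3_login_users)
print(revenue.groupby("hit_signal")["d30_revenue"].mean())
# If the gap is large and the event fires early enough to be useful,
# instrument it as a custom conversion event so networks can optimize toward it.
```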
6. The Single Source of Truth
MMPs can be blind to some user behavior: re-engagement, post-day-180 activity, cross-platform play, and web store purchases. Building a real source of truth means treating your MMP as one mobile data stream, then enriching it with everything else.
Start with the biggest problem: Apple's SKAN. If you can match Android-level spend visibility on iOS, you've solved the critical blind spot for most studios. Use opt-in data as a proxy, then find similarity patterns among non-opt-in users to classify them. Re-engagement and post-day-180 attribution offer a 10-20% upside - meaningful, but secondary to fixing SKAN first.
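One way to sketch the opt-in-as-proxy idea - the column names are hypothetical, and the real work is in feature engineering and validation, not the model call:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# opt_in: users with ATT consent, so their acquisition channel is known.
# non_opt_in: same behavioral features, channel unknown (SKAN-only).
opt_in = pd.read_csv("opt_in_users.csv")
non_opt_in = pd.read_csv("non_opt_in_users.csv")

features = ["d1_sessions", "d1_revenue", "levels_completed", "avg_session_minutes"]
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(opt_in[features], opt_in["acquisition_channel"])

# Probabilistic channel assignment for the users SKAN can't resolve.
non_opt_in["predicted_channel"] = clf.predict(non_opt_in[features])
```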
Cross-platform attribution is a headache - it often requires significant data investment - but it can make enough of a difference to power an omni-channel marketing strategy. Getting your data under control is often the best use of your UA dollars.
7. Bad Metrics Create Bad Confidence
Accuracy gets all the focus, but confidence matters just as much. One network delivers $10-15 LTV consistently; another swings between $1 and $100. Same accuracy, wildly different confidence intervals.
Great studios use MAPE (mean absolute percentage error) to constantly back-test predictions against actual performance. But data science teams must also arm UA managers with confidence intervals; your predictions are always just predictions. Has the data been noisy lately? Let campaigns mature another week before making changes. MAPE lives with the data science team - it's the batting average to check before making the call.
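A minimal back-testing sketch, assuming you've stored each cohort's predicted and realized values:

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error between realized and predicted values."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100)

# Illustrative: D90 LTV predicted at D7 vs what the cohorts actually delivered.
predicted = [2.10, 1.80, 3.40, 2.60]
actual    = [2.30, 1.50, 3.10, 2.90]
print(f"MAPE: {mape(actual, predicted):.1f}%")

# A crude confidence band from the same residuals (assumes errors stay stable):
errors = np.abs(np.array(actual) - np.array(predicted)) / np.array(actual)
lo, hi = np.percentile(errors, [10, 90])
print(f"Typical miss: {lo:.0%} to {hi:.0%} of the predicted value")
```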
8. Pacing & Network Algorithms
Teams change bids and tROAS (target return on ad spend) targets too quickly, killing campaign performance.
Bruno's rule: never make more than one change per conversion window. Optimizing toward day-seven ROAS? Wait seven days between changes so algorithms can learn. Make changes of less than 20% up or down; drastic shifts force campaigns to explore new audiences inefficiently.
The 7-20 rule feels counterintuitive when performance looks bad (capping a cut at 20% can feel pointless), but knee-jerk reactions extend the learning phase and stunt growth. Let algorithms learn. Your impatience costs more than the bad performance does.
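A simple guardrail version of the 7-20 rule, with illustrative numbers:

```python
from datetime import date, timedelta

def change_allowed(last_change: date, today: date, window_days: int,
                   current_troas: float, proposed_troas: float,
                   max_step: float = 0.20) -> bool:
    """One change per conversion window, never more than ~20% in either direction."""
    window_elapsed = (today - last_change) >= timedelta(days=window_days)
    step_ok = abs(proposed_troas - current_troas) / current_troas <= max_step
    return window_elapsed and step_ok

# Optimizing to D7 ROAS, last touched the campaign 4 days ago, want a 30% cut:
print(change_allowed(date(2024, 5, 1), date(2024, 5, 5), 7, 0.12, 0.084))  # False
```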
Philosophical Musing: The Future of Predictive ROAS
Most "predictive models" are spreadsheets with lipstick, but that's changing fast. Ad networks will shift to tROAS-only campaigns with longer windows. AppLovin already moved from day seven to day 28, and the trend points toward day 365. All while dynamic creative optimization, powered by generative AI, will remove creative changes from UA manager workflows.
The role evolves: UA managers become auditors who manage multiple ad networks running autonomous campaigns, using independent data to verify network-reported performance. More automation, more probabilistic modeling, later attribution windows, and hopefully, spreadsheets will finally become obsolete.
The future of UA is feeding algorithms the right signals, then auditing the results.
P.S.: If any of the topics in this podcast are keeping you up at night, reach out to Kohort to access their machine learning analytics platform. It solves the tricky data science behind optimization targets, new segment launches, and organics, so you can focus on growing your app.
This is not a paid ad. I just know that the folks at the company have clients who have used the platform and like them a lot.

