Mobile Programmatic Advertising: How Brands Buy, Optimize, and Scale Mobile Ads in 2026
Mary Gabrielyan
February 20, 2026
15 minutes read
Mobile programmatic advertising is where performance budgets go to scale—and where weak measurement and sloppy supply choices quietly erase that value. This guide shows how mobile buying really works in 2026, from in-app vs mobile web inventory to privacy-safe targeting, fraud controls, and attribution you can defend.
Mobile programmatic advertising is the automated buying and selling of mobile ad inventory—mainly in-app and mobile web—using software platforms that evaluate each impression and decide, in milliseconds, whether to bid, how much to bid, and which creative to serve. If you’re running performance budgets, this matters for a simple reason: mobile is where the spend, attention, and measurement complexity are colliding hardest. (In the U.S., mobile ad spend was forecast to reach $228.94B in 2025, close to two-thirds of digital ad spend.)
This guide follows the technical spec you shared and is written for performance marketers, media buyers, app marketers, and ad tech teams who already know the basics of programmatic, but want a clearer, more current mobile-specific map for 2026 planning.
⚡ Mobile programmatic isn’t “desktop programmatic on a smaller screen.” The inventory, IDs, and measurement rules are different enough that your playbook should be too.
The “programmatic” part means the transaction is handled by platforms and protocols—rather than manual IOs—across auctions (open exchange), curated private deals (PMPs), or fixed-price, guaranteed deals.
A useful way to frame it is by outcomes. Mobile programmatic is used to drive:
App installs and in-app events (UA and re-engagement)
Mobile web conversions (lead forms, sign-ups, purchases)
Mobile is also where consumer time is concentrated. Sensor Tower’s State of Mobile report highlights the scale of app usage, reporting 4.2 trillion hours spent in apps in 2024. That attention is exactly why in-app inventory remains central to most mobile programmatic strategies.
Average time spent per day with mobile device (Source)
How mobile programmatic advertising works
Before the details, it helps to picture the moving parts. A standard mobile programmatic impression decision typically involves:
The user (in an app or mobile browser)
The publisher (app developer or mobile site)
An SSP/ad exchange (packaging the impression opportunity)
A DSP (evaluating whether to buy)
A data layer (first-party audiences, contextual signals, privacy-safe IDs)
A measurement layer (MMPs, tags, SKAdNetwork, attribution APIs, verification)
Yearly worldwide app IAP revenue and worldwide IAP revenue growth by app category (Source)
⚡ Programmatic isn’t a niche workflow anymore: programmatic revenue reached $134.8B in 2024, growing 18% YoY. The upside is scale and automation; the downside is you need tighter controls to avoid buying problems at speed.
Mobile inventory types: in-app vs mobile web
Mobile inventory is not one thing. Your buying mechanics—and your risks—change depending on where the ad renders.
In-app inventory:
Delivered via SDKs (or SDK runtime environments) integrated into apps
Often sold via in-app bidding, mediation, or exchange pathways
Richer device/app signals can be available, but fraud exposure can be higher on low-quality supply
Mobile web inventory:
Delivered via tags in the browser (or via server-side integrations)
Increasingly constrained by privacy controls and browser limits
Often more familiar to “desktop programmatic” teams, but measurement can degrade without strong first-party signals
📌 A practical takeaway: treat each inventory type as its own channel with its own QA standards, not merely “more display.”
Supply-chain transparency standards give you that QA leverage:
sellers.json and the OpenRTB SupplyChain object (visibility into who is selling and reselling)
app-ads.txt (the in-app extension of ads.txt that helps combat spoofing by declaring authorized sellers)
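As a concrete illustration, here is a minimal sketch of how a buyer-side QA script might check whether a seller declared in a bid request is actually authorized in the publisher's app-ads.txt file. The function names are hypothetical, and a production implementation must follow the full IAB spec (redirects, subdomains, certification authority IDs):

```python
# Minimal app-ads.txt authorization check (illustrative sketch).
# Assumes you have already fetched the app-ads.txt text for the app's
# developer domain; real implementations must handle the full IAB spec.

def parse_app_ads_txt(text: str) -> set[tuple[str, str]]:
    """Return a set of (ad_system_domain, seller_account_id) pairs."""
    entries = set()
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line:
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3:  # domain, account id, relationship[, cert id]
            entries.add((fields[0].lower(), fields[1]))
    return entries

def is_authorized(app_ads_txt: str, exchange_domain: str, seller_id: str) -> bool:
    return (exchange_domain.lower(), seller_id) in parse_app_ads_txt(app_ads_txt)

sample = """
# app-ads.txt for example app
exampleexchange.com, 12345, DIRECT, abc123
reseller-net.com, 999, RESELLER
"""
print(is_authorized(sample, "exampleexchange.com", "12345"))  # True
print(is_authorized(sample, "unknown-ssp.com", "777"))        # False
```

Running this kind of check in bulk against your delivery logs is a cheap way to catch spoofed in-app inventory before it eats budget.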
Real-time bidding and deal types
RTB is the most common transaction type in open exchange buying. The simplified flow looks like this:
A user opens an app screen or mobile web page with an ad slot.
The publisher’s ad stack creates a bid request with relevant signals (device, app/site, placement, geo, time, user signals when permitted).
The SSP/exchange sends the request to multiple DSPs.
DSPs evaluate the impression against campaign goals, targeting rules, and budget constraints.
DSPs submit bids (and select an eligible creative).
The SSP runs the auction and returns the winning ad to render.
This process is extremely fast, but “fast” can hide sloppy decisioning. In mobile, many performance issues come from what you didn’t filter, not what you optimised.
⚡ In mobile, “cheap scale” is often just a different way of saying “you bought the problems.”
Mobile programmatic vs desktop programmatic
If your organisation still applies desktop assumptions to mobile, you’ll usually see it in three places: IDs, measurement, and quality controls.
One data point that’s hard to ignore: Pixalate’s Q3 2025 global analysis reported higher invalid traffic (IVT) rates on mobile app traffic than on mobile web traffic (they cite mobile apps at 33% IVT vs mobile web at 21% IVT, globally). You don’t need to panic at the headline number, but you should treat it as a sign that mobile app supply demands tighter controls.
Targeting in mobile programmatic advertising
Mobile targeting is powerful, but it’s also where privacy and platform rules can bite you. The best mobile programmatic strategies treat targeting as a ladder:
Start with what you know (first-party data, contextual signals)
Add privacy-safe identifiers where permitted
Use probabilistic methods carefully, and validate with incrementality
Device IDs and privacy-safe identifiers
On mobile, “identity” starts with the operating system. Android and iOS handle advertising identifiers very differently, and those differences determine what targeting, frequency control, and attribution are realistic. Start with Android because it still supports a persistent advertising identifier for many use cases, then contrast that with iOS, where access is more tightly permissioned.
⚡ Platform dynamics are shifting fast: AppsFlyer reports iOS paid installs rose in the U.S. (+31%) while Android showed +8% in the same market in its 2025 data trends analysis. If you’re blending iOS and Android into one set of assumptions, you’re probably hiding real differences in cost and performance.
Android: Advertising ID (AAID)
On Android, the advertising identifier is still the primary addressable signal for many programmatic mobile ads, especially for app install and in-app event optimization. But it was designed with consumer controls in mind, so it’s not a “forever ID.” Any strategy that depends on it needs to account for resets, opt-outs, and the gradual shift toward privacy-first APIs.
Google describes the advertising ID as user-resettable and user-deletable, provided by Google Play services.
Android documentation (updated Oct 2025) explains how apps obtain a consistent advertising ID on a per-device-user basis, and references user controls for reset and opt-out.
iOS: IDFA and ATT
iOS flips the default assumption. Instead of “tracking unless the user opts out,” the system now treats cross-app tracking as something you earn through explicit consent. That changes how much deterministic targeting you can do, how stable your frequency caps are, and how much you can rely on user-level attribution—especially for programmatic mobile advertising focused on performance.
Apple requires apps to request permission using AppTrackingTransparency (ATT) to track users and access the advertising identifier; without permission, the IDFA is effectively unavailable (zeroed).
In practical terms, iOS targeting that depends on cross-app tracking is consent-gated, and your addressable pool varies by app category, region, and how value is explained to users.
A real-world benchmark
Because ATT consent is a user choice, there is no universal access rate you can bank on. The practical move is to plan with a benchmark range, then validate it against your own app category, geos, and onboarding experience. The point of a benchmark isn’t to predict perfectly—it’s to prevent unrealistic targeting and measurement assumptions in your media plan.
Adjust’s 2025 benchmark reports an average ATT opt-in rate of 35% (with wide variation). Use that as a planning assumption, not a promise. Your own consent rate is a function of UX, category trust, and value exchange.
Privacy-safe alternatives (often combined)
When device-level identifiers are limited, programmatic mobile advertising doesn’t stop—it shifts. Instead of one dominant ID powering everything, campaigns increasingly rely on a stack of signals that work together: first-party data, publisher context, modeled audiences, and privacy-preserving measurement. The best approach is usually hybrid, picking the minimum set of signals you need to hit the objective without creating a fragile measurement setup.
First-party IDs (logged-in email/phone, hashed and consented)
Publisher-provided IDs (within an app network or publisher group)
Cohort and interest signals (Topics-like systems, on-device interest groups)
Contextual signals (content, placement type, app category, time, geo at a coarse level)
Clean rooms and secure matching (for larger advertisers with stable 1P data)
👉 If you need a general roadmap for “ID-less” approaches, IAB Tech Lab’s guidance is a good reference point for how the industry is thinking about addressability beyond traditional cross-context identifiers.
Location and contextual targeting
Location is one of mobile’s biggest advantages and one of its easiest foot-guns.
High-performing location targeting usually looks like:
Coarse location for prospecting (city/region, not “standing outside competitor store”)
Tighter geofencing only when you can justify the use case (and have consent)
Contextual overlays (app category, content theme, time of day, local conditions)
Regulatory and self-regulatory standards also matter here. The Network Advertising Initiative (NAI) updated location data privacy standards in 2024, including added clarity around sensitive points of interest.
📌 For most performance teams, the key action is simple: treat precise location as a sensitive signal, and make your vendor explain sourcing, consent, and suppression for sensitive POIs.
Audience and behavioral targeting
“Behavioral targeting” in mobile programmatic usually means one of three things:
On-platform app behavior (events inside your app, your CRM, your site)
In-network behavior (publisher or supply-side segments, often opaque)
Third-party behavioral segments (cross-app data, increasingly constrained by privacy rules and ID loss)
Budgeting and cost management
Budgeting in mobile programmatic is not just “pick a CPM.” It’s pick a market, define what quality means, then control how budget flows through the system.
CPM cost drivers (what actually moves prices)
Mobile CPMs vary by country, format, placement, and demand density. Even within one DSP, two line items can price very differently if they differ on any of these levers:
Format and attention cost (rewarded video ≠ banner; full-screen ≠ small)
Placement type (feed, interstitial, end card, rewarded, native)
Supply path (direct vs resold; curated PMPs often cost more but waste less)
User value density (high-income geo, high-intent contexts, app categories)
Measurement constraints (SKAN-only optimisation can change clearing prices)
Fraud and viewability filtering (strict filters reduce supply, often raising CPM but improving outcomes)
A useful way to explain the “why did my CPM go up?” question to stakeholders is: you didn’t necessarily pay more for ads; you paid more to avoid junk.
The hidden budget problem: working media vs total media
One of the most practical recent analyses comes from the ANA’s programmatic transparency work, which highlights how much spend can be absorbed by the supply chain and low-quality supply.
In the ANA’s 2024 findings, only 43.9% of every $1,000 entering a DSP was estimated to reach the consumer (their “working media” concept), and improving supply quality increased that figure (they also quantify an improvement of $79 per $1,000 in one scenario).
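The arithmetic behind the “working media” framing is simple, and worth making explicit when you present it to stakeholders. The numbers below are the ANA's published estimates; the function itself is just an illustrative sketch:

```python
def working_media(budget: float, working_share: float) -> float:
    """Portion of spend estimated to actually reach the consumer."""
    return budget * working_share

budget = 1_000.00
baseline = working_media(budget, 0.439)  # ANA 2024 estimate: 43.9%
improved = baseline + 79.00              # quoted supply-quality gain per $1,000
print(f"baseline working media: ${baseline:.2f}")  # $439.00
print(f"with better supply:     ${improved:.2f}")  # $518.00
```

In other words, more than half of each dollar can be absorbed before an impression renders, which is why supply selection deserves budget-level attention.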
You don’t have to agree with every methodology detail for the takeaway to matter: budget efficiency is partly a supply selection problem.
Budget pacing and optimization
Mobile pacing breaks when the platform can’t reconcile three constraints at once:
You want stable delivery across the flight
You want strict audience rules
You also want strict quality and measurement rules
To keep pacing predictable:
Start wider, then tighten: Launch with broader eligibility, then narrow once you’ve learned where performance and quality overlap.
Separate prospecting and retargeting budgets: They behave differently and need different frequency and measurement rules.
Use guardrails, not constant manual overrides: Over-tweaking often creates volatility (and “learning” resets).
Frequency and reach management
Frequency is more complicated on mobile because identity is more fragmented, especially on iOS. You can still manage it, but you need to understand the level you’re controlling:
Device-level frequency (stronger on Android with AAID; weaker on iOS without consent)
App/session-level frequency (inside a publisher environment)
Cohort-level frequency (modeled reach controls)
In practice, high-performing mobile teams do two things consistently:
They apply frequency caps by funnel stage (awareness vs consideration vs retargeting).
They monitor incremental reach, not just impression volume.
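Funnel-stage caps are easy to operationalize as a lookup of per-stage limits checked against whatever impression counter you can actually maintain (device, cohort, or session level). A sketch, with made-up cap values:

```python
from collections import defaultdict

# Illustrative per-stage caps over a rolling window; tune to your own data.
CAPS = {"awareness": 6, "consideration": 4, "retargeting": 3}

impressions = defaultdict(int)  # key: (user_or_cohort_key, funnel_stage)

def can_serve(key: str, stage: str) -> bool:
    """True if this key is still under the cap for this stage."""
    return impressions[(key, stage)] < CAPS[stage]

def record(key: str, stage: str) -> None:
    impressions[(key, stage)] += 1

# Serve up to the retargeting cap, then stop.
served = 0
while can_serve("cohort-42", "retargeting"):
    record("cohort-42", "retargeting")
    served += 1
print(served)  # 3
```

The key design point is the counter's key: on Android with AAID it can be a device, on consent-limited iOS it often has to be a cohort or session.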
Measurement and attribution in mobile programmatic
Mobile measurement is where most programmatic strategies either earn credibility or lose it. The reason is simple: mobile has more “missingness”—missing IDs, missing clicks, missing visibility into the last mile.
Mobile attribution models (and when each lies to you)
Common models you’ll see in mobile programmatic reporting:
Last-click attribution: Simple, often too narrow for upper-funnel mobile.
View-through attribution (VTA): Useful in moderation; easy to abuse without strict rules.
Multi-touch attribution (MTA): Harder to sustain under platform privacy constraints.
SKAdNetwork / OS-level attribution: Privacy-preserving, but aggregated and delayed.
Modeled attribution: Increasingly common; must be validated with experiments.
The model choice should reflect the conversion path. If you’re running app install + early events, OS-level attribution is unavoidable on iOS for most advertisers.
Post-click vs post-view measurement
Post-view measurement is where you need the clearest internal policy, because it can inflate perceived performance.
A responsible post-view approach usually includes:
A short view-through window (hours to a couple days, depending on cycle)
Format-based eligibility (e.g., full-screen video may qualify; tiny banners often shouldn’t)
Viewability requirements (not just “served,” but “had a chance to be seen”)
Deduping rules (don’t double-count with click-based conversions)
Clear separation in reporting: click-attributed vs view-attributed
If you can’t defend your VTA policy in one paragraph, it’s probably too generous.
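The dedup rule in particular is easy to state and easy to get wrong. Below is a minimal sketch of click-over-view deduplication within each conversion window; the function name and window lengths are assumptions, not a standard:

```python
from datetime import datetime, timedelta

CLICK_WINDOW = timedelta(days=7)
VIEW_WINDOW = timedelta(hours=24)  # deliberately short, per policy

def attribute(conversion_time, clicks, views):
    """Clicks always win over views; otherwise an eligible view wins."""
    eligible_clicks = [t for t in clicks
                       if timedelta(0) <= conversion_time - t <= CLICK_WINDOW]
    if eligible_clicks:
        return "click"
    eligible_views = [t for t in views
                      if timedelta(0) <= conversion_time - t <= VIEW_WINDOW]
    if eligible_views:
        return "view"
    return "unattributed"

now = datetime(2026, 2, 20, 12, 0)
# A view 3 hours ago does NOT override a click 2 days ago:
print(attribute(now, [now - timedelta(days=2)], [now - timedelta(hours=3)]))  # click
print(attribute(now, [], [now - timedelta(hours=3)]))                          # view
print(attribute(now, [], [now - timedelta(days=3)]))                           # unattributed
```

Reporting that separates the "click" and "view" buckets, rather than summing them, is what keeps VTA defensible.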
iOS measurement: SKAdNetwork and AdAttributionKit
Apple’s privacy-first attribution frameworks shape mobile optimisation on iOS.
Key points that matter to performance teams:
ATT governs cross-app tracking and access to the advertising identifier. Without permission, you cannot rely on IDFA-based user-level attribution.
SKAdNetwork 4 supports multiple conversion windows and up to three postbacks for the winning attribution, helping advertisers understand engagement over time while maintaining crowd anonymity controls.
AdAttributionKit is positioned by Apple as a way to measure campaign success while maintaining user privacy, and it interoperates with SKAdNetwork concepts.
Operationally, this usually means:
Your iOS reporting is aggregated and delayed
You’ll do more optimisation around cohorts, not individuals
You’ll rely more heavily on creative testing, geo experiments, and lift studies
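Because SKAN reporting arrives as aggregated, delayed postbacks rather than user-level events, reporting pipelines typically bucket postbacks by campaign and conversion window before computing anything. A toy aggregation, with assumed field names and simulated data:

```python
from collections import Counter

# Simulated SKAdNetwork 4 postbacks:
# (campaign_id, postback_sequence, coarse_conversion_value)
postbacks = [
    ("cmp_a", 0, "high"), ("cmp_a", 0, "low"), ("cmp_a", 1, "medium"),
    ("cmp_b", 0, "low"),  ("cmp_b", 0, "low"),
]

# Count installs per campaign from first-window (sequence 0) postbacks only;
# later-sequence postbacks describe re-engagement, not new installs.
installs = Counter(c for c, seq, _ in postbacks if seq == 0)
print(installs["cmp_a"], installs["cmp_b"])  # 2 2
```

Everything downstream (CPI, cohort quality) is computed on these aggregates, which is why iOS dashboards lag and why user-level joins simply are not available.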
Incrementality and lift testing
When IDs weaken, experiments get more valuable.
Incrementality answers: Would this outcome have happened anyway? At a minimum, mobile programmatic teams should run lift tests for:
Retargeting (especially)
Prospecting in saturated markets
Major budget increases or creative shifts
Practical incrementality approaches include:
Geo holdouts (best for offline or regionally distributed outcomes)
Audience split tests (randomised where possible)
PSA tests (common in video environments)
Time-based holdouts (less ideal, but sometimes workable)
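Whichever design you use, the core lift arithmetic is the same: compare conversion rates between exposed and holdout groups. A sketch with illustrative numbers:

```python
def lift(treated_conversions: int, treated_n: int,
         control_conversions: int, control_n: int) -> float:
    """Relative incremental lift of the treated group over the holdout."""
    cr_t = treated_conversions / treated_n
    cr_c = control_conversions / control_n
    return (cr_t - cr_c) / cr_c

# Example: geo holdout with 2.4% (exposed) vs 2.0% (holdout) conversion rates.
print(round(lift(240, 10_000, 200, 10_000), 3))  # 0.2  (i.e., +20% lift)
```

A real test also needs a significance check on that difference; the point here is that lift is computed against what the holdout did anyway, not against attributed conversions.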
⚡ Attribution is a measurement tool. Incrementality is a decision tool. Don’t mix them up.
Fraud, invalid traffic, and supply quality
Fraud isn’t just a “verification vendor” problem. It’s also a supply selection and transaction choice problem.
Two recent signals worth paying attention to:
Pixalate reported higher IVT rates on mobile apps than mobile web in its global Q3 2025 analysis (33% vs 21%).
DoubleVerify’s 2025 release summarising 2024 trends notes bot fraud growth and flags mobile app video as a key driver at points in the year.
You shouldn’t use any single report as gospel, but the pattern is consistent across the industry: in-app programmatic video can attract sophisticated invalid traffic.
On mobile, the made-for-advertising (“MFA”) concept shows up differently than on the open web. You’ll see:
Spoofed apps (inventory masquerading as a premium app)
Resold supply paths that hide the true source
Incentivized or low-intent placements that generate cheap volume
Supply-chain standards help, but only when enforced:
ads.txt and sellers.json make the selling path inspectable
app-ads.txt exists because in-app inventory remained vulnerable to certain spoofing patterns
There’s also progress on device authenticity signals. The IAB Tech Lab has been working on device attestation capabilities within measurement tooling to help combat spoofing (a direction that matters for mobile as much as CTV).
Brand safety and viewability
Brand safety in mobile is not only about content adjacency. It’s also about:
App category and app store metadata
Placement type (e.g., user-generated content surfaces)
Viewability and audibility for video formats
Supply path transparency
Mobile programmatic in omnichannel strategies
Mobile rarely sits alone in a media plan now. It’s often the connective tissue between screens and between online and offline outcomes.
Mobile + CTV coordination
CTV is where a lot of brand budgets are flowing, but mobile is still where the device-level response often happens (search, site visits, app installs).
Three practical coordination patterns:
Reach extension: Use mobile to extend CTV reach into younger or more mobile-heavy cohorts.
Sequential messaging: CTV for narrative; mobile for product proof and CTA.
Measurement alignment: Unify event taxonomy so you’re not comparing apples to oranges.
CTV measurement itself is moving toward privacy-respecting server-side approaches (conversion APIs, or CAPIs) to improve outcome measurement, and that direction matters for how mobile and CTV reporting can reconcile.
Mobile retargeting and sequential messaging
Mobile retargeting still works, but it needs tighter governance than it did pre-ATT.
A sensible sequential framework looks like:
Prospecting: Contextual + broad audiences, focus on creative learning.
Consideration: Site visitors / app engagers (first-party), tighter creative rotation.
Conversion: High-intent cohorts, strict frequency, short windows.
Retention: LTV segments, value-based bidding when possible.
The most common failure mode is blasting retargeting to everyone who merely touched your funnel, then crediting VTA for the result.
Offline-to-online attribution
Mobile is often the best “bridge” channel for offline outcomes, but only if measurement is handled carefully.
Examples:
Store visit lift tests (geo holdouts)
QR or offer redemption strategies
Loyalty-linked conversions (where consented)
📌 The safest mindset is: treat offline attribution as directional unless you can run holdouts and validate against multiple signals.
Best practices for effective mobile programmatic advertising
This section is intentionally tactical. The goal is to help you avoid the most common “we spent a lot and learned nothing” outcomes.
Choose inventory quality over scale
In mobile programmatic, you’re almost always better off with:
fewer supply paths,
more transparency,
stricter verification,
curated deals for core learnings,
…than with broad, cheap scale.
Actionable steps:
Prefer direct paths when available.
Use sellers.json/supply chain signals to reduce resold clutter.
Treat app-ads.txt compliance as table stakes for in-app buys.
⚡ If you can’t name the apps/sites where your spend landed, you didn’t buy “efficiently.” You bought blind.
Optimize creatives for mobile-first formats
Mobile creative performance is often limited by basic mismatch:
wrong aspect ratio,
unreadable type,
slow load,
unclear CTA.
A simple mobile creative checklist:
Use large, high-contrast text (thumb-scroll friendly).
Front-load the value prop in the first 1–2 seconds of video.
Keep landing pages fast and frictionless (deep links for apps).
Build variations that match placement types (feed vs interstitial vs rewarded).
Test, learn, and iterate continuously
Mobile programmatic improves when you treat it like a lab, not a vending machine.
A clean testing rhythm:
Weekly creative experiments (few variables at a time)
Bi-weekly supply audits (where did spend go, what did it buy?)
Monthly measurement calibration (attribution windows, VTA policy, lift test planning)
Set realistic performance benchmarks
Benchmarks should be:
segmented by OS (iOS vs Android),
segmented by inventory type (in-app vs web),
tied to your measurement constraints (SKAN vs deterministic).
If you keep one blended benchmark, you’ll end up “optimising” toward the easiest-to-measure slice, not the best business outcome.
The future of mobile programmatic advertising
Mobile programmatic is being reshaped by privacy, measurement design, and supply quality. Here are the trends most likely to shape 2026 planning.
Privacy-preserving attribution becomes the default, not the backup
On iOS, privacy-safe attribution frameworks are already central:
SKAdNetwork 4’s multi-window postbacks and crowd anonymity controls
AdAttributionKit’s direction toward cross-channel privacy-safe measurement
For advertisers, this pushes two strategic moves:
Invest in first-party event quality (clean funnels, consistent schemas)
Treat incrementality as a core competency, not an occasional project
Android evolves through Privacy Sandbox, but keeps legacy controls in play
Google’s Privacy Sandbox for Android work continues to expand, focusing on APIs such as Topics, Protected Audience, and Attribution Reporting, with ongoing progress updates and staged releases. Google also notes it intended to support existing ads platform features for at least two years while new solutions are designed and tested.
For buyers, the implication is not “Android will suddenly be iOS.” It’s more nuanced:
Expect more privacy controls, but also a long coexistence period.
Prepare to run mixed measurement stacks across Android versions and OEM environments.
Supply quality becomes a budgeting lever
The market is getting more explicit about waste and quality. The ANA’s transparency work, and the broader attention on MFA and supply-chain leakage, is pushing advertisers to treat quality as something you budget for—not merely a checkbox.
This shows up in practice as:
more curated marketplaces,
stricter allowlists,
deeper supply path optimisation,
and more investment in verification.
Contextual and “ID-less” strategies get smarter
As cross-app IDs shrink on iOS and probabilistic methods face scrutiny, contextual approaches improve:
better app-content taxonomies,
better placement-level performance modelling,
better creative-context matching.
IAB Tech Lab’s work on ID-less guidance reflects how seriously the ecosystem is treating this shift.
Creative becomes the primary optimisation surface
When user-level signals degrade, creative testing becomes the highest-leverage knob left.
In 2026, teams that win in mobile programmatic will usually have:
a disciplined creative testing pipeline,
strong landing experiences (and deep links),
and clean measurement policies that don’t reward inflated attribution.
Conclusion: why mobile programmatic remains a core channel
Mobile programmatic advertising keeps getting more complex, but it hasn’t gotten less useful. The channel still sits where people spend time, where intent shows up quickly, and where you can connect upper-funnel exposure to measurable outcomes—provided you buy carefully and measure honestly. The teams that win in 2026 won’t be the ones who “chase scale.” They’ll be the ones who control supply, set clear measurement rules, and treat mobile as a key connector in an omnichannel plan.
Key takeaways
When mobile programmatic makes sense: Use it when you need mobile-first reach plus performance control—app install and re-engagement, mobile web conversion, or “assist” roles in journeys that start on CTV/desktop and finish on a phone. It’s also a strong fit when you need fast testing (creative, audiences, placements) without waiting on manual buys.
How to manage costs and quality: Treat cost efficiency as a supply-path problem, not just a bid problem. Prioritise direct, transparent paths, reduce unnecessary bid hops, and apply IVT protection and publisher-level exclusions early. Smart Supply is designed to do exactly that by curating supply based on your KPI, filtering low-performing publishers, removing indirect traffic, and keeping the path to inventory clean.
Why measurement matters more than clicks: Clicks are easy to count and easy to misread. In mobile, especially in-app, you need a measurement framework that separates post-click from post-view, enforces sane windows, and validates performance with incrementality tests when signals are limited. If your reporting can’t explain why something worked, you’ll struggle to scale it responsibly.
How mobile fits into omnichannel strategies: Mobile is often the “handoff” channel—CTV drives awareness, and mobile captures response; desktop supports research, and mobile closes; offline campaigns spark intent, and mobile makes it measurable. Plan mobile as a coordinated layer (sequencing, frequency rules, unified audiences), not an isolated line item.
If you want help making mobile programmatic more predictable—without locking yourself into a single platform—Smart Supply is AI Digital’s supply-side curation service built to improve efficiency and outcomes. It’s DSP-agnostic, works across display, streaming video, CTV, and audio, and issues custom deal IDs based on your inventory type and KPI targets. It also neutralizes inventory bias by prioritising performance instead of an SSP or DSP’s preferred supply.
If you’d like to pressure-test your current supply paths, reduce waste, and build curated deal IDs that match your KPI (not someone else’s “standard package”), get in touch with AI Digital about Smart Supply. Share your primary goal (CPA, CPI, ROAS, attention/completions), target markets, and the inventory mix you’re running. Smart Supply can typically activate quickly—deal IDs can be issued within 24 hours—and there’s no minimum spend to start testing.
Blind spot
Key issues
Business impact
AI Digital solution
Lack of transparency in AI models
• Platforms own AI models and train on proprietary data • Brands have little visibility into decision-making • "Walled gardens" restrict data access
• Inefficient ad spend • Limited strategic control • Eroded consumer trust • Potential budget mismanagement
Open Garden framework providing: • Complete transparency • DSP-agnostic execution • Cross-platform data & insights
Optimizing ads vs. optimizing impact
• AI excels at short-term metrics but may struggle with brand building • Consumers can detect AI-generated content • Efficiency might come at cost of authenticity
• Short-term gains at expense of brand health • Potential loss of authentic connection • Reduced effectiveness in storytelling
Smart Supply offering: • Human oversight of AI recommendations • Custom KPI alignment beyond clicks • Brand-safe inventory verification
The illusion of personalization
• Segment optimization rebranded as personalization • First-party data infrastructure challenges • Personalization vs. surveillance concerns
• Potential mismatch between promise and reality • Privacy concerns affecting consumer trust • Cost barriers for smaller businesses
Elevate platform features: • Real-time AI + human intelligence • First-party data activation • Ethical personalization strategies
AI-Driven efficiency vs. decision-making
• AI shifting from tool to decision-maker • Black box optimization like Google Performance Max • Human oversight limitations
• Strategic control loss • Difficulty questioning AI outputs • Inability to measure granular impact • Potential brand damage from mistakes
Managed Service with: • Human strategists overseeing AI • Custom KPI optimization • Complete campaign transparency
Fig. 1. Summary of AI blind spots in advertising
Dimension
Walled garden advantage
Walled garden limitation
Strategic impact
Audience access
Massive, engaged user bases
Limited visibility beyond platform
Reach without understanding
Data control
Sophisticated targeting tools
Data remains siloed within platform
Fragmented customer view
Measurement
Detailed in-platform metrics
Inconsistent cross-platform standards
Difficult performance comparison
Intelligence
Platform-specific insights
Limited data portability
Restricted strategic learning
Optimization
Powerful automated tools
Black-box algorithms
Reduced marketer control
Fig. 2. Strategic trade-offs in walled garden advertising.
Core issue
Platform priority
Walled garden limitation
Real-world example
Attribution opacity
Claiming maximum credit for conversions
Limited visibility into true conversion paths
Meta and TikTok's conflicting attribution models after iOS privacy updates
Data restrictions
Maintaining proprietary data control
Inability to combine platform data with other sources
Amazon DSP's limitations on detailed performance data exports
Cross-channel blindspots
Keeping advertisers within ecosystem
Fragmented view of customer journey
YouTube/DV360 campaigns lacking integration with non-Google platforms
Black box algorithms
Optimizing for platform revenue
Reduced control over campaign execution
Self-serve platforms using opaque ML models with little advertiser input
Performance reporting
Presenting platform in best light
Discrepancies between platform-reported and independently measured results
Consistently higher performance metrics in platform reports vs. third-party measurement
Fig. 1. The Walled garden misalignment: Platform interests vs. advertiser needs.
Key dimension
Challenge
Strategic imperative
ROAS volatility
Softer returns across digital channels
Shift from soft KPIs to measurable revenue impact
Media planning
Static plans no longer effective
Develop agile, modular approaches adaptable to changing conditions
Brand/performance
Traditional division dissolving
Create full-funnel strategies balancing long-term equity with short-term conversion
| Capability | Key features | Benefits | Performance data |
| --- | --- | --- | --- |
| Elevate forecasting tool | • Vertical-specific insights • Historical data from past economic turbulence • "Cascade planning" functionality • Real-time adaptation | • Provides agility to adjust campaign strategy based on performance • Shows which media channels work best to drive efficient and effective performance • Confident budget reallocation • Reduces reaction time to market shifts | • Dataset from 10,000+ campaigns • Cuts response time from weeks to minutes |
|  |  | • Reaches people most likely to buy • Avoids wasting impressions and budget on poor-performing placements • Context-aligned messaging | • 25+ billion bid requests analyzed daily • 18% improvement in working media efficiency • 26% increase in engagement during recessions |
| Full-funnel accountability | • Links awareness campaigns to lower-funnel outcomes • Tests whether ads actually drive new business • Measures brand perception changes • "Ask Elevate" AI chat assistant | • Upper-funnel to outcome connection • Sentiment-shift tracking • Personalized messaging • Helps balance immediate sales vs. long-term brand building | • Natural-language data queries • True business impact measurement |
| Open Garden approach | • Cross-platform and cross-channel planning • Not locked into specific platforms • Unified cross-platform reach • Shows exactly where money is spent | • Reduces complexity across channels • Performance-based ad placement • Rapid budget reallocation • Eliminates platform-specific commitments while providing optimization and agility across platforms | • Coverage across all inventory sources • Full visibility into spending • Freedom to pivot across platforms, since you're not locked into a single one |
Fig. 1. How AI Digital helps during economic uncertainty.
| Trend | What it means for marketers |
| --- | --- |
| Supply & demand lines are blurring | Platforms from Google (P-Max) to Microsoft are merging optimization and inventory in one opaque box. Expect more bundled "best available" media where the algorithm, not the trader, decides channel and publisher mix. |
| Walled gardens get taller | Microsoft's O&O set now spans Bing, Xbox, Outlook, Edge, and LinkedIn, which just launched revenue-sharing video programs to lure creators and ad dollars. (Business Insider) |
| Retail & commerce media shape strategy | Microsoft's Curate lets retailers and data owners package first-party segments, an echo of Amazon's and Walmart's approaches. Agencies must master seller-defined audiences as well as buyer-side tactics. |
| AI oversight becomes critical | Closed AI bidding means fewer levers for traders. Independent verification, incrementality testing, and commercial guardrails rise in importance. |
Fig. 1. Platform trends and their implications.
| Metric | Connected TV (CTV) | Linear TV |
| --- | --- | --- |
| Video Completion Rate | 94.5% | 70% |
| Purchase Rate After Ad | 23% | 12% |
| Ad Attention Rate | 57% (prefer CTV ads) | 54.5% |
| Viewer Reach (U.S.) | 85% of households | 228 million viewers |
|  | Identify and categorize audience groups based on behaviors, preferences, and characteristics | Michaels Stores: Implemented a genAI platform that increased email personalization from 20% to 95%, leading to a 41% boost in SMS click-through rates and a 25% increase in engagement. | Estée Lauder: Partnered with Google Cloud to leverage genAI technologies for real-time consumer feedback monitoring and analyzing consumer sentiment across various channels. | High | Medium |
| Automated ad campaigns | Automate ad creation, placement, and optimization across various platforms | Showmax: Partnered with AI firms to automate ad creation and testing, reducing production time by 70% while streamlining their quality assurance process. | Headway: Employed AI tools for ad creation and optimization, boosting performance by 40% and reaching 3.3 billion impressions while incorporating AI-generated content in 20% of their paid campaigns. | High | High |
| Brand sentiment tracking | Monitor and analyze public opinion about a brand across multiple channels in real time | L’Oréal: Analyzed millions of online comments, images, and videos to identify potential product innovation opportunities, effectively tracking brand sentiment and consumer trends. | Kellogg Company: Used AI to scan trending recipes featuring cereal, leveraging this data to launch targeted social campaigns that capitalize on positive brand sentiment and culinary trends. | High | Low |
| Campaign strategy optimization | Analyze data to predict optimal campaign approaches, channels, and timing | DoorDash: Leveraged Google’s AI-powered Demand Gen tool, which boosted its conversion rate by 15 times and improved cost per action efficiency by 50% compared with previous campaigns. | Kitsch: Employed Meta’s Advantage+ shopping campaigns with AI-powered tools to optimize campaigns, identifying and delivering top-performing ads to high-value consumers. | High | High |
| Content strategy | Generate content ideas, predict performance, and optimize distribution strategies | JPMorgan Chase: Collaborated with Persado to develop LLMs for marketing copy, achieving up to 450% higher click-through rates compared with human-written ads in pilot tests. | Hotel Chocolat: Employed genAI for concept development and production of its Velvetiser TV ad, which earned the highest-ever System1 score for a domestic appliance commercial. | High | High |
| Personalization strategy development | Create tailored messaging and experiences for consumers at scale | Stitch Fix: Uses genAI to help stylists interpret customer feedback and provide product recommendations, effectively personalizing shopping experiences. | Instacart: Uses genAI to offer customers personalized recipes, meal-planning ideas, and shopping lists based on individual preferences and habits. | Medium | Medium |
Questions? We have answers
How is mobile programmatic different from programmatic advertising?
Mobile programmatic advertising is a subset of programmatic advertising where the impressions happen on phones and tablets, using mobile-specific ad inventory such as in-app placements (SDK-based) and mobile web. The biggest difference is the identifier and measurement layer: on iOS, access to the IDFA is permission-gated by Apple’s AppTrackingTransparency rules, and app measurement often relies on privacy-preserving frameworks like SKAdNetwork rather than user-level tracking.
Is mobile programmatic advertising privacy compliant?
It can be, as long as your programmatic mobile advertising setup respects consent, platform policies, and data minimization. On iOS, you must request permission to track and access the advertising identifier using AppTrackingTransparency, and without permission the IDFA is unavailable (all zeros). On Android, the Advertising ID is designed to be user-resettable and users can opt out of ad personalization, which your data practices need to honor.
How is mobile ad performance measured?
Mobile performance is usually measured with a mix of platform and third-party tools: click and impression logs from the DSP, in-app event measurement from an MMP/SDK, and OS-level attribution for iOS via SKAdNetwork (and related Apple frameworks) when user-level tracking isn’t available. SKAdNetwork 4 supports multiple conversion windows with up to three postbacks, which helps you evaluate early vs later post-install value while staying privacy-safe.
What minimum budget makes mobile programmatic effective?
There isn’t one universal number, because the “minimum” depends on your targeting breadth, geo, format, and whether you’re buying premium or open exchange supply, but you generally need enough spend to test multiple creatives and placements long enough to see stable patterns. As a rule of thumb, if your daily budget is so small that you can’t get meaningful delivery across at least a few publishers and creative variants, optimization will mostly be noise rather than signal. The more constrained your targeting (or the more premium the inventory), the more budget you’ll need for the same learning.
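One way to apply that rule of thumb is to back out how many impressions each creative-by-publisher cell can actually receive per day. A minimal sketch; the CPM and cell counts below are illustrative assumptions, not benchmarks:

```python
def impressions_per_cell(daily_budget: float, cpm: float,
                         creatives: int, publishers: int) -> float:
    """Estimate daily impressions available per creative x publisher cell.

    cpm is the cost per 1,000 impressions. Assumes budget is spread
    evenly, which real pacing never does exactly.
    """
    total_impressions = daily_budget / cpm * 1000
    return total_impressions / (creatives * publishers)

# e.g. $200/day at a $5 CPM across 4 creatives x 5 publishers
# yields 40,000 impressions/day, or 2,000 per cell.
```

If that per-cell number is in the tens rather than the thousands, expect the optimizer to be learning from noise.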
What is an example of mobile programmatic advertising?
A common example is an app marketer running programmatic mobile ads through a DSP to buy in-app interstitial and native placements, optimizing bids toward first-purchase events, and using OS-level attribution on iOS while running stricter post-click measurement on Android. Another example is a retailer buying mobile web inventory to retarget product viewers with dynamic creative, then validating impact with a holdout test to avoid over-crediting view-through.
How to measure mobile attribution?
Mobile attribution is measured by connecting ad exposure to outcomes with clear rules: define your conversion event(s), set click and view attribution windows, dedupe across touchpoints, and separate click-attributed vs view-attributed results. On iOS, you’ll often rely on SKAdNetwork’s aggregated postbacks (including multiple conversion windows in SKAN 4) and then validate with incrementality tests when user-level visibility is limited.
What is mobile media buying?
Mobile media buying is the process of purchasing mobile ad inventory across in-app and mobile web environments, either via mobile programmatic advertising (DSP/SSP transactions) or via direct deals with publishers. In practice, it includes planning formats and placements, setting targeting and frequency rules, controlling supply quality, and aligning measurement to your business outcomes.
Have other questions?
If you have more questions, contact us so we can help.