Omnicom, Google Debut AI to Pre-Test Video Ads in AMET

Marketers chasing attention in crowded video feeds have long gambled budgets on gut feel and on post-campaign learnings that arrive too late to rescue underperforming ads. That lag has become a strategic liability as video spend concentrates on platforms where seconds define outcomes. A new collaboration between Omnicom Advertising and Google set out to move the decision line upstream by launching an AI-powered creative intelligence system in the Middle East that evaluates and improves video ads before they run. The pilot with telecommunications provider du combined YouTube’s ABCD framework with Omnicom’s proprietary agents and a regional cultural layer, aiming to replace broad creative debates with precise instructions on what to change, why it matters, and how it will likely affect performance. The effort framed AI not as a scoreboard but as a coach: interpret signals, diagnose friction, and prescribe edits while there is still time to act.

Why Pre-Flight Intelligence Matters

Inside the System: ABCD, Brave Bot, and Cultural Signals

At the core of the system sat Google’s ABCD AI detector, a model tuned to assess four pillars proven to influence YouTube outcomes: Attention, Branding, Connection, and Direction. It parsed early hooks, checked for early and consistent brand presence, scored emotional resonance, and verified whether a call to action appeared with clarity and timing. Omnicom layered in Brave Bot, a proprietary agent built to interrogate distinctiveness: Was the opening distinctive in its category? Did the execution challenge expected tropes? Were sonic and visual cues timely? Did the narrative move beyond safe claims? A third component applied regional cultural intelligence, examining tonal fit, contemporary behaviors, and human dynamics that shape reception in Gulf markets and neighboring regions. Together these engines translated qualitative opinions into practical tasks, such as accelerating the first three seconds, surfacing the brand mark by second two, or aligning on-screen copy with voiceover for a firmer CTA.
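To make the mechanics concrete, here is a minimal sketch of how pillar-level detector scores might be rolled up into a single effectiveness percentage with prescriptive flags. The pillar names follow the ABCD framework described above, but the weights, threshold, suggested fixes, and all function names are illustrative assumptions, not the partners’ actual implementation.

```python
# Hypothetical sketch of pillar scoring and prescriptive flagging.
# Weights, threshold, and fix text are assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class PillarScore:
    name: str     # one of the ABCD pillars, e.g. "Attention"
    score: float  # 0.0-1.0, as a detector might emit
    weight: float # relative importance for this channel (assumed)


def effectiveness(pillars: list[PillarScore]) -> float:
    """Weighted average of pillar scores, expressed as a percentage."""
    total_weight = sum(p.weight for p in pillars)
    return 100 * sum(p.score * p.weight for p in pillars) / total_weight


def prescriptions(pillars: list[PillarScore], threshold: float = 0.6) -> list[str]:
    """Flag pillars below a threshold with an illustrative edit suggestion."""
    fixes = {
        "Attention": "compress the opening hook to under two seconds",
        "Branding": "surface the brand mark before the first cut",
        "Connection": "move the key emotional beat earlier",
        "Direction": "mirror the voiceover verb in the on-screen CTA",
    }
    return [fixes[p.name] for p in pillars if p.score < threshold and p.name in fixes]


pillars = [
    PillarScore("Attention", 0.45, 0.30),
    PillarScore("Branding", 0.80, 0.25),
    PillarScore("Connection", 0.70, 0.25),
    PillarScore("Direction", 0.55, 0.20),
]
print(round(effectiveness(pillars), 1))  # a single headline score, e.g. 62.0
print(prescriptions(pillars))            # edits for the weakest pillars
```

The point of the sketch is the shape of the output, not the math: instead of a bare score, low pillars map directly to concrete edits, which is the "coach, not scoreboard" framing the partners describe.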

Proof in Practice: The du Pilot’s Signals and Fixes

The du pilot provided a controlled arena to validate the promise. Across 10 video assets, baseline effectiveness scores ranged from 44% to 80%, signaling both foundational gaps and strong building blocks. The AI flagged sluggish openings where a drifting cold open delayed the hook, late branding that appeared after the first scroll window, and emotional beats that landed too late to anchor memory. It did not stop at critique; it issued prescriptive edits with rationale: compress introductory sequences to under two seconds, introduce sonic branding with logo lock-up before the first cut, and reinforce the CTA by mirroring voiceover verbs in on-screen supers. Editorial teams used the guidance to iterate storyboards and rough cuts within hours rather than days, reallocating production time to the scenes that moved ABCD metrics most. The net effect was less roulette, more surgical improvement, and a shared language that eased creative–media tensions.

From Pilot to Expansion

Regionalization at Scale: AMET Rollout and Channel Diversification

Building on this foundation, the partners charted a phased expansion across Africa, the Middle East, and Turkey before a broader global rollout. The cultural layer traveled with the system, adapting signals for market specifics—dialects, family dynamics, celebratory motifs, and privacy sensitivities that can shift the meaning of a scene or soundtrack. While YouTube remained the initial focus, the roadmap included social video, connected TV, and digital out-of-home, each with platform-aware heuristics. For connected TV, for instance, the emphasis tilted toward mid-roll persistence and co-viewing dynamics; for social verticals, it prioritized thumb-stopping motion in the first frame and caption clarity without audio. This approach naturally led to standardized creative diagnostics that respected each platform’s grammar while giving brands a consistent yardstick. It also reduced duplicative testing cycles by turning channel idiosyncrasies into reusable, codified guidance.

Practical Outcomes and Next Moves

The early pattern pointed to a broader shift: machine diagnostics and human craft working in tandem to de-risk ideas without flattening them. By reframing pacing, branding, and narrative choices as measurable levers, teams gained permission to be braver—swap a safe montage for a bolder cold open—because the system exposed where distinctiveness amplified, not hindered, clarity. For practitioners planning next steps, three moves stood out. First, treat pre-flight as a production input, not a gate; build ABCD and Brave Bot checks into script tables, animatics, and rough cuts. Second, invest in localized signal libraries—festivals, slang, and etiquette—so the cultural layer grows smarter with each campaign. Third, define success thresholds per channel and audience, then track deltas after each AI-guided edit to create a closed loop. Done this way, creative reviews became faster, feedback felt less subjective, and launch decisions carried more confidence because the hard work had already been pressure-tested.
