Starting July 1st, Meta will add “location fees” to ad buys targeting users in six countries — effectively offloading the cost of European digital services taxes onto the advertisers themselves.
The numbers. Fees will match each country’s digital services tax rate:
France, Italy, Spain: 3%
Austria, Turkey: 5%
UK: 2%
How it works in practice. Per Meta’s email to advertisers — “$100 in ads delivered to Italy will cost $103, plus any applicable VAT on top of that.”
The fine print. The fees apply to where the ad is delivered, not where the advertiser is based — meaning a US brand running campaigns targeting French users will pay the French rate regardless.
Why we care. This is a direct, unavoidable cost increase hitting European campaigns on July 1 — with no opt-out. If you’re running ads targeting users in France, Italy, Spain, Austria, Turkey, or the UK, your effective CPM and CPA benchmarks are about to get more expensive, which means existing budgets will stretch less far and current ROAS targets may no longer be achievable without adjustment.
And since the fee is based on where the ad is delivered rather than where you’re based, even non-European brands aren’t off the hook.
The big picture for advertisers. This isn’t unique to Meta — Google and Amazon already charge similar pass-through fees. But it’s a meaningful shift in how European ad budgets need to be calculated, and campaign managers should revisit their cost models before July 1 to account for the added overhead across affected markets.
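To make the budget math concrete, here's a minimal sketch of the fee calculation, assuming the rates listed above and ignoring VAT. The country codes and function are my own illustration, not anything from Meta's tooling.

```python
# Illustrative only: rates as announced by Meta; VAT excluded.
LOCATION_FEES = {
    "FR": 0.03, "IT": 0.03, "ES": 0.03,  # France, Italy, Spain
    "AT": 0.05, "TR": 0.05,              # Austria, Turkey
    "GB": 0.02,                          # UK
}

def gross_cost(net_spend: float, country: str) -> float:
    """Ad spend plus the location fee for the delivery country."""
    return net_spend * (1 + LOCATION_FEES.get(country, 0.0))

# Meta's own example: $100 of ads delivered to Italy costs $103.
print(f"${gross_cost(100, 'IT'):.2f}")  # $103.00

# Flip it around: a fixed $100 budget now buys less delivery.
print(f"${100 / (1 + LOCATION_FEES['IT']):.2f}")  # $97.09 of ads delivered
```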
The backdrop. Digital services taxes have been a flashpoint between Europe and Washington. The Trump administration has threatened retaliation against European firms over the levies — adding geopolitical uncertainty to what is already a complex compliance landscape for global advertisers.
Positive coverage creates exposure, authority, trust, and often valuable backlinks.
But for many people, the path to getting it is a mystery. Others believe myths about how it works.
Some believe you have to be at the very top of your industry before the media will care about your story.
That’s simply false.
Others believe you can simply buy your way into media coverage.
There’s a small degree of truth to that.
You can find contributors willing to feature you (or your client) for a fee, but this blatantly violates every outlet’s contributor guidelines. You may land the feature, but editors will eventually find out.
What happens then?
First, the article gets deleted or any mention of you and your links gets removed. Then, the contributor gets removed from the platform and blacklisted in the media industry. Finally, you get blacklisted too.
Good luck getting featured again. It won’t happen.
The reality is that you can get featured in the media.
You just need to understand the process and execute it consistently.
Develop your story
You probably have a great story — you just may not realize it yet.
The media has to produce a constant stream of content. If you have a strong story, you’re already one-third of the way to getting featured.
Let’s start with what doesn’t make a great story.
You’re the first.
You think you’re the best (everyone thinks that, and no one cares except your mother).
You’re the biggest.
You want to change the world.
So what does make a great story?
Like the answer to most SEO questions: it depends.
A great story starts with an actual story.
You have to explain, in an engaging way, why anyone should care about what you have to say.
For example, I often tell the story of how I used PR to rebuild my success after being on my deathbed.
I explain that my agency’s specific PR approach comes from the exact process I used to rebuild my own business — and that I want to give others the same advantage.
And my story is easily verifiable.
But you don’t need a life-or-death struggle to have a compelling story.
You just need a story that shows a deeper purpose. A mission. Something people can get excited about and care about.
Craft your pitch
Even with the best story in the world, you still need an effective pitch.
Your pitch has to cut through the noise and grab attention. Journalists, producers, and others in the media are inundated with pitches — many receive hundreds every day. Your pitch has to tell your story clearly and quickly, and motivate them to respond.
Easier said than done.
Most pitches are sent by email, so most people start with the subject line. That’s the exact opposite of what you should do.
Start with the body of the email. There’s a reason for this, which we’ll get to shortly.
Find a way to connect your story to current events. If a topic is already popular in the media, other outlets are more likely to cover it.
But remember: while the story involves you, it isn’t about you.
You have to pitch from the perspective of what the audience wants. The journalist’s, editor’s, or producer’s needs come second, and yours come in a distant last place.
Sorry, that’s just the way it is.
You need to distill your story and why the audience should care into a few sentences. You can add a little more detail after that, but keep it short. If they see a wall of text, they’ll likely delete your email.
Once your pitch is solid, write your subject line. It should be short, punchy, and aligned with your pitch.
Short and punchy matters because the subject line determines whether they open your email.
If the pitch doesn’t align with the subject line, they’ll likely delete the email without reading it. Getting attention means nothing if they don’t read the message.
I once saw a publicist use a subject line that certainly grabbed attention, but it had zero positive impact and damaged his reputation.
What was it?
“Fuck You!”
Bottom line: your pitch must quickly and clearly show the value the audience will get, and your subject line must grab attention in a positive way while aligning with the pitch.
Build your media list
PR isn’t a numbers game.
Yet people treat it like one. They buy or compile lists of media contacts and blast their pitch to anyone they can find.
That’s no different from spam emails selling generic Viagra.
Success comes from sending the right pitch to the right people at the right time.
Finding the right people means identifying journalists, producers, and other media contacts who cover the types of stories you’re telling.
Several expensive tools can help you find these contacts and their information. But you can often find the same information with a search engine and social media. In fact, that’s how I built most of my media relationships.
As for the right time, that’s largely a matter of chance.
Send your pitch
There’s no magic formula.
The time of day you send your pitch doesn’t matter much unless it’s extremely time-sensitive, which most business topics aren’t. Producers often check email at certain times, but they won’t touch it while preparing for or running their show.
Now here’s something you need to avoid:
Don’t bombard them with follow-up emails!
For truly time-sensitive stories, it may be acceptable to follow up within the same week. In most cases, though, wait about a week. Frequent follow-ups will annoy journalists, producers, and other media contacts.
Stop after two or three follow-ups. If you haven’t received a response by then, they likely aren’t interested in the story.
Try not to take it personally. They probably won’t tell you it’s not a fit. Given the sheer volume of pitches they receive, responding to every one would be a full-time job.
Nurture your relationships
Most of your pitches won’t result in media coverage.
The problem is that most people stop after a rejection or no response.
That’s crazy to me.
I can’t tell you how many times I’ve heard “no” or received no reply before finally landing a feature.
It happened because I didn’t pitch once and move on. These contacts all started as strangers, but I invested time and energy in building real relationships.
As a result, when I reach out, they open and read my emails because I’m not a stranger. Those relationships make it far easier to turn a pitch into media coverage.
Most initial outreach won’t lead to coverage. But if you nurture the right relationships, you’ll eventually build a network of responsive press contacts.
Perplexity AI must stop using its Comet browser agent to make purchases on Amazon. A federal judge sided with Amazon in an early ruling over AI shopping bots.
Why we care. The case targets a core promise of AI agents: completing tasks like shopping on a user’s behalf. If courts restrict how agents access sites, AI agents could face strict limits when interacting with logged-in accounts on major websites.
What happened. U.S. District Judge Maxine Chesney granted Amazon a preliminary injunction Monday in San Francisco federal court.
The order blocks Perplexity from using its Comet browser agent to access password-protected parts of Amazon, including Prime subscriber accounts.
Chesney wrote that Amazon presented “strong evidence” that Comet accessed accounts “with the Amazon user’s permission but without authorization by Amazon.”
The ruling also requires Perplexity to destroy any Amazon data it previously collected.
Catch-up quick. Amazon sued Perplexity in November, accusing the startup of computer fraud and unauthorized access. The company said Comet made purchases from Amazon on behalf of users without properly identifying itself as a bot.
What’s next. The order is paused for one week to allow Perplexity to appeal.
What they’re saying. Amazon spokesperson Lara Hendrickson told Bloomberg (subscription required) the injunction “will prevent Perplexity’s unauthorized access to the Amazon store and is an important step in maintaining a trusted shopping experience for Amazon customers.”
Google Ads is rolling out auto end screens — a new feature that appends an interactive, auto-generated card to the end of eligible video ads to nudge viewers toward a conversion.
How it works. An interactive screen appears for a few seconds immediately after the video finishes playing.
Content is auto-populated from campaign data — app name, icon, price, and a direct install link for app campaigns
End screens appear by default on eligible ads, requiring no setup from advertisers
Why we care. Advertisers no longer need to manually build post-roll calls-to-action. This feature is on by default and changes the end of your video ads — and if you’ve already built custom YouTube end screens, they’ll be overridden without any warning. With end screens being the last thing a viewer sees before deciding to act, losing control of that moment matters.
And with broader expansion planned, now is the time to understand how it works before it reaches more of your campaigns.
The catch. Enabling auto end screens in Google Ads overrides any manually added YouTube end screens — meaning advertisers who’ve already customized their YouTube end cards will lose them.
Current limitations. The feature is only available for in-stream ads running in mobile app install campaigns, with broader expansion planned but not yet dated.
What stays the same. Auto end screens don’t affect billing or view counts — they’re purely an added engagement layer tacked on after a full video view.
Next steps. Advertisers running mobile app install campaigns should audit their video ads now — check whether auto end screens are serving as expected and verify that any manually added YouTube end screens aren’t being silently overridden. As Google expands the feature beyond app installs, it’s worth establishing a review process early so campaigns are ready when eligibility broadens.
The DSCRI-ARGDW pipeline maps 10 gates between your content and an AI recommendation across two phases: infrastructure and competitive. Because confidence multiplies across the pipeline, the weakest gate is always your biggest opportunity. Here, we focus on the first five gates.
The infrastructure phase (discovery through indexing) is a sequence of absolute tests: the system either has your content, or it doesn't. Yet even content that passes a gate can carry degradation forward.
For example, a page that can’t be rendered doesn’t get “partially indexed,” but it may get indexed with degraded information, and every competitive gate downstream operates on whatever survived the infrastructure phase.
If the raw material is degraded, the competition in the ARGDW phase starts with a handicap that no amount of content quality can overcome.
The industry compressed these five distinct DSCRI gates into two words: “crawl and index.” That compression hides five separate failure modes behind a single checkbox. This piece breaks the simplistic “crawl and index” into five clear gates so you can optimize for the bots far more effectively.
If you’re a technical SEO, you might feel you can skip this. Don’t.
You’re probably doing 80% of what follows and missing the other 20%. The gates below provide measurable proof that your content reached the index with maximum confidence, giving it the best possible chance in the competitive ARGDW phase that follows.
Sequential dependency: Fix the earliest failure first
The infrastructure gates are sequential dependencies: each gate’s output is the next gate’s input, and failure at any gate blocks everything downstream.
If your content isn’t being discovered, fixing your rendering is wasted effort, and if your content is crawled but renders poorly, every annotation downstream inherits that degradation. Better to be a straight C student than three As and an F, because the F is the gate that kills your pipeline.
The audit starts with discovery and moves forward. The temptation to jump to the gate you understand best (and for many technical SEOs, that’s crawling) is the temptation that wastes the most money.
Discovery, selection, crawling: The three gates the industry already knows
Discovery and crawling are well-understood, while selection is often overlooked.
Discovery is an active signal. Three mechanisms feed it:
XML sitemaps (the census).
IndexNow (the telegraph).
Internal linking (the road network).
The entity home website is the primary discovery anchor for pull discovery, and confidence is key. The system asks not just “does this URL exist?” but “does this URL belong to an entity I already trust?” Content without entity association arrives as an orphan, and orphans wait at the back of the queue.
The push layer (IndexNow, MCP, structured feeds) changes the economics of this gate entirely, and I’ll explain what changes when you stop waiting to be found and start pushing.
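To make the push layer concrete, here's a minimal IndexNow submission following the publicly documented protocol (indexnow.org). The host, key, and URLs are placeholders; the key must be verifiable at the keyLocation URL on your own domain.

```python
# Minimal IndexNow push, per the public protocol. Placeholders throughout.
import json
import urllib.request

payload = {
    "host": "www.example.com",
    "key": "your-indexnow-key",  # hypothetical key you generate and host
    "keyLocation": "https://www.example.com/your-indexnow-key.txt",
    "urlList": [
        "https://www.example.com/new-article",
        "https://www.example.com/updated-product",
    ],
}

req = urllib.request.Request(
    "https://api.indexnow.org/indexnow",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json; charset=utf-8"},
)
with urllib.request.urlopen(req) as resp:
    # 200/202 means the submission was accepted: you pushed the URLs
    # instead of waiting for a crawler to pull them.
    print(resp.status)
```

One submission notifies every participating engine, which is exactly the change in economics: discovery stops being a waiting game.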
Selection is the system’s opinion of you, expressed as crawl budget. As Microsoft Bing’s Fabrice Canel says, “Less is more for SEO. Never forget that. Less URLs to crawl, better for SEO.”
The industry spent two decades believing more pages equals more traffic. In the pipeline model, the opposite is true: fewer, higher-confidence pages get crawled faster, rendered more reliably, and indexed more completely. Every low-value URL you ask the system to crawl is a vote of no confidence in your own content, and the system notices.
Not every page that’s discovered in the pull model is selected. Canel states that the bot assesses the expected value of the destination page and will not crawl the URL if the value falls below a threshold.
Crawling is the most mature gate and the least differentiating. Server response time, robots.txt, redirect chains: solved problems with excellent tooling, and not where the wins are because you and most of your competition have been doing this for years.
What most practitioners miss, and what’s worth thinking about: Canel confirmed that context from the referring page carries forward during crawling.
Your internal linking architecture isn’t just a crawl pathway (getting the bot to the page) but a context pipeline (telling the bot what to expect when it arrives), and that context influences selection and then interpretation at rendering before the rendering engine even starts.
Rendering fidelity: The gate that determines what the bot sees
Rendering fidelity is where the infrastructure story diverges from what the industry has been measuring.
After crawling, the bot attempts to build the full page. It sometimes executes JavaScript (don’t count on this because the bot doesn’t always invest the resources to do so), constructs the document object model (DOM), and produces the rendered DOM.
I coined the term rendering fidelity to name this variable: how much of your published content the bot actually sees after building the page. Content behind client-side rendering that the bot never executes isn’t degraded, it’s gone, and information the bot never sees can’t be recovered at any downstream gate.
Every annotation, every grounding decision, every display outcome depends on what survived rendering. If rendering is your weakest gate, it’s your F on the report card, and remember: everything downstream inherits that grade.
The friction hierarchy: Why the bot renders some sites more carefully than others
The bot’s willingness to invest in rendering your page isn’t uniform. Canel confirmed that the more common a pattern is, the less friction the bot encounters.
I’ve reconstructed the following hierarchy from his observations. The ranking is my model. The underlying principle (pattern familiarity reduces selection, crawl, rendering, and indexing friction and processing cost) is confirmed:
| Approach | Friction level | Why |
| --- | --- | --- |
| WordPress + Gutenberg + clean theme | Lowest | 30%+ of the web. Most common pattern. Bot has highest confidence in its own parsing. |
| Established platforms (Wix, Duda, Squarespace) | Low | Known patterns, predictable structure. Bot has learned these templates. |
| WordPress + page builders (Elementor, Divi) | Medium | Adds markup noise. Downstream processing has to work harder to find core content. |
| Bespoke code, perfect HTML5 | Medium-High | Bot does not know your code is perfect. It has to infer structure without a pattern library to validate against. |
| Bespoke code, imperfect HTML5 | High | Guessing with degraded signals. |
The critical implication, also from Canel, is that if the site isn’t important enough (low publisher entity authority), the bot may never reach rendering because the cost of parsing unfamiliar code exceeds the estimated benefit of obtaining the content. Publisher entity confidence has a huge influence on whether you get crawled and also how carefully you get rendered (and everything else downstream).
JavaScript is the most common rendering obstacle, but it isn’t the only one: missing CSS, proprietary elements, and complex third-party dependencies can all produce the same result — a bot that sees a degraded version of what a human sees, or can’t render the page at all.
JavaScript was a favor, not a standard
Google and Bing render JavaScript. Most AI agent bots don’t. They fetch the initial HTML and work with that. The industry built on Google and Bing’s favor and assumed it was a standard.
Perplexity’s grounding fetches work primarily with server-rendered content. Smaller AI agent bots have no rendering infrastructure.
The practical consequence: a page that loads a product comparison table via JavaScript displays perfectly in a browser but renders as an empty container for a bot that doesn’t execute JS. The human sees a detailed comparison. The bot sees a div with a loading spinner.
The annotation system classifies the page based on an empty space where the content should be. I’ve seen this pattern repeatedly in our database: different systems see different versions of the same page because rendering fidelity varies by bot.
Three rendering pathways that bypass the JavaScript problem
The traditional rendering model assumes one pathway: HTML to DOM construction. You now have two alternatives.
WebMCP, built by Google and Microsoft, gives agents direct DOM access, bypassing the traditional rendering pipeline entirely. Instead of fetching your HTML and building the page, the agent accesses a structured representation of your DOM through a protocol connection.
With WebMCP, you give yourself a huge advantage: the bot doesn't need to execute JavaScript or guess at your layout, because the structured DOM is served directly.
Markdown for Agents uses HTTP content negotiation to serve pre-simplified content. When the bot identifies itself, the server delivers a clean markdown version instead of the full HTML page.
The semantic content arrives pre-stripped of everything the bot would have to remove anyway (navigation, sidebars, JavaScript widgets), which means the rendering gate is effectively skipped with zero information loss. If you’re using Cloudflare, you have an easy implementation that they launched in early 2026.
Both alternatives change the economics of rendering fidelity in the same way that structured feeds change discovery: they replace a lossy process with a clean one.
For non-Google bots, try this: disable JavaScript in your browser and look at your page, because what you see is what most AI agent bots see. You can fix the JavaScript issue with server-side rendering (SSR) or static site generation (SSG), so the initial HTML contains the complete semantic content regardless of whether the bot executes JavaScript.
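If you want to script that spot check, here's a rough sketch, assuming the content you care about can be identified by a few marker phrases. The URL and markers are hypothetical.

```python
# Rough parity check: does the server-sent HTML (all most non-rendering
# AI bots will ever see) already contain your core content?
import urllib.request

URL = "https://www.example.com/product-comparison"  # hypothetical page
MARKERS = ["Comparison table", "Price per seat"]    # phrases from the core content

html = urllib.request.urlopen(URL).read().decode("utf-8", errors="replace")

for marker in MARKERS:
    if marker in html:
        print(f"{marker}: present in initial HTML")
    else:
        print(f"{marker}: MISSING (likely injected by JavaScript)")
```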
But the real opportunity lies in new pathways: one architectural investment in WebMCP or Markdown for Agents, and every bot benefits regardless of its rendering capabilities.
Indexing: The gate that converts and stores what survived
Rendering produces a DOM. Indexing transforms that DOM into the system's proprietary internal format and stores it. Two things happen here that the industry has collapsed into one word.
Rendering fidelity (Gate 3) measures whether the bot saw your content. Conversion fidelity (Gate 4) measures whether the system preserved it accurately when filing it away. Both losses are irreversible, but they fail differently and require different fixes.
The strip, chunk, convert, and store sequence
What follows is a mechanical model I’ve reconstructed from confirmed statements by Canel and Gary Illyes.
Strip: The system removes repeating elements: navigation, header, footer, and sidebar. Canel confirmed directly that these aren’t stored per page.
The system’s primary goal is to find the core content. This is why semantic HTML5 matters at a mechanical level. <nav>, <header>, <footer>, <aside>, <main>, and <article> tags tell the system where to cut. Without semantic markup, it has to guess.
Illyes confirmed at BrightonSEO in 2017 that finding core content at scale was one of the hardest problems they faced.
Chunk: The core content is broken into segments: text blocks, images with associated text, video, and audio. Illyes described the result as something like a folder with subfolders, each containing a typed chunk (he probably used the term “passage” — potato, potarto, tomato, tomarto). The page becomes a hierarchical structure of typed content blocks.
Convert: Each chunk is transformed into the system’s proprietary internal format. This is where semantic relationships between elements are most vulnerable to loss.
The internal format preserves what the conversion process recognizes, and everything else is discarded.
Store: The converted chunks are stored in a hierarchical structure.
The individual steps are confirmed. The specific sequence and the wrapper hierarchy model are my reconstruction of how those confirmed pieces fit together.
In this model, the repeating elements stripped in the first step are not discarded but stored at the appropriate wrapper level: navigation at site level, category elements at category level. The system avoids redundancy by storing shared elements once at the highest applicable level.
Like my “Darwinism in search” piece from 2019, this is a well-informed, educated guess. And I’m confident it will prove to be substantively correct.
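To see why semantic markup matters at this mechanical level, here's a toy version of the strip step: not the actual system, just an illustration of how semantic HTML5 tags tell a parser where to cut. The HTML is invented.

```python
# Toy version of the "strip" step: semantic HTML5 tags say where to cut.
# Not the real system; it only illustrates why <main>/<nav> markup matters.
from bs4 import BeautifulSoup  # pip install beautifulsoup4

html = """
<html><body>
  <nav>Home | Blog | Contact</nav>
  <main><article>
    <h1>Rendering fidelity</h1>
    <p>Core content the system should keep.</p>
  </article></main>
  <footer>© Example Inc.</footer>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")

# Strip repeating wrapper elements (stored once at a higher wrapper
# level in the model above, not per page).
for tag in soup.find_all(["nav", "header", "footer", "aside"]):
    tag.decompose()

# What's left inside <main> is the candidate core content for chunking.
core = soup.find("main") or soup.body
print(core.get_text(" ", strip=True))
# -> Rendering fidelity Core content the system should keep.
```

Without those tags, the parser (and the bot) has to guess, which is exactly the friction the hierarchy table above describes.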
The wrapper hierarchy changes three things you already do:
URL structure and categorization: Because each page inherits context from its parent category wrapper, URL structure determines what topical context every child page receives during annotation (the first gate in the phase I’ll cover in the next article: ARGDW).
A page at /seo/technical/rendering/ inherits three layers of topical context before the annotation system reads a single word. A page at /blog/post-47/ inherits one generic layer. Flat URL structures and miscategorized pages create annotation problems that might appear to be content problems.
Breadcrumbs: These validate that the page's position in the wrapper hierarchy matches the physical URL structure (match = confidence, mismatch = friction). Breadcrumbs matter even when users ignore them because they're a structural integrity signal for the wrapper hierarchy.
Meta descriptions: Google’s Martin Splitt suggested in a webinar with me that the meta description is compared to the system’s own LLM-generated summary of the page. If they match, a slight confidence boost. If they diverge, no penalty, but a missed validation opportunity.
Where conversion fidelity fails
Conversion fidelity fails when the system can’t figure out which parts of your page are core content, when your structure doesn’t chunk cleanly, or when semantic relationships fail to survive format conversion.
The critical downstream consequence that I believe almost everyone is missing: indexing and annotation are separate processes.
A page can be indexed but poorly annotated (stored but semantically misclassified). I’ve watched it happen in our database: a page is indexed, it’s recruited by the algorithmic trinity, and yet the entity still gets misrepresented in AI responses because the annotation was wrong.
The page was there. The system read it. But it read a degraded version (rendering fidelity loss at Gate 3, conversion fidelity loss at Gate 4) and filed it in the wrong drawer (annotation failure at Gate 5).
Processing investment: Crawl budget was only the beginning
The industry built an entire sub-discipline around crawl budget. That’s important, but once you break the pipeline into its five DSCRI gates, you see that it’s just one piece of a larger set of parameters: every gate consumes computational resources, and the system allocates those resources based on expected return. This is my generalization of a principle Canel confirmed at the crawl level.
| Gate | Budget type | What the system asks |
| --- | --- | --- |
| 1 (Selected) | Crawl budget | “Is this URL a candidate for fetching?” |
| 2 (Crawled) | Fetch budget | “Is this URL worth fetching?” |
| 3 (Rendered) | Render budget | “Is this page a candidate for rendering?” |
| 4 (Indexed) | Chunking/conversion budget | “Is this content worth carefully decomposing?” |
| 5 (Annotated) | Annotation budget | “Is this content worth classifying across all dimensions?” |
Each budget is governed by multiple factors:
Publisher entity authority (overall trust).
Topical authority (trust in the specific topic the content addresses).
Technical complexity.
The system’s own ROI calculation against everything else competing for the same resource.
The system isn’t just deciding whether to process but how much to invest. The bot may crawl you but render cheaply, render fully but chunk lazily, or chunk carefully but annotate shallowly (fewer dimensions). Degradation can occur at any gate, and the crawl budget is just one example of a general principle.
Structured data: The native language of the infrastructure gates
The SEO industry’s misconceptions about structured data run the full spectrum:
The magic bullet camp that treats schema as the only thing they need.
The sticky plaster camp that applies markup to broken pages, hoping it compensates for what the content fails to communicate.
The ignore-it-entirely camp that finds it too complicated or simply doesn’t believe it moves the needle.
None of those positions is quite right.
Structured data isn’t necessary. The system can — and does — classify content without it. But it’s helpful in the same way the meta description is: it confirms what the system already suspects, reduces ambiguity, and builds confidence.
The catch, also like the meta description, is that it only works if it’s consistent with the page. Schema that contradicts the content doesn’t just fail to help: it introduces a conflict the system has to resolve, and the resolution rarely favors the markup.
When the bot crawls your page, structured data requires no rendering, interpretation, or language model to extract meaning. It arrives in the format the system already speaks: explicit entity declarations, typed relationships, and canonical identifiers.
In my model, this makes structured data the lowest-friction input the system processes, and I believe it’s processed before unstructured content because it’s machine-readable by design. Semantic HTML tells the system which parts carry the primary semantic load, and semantic structure is what survives the strip-and-chunk process best because it maps directly to the internal representation.
Schema at indexing works the same way: instead of requiring the annotation system to infer entity associations and content types from unstructured text, schema declares them explicitly, like a meta description confirming what the page summary already suggested.
The system compares, finds consistency, and confidence rises. The entire pipeline is a confidence preservation exercise: pass each gate and carry as much confidence forward as possible. Schema is one of the cleaner tools for protecting that confidence through the infrastructure phase.
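As a concrete illustration, here's a minimal JSON-LD block in the schema.org vocabulary, generated from Python. The names and URLs are placeholders; the one rule that matters is that every declared value matches what the visible content says.

```python
# Minimal JSON-LD (schema.org vocabulary). Placeholders throughout:
# declare only what the page's visible content actually supports.
import json

article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "The five infrastructure gates behind crawl, render, and index",
    "author": {"@type": "Person", "name": "Jane Example"},  # hypothetical
    "publisher": {
        "@type": "Organization",
        "name": "Example Co",  # hypothetical
        "url": "https://www.example.com",
    },
    "about": ["technical SEO", "indexing", "AI search"],
}

# Emit the script tag to embed in the page head.
print('<script type="application/ld+json">')
print(json.dumps(article_schema, indent=2))
print("</script>")
```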
That said, Canel noted that Microsoft has reduced its reliance on schema. The reasons are worth understanding:
Schema is often poorly written.
It has attracted spam at a scale reminiscent of keyword stuffing 25 years ago.
Small language models are increasingly reliable at inferring what schema once had to declare explicitly.
Schema’s value isn’t disappearing, but it’s shifting: the signal matters most where the system’s own inference is weakest, and least where the content is already clean, well-structured, and unambiguous.
Schema and HTML5 have been part of my work since 2015, and I’ve written extensively about them over the years. But I’ve always seen structured data as one tool among many for educating the algorithms, not the answer in itself. That distinction matters enormously.
Brand is the key, and for me, always has been.
Without brand, all the structured data in the world won’t save you. The system needs to know who you are before it can make sense of what you’re telling it about yourself.
Schema describes the entity and brand establishes that the entity is worth describing. Get that order wrong, and you’re decorating a house the system hasn’t decided to visit yet.
The practical reframe: structured data implementation belongs in the infrastructure audit, and it’s the format that makes feeds and agent data possible in the first place. But it’s a confirmation layer, not a foundation, and the system will trust its own reading over yours if the two diverge.
Why improve the infrastructure gates when you can skip them entirely?
The multiplicative nature of the pipeline means the same logic that makes your weakest gate your biggest problem also makes gate-skipping your biggest opportunity.
If every gate attenuates confidence, removing a gate entirely doesn’t just save you from one failure mode: it removes that gate’s attenuation from the equation permanently.
To make that concrete, here’s what the math looks like across seven approaches. The base case assumes 70% confidence at every gate, producing a 16.8% surviving signal across all five in DSCRI. Where an approach improves a gate, I’ve used 75% as the illustrative uplift.
These are invented numbers, not measurements. The point is the relative improvement, not the figures themselves.
| Approach | What changes | Entering ARGDW with |
| --- | --- | --- |
| Pull (crawl) | Nothing | 16.8% |
| Schema markup | I → 75% | 18.0% |
| WebMCP | R skipped | 24.0% |
| IndexNow | D skipped, S → 75% | 25.7% |
| IndexNow + WebMCP | D skipped, S → 75%, R skipped | 36.8% |
| Feed (Merchant Center, Product Feed) | D, S, C, R skipped | 70.0% |
| MCP (direct agent data) | D, S, C, R, I skipped | 100% |
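The numbers in the table fall straight out of the multiplication. Here's a short sketch reproducing them under the same invented assumptions (70% per gate, 75% for an improved gate, skipped gates removed from the product):

```python
# Reproducing the table's invented numbers: confidence multiplies across
# the five DSCRI gates, so a skipped gate stops attenuating entirely.
BASE, UPLIFT = 0.70, 0.75  # illustrative, not measured

def surviving_signal(gates: dict) -> float:
    """Multiply confidence across gates; None means the gate is skipped."""
    signal = 1.0
    for confidence in gates.values():
        if confidence is not None:
            signal *= confidence
    return signal

scenarios = {
    "Pull (crawl)":      {"D": BASE, "S": BASE,   "C": BASE, "R": BASE, "I": BASE},
    "Schema markup":     {"D": BASE, "S": BASE,   "C": BASE, "R": BASE, "I": UPLIFT},
    "WebMCP":            {"D": BASE, "S": BASE,   "C": BASE, "R": None, "I": BASE},
    "IndexNow":          {"D": None, "S": UPLIFT, "C": BASE, "R": BASE, "I": BASE},
    "IndexNow + WebMCP": {"D": None, "S": UPLIFT, "C": BASE, "R": None, "I": BASE},
    "Feed":              {"D": None, "S": None,   "C": None, "R": None, "I": BASE},
    "MCP":               {"D": None, "S": None,   "C": None, "R": None, "I": None},
}

for name, gates in scenarios.items():
    print(f"{name:<18} {surviving_signal(gates):6.1%}")
# Pull 16.8%, Schema 18.0%, WebMCP 24.0%, IndexNow 25.7%,
# IndexNow + WebMCP 36.8%, Feed 70.0%, MCP 100.0%
```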
The infrastructure phase is pre-competitive. The annotation, recruited, grounded, displayed, and won (ARGDW) gates are where your content competes against every alternative the system has indexed. Competition is multiplicative too, so what you carry into annotation is what gets multiplied.
A brand that navigated all five DSCRI gates with 70% enters the competitive phase with 16.8% confidence intact. A brand on a feed enters with 70%. A brand on MCP enters with 100%. The competitive phase hasn’t started yet, and the gap is already that wide.
There’s an asymmetry worth naming here. Getting through a DSCRI gate with a strong score is largely within your control: the thresholds are technical, the failure modes are known, and the fixes have playbooks.
Getting through an ARGDW gate with a strong score depends on how you compare to all the alternatives in the system. The playbooks are less well developed, some don’t exist at all (annotation, for example), and you can’t control the comparison directly — you can only influence it.
Which means the confidence you carry into annotation is the only part of the competitive phase you can fully engineer in advance.
Optimizing your crawl path with schema, WebMCP, IndexNow, or combinations of all three will move the needle, and the table above shows by how much. But a feed or MCP connection changes what game you’re playing.
Every content type benefits from skipping gates, but the benefit scales with the business stakes at the end of the pipeline, and nothing has more at stake than content where the end goal is a commercial transaction.
The MCP figure represents the best case for the DSCRI phase: direct data availability bypasses all five infrastructure gates. In practice, the number of gates skipped depends on what the MCP connection provides and how the specific platform processes it. The principle holds: every gate skipped is an exclusion risk avoided and potential attenuation removed before competition starts.
A product feed is only the first rung. Andrea Volpini walked me through the full capability ladder for agent readiness:
A feed gives the system inventory presence (it knows what exists).
A search tool gives the agent catalog operability (it can search and filter without visiting the website).
An action endpoint tips the model from assistive to agentic — the agent doesn’t just recommend the transaction, it closes it.
That distinction is what I built AI assistive agent optimization (AAO) around: engineering the conditions for an agent to act on your behalf, not just mention you.
Volpini’s ladder makes the mechanic concrete: each rung skips more gates, removes more exclusion risk, and eliminates more potential attenuation before competition starts. A brand with all three is playing a different game from a brand that’s still waiting for a bot to crawl its product pages.
Note: Always keep this in mind when optimizing your site and content — make your content friction-free for bots and tasty for algorithms.
DSCRI are absolute tests, ARGDW are competitive tests. The pivot is annotation.
Five gates. Five absolute tests. Pass or fail (and a degrading signal even on pass).
The solutions are well documented:
Discovery failures: sitemaps and IndexNow.
Selection failures: pruning and entity signal clarity.
Crawling failures: server configuration.
Rendering failures: server-side rendering or the new pathways that bypass the problem entirely.
Indexing failures: semantic HTML, canonical management, and structured data.
The infrastructure phase is the only phase with a playbook, and an infrastructure failure is the cheapest failure pattern to fix.
But DSCRI is only half the pipeline, and it’s the easiest to deal with.
After indexing, the scoreboard turns on. The five competitive gates (ARGDW) are competitive tests: your content doesn’t just need to pass, it needs to beat the competition. What your content carries into the kickoff stage of those competitive gates is what survived DSCRI. And the entry gate to ARGDW is annotation.
The next piece opens annotation: the gate the industry has barely begun to address. It’s where the system attaches sticky notes to your indexed content across 24+ dimensions, and every algorithm in the ARGDW phase uses those notes to decide what your content means, who it’s for, and whether it deserves to be recruited, grounded, displayed, and recommended.
Those sticky notes are the be-all and end-all of your competitive position, and almost nobody knows they exist.
In “How the Bing Q&A / Featured Snippet Algorithm Works,” in a section I titled “Annotations are key,” I explained what Ali Alvi told me on my podcast: “Fabrice and his team do some really amazing work that we actually absolutely rely on.”
He went further: without Canel’s annotations, Bing couldn’t build the algos to generate Q&A at all. A senior Microsoft engineer, on the record, in plain language.
The evidence trail has been there for six years. That, for me, makes annotation the biggest untapped opportunity in search, assistive, and agential optimization right now.
This is the third piece in my AI authority series.
When people speak naturally, their language flows. It’s often messy, incomplete, and not especially coherent. The Google search bar, however, required something different. Users had to compress their needs into short phrases or slightly longer queries — what’s traditionally classified as short-tail or long-tail.
To make that work, users stacked queries across a journey, moving through a funnel from A to B and refining as they went. In the process, users often stripped out personalized nuance to match what they believed the search engine could understand. In response, SEO professionals built systems around that constraint, grouping queries by search volume, categorizing them by a limited set of intents, and measuring competitiveness.
That dynamic is changing. SEOs need to understand the behavioral change that’s emerging. Google is promoting Gemini, and phone manufacturers like Samsung are marketing AI-enabled features as product USPs. Alongside this product marketing, there’s also a level of education happening. Users are being encouraged to be more expressive with their queries, personalize their searches, and describe what they’re looking for in greater depth.
Moving from keyword research to prompt research
This is where we need to move away from the notion of keyword research to prompt research. Keyword research traditionally assumes that demand can be quantified, that variations can be listed and grouped, and that optimization happens at a phrase level or a cluster level. In the new hybrid AI and organic search world, demand is much more of a generative concept. Prompts can be written in countless ways while preserving the same underlying need.
This doesn’t make keyword research obsolete, but it does change its focus. Instead of extracting keywords from tools as we’ve done, we also need to start understanding and modeling journeys. Instead of grouping by volume alone, we need to group by decision stage and the type and level of uncertainty the user has.
The output of this process isn’t simply a keyword map, but a task map that accurately reflects the real pressures and constraints experienced by the audience. This is an evolution from short-tail and long-tail keyword research to an infinite tail of prompt research.
The infinite tail as a behavioral shift
You can describe the infinite tail as an expansion of the long tail. But that underestimates what’s actually changing. It’s not just about more niche phrases or longer query strings. It’s about the level of personalization that’s been layered into each request.
As users add context, constraints, and preferences, prompts become unique combinations of a multitude of factors. The number of possible combinations effectively becomes infinite, even if the underlying tasks remain finite. AI systems respond by evaluating the given prompts and probabilistically predicting the next tokens rather than using exact-match strings.
It’s less about how you rank for a specific keyword or whether you’re visible in AI for a specific phrase. It becomes whether your content has the highest probability of satisfying the situation being described. That’s a different optimization problem altogether. You’re not competing on phrasing. You’re competing on task completion.
This part of the journey is where “fuzzy searches” happen, meaning the path isn’t a straight line. Success isn’t just about finishing a task. It’s about making sure the user actually found what they were looking for. Since every user moves differently, the process is flexible rather than a set of rigid steps.
One of the most important mechanics in AI search is query fan-out. When a complex prompt is submitted, the system doesn’t treat it as a single string. Instead, it decomposes a request into a network of subquestions, classifications, and checks that together form a broader evaluation framework.
From an SEO perspective, this means your content moves beyond evaluation against a single phrase or specific document matches. Instead, it’s assessed across a network of related questions, with a collective determination of whether it can satisfy a broader task.
In a fan-out world, you win by supporting the entire decision cluster that surrounds that term. If your content addresses only one narrow dimension of the task, it becomes fragile. If it supports multiple layers of the decision, it becomes resilient. Fan-out rewards structural coverage and contextual relevance rather than repetition of specific phrases.
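As a thought experiment, here's a toy model of fan-out evaluation. The decomposition and keyword matching are invented for illustration and bear no resemblance to any real system's internals; they only show why covering the whole decision cluster beats matching one phrase.

```python
# Toy fan-out model: a complex prompt decomposes into subquestions,
# and content is scored on how many it can address. All invented.
PROMPT = "best project management tool for a remote 10-person design team on a budget"

FAN_OUT = {
    "pricing":      ["price", "per seat", "free tier"],
    "team size":    ["small team", "10 users", "seats"],
    "remote work":  ["async", "time zones", "remote"],
    "design focus": ["design review", "file sharing", "figma"],
}

def coverage(page_text: str) -> float:
    """Fraction of subquestions the page addresses at least once."""
    text = page_text.lower()
    hits = sum(
        any(keyword in text for keyword in keywords)
        for keywords in FAN_OUT.values()
    )
    return hits / len(FAN_OUT)

page = "Our free tier covers small teams; async standups work across time zones."
print(f"{coverage(page):.0%} of the decision cluster covered")  # 75%
```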
Grounding queries give the LLM a level of confidence in what its fan-out retrieves. AI systems generate answers and attempt to validate them.
They're used to check whether a proposed answer is supported elsewhere, whether claims are consistent across sources, and whether the entity behind the information is reputable. If an AI system includes your brand in a summarized response, it needs enough confidence to defend that inclusion if challenged by alternative information.
This changes the meaning of authority. In traditional SEO, ranking could be achieved through technical content, links, and other forms of manipulation. In AI search, selection also depends on how easily your content can be corroborated against a broader consensus within the cohort. This can involve factors tied to entity clarity, including structure, data consistency, consistent messaging, and external validation. These signals reduce uncertainty for the system. You’re not just trying to appear. You’re trying to be selected and defended.
Organic search isn’t disappearing. Ranking still influences discovery, technical SEO still shapes crawlability, and architecture still determines how well a site and its content are understood.
But now, AI layers sit on top, synthesizing information and influencing which brands are surfaced within conversational responses. In this hybrid environment, organic visibility feeds AI selection. The two aren't mutually exclusive, but they aren't codependent either.
AI selection can reinforce brand perception, and fan-out rewards depth of current coverage. Grounding then rewards trust and consistency. This is where the infinite tail rewards genuine audience understanding and the creation of websites and content systems that support it.
This is a shift from keyword research to prompt research, and not just a cosmetic renaming of the process. Success will depend on understanding why people search, the decisions they’re making, the uncertainties they face, and the evidence they need before committing. Search increasingly revolves around satisfying situations rather than matching strings. Designing for the infinite tail means designing for people and the tasks they’re trying to complete.
“Content is king” remains one of the most widely accepted ideas in SEO. Not everyone has agreed. Different schools of thought have always existed, with some practitioners prioritizing backlinks and others focusing on technical SEO.
Content is often treated as the primary driver of search visibility. I’m not arguing that.
My point is simpler: if you’ve relied on content to drive results — and earn a living — you should start doubling down on distribution.
With AI search changing the game, creating great content (and, yes, building some backlinks) is no longer enough to get it seen. The more important question may no longer be “What should I write next?” but “Where should I push this next?”
AI tools are further fragmenting search
Content distribution has become far more important in recent years, especially as audiences spread across more online spaces. In many teams, this job was usually outsourced to someone other than SEOs:
Social media managers.
Community managers.
PR specialists.
Various assistants and interns.
Sure, distribution held some value to SEO, but it was generally considered more beneficial to other functions.
Thanks to AI search, it’s finally landed squarely on our plate. Since AI models have fragmented search to an unprecedented level, distribution is now key to meaningful SEO outcomes.
There are three key drivers behind this change:
Different tools have different sourcing logic.
AI tools source differently from traditional search.
Their logic is changeable.
If this all sounds a bit abstract, let’s briefly dig into the evidence and explain what’s really going on.
Different tools have different sourcing logic
Search is fragmenting as people use a wider range of tools. Ideally, one strategy would work everywhere, but research shows that’s not the case.
AI search tools cite different sources, a 2025 Search Atlas study found. Some show significantly more overlap with the SERPs than others. This indicates that different tools follow different sourcing logic. And as long as that’s true, optimizing for one won’t necessarily boost visibility on another.
The whole thing is even trickier because users seem more open to switching tools than before. Gemini may soon surpass formerly unrivaled ChatGPT in traffic share, according to Similarweb. That could change again quickly.
Thinking there’s a single clear winner, like Google used to be, would be wrong. Focusing on the most popular tool at the moment isn’t a guaranteed strategy.
To maximize visibility, we need to consider how multiple AI tools source their information, which implies our distribution strategy needs to be broad.
AI search uses different logic from traditional search
The Search Atlas study showed that some AI search tools overlap with Google more than others — but in all cases, the overlap is pretty low. Perplexity ranked the highest at 43%, while ChatGPT barely hit 21%.
The paper “Characterizing Web Search in the Age of Generative AI” (PDF) explicitly finds that AI search tools draw from a much wider pool of sources and are more likely than traditional search engines to cite sites with fewer visits.
This shows us that fragmentation is compounding. The pool of potential sources is wider, with little overlap among AI tools or between AI and traditional search.
The sourcing logic is changeable
The most problematic factor of all, though, is that the sourcing logic of any single tool can and often does change. This leads to different domains getting cited for the same prompts at different points in time — a phenomenon called citation drift.
Citation drift is more frequent than we might assume. Over the course of just a month, for instance, AI tools change approximately 40-60% of the domains they cite for the same prompt, according to Profound.
In other words, one domain could appear several times in a single response, then disappear completely the following month. This flip-flopping gets even worse over longer periods. For example, Profound’s study also showed that, from January to July, as many as 70% to 90% of the domains cited for the same prompt had changed.
Search is fragmented across tools and time. As cited domains change more frequently, users see more sources, making it even harder for you to push your brand to the front.
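If you track which domains an AI tool cites for your key prompts, drift is easy to quantify between snapshots. A minimal sketch, with invented domains:

```python
# Citation drift between two monthly snapshots of domains cited for
# the same prompt. The domains are invented for illustration.
january = {"exampleblog.com", "bigpublisher.com", "nichewiki.org", "yourbrand.com"}
february = {"bigpublisher.com", "forumthread.net", "newreview.io", "nichewiki.org"}

appeared_or_vanished = sorted(january ^ february)
drift = 1 - len(january & february) / len(january | february)  # Jaccard distance

print(f"Changed domains: {appeared_or_vanished}")
print(f"Drift: {drift:.0%}")  # 67% churn between these two snapshots
```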
So, what can we do about it? How should we approach this increasing fragmentation of search?
While this might change as new tools and strategies emerge, the best answer we have so far is this: focus on broad, multi-channel distribution.
When you can’t reliably predict which sources will be used, the best strategy is to widen your footprint. This creates more potential entry points into AI systems’ training and retrieval layers.
This will require some serious shifts in how many SEOs approach their work. Here are a few you can implement right away.
1. Get good at collaborating
You’re unlikely to win fragmented AI search on your own. Optimizing for it now takes a much broader approach than before, pulling in digital PR, social media, community management, and other functions.
Those areas require skills many SEOs don’t have. Those who do still have only 24 hours in a day, so spreading that work across multiple disciplines isn’t realistic.
This only works with a team. You might hate that idea, especially because it means giving up full control of your projects and results. I get it, but that’s the reality right now. You’ll have to let some things go, trust others to handle them, and divide responsibilities. In other words, you’ll need to collaborate efficiently.
2. Broaden your skill set
Even if you let experts handle certain tasks, you'll still need at least a surface-level understanding of the other disciplines becoming central to search.
SEOs will still own at least parts of distribution, whether that means handling the high-level strategy or executing it outright on specific channels.
In either case, doing this well requires skills you may not have used much before. So now’s the time to develop them.
That could mean learning more about digital PR, partnerships, thought leadership, syndication, community presence, or something else. With so many possibilities, it helps to start with the area you feel most comfortable with or most drawn to at the moment.
3. Shift your mindset from ranking to presence
You also need to change how you think about SEO, and then translate that shift into actual workflows. Google is still a major traffic driver, and rankings still matter. But for a fragmented, AI-driven search, obsessing over rank won’t cut it.
Instead of asking, “How do I get this content to rank?” you now need to ask, “How do I get this content into as many places as possible?”
Again, the goal is to create multiple entry points across AI systems, platforms, and audiences, increasing the chances of your content getting discovered, cited, and surfaced.
That’s why it’s important to start thinking more about overall presence across ecosystems rather than just positions in specific search engines.
4. Redesign your workflow
If you’ve successfully shifted your mindset from ranking to presence, it’s time to build a workflow that reflects that change.
I know firsthand how easy it is to forget about distribution, especially if it wasn’t part of your process before. To make it stick, you need to redesign your workflow to position distribution at the core.
A good place to start is by adding a launch phase, where content is distributed immediately upon publishing. After that, you could include a recurring phase every few months to ensure you regularly refresh and redistribute content.
Define reusable details upfront, like which channels you’ll consistently target and who owns each one. That way, you’ll minimize planning from scratch and make sure nothing falls through the cracks.
5. Start with these easy-to-implement best practices
Finally, if you want some easy tactics to immediately add to your to-do list, consider these:
Pilot content partnerships, starting where it's easiest. Usually, that means reaching out to existing business partners first.
Proactively distribute your content on third-party sites, whether that means syndicating it or repurposing it for Quora and LinkedIn.
Pay attention to where AI tools already pull from. While sourcing logic changes constantly, you may still notice recurring patterns worth leveraging.
Give a special push to your existing, older content to counteract the pitfalls of citation drift. Reintroduce it on new channels, or work to get it referenced in new places.
Rethinking SEO processes for fragmented AI search
The shifts are large enough that you’ll need to rethink how you do SEO. As search fragments, the work itself will have to evolve.
The approaches and workflows you relied on in the past won’t translate cleanly into a landscape shaped by multiple AI tools, changing sourcing logic, and constantly shifting citations.
These processes will also become more complex because they require closer collaboration with other teams. Distribution now intersects with digital PR, social media, partnerships, and community management, making cross-team coordination more important than before.
There’s a long road ahead. The best way to keep your sanity is to start small: focus on manageable steps, take them one at a time, and build from there.
If you’ve been in marketing long enough, you’ve probably lived through a few identity crises. First, we were channel experts. Then, we became integrated marketers, growth marketers, and performance marketers. Somewhere along the way, someone added “AI” to everyone’s job description and called it a day.
Now, we’re entering the era of the full-stack marketer. From where I sit — particularly as a media leader — the role is starting to look a lot like product management.
This doesn’t mean you need to start writing Jira tickets for fun (though some of you already do). It means that tomorrow’s most effective media leaders won’t just optimize campaigns. They’ll own outcomes, connect dots across teams, and think holistically about the entire user experience, from first impression to final conversion (and beyond).
I’ve seen this shift most clearly in industries with long consideration cycles, multiple stakeholders, and rising acquisition costs — where marketing performance is inseparable from the experience itself.
Let’s break down what’s driving the rise of the full-stack marketer, what it really means to “think like a product manager,” and why this mindset is becoming non-negotiable for media leaders.
What is a full-stack marketer, anyway?
A full-stack marketer isn’t someone who does everything (burnout isn’t a job requirement). Instead, it’s someone who understands how everything works together.
Over the course of my career, I’ve learned that the most impactful media decisions rarely come from being the deepest expert in one area. They come from having working fluency across many areas:
Media and channels: Paid search, paid social, programmatic, CTV, SEO, email, SMS, and whatever new acronym launches next quarter.
Creative and messaging: Knowing what resonates, where, and why.
Data and analytics: Not just reading dashboards, but asking better questions of the data.
UX and CRO: Understanding friction, intent, and user behavior.
Technology and platforms: CRMs, CMSs, marketing automation, and attribution tools.
The full-stack marketer doesn’t need to be the deepest expert in every area, but they do need to know enough to connect insights, spot gaps, and make informed trade-offs. In practice, this means constantly zooming out to see the system and zooming back in when something breaks.
Why media leaders are evolving into product thinkers
Earlier in my career, media leadership was often defined by questions like:
Are we hitting CPA targets?
Which channels are driving the most conversions?
How do we allocate budget more efficiently?
Those questions still matter. I ask them all the time. But over the years, I’ve learned they’re no longer sufficient on their own. Today’s environment forces media leaders to grapple with bigger, messier questions:
Why are conversion rates declining even when traffic is strong?
Where are prospects dropping out of the funnel, and why?
How does media performance change when the application experience changes?
What happens after the lead submits?
These are product questions. Product managers obsess over the end-to-end experience: the user journey, friction points, trade-offs, and outcomes. Media leaders who adopt this mindset stop seeing campaigns as isolated efforts and start seeing them as inputs into a broader system.
In many of the industries I’ve worked in, that system is anything but simple.
Marketing performance rarely exists in isolation. In many industries (especially those with longer decision cycles), a click is just the beginning, not the win.
Whether you’re selling financial services, healthcare, or education, prospects move through nonlinear journeys influenced by multiple touchpoints, stakeholders, and moments of friction. This is where full-stack thinking becomes critical.
Example 1: When media isn’t the problem, the experience is
I’ve lost count of how many times I’ve heard this reaction when performance starts slipping: “The platform is getting more expensive.”
Sometimes that’s true. But a product-minded media leader asks deeper questions:
Has the conversion experience changed recently?
Did we add steps, fields, or requirements?
Are we driving mobile traffic to a hostile desktop experience?
Across industries, I’ve repeatedly seen strong intent at the keyword or audience level, healthy CTRs, and solid landing-page engagement, followed by a steep drop-off at the point of conversion. That pattern isn’t a media problem; it’s a product experience problem.
In higher ed, this often shows up when high-intent program traffic is routed to lengthy or confusing application flows, generic inquiry forms, or experiences that don’t match the promise of the ad, especially on mobile. Prospective students signal strong intent, only to hit friction that has nothing to do with media and everything to do with the experience they’re asked to navigate.
A full-stack marketer doesn’t just flag this: they bring data, partner cross-functionally, and help prioritize fixes based on impact.
Example 2: Different audiences, different ‘products’
One of the most important product principles is that not all users are the same, and they shouldn’t be treated that way.
Many organizations market to multiple audiences at once, each with different motivations, risk tolerance, and timelines. Treating them as if they’re buying the same “thing” is a fast track to average results.
A product-minded media leader understands that:
The value proposition changes by audience.
The conversion event may be different.
The decision timeline is almost certainly different.
I’ve seen this clearly in healthcare, where patients, caregivers, and referring providers evaluate the same organization through entirely different lenses. Financial services presents a similar challenge, with banking, investment, and insurance decisions varying dramatically by life stage and goals.
Full-stack marketers adapt media strategy accordingly, from channel mix to messaging to measurement. They can do this because they understand product-market fit, not just audience targeting.
Example 3: What happens after the conversion
One of the biggest blind spots in media strategy is what happens after someone converts. Product thinkers ask:
How quickly does someone follow up?
Is the first touch personalized or generic?
Does the message align with the promise of the ad?
I’ve seen performance improve without changing media at all, simply by improving speed-to-lead or aligning follow-up messaging with campaign intent.
Healthcare offers especially clear examples of this dynamic due to intake workflows, appointment scheduling, and care coordination, but the principle is universal: media doesn’t end at the form fill. The full-stack marketer is accountable for outcomes, not just conversions.
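For illustration, here is a minimal sketch of how you might quantify speed-to-lead from a lead log. The field names and timestamps are hypothetical:

```python
# A minimal sketch of a speed-to-lead check, assuming a hypothetical lead
# log with submit and first-contact timestamps. Field names are illustrative.
from datetime import datetime
from statistics import median

leads = [
    {"submitted": "2026-03-09T10:02", "first_contact": "2026-03-09T10:41"},
    {"submitted": "2026-03-09T11:15", "first_contact": "2026-03-10T09:05"},
    {"submitted": "2026-03-09T14:30", "first_contact": "2026-03-09T14:52"},
]

def minutes_to_contact(lead):
    """Minutes between form submission and the first human follow-up."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = (datetime.strptime(lead["first_contact"], fmt)
             - datetime.strptime(lead["submitted"], fmt))
    return delta.total_seconds() / 60

times = [minutes_to_contact(lead) for lead in leads]
print(f"median speed-to-lead: {median(times):.0f} minutes")
```

Even a rough median like this makes the post-conversion gap visible, which is usually enough to start the cross-functional conversation.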
Roadmap thinking: Prioritizing like a product manager
Another hallmark of product management is roadmap thinking: prioritizing initiatives based on impact, effort, and sequencing. Full-stack media leaders bring this same approach to marketing, as the short sketch after this list illustrates:
Short-term wins versus long-term bets.
Testing frameworks instead of one-off experiments.
Phased rollouts, such as layering in audience-based creative and messaging once the foundations are in place.
Instead of chasing the “next shiny channel,” full-stack marketers focus on compounding gains.
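As one illustration, here is a minimal sketch of impact-over-effort scoring. The initiatives and 1-5 scores are hypothetical, and a real roadmap would also weigh sequencing and confidence:

```python
# A minimal sketch of roadmap-style prioritization: rank hypothetical
# initiatives by a simple impact-per-effort ratio (1-5 scales).
initiatives = [
    {"name": "Fix mobile application flow", "impact": 5, "effort": 3},
    {"name": "Launch CTV test", "impact": 3, "effort": 4},
    {"name": "Align follow-up messaging", "impact": 4, "effort": 2},
]

# Higher score = more impact per unit of effort; those ship first.
for item in sorted(initiatives, key=lambda i: i["impact"] / i["effort"], reverse=True):
    print(f'{item["name"]}: score {item["impact"] / item["effort"]:.2f}')
```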
Data fluency: Asking better questions
Product managers don’t just look at metrics. They interrogate them. The same should be true for media leaders. Instead of asking, “What’s the CPA?” I’ve learned to ask:
“Which segments are converting efficiently, and which aren’t?”
“How does performance differ by device, geography, or life stage?”
“What signals indicate readiness vs. research?”
In higher ed, this might mean:
Separating brand vs. non-brand intent.
Looking at assisted conversions.
Evaluating performance by program.
Data becomes a tool for decision-making, not just reporting, as the sketch below illustrates.
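For example, here is a minimal sketch of segment-level CPA analysis with pandas, assuming a flat campaign export. The column names and numbers are hypothetical:

```python
# A minimal sketch of segment-level CPA analysis, assuming a hypothetical
# flat campaign export with segment, device, spend, and conversion columns.
import pandas as pd

df = pd.DataFrame({
    "segment": ["brand", "non-brand", "brand", "non-brand"],
    "device": ["mobile", "mobile", "desktop", "desktop"],
    "spend": [1200.0, 3400.0, 900.0, 2100.0],
    "conversions": [60, 85, 45, 30],
})

# CPA by segment and device, rather than one blended number.
summary = (
    df.groupby(["segment", "device"], as_index=False)
      .agg(spend=("spend", "sum"), conversions=("conversions", "sum"))
)
summary["cpa"] = summary["spend"] / summary["conversions"]

# Flag segments converting less efficiently than the blended average.
blended_cpa = df["spend"].sum() / df["conversions"].sum()
summary["above_blended"] = summary["cpa"] > blended_cpa
print(summary.sort_values("cpa"))
```

The point isn’t the code; it’s that a single blended CPA hides which segments actually carry your efficiency, and which quietly drain it.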
Collaboration is the new superpower
Full-stack marketers are inherently collaborative because they have to be. In higher ed, success often requires alignment across:
Admissions.
Enrollment marketing.
IT and web teams.
Academic leadership.
External partners.
Media leaders who think like product managers don’t just execute requests. They help stakeholders understand trade-offs, prioritize initiatives, and rally around shared goals. They also translate data into stories people can act on.
So, what does this mean for tomorrow’s media leaders?
The rise of the full-stack marketer doesn’t mean specialization is dead. It means seeing the entire system matters more than optimizing any single piece of it.
From my perspective, tomorrow’s strongest media leaders will:
Understand the business behind the campaign.
Think beyond their channel.
Advocate for the user experience.
Use data to inform and influence.
Embrace ambiguity (and occasionally chaos).
In categories where trust, timing, and transformation are at the core of the “product,” this mindset is no longer optional.
At its heart, marketing in these categories is more than campaigns; it’s guiding people through life-changing choices. If you’re a media leader feeling like your role is expanding faster than your job description — congratulations! You’re not losing focus. You’re evolving.
Why tomorrow’s media leaders must think like product managers
Buying AI capabilities to drive marketing is easy. Enabling marketing teams to actually use those capabilities independently, decisively, and at scale is far harder.
The main culprit? Humans.
Marketing teams have always had the same elusive goal: moving at the pace of the consumer by responding to each customer’s needs in real time, delivering the relevant message at the right moment, and optimizing customer lifetime value to drive loyalty and ROI. The goal is not new.
What is perpetually new are the AI technologies available to analyze consumer data and generate instant, personalized messaging at scale. But while technology evolves rapidly, the ability of marketing teams to harness it independently and decisively has not kept pace. The main obstacle is organizational: most marketing teams have not structured themselves to extract full value from the technology they already have.
This is not to say that there is no progress. There is. Marketing teams that have crossed that chasm are seeing extraordinary results.
One case in point is Caesars Entertainment, which reduced campaign execution time from five days to five minutes. Asadul Shah, vice president of player revenue strategy, called it “a massive game changer.”
Before that transformation, Caesars marketers manually built targeting lists across disconnected systems, coordinated across multiple tools and waited on engineers, analysts and creative teams before anything could go out. The result was an operation too slow to target players with the precision and timing the market demanded.
Caesars worked with Optimove to consolidate data, orchestration and execution in one platform. Shah noted the transformation made marketing “not just more efficient; it is more responsive to what our players actually need in the moment.”
What made it work was not technology alone. Caesars implemented Positionless Marketing, a framework that frees marketing teams from fixed roles, giving every marketer the power to execute any task instantly and independently. Optimove provided the platform. Caesars built the team structure to make it real. Technology and human ingenuity, working together, are what make Positionless Marketing possible.
Any organization achieving this kind of transformation is doing what McKinsey calls “organizing to value,” a fundamental rethink of structure, decision-making and accountability that turns a marketing team into an operation built to drive value continuously. For marketing, that means becoming a Positionless team that optimizes customer lifetime value, drives loyalty and delivers measurable ROI.
Below, we use McKinsey’s Organize to Value framework to outline the pitfalls that block Positionless Marketing and the blueprint to build teams that can execute any marketing task, instantly and independently.
The six pitfalls inhibiting the transition to Positionless Marketing
McKinsey has identified six core problems preventing marketing teams from successfully evolving into the Positionless model. Of these, only one is about technology. All the others are about how leaders and teams are getting in their own way.
Unclear objectives push teams toward activity metrics instead of outcomes. When marketing goals are vague, execution defaults to roles and handoffs rather than impact.
Misaligned governance creates approval layers that add days to decisions that should be faster. In marketing, excessive controls directly conflict with the speed required to deliver customer value.
Uncommitted leaders manage through silos rather than enabling autonomy, preventing marketing teams from evolving past role-based dependency.
Stagnant marketing culture resists experimentation even when the right tools are in place, slowing execution regardless of technology investment.
Muddled marketing execution, with unclear process ownership, leaves no single person accountable for results, and performance erodes accordingly.
Disconnected technology reinforces data compartmentalization and separation of tasks among sub-teams, making strategic alignment and agile responses virtually impossible.
These are the realities of assembly-line marketing operations — not Positionless ones. Insights live with analysts. Creativity lives with designers. Activation lives with engineers. Value disappears in the spaces between them.
The assembly line was built for control. It was never built to deliver value.
How McKinsey’s blueprint helps build Positionless Marketing teams (and why the effort pays off)
McKinsey’s “Organize to Value” blueprint proposes a fundamental shift: design organizations around value creation, clear outcomes, impact over job titles, and minimal-friction execution. It provides the foundation to become Positionless and build the conditions for marketing teams to keep customers for life.
To make Positionless Marketing a reality, marketing leaders should focus on pragmatic application and the aspects that most influence marketing execution.
Start with purpose and behavior. Make explicit why actions are taken, alongside what is delivered. A shared sense of purpose allows teams to make fast decisions without waiting for approval on each one.
Restructure work around outcomes and accountability. Map current processes and identify where approvals slow execution without adding value. Build cross-functional flexibility over time rather than reorganizing overnight.
Leadership and processes. Establish a clear decision-to-execution flow and set explicit expectations for how fast each part of the marketing process should move. Processes should enable flow, not control.
Governance, technology and talent. Effective governance ensures consistency without slowing execution. Technology and AI should unlock new value, not just automate existing processes. And talent should be deployed based on what the work requires, not what a title suggests.
Empower marketers to act beyond their role. Once purpose, accountability, process and technology are aligned, marketers should be free to step across traditional job functions and execute independently as Positionless Marketers. The measure of success is not role compliance; it is value delivery.
These changes require sustained commitment. But the alternative (an assembly-line structure that was never built to deliver customer value) is far costlier than the transformation itself.
The results speak for themselves. In addition to Caesars:
FDJ United implemented Positionless Marketing to eliminate overlapping platforms, remove reliance on other teams wherever possible and enable continuous improvement through real-time measurement. Campaign time was slashed from six weeks to hours, with end-to-end campaigns now executed by one marketer from ideation to analysis.
A major retailer achieved a 16.1x increase in purchase rates while saving 300 working hours per year with the same team size. The shift to Positionless Marketing allowed the team to scale personalization and impact without adding headcount, demonstrating that the framework’s value is not just speed of execution but the ability to do fundamentally more with what you already have.
The window to act is narrowing
The technology and AI tools are here and ever evolving. Today, AI generates infinite creative variants. Data platforms surface real-time behavioral signals. Decisioning engines coordinate across channels instantly.
But technology layered on top of an assembly-line structure creates the illusion of progress. The same handoffs happen. The same approvals add the same delays. Speed arrives at the edge; the bottleneck stays in the middle.
External pressures are accelerating. Customers expect personalization and the best experience across all channels. Competition is rising and growing more complex.
Marketing leaders who wait for transformation will find their competitors have already made it. The ones moving first are pulling ahead.
McKinsey confirms what the best marketing teams already know: the right structure and technology unleash human potential — and vice versa. Smart people trapped in the wrong system will still underperform. The best AI tools in the world won’t deliver results when constrained by the wrong organization.
McKinsey’s blueprint points the way. Positionless Marketing is the destination.
McKinsey’s ‘Organize to Value’: a blueprint for evolving to Positionless Marketing, by Optimove
Google Ads is rolling out an asset optimization feature for Performance Max video ads that uses AI voice models to add realistic voice-overs, aiming to lift user engagement and ad performance without any creative production work.
Why we care. Advertisers who don’t actively opt out by March 20 will have their video ads automatically enhanced with Google’s AI voice models, changing how their ads sound to viewers.
How it works.
The feature only activates on videos that don’t already contain a voice track
Google’s AI selects text from advertiser-provided headlines and descriptions, then generates a realistic voice-over from that copy
The voice-over is layered onto the existing base video and saved as a new video asset (a conceptual sketch of this flow follows below)
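Restated as a conceptual sketch, the described flow looks roughly like this. Every function and field here is hypothetical; Google has not published an API or implementation for this feature:

```python
# Conceptual sketch only: all functions and fields are hypothetical
# stand-ins that restate the three described steps as code.
def has_voice_track(video):
    return video.get("voice_track", False)

def generate_voice_over(headlines, descriptions):
    # Stand-in for Google's AI voice model: select ad copy, synthesize audio.
    return f"voiceover: {headlines[0]} / {descriptions[0]}"

def enhance(video, headlines, descriptions, opted_out=False):
    # Step 1: only videos without an existing voice track qualify.
    if opted_out or has_voice_track(video):
        return video
    # Step 2: generate a voice-over from advertiser-provided copy.
    audio = generate_voice_over(headlines, descriptions)
    # Step 3: layer it onto the base video and save it as a *new* asset.
    return {**video, "voice_track": True, "audio": audio, "derived_from": video["id"]}

print(enhance({"id": "vid-1"}, ["Fast delivery"], ["Order today"]))
```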
The catch. This is opt-out, not opt-in. The default setting means ads will be automatically eligible for voice enhancement unless advertisers proactively disable it.
Key dates. Advertisers can exclude their ads from this feature until March 20 by opting out of the video enhancement control. After the opt-out window closes, all ads with the video enhancement control enabled will automatically be eligible for voice-enhanced versions.
Action steps for advertisers. Advertisers can adjust their video settings by visiting their ads in Google Ads.
First seen. This update was first flagged by paid search expert Arpan Banerjee, who shared it on LinkedIn.
Google Ads adds AI voice-over to Performance Max video ads