YouTube tests sticky banner after ad skip

YouTube is experimenting with a format that keeps ads visible even after users skip — potentially reshaping how advertisers think about skippable inventory.

What’s happening. YouTube is testing a sticky banner overlay that appears once a user skips an ad. Instead of the ad disappearing entirely, a branded card remains on-screen until the viewer actively dismisses it.

How it works. After hitting “skip,” users return to their video as normal, but a persistent banner tied to the original ad stays visible within the player, extending the advertiser’s presence beyond the initial skip.

Why we care. This test from YouTube creates a way to maintain visibility even when users skip ads, potentially increasing brand recall without requiring full ad views.

It also changes how skippable performance may be evaluated, as impressions and engagement could extend beyond the initial ad, giving brands more value from the same inventory within Google’s ecosystem.

Why it’s notable. Skippable ads have traditionally meant lost visibility once skipped. This format changes that dynamic by offering a second chance for exposure, even when users opt out of the full ad experience.

Impact for advertisers. The update creates an opportunity for extended brand visibility and recall, but could also influence engagement metrics and how users perceive ad interruptions.

The bottom line. If rolled out widely, the sticky banner test could redefine what a “skipped” ad means — turning it into continued, lower-friction exposure rather than a full exit for advertisers on YouTube.

First seen. This update was first spotted by Adsquire founder and CEO Anthony Higman, who shared it on LinkedIn.

Google adds video visibility to Performance Max reporting

Google is incrementally improving metric visibility in Performance Max, giving advertisers more insight into how creative choices — particularly video — impact performance.

What’s happening. Google Ads has introduced a new “Ads using video” segment within Performance Max channel performance reporting, allowing advertisers to break down results based on whether video assets were included.

Why we care. Marketers can now compare performance across placements that used video versus those that didn’t, offering a clearer view into the role video plays across Google’s automated inventory.

It helps answer a key question in an automated environment: whether investing in video assets is driving better results, allowing you to make more informed creative and budget decisions inside Google Ads.

Between the lines. As video becomes more central across surfaces like YouTube and beyond, this update gives advertisers a way to validate the impact of investing in video assets within automated campaigns.

The bottom line. The new segment adds a layer of clarity to Performance Max, helping advertisers better evaluate video’s contribution without changing how campaigns are run inside Google Ads.

First spotted. PPC News Feed founder Hana Kobzova first reported this update.

Google expands Personal Intelligence to AI Mode, Gemini, Chrome

Google is expanding Personal Intelligence across AI Mode, Gemini, and Chrome in the U.S., moving it beyond beta into broader consumer use.

Why we care. Personal Intelligence pushes Google further into fully personalized search, using first-party data like Gmail and Photos. That makes results harder to replicate, rank against, or track — especially in AI Mode, where outputs may vary based on user history, purchases, and behavior.

The details. Personal Intelligence now works across:

  • AI Mode in Google Search (available now in the U.S.)
  • Gemini app (rolling out to free users)
  • Gemini in Chrome (rolling out)

How it works. Users can connect apps like Gmail and Google Photos so Google can tailor responses using personal context. Examples Google shared include:

  • Shopping recommendations based on past purchases and brand preferences.
  • Tech troubleshooting using receipt data to identify exact devices.
  • Travel suggestions using flight details, timing, and past trips.
  • Personalized itineraries and local recommendations.
  • Hobby suggestions inferred from user interests.

Availability. These features are available only for personal accounts, not Workspace users, Google said.

Dig deeper. Google says AI Mode stays ad-free for Personal Intelligence users

Catch-up quick. Google introduced Personal Intelligence as a U.S.-only beta for Gemini subscribers in January. At the time:

  • It was limited to AI Pro and Ultra users.
  • It focused on Gemini, with Search integration “coming soon.”
  • The feature was opt-in and off by default.

This update delivers on that roadmap by:

  • Bringing it to Search AI Mode.
  • Expanding access to free users.
  • Extending it to Chrome.

Privacy and control. Google emphasized:

  • Users must opt in to connect apps.
  • Connections can be turned on or off at any time.
  • Models do not train directly on Gmail or Photos content.
  • Limited data, such as prompts and responses, may be used to improve systems.

Google’s blog post. Bringing the power of Personal Intelligence to more people

Google says AI Mode stays ad-free for Personal Intelligence users

Although Google continues to test ads in AI Mode, users who connect apps to enable Personal Intelligence won’t see ads — and that isn’t changing right now, a Google spokesperson confirmed.

What’s happening. Google has been testing ads inside AI Mode in the U.S.

  • Early results: users find these business connections “helpful,” per Google.
  • But there’s a clear carveout: no ads for users who opt into app-connected, highly personalized experiences.

The details. Google today expanded Personal Intelligence in AI Mode as a beta to anyone in the U.S., allowing Gemini to generate more tailored responses by connecting data across its ecosystem, including Google Search, Gmail, Google Photos, and YouTube.

  • Opting into Personal Intelligence creates an ad-free experience inside AI Mode.

Why we care. Ads are coming to AI Mode, but Google is moving cautiously where personal data is deepest. Personal Intelligence experiences stay ad-free for now while Google works out the right balance.

What Google is saying. A Google spokesperson told Search Engine Land:

  • “There are currently no ads for people who choose to connect their apps with AI Mode. That isn’t changing right now.”
  • “Over the past few months, we’ve been testing ads in AI Mode in the US. Our tests have shown that people find these connections to businesses helpful and open up new opportunities to discover products and services.”
  • “In the future, we anticipate that ads will operate similarly for people who choose to connect their apps with AI Mode. Ads will continue to be relevant to things like your query, the context of the response and your interests.”

Bottom line. Personal Intelligence positions Google’s Gemini app as a more personalized assistant, setting the stage for future ad experiences built on richer, cross-platform user context.

Yahoo CEO: Google AI Mode is the biggest threat to web traffic

Yahoo CEO Jim Lanzone said AI-powered search — especially Google’s AI Mode — is putting the open web’s core traffic model at risk and argues AI search engines must send users back to publishers.

  • “I think that the LLMs are one big reason that they’re under threat, with AI Mode in Google being the biggest challenge.”
  • “Those publishers deserve [traffic], and we’re not going to have the content to consume to give great answers if publishers aren’t healthy.”

Why we care. Many websites are seeing less traffic from answer engines like Google and OpenAI — and I think it’ll only get worse. So it’s encouraging to see Yahoo trying to preserve the “search sends traffic” model. As he said: “We have very purposefully highlighted and linked very explicitly and bent over backwards to try to send more traffic downstream to the people who created the content.”

Yahoo’s AI stance. Yahoo is taking a different approach from chatbot-style interfaces, Lanzone said on the Decoder podcast. He added that Yahoo isn’t trying to compete as a full AI assistant:

  • “Ours looks a lot more like traditional search and it is more paragraph-driven. It’s not a chatbot that’s trying to act like it’s a person and be your friend.”
  • “We’re not a large language model. We’re not going to be the place you come to code. We’ve really launched Scout as an answer engine.”

What’s next: Personalization + agentic actions. Yahoo plans to expand Scout beyond basic answers and is embedding AI across its ecosystem:

  • “You are very shortly going to see us get into very personalized results. You’re going to see us get into very agentic actions that you can take.”
  • “There’s a button in Yahoo Finance that does analysis of a given stock on the fly… It is in Yahoo Mail to help summarize and process emails.”

Yahoo vs. Google isn’t a thing. Yahoo isn’t trying to win by converting Google users directly. Instead, Yahoo is prioritizing its existing audience and increasing usage frequency over immediate market share gains:

  • “Nobody chooses, you will not be surprised, Yahoo over Google or somewhere else to search. The way that we get our search volume is because we have 250 million US users and 700 million global users in the Yahoo network at any given time. There’s a search box there. And infrequently, they use it.”

A warning. Companies — including publishers — should be cautious about relying too heavily on AI platforms as intermediaries. Lanzone compared today’s AI partnerships to Yahoo’s past reliance on Google:

  • “You are tempting fate by opening up a way for consumers to access your product within a large language model.”
  • “The big bad wolf will come to your door and say everything’s cool.”

The interview. Yahoo CEO Jim Lanzone on reviving the web’s homepage

How nonprofits can build a digital presence that actually drives impact

A nonprofit’s digital presence stopped being a “nice-to-have” a long time ago. It’s the central hub for mission delivery, donor engagement, and advocacy.

Many organizations struggle with the technical and strategic foundations needed to turn a website and a few social accounts into a high-performing digital ecosystem.

The goal isn’t simply to “be online.” It’s to build reliable infrastructure, so your organization owns its narrative, protects its assets, and measures the impact of “free” digital efforts.

Here’s a practical look at the critical elements of managing a nonprofit’s digital presence — and the common pitfalls to avoid — based on my experience helping several organizations throughout my career.

If you help an organization with digital marketing and they aren’t following these practices, your first step should be getting their digital house in order.

1. Own your foundations: Domains and account control

Owning your name and your story are essential parts of a proactive online reputation management strategy and a critical aspect of managing an online entity. 

In my experience, the most overlooked risk in nonprofit digital management is the lack of direct ownership of technical assets.

A well-meaning volunteer or third-party agency often registers a domain or creates a social account using personal credentials. If that individual leaves the organization, you risk losing access to your primary digital channel — the domain you should own and control.

I’ve worked with several organizations that had to start over completely because they lacked control.

  • Domain ownership: Ensure the domain is registered in the organization’s name using a generic “admin@” or “info@” email address that multiple stakeholders can access. Set the domain to auto-renew and use a registrar that offers robust security features.
  • Website hosting and management: The organization also needs to control its website hosting and administration. Use a similar approach to the one recommended for domain ownership.
  • Social media governance: Again, use a similar process to the one described above to establish ownership of key social media channels. Grant volunteers access via delegation on individual channels rather than sharing passwords. This allows you to revoke access immediately if a staff member or volunteer moves on, protecting your brand’s voice and security.

Dig deeper: Google Ad Grants now lets nonprofits optimize for shop visits

2. Move beyond ‘winging it’: The editorial calendar

A common mistake for nonprofits is posting only when there’s an immediate need, which is often only when making a fundraising appeal. This “broadcast-only” approach often leads to donor fatigue and low engagement.

To build a community, you need a content plan that balances stories of impact with actionable requests.

  • The 70/20/10 rule: Aim for 70% value-based content (success stories, educational info), 20% shared content from partners or community members, and only 10% direct “asks.”
  • The editorial calendar: Use a simple tool, even a shared spreadsheet, to map out your themes and individual pieces of content for the month. This ensures you aren’t scrambling for a post on Giving Tuesday, that everyone knows what’s expected of them, and that your messaging and pace of content creation remain consistent across email, social, and your blog.

3. Tracking what matters (and ignoring what doesn’t)

Data is only useful if it informs future decisions. Many organizations get bogged down in “vanity metrics” like total likes or page views without understanding whether those numbers lead to real-world outcomes.

  • Set up conversion tracking: It isn’t enough to know that 1,000 people visited your site. You need to know how many of them clicked the “Donate” button or signed up for your newsletter.
  • Behavioral analytics: Use cost-free tools like Google Analytics 4 and Microsoft Clarity to see where people are dropping off in your donation funnel. If 50% of people leave the site on your “Ways to Help” page, you may have a UX issue or a confusing call to action.
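
To make the conversion-tracking point concrete, here is a minimal sketch of firing a GA4 event when someone clicks a donate button. It assumes the standard gtag.js snippet is installed on the site; the event name `donate_click` and its parameters are illustrative, and should match whatever conversion you actually define in your GA4 property.

```javascript
// Minimal GA4 event-tracking sketch (illustrative).
// The standard gtag.js snippet defines gtag() as a thin wrapper
// that queues its arguments on the dataLayer array for GA4 to process.
const dataLayer = [];
function gtag() { dataLayer.push(arguments); }

// Fire a conversion event when a visitor clicks "Donate".
// Event name and parameters here are hypothetical examples.
function onDonateClick() {
  gtag('event', 'donate_click', {
    page_location: '/ways-to-help',
    button_id: 'donate-header',
  });
}

onDonateClick();
```

Marking `donate_click` as a conversion in GA4 then tells you how many of those 1,000 visitors actually acted, rather than just that they arrived.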

4. Optimize for the ‘mobile-first’ donor

Most global web traffic is now mobile, and for nonprofits, this is critical. Donors often engage with your content on social media on their phones and expect a seamless transition to your donation page.

  • Speed and simplicity: Fancy header videos, sliders, and bloated images slow down your site, like the nonprofit example in this article about bad website design. Less is more when speed is of the essence. Reduce friction to make your website more usable. For example, if your donation page takes more than three seconds to load or requires more form fields than necessary, you’re leaving donations on the table.
  • Payment flexibility: Incorporate digital wallets like Apple Pay, Google Pay, or PayPal. Reducing friction at the point of donation is one of the most effective ways to increase your conversion rate. Many nonprofits use third-party tools to manage donations, so keep payment flexibility in mind when choosing a payment partner.

Dig deeper: Why now is the most important time for nonprofit advertising

Common pitfalls to avoid

Even well-intentioned nonprofits can undermine their digital presence with a few common mistakes.

Targeting ‘everyone’

One of the biggest mistakes is trying to reach everyone. A digital presence that tries to appeal to every demographic usually ends up appealing to no one. Define your “ideal supporter,” and tailor your language, imagery, and platform choice to them.

Neglecting accessibility

Accessibility is about inclusion. Ensure your images have alt text, your videos have captions, and your website colors have enough contrast for users with visual impairments. If a portion of your audience can’t interact with your site, you aren’t fulfilling your mission.

The ‘set it and forget it’ mentality

I often tell businesses to treat websites like any other business asset, and the same applies to nonprofits. Digital ecosystems require maintenance.

Links break, plugins need updates, and search algorithms change. A quarterly “digital audit” to check your site speed, broken elements, and SEO health is essential for long-term visibility.

Dig deeper: How to use Google Ads to get more donations for your nonprofit

Turning your digital ecosystem into a mission multiplier

A successful digital presence is built on the same principles as a successful mission: consistency, transparency, and clear communication. By owning your assets, planning your content, and grounding your decisions in data, you ensure that your digital ecosystem serves as a force multiplier for the people you’re trying to help.

Read more at Read More

Web Design and Development San Diego

5 competitive gates hidden inside ‘rank and display’

If you’re a content strategist, you might feel this isn’t your territory. Keep reading, because it is. Everything you build feeds these five gates, and the decisions the algorithms make here determine whether the system recruits your content, trusts it enough to display it, and recommends it to the person who just asked for exactly what you sell.

The DSCRI infrastructure phase covers the first five gates: discovery through indexing. DSCRI is a sequence of absolute tests where the system either has your content or it doesn’t, and every failure degrades the content the competitive phase inherits.

The competitive phase, ARGDW (annotation through won), is a sequence of relative tests. Your content doesn’t just need to pass. It needs to beat the alternatives. A page that is perfectly indexed but poorly annotated can lose to a competitor whose content the system understands more confidently. 

A brand that is annotated but never recruited into the system’s knowledge structures can lose to one that appears in all three graphs. The infrastructure phase is absolute: pass, stall, or degrade. The competitive phase is Darwinian “survival of the fittest.”

The DSCRI infrastructure phase determines whether your content even gets this far. The ARGDW competitive phase determines whether assistive engines use it.

Until now, the industry has generally compressed these five distinct processes into two words: “rank and display.” That compression blurred several separate competitive mechanisms into a single, vague notion of visibility. Understanding and optimizing for all five will make all the difference in the world.

The competitive turn: Where absolute tests become relative ones

The transition from DSCRI to ARGDW is the most significant moment in the pipeline. I call it the competitive turn.

In the infrastructure phase, every gate is binary: does the system have this content or not? Your competitors face the same test, and each of you either passes or fails. But the quality of what survives rendering and conversion fidelity creates differences that carry forward.

The differentiation through the DSCRI infrastructure gates is raw material quality, pure and simple, and you have an advantage in the ARGDW phase when better raw material enters that competition.

At the competitive turn, the questions change. The system stops asking “Do I have this?” and starts asking “Is this better than the alternatives?” 

Every gate from annotation forward is a comparison. Your confidence score matters only relative to the confidence scores of every other piece of content the system has collected on the same topic, for the same query, serving the same intent.

You’ve done everything within your power to get your content into the system fully intact. From here, the engine puts you toe to toe with your competitors.

The DSCRI ARGDW pipeline: Where absolute tests become relative

Multi-graph presence as structural advantage in ARGD(W)

The algorithmic trinity — search engines, knowledge graphs, and LLMs — operates across four of the five competitive gates: annotation, recruitment, grounding, and display. Won is the outcome produced by those four gates. Presence in all three graphs creates a compounding advantage across ARGD, and that vastly increases your chances of being the brand that wins.

The systems cross-reference across graphs constantly. An entity that exists in the entity graph with confirmed attributes, has supporting content in the document graph, and appears in the concept graph’s association patterns receives higher confidence at every downstream gate than an entity present in only one.

This is competitive math. If your competitor has document graph presence (they rank in search) but no entity graph presence (no knowledge panel, no structured entity data), and you have both, the system treats your content with higher confidence at grounding because it can verify your claims against structured facts. The competitor’s content can only be verified against other documents, which is a fuzzier verification path — more interpretation, more ambiguity, lower confidence.

Recruitment (Gate 6): One piece of content, three separate knowledge structures

For me, this is where the three-dimensional approach comes into its own, and single-graph thinking becomes a structural liability. “SEO” optimizes for the document graph. Entity optimization (structured data, knowledge panel, and entity home) optimizes for the entity graph. 

Consistent, well-structured copywriting across authoritative platforms optimizes for the concept graph. Most brands invest heavily in one (perhaps two) and ignore the others. The brands that win at the competitive gates are stronger than their competitors in all three graphs at every gate in ARGD(W).

Annotation: The gate that decides what your content means across 24+ dimensions

Annotation is something I haven’t heard anyone else (other than Microsoft’s Fabrice Canel) talking about. And yet it’s very clearly the hinge of the entire pipeline. It sits at the boundary between the two phases: the last gate that applies absolute classification, and the first gate that feeds competitive selection. Everything upstream (in DSCRI) prepared the raw material. Everything downstream in ARGDW depends on how accurately the system can classify it.

At the indexing gate, the system stores your content in its proprietary format. Annotation is where the system reads what it stored and decides what it means. The classification operates across at least five categories comprising at least 24 dimensions.

Canel confirmed the principle and noted there are (a lot) more dimensions than the ones I’ve mapped. What follows is my reconstruction of the categories I can identify from observed behavior and educated guesses.

Canel confirmed the Annotation gate back in 2020 on my podcast as part of the Bing Series, in the episode “Bingbot: Discovering, Crawling, Extracting and Indexing.”

  • “We understand the internet, we provide the richness on top of HTML to a lot, lot, lot of features that are extracted, and we provide annotation in order that other teams are able to retrieve and display and make use of this data.”
  • “My job stops at writing to this database: writing useful, richly annotated information, and handing it off for the ranking team to do their job.”

So we know that annotation is a “thing,” and that all the other algorithms retrieve the chunks using those annotations.

Annotation classification runs across five types of specialist models operating simultaneously per niche: 

  • One for entity and identity resolution (core identity).
  • One for relationship extraction and intent routing (selection filters).
  • One for claim verification (confidence multipliers).
  • One for structural and dependency scoring (extraction quality).
  • One for temporal, geographic, and language filtering (gatekeepers). 

This five-model architecture is my reconstruction based on observed annotation patterns and confirmed principles. The annotation system is a panel of specialists, and the combined output becomes the scorecard every downstream gate uses to compare your content against your competitors.
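
As a thought experiment, that scorecard logic can be sketched in a few lines. Everything here (the field names, the boolean gatekeepers, confidence acting as a multiplier) is my illustrative reconstruction, not a documented engine internal:

```javascript
// Toy model of an annotation scorecard (illustrative reconstruction).
// Gatekeepers are pass/fail; extraction quality and confidence
// combine multiplicatively for chunks that clear the gates.
function scoreChunk(annotation) {
  const { gatekeepers, extractionQuality, confidence } = annotation;

  // Fail any gatekeeper and the chunk never enters the competitive
  // pool, regardless of how good it is on every other dimension.
  if (!Object.values(gatekeepers).every(Boolean)) {
    return { eligible: false, score: 0 };
  }

  // A well-extracted chunk with weakly corroborated claims still
  // scores poorly: confidence multiplies, it doesn't add.
  return { eligible: true, score: extractionQuality * confidence };
}

const chunk = {
  gatekeepers: { temporal: true, geographic: true, language: true, entityResolved: true },
  extractionQuality: 0.8, // sufficient, standalone, entity-salient
  confidence: 0.5,        // verifiable but poorly corroborated
};
const result = scoreChunk(chunk); // { eligible: true, score: 0.4 }
```

The point of the sketch: two chunks classified identically on every other dimension diverge purely on the confidence multiplier, which is why confidence is the factor to chase.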

Annotation (Gate 5): How the system classifies your content

Gatekeepers 

They determine whether the content enters specific competitive pools at all:

  • Temporal scope (is this current?).
  • Geographic scope (where does this apply?).
  • Language.
  • Entity resolution (which entity does this content belong to?). 

Fail a gatekeeper, and the content is excluded from entire query classes regardless of quality.

Core identity

This classifies the content’s substance: entities present, attributes, relationships between entities, and sentiment. 

For example, a page about “Jason Barnard” that the system classifies as being about a different Jason Barnard has perfect infrastructure and broken annotation. The content was there, and the system read it, but filed it in the wrong drawer.

Selection filters 

They add query routing: intent category, expertise level, claim structure, and actionability. 

For example, content classified as informational never surfaces for transactional queries, regardless of how well it performs on every other dimension.

Extraction quality

Think:

  • Sufficiency (does this chunk contain enough to be useful?)
  • Dependency (does it rely on other chunks to make sense?)
  • Standalone score (can it be extracted and still work?)
  • Entity salience (how central is the focus entity?)
  • Entity role (is the entity the subject, the object, or a peripheral mention?)

Weak chunks get discarded before competition begins.

Confidence multipliers 

These determine how much the system trusts its own classification: verifiability, provenance, corroboration count, specificity, evidence type, controversy level, consensus alignment, and more.

Two pieces of content can be classified identically on every other dimension and still receive wildly different confidence scores based on how verifiable and corroborated their claims are.

An important aside on confidence

Confidence is a multiplier that determines whether systems have the “courage” to use a piece of content for anything.

Once upon a time, content was king. Then, a few years ago, context took over in many people’s minds.

Confidence is the single most important factor in SEO and AAO, and always has been — we just didn’t see it.

To retain their users, search and assistive engines must provide the most helpful results possible. Give them a piece of content that appears super relevant and helpful from a content and context perspective, but that they have absolutely no confidence in for one reason or another, and they will likely not use it for fear of providing a terrible user experience.

What happens when annotation fails you (silently)

Annotation failures are the most dangerous failures in the pipeline because they are invisible. The content is indexed. But if the system misclassifies it, every competitive decision downstream inherits that misclassification.

I’ve watched this pattern repeatedly in our database: a page is indexed, it appears in search results, and yet the entity still gets misrepresented in AI responses.

Imagine this: A passage/chunk from your website is in the index, but confidence has degraded through the DSCRI part of the pipeline, and the annotation stage has received a degraded version. 

The structural issues at the rendering and indexing gates didn’t prevent indexing, but what got indexed was a degraded version of the original content. That degradation makes the annotation less accurate, less complete, and less confident. That annotative weakness will propagate through every competitive gate that follows in ARGDW.

Even when your content is included in grounding or display, suboptimal annotation means it is underperforming. You can always improve annotation.

Measuring annotation quality in ARGDW

Annotation quality is the most important gate in the AI engine pipeline, but unfortunately, you can’t measure annotation quality directly. Every metric available to you is an indirect downstream effect.

The KPIs I suggest below are signals that clearly show where your content cleared indexing and failed annotation: the engine found the page, rendered it, indexed it, and then drew the wrong conclusions from it.

That distinction matters: beware of “we need more content” when the real problem is “the engine misread the content we have.”

Your brand SERP tells you exactly what the algorithm understood

These signals reveal how accurately the AI has understood who you are, what you do, and who you serve. The brand SERP (and AI résumé) is a readout of the algorithm’s model of your brand, and because it is updated continuously, it makes a great KPI.

  • Brand SERP shows incorrect entity associations: wrong competitors, wrong category, wrong geography.
  • AI résumé is noncommittal, hedged, or incomplete.
  • AI outputs underestimate your NEEATT credentials.
  • Knowledge panel displays incorrect information.
  • AI describes your brand using a competitor’s framing or category language.
  • Entity type is misclassified (person treated as organization, product treated as service).
  • AI can’t answer basic factual questions about your brand and offers without hedging.

If the algorithm can’t place you in a competitive set, it won’t recommend you

These signals reveal which entities the system considers comparable — a direct readout of how annotation classified them. Annotation places entities into competitive pools, and if your brand doesn’t appear in comparison sets where it belongs, the engine classified it outside that pool. Better content won’t fix that. Improving the algorithm’s ability to accurately, verbosely, and confidently annotate your content will.

  • Absent from “best [product] for [use case]” results where you qualify.
  • Absent from “alternatives to [competitor]” results.
  • Absent from “[brand A] vs. [brand B]” comparisons for your category.
  • Named in comparisons but with incorrect differentiators or misattributed features.
  • Consistently ranked below competitors with weaker real-world authority signals.

For me, that last one is the most telling. Weaker brand, higher placement.

Once again, what you’re saying isn’t the problem; how you’re saying it and how you “package” it for the bots and algorithms is.

If the algorithm can’t surface you unprompted, you’re invisible at the moment of intent

These signals reveal whether the AI can place your brand at the point of discovery, before the user knows you exist. Clearing indexing means the engine has the content. Failing here means annotation didn’t connect that content to the broad topic signals that drive assistive recommendations. 

The difference between a brand that appears in “how do I solve [problem]” answers and one that doesn’t is whether annotation connected the content to the intent.

  • Absent from “how do I solve [problem your product solves]” answers, even as a passing mention.
  • Not surfaced when the AI explains a concept you coined or own.
  • Absent from AI-generated roundups, guides, and “where to start” responses for your core topic.
  • Named as a generic example rather than a recommended solution.
  • The AI discusses your subject area at length and doesn’t name you as a practitioner or source.
  • Entity present in the knowledge graph but invisible in discovery queries on AI platforms.

The three taxes you’re paying with sub-optimal annotation

Three revenue consequences follow from annotation failure, one at each layer of the funnel. 

  • The doubt tax is what you pay at BoFu when a buyer reaches your brand in the engine and the AI presents a confused, incomplete, or misframed version of what you offer. 
  • The ghost tax is what you pay at MoFu when you belong in the consideration set and the algorithm doesn’t prominently include you. 
  • The invisibility tax is what you pay at ToFu when the audience doesn’t know to look for you and the algorithm doesn’t introduce you. 

Each tax is a direct read of how well annotation worked — or didn’t.

As an SEO/AAO expert, you can diagnose your approach to reducing these three taxes for your client or company as follows: 

  • BoFu failures point to entity-level misunderstanding. 
  • MoFu failures point to competitive cohort misclassification.
  • ToFu failures point to topic-authority disconnection.

Annotation should be your focus. My bet is that for the vast majority of brands, the gate in the pipeline with the biggest payback will be annotation. 99% of the time, my advice to you is going to be “get started on fixing that before you touch anything else.”

For the full classification model in academic depth, see: 

Recruitment: The universal checkpoint where competition becomes explicit

Recruitment is where the system uses your content for the first time. Every piece of content the system has annotated now competes for inclusion in the system’s active knowledge structures, and this is where head-to-head competition begins.

Every entry mode in the pipeline — whether content arrived by crawl, by push, by structured feed, by MCP, or by ambient accumulation — must pass through recruitment. No content reaches a person without being recruited first. We could call recruitment “the universal checkpoint.”

The critical structural fact: it recruits into three distinct graphs, each with different selection criteria, different confidence thresholds, and different refresh cycles. The three-graph model is my reconstruction. 

The underlying principle (multiple knowledge structures with different characteristics) is confirmed by observing behavior across the algorithmic trinity through the data we collect (25 billion datapoints covering Google’s Knowledge Graph, brand search results, and LLM outputs).

The entity graph stores structured facts with low fuzz — who is this entity, what are its attributes, how does it relate to other entities, binary edges — and knowledge graph presence is entity graph recruitment, with entity salience, structural clarity, source authority, and factual consistency as the selection criteria.

The document graph handles content with medium fuzz — passages and pages and chunks the system has annotated and assessed as worth retaining — where search engine ranking is the visible output, and relevance to anticipated queries, content quality signals, freshness, and diversity requirements drive selection.

The concept graph operates at a different level entirely, storing inferred relationships with high fuzz — topical associations, expertise patterns, semantic connections that emerge from cross-referencing multiple sources — with LLM training data selection as the mechanism and corroboration patterns as the primary selection criterion.

Recruitment (Gate 6)

The same content may be recruited by one, two, or all three graphs. Each graph has its own speed of ingestion and its own speed of output. I call these the three speeds, a pattern I formulated explicitly this year but have been observing empirically across 10 years of brand SERP experiments: 

  • Search results are daily to weekly.
  • Knowledge graph updates are monthly. 
  • LLM updates are currently several months (when they choose to manually refresh the training data).
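
As a rough mental model, the three graphs and their speeds can be summarized in a small data structure. This is purely illustrative: the names, fields, and values below restate the descriptions above, not any engine’s internal schema.

```python
from dataclasses import dataclass

@dataclass
class KnowledgeStructure:
    name: str                  # which graph
    fuzz: str                  # how much interpretation the stored data needs
    visible_output: str        # where you see recruitment succeed
    selection_criteria: list   # what drives inclusion
    refresh_cycle: str         # how fast content flows in and out

GRAPHS = [
    KnowledgeStructure(
        name="entity graph",
        fuzz="low",
        visible_output="knowledge graph presence",
        selection_criteria=["entity salience", "structural clarity",
                            "source authority", "factual consistency"],
        refresh_cycle="monthly",
    ),
    KnowledgeStructure(
        name="document graph",
        fuzz="medium",
        visible_output="search engine ranking",
        selection_criteria=["relevance to anticipated queries",
                            "content quality signals", "freshness",
                            "diversity requirements"],
        refresh_cycle="daily to weekly",
    ),
    KnowledgeStructure(
        name="concept graph",
        fuzz="high",
        visible_output="LLM training data selection",
        selection_criteria=["corroboration patterns"],
        refresh_cycle="several months",
    ),
]

# The same content may be recruited by one, two, or all three graphs,
# each at its own speed of ingestion and output.
for g in GRAPHS:
    print(f"{g.name}: {g.fuzz} fuzz, refreshes {g.refresh_cycle}")
```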

Grounding: Where the system checks its own work in real time

Recruitment stored your content in the system’s three knowledge structures. Grounding is where the system checks whether it should trust your content, right now, for this specific query.

Search engines retrieve from their own index. Knowledge graphs serve stored structured facts. Neither needs grounding. Only LLMs have the (huge) gap between stale training data and fresh reality that makes grounding necessary. 

The need for grounding will gradually disappear as the three technologies of the algorithmic trinity converge and work together natively in real time.

In an assistive Engine, the LLM is the lead actor. When the user asks a question or seeks a solution to a problem, the LLM assesses its confidence in its own answer. 

If confidence is sufficient, it responds from embedded knowledge. If confidence is low, it sends cascading queries to the search index, retrieves results, dispatches bots to scrape selected pages, and synthesizes an answer from the fresh evidence (Perplexity is the easiest example to see this in action — an LLM that summarizes search results).

But that’s too simplistic. The three grounding sources model that follows is my reconstruction of how this lifecycle operates across the algorithmic trinity.

The search engine grounding the industry currently focuses on is this: the LLM queries the web index, retrieves documents, and extracts the answer. That’s high fuzz.

Now add this: the knowledge graph allows a simple, quick, and cheap lookup (low fuzz, binary edges, no interpretation required), and our data shows that Google already does this for entity-level queries.

My bet is that specialist SLM grounding is emerging as a third source. We know that once enough consistent data about a topic crosses a cost threshold, the system builds a small language model specialized for that niche, and that model becomes a domain-expert verifier. It would be foolish not to use that as a third grounding base.

The competitive implication is huge. A brand with entity graph presence gives the system a low-fuzz grounding path. A brand without it forces the system onto the high-fuzz path (document retrieval), which means more interpretation, more ambiguity, and lower confidence in the result. The competitor with structured entity data gets verified faster and more accurately.

In short, focus on entity optimization because knowledge graphs are the cheapest, fastest, and most reliable grounding for all the engines.
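
The grounding cascade described above can be sketched as a simple fallback chain. This is a toy illustration of my reconstruction; the confidence threshold and function names are entirely my own, not any engine’s API:

```python
def ground_answer(query, llm_confidence, knowledge_graph, search_index,
                  threshold=0.8):
    """Toy grounding cascade: try the cheapest, lowest-fuzz source first."""
    # 1. Confident enough? Answer directly from embedded (training) knowledge.
    if llm_confidence >= threshold:
        return ("embedded knowledge", None)
    # 2. Low-fuzz path: a direct entity lookup, no interpretation needed.
    if query in knowledge_graph:
        return ("knowledge graph lookup", knowledge_graph[query])
    # 3. High-fuzz path: retrieve documents and synthesize. Slower,
    #    more ambiguous, lower confidence in the result.
    docs = [d for d in search_index if query in d]
    return ("document retrieval", docs)

# A brand with entity graph presence gets the cheap, low-fuzz path:
kg = {"acme founding year": "1999"}
index = ["acme was founded in 1999 by ...", "unrelated page"]
print(ground_answer("acme founding year", 0.3, kg, index))
# A brand without it forces the expensive, high-fuzz path:
print(ground_answer("acme revenue", 0.3, kg, index))
```

The design point is the ordering: structured entity data short-circuits the cascade before the ambiguous retrieval step ever runs.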

Display: Where machine confidence meets the person

Your content has been annotated, recruited into its knowledge structures, and verified through grounding. Display is where the AI assistive engine decides what to show the person (and, looking to the future that is already happening, where the AI assistive Agent decides what to act upon).

Display is three simultaneous decisions: format (how to present), placement (where in the response), and prominence (how much emphasis). A brand can be annotated, recruited, and grounded with high confidence and still lose at display because the system chose a different format, placed the competitor more prominently, or decided the query deserved a different type of answer entirely.

This is essentially the same thing as Bing’s Whole Page Algorithm. Gary Illyes jokingly called Google’s whole page algorithm “the magic mixer.” Nathan Chalmers, PM for the whole page algorithm at Bing, explained how that works on my podcast in 2020. Don’t make the mistake of thinking this is out of date — it isn’t. The principles are even more relevant than ever.

UCD activates at display

You may have heard or read me talking obsessively about understandability, credibility, and deliverability. UCD is absolutely fundamental because it is the internal structure of display: the vertical dimension that makes this gate three-dimensional.

The same content, grounded with the same confidence, presents differently depending on who is asking and why.

A person arriving with high trust — they searched your brand name, they already know you — experiences display at the understandability layer, where the engine acts as a trusted partner confirming what they already believe, which is BOFU.

A person evaluating options — they asked “best [category] for [use case]” — experiences display at the credibility layer, where the engine presents evidence for and against as a recommender, which is MOFU.

A person encountering your brand for the first time — a broad topical question in which your name appears — experiences it at the deliverability layer, where the system introduces you, which is TOFU.

The user interaction reveals the funnel position. The funnel position determines which UCD layer fires.

This is why optimizing only for “ranking” misses reality: Display is a context-sensitive presentation, not a list, and the same piece of content can introduce, validate, or confirm depending on who asked.
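
To make that mapping concrete, here is a minimal sketch of the funnel-to-layer logic. The query types and the mapping itself are my illustrative encoding of the descriptions above, not engine code:

```python
# Funnel position -> UCD layer, as described above (illustrative only).
UCD_LAYERS = {
    "BOFU": ("understandability",
             "trusted partner confirming what the user already believes"),
    "MOFU": ("credibility",
             "recommender presenting evidence for and against"),
    "TOFU": ("deliverability",
             "introducer surfacing the brand for the first time"),
}

def classify_interaction(query_type):
    """Toy classifier: the user interaction reveals the funnel position."""
    if query_type == "branded search":        # they already know you
        return "BOFU"
    if query_type == "best-for comparison":   # they are evaluating options
        return "MOFU"
    return "TOFU"                             # broad topical question

stage = classify_interaction("best-for comparison")
layer, role = UCD_LAYERS[stage]
print(stage, layer)  # MOFU credibility
```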

The framing gap at display

The system presents what it understood, verified, and deemed relevant. The gap between that and your intended positioning is the framing gap, and it operates differently at each funnel stage.

  • At TOFU, the gap is cognitive: the system may know you exist, but doesn’t associate you with the right topics. 
  • At MOFU, the gap is imaginative: the system needs a frame to differentiate your proof from the competitor’s, and most brands supply claims without frames. 
  • At BOFU, the gap is about relevance: the system cross-references your claims against structured evidence, and either confirms or hedges.

After annotation, framing is the single most important part of the SEO/AAO puzzle, so I’ll talk a lot about both in the coming articles.

Won: The zero-sum moment where one brand wins and every competitor loses

Everything I’ve explained so far in this series collapses into a zero-sum point at the “won” gate. Here, the outcome is binary. The person (or agent) acts, or they don’t. One brand converts, and every competitor loses. 

The system may have mentioned others at display, but at the moment of commitment, there can only be one winner for the transaction.

Three won resolutions in the competitive context

Won always resolves through three distinct mechanisms, each with different competitive dynamics.

Resolution 1: Imperfect click

  • The AI influences the person’s thinking at grounding and display, but the person decides independently: they choose one of several options offered by the engine, they walk into the store, or they book by phone. 
  • This is what Google called the “zero moment of truth”: the competitive battle happens at display, where the engine has influenced the human, but the active choice is still very much the person’s own.

Resolution 2: Perfect click

  • The AI recommends one brand and the person takes it. This is the natural next step, what I call the zero-sum moment. 
  • This fires inside the AI interface, where the engine filtered for intent, context, and readiness, presented one answer, and the person converted.

Resolution 3: Agential click

  • The AI agent acts autonomously on the person’s behalf. No person sits at the decision point; the transaction is an API settlement between the buyer’s agent and the brand’s action endpoint. 
  • The competitive battle happened entirely within the engine: whichever brand had the highest accumulated confidence, the strongest grounding evidence, and a functional transaction endpoint is the winner. The person doesn’t choose. The system chooses for them.

The trajectory runs from oldest to newest: Resolution 1 was dominant up to late 2025, Resolution 2 is taking over, and Resolution 3 gained a lot of traction in early 2026. Stripe and Cloudflare are laying the transaction and identity rails. Visa and Mastercard are building the financial authorization infrastructure. 

Anthropic’s MCP is providing the coordination layer. Google’s UCP and A2A are defining how agents communicate across the full consumer commerce journey. Apple has the closed-loop infrastructure to make it seamless on a billion devices the moment they choose to. 

Microsoft is locking in the enterprise and government layer through Copilot in a way that will be extremely difficult to displace. No single company turns Resolution 3 on — but all of them together make it inevitable.

Competitive escalation across the five ARGDW gates

The competitive intensity increases at every gate — a progressive narrowing, a Darwinian funnel where the field shrinks at each stage. The narrowing pattern is my model based on observed outcomes across our database. The underlying principle (competitive selection intensifies downstream) is structural to any sequential gating system.

Competitive narrowing
  • The field is large at annotation, where the algorithms create scorecards and your classification versus competitors’ determines downstream positioning.
  • Recruitment sets the qualifying round: multiple brands enter the system’s knowledge structures, but not all, and the selection criteria already favor multi-graph presence.
  • Grounding narrows the shortlist as confidence requirements tighten — the system verifies the candidates worth checking, not everyone.
  • Display reduces to finalists, often one primary recommendation with supporting alternatives.
  • Won is the binary outcome. The zero-sum moment you’re either welcoming with open arms or fearful of.

ARGDW: Relative tests. The scoreboard is on.

Five gates. Five relative tests. Competitive failures in ARGDW are significantly harder to diagnose than infrastructure failures in DSCRI because the fix is competitive positioning rather than technical.

  • Annotation failures mean the system misclassified what your content is or who it belongs to — write for entity clarity, structure claims with explicit evidence, and use schema markup to declare rather than expect the system to guess.
  • Recruitment failures increasingly mean you’re present in one graph while competitors have two or three — build entity graph presence (structured data, knowledge panel, entity home), document graph presence (content quality, topical coverage), and concept graph presence (consistent publishing across authoritative platforms) as a coordinated program.
  • Grounding failures mean the system is verifying you on the high-fuzz path — provide structured entity data for low-fuzz verification, and MCP endpoints if you need real-time grounding without the search step.
  • Display failures mean the framing gap is costing you at the three layers of the visible gate — assuming you fixed all the upstream issues, then closing that framing gap at every UCD layer is your pathway to gain visibility in AI engines.
  • Won failures mean the resolution mechanism doesn’t exist — Resolution 1 requires that you rank (good enough up to 2024), Resolution 2 requires that you dominate your market (good enough in 2026), and Resolution 3 requires a mandate framework and action endpoint (needed for 2027 onward).
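
On the annotation fix specifically, “declare rather than expect the system to guess” usually starts with schema.org markup. Here is a minimal sketch that generates JSON-LD for an entity home; every value is a placeholder, so adapt the types and properties to your own entity:

```python
import json

# Minimal schema.org Organization markup, generated as JSON-LD.
# All values below are placeholders for illustration.
entity = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "https://www.example.com/",   # the entity home
    "sameAs": [                          # corroborating profiles
        "https://www.linkedin.com/company/example-brand",
        "https://en.wikipedia.org/wiki/Example_Brand",
    ],
    "description": "What the brand is, stated explicitly.",
}

# Embed the output in a <script type="application/ld+json"> tag on the page.
print(json.dumps(entity, indent=2))
```

The `sameAs` links are what give the entity graph its binary edges: unambiguous, machine-readable statements of identity rather than prose the annotator has to interpret.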


After establishing the 10-gate AI engine pipeline, what’s next?

The aim of this series of articles is to give you the playbook for the DSCRI infrastructure phase and the strategy for the ARGDW competitive phase. This 10-gate AI engine pipeline breaks optimizing for assistive engines and agents into manageable chunks.

Each gate is manageable on its own. And the relative importance of each gate is now clear for you (I hope). In the remainder of this series of articles, I’ll provide solutions to the major issues at each gate that will help you manage each individually (and as part of the collective whole).

Aside: The feedback I have had from Microsoft on this series so far (thank you, Navah Hopkins) reminded me of something Chalmers said to me about Darwinism in Search back in 2020.

My explanations are often more absolute and mechanical than the reality. That’s a very fair point. But then reality is unmanageably nuanced, and nuance leads to a lack of clarity and often paralyzes people to the extent that they struggle to identify actionable next steps. I want to be useful.

I suggest we take this evolution from SEO to AAO step by step. Over the last 10+ years, I’ve always done my very best to avoid saying “it depends.”

People often say it takes 10,000 hours to become an expert. The framework presented here comes from tens of thousands of hours analyzing data, experimenting, working with the engineers who build these systems, and developing algorithms, infrastructure, and KPIs.

The aim is simple: reduce the number of frustrating “it depends” answers and provide a clear outline for identifying actionable next steps.

This is the fifth piece in my AI authority series. 


Why social search visibility is the next evolution of discoverability

While everyone focuses on AI search, the real opportunity may be social search

Search strategy once meant ranking on Google. We optimized websites and invested heavily in organic visibility. Entire marketing strategies were built around capturing demand from Google search results.

But search behavior doesn’t live on a single platform. Today, people search on TikTok for recommendations, YouTube for tutorials, Reddit for honest opinions, and Amazon for product validation.

Search behavior now spans a much wider set of platforms, creating one of the most overlooked opportunities in digital marketing.

Search behavior is diversifying

Recent research from SparkToro and Datos analyzed search behavior across 41 major platforms, including traditional search engines, ecommerce platforms, social networks, AI tools, and reference sites.

The findings reinforce something many marketers are beginning to notice. Search is no longer confined to traditional search engines.

While Google still dominates search activity, a growing share of discovery now happens across a wider collection of platforms — a search universe, if you will.

The research suggests search activity is roughly distributed as follows:

  • Traditional search engines: ~80% of searches, with Google alone at ~73.7%
  • Commerce platforms (Amazon, Walmart, eBay): ~10%
  • Social networks: ~5.5%
  • AI tools (ChatGPT, Claude, etc.): ~3.2%

Consumers search directly on platforms where they expect to find the most useful answers, in the formats they prefer, rather than relying on Google to send them elsewhere.

Dig deeper: Discoverability in 2026: How digital PR and social search work together


The industry is focused on AI and missing the bigger mainstream shift

Much of the search industry conversation today is focused on AI. Questions like:

  • How do I rank in ChatGPT?
  • How do I optimize for AI search?
  • Will AI replace Google?

They’re constantly being posed, debated, and answered by SEO professionals on platforms like Search Engine Land.

To be clear: these are important questions. But the data within this study tells a more grounded story, especially when thinking about strategy over the next 12 months.

AI search tools currently account for roughly 3.2% of search activity, per SparkToro research. That’s meaningful. It will almost certainly reshape how people search and discover information in the future.

But today, AI search is still smaller than many established discovery platforms with far broader adoption. For example:

  • Amazon receives more searches than ChatGPT.
  • YouTube receives more searches than ChatGPT.
  • Even Bing receives more search activity.

Yet many brands are pouring disproportionate attention into AI visibility while overlooking platforms where millions of searches are already happening every day.

Social platforms are now search engines

For many users, social platforms are now core search destinations. People look to:

  • TikTok for recommendations, restaurants, travel ideas, and products.
  • YouTube for tutorials, reviews, and problem-solving.
  • Reddit for honest discussions and community opinions.
  • Pinterest for inspiration and visual discovery.

Each platform plays a different role in the discovery journey.

  • TikTok/Instagram: Discovery and recommendations.
  • YouTube: Learning, tutorials, and reviews.
  • Reddit: Real opinions and community discussions.
  • Pinterest: Inspiration and planning.

These platforms are more than entertainment destinations. Users head to them with real intent to find a solution to a problem, need, or desire.

Social content is now appearing directly in Google results

As users adopt social platforms for search, Google has begun aggregating and organizing information right within its SERPs. So yes, social and creator content appears directly inside Google search results.

Over the past year, Google has significantly expanded how it surfaces social content within SERPs. Search results now frequently include TikTok videos, YouTube Shorts, Reddit threads, Instagram posts, and forum discussions.

Google even partnered with platforms like Reddit to ensure that community discussions appear more prominently in search results. This means social content can now influence discovery in multiple ways:

  • Direct searches on social platforms.
  • Visibility within Google search results.
  • Influence within AI-generated answers.

Dig deeper: Social and UGC: The trust engines powering search everywhere

Social content is also powering AI search

Social platforms are also important sources for AI-generated answers. AI systems rely on content that reflects real-world experiences, discussions, and opinions.

That’s why platforms such as Reddit, YouTube, Quora, forums, and creator-led content (i.e., Instagram, TikTok, and YouTube Shorts) are frequently cited in AI-generated responses.

Google’s AI Overviews often reference Reddit threads and YouTube videos.

Other AI tools regularly draw insights from community discussions, reviews, and creator content when generating answers.

This means content created for social discovery can influence visibility across multiple layers of search, including social platforms, Google search results, and AI-generated responses.

A single piece of content can now travel much further across this search universe, consistently putting signals in front of audiences and building preference for one brand over another.

The compounding discoverability effect

When brands invest in social search visibility, they unlock a powerful compounding effect. For example, a useful YouTube tutorial could:

  • Rank in YouTube search.
  • Appear in Google search results.
  • Be referenced in AI-generated answers.
  • Be shared across social platforms.
  • Spread through private messaging and dark social channels.

Unlike traditional website content, social content can move across platforms, dramatically expanding its reach. This creates an entirely new layer of discoverability.

And at a time when marketing budgets are under increasing scrutiny, the ability for content to generate visibility across multiple platforms makes the ROI of content strategies far more compelling.

Dig deeper: The social-to-search halo effect: Why social content drives branded search

Most brands still follow the old search playbook

Despite these shifts, most search strategies still revolve around Google SEO, paid search, website content, and AI/LLM interfaces.

Few brands have structured strategies for TikTok search optimization, YouTube search visibility, Reddit community engagement, and creator-led discovery strategies.

While Google SEO is incredibly competitive, social search remains relatively under-optimized. Brands that move early can capture visibility (presence) in spaces where demand already exists, thereby developing preference for their brand.

When brands invest in social search visibility, they aren’t just unlocking the 5.5% of searches happening directly on social platforms. They’re also influencing traditional search results, AI-generated answers, and wider discovery across the web.


Search everywhere: A new model for discoverability

Search is more than a channel. It’s a behavior that plays out across an evolving search universe.

Your audience searches wherever they believe they’ll find the best answer in the most useful format — whether that’s Google, TikTok, YouTube, Reddit, Amazon, Pinterest, or increasingly, AI interfaces.

Winning search today means being discoverable wherever those searches happen. The brands that win won’t be the ones that rank in just one place, even as traditional SEO remains an important part of the mix. They’ll be the ones that are discoverable wherever their audience searches.

That is the future of search. That is “search everywhere.”

Dig deeper: ‘Search everywhere’ doesn’t mean ‘be everywhere’


Google Ads Editor 2.12 adds creative control and campaign flexibility

Google Ads auction insights

Google is expanding capabilities in Google Ads Editor to give advertisers more creative flexibility, automation control, and budget precision — especially as AI-driven campaign types continue to evolve.

What’s new. The 2.12 release introduces a wide set of updates across Performance Max, Demand Gen, and video campaigns, with a clear focus on scaling creative assets and improving workflow efficiency.

Creative expansion. Performance Max campaigns now support up to 15 videos per asset group, allowing advertisers to feed more variations into Google’s AI for testing. The addition of 9:16 vertical images also reflects growing demand for mobile-first formats, particularly across surfaces like short-form video.

Campaign upgrades. Demand Gen campaigns get several enhancements, including new customer acquisition goals, brand guideline controls, and hotel feed integrations. A new minimum daily budget and a streamlined campaign build flow aim to improve stability and setup.

Video & AI control. Updates to non-skippable video formats and real-time bid guidance give advertisers more control over performance, while new text and brand guidelines help ensure AI-generated assets stay on-brand and compliant.

Budgeting shift. A new total campaign budget feature allows advertisers to set a fixed spend across a defined period — ideal for promotions or seasonal bursts — with Google automatically pacing delivery.

Workflow improvements. Account-level tracking templates, better visibility into Final URL expansion performance, clearer campaign status filters, and bulk link replacement tools are designed to reduce manual work and improve account management at scale.

Why we care. This update to Google Ads Editor gives advertisers more creative flexibility and control over AI-driven campaigns, especially in Performance Max and Demand Gen. Features like increased video limits, vertical assets, and total campaign budgets help you test more, scale faster, and manage spend more efficiently.

It also improves workflows and brand safeguards, making it easier to guide automation while maintaining consistency and performance across Google Ads.

Between the lines. The update continues a broader trend: as automation increases, Google is giving advertisers more ways to guide AI rather than manually control every input.

The bottom line. Google Ads Editor 2.12 is less about one standout feature and more about incremental gains across creative, automation, and control — helping advertisers better manage increasingly AI-driven campaigns within Google Ads.


How Google’s Universal Commerce Protocol could reshape search conversions

How Google’s Universal Commerce Protocol could reshape search conversions

As Google rolls out AI Overviews, AI Mode in Search, and the Gemini ecosystem, we face a growing challenge: what happens when users get answers — and soon complete purchases — without leaving Google’s interfaces?

Enter Google’s Universal Commerce Protocol (UCP), now in beta.

UCP is designed to help brands sell to consumers without leaving the Gemini or LLM experience. Consumers can check out within the LLM, add rewards points, and fully execute the transaction. Here’s an example flow:

Google UCP workflow example

How Google’s Universal Commerce Protocol works

At its core, UCP standardizes how consumer AI interfaces communicate with merchant checkout systems. When a user tells Gemini, “Find me a highly rated, waterproof hiking boot in size 10 under $200 and buy it,” UCP is the invisible bridge that allows the AI to securely fetch inventory, process the payment, and confirm the order.

While Google’s developer documentation leans into technical jargon like “Model Context Protocol (MCP)” and “Agent2Agent (A2A) interoperability,” the implications are remarkably straightforward:

  • It uses your existing feeds: UCP plugs directly into your existing Google Merchant Center (GMC) shopping feeds. The inventory data you’re already managing for your campaigns is the same data that will power these AI transactions.
  • You keep the data: Unlike selling on some third-party marketplaces, where you lose the customer relationship, UCP ensures you remain the merchant of record. You process the transaction, you own the first-party customer data, and you control the post-purchase experience.
  • Frictionless checkout: By enabling checkouts directly within Google’s AI ecosystem, UCP can reduce cart abandonment and increase conversion rates among high-intent shoppers.
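
At a conceptual level, the transaction flow looks something like the sketch below. Every function and field name here is hypothetical (the real interfaces are defined in Google’s open-source UCP spec on GitHub); the point is only to show the merchant-of-record shape of the exchange:

```python
# Hypothetical sketch of an agent-driven checkout. None of these
# function or field names come from the actual UCP specification.

def find_product(feed, max_price, **attrs):
    """The AI matches the user's constraints against the merchant feed."""
    for item in feed:
        if item["price"] <= max_price and all(
            item.get(k) == v for k, v in attrs.items()
        ):
            return item
    return None

def checkout(merchant, item):
    """The merchant stays the merchant of record: it processes the
    payment and keeps the first-party customer data."""
    return {"merchant_of_record": merchant,
            "sku": item["sku"],
            "total": item["price"]}

# "Find me a waterproof hiking boot in size 10 under $200 and buy it."
feed = [
    {"sku": "BOOT-10", "price": 179.0, "size": 10, "waterproof": True},
    {"sku": "BOOT-11", "price": 210.0, "size": 11, "waterproof": True},
]
match = find_product(feed, max_price=200, size=10, waterproof=True)
order = checkout("Example Outdoor Co.", match)
print(order["sku"], order["total"])  # BOOT-10 179.0
```

Notice that the feed is the only sales surface the agent sees, which is why the feed-hygiene practices below matter so much.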

Dig deeper: How Google’s Universal Commerce Protocol changes ecommerce SEO


Best practices for Google’s UCP

Like many LLM optimization recommendations, these steps come down to the fundamentals of managing your shopping feed and Merchant Center account.

Google outlined a few best practices. If you follow these four steps, you’ll be well-positioned for success.

1. Master your feed data hygiene

In an agentic commerce environment, your product feed is your primary sales tool. To ensure the AI accurately matches your products to highly specific user queries, you need to enrich your feed with granular details.

  • Write product titles that are 30 or more characters long.
  • Expand product descriptions to 500 or more characters.
  • Include Global Trade Item Numbers (GTINs), where relevant, to ensure accurate product matching.
  • Include three or more additional images alongside your primary product photo to engage visual shoppers.
  • Use lifestyle images, not just standard product shots on white backgrounds.
  • Ensure your image quality meets the standard of 1,500×1,500 pixels.
  • Categorize your inventory by product type and share key product highlights.
  • Prepare specific feed attributes required for UCP, such as returns, support information, and policy information.
  • Support Google’s Native Checkout when possible (checkout logic integrated directly into the AI interface). Google also offers another option called Embedded Checkout (an iframe-based solution for highly bespoke branding). This will work, but is suboptimal at this time.
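
These thresholds are easy to lint programmatically. Below is a minimal feed-hygiene check built on the numbers above; the field names are illustrative, so map them to your actual Merchant Center feed columns:

```python
def audit_product(p):
    """Return a list of feed-hygiene issues for one product record.
    Thresholds follow the best practices listed above."""
    issues = []
    if len(p.get("title", "")) < 30:
        issues.append("title under 30 characters")
    if len(p.get("description", "")) < 500:
        issues.append("description under 500 characters")
    if not p.get("gtin"):
        issues.append("missing GTIN")
    if len(p.get("additional_images", [])) < 3:
        issues.append("fewer than 3 additional images")
    if any(w < 1500 or h < 1500 for (w, h) in p.get("image_sizes", [])):
        issues.append("image below 1500x1500 pixels")
    return issues

product = {
    "title": "Waterproof leather hiking boot, mens size 10",
    "description": "x" * 600,          # stand-in for a long description
    "gtin": "00012345678905",
    "additional_images": ["a.jpg", "b.jpg"],
    "image_sizes": [(2000, 2000)],
}
print(audit_product(product))  # ['fewer than 3 additional images']
```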

Dig deeper: Google publishes Universal Commerce Protocol help page

2. Highlight convenience and trust signals

To set your brand apart when AI is helping consumers make immediate, confident purchasing decisions, you must pass trust and convenience signals directly through your feed. The data shows that these elements directly impact the bottom line:

  • Indicate clearly if your brand offers free shipping.
  • Share your shipping speed (next day, two-day, etc.).
  • Display your return policy.
  • Submit sale prices when available. Regardless, ensure the feed represents the most accurate pricing details.
  • Include product ratings.
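Passing these signals "through your feed" means they live alongside price and title in each product entry. A small sketch of how a feed pipeline might attach them, with all field names assumed for illustration rather than taken from the Merchant Center spec:

```python
# Illustrative helper that merges trust and convenience signals into a feed
# entry before submission. Keys like "shipping_speed" and "return_policy"
# are hypothetical; map them to your platform's real attribute names.

def add_trust_signals(item, *, free_shipping, shipping_speed,
                      return_window_days, price, sale_price=None,
                      rating=None):
    item = dict(item)  # copy so the caller's entry is not mutated
    item["shipping_cost"] = "0 USD" if free_shipping else "varies"
    item["shipping_speed"] = shipping_speed          # e.g. "next day"
    item["return_policy"] = f"{return_window_days}-day returns"
    item["price"] = price                            # most accurate price
    if sale_price is not None:
        item["sale_price"] = sale_price              # submit when on sale
    if rating is not None:
        item["product_rating"] = rating
    return item

entry = add_trust_signals(
    {"id": "SKU-123", "title": "Example product"},
    free_shipping=True,
    shipping_speed="two-day",
    return_window_days=30,
    price="49.99 USD",
    sale_price="39.99 USD",
    rating=4.6,
)
print(entry["sale_price"], entry["return_policy"])
```

The design point is that every signal is explicit and machine-readable: an AI assistant deciding between two near-identical products can only weigh free shipping or a generous return window if those facts are in the data it receives.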

3. Upgrade your technical infrastructure and SEO

The shift to UCP requires foundational updates to how your backend systems interact with Google. Work hand in hand with your development and SEO teams to prepare for these AI search experiences.

  • Migrate from the Content API to the Merchant API to enable real-time inventory updates and programmatic access to data and insights.
  • Upgrade your tag in Data Manager and implement Conversions with Cart Data to effectively use first-party data in your campaigns.
  • Prioritize content-rich pages for indexing and crawling, and ensure structured data is always supported by visible content.
  • Create your Business Profile and claim your Brand Profile to highlight your business information and brand voice on Google platforms.
  • Have your development team explore and prototype with UCP open source on GitHub to map APIs for checkout, session creation, and order management.
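On the structured-data point above, the widely used format is schema.org Product markup embedded as JSON-LD. A minimal sketch, generated here with Python's standard library; the product values are placeholder assumptions and should always be mirrored by visible on-page content:

```python
import json

# Minimal schema.org Product markup (JSON-LD). "@context", "@type",
# "brand", "offers", etc. are real schema.org vocabulary; the specific
# product values are illustrative placeholders.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Trail Running Shoe",
    "image": [
        "https://example.com/shoe-front.jpg",
        "https://example.com/shoe-lifestyle.jpg",  # lifestyle shot, per above
    ],
    "description": "Waterproof trail running shoe with a reinforced toe cap.",
    "sku": "SKU-123",
    "brand": {"@type": "Brand", "name": "ExampleBrand"},
    "offers": {
        "@type": "Offer",
        "price": "49.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# This string would be embedded in the product page inside a
# <script type="application/ld+json"> element.
print(json.dumps(product, indent=2))
```

Keeping this markup in sync with the Merchant Center feed (same price, same availability) matters more in an agentic environment, where mismatches between the page and the feed can undermine the trust signals the AI relies on.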

4. Additional features and tools beyond UCP to consider

Google is actively rolling out pilot programs designed specifically for the agentic era. Be proactive in adopting these new solutions rather than waiting for wide release:

  • Prepare for the “Business Agent,” a virtual sales associate that acts like a brand representative to answer product questions right on Google.
  • Consider the “Direct Offers Pilot,” a new way for advertisers to present exclusive discounts directly in AI Mode.
  • Inquire about the “Conversational Attributes Pilot,” which introduces dozens of new Merchant Center attributes designed to enhance discovery in the conversational commerce era.

Dig deeper: Are we ready for the agentic web?

The future of search will happen within LLMs

The launch of Google’s Universal Commerce Protocol signals a significant shift. The SERP is becoming a transactional engine that increasingly operates within large language models.

UCP presents a meaningful opportunity. By removing friction between discovery and purchase, conversion rates could increase.

However, taking advantage of this requires stepping outside the Google Ads interface and working directly in your feed data and technical integrations, much like with Google Shopping. While this isn’t new, it’s becoming more important.

Ultimately, this comes down to the quality of your product data.
