Google says AI Mode stays ad-free for Personal Intelligence users

Although Google continues to test ads in AI Mode, users who connect apps to enable Personal Intelligence won’t see ads — and that isn’t changing right now, a Google spokesperson confirmed.

What’s happening. Google has been testing ads inside AI Mode in the U.S.

  • Early results: users find these connections to businesses “helpful,” per Google.
  • But there’s a clear carveout: no ads for users who opt into app-connected, highly personalized experiences.

The details. Google today expanded Personal Intelligence in AI Mode as a beta to anyone in the U.S., allowing Gemini to generate more tailored responses by connecting data across its ecosystem, including Google Search, Gmail, Google Photos, and YouTube.

  • Opting into Personal Intelligence creates an ad-free experience inside AI Mode.

Why we care. Ads are coming to AI Mode, but Google is moving cautiously where personal data is deepest. Personal Intelligence experiences stay ad-free for now while Google works out the right balance.

What Google is saying. A Google spokesperson told Search Engine Land:

  • “There are currently no ads for people who choose to connect their apps with AI Mode. That isn’t changing right now.
  • “Over the past few months, we’ve been testing ads in AI Mode in the US. Our tests have shown that people find these connections to businesses helpful and open up new opportunities to discover products and services.
  • “In the future, we anticipate that ads will operate similarly for people who choose to connect their apps with AI Mode. Ads will continue to be relevant to things like your query, the context of the response and your interests.”

Bottom line. Personal Intelligence positions Google’s Gemini app as a more personalized assistant, setting the stage for future ad experiences built on richer, cross-platform user context.

Yahoo CEO: Google AI Mode is the biggest threat to web traffic

Yahoo CEO Jim Lanzone said AI-powered search — especially Google’s AI Mode — is putting the open web’s core traffic model at risk and argues AI search engines must send users back to publishers.

  • “I think that the LLMs are one big reason that they’re under threat, with AI Mode in Google being the biggest challenge.”
  • “Those publishers deserve [traffic], and we’re not going to have the content to consume to give great answers if publishers aren’t healthy.”

Why we care. Many websites are seeing less traffic from answer engines like Google and OpenAI — and I think it’ll only get worse. So it’s encouraging to see Yahoo trying to preserve the “search sends traffic” model. As he said: “We have very purposefully highlighted and linked very explicitly and bent over backwards to try to send more traffic downstream to the people who created the content.”

Yahoo’s AI stance. Yahoo is taking a different approach from chatbot-style interfaces, Lanzone said on the Decoder podcast. He added that Yahoo isn’t trying to compete as a full AI assistant:

  • “Ours looks a lot more like traditional search and it is more paragraph-driven. It’s not a chatbot that’s trying to act like it’s a person and be your friend.”
  • “We’re not a large language model. We’re not going to be the place you come to code. We’ve really launched Scout as an answer engine.”

What’s next: Personalization + agentic actions. Yahoo plans to expand Scout beyond basic answers and is embedding AI across its ecosystem:

  • “You are very shortly going to see us get into very personalized results. You’re going to see us get into very agentic actions that you can take.”
  • “There’s a button in Yahoo Finance that does analysis of a given stock on the fly… It is in Yahoo Mail to help summarize and process emails.”

Yahoo vs. Google isn’t a thing. Yahoo isn’t trying to win by converting Google users directly. Instead, Yahoo is prioritizing its existing audience and increasing usage frequency over immediate market share gains:

  • “Nobody chooses, you will not be surprised, Yahoo over Google or somewhere else to search. The way that we get our search volume is because we have 250 million US users and 700 million global users in the Yahoo network at any given time. There’s a search box there. And infrequently, they use it.”

A warning. Companies — including publishers — should be cautious about relying too heavily on AI platforms as intermediaries. Lanzone compared today’s AI partnerships to Yahoo’s past reliance on Google:

  • “You are tempting fate by opening up a way for consumers to access your product within a large language model.”
  • “The big bad wolf will come to your door and say everything’s cool.”

The interview. Yahoo CEO Jim Lanzone on reviving the web’s homepage

How nonprofits can build a digital presence that actually drives impact

A nonprofit’s digital presence stopped being a “nice-to-have” a long time ago. It’s the central hub for mission delivery, donor engagement, and advocacy.

Many organizations struggle with the technical and strategic foundations needed to turn a website and a few social accounts into a high-performing digital ecosystem.

The goal isn’t simply to “be online.” It’s to build reliable infrastructure, so your organization owns its narrative, protects its assets, and measures the impact of “free” digital efforts.

Here’s a practical look at the critical elements of managing a nonprofit’s digital presence — and the common pitfalls to avoid — based on my experience helping several organizations throughout my career.

If you help an organization with digital marketing and they aren’t following these practices, your first step should be getting their digital house in order.

1. Own your foundations: Domains and account control

Owning your name and your story is an essential part of a proactive online reputation management strategy and a critical aspect of managing an online entity.

In my experience, the most overlooked risk in nonprofit digital management is the lack of direct ownership of technical assets.

A well-meaning volunteer or third-party agency often registers a domain or creates a social account using personal credentials. If that individual leaves the organization, you risk losing access to your primary digital channel — the domain you should own and control.

I’ve worked with several organizations that had to start over completely because they lacked control.

  • Domain ownership: Ensure the domain is registered in the organization’s name using a generic “admin@” or “info@” email address that multiple stakeholders can access. Set the domain to auto-renew and use a registrar that offers robust security features.
  • Website hosting and management: The organization also needs to control its website hosting and administration. Use a similar approach to the one recommended for domain ownership.
  • Social media governance: Again, use a similar process to the one described above to establish ownership of key social media channels. Grant volunteers access via delegation on individual channels rather than sharing passwords. This allows you to revoke access immediately if a staff member or volunteer moves on, protecting your brand’s voice and security.

Dig deeper: Google Ad Grants now lets nonprofits optimize for shop visits

2. Move beyond ‘winging it’: The editorial calendar

A common mistake for nonprofits is posting only when there’s an immediate need, which is often only when making a fundraising appeal. This “broadcast-only” approach often leads to donor fatigue and low engagement.

To build a community, you need a content plan that balances stories of impact with actionable requests.

  • The 70/20/10 rule: Aim for 70% value-based content (success stories, educational info), 20% shared content from partners or community members, and only 10% direct “asks.”
  • The editorial calendar: Use a simple tool, even a shared spreadsheet, to map out your themes and individual pieces of content for the month. This ensures you aren’t scrambling for a post on Giving Tuesday, that everyone knows what’s expected of them, and that your messaging and pace of content creation remain consistent across email, social, and your blog.

3. Tracking what matters (and ignoring what doesn’t)

Data is only useful if it informs future decisions. Many organizations get bogged down in “vanity metrics” like total likes or page views without understanding whether those numbers lead to real-world outcomes.

  • Set up conversion tracking: It isn’t enough to know that 1,000 people visited your site. You need to know how many of them clicked the “Donate” button or signed up for your newsletter (see the example snippet after this list).
  • Behavioral analytics: Use cost-free tools like Google Analytics 4 and Microsoft Clarity to see where people are dropping off in your donation funnel. If 50% of people leave the site on your “Ways to Help” page, you may have a UX issue or a confusing call to action.
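As a concrete illustration of the conversion-tracking point above, here is a minimal sketch, assuming the site already loads Google Analytics 4 through the standard gtag.js snippet, of recording donate-button clicks as a GA4 event. The event name donate_click and the #donate-button selector are illustrative choices, not required values; you would then mark the event as a key event in GA4 so it counts as a conversion.

    // Minimal sketch: send a GA4 event when the donate button is clicked.
    // Assumes the standard gtag.js snippet is already installed on the page.
    // "donate_click" and "#donate-button" are illustrative names, not GA4 requirements.
    declare function gtag(command: "event", eventName: string, params?: Record<string, unknown>): void;

    const donateButton = document.querySelector<HTMLAnchorElement>("#donate-button");

    if (donateButton) {
      const donateUrl = donateButton.href;
      donateButton.addEventListener("click", () => {
        gtag("event", "donate_click", {
          // Optional parameters you can slice by later in GA4 reports.
          link_url: donateUrl,
          page_location: window.location.href,
        });
      });
    }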

4. Optimize for the ‘mobile-first’ donor

Most global web traffic is now mobile, and for nonprofits, this is critical. Donors often engage with your content on social media on their phones and expect a seamless transition to your donation page.

  • Speed and simplicity: Fancy header videos, sliders, and bloated images slow down your site, like the nonprofit example in this article about bad website design. Less is more when speed is of the essence. Reduce friction to make your website more usable. For example, if your donation page takes more than three seconds to load or requires more form fields than necessary, you’re leaving donations on the table.
  • Payment flexibility: Incorporate digital wallets like Apple Pay, Google Pay, or PayPal. Reducing friction at the point of donation is one of the most effective ways to increase your conversion rate. Many nonprofits use third-party tools to manage donations, so keep payment flexibility in mind when choosing a payment partner.

Dig deeper: Why now is the most important time for nonprofit advertising

Common pitfalls to avoid

Even well-intentioned nonprofits can undermine their digital presence with a few common mistakes.

Targeting ‘everyone’

One of the biggest mistakes is trying to reach everyone. A digital presence that tries to appeal to every demographic usually ends up appealing to no one. Define your “ideal supporter,” and tailor your language, imagery, and platform choice to them.

Neglecting accessibility

Accessibility is about inclusion. Ensure your images have alt text, your videos have captions, and your website colors have enough contrast for users with visual impairments. If a portion of your audience can’t interact with your site, you aren’t fulfilling your mission.
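As a quick spot check for the alt-text point, a short script like the one below, pasted into the browser console, lists images that ship without any alternative text. It is a sketch, not a full audit; captions and color contrast still need tools such as Lighthouse or a manual review.

    // Spot check: list <img> elements that have no alt attribute at all.
    // Purely decorative images should still carry an empty alt="" so screen readers skip them.
    const imagesMissingAlt = Array.from(document.querySelectorAll<HTMLImageElement>("img"))
      .filter((img) => !img.hasAttribute("alt"));

    imagesMissingAlt.forEach((img) => console.warn("Missing alt text:", img.currentSrc || img.src));
    console.log(`${imagesMissingAlt.length} image(s) on this page have no alt attribute.`);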

The ‘set it and forget it’ mentality

I often tell businesses to treat websites like any other business asset, and the same applies to nonprofits. Digital ecosystems require maintenance.

Links break, plugins need updates, and search algorithms change. A quarterly “digital audit” to check your site speed, broken elements, and SEO health is essential for long-term visibility.
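For the “links break” part of that audit, even a small script helps. The sketch below assumes Node 18+ or Deno, where fetch is built in, and reports any URL from a hand-maintained list that no longer returns a healthy status; the example URLs are placeholders.

    // Minimal quarterly link check: flag key URLs that no longer resolve cleanly.
    // Requires Node 18+ or Deno, where fetch is available globally.
    const urlsToCheck: string[] = [
      "https://example.org/",              // placeholders: swap in your own key pages
      "https://example.org/donate",
      "https://example.org/ways-to-help",
    ];

    async function checkLinks(urls: string[]): Promise<void> {
      for (const url of urls) {
        try {
          // Some servers reject HEAD requests; fall back to GET if you see false positives.
          const response = await fetch(url, { method: "HEAD", redirect: "follow" });
          if (!response.ok) {
            console.warn(`BROKEN (${response.status}): ${url}`);
          }
        } catch (error) {
          console.warn(`UNREACHABLE: ${url}`, error);
        }
      }
    }

    void checkLinks(urlsToCheck);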

Dig deeper: How to use Google Ads to get more donations for your nonprofit

Turning your digital ecosystem into a mission multiplier

A successful digital presence is built on the same principles as a successful mission: consistency, transparency, and clear communication. By owning your assets, planning your content, and grounding your decisions in data, you ensure that your digital ecosystem serves as a force multiplier for the people you’re trying to help.

5 competitive gates hidden inside ‘rank and display’

If you’re a content strategist, you might feel this isn’t your territory. Keep reading, because it is. Everything you build feeds these five gates, and the decisions the algorithms make here determine whether the system recruits your content, trusts it enough to display it, and recommends it to the person who just asked for exactly what you sell.

The DSCRI infrastructure phase covers the first five gates: discovery through indexing. DSCRI is a sequence of absolute tests where the system either has your content or it doesn’t, and every failure degrades the content the competitive phase inherits.

The competitive phase, ARGDW (annotation through won), is a sequence of relative tests. Your content doesn’t just need to pass. It needs to beat the alternatives. A page that is perfectly indexed but poorly annotated can lose to a competitor whose content the system understands more confidently. 

A brand that is annotated but never recruited into the system’s knowledge structures can lose to one that appears in all three graphs. The infrastructure phase is absolute: pass, stall, or degrade. The competitive phase is Darwinian “survival of the fittest.”

The DSCRI infrastructure phase determines whether your content even gets this far. The ARGDW competitive phase determines whether assistive engines use it.

Until now, the industry has generally compressed these five distinct processes into two words: “rank and display.” That compression obscures several separate competitive mechanisms. Understanding and optimizing for all five will make all the difference.

The competitive turn: Where absolute tests become relative ones

The transition from DSCRI to ARGDW is the most significant moment in the pipeline. I call it the competitive turn.

In the infrastructure phase, every gate is absolute: does the system have this content or not? Your competitors face the same test, and each of you either passes or fails. But the quality of what survives rendering and conversion fidelity creates differences that carry forward.

The differentiation through the DSCRI infrastructure gates is raw material quality, pure and simple, and you have an advantage in the ARGDW phase when better raw material enters that competition.

At the competitive turn, the questions change. The system stops asking “Do I have this?” and starts asking “Is this better than the alternatives?” 

Every gate from annotation forward is a comparison. Your confidence score matters only relative to the confidence scores of every other piece of content the system has collected on the same topic, for the same query, serving the same intent.

You’ve done everything within your power to get your content fully intact. From here, the engine puts you toe to toe with your competitors.

Multi-graph presence as structural advantage in ARGD(W)

The algorithmic trinity — search engines, knowledge graphs, and LLMs — operates across four of the five competitive gates: annotation, recruitment, grounding, and display. Won is the outcome produced by those four gates. Presence in all three graphs creates a compounding advantage across ARGD, and that vastly increases your chances of being the brand that wins.

The systems cross-reference across graphs constantly. An entity that exists in the entity graph with confirmed attributes, has supporting content in the document graph, and appears in the concept graph’s association patterns receives higher confidence at every downstream gate than an entity present in only one.

This is competitive math. If your competitor has document graph presence (they rank in search), but no entity graph presence (no knowledge panel, no structured entity data), and you have both, the system treats your content with higher confidence at grounding because it can verify your claims against structured facts. The competitor’s content can only be verified against other documents, which is a higher-fuzz verification path — more interpretation, more ambiguity, lower confidence.

For me, this is where the three-dimensional approach comes into its own, and single-graph thinking becomes a structural liability. “SEO” optimizes for the document graph. Entity optimization (structured data, knowledge panel, and entity home) optimizes for the entity graph. 

Consistent, well-structured copywriting across authoritative platforms optimizes for the concept graph. Most brands invest heavily in one (perhaps two) and ignore the others. The brands that win at the competitive gates are stronger than their competitors in all three at every gate in ARGD(W).

Annotation: The gate that decides what your content means across 24+ dimensions

Annotation is something I haven’t heard anyone else (other than Microsoft’s Fabrice Canel) talking about. And yet it’s very clearly the hinge of the entire pipeline. It sits at the boundary between the two phases: the last gate that applies absolute classification, and the first gate that feeds competitive selection. Everything upstream (in DSCRI) prepared the raw material. Everything downstream in ARGDW depends on how accurately the system can classify it.

At the indexing gate, the system stores your content in its proprietary format. Annotation is where the system reads what it stored and decides what it means. The classification operates across at least five categories comprising at least 24 dimensions.

Canel confirmed the principle and confirmed there are (a lot) more dimensions than the ones I’ve mapped. What follows is my reconstruction of the categories I can identify from observed behavior and educated guesses.

Canel confirmed the Annotation gate back in 2020 on my podcast as part of the Bing Series, in the episode “Bingbot: Discovering, Crawling, Extracting and Indexing.”

  • “We understand the internet, we provide the richness on top of HTML to a lot, lot, lot of features that are extracted, and we provide annotation in order that other teams are able to retrieve and display and make use of this data.”
  • “My job stops at writing to this database: writing useful, richly annotated information, and handing it off for the ranking team to do their job.”

So we know that annotation is a “thing,” and that all the other algorithms retrieve the chunks using those annotations.

Annotation classification runs across five types of specialist models operating simultaneously per niche: 

  • One for entity and identity resolution (core identity).
  • One for relationship extraction and intent routing (selection filters).
  • One for claim verification (confidence multipliers).
  • One for structural and dependency scoring (extraction quality).
  • One for temporal, geographic, and language filtering (gatekeepers). 

This five-model architecture is my reconstruction based on observed annotation patterns and confirmed principles. The annotation system is a panel of specialists, and the combined output becomes the scorecard every downstream gate uses to compare your content against your competitors.

Gatekeepers 

They determine whether the content enters specific competitive pools at all:

  • Temporal scope (is this current?).
  • Geographic scope (where does this apply?).
  • Language.
  • Entity resolution (which entity does this content belong to?). 

Fail a gatekeeper, and the content is excluded from entire query classes regardless of quality.

Core identity

This classifies the content’s substance: entities present, attributes, relationships between entities, and sentiment. 

For example, a page about “Jason Barnard” that the system classifies as being about a different Jason Barnard has perfect infrastructure and broken annotation. The content was there, and the system read it, but filed it in the wrong drawer.

Selection filters 

They add query routing: intent category, expertise level, claim structure, and actionability. 

For example, content classified as informational never surfaces for transactional queries, regardless of how well it performs on every other dimension.

Extraction quality

Think:

  • Sufficiency (does this chunk contain enough to be useful?)
  • Dependency (does it rely on other chunks to make sense?)
  • Standalone score (can it be extracted and still work?)
  • Entity salience (how central is the focus entity?)
  • Entity role (is the entity the subject, the object, or a peripheral mention?)

Weak chunks get discarded before competition begins.

Confidence multipliers 

These determine how much the system trusts its own classification: verifiability, provenance, corroboration count, specificity, evidence type, controversy level, consensus alignment, and more.

Two pieces of content can be classified identically on every other dimension and still receive wildly different confidence scores based on how verifiable and corroborated their claims are.

An important aside on confidence

Confidence is a multiplier that determines whether systems have the “courage” to use a piece of content for anything.

Once upon a time, content was king. Then, a few years ago, context took over in many people’s minds.

Confidence is the single most important factor in SEO and AAO, and always has been — we just didn’t see it.

To retain their users, search and assistive engines must provide the most helpful results possible. Give them a piece of content that, from a content and context perspective, appears to be super relevant and helpful, but they have absolutely no confidence in it for one reason or another, and they likely will not use it for fear of providing a terrible user experience.

What happens when annotation fails you (silently)

Annotation failures are the most dangerous failures in the pipeline because they are invisible. The content is indexed. But if the system misclassifies it, every competitive decision downstream inherits that misclassification.

I’ve watched this pattern repeatedly in our database: a page is indexed, it appears in search results, and yet the entity still gets misrepresented in AI responses.

Imagine this: A passage/chunk from your website is in the index, but confidence has degraded through the DSCRI part of the pipeline, and the annotation stage has received a degraded version. 

The structural issues at the rendering and indexing gates didn’t prevent indexing, but they left the system with a degraded version of the original content. That degradation makes the annotation less accurate, less complete, and less confident. That annotative weakness will propagate through every competitive gate that follows in ARGDW.

When your content is included in grounding or display, and it’s suboptimally annotated, your content is underperforming. You can always improve annotation.

Measuring annotation quality in ARGDW

Annotation is the most important gate in the AI engine pipeline, but unfortunately, you can’t measure annotation quality directly. Every metric available to you is an indirect downstream effect.

The KPIs I suggest below are signals that clearly show where your content cleared indexing and failed annotation: the engine found the page, rendered it, indexed it, and then drew the wrong conclusions from it.

That distinction matters: beware of “we need more content” when the real problem is “the engine misread the content we have.”

Your brand SERP tells you exactly what the algorithm understood

These signals reveal how accurately the AI has understood who you are, what you do, and who you serve. The brand SERP (and AI résumé) is a readout of the algorithm’s model of your brand and, because it is updated continuously, it makes a great KPI.

  • Brand SERP shows incorrect entity associations: wrong competitors, wrong category, wrong geography.
  • AI résumé is noncommittal, hedged, or incomplete.
  • AI outputs underestimate your NEEATT credentials.
  • Knowledge panel displays incorrect information.
  • AI describes your brand using a competitor’s framing or category language.
  • Entity type is misclassified (person treated as organization, product treated as service).
  • AI can’t answer basic factual questions about your brand and offers without hedging.

If the algorithm can’t place you in a competitive set, it won’t recommend you

These signals reveal which entities the system considers comparable — a direct readout of how annotation classified them. Annotation places entities into competitive pools, and if your brand doesn’t appear in comparison sets where it belongs, the engine classified it outside that pool. Better content won’t fix that. Improving the algorithm’s ability to accurately, verbosely, and confidently annotate your content will.

  • Absent from “best [product] for [use case]” results where you qualify.
  • Absent from “alternatives to [competitor]” results.
  • Absent from “[brand A] vs. [brand B]” comparisons for your category.
  • Named in comparisons but with incorrect differentiators or misattributed features.
  • Consistently ranked below competitors with weaker real-world authority signals.

For me, that last one is the most telling. Weaker brand, higher placement.

Once again, what you’re saying isn’t the problem; how you’re saying it and how you “package” it for the bots and algorithms is.

If the algorithm can’t surface you unprompted, you’re invisible at the moment of intent

These signals reveal whether the AI can place your brand at the point of discovery, before the user knows you exist. Clearing indexing means the engine has the content. Failing here means annotation didn’t connect that content to the broad topic signals that drive assistive recommendations. 

The difference between a brand that appears in “how do I solve [problem]” answers and one that doesn’t is whether annotation connected the content to the intent.

  • Absent from “how do I solve [problem your product solves]” answers, even as a passing mention.
  • Not surfaced when the AI explains a concept you coined or own.
  • Absent from AI-generated roundups, guides, and “where to start” responses for your core topic.
  • Named as a generic example rather than a recommended solution.
  • The AI discusses your subject area at length and doesn’t name you as a practitioner or source.
  • Entity present in the knowledge graph but invisible in discovery queries on AI platforms.

The three taxes you’re paying with sub-optimal annotation

Three revenue consequences follow from annotation failure, one at each layer of the funnel. 

  • The doubt tax is what you pay at BoFu when a buyer reaches your brand in the engine and the AI presents a confused, incomplete, or misframed version of what you offer. 
  • The ghost tax is what you pay at MoFu when you belong in the consideration set and the algorithm doesn’t prominently include you. 
  • The invisibility tax is what you pay at ToFu when the audience doesn’t know to look for you and the algorithm doesn’t introduce you. 

Each tax is a direct read of how well annotation worked — or didn’t.

As an SEO/AAO expert, you can diagnose where to focus to reduce these three taxes for your client or company:

  • BoFu failures point to entity-level misunderstanding. 
  • MoFu failures point to competitive cohort misclassification.
  • ToFu failures point to topic-authority disconnection.

Annotation should be your focus. My bet is that for the vast majority of brands, the gate in the pipeline with the biggest payback will be annotation. 99% of the time, my advice to you is going to be “get started on fixing that before you touch anything else.”

For the full classification model in academic depth, see: 

Recruitment: The universal checkpoint where competition becomes explicit

Recruitment is where the system uses your content for the first time. Every piece of content the system has annotated now competes for inclusion in the system’s active knowledge structures, and this is where head-to-head competition begins.

Every entry mode in the pipeline — whether content arrived by crawl, by push, by structured feed, by MCP, or by ambient accumulation — must pass through recruitment. No content reaches a person without being recruited first. We could call recruitment “the universal checkpoint.”

The critical structural fact: it recruits into three distinct graphs, each with different selection criteria, different confidence thresholds, and different refresh cycles. The three-graph model is my reconstruction. 

The underlying principle (multiple knowledge structures with different characteristics) is confirmed by observing behavior across the algorithmic trinity through the data we collect (25 billion datapoints covering Google’s Knowledge Graph, brand search results, and LLM outputs).

The entity graph stores structured facts with low fuzz — who is this entity, what are its attributes, how does it relate to other entities, binary edges — and knowledge graph presence is entity graph recruitment, with entity salience, structural clarity, source authority, and factual consistency as the selection criteria.

The document graph handles content with medium fuzz — passages and pages and chunks the system has annotated and assessed as worth retaining — where search engine ranking is the visible output, and relevance to anticipated queries, content quality signals, freshness, and diversity requirements drive selection.

The concept graph operates at a different level entirely, storing inferred relationships with high fuzz — topical associations, expertise patterns, semantic connections that emerge from cross-referencing multiple sources — with LLM training data selection as the mechanism and corroboration patterns as the primary selection criterion.

The same content may be recruited by one, two, or all three graphs. Each graph has its own speed of ingestion and its own speed of output. I call these the three speeds, a pattern I formulated explicitly this year but have been observing empirically across 10 years of brand SERP experiments: 

  • Search results are daily to weekly.
  • Knowledge graph updates are monthly. 
  • LLM updates are currently several months (when they choose to manually refresh the training data).

Grounding: Where the system checks its own work in real time

Recruitment stored your content in the system’s three knowledge structures. Grounding is where the system checks whether it should trust your content, right now, for this specific query.

Search engines retrieve from their own index. Knowledge graphs serve stored structured facts. Neither needs grounding. Only LLMs have the (huge) gap between stale training data and fresh reality that makes grounding necessary. 

The need for grounding will gradually disappear as the three technologies of the algorithmic trinity converge and work together natively in real time.

In an assistive engine, the LLM is the lead actor. When the user asks a question or seeks a solution to a problem, the LLM assesses its confidence in its own answer.

If confidence is sufficient, it responds from embedded knowledge. If confidence is low, it sends cascading queries to the search index, retrieves results, dispatches bots to scrape selected pages, and synthesizes an answer from the fresh evidence (Perplexity is the easiest example to see this in action — an LLM that summarizes search results).

But that’s too simplistic. The three grounding sources model that follows is my reconstruction of how this lifecycle operates across the algorithmic trinity.

The search engine grounding the industry currently focuses on is this: the LLM queries the web index, retrieves documents, and extracts the answer. That’s high fuzz.

Now add this: the knowledge graph allows a simple, quick, and cheap lookup: low fuzz, binary edges, no interpretation required. Our data shows that Google already does this for entity-level queries.

My bet is that specialist SLM grounding is emerging as a third source. We know that once enough consistent data about a topic crosses a cost threshold, the system builds a small language model specialized for that niche, and that model becomes a domain-expert verifier. It would be foolish not to use that as a third grounding base.

The competitive implication is huge. A brand with entity graph presence gives the system a low-fuzz grounding path. A brand without it forces the system onto the high-fuzz path (document retrieval), which means more interpretation, more ambiguity, and lower confidence in the result. The competitor with structured entity data gets verified faster and more accurately.

In short, focus on entity optimization because knowledge graphs are the cheapest, fastest, and most reliable grounding for all the engines.

Display: Where machine confidence meets the person

Your content has been annotated, recruited into its knowledge structures, and verified through grounding. Display is where the AI assistive engine decides what to show the person (and, looking to a future that is already arriving, where the AI assistive agent decides what to act upon).

Display is three simultaneous decisions: format (how to present), placement (where in the response), and prominence (how much emphasis). A brand can be annotated, recruited, and grounded with high confidence and still lose at display because the system chose a different format, placed the competitor more prominently, or decided the query deserved a different type of answer entirely.

This is essentially the same thing as Bing’s Whole Page Algorithm. Gary Illyes jokingly called Google’s whole page algorithm “the magic mixer.” Nathan Chalmers, PM for the whole page algorithm at Bing, explained how that works on my podcast in 2020. Don’t make the mistake of thinking this is out of date — it isn’t. The principles are even more relevant than ever.

UCD activates at display

You may have heard or read me talking obsessively about understandability, credibility, and deliverability. UCD is absolutely fundamental because it is the internal structure of display: the vertical dimension that makes this gate three-dimensional.

The same content, grounded with the same confidence, presents differently depending on who is asking and why.

A person arriving with high trust — they searched your brand name, they already know you — experiences display at the understandability layer, where the engine acts as a trusted partner confirming what they already believe, which is BOFU.

A person evaluating options — they asked “best [category] for [use case]” — experiences display at the credibility layer, where the engine presents evidence for and against as a recommender, which is MOFU.

A person encountering your brand for the first time — a broad topical question in which your name appears — experiences it at the deliverability layer, where the system introduces you, which is TOFU.

The user interaction reveals the funnel position. The funnel position determines which UCD layer fires.

This is why optimizing only for “ranking” misses reality: Display is a context-sensitive presentation, not a list, and the same piece of content can introduce, validate, or confirm depending on who asked.

The framing gap at display

The system presents what it understood, verified, and deemed relevant. The gap between that and your intended positioning is the framing gap, and it operates differently at each funnel stage.

  • At TOFU, the gap is cognitive: the system may know you exist, but doesn’t associate you with the right topics. 
  • At MOFU, the gap is imaginative: the system needs a frame to differentiate your proof from the competitor’s, and most brands supply claims without frames. 
  • At BOFU, the gap is about relevance: the system cross-references your claims against structured evidence, and either confirms or hedges.

After annotation, framing is the single most important part of the SEO/AAO puzzle, so I’ll talk a lot about both in the coming articles.

Won: The zero-sum moment where one brand wins and every competitor loses

Everything I’ve explained so far in this series collapses into a zero-sum point at the “won” gate. Here, the outcome is binary. The person (or agent) acts, or they don’t. One brand converts, and every competitor loses. 

The system may have mentioned others at display, but at the moment of commitment, there can only be one winner for the transaction.

Three won resolutions in the competitive context

Won always resolves through three distinct mechanisms, each with different competitive dynamics.

Resolution 1: Imperfect click

  • The AI influences the person’s thinking at grounding and display, but the person decides independently: they choose one of several options offered by the engine, they walk into the store, or they book by phone. 
  • This is what Google called the “zero moment of truth,” where the competitive battle happens at display, where the engine has influenced the human, but the active choice the person makes is still very much “them.”

Resolution 2: Perfect click

  • The AI recommends one brand and the person takes it. This is the natural next step, what I call the zero-sum moment. 
  • This fires inside the AI interface, where the engine filtered for intent, context, and readiness, presented one answer, and the person converted.

Resolution 3: Agential click

  • The AI agent acts autonomously on the person’s behalf. No person sits at the decision point: just an API settlement between the buyer’s agent and the brand’s action endpoint.
  • The competitive battle happened entirely within the engine: whichever brand had the highest accumulated confidence, the strongest grounding evidence, and a functional transaction endpoint is the winner. The person doesn’t choose. The system chooses for them.

The trajectory runs from oldest to newest: Resolution 1 was dominant up to late 2025, Resolution 2 is taking over, and Resolution 3 gained a lot of traction in early 2026. Stripe and Cloudflare are laying the transaction and identity rails. Visa and Mastercard are building the financial authorization infrastructure.

Anthropic’s MCP is providing the coordination layer. Google’s UCP and A2A are defining how agents communicate across the full consumer commerce journey. Apple has the closed-loop infrastructure to make it seamless on a billion devices the moment they choose to. 

Microsoft is locking in the enterprise and government layer through Copilot in a way that will be extremely difficult to displace. No single company turns Resolution 3 on — but all of them together make it inevitable.

Competitive escalation across the five ARGDW gates

The competitive intensity increases at every gate — a progressive narrowing, a Darwinian funnel where the field shrinks at each stage. The narrowing pattern is my model based on observed outcomes across our database. The underlying principle (competitive selection intensifies downstream) is structural to any sequential gating system.

Competitive narrowing
  • The field is large at annotation, where the algorithms create scorecards and your classification versus competitors’ determines downstream positioning.
  • Recruitment sets the qualifying round: multiple brands enter the system’s knowledge structures, but not all, and the selection criteria already favor multi-graph presence.
  • Grounding narrows the shortlist as confidence requirements tighten — the system verifies the candidates worth checking, not everyone.
  • Display reduces to finalists, often one primary recommendation with supporting alternatives.
  • Won is the binary outcome. The zero-sum moment you’re either welcoming with open arms or fearful of.

ARGDW: Relative tests. The scoreboard is on.

Five gates. Five relative tests. Competitive failures in ARGDW are significantly harder to diagnose than infrastructure failures in DSCRI because the fix is competitive positioning rather than technical.

  • Annotation failures mean the system misclassified what your content is or who it belongs to — write for entity clarity, structure claims with explicit evidence, and use schema markup to declare rather than expect the system to guess (see the markup example after this list).
  • Recruitment failures increasingly mean you’re present in one graph while competitors have two or three — build entity graph presence (structured data, knowledge panel, entity home), document graph presence (content quality, topical coverage), and concept graph presence (consistent publishing across authoritative platforms) as a coordinated program.
  • Grounding failures mean the system is verifying you on the high-fuzz path — provide structured entity data for low-fuzz verification, and MCP endpoints if you need real-time grounding without the search step.
  • Display failures mean the framing gap is costing you at the three layers of the visible gate — assuming you fixed all the upstream issues, then closing that framing gap at every UCD layer is your pathway to gain visibility in AI engines.
  • Won failures mean the resolution mechanism doesn’t exist — Resolution 1 requires that you rank (good enough up to 2024), Resolution 2 requires that you dominate your market (good enough in 2026), and Resolution 3 requires a mandate framework and action endpoint (needed for 2027 onward).
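To make the schema-markup point tangible, here is a minimal sketch of declaring the entity explicitly instead of leaving the system to guess: it builds a schema.org Organization object, with sameAs links that corroborate the entity across sources, and injects it as JSON-LD. Every name and URL in it is a placeholder, not a recommendation.

    // Minimal sketch: declare the entity explicitly with schema.org Organization markup.
    // All names and URLs below are illustrative placeholders.
    const organizationJsonLd = {
      "@context": "https://schema.org",
      "@type": "Organization",
      name: "Example Brand",
      url: "https://www.example.com/",
      logo: "https://www.example.com/logo.png",
      description: "Example Brand makes project-management software for small manufacturers.",
      sameAs: [
        // Corroborating profiles support entity resolution and raise confidence.
        "https://www.linkedin.com/company/example-brand",
        "https://www.crunchbase.com/organization/example-brand",
        "https://www.wikidata.org/wiki/Q00000000",
      ],
    };

    // Inject the declaration so crawlers can read it alongside the visible page content.
    const jsonLdScript = document.createElement("script");
    jsonLdScript.type = "application/ld+json";
    jsonLdScript.text = JSON.stringify(organizationJsonLd);
    document.head.appendChild(jsonLdScript);

The same declaration can just as easily be emitted server-side in the page HTML; the point is that the claim is explicit, machine-readable, and consistent with the visible content.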

After establishing the 10-gate AI engine pipeline, what’s next?

The aim of this series of articles is to give you the playbook for the DSCRI infrastructure phase and the strategy for the ARGDW competitive phase. This 10-gate AI engine pipeline breaks optimizing for assistive engines and agents into manageable chunks.

Each gate is manageable on its own. And the relative importance of each gate is now clear for you (I hope). In the remainder of this series of articles, I’ll provide solutions to the major issues at each gate that will help you manage each individually (and as part of the collective whole).

Aside: The feedback I have had from Microsoft on this series so far (thank you, Navah Hopkins) reminded me of something Chalmers said to me about Darwinism in Search back in 2020.

My explanations are often more absolute and mechanical than the reality. That’s a very fair point. But then reality is unmanageably nuanced, and nuance leads to a lack of clarity and often paralyzes people to the extent that they struggle to identify actionable next steps. I want to be useful.

I suggest we take this evolution from SEO to AAO step by step. Over the last 10+ years, I’ve always done my very best to avoid saying “it depends.”

People often say it takes 10,000 hours to become an expert. The framework presented here comes from tens of thousands of hours analyzing data, experimenting, working with the engineers who build these systems, and developing algorithms, infrastructure, and KPIs.

The aim is simple: reduce the number of frustrating “it depends” answers and provide a clear outline for identifying actionable next steps.

This is the fifth piece in my AI authority series. 

Why social search visibility is the next evolution of discoverability

While everyone focuses on AI search, the real opportunity may be social search

Search strategy once meant ranking on Google. We optimized websites and invested heavily in organic visibility. Entire marketing strategies were built around capturing demand from Google search results.

But search behavior doesn’t live on a single platform. Today, people search on TikTok for recommendations, YouTube for tutorials, Reddit for honest opinions, and Amazon for product validation.

Search behavior now spans a much wider set of platforms, creating one of the most overlooked opportunities in digital marketing.

Search behavior is diversifying

Recent research from SparkToro and Datos analyzed search behavior across 41 major platforms, including traditional search engines, ecommerce platforms, social networks, AI tools, and reference sites.

The findings reinforce something many marketers are beginning to notice. Search is no longer confined to traditional search engines.

While Google still dominates search activity, a growing share of discovery now happens across a wider collection of platforms — a search universe, if you will.

The research suggests search activity is roughly distributed as follows:

  • Traditional search engines: ~80% of searches, with Google alone at ~73.7%
  • Commerce platforms (Amazon, Walmart, eBay): ~10%
  • Social networks: ~5.5%
  • AI tools (ChatGPT, Claude, etc.): ~3.2%

Consumers search directly on platforms where they expect to find the most useful answers, in the formats they prefer, rather than relying on Google to send them elsewhere.

Dig deeper: Discoverability in 2026: How digital PR and social search work together

The industry is focused on AI and missing the bigger mainstream shift

Much of the search industry conversation today is focused on AI. Questions like:

  • How do I rank in ChatGPT?
  • How do I optimize for AI search?
  • Will AI replace Google?

They’re constantly being posed, debated, and answered by SEO professionals on platforms like Search Engine Land.

I want to be clear: these are important questions. But the data within this study tells a more grounded story, especially when thinking about strategy over the next 12 months.

AI search tools currently account for roughly 3.2% of search activity, per SparkToro research. That’s meaningful. It will almost certainly reshape how people search and discover information in the future.

But today, AI search is still smaller than many established discovery platforms with far broader adoption. For example:

  • Amazon receives more searches than ChatGPT.
  • YouTube receives more searches than ChatGPT.
  • Even Bing receives more search activity.

Yet many brands are pouring disproportionate attention into AI visibility while overlooking platforms where millions of searches are already happening every day.

Social platforms are now search engines

For many users, social platforms are now core search destinations. People look to:

  • TikTok for recommendations, restaurants, travel ideas, and products.
  • YouTube for tutorials, reviews, and problem-solving.
  • Reddit for honest discussions and community opinions.
  • Pinterest for inspiration and visual discovery.

Each platform plays a different role in the discovery journey.

What people search for, by platform:

  • TikTok/Instagram: Discovery and recommendations.
  • YouTube: Learning, tutorials, and reviews.
  • Reddit: Real opinions and community discussions.
  • Pinterest: Inspiration and planning.

These platforms are more than entertainment destinations. Users head to them with real intent to find a solution to a problem, need, or desire.

Social content is now appearing directly in Google results

As users adopt social platforms for search, Google has begun aggregating and organizing information right within its SERPs. So yes, social and creator content appears directly inside Google search results.

Over the past year, Google has significantly expanded how it surfaces social content within SERPs. Search results now frequently include TikTok videos, YouTube Shorts, Reddit threads, Instagram posts, and forum discussions.

Google even partnered with platforms like Reddit to ensure that community discussions appear more prominently in search results. This means social content can now influence discovery in multiple ways:

  • Direct searches on social platforms.
  • Visibility within Google search results.
  • Influence within AI-generated answers.

Dig deeper: Social and UGC: The trust engines powering search everywhere

Social content is also powering AI search

Social platforms are also important sources for AI-generated answers. AI systems rely on content that reflects real-world experiences, discussions, and opinions.

That’s why platforms such as Reddit, YouTube, Quora, forums, and creator-led content (i.e., Instagram, TikTok, and YouTube Shorts) are frequently cited in AI-generated responses.

Google’s AI Overviews often reference Reddit threads and YouTube videos.

Other AI tools regularly draw insights from community discussions, reviews, and creator content when generating answers.

This means content created for social discovery can influence visibility across multiple layers of search, including social platforms, Google search results, and AI-generated responses.

A single piece of content can now travel much further across this search universe, consistently putting signals in front of audiences and building preference for one brand over another.

The compounding discoverability effect

When brands invest in social search visibility, they unlock a powerful compounding effect. For example, a useful YouTube tutorial could:

  • Rank in YouTube search.
  • Appear in Google search results.
  • Be referenced in AI-generated answers.
  • Be shared across social platforms.
  • Spread through private messaging and dark social channels.

Unlike traditional website content, social content can move across platforms, dramatically expanding its reach. This creates an entirely new layer of discoverability.

And at a time when marketing budgets are under increasing scrutiny, the ability for content to generate visibility across multiple platforms makes the ROI of content strategies far more compelling.

Dig deeper: The social-to-search halo effect: Why social content drives branded search

Most brands still follow the old search playbook

Despite these shifts, most search strategies still revolve around Google SEO, paid search, website content, and AI/LLM interfaces.

Few brands have structured strategies for TikTok search optimization, YouTube search visibility, Reddit community engagement, and creator-led discovery strategies.

While Google SEO is incredibly competitive, social search remains relatively under-optimized. Brands that move early can capture visibility (presence) in spaces where demand already exists, thereby developing preference for their brand.

When brands invest in social search visibility, they aren’t just unlocking the 5.5% of searches happening directly on social platforms. They’re also influencing traditional search results, AI-generated answers, and wider discovery across the web.

Search everywhere: A new model for discoverability

Search is more than a channel. It’s a behavior that happens across a developing and evolving search universe.

Your audience searches wherever they believe they’ll find the best answer in the most useful format — whether that’s Google, TikTok, YouTube, Reddit, Amazon, Pinterest, or increasingly, AI interfaces.

Winning search today means being discoverable wherever those searches happen. The brands that win won’t be the ones that rank in just one place, even as traditional SEO remains an important part of the mix. They’ll be the ones that are discoverable wherever their audience searches.

That is the future of search. That is “search everywhere.”

Dig deeper: ‘Search everywhere’ doesn’t mean ‘be everywhere’

Google Ads Editor 2.12 adds creative control and campaign flexibility

Google is expanding capabilities in Google Ads Editor to give advertisers more creative flexibility, automation control, and budget precision — especially as AI-driven campaign types continue to evolve.

What’s new. The 2.12 release introduces a wide set of updates across Performance Max, Demand Gen, and video campaigns, with a clear focus on scaling creative assets and improving workflow efficiency.

Creative expansion. Performance Max campaigns now support up to 15 videos per asset group, allowing advertisers to feed more variations into Google’s AI for testing. The addition of 9:16 vertical images also reflects growing demand for mobile-first formats, particularly across surfaces like short-form video.

Campaign upgrades. Demand Gen campaigns get several enhancements, including new customer acquisition goals, brand guideline controls, and hotel feed integrations. A new minimum daily budget and a streamlined campaign build flow aim to improve stability and setup.

Video & AI control. Updates to non-skippable video formats and real-time bid guidance give advertisers more control over performance, while new text and brand guidelines help ensure AI-generated assets stay on-brand and compliant.

Budgeting shift. A new total campaign budget feature allows advertisers to set a fixed spend across a defined period — ideal for promotions or seasonal bursts — with Google automatically pacing delivery.

Workflow improvements. Account-level tracking templates, better visibility into Final URL expansion performance, clearer campaign status filters, and bulk link replacement tools are designed to reduce manual work and improve account management at scale.

Why we care. This update to Google Ads Editor gives advertisers more creative flexibility and control over AI-driven campaigns, especially in Performance Max and Demand Gen. Features like increased video limits, vertical assets, and total campaign budgets help you test more, scale faster, and manage spend more efficiently.

It also improves workflows and brand safeguards, making it easier to guide automation while maintaining consistency and performance across Google Ads.

Between the lines. The update continues a broader trend: as automation increases, Google is giving advertisers more ways to guide AI rather than manually control every input.

The bottom line. Google Ads Editor 2.12 is less about one standout feature and more about incremental gains across creative, automation, and control — helping advertisers better manage increasingly AI-driven campaigns within Google Ads.

How Google’s Universal Commerce Protocol could reshape search conversions

As Google rolls out AI Overviews, AI Mode in Search, and the Gemini ecosystem, we face a growing challenge: what happens when users get answers — and soon complete purchases — without leaving Google’s interfaces?

Enter Google’s Universal Commerce Protocol (UCP), now in beta.

UCP is designed to help brands sell to consumers without leaving the Gemini or LLM experience. Consumers can check out within the LLM, add rewards points, and fully execute the transaction.

How Google’s Universal Commerce Protocol works

At its core, UCP standardizes how consumer AI interfaces communicate with merchant checkout systems. When a user tells Gemini, “Find me a highly rated, waterproof hiking boot in size 10 under $200 and buy it,” UCP is the invisible bridge that allows the AI to securely fetch inventory, process the payment, and confirm the order.

While Google’s developer documentation leans into technical jargon like “Model Context Protocol (MCP)” and “Agent2Agent (A2A) interoperability,” the implications are remarkably straightforward:

  • It uses your existing feeds: UCP plugs directly into your existing Google Merchant Center (GMC) shopping feeds. The inventory data you’re already managing for your campaigns is the same data that will power these AI transactions.
  • You keep the data: Unlike selling on some third-party marketplaces, where you lose the customer relationship, UCP ensures you remain the merchant of record. You process the transaction, you own the first-party customer data, and you control the post-purchase experience.
  • Frictionless checkout: By enabling checkouts directly within Google’s AI ecosystem, UCP can reduce cart abandonment and increase conversion rates among high-intent shoppers.

Dig deeper: How Google’s Universal Commerce Protocol changes ecommerce SEO

Best practices for Google’s UCP

Like many LLM optimization recommendations, these steps come down to the fundamentals of managing your shopping feed and Merchant Center account.

Google outlined a few best practices. If you follow these four steps, you’ll be well-positioned for success.

1. Master your feed data hygiene

In an agentic commerce environment, your product feed is your primary sales tool. To ensure the AI accurately matches your products to highly specific user queries, you need to enrich your feed with granular details.

  • Write product titles that are 30 or more characters long.
  • Expand product descriptions to 500 or more characters.
  • Include Global Trade Item Numbers (GTINs), where relevant, to ensure accurate product matching.
  • Include three or more additional images alongside your primary product photo to engage visual shoppers.
  • Use lifestyle images, not just standard product shots on white backgrounds.
  • Ensure your image quality meets the standard of 1,500×1,500 pixels.
  • Categorize your inventory by product type and share key product highlights.
  • Prepare specific feed attributes required for UCP, such as returns, support information, and policy information.
  • Support Google’s Native Checkout when possible (checkout logic integrated directly into the AI interface). Google also offers another option called Embedded Checkout (an iframe-based solution for highly bespoke branding). This will work, but is suboptimal at this time.
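
To make this concrete, here’s a rough sketch of what an enriched product entry can look like in a standard Merchant Center feed (RSS 2.0 with the g: namespace). The attribute names are standard Shopping feed fields; the values and example.com URLs are purely illustrative, and UCP may require additional attributes beyond these, so treat this as a starting point rather than a complete spec:

<item>
<g:id>HB-1042</g:id>
<!-- title of 30+ characters, front-loaded with key product attributes -->
<g:title>Trailmaster Waterproof Leather Hiking Boot, Vibram Outsole</g:title>
<!-- in a real feed, aim for 500+ characters here -->
<g:description>Fully waterproof full-grain leather hiking boot with a Vibram outsole, gusseted tongue, and removable insole.</g:description>
<g:gtin>00012345678905</g:gtin>
<g:link>https://www.example.com/products/trailmaster-boot</g:link>
<!-- 1,500x1,500px images: one primary plus three or more additional, including lifestyle shots -->
<g:image_link>https://www.example.com/images/trailmaster-lifestyle.jpg</g:image_link>
<g:additional_image_link>https://www.example.com/images/trailmaster-side.jpg</g:additional_image_link>
<g:additional_image_link>https://www.example.com/images/trailmaster-sole.jpg</g:additional_image_link>
<g:additional_image_link>https://www.example.com/images/trailmaster-on-trail.jpg</g:additional_image_link>
<g:product_type>Footwear > Hiking > Boots</g:product_type>
<g:product_highlight>Waterproof full-grain leather upper</g:product_highlight>
<g:price>189.99 USD</g:price>
<g:availability>in_stock</g:availability>
</item>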

Dig deeper: Google publishes Universal Commerce Protocol help page

2. Highlight convenience and trust signals

To set your brand apart when AI is helping consumers make immediate, confident purchasing decisions, you must pass trust and convenience signals directly through your feed. The data shows that these elements directly impact the bottom line:

  • Indicate clearly if your brand offers free shipping.
  • Share your shipping speed (next day, two-day, etc.).
  • Display your return policy.
  • Submit sale prices when available. Regardless, ensure the feed represents the most accurate pricing details.
  • Include product ratings.
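
In feed terms, most of these signals map to standard Merchant Center attributes. A minimal sketch for a single item (values are illustrative; product ratings are typically supplied through a separate ratings feed, and return policies are usually configured at the account level rather than per item):

<item>
<!-- core product attributes as in the earlier example, plus: -->
<g:price>189.99 USD</g:price>
<g:sale_price>159.99 USD</g:sale_price>
<g:shipping>
<g:country>US</g:country>
<g:service>Standard (free)</g:service>
<g:price>0.00 USD</g:price>
<!-- shipping speed can be expressed via the handling- and transit-time sub-attributes -->
</g:shipping>
</item>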


3. Upgrade your technical infrastructure and SEO

The shift to UCP requires foundational updates to how your backend systems interact with Google. You must work hand in hand with your development and SEO teams to prepare for these AI search experiences.

  • Migrate from the Content API to the Merchant API to enable real-time inventory updates and programmatic access to data and insights.
  • Upgrade your tag in Data Manager and implement Conversion with Cart Data to effectively use first-party data in your campaigns.
  • Prioritize content-rich pages for indexing and crawling, and ensure structured data is always supported by visible content.
  • Create your Business Profile and claim your Brand Profile to highlight your business information and brand voice on Google platforms.
  • Have your development team explore and prototype with UCP open source on GitHub to map APIs for checkout, session creation, and order management.

4. Additional features and tools beyond UCP to consider

Google is actively rolling out pilot programs designed specifically for the agentic era. Be proactive in adopting these new solutions rather than waiting for wide release:

  • Prepare for the “Business Agent,” a virtual sales associate that acts like a brand representative to answer product questions right on Google.
  • Consider the “Direct Offers Pilot,” a new way for advertisers to present exclusive discounts directly in AI Mode.
  • Inquire about the “Conversational Attributes Pilot,” which introduces dozens of new Merchant Center attributes designed to enhance discovery in the conversational commerce era.

Dig deeper: Are we ready for the agentic web?

The future of search will happen within LLMs

The launch of Google’s Universal Commerce Protocol signals a significant shift. The SERP is becoming a transactional engine that increasingly operates within large language models.

UCP presents a meaningful opportunity: by removing friction between discovery and purchase, it could lift conversion rates.

However, taking advantage of this requires stepping outside the Google Ads interface and working directly in your feed data and technical integrations, much like with Google Shopping. While this isn’t new, it’s becoming more important.

Ultimately, this comes down to the quality of your product data.

Read more at Read More

Rethinking SEO in the age of AI

For years, SEO followed a fairly predictable playbook: create valuable content, optimize it for search engines, and compete for rankings on Google. But the way people discover information online is changing quickly. Tools like ChatGPT, Perplexity, and Gemini are introducing a new layer between users and search engines, where answers are generated and synthesized rather than simply retrieved.

In a recent episode of the Get Discovered podcast, Joe Walsh, CEO of Prerender.io, sat down with Yoast’s Principal Architect Alain Schlesser to discuss what this shift means for SEO and online discoverability. Their conversation explores how AI answer engines are reshaping the search landscape and why many traditional SEO assumptions no longer fully apply.

Alain shares insights on:

  • How AI systems retrieve and surface information
  • Why brands must rethink their online positioning, and
  • What businesses should start preparing for as AI-driven discovery evolves over the next 12–18 months

The new discovery layer: AI is becoming the gatekeeper

“There’s now a layer in front of search that acts as a gatekeeper before you even hit those search engines.”

AI adds a new layer to the information discovery process for searchers

That’s how Alain describes one of the biggest structural shifts happening in online discovery today. For years, the flow of search was straightforward: a user typed a search term into a search engine, the engine returned a list of results, and the user decided which link to click.

But AI-powered systems have added a new layer to that process.

From search queries to conversational discovery

Today, many users begin their search journey by asking questions in tools like ChatGPT, Perplexity, or Gemini instead of typing traditional keyword queries. The AI system then determines whether it needs external information and may generate multiple search queries behind the scenes to retrieve relevant sources.

The discovery flow now looks something like this:

The traditional vs the new agentic search

Previously:

User → Search engine → Website

Now:

User → AI model → Search engine → Website → AI synthesis → User

Instead of presenting a list of links, the AI model interprets and combines information before generating an answer. Alain explains this process in more detail in the podcast, highlighting how AI systems now act as a filtering layer between users and the web.

Search is fragmenting beyond Google

“We were in a rather comfortable position where we were only dealing with a monopoly search.”

For much of the past two decades, SEO largely meant optimizing for one ecosystem: Google. Even though other search engines existed, Google dominated how people discovered information online.

But that environment is changing.

As Alain explains, AI systems are introducing a new layer of fragmentation in discovery. Different AI platforms rely on different combinations of search engines, indexes, and training data, which means results can vary widely between them.

In practice, that means a brand might appear prominently in one AI system while barely showing up in another. For SEO teams, this marks a shift toward thinking about visibility across multiple AI-driven environments rather than just one search engine.

Check out: Why does having insights across multiple LLMs matter for brand visibility?

What hasn’t changed: The fundamentals of SEO

Despite technological changes, Alain emphasizes that the core principles of good SEO remain intact.

“You shouldn’t try to game the search engine. You need to create valuable content that humans actually want to read, and structure it so search engines can understand it.”

At its core, search still aims to deliver the best possible answers to users. Whether the request comes from a person typing a query or an AI model generating one behind the scenes, the goal remains the same: surface useful, reliable information.

That means SEO teams should continue focusing on fundamentals such as:

  • valuable content that people actually want to read
  • clear structure that search engines (and the AI systems built on them) can understand
  • pages that are discoverable, crawlable, and indexable

AI systems may change how information is surfaced, but they still rely on the same underlying signals of quality and relevance.

The “top results or nothing” reality

As the discovery landscape evolves, another important shift emerges in how AI systems interact with search results.

“They don’t see the full search result page. What the LLM typically sees is just the five topmost elements per search query.”

Unlike human users, AI systems typically work with a very small set of retrieved sources before generating an answer. That means if your content doesn’t appear among those top results, it may never reach the AI system at all.

In a world where AI answers rely on the summarization of modern content, only the sources that make it into that small retrieval window influence the final response.

This makes strong search visibility more important than ever. Ranking well isn’t just about earning clicks anymore. It determines whether your content is even considered when AI systems construct an answer.

Why “safe” content strategies are no longer enough

Even if your content reaches those top results, there’s another layer of filtering happening inside the AI model itself.

Large language models compress enormous amounts of information during training. As Alain explains:

What the model keeps are the dominant signals and the outliers. Everything in between is often compressed away as statistical noise.

In the podcast, Alain uses this idea to explain why brands that try to be broadly acceptable or “safe” may struggle to stand out in AI-driven discovery.

The takeaway is clear: in a world where AI systems summarize and compress information, having a clear and distinctive perspective becomes increasingly important.

Why Yoast launched AI visibility tracking

As AI systems reshape how information is discovered and summarized, a new challenge emerges for businesses: understanding how their brand appears in AI-generated answers. That’s the problem Yoast set out to address with Yoast SEO AI +, a feature designed to help businesses monitor how their brand shows up across major AI platforms.

Earlier in this article, we explored how AI systems now sit between users and search engines, retrieve only a small set of results, and synthesize answers through the summarization of modern content. Together, these changes create a new discovery layer that is far less transparent than traditional search.

As Alain explains in the podcast:

“We need more visibility and observability into that AI-based layer to figure out what is going on there. Right now, it’s mostly a black box.”

Unlike traditional search engines, AI systems don’t provide clear rankings, impressions, or click data that explain why a source was selected. Instead, answers are generated from a mix of retrieved content, training data, and model reasoning. For businesses, that makes it much harder to understand whether their brand is visible in AI-driven discovery.

This is where AI visibility tracking becomes valuable. Rather than focusing only on search rankings, teams also need insight into how their brand is represented inside AI responses.

Yoast SEO AI + helps surface that layer by allowing teams to observe how their brand appears across AI systems, such as ChatGPT, Perplexity, and Gemini.

Must read: What is ChatGPT Search (and how does it use Bing data)?

The goal is not simply to track another metric. It’s to help businesses understand how AI systems interpret and represent their brand.

As Alain notes, visibility in AI systems can vary significantly depending on the platform, because each one relies on different combinations of:

  • search engines
  • indexes
  • training datasets

This means a brand might appear frequently in one AI system while barely showing up in another. Without visibility into those differences, it becomes difficult for teams to understand how their content performs in the new discovery landscape.

In that sense, tools like Yoast SEO AI + are less about selling a new SEO feature and more about helping businesses observe a rapidly changing ecosystem where discoverability no longer happens only in search results.

The next evolution: AI agents making decisions

“What we will increasingly see is automated transactions where AI agents navigate websites and initiate actions on behalf of users.”

So far, much of the discussion around AI and search has focused on how answers are generated. But according to Alain, the next phase of this evolution may go further.

Over the next 12–18 months, AI systems may begin moving beyond answering questions and start performing tasks on behalf of users. Instead of guiding someone toward a website to make a decision, AI agents could increasingly compare options, interact with websites, and complete actions automatically.

If that shift happens, the traditional customer journey could change significantly. Alain shares a fascinating perspective on what this might mean for businesses in the coming years in the full podcast conversation.

SEO matters more than ever

AI isn’t replacing SEO. If anything, it’s reinforcing why good SEO matters in the first place. What’s changing is the path between users and content. Instead of navigating search results themselves, users increasingly receive answers that AI systems retrieve, interpret, and synthesize.

That makes strong fundamentals more important than ever. Businesses still need to focus on:

  • valuable content
  • clear structure
  • discoverable and indexable pages
  • a distinctive brand identity

But the central question for SEO is evolving. It’s no longer just:

“Can Google find my website?”

It’s now:

“Does the AI have a reason to remember my brand?”

For more insights from Alain Schlesser on how AI is reshaping SEO, watch the full Get Discovered podcast episode.

The post Rethinking SEO in the age of AI appeared first on Yoast.

Read more at Read More

How To Fix “Discovered ‐ Currently Not Indexed” in Google Search Console

“Discovered – currently not indexed” in Google Search Console means Google knows the URL exists but has not crawled it […]

The post How To Fix “Discovered ‐ Currently Not Indexed” in Google Search Console appeared first on Onely.

Read more at Read More

What is an XML sitemap and why should you have one?

A good XML sitemap serves as a roadmap for your website, guiding Google to all your important pages. XML sitemaps can be beneficial for SEO, helping Google find your essential pages quickly, even if your internal linking isn’t perfect. This post explains what they are and how they help you rank better and get surfaced by AI agents.

Key takeaways

  • An XML sitemap is crucial for SEO, as it guides search engines to your important pages, improving crawl efficiency
  • XML sitemaps list essential URLs and provide metadata, helping search engines understand content and prioritize crawling
  • With Yoast SEO, you can automatically generate and manage XML sitemaps, keeping them up to date
  • XML sitemaps support faster indexing of new content and help discover orphan pages that aren’t linked elsewhere
  • Add your XML sitemap to Google Search Console to help Google find it quickly and monitor indexing status

What are XML sitemaps?

An XML sitemap is a file that lists a website’s essential pages, ensuring Google can find and crawl them. It also helps search engines understand your website structure and prioritize important content.

💡 Fun fact:

XML is not the only type of sitemap; there are several sitemap formats, each serving a slightly different purpose:

  • RSS, mRSS, and Atom 1.0 feeds: These are typically used for content that changes frequently, such as blogs or news sites. They automatically highlight recently updated content
  • Text sitemaps: The simplest format. These contain a plain list of URLs, one per line, without additional metadata

There are also HTML sitemaps, which are created for visitors, not search engines. They list and link to important pages in a clear, hierarchical structure to improve user navigation. An XML sitemap, however, is specifically designed for search engines.

XML sitemaps include additional metadata about each URL, helping search engines better understand your content. For example, it can indicate:

  • When a page was last meaningfully updated
  • How important a URL is relative to other URLs
  • Whether the page includes images or videos, using sitemap extensions

Search engines use this information to crawl your site more intelligently and efficiently, especially if your website is large, new, or has complex navigation.

Looking to expand your knowledge of technical SEO? We have a course in the Yoast SEO Academy focusing on crawlability and indexability. One of the topics we tackle is how to use XML sitemaps properly.

What does an XML sitemap look like?

An XML sitemap follows a standardized format. It is a text file written in Extensible Markup Language (XML) that search engines can easily read and process. As it follows a structured format, search engines like Google can quickly understand which URLs exist on your website and when they were last updated.

Here is a very simple example of an XML sitemap that contains a single URL:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url>
<loc>https://www.yoast.com/wordpress-seo/</loc>
<lastmod>2024-01-01</lastmod>
</url>
</urlset>

Each URL in a sitemap is wrapped in specific XML tags that provide information about that page. Some of these tags are required, while others are optional but helpful for search engines.

Below is a breakdown of the most common XML sitemap tags:

Tag Requirement Description
<?xml> Mandatory Declares the XML version and character encoding used in the file.
<urlset> Mandatory The container for the entire sitemap. It defines the sitemap protocol and holds all listed URLs.
<url> Mandatory Represents a single URL entry in the sitemap. Each page must be enclosed within its own <url> tag.
<loc> Mandatory Specifies the full canonical URL of the page you want search engines to crawl and index.
<lastmod> Optional Indicates the date when the page was last meaningfully updated, helping search engines know when to re-crawl the page.
<changefreq> Optional Suggests how frequently the content on the page is expected to change, such as daily, weekly, or monthly.
<priority> Optional Suggests the relative importance of a page compared to other pages on the same site, using a scale from 0.0 to 1.0.

Note: While the sitemaps.org protocol supports optional tags like <changefreq> and <priority>, Google and Bing generally ignore them; Google has officially deprecated them. Instead, it relies on <lastmod> to signal when content was last meaningfully updated.

What is an XML sitemap index?

A sitemap index is a file that lists multiple XML sitemap files. Instead of containing individual page URLs, it acts as a directory that points search engines to several separate sitemaps.

This becomes useful when a website has a large number of URLs or when the site owner wants to organize sitemaps by content type. For example, a site may have separate sitemaps for pages, blog posts, products, or categories.

Here’s a breakdown of how XML sitemap and XML sitemap index differ:

Feature XML Sitemap XML Sitemap Index
Purpose Lists individual URLs on a website Lists multiple sitemap files
Content Contains page URLs and optional metadata Contains links to sitemap files
Use case Suitable for small or medium-sized sites Useful when a site has multiple sitemaps
Structure Uses <urlset> and <url> tags Uses <sitemapindex> and <sitemap> tags.

Search engines impose limits on sitemaps. A single sitemap can contain up to 50,000 URLs or be up to 50 MB in size. If your website exceeds these limits, you can create multiple sitemaps and group them together using a sitemap index.

Submitting a sitemap index to search engines allows them to discover and process all your sitemaps from a single file.

In short, an XML sitemap helps search engines discover pages, while a sitemap index helps search engines discover multiple sitemaps.

Below is a simple example of what a sitemap index file looks like:

<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"> 
<sitemap> 
<loc>https://www.example.com/sitemap-pages.xml</loc> 
<lastmod>2025-12-11</lastmod> 
</sitemap> 
<sitemap> 
<loc>https://www.example.com/sitemap-products.xml</loc> 
<lastmod>2025-12-11</lastmod> 
</sitemap> 
</sitemapindex> 

In this example, the sitemap index references two separate sitemaps. Each one can contain thousands of URLs. This structure helps search engines efficiently discover and crawl large websites.

Why do you need an XML sitemap?

Technically, you don’t need an XML sitemap. Search engines can often discover your pages through internal links and backlinks from other websites. However, having an XML sitemap is highly recommended because it helps search engines crawl and understand your site more efficiently.

Here are some key benefits of using an XML sitemap:

Improved crawl efficiency

Sitemaps help search engines like Google and Bing crawl large or complex websites more efficiently. By listing your important URLs in one place, you make it easier for crawlers to find and prioritize valuable pages.

Faster indexing of new content

When you update or add new pages to your site, including them in your sitemap helps search engines discover them sooner. This can lead to faster indexing, especially for websites that publish content frequently, such as blogs, news sites, or e-commerce stores with changing product listings.

Discovery of orphan pages

Orphan pages are pages that are not linked from other parts of your website. Because crawlers typically follow links to discover content, these pages can sometimes be missed. An XML sitemap can help ensure these pages are still discovered.

Additional metadata signals

XML sitemaps can include additional metadata about each URL, such as the <lastmod> tag. This information helps search engines understand when a page was last updated and whether it may need to be crawled again.

Support for specialized content

Sitemaps can also be extended to include specific types of content, such as images or videos. These specialized sitemaps help search engines better understand and surface media content in results like Google Images or video search.
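
For example, an image sitemap adds the image namespace to the urlset and nests one or more image entries inside each URL. A minimal sketch (URLs are illustrative):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
<url>
<loc>https://www.example.com/hiking-boots/</loc>
<image:image>
<image:loc>https://www.example.com/images/trailmaster-boot.jpg</image:loc>
</image:image>
</url>
</urlset>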

Better understanding of site structure

A well-organized sitemap gives search engines a clearer overview of your website’s structure and the relationship between different sections or content types.

Indexing insights through Search Console

When you submit your sitemap to tools like Google Search Console, you can monitor how many URLs are discovered and indexed. This also helps you identify crawl issues or indexing errors.

Support for multilingual websites

For websites targeting multiple languages or regions, XML sitemaps can include alternate language versions of pages using hreflang annotations. This helps search engines serve the correct language version to users in different locations.
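
In practice, those hreflang annotations are added with the xhtml namespace, and each URL lists all of its language alternates, including itself. A minimal sketch (URLs are illustrative):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
<url>
<loc>https://www.example.com/en/hiking-boots/</loc>
<xhtml:link rel="alternate" hreflang="en" href="https://www.example.com/en/hiking-boots/"/>
<xhtml:link rel="alternate" hreflang="de" href="https://www.example.com/de/wanderschuhe/"/>
</url>
</urlset>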

Do XML sitemaps matter for AI search?

Yes, but indirectly. AI-powered search experiences like AI Overviews or Bing Copilot still rely on the traditional search index to discover and retrieve content. That means your pages usually need to be crawled and indexed first before they can appear in AI-generated answers.

This is where XML sitemaps still help. By listing your important URLs in one place, a sitemap makes it easier for search engines to discover and index your content. Keeping the <lastmod> value accurate can also help search engines prioritize recently updated pages, which is especially useful for AI systems that aim to surface fresh information.

In short, a sitemap won’t make your content appear in AI answers by itself. But it helps ensure your pages are discoverable, indexed, and up to date, which increases their chances of being used in AI-powered search results.

Adding XML sitemaps to your site with Yoast

Because XML sitemaps play an important role in helping search engines discover and crawl your content, Yoast SEO automatically generates XML sitemaps for your website. This feature is available in both the free and premium versions (Yoast SEO Premium, Yoast WooCommerce SEO, and Yoast SEO AI+) of the plugin.


Instead of requiring you to manually create or maintain sitemap files, Yoast SEO handles everything automatically. As you publish, update, or remove content, the plugin updates your sitemap index and the individual sitemaps in real time. This ensures search engines always have an up-to-date overview of the pages you want them to crawl and index.

Yoast SEO also organizes your sitemaps intelligently. Rather than placing every URL in a single file, the plugin creates a sitemap index that groups separate sitemaps by content type, such as posts, pages, and other public content, with just one click.
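
On a typical WordPress site running Yoast SEO, that index lives at /sitemap_index.xml and looks roughly like this (the exact sitemaps listed depend on your content types and settings; URLs and dates are illustrative):

<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<sitemap>
<loc>https://www.example.com/post-sitemap.xml</loc>
<lastmod>2025-12-11</lastmod>
</sitemap>
<sitemap>
<loc>https://www.example.com/page-sitemap.xml</loc>
<lastmod>2025-12-11</lastmod>
</sitemap>
<sitemap>
<loc>https://www.example.com/category-sitemap.xml</loc>
<lastmod>2025-12-11</lastmod>
</sitemap>
</sitemapindex>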

Read more: XML sitemaps in the Yoast SEO plugin

Screenshot: enabling XML sitemap generation in Yoast SEO

Another important advantage is that Yoast SEO only includes content that should actually appear in search results. Pages set to noindex are automatically excluded from the XML sitemap. This helps keep your sitemap clean and focused on the URLs that matter for SEO.

Controlling what appears in your sitemap

While the plugin automatically manages sitemaps, you still have full control over which content is included.

For example, if you don’t want a specific post or page to appear in search results, you can change the setting “Allow search engines to show this content in search results?” in the Yoast SEO sidebar under the Advanced tab. When this option is set to No, the content will be marked as noindex and automatically excluded from the XML sitemap. When set to Yes, the content remains eligible to appear in search results and is included in the sitemap.

This makes it easy to keep your sitemap focused on the pages you actually want search engines to crawl and index. In some cases, developers can further customize sitemap behavior. For example, filters can be used to limit the number of URLs per sitemap or to programmatically exclude certain content types.

Because all of this happens automatically, most website owners never need to manage sitemap files manually. Yoast SEO keeps your XML sitemap clean, up to date, and optimized for search engines as your site grows.

Read more: How to exclude content from the sitemap

Make Google find your sitemap

If you want Google to find your XML sitemap quicker, you’ll need to add it to your Google Search Console account. You can see your submitted sitemaps in the ‘Sitemaps’ section; if yours isn’t listed yet, you can submit it at the top of that page.

Adding your sitemap lets you check whether Google has indexed all the pages in it. We recommend investigating further if there is a significant difference between the ‘submitted’ and ‘indexed’ counts for a particular sitemap: perhaps an error is preventing some pages from being indexed. Another option is to add more internal links pointing to content that hasn’t been indexed yet.

Screenshot: Google correctly processed all URLs in a post sitemap

What websites need an XML sitemap?

Google’s documentation says sitemaps are beneficial for “really large websites,” “websites with large archives,” “new websites with just a few external links to them,” and “websites which use rich media content.” According to Google, proper internal linking should allow it to find all your content easily. Unfortunately, many sites don’t link their content together logically.

While we agree that these websites will benefit the most from having one, at Yoast, we think XML sitemaps benefit every website. As the web grows, it’s getting harder and harder to index sites properly, so you should give search engines every available option to find your content. In addition, XML sitemaps make search engine crawling more efficient.

Every website needs Google to find essential pages easily and know when they were last updated. That’s why this feature is included in the Yoast SEO plugin.

Which pages should be in your XML sitemap?

How do you decide which pages to include in your XML sitemap? Always start by thinking of the relevance of a URL: when a visitor lands on a particular URL, is it a good result? Do you want visitors to land on that URL? If not, it probably shouldn’t be in your sitemap. However, if you don’t want that URL to appear in the search results, you must also add a ‘noindex’ tag. Leaving it out of your sitemap doesn’t mean Google won’t index the URL: if Google can find it by following links, Google can index the URL.
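
For reference, that ‘noindex’ signal is usually a robots meta tag in the page’s head, which Yoast SEO outputs for you when you set a page not to show in search results. A minimal example:

<meta name="robots" content="noindex" />

The same directive can also be sent as an X-Robots-Tag HTTP header for non-HTML files.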

Example: A new blog

For example, you are starting a new blog. Of course, you want to ensure your target audience can find your blog posts in the search results. So, it’s a good idea to immediately include your posts in your XML sitemap. It’s safe to assume that most of your pages will also be relevant results for your visitors. However, a thank you page that people will see after they’ve subscribed to your newsletter is not something you want to appear in the search results. In this case, you don’t want to exclude all pages from your sitemap, only this one.

Let’s stay with the example of the new blog. In addition to your blog posts, you create some categories and tags. These categories and tags will have archive pages that list all posts in that specific category or tag. However, initially, there might not be enough content to fill these archive pages, making them ‘thin content’.

For example, tag archives that show just one post are not that valuable to visitors yet. You can exclude them from the sitemap when starting your blog and include them once you have enough posts. You can even exclude all your tag pages or category pages simultaneously using Yoast SEO.

However, this kind of page could also be excellent ranking material. So, if you think: well, yes, this tag page is a bit ‘thin’ right now, but it could be a great landing page, then enrich it with additional information and images. And don’t exclude it from your sitemap in this case.

Frequently asked questions about XML sitemaps

There are a lot of questions regarding XML sitemaps, so we’ve answered a couple in the FAQ below:

What happens when Google Search Console says an XML sitemap has errors?

An invalid or improperly read XML sitemap usually indicates a specific error that needs investigation. Check the reported issue to understand what is causing the problem. Make sure the sitemap has been submitted through the search engine’s webmaster tools. When the sitemap is marked as invalid, review the listed errors and apply the appropriate fixes for each one.

How can I check whether a website has an XML sitemap?

In most cases, you can find out whether a site has an XML sitemap by appending sitemap.xml to the root domain, for example example.com/sitemap.xml. If a site has Yoast SEO installed, you’ll notice that it redirects to example.com/sitemap_index.xml; sitemap_index.xml is the base sitemap that collects all the sitemaps on your site into a single file.

How can I update an XML sitemap?

There are ways to create and update your sitemaps by hand, but you shouldn’t. There are also static generators that let you produce a sitemap on demand, but you’d need to repeat that process every time you add or update content. The best way is to simply use Yoast SEO: turn on the XML sitemap feature, and all your updates will be applied automatically.

Can I use <priority> in my XML sitemap?

In the past, people believed that adding the <priority> attribute to sitemaps would signal to Google that specific URLs should be prioritized. Unfortunately, it doesn’t do anything: Google has repeatedly said it doesn’t use this attribute to prioritize content in sitemaps.

Check your own XML sitemap!

Now you know how important it is to have an XML sitemap: it can help your site’s SEO. If you add the correct URLs, Google can easily access your most important pages and posts. Google will also find updated content easily, so it knows when a URL needs to be crawled again. Lastly, adding your XML sitemap to Google Search Console helps Google find it quickly and lets you check for sitemap errors.

So check your XML sitemap and find out if you’re doing it right!

The post What is an XML sitemap and why should you have one? appeared first on Yoast.

Read more at Read More