New: Yoast Duplicate Post 4.6

Version 4.6 of Yoast Duplicate Post is here, and it’s all about making your editing experience feel more natural in WordPress’s Block Editor, and making sure “Rewrite & Republish” works reliably every time you need it.

A more modern editing experience

Everything where you’d expect it. The Duplicate Post controls now sit in the Block Editor’s sidebar, right alongside WordPress’s own settings, so there’s no more hunting around. If you’re still on the Classic Editor, nothing changes for you.

Buttons that look the part. The “Copy to a new draft” and “Rewrite & Republish” actions are now proper bordered buttons, consistent with the rest of the WordPress interface. Cleaner, clearer, and easier to use.

Built for the future. Under the hood improvements ensure Duplicate Post stays stable and compatible as WordPress continues to evolve, so you don’t have to think about it.


Yoast Duplicate Post has always been about reliability. While the plugin has served millions of you faithfully since our last release, we’re excited to bring you version 4.6. This update is packed with long-awaited fixes and thoughtful interface refinements that ensure the plugin stays modern, stable, and ready for the future of WordPress.

Enrico Battocchi – Plugin team lead and creator of Duplicate Post


More reliable “Rewrite & Republish” workflows

Your posts won’t get stuck. If something goes wrong mid-process, like a redirect being interrupted, the plugin now handles it gracefully and cleans up automatically. Your content will never be left in a stuck state.

Attachments copied completely. All attachment metadata, including captions and descriptions, is fully preserved when you duplicate a post. Nothing gets left behind.

International & security improvements

The right words, in your language. Buttons and notices in the Block Editor are now correctly translated across all languages, with none of the behind-the-scenes errors that some locales were seeing.

Consistent styling, always. Buttons display correctly regardless of your admin configuration, including when the WordPress admin bar is turned off.

Version 4.6 is available now. As always, we recommend testing in a staging environment before updating your live site.

The post New: Yoast Duplicate Post 4.6 appeared first on Yoast.


Top 10 Best AI SEO Agencies in 2026

Your search rankings may look fine, but your brand is invisible where buying decisions now begin. When prospects ask ChatGPT, […]

The post Top 10 Best AI SEO Agencies in 2026 appeared first on Onely.


10 Best Technical SEO Agencies To Improve Your Site in 2026

A Reddit thread on r/SEO with 246 upvotes titled “Started working in an Agency and I’m stunned by the lack of knowledge” […]

The post 10 Best Technical SEO Agencies To Improve Your Site in 2026 appeared first on Onely.


Fintech in AI Search: How to Be the Trusted & Featured Brand

Fintech in AI search plays by much stricter rules.

Because it’s a Your Money or Your Life category, products must clear higher verification thresholds before AI mentions you:

  • Is your product legitimate?
  • Are your fees and protections explicit?
  • Do other trusted sources back up your claims?

To find these answers, AI draws from your website and the wider web — including sources you don’t control.

That risks misrepresentation of your brand.

What matters, then, is whether those sources tell an accurate story.

In this article, I’ll explain how to influence that narrative.

Because the real goal is for your fintech brand to show up in AI search AND be represented accurately.

3 Types of AI Visibility in Fintech

Your fintech brand can appear in AI search in three ways.

Your goal is to show up in all three.

  • Mentioned when AI explains topics in your category
  • Cited and linked within the answers
  • Recommended as part of a shortlist of products

Fintech AI Visibility

Brand Mentions

Brand mentions are when AI systems include your name in an answer.

They’re great for brand awareness.

These references put your name in front of buyers even when they’re not seeking you out.

For example, I asked ChatGPT:

“Are buy now, pay later providers ideal for my business?”

It mentioned several BNPL platforms in its response.

ChatGPT – BNPL providers

This suggests the AI recognizes those brands as part of the category and relevant in the space.

You’ll often see mentions as:

  • Lists embedded inside explanations: “Popular BNPL providers include X, Y, Z…”
  • Examples supporting a point: “Some neobanks, like X and Y, offer…”
  • Context for user stories: “Many users switch from traditional banks to apps like X…”

A mention isn’t a recommendation, but it still matters.

Mentions often appear in non-brand queries. That’s when users begin exploring their options.

And if they see your brand mentioned often, it builds familiarity.

(Also known as the mere exposure effect.)

Exposure in AI Search

So, by the time a user reaches the decision stage, they’re more likely to recognize your name.

But sometimes that isn’t enough.

That’s why having a positive brand sentiment is vital.

The way AI mentions your brand can shape buyer perception.

If it often frames your product as “known for strong security,” that idea sticks.

But if the AI always pairs your name with warnings like “high fees” or “frequent outages,” it can raise doubts.

Citations

Citations are when the AI uses your pages to support its answer.

They’re valuable for boosting credibility and consumer trust.

When AI uses your content as a source, there’s an implied endorsement. It references you because you’re trustworthy.

And when you’re consistently cited, your brand becomes associated with expertise in the topic.

Citations may appear differently across platforms and prompts.

Sometimes they appear as footnotes and/or as inline links.

ChatGPT – Citations

You might also see a sidebar or expandable panel with grouped sources.

Google AI Mode – Sources

Other times, citations appear as thumbnails somewhere in the AI’s response.

ChatGPT – Thumbnail citations

Regardless of format, the principle is the same:

When the AI cites your documentation, it signals that your content is being treated as a reliable source.

It’s pulling information from your pages to build its answer.

And this allows you to influence how the AI explains your product.

For example, I asked ChatGPT:

“What reporting and analytics does Klarna offer brands after implementation?”

ChatGPT – Klarna reporting analytics

Many of the citations came from Klarna’s documentation.

This implies that Klarna has some level of influence over the AI’s answer.

ChatGPT – Klarna citations

There’s a caveat with citations, however.

LLMs might link to your site, but that doesn’t always mean more traffic.

Citations are less visible than brand mentions or recommendations.

I rarely click them myself.

If I need more detail, I’ll usually continue the conversation within the platform. Or switch to Google search.

That’s likely true for many users.

Still, citations signal that AI systems trust your documentation.

And that trust enables product recommendations, which we’ll cover next.

Product Recommendations

Product recommendations are when AI includes your brand or product in a shortlist.

They’re the most impactful type of AI visibility because they influence which brands users consider.

And ultimately, which product they choose.

Here’s what recommendations look like in ChatGPT and Google AI Mode.

I asked:

“Which BNPL platform is good for mid-size ecommerce brands?”

Both listed Klarna as one of the top options.

ChatGPT & Google AI recommend Klarna

This places Klarna front and center as buyers narrow their options.

Showing up in high-intent queries is vital for recommendations.

These are prompts that include “top,” “best,” “compare,” or “alternative.”

Such as:

  • What are good alternatives to X for a small business
  • What are the best budgeting apps for freelancers
  • List the best neobanks with high-yield savings

ChatGPT – Best budgeting apps

But showing up in these queries isn’t automatic.

AI systems use specific signals to decide which brands to recommend.

How LLMs Choose Which Fintech Brands to Feature

AI acts as a filter between buyers and brands.

So how do these systems decide which brands and products to recommend?

From what we can observe, it comes down to two signals: consensus and consistency.

Consensus vs Consistency

Consensus

Consensus is when multiple reputable sources mention your brand and product.

AI surfaces brands that have this kind of social proof — it suggests that you’re real, trustworthy, and worth recommending.

The stronger the consensus, the more confident AI is in featuring you.

But this cuts both ways.

If sources consistently highlight negatives, AI may repeat those warnings instead.

In fintech, AI systems likely assess consensus from several sources, including:

  • Partner-bank and infrastructure disclosures
  • Regulatory databases
  • Personal finance publishers
  • Finance communities and review platforms
  • Partner sites
  • Technical and investor communities

Fintech Brand Consensus

So, a big part of AI optimization is showing up in the sources LLMs use to form consensus.

The easiest way to identify those sources is to run brand-related prompts.

Example: “Best banks for international transfers.”

Then, check which sites appear in the citations.

ChatGPT – Best banks – Sources

Those are the sources the AI model trusts.

When these sites and reviews talk about your brand, it increases your chances of being mentioned by AI.

Further reading: Read our search everywhere optimization guide for tips on building a positive brand reputation across platforms.


Consistency

It’s not enough for your brand to be mentioned everywhere. The sources also need to agree on the facts they’re sharing about you.

That means the core details of your product align all over the internet, including your:

  • Category
  • Pricing and fees
  • Product features
  • Protections
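Auditing this alignment can be done systematically. Here's a minimal sketch of the idea: compare the facts stated on your own site against what each third-party source reports and flag any mismatches. All source names and product values below are hypothetical examples.

```python
# Minimal sketch: flag fields where third-party sources disagree with your
# own site. Source names and values are hypothetical, for illustration only.

def find_inconsistencies(official: dict, third_party: dict) -> list:
    """Return a message for any field a source reports differently."""
    issues = []
    for source, facts in third_party.items():
        for field, value in facts.items():
            if field in official and official[field] != value:
                issues.append(
                    f"{source} lists {field!r} as {value!r}, "
                    f"but your site says {official[field]!r}"
                )
    return issues

official = {"category": "BNPL", "monthly_fee": "$0", "fdic_insured": True}
sources = {
    "review-site-a": {"category": "BNPL", "monthly_fee": "$0"},
    "finance-blog-b": {"monthly_fee": "$4.99", "fdic_insured": True},
}

for issue in find_inconsistencies(official, sources):
    print(issue)
```

Each flagged mismatch is a candidate for outreach: either the publisher's information is outdated, or your own pages are.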

For example, I asked ChatGPT and Google AI Mode for “best budgeting apps.”

Both recommended YNAB (You Need A Budget).

ChatGPT & Google AI recommend YNAB

That’s no surprise.

YNAB appears in dozens of reputable sources, including Money, CNBC, NerdWallet, and Wirecutter.

NerdWallet – YNAB – App review

It’s also frequently mentioned in finance communities, such as myFICO Forum.

MyFico – Forum – YNAB


These sources also highlight specific use cases: college students, goal-setting, and overall budgeting.

These consistency signals help AI confidently recommend YNAB for those exact scenarios.

YNAB – Mentions

Building consistency across platforms comes down to good ole PR and reputation management.

Ensure your key details align across your site and third-party coverage.

Working with publishers and affiliates will help you shape how your brand is described.

Ultimately, consistency starts with content: what you publish and what others publish about you.

3 Types of Content That Dominate Fintech in AI Search

LLMs will reference any public content they can access.

In fintech, three types carry the most weight.

1. Owned Content

Owned content is anything you publish and control on your own properties.

This includes your website, documentation, and any branded platforms.

Owned Content

AI analyzes these places for your version of the facts.

That’s why content like “What does this product do?” or “How does it work?” is so essential.

For example, I asked ChatGPT:

“Compare ATM withdrawal limits, card spending caps, and international FX fees for Wise, Revolut, and Monzo.”

Its answer cited the three brands’ pricing and product pages to build the comparison.

ChatGPT – Brand sources

This indicates the AI uses these pages to answer this query.

For you, this means your website plays a big role in what AI says about your product.

Treat your site as both a marketing and educational channel.

Publish the product details that matter to buyers.

Look at your sales conversations, support tickets, and comparison research to identify questions, concerns, and pain points.

For example, Intuit’s TurboTax landing page includes extensive product details.

It covers everything from security and guarantees to key tax filing information.

Intuit Turbotax – Landing page

This helps the AI (and users) understand what the product includes, how it works, and who it’s for.

2. Earned Media and Reviews

Earned media and reviews are third-party perspectives on your product.

This includes everything from editorial coverage to user feedback.

LLMs use these sources to fact-check your claims. It’s also how they understand what it’s like to use your product.

In fintech, third-party sources often include:

  • Editorial guides and roundups by established finance sites such as Kiplinger and MarketWatch
  • Affiliate and review platforms, including sites like the Better Business Bureau (BBB)
  • Community discussions on platforms such as Quora and finance forums like MoneySavingExpert

Earned Media

For example, I asked ChatGPT:

“What reporting and analytics does Klarna offer brands after implementation?”

The citations included Klarna’s own documentation. Plus, sites such as print-on-demand platform Gelato, Forbes, and G2.

ChatGPT – Klarna – Reporting analytics sources

That mix is worth noting.

It shows the AI isn’t taking Klarna’s claims at face value.

It’s cross-checking them against third-party evaluations.

The takeaway here is to treat reputable third-party coverage as a core growth channel.

One proven strategy: publish original research that journalists can cite.

Take KPMG’s Pulse of Fintech H1 2025 Report, for example.

KPMG – Pulse of Fintech

Each edition generates media coverage across major sites like Bloomberg and Trinetix.

This works because reporters are constantly hunting for newsworthy statistics.

Trinetix – Fintech trends

Other things you can do to increase earned mentions include:

  • Fill out or update third-party listings you control, like app store profiles
  • Co-author articles to earn mentions in trusted sources
  • Synchronize PR, product, legal, and marketing so your brand story stays unified everywhere

3. Official Records

Official records are documents that confirm your legal authorization to operate.

LLMs treat them as proof and confirmation of compliance and regulatory standing.

The types of official records LLMs cite include:

  • Regulatory registries and licenses
  • Regulatory disclosures and notices
  • Partner bank disclosures
  • Corporate records

Official Records

These sources allow the AI to answer questions on legitimacy and protection.

For example, I asked Perplexity:

“Is Wise licensed to operate in the U.S., and what protections apply to Wise balances?”

The citations included:

  • A PDF consent order from six state financial regulators
  • Wise’s National Trust application filed with the OCC
  • The California DFPI’s regulated-entity page for Wise US, Inc.

Federal Reserve – Wise

Along with Wise’s documentation, these give AI enough evidence to answer confidently.

They confirmed that Wise:

  • Operates in the U.S.
  • Under specific entities
  • And with the appropriate approvals and protections

In fintech AI search, this kind of regulatory confirmation is a strong trust signal.

It tells AI systems that your product is legitimate and safe to mention.

This creates a real opportunity for you.

AI systems can only cite what they can find, parse, and verify.

Your job is to make your regulatory standing explicit, structured, and easy to retrieve.

Start by naming your partner banks, custodians, and key infrastructure providers on your site.

Revolut – Help article

And keep those details up to date across your site.

Publish key pages that AI systems can pull from, including:

  • Regulatory and licensing: Clearly list your licenses, registration numbers, regulatory bodies, and jurisdictions where you operate
  • Protection: Explain in plain language how funds are safeguarded, what insurance applies, and which entities custody assets

Ramp – Help center

Link to these pages from your footer and trust pages so AI bots can easily find them.

How Fintech Brands Can Improve AI Search Visibility and Accuracy

54% of Americans now turn to ChatGPT for financial research, according to a Motley Fool Money study.

That means buyers often get the “AI version” of your brand before they see your website.

That’s actually good news.

A Microsoft study found that AI traffic converts at 3x the rate of other channels. This includes search, direct, and social media.

Microsoft Clarity blog – Conversion rates

The catch? This only works in your favor if the model accurately describes your product.

Here’s how to help it do that.

Provide Proof That Your Brand Is Real and Trustworthy

LLMs need proof they can validate before they include you in answers.

So your trust details need to be public and clear across your owned platforms.

One effective way to do this is with a dedicated section on your site.

This can serve as your primary source of truth.

Many fintech brands, like SoFi, do this with a “Trust & Security Center.”

Sofi – Trust Center

But a well-structured “Help Center” like Venmo’s works, too:

Venmo – Help center

Overall, make it easy for LLMs (and users) to find the facts that reduce perceived risk:

  • Who holds the funds
  • Who powers the product
  • How the product works

Reiterate the same trust details in related pages and sections of your site.

Add them to your homepage, About page, and FAQ sections on service pages.

Many fintech brands also include disclosures, like Member FDIC or partner bank language, in the footer.

PayPal – Disclosure

A keyword tool like Semrush’s Keyword Magic can help you find safety and trust concerns people have about your company.

Keyword Magic Tool – Klarna

If they’re asking these questions on Google, you can bet they’re asking them in AI tools, too.

How you format your content is crucial. Ensure it’s easy for AI to extract and cite:

  • Use question-and-answer structures for common concerns
  • Answer each question with a clear, direct, and quotable response
  • Include facts and statistics when applicable

N26 – FAQ
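One way to make those question-and-answer pairs machine-readable is schema.org FAQPage markup. Here's a hedged sketch that generates that markup; the questions, answers, and insurance figures are placeholders, not real product claims.

```python
import json

# Sketch: generate schema.org FAQPage markup so AI systems can extract
# question-and-answer pairs. The Q&A content below is placeholder text.

def faq_jsonld(pairs):
    """Build a FAQPage JSON-LD string from (question, answer) pairs."""
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }
    return json.dumps(data, indent=2)

markup = faq_jsonld([
    ("Is my money insured?",
     "Balances are held at partner banks and may be eligible for "
     "pass-through FDIC insurance."),
    ("Who holds my funds?", "Funds are held by our partner bank."),
])
print(f'<script type="application/ld+json">\n{markup}\n</script>')
```

Embedding the resulting `<script>` block on the FAQ page keeps the visible answers and the structured data in sync from one source.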

Finally, treat data hygiene as a required part of your process.

When a partner, protection, or operational flow changes, update the documentation immediately.

Then clean up anything outdated.

Redirect or remove old PDFs and help docs so AI only finds the current version.

Reduce Mixed Messages About Your Product Online

Contradictions undermine AI’s trust in your brand.

They break the consistency signal, making AI systems cautious about recommending you.

But inconsistencies can easily happen over time.

As your company evolves, public-facing information can become outdated.

Older pages, screenshots, or explanations remain discoverable online. But AI systems can’t always determine which version is current.

The good news is that you can fix this with a few focused actions.

First, start where you have complete control: your own site.

Ensure your core narrative, product details, and trust documentation are fully synchronized on all landing pages and trust hubs.

SoFi does this well.

Their “all-in-one” app positioning is reinforced throughout their site.

Sofi – All-in-one app

As you update your site, have marketing, product, and compliance teams work together.

This ensures consistency in promotional materials, regulatory disclosures, and product specs.

Next, make sure that affiliate and “best-of” publishers accurately describe you.

Affiliate sites and finance publishers are the most-cited sources in AI answers, according to the Semrush AI Visibility Index (December 2025).

Semrush Visibility Index – Financial – Top sources

So, it’s worth checking what these sites are currently saying about you.

(Especially on “best of” listicles, comparisons, and reviews.)

To do this, research the questions people ask when evaluating your product.

They’re usually formatted like this:

  • Is [Brand X] legit
  • [Brand Y] fees
  • Can I trust [Brand Z]
  • [Brand X] vs [Brand Y]
  • Is [Brand X] safe

Google’s People Also Ask and keyword tools let you find these questions.

Keyword Magic Tool – RobinHood

You can also use Semrush’s AI Visibility Toolkit to see what questions users ask LLMs about your industry.

It tells you the exact prompts they use:

Visibility Overview – PayPal – Topics & Questions

Then, look at which pages and websites are often cited.

If you’re using the AI Visibility Toolkit, it will pull these for you:

Visibility Overview – PayPal – Cited sources

Otherwise, manually search the questions you found in different generative engines.

Perplexity – Can I trust RobinHood

Click each source and scan for inconsistencies.

If you find something wrong, reach out to the publisher for a correction or update.

Make it easy for them to make changes by providing clear, publish-ready facts.

Another vital step is monitoring (and participating in) online finance conversations.

Forum and social media posts have a long shelf life.

This can pose consistency problems for you as your company grows and your products change.

Reddit, for example, rarely deletes old posts.

So outdated answers can stay discoverable for years.

Reddit – Investing – Outdated answer

Reduce the impact of outdated information by:

  • Replying with a simple correction, especially in threads you see cited in AI engines
  • Making sure your social media accounts repeat the correct version
  • Announcing updates where people discuss your category

It’s also worth being more present in communities that AI often cites.

For example, Fidelity’s subreddit often shows up in Fidelity-related questions.

ChatGPT – Fidelity subreddit

If you manage or participate in spaces like this, you can influence the public record directly.

Use our brand subreddit guide for tips on setting one up and growing your visibility.

Manage Brand Perception and Sentiment

AI systems assess how other sites talk about you. That public sentiment shapes the answers users get.

For example, I asked ChatGPT: “Is PayPal safe?”

It didn’t give a definitive “yes.”

Instead, it used qualifying language like “generally considered” and “not perfect.”

It also added important caveats and security considerations.

ChatGPT – PayPal safety

Looking at the citations, you can see the sources that contributed to those caveats:

  • Investopedia, comparing PayPal’s safety measures to credit cards
  • Community discussions, such as r/privacy, where users debate PayPal’s risk profile
  • Editorial sites and even some competitors like Wise outlining protections and limitations

Wise – Is PayPal safe

This means:

How other sites describe you affects how AI describes you.

That makes sentiment tracking vital.

Set up regular AI search visibility audits for your brand. You can do this manually by monitoring different AI platforms.

Start with the top two most used generative AI tools:

ChatGPT and Google AI Overviews.

AI Monthly Users

Each month, run a consistent set of high-intent prompts related to your brand and category.

Note the sentiment, including any differences between AI models.

Look for patterns to assess whether your sentiment is positive, neutral, or negative:

  • Regular positive framing
  • Repeated warnings
  • Recurring pros and cons
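A simple way to keep that monthly log comparable over time is to record each prompt run as a structured entry and tally the sentiment labels per platform. The entries below are illustrative examples, not real audit data.

```python
from collections import Counter

# Sketch of a manual audit log: each entry records the platform, the prompt
# run that month, and the sentiment you judged for the answer. All entries
# are illustrative examples.

audit_log = [
    {"platform": "ChatGPT", "prompt": "best budgeting apps", "sentiment": "positive"},
    {"platform": "ChatGPT", "prompt": "is BrandX safe", "sentiment": "neutral"},
    {"platform": "Google AI Overviews", "prompt": "best budgeting apps", "sentiment": "positive"},
    {"platform": "Google AI Overviews", "prompt": "is BrandX safe", "sentiment": "negative"},
]

def sentiment_breakdown(log):
    """Count sentiment labels per platform to surface recurring patterns."""
    counts = {}
    for entry in log:
        counts.setdefault(entry["platform"], Counter())[entry["sentiment"]] += 1
    return counts

for platform, counts in sentiment_breakdown(audit_log).items():
    print(platform, dict(counts))
```

Running the same prompt set each month and diffing these tallies makes shifts in framing, like a new recurring warning, easy to spot.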

Yes, doing this manually takes time.

If you’d rather automate the process, use the AI Visibility Toolkit.

Visibility Overview – PayPal – Topics & Questions

For example, it provides your brand’s overall sentiment and share of voice.

Brand Performance – PayPal – Overall Sentiment

It breaks this down by platforms, including:

  • Google AI Mode
  • ChatGPT
  • Perplexity
  • Gemini

Brand Performance – Paypal – Select platform


You can also see how you stack up against your biggest fintech competitors.

Brand Performance – Paypal – Competitors

The Narrative Drivers tool is especially useful.

It shows the exact questions people ask about your brand. And the percentage of favorable sentiment towards you in each answer.

This makes it easy to see where you’re perceived positively or negatively at scale.

Really cool.

Narrative Drivers – Paypal – Breakdown by Question

Make Your Fintech Brand Easy for AI to Trust

AI is changing fintech in a number of ways.

Most notably, a buyer’s first touchpoint is now often an AI-generated answer.

If you’re not in those answers, you’re not in the decision.

The fix: build consistency and consensus signals for your fintech brand.

You already have a strong idea of how to do that. Now, dive deeper into the topic with our AI optimization guide.

The post Fintech in AI Search: How to Be the Trusted & Featured Brand appeared first on Backlinko.


How to Optimize Your Product Pages for AI Visibility

AI has changed the way people shop.

58% of consumers now use GenAI tools instead of traditional search to find products.

Imagine your customer runs a simple query in Google’s AI Mode: “Winter jackets for women.”

Instead of a long list of links, they get direct product recommendations — alongside:

  • Descriptions of features and best use cases
  • Ratings and reviews
  • Editorial sites that mention the product
  • Direct comparisons with top competitors

All in one response.

Google AI Mode – Best winter jackets for women

Which raises an obvious question:

Why do some products show up, while others are ignored entirely?

Many factors influence AI recommendations.

But one of the most important — and most controllable — is your product pages.

In basic terms, AI needs to understand what your product is and who it’s for.

When that information is clear, structured, and specific, your products have a much better chance of appearing in AI results.

In this guide, we’ll break down how AI evaluates product pages, and which elements matter most.

Plus, we’ll see how leading ecommerce brands structure their pages to get recommended.

Free checklist: To get a head start, download our Product Page AI Optimization Checklist. It includes everything you need to get more product mentions in AI platforms.


How AI Models “Think” About Product Pages

Ever wondered how large language models (LLMs) choose which products to surface in answers?

While there’s a lot at play, you can basically narrow it down to two factors:

  • Consistency: Information about your brand and products matches across your website and third-party sites
  • Consensus: Multiple reputable sources validate your product’s quality, use cases, and performance. This includes reviews on your product pages and third-party sites.

For LLMs to confidently cite a product page, they need consistent, up-to-date information.

AI models analyze product pages to pull details that help them answer user queries.

Remember, AI queries don’t look like a regular search.

Prompts are often highly specific requests for products that fit a clear use case or situation.

Example: What are the best women’s road racing shoes for a 10K in Ireland?

ChatGPT – Best women's racing shoes

AI looks for product pages that clearly communicate:

  • What the product is
  • What it’s used for
  • Who uses it
  • In what situations it can be used

This helps the system understand your product in the context of user queries.

Take this Nike road racing shoe product page, for example.

Nike – Women's Road Racing Shoes

AI systems understand when and how to recommend this product because it contains details like:

  • What the product is: “Women’s Road Racing Shoes”
  • Who should use it and when: Racing-related language like “marathon” and “race day shoe” makes it clear this product is for racing

When I searched “best road racing shoes for women” in AI Mode, it recommended Nike’s Alphafly.

Google AI Mode – Best road racing shoes for women

And where did the information it quoted come from?

Nike’s own product page.

Nike – Product page

AI models also look for consensus signals on product pages.

This includes customer reviews and ratings.

When AI analyzes reviews, it looks for patterns. This includes repeated mentions of specific use cases, features, or product benefits.

For example, the Nike Alphafly is highly rated with plenty of reviews on the Nike website.

Among other benefits, this improves its chances of being recommended by AI platforms.

Nike – Reviews

But AI doesn’t rely solely on product pages.

It cross-references independent sources to back up claims about your products.

In a similar search for racing shoes, I found that AI Mode cites various third-party sources to support its recommendations.

Google AI Mode – Best road racing shoes for women – Sources

Like this one, that includes a review of Nike shoes, complete with product details.

RunToTheFinish – Best carbon plate running shoes

Product pages are one piece of the AI visibility puzzle.

But they create the foundation AI systems need to confidently recommend your products.

Further reading: Learn how LLMs recommend brands in Semrush’s AI Visibility Index.


6 Essential Elements of a Product Page for AI Visibility

You likely already have some (or all) of the elements below on your product pages.

But for AI visibility, having them isn’t enough.

What matters is clarity, specificity, and structure.

Note: These elements aren’t in any particular order; all are important for AI visibility.


1. Clear Product Descriptions with Semantic Language

A clear product description explains more than what your product is. It spells out what it does, who it’s for, and why someone would choose it.

This matters for AI visibility because LLMs rely heavily on semantic retrieval.

In other words, AI understands the intent and meaning behind queries. Not just exact-match keywords.

For example, when someone searches for “vacuum for pet hair,” AI doesn’t just look for that phrase.

It also looks for semantically related terms. Things like “stubborn hair,” “carpets,” “pet odors,” and “allergens.”

How AI expands your query

These terms help AI infer use cases, surface the right features, and decide when your product is a good fit.

Including them on product pages improves your chances of appearing in AI-generated answers.

So, how do you find these terms?

First, read forums, reviews, and social media conversations.

Learn how people talk about the problems they’re facing and the products they’re using.

Using our vacuum example, I dove into r/VacuumCleaners. There, I found recurring phrases around weight, clogging, tangles, and flooring-specific concerns.

Reddit – Best vacuum for pet hair

Next, conduct keyword research on related terms.

This shows you how people actually phrase their searches.

A tool like Semrush’s Keyword Magic Tool is great for this task.

Note: A free Semrush account gives you 10 searches in the Keyword Magic Tool per day. Or you can use this link to access a 14-day trial on a Semrush Pro subscription.


Enter a keyword, such as “pet hair vacuum.”

The tool will return a list of “Broad Match” queries, which contain variations of your keyword.

Keyword Magic Tool – Pet hair vacuum – Keywords

Review the “All Keywords” list on the left to find common themes.

Then, check the monthly search volume for each term.

In our example, we might use “handheld,” “carpet,” and “hardwood” as semantic keywords.

Keyword Magic Tool – Pet hair vacuum – Groups

Collect a few key terms, and use them in product descriptions to explain what your product does.

You can still be creative. Just don’t sacrifice clarity.
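Once you have your term list, a quick coverage check can tell you which semantic terms your copy already includes and which are missing. Here's a minimal sketch; the terms and product copy are made-up examples.

```python
# Sketch: check which researched semantic terms a product description
# actually covers. The term list and copy below are example data.

SEMANTIC_TERMS = ["pet hair", "carpet", "hardwood", "handheld", "lightweight"]

def term_coverage(description, terms):
    """Split terms into those present in the copy and those missing."""
    text = description.lower()
    found = [t for t in terms if t in text]
    missing = [t for t in terms if t not in text]
    return found, missing

copy = (
    "A lightweight stick vacuum that lifts stubborn pet hair "
    "from carpet and hardwood floors."
)
found, missing = term_coverage(copy, SEMANTIC_TERMS)
print("covered:", found)
print("missing:", missing)
```

A missing term isn't an order to keyword-stuff; it's a prompt to ask whether that use case belongs in the description at all.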

Here’s what this looks like in practice.

I asked AI Mode for the best lightweight vacuum for pet hair. One of the top recommendations was a Shark vacuum.

Google AI Mode – Best lightweight vacuum for pet hair

User preferences and personal context aside, AI Mode recommended this product for a few reasons:

For one, it has strong consensus signals from third-party reviews and editorial sites.

(Which you can see from the sources on the right side.)

Google AI Mode – Best lightweight vacuum for pet hair – Sources

But let’s also take a closer look at the product page.

The product name alone — Shark UltraLight PetPro Corded Stick Vacuum — gives a core use case.

It’s meant for lightweight, pet-focused cleaning.

SharkNINJA – Shark Ultralight PetPro Corded Stick Vacuum

The product description reinforces that message with simple, specific language:

  • Captures stubborn hair
  • Works on carpets and floors
  • Hand vac option
  • Weighs less than three pounds

SharkNINJA – Shark Vacuum – Review

That same phrasing shows up in the AI response.

This strongly suggests AI Mode is pulling this information directly from Shark’s product description for this vacuum.

Google AI Mode – Item description

Bottom line: Customer-focused, use-case-driven language helps AI understand when to recommend your product.

This increases your chances of appearing in AI search results.

Further reading: Need inspiration? Check out some of our favorite ecommerce website examples.


2. Pricing and Availability in Real-Time Feeds

LLMs read product data from two places: your product pages and merchant feeds.

If your site has accurate structured data, AI can use that. But crawlers don’t run every minute. That means prices and stock can be stale.

That’s where a live product feed or API comes in.

This includes Shopify’s Catalog API, OpenAI’s Product Feed Spec, and feeds submitted through Google’s Merchant Center.

Pro tip: OpenAI product feed submission is currently available only to approved partners. Fill out the Merchant Application form for consideration.

Google Merchant Center


When you use these, AI search engines can fetch current prices and inventory on demand.

That’s the tech that powers real-time recommendations and in-chat shopping in ChatGPT and other AI platforms.

ChatGPT – Full shopping destination
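To make the idea concrete, here is a sketch of a single feed item. The field names are modeled loosely on Google Merchant Center's product data spec (simplified for illustration), and the product values are invented:

```python
import json

# Minimal product feed item; field names modeled on Google Merchant
# Center's product data spec, values invented for illustration
feed_item = {
    "id": "SKU-1042",
    "title": "UltraLight Pet Stick Vacuum",
    "description": "Lightweight stick vacuum for pet hair on carpets and hard floors",
    "link": "https://example.com/products/ultralight-pet-vacuum",
    "price": "129.99 USD",
    "availability": "in_stock",
    "brand": "ExampleBrand",
}

print(json.dumps(feed_item, indent=2))
```

Because the feed is refreshed on your schedule rather than a crawler's, the `price` and `availability` fields are what let AI surfaces quote current numbers instead of stale page snapshots.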

More platforms are also adding this capability.

Google is rolling out a Universal Commerce Protocol.

This feature brings buy-in-chat functionality to eligible product recommendations in AI Mode and Gemini.

Google – Universal Commerce Protocol

But what if you don’t use a product feed or API?

LLMs can still find product information on public webpages. But it may be outdated.

And that’s a problem.

AI platforms evaluate recency and consistency.

Mismatched prices or outdated stock can hurt your AI visibility, in part because they lead to a poor customer experience.

To see how this plays out in practice, I tested ChatGPT’s “Shopping research” mode.

The AI asks questions to narrow results, including how much you want to spend.

I told ChatGPT I was looking for a new couch. I specified both my budget and need for delivery to Massachusetts.

ChatGPT – Use shopping research

ChatGPT returned five options, all of which fit my budget and availability requirements.

The “Best overall” option even highlighted that it was “in stock for fast delivery” to my state.

ChatGPT – Returned options that fit

To further test how price affects results, I asked if any of the recommended couches were on sale.

It narrowed down my options and provided sale pricing.

ChatGPT – Narrowed down options & provided prices

ChatGPT only mentioned one couch as being on sale.

To find out why, I reviewed the product pages for each recommendation. Only one clearly highlighted both the original and sale price.

Walmart’s product pages boldly showcase the previous price versus the discount.

Walmart – Previous prices versus discount

In its response, ChatGPT specifically mentioned that Walmart displays this info on its product page.

ChatGPT – Currently on sale

Walmart also submits its product feeds to platforms like Google Merchant Center.

So its pricing (both sale and original) is clear and current across platforms.

Google SERP – Grey couch with sleeper

Product feeds and APIs keep your price and inventory fresh.

When AI systems have access to this data, they can recommend your products when users narrow options by price, availability, or discounts.

3. Ratings and Reviews

Many AI systems display ratings and reviews in product recommendations.

In AI Mode, you can click a product recommendation and see reviews directly in the sidebar.

Google AI Mode – Reviews

ChatGPT also includes information from reviews.

It often surfaces them as part of the response:

ChatGPT – Includes information from reviews

But LLMs do more than show you reviews. They also weigh reviews and ratings when choosing recommendations.

ChatGPT often includes labels like “Budget-friendly” or “Most popular” based on reviews.

OpenAI has confirmed that answers may include summaries of the themes most commonly mentioned in reviews.

That could mean pros, cons, and use cases pulled directly from reviews.

Here’s how that looks in practice when I search for warm winter hiking boots:

ChatGPT – Section headings

Ultimately, reviews on your product page don’t just affect whether your product appears in AI search.

They can also influence how it’s positioned.

When AI systems analyze reviews, they look for consistency:

  • Repeated mentions of specific use cases
  • Commonly praised features
  • Patterns in star ratings
  • Shared language around benefits or problems

The more clearly those patterns emerge, the easier it is for AI to confidently recommend — and describe — your product.

This applies to reviews on your own product pages and on third-party sites.

When I asked AI Mode for a hydrating cleanser for sensitive skin, the first recommendation was a product from CeraVe.

Google AI Mode – Hydrating cleanser for sensitive skin

Interestingly, the product description itself doesn’t explicitly emphasize “sensitive skin.”

CeraVe – Hydrating Facial Cleanser

But the reviews on CeraVe’s product page do.

Here’s what I noticed:

  • Reviews are tagged with commonly mentioned phrases
  • One of the most prominent tags is “sensitive skin”
  • There are over 100 reviews referencing sensitive skin — most of them positive

CeraVe – Hydrating Facial Cleanser – Reviews

Having reviews on every product page is a best practice that increases trust and authority.

Encourage customers to leave detailed feedback by:

  • Prompting for use cases in review forms
  • Asking follow-up questions after purchase
  • Offering light incentives (like a coupon) in exchange for honest reviews

Note: The most important thing is that these reviews are real. Fake or AI-generated reviews may temporarily improve your brand’s visibility in AI search. But they are never worth the long-term risk to your reputation.


4. Contextual Use Cases

AI search looks for explicit connections between what a product is and why someone needs it.

So, your entire product page should explain when, why, and in what situations a product makes sense.

Apple – MacBook Air

This requires a shift in how you think about product marketing.

Instead of asking, “What can this product do?”

Ask, “In what specific scenario would someone actively look for this?”

Start by identifying who buys your product and what triggers that purchase. If you don’t already have this insight, customer interviews are your fastest path.

Look for:

  • The situation that prompted the search
  • The alternatives they considered
  • The constraint that mattered most (travel, space, safety, performance, etc.)

Once you have this, choose one or two clear, specific use cases to feature on each product page.

Don’t just list all the possible ways your product can be used.

AI isn’t great at matching vague versatility.

Instead, focus on the use cases that come up repeatedly in customer conversations. That way, AI can match your product to a specific intent.

Let’s look at an example for an electronics brand.

This product page for Anker’s 3-in-1 mobile charger states it’s “ultra compact and travel friendly.”

Anker – Products

When I search for travel-friendly chargers on ChatGPT, Anker’s 3-in-1 device is the top recommended product.

ChatGPT – Travel friendly Chargers

Obviously, this little charger is a great option for more than just travel.

But by calling out that use case on the product page, it makes it easier for LLMs to recommend it in related queries.

5. Awards and Certifications

LLMs prioritize trustworthy, verifiable information when recommending products.

One of the strongest ways to demonstrate that trust is to feature third-party validation on your product pages.

This includes:

  • Industry awards and “best of” recognitions
  • Third-party testing results
  • Safety and quality certifications
  • Sustainability or ethical production badges

To see how much awards affect AI visibility, I analyzed 50 ecommerce brands in Semrush’s AI Visibility Overview tool.

This included Samsung, Patagonia, Everlane, Caraway, and others.

First, I identified brands with high AI Visibility scores.

This is a Semrush metric that measures how often brands appear in AI-generated answers.

I focused on brands scoring above their industry average. (This varies by industry, but is generally between 60 and 90.)

AI SEO Overview – Samsung

Next, I looked at how many of the top-ranking brands feature awards and certifications on their product pages.

And I found something very interesting:

82% of the brands with medium to high AI visibility prominently feature awards and certifications on their product pages.

Awards and certifications link to higher AI visibility

For example, Samsung has an AI Visibility score of 90.

AI SEO Overview – Samsung – Great

And its product pages feature multiple awards.

Like being “rated #1 in camera quality” by the American Customer Satisfaction Index.

Samsung – Rated #1 in camera quality

And winning “Best Phone Camera” by Consumer Reports:

Samsung – What experts say

When I asked Claude which phone has the best camera quality, the Samsung Galaxy was one of its top recommendations:

Claude – Samsung result

BabyBjorn has an AI Visibility score of 67.

Semrush AI SEO – Babybjorn – Overview

A quick look at its product pages reveals certificates and awards on every product page.

Like this one that references a “Best Bouncer” award from Parents Magazine:

BabyBjorn – Parents Magazine Award

When I asked ChatGPT to recommend the “best and safest baby bouncer,” BabyBjorn was the #1 pick:

ChatGPT – Best and safest baby bouncer

Now, this is correlation, not necessarily causation. And awards and certifications are not the only factor.

But they can make a difference for product page visibility in LLM search.

If you already have awards and certifications, showcase them prominently on your product pages.

If you don’t, create a strategy to earn them.

Target industry-specific certifications (safety, quality, sustainability) and awards from reputable organizations.

PR outreach can help you land relevant certifications and “best of” awards.

6. Structured Attributes and Schema Markup

Structured attributes are pieces of product information that machines can easily understand.

This includes things like:

  • Price
  • Dimensions
  • Materials
  • Ratings
  • Availability
  • Color
  • Size
  • Warranty details

These attributes are vital components of a product page.

Use tables, bullet lists, or specification sections to clearly structure them for machines and customers.

They should also be in your structured data and product feeds.

For example, blender brand Vitamix features a “Specifications” section on its product pages:

Vitamix – Product specifications
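Those same attributes belong in your Product schema markup. Here is a minimal JSON-LD sketch built in Python, using standard schema.org properties; the product name, price, and rating values are invented examples:

```python
import json

# Minimal schema.org Product markup (JSON-LD); property names are
# standard schema.org vocabulary, product values are invented
product_schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Pro Blender",
    "description": "High-powered blender for smoothies and soups",
    "brand": {"@type": "Brand", "name": "ExampleBrand"},
    "offers": {
        "@type": "Offer",
        "price": "349.95",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.7",
        "reviewCount": "1325",
    },
}

# Emit the script tag you would embed in the product page's HTML
print('<script type="application/ld+json">')
print(json.dumps(product_schema, indent=2))
print("</script>")
```

Keeping these values identical to what the visible page and your product feed say is the consistency signal AI systems check for.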

We can’t say definitively that schema affects LLM visibility (yet).

But major AI search engines confirm they rely on structured attributes to understand and recommend products.

What OpenAI says: “When determining which products to surface, ChatGPT considers structured metadata from first-party and third-party providers (e.g., price, product description).

Depending on your needs, some of these factors will be more relevant than others. For example, if you specify a budget of $30, ChatGPT will focus more on price, whereas if price isn’t mentioned, it may focus on other aspects instead.”


It’s also still a best practice for traditional SEO.

Plus, it’s no secret that structured data helps products appear in Google’s main search results and Shopping tab.

It’s what allows users to refine results, see ratings, and check prices right on the first page of Google.

Google SERP – Best glass air fryers

But here’s where it gets interesting.

When I conducted a search in AI Mode, Google’s own shopping cards were the main sources.

Google AI Mode – Google shopping cards

Clicking into one of those sources, I saw even more of that search-friendly structured data.

Search-friendly structured data

And where does all this information come from?

You guessed it: the original product page.

That same structure is what enables Google’s AI responses to display live pricing, availability, sales, and comparisons.

Clear, consistent schema simply gives search engines and LLMs more to work with.

That context helps AI more confidently recommend your product in related queries.

AI Visibility Essentials for Product Pages (By Industry)

The elements above matter on every product page.

But AI evaluates product pages differently depending on the category.

In this section, we’ll break down the category-specific product page details that AI looks for across six common ecommerce industries.

Fashion Brands

Ask any AI engine for clothing recommendations, and you’ll notice something consistent: the results highlight fit, materials, and comfort.

Google AI Mode – About products

Clearly, the most important product page elements for fashion brands are:

  • Clear sizing and conversion charts
  • Material and care information
  • Customer fit data
  • Sustainability certifications and ethical production badges

Fashion queries are also highly specific to the individual shopper.

To see how AI handles these searches, I used Semrush’s AI Visibility Toolkit.

I analyzed the topic “jeans for women” using Semrush’s Prompt Research tool.

AI Visibility – Prompt Research – Volume

What’s revealing is the variety of queries under this topic.

Take “Plus size and curvy women’s jeans” for example.

Even within this niche, searches vary widely:

  • “Best plus size jeans for big thighs”
  • “Best curvy fit jeans”
  • “Most comfortable jeans for curvy women”

AI Visibility – Prompt Research – Jeans for women – Prompts

Across all these queries, the AI responses consistently emphasize the same details:

  • High-rise styles
  • Stretch denim
  • Tummy control
  • Specific silhouettes like bootcut

Semrush – Prompt research – Best jeans for women

These details are pulled directly from product pages and customer reviews:

Abercrombie – Product details

For AI to match products to these specific queries, it needs structured details on your pages.

This is something Abercrombie & Fitch does well.

They display clear fit guidance and aggregated customer fit feedback prominently on product pages.

Abercrombie – Customer says it fits

Health and Wellness Products

Nothing is more important to health and wellness brands than trust and safety.

That’s why non-negotiables for product pages in this industry include:

  • Full ingredient composition
  • Clear dosage and instructions
  • Contraindications and allergen warnings
  • Source transparency
  • Clinical studies or certifications

Searches for health products are often deeply personal and complex.

Many start with a product type and the demographic it’s best for.

For example, the topic of “infant multivitamins” includes these common searches:

  • “Where can I buy reliable infant multivitamins?”
  • “How do I choose the best multivitamin for my baby?”

AI Visibility – Prompt Research – Vitamins & Supplements – Prompts

In their responses, AI models pull from ingredient lists, dosage information, and certifications.

Brands that perform well for wellness-related AI queries follow the same pattern.

They provide detailed information about ingredients, sourcing, and production on their product pages.

This is what helps popular health company Thorne get recommended often in AI search results:

ChatGPT – Thorne Recommendation

Their product pages list ingredients in detail:

Thorne – Ingredient Information

They also include dosage instructions and verification of product quality.

All in a clear, machine-readable format.

Thorne – Verified & How to Use

Electronics

When it comes to electronics, AI loves to quote specs.

Battery life, screen resolution, charging speed, refresh rates, and more are all pulled into responses.

So every electronics product page should include the essentials:

  • Full technical specs
  • Compatibility information
  • Setup or installation guides
  • Safety and efficiency certifications

For example, even a simple search — “best cameras for night photography” — returns spec-heavy recommendations.

Google AI Mode – Best cameras for night photography

Structured specs give AI systems what they need to compare products.

This is important on your own site and third parties.

Brands like Sony excel here.

They ensure their product and retailer pages feature technical details that are consistent and in-depth across platforms.

Sony – Product – Key Specs

Home and Furniture Brands

Furniture shopping comes with one big question: Will it fit?

AI knows this, which is why technical details dominate recommendations.

Your home and furniture product pages need:

  • Clear dimensions and room size recommendations
  • Assembly requirements (tools, time, difficulty)
  • Materials and care details
  • Quality and sustainability certifications

For example, in a search for modular sofas for small apartments, ChatGPT mentions configurations in its answer:

ChatGPT mentions configurations

One of its top recommendations is a couch by home brand Burrow.

While many factors go into this, its product page is definitely one of them.

It features different configurations of its modular sofas. Plus, the dimensions of each.

Burrow – Configurations of sofas

It also contains other vital information that users might ask AI systems, such as detailed materials and fabric care.

Burrow – Detailed materials & fabric care

Outdoor and Sports Equipment

Customers need to know whether your products will survive their outdoor adventures.

Which is why AI takes these elements into account:

  • Weather ratings and technical materials
  • Performance specs (capacity, weight, range)
  • Use-case scenarios
  • Safety certifications or features

Let’s say your customers ask about hiking backpacks. They’ll see AI models highlight key features, max load, and materials.

Google AI Mode – Best hiking backpacks

Osprey’s backpacks are regularly recommended by AI.

This is because they clearly state use cases like “week-long backpacking trips”:

Osprey – Recommended by AI

They also include features that make it ideal for common use cases: materials, weight, volume, dimensions, and load range.

Osprey – Product specifications

Baby Products

Baby products trigger some of the most safety-sensitive AI recommendations.

AI models look for structured, verifiable details when recommending anything for infants.

If you sell baby products, here’s what your product page should include:

  • Age and weight suitability
  • Safety certifications (like OEKO-TEX, GREENGUARD)
  • Ergonomic or developmental benefits
  • Material and care instructions

For example, BabyBjorn includes safety certifications on its product pages.

And goes deep into safety information.

This includes how the fabrics are developed, and the appropriate age and weight for safe use.

Baby Bjorn – Product fabrics

When I asked Perplexity for the safest baby carrier on the market for newborns, BabyBjorn was among its top recommendations.

It also specifically mentioned the “hip healthy” certification featured on BabyBjorn’s product page.

Perplexity – Baby Bjorn reference

Increase Your Product Page Visibility in AI Search

If you want AI to recommend your products, the best place to start is your product pages.

Small improvements compound quickly.

Clear descriptions. Structured data. Real reviews. Verifiable trust signals.

Together, they shape how AI understands — and surfaces — your products.

But product pages are just the start.

First, download the Product Page AI Optimization Checklist. It tells you exactly what to review, update, and add to make your product pages AI-friendly.

Then, learn how to build an AI ecommerce SEO strategy that improves your visibility across the entire buyer journey.

AI visibility is possible for your products. Keep testing, keep tracking, and keep growing.

The post How to Optimize Your Product Pages for AI Visibility appeared first on Backlinko.


GEO vs SEO: What’s the Difference in 2026?

SEO optimizes for ranking in search results. GEO optimizes for being cited in AI-generated answers. That’s the core distinction, but […]

The post GEO vs SEO: What’s the Difference in 2026? appeared first on Onely.


Scaling the agentic web with NLWeb

Imagine a web ecosystem where not just humans but AI agents communicate with websites, going beyond traditional browsing. Unlike conventional web experiences, where people click, scroll, and search, AI agents can navigate, interpret, and even perform tasks autonomously on your site. This is not a futuristic concept. It is already unfolding. This is the emergence of the agentic web.

Key takeaways

  • The agentic web enables AI agents to autonomously navigate and interact with websites, shifting user responsibilities from manual navigation to decision-making
  • Protocols are crucial for communication among AI agents; they must rely on structured, machine-readable data for effective coordination
  • SEO professionals must adapt to the agentic web by optimizing websites as endpoints for AI queries, ensuring structured data and clarity
  • NLWeb facilitates interaction between agents and websites by exposing structured data and allowing for natural language queries without traditional interface limitations
  • Yoast’s collaboration with NLWeb helps WordPress users prepare for the agentic web by organizing content and making it easier to integrate structured data

The big shift: From web for users to a web for users and agents

For years, the web followed a simple pattern. Humans searched, clicked, compared, and completed tasks manually. Even as search engines evolved, the interaction model stayed the same: search and click.

That model is changing.

The agentic web represents a shift from a web designed only for human users to one designed for both people and AI assistants. Instead of manually researching products, comparing services, filling out forms, and completing transactions, users will increasingly delegate those tasks to intelligent assistants that can search, interpret information, and act on their behalf. The user’s role shifts from active navigator to decision-maker.

From searching to delegating.

This is not about smarter chat interfaces. It is about autonomous agents that can interpret search intent, compare options, and execute actions on behalf of users. Websites are no longer just pages to be visited. They are endpoints to be queried.

For that to work at scale, intelligence cannot reside in a single assistant or on a closed platform. It has to be distributed. Systems must be able to communicate with other systems without friction. That requires a web that is machine-readable, interoperable, and built for agent-to-agent interaction.

The agentic web is not a prediction. It is an architectural shift already underway!

Protocol thinking and the infrastructure of agentic web communication

If the agentic web is about intelligent systems interacting with websites, then the real question becomes simple: how do these systems understand each other?

The answer is not design. It is infrastructure.

The web has always depended on shared communication rules. HTTP allows browsers to request pages. RSS distributes updates. Structured data helps search engines interpret meaning. These are not features. They are protocols. They are agreements that enable large-scale coordination.

Now the same logic applies to AI agents.

In the agentic web, agents will not click buttons or visually scan pages. They will send requests, interpret structured responses, compare options, and complete tasks. For that to work across millions of websites, communication cannot be improvised. It must be standardized.

This is where protocol thinking becomes essential.

Protocol thinking means designing websites so they are predictable for machines. Instead of building custom integrations for every assistant or platform, websites expose a consistent interaction layer. Agents do not need to learn every interface. They rely on shared rules.

As emphasized in discussions of distributed intelligence, the goal is not to let a single chatbot control everything. The intelligence must be distributed. Systems need a simplified way to communicate without having to understand the technical details of every tool they connect to.

That only works when there is common ground.

In practical terms, this means:

  • Websites must expose structured, machine-readable data
  • Agents must know what they can ask
  • Responses must follow predictable formats
  • Communication must scale beyond one platform

Protocols create that shared language.

What does this mean for SEO professionals?

As the web evolves to support AI agents, SEO professionals are starting to ask a new question: how do you stay visible when answers are generated instead of ranked?

A clear example of this surfaced during Microsoft’s Ignite event. In a Q&A session, a consultant described a client who sells products like mayonnaise and wanted their brand to appear when someone asks an AI assistant about mayonnaise. The question was simple, but it revealed something deeper. If AI systems generate answers instead of listing search results, what does optimization look like?

This is where the shift becomes real.

The agentic web does not replace the open web. It adds another layer on top of it. Search engines still index pages. Rankings still matter. But intelligent systems can now query websites directly, compare information across sources, and generate synthesized responses.

For SEOs, this changes the website’s role.

It is no longer enough to think in terms of pages to be visited. Websites must be treated as endpoints to be queried.

This means structured data, clean information architecture, and machine-readable content are not just enhancements for rich results. They are the foundation that allows AI systems to interpret and select your content in the first place.

Watch the full event here!

Key takeaway for SEOs

The agentic web is an additional layer on the open web, not a replacement for it. To stay visible, SEO professionals must ensure their websites are structured, accessible, and ready to be queried by intelligent systems.

Visibility in this new layer depends on clarity, interoperability, and infrastructure.

Must read: Why does having insights across multiple LLMs matter for brand visibility?

Introducing NLWeb

NLWeb was first introduced by Microsoft in May 2025 as an open project designed to make it simple for websites to offer rich natural language interfaces using their own data and model of choice. Later, in November at Microsoft Ignite, Microsoft presented NLWeb again alongside its first enterprise offering through Microsoft Foundry.

At its core, NLWeb aims to make it easy for a website to function like an AI app. Instead of navigating pages manually, users and agents can query a site’s content directly using natural language.

But NLWeb is more than just a conversational layer.

Every NLWeb instance is also a Model Context Protocol, or MCP, server. This means that when a website enables NLWeb, it becomes inherently discoverable and accessible to agents operating within the MCP ecosystem. In simple terms, agents do not need custom integrations for every site. If a website supports NLWeb, agents can recognize it and interact with it in a standardized way.

NLWeb is a conversational layer that interacts with a website and retrieves information

NLWeb builds on formats that websites already use, such as Schema.org and RSS. It combines that structured data with large language models to generate natural language responses. This allows websites to expose their content in a way that both humans and AI agents can understand.

Importantly, NLWeb is technology agnostic. Site owners can choose their preferred infrastructure, models, and databases. The goal is interoperability, not platform lock-in.

In many ways, NLWeb is positioned to play a role in the agentic web similar to what HTML did for the early web. It provides a shared communication layer that allows agents to query websites directly, without relying only on traditional crawling or visual interfaces.
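To picture what querying such an endpoint looks like from an agent's side, here is a hypothetical sketch. The `/ask` path and `query` parameter are assumptions modeled on NLWeb's ask-style interface, not a confirmed API contract, and the site and response values are invented:

```python
import json
from urllib.parse import urlencode

# Hypothetical: the /ask path and "query" parameter are assumptions
# modeled on NLWeb's ask-style interface, not a confirmed contract
def build_ask_url(site: str, question: str) -> str:
    """Build the URL an agent might use to query an NLWeb-enabled site."""
    return f"{site}/ask?" + urlencode({"query": question})

url = build_ask_url("https://example.com", "lightweight vacuums for pet hair")
print(url)

# A grounded response would look like a list of schema.org objects
# drawn from the site's own structured data, not free-form text
sample_response = {
    "results": [
        {
            "@type": "Product",
            "name": "UltraLight Pet Vacuum",
            "url": "https://example.com/products/1042",
        }
    ]
}
print(json.dumps(sample_response, indent=2))
```

The key design point is the second half: because results are retrieved objects rather than generated prose, the publisher's structured data stays the source of truth.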

How is NLWeb different from standard LLM citations?

With standard LLM citations, the model generates an answer first, then adds sources. The response is still probabilistic, which can introduce inaccuracies or hallucinations.

NLWeb works differently.

It treats the language model as a smart retrieval layer. Instead of inventing answers, it pulls verified objects directly from the website’s structured data and presents them in natural language.

That distinction matters. It means responses are grounded in the publisher’s own data from the start, reducing the risk of hallucination and giving site owners greater control over how their content is represented.

What NLWeb means for the agentic web

The agentic web depends on systems being able to communicate at scale. Agents cannot manually interpret every interface or navigate every page visually. They need structured, machine-readable access.

NLWeb helps enable that.

Instead of requiring custom integrations for every assistant or platform, a website can expose an NLWeb-enabled endpoint. Agents only need to know that a site supports NLWeb. The protocol handles how requests are made and how responses are structured.

This supports a more distributed ecosystem. The goal is not to let one chatbot control everything. Intelligence must be distributed across the web.

Generative interfaces do not replace content. They depend on well-structured, accessible content. When an AI system summarizes results or compares options, it is still drawing from the information that websites provide. NLWeb simply creates a clearer path for that interaction.

Yoast’s collaboration with NLWeb and what it means for WordPress users

As part of the NLWeb announcement, Microsoft highlighted Yoast as a partner helping bring agentic search capabilities to WordPress. You can read more about this collaboration in our official press announcement on Yoast and Microsoft’s NLWeb integration.

For many WordPress site owners, concepts like infrastructure, endpoints, and protocols can feel abstract. That is exactly where preparation matters.

While Yoast does not automatically deploy NLWeb for users, the schema aggregation feature in Yoast SEO, Yoast SEO Premium, Yoast WooCommerce SEO, and Yoast SEO AI+ organizes and structures content, making it significantly easier to build NLWeb. When site owners enable the relevant Yoast feature, nothing changes visually on the front end. What changes is the underlying structure.

In short, we map and organize structured data to reduce the technical effort required to build NLWeb on top of it. In other words, we help publishers complete much of the groundwork.

The agentic web is not about chasing a trend. It is about ensuring your content remains discoverable, understandable, and usable in a world where intelligent systems increasingly act on behalf of users.

The post Scaling the agentic web with NLWeb appeared first on Yoast.

How To Use AI To Support Your Design Process

If you’re trying to figure out how to use AI for graphic design without wrecking your creative process, you’re asking the right question. 

This is about removing friction, not replacing designers. AI tools now live inside Figma, Adobe, and most product workflows. They generate layouts, draft UX copy, suggest components, and summarize research in seconds. Teams that ignore this shift don’t stay sharp—they fall behind. 

The opportunity is acceleration, not just automation. Used correctly, AI helps you explore more ideas, test faster, and scale design systems without burning out your team. Used poorly, it creates generic work and weakens your brand. 

Key Takeaways

  • AI works best as a speed multiplier inside research, ideation, and iteration, not as a final decision-maker. 
  • Designers who use AI strategically explore more concepts in less time without sacrificing quality. 
  • AI excels at pattern recognition and scale; humans lead brand, emotion, and strategic direction. 
  • Workflow integration matters more than tool selection. Start with clear constraints and review checkpoints. 
  • The future of design is human-led, AI-supported collaboration where judgment and curation become competitive advantages. 

Why AI Is Reshaping The Design Process

AI in design moved fast. A few years ago, tools generated rough visual experiments that designers used for inspiration. Now AI sits inside production software as an integrated feature. Adobe Firefly brings generative fill directly into Creative Cloud. Figma AI assists with layout generation and content drafting inside live design files, not separate applications. 

Adobe Color in action.

This mirrors broader workforce trends. Recent research estimates that generative AI could automate 60 to 70 percent of employees’ time spent on tasks like writing, coding, and content creation. That number matters for designers because the tools have reached production quality. 

AI now handles generating layout variations instantly, scaling design systems across multiple pages, drafting placeholder UX copy, summarizing user interviews, and creating image assets. The shift is about integration into daily workflows, not novelty anymore. 

AI becomes a speed multiplier that lets designers iterate faster, a pattern recognizer that surfaces insights humans might miss in large datasets, and a scale enabler that makes it practical to test dozens of variations instead of three. Designers who ignore these capabilities don’t become more original. They become slower than competitors who’ve figured out how to direct AI effectively. 

Where AI Fits Inside The Design Process

AI supports nearly every phase of design. The key is knowing where it accelerates thinking and where it should not lead. 

Research & Discovery: AI can summarize user interviews and cluster recurring themes in minutes instead of hours. You can feed it transcripts from customer calls and get structured themes back almost immediately. Those insights become more powerful when layered with strong web design principles that drive conversions. AI spots patterns in what users say—you decide which patterns actually matter for your product strategy. 

Ideation & Concept Development: This is where AI shines for expanding possibility rather than limiting it. You can generate mood boards, explore visual directions, and create layout permutations at a pace that wasn’t realistic before. Prompt an AI tool for 15 different homepage hero structures in minutes, then use your judgment to identify which directions are worth refining. That expansion of options helps teams break out of familiar patterns. 

Wireframing & Prototyping: If you’ve wondered how can AI UX design improve workflows, this phase shows the practical impact. AI suggests layout blocks based on content priority, drafts microcopy that matches tone, and builds rough component structures. Instead of starting from a blank canvas, designers refine something that’s 60% there. That said, if those wireframes need to scale across devices, applying mobile design best practices remains critical. AI can propose structure, but it doesn’t validate usability across breakpoints without human review. 

Using AI For Faster Ideation Without Killing Creativity

The biggest fear designers have is sameness—that AI makes everything look generic. That happens, but only when designers outsource judgment instead of using AI as a thinking tool. 

AI outputs reflect the constraints you give them. Weak prompts produce generic work because the AI defaults to common patterns it’s seen in training data. Clear brand direction, specific visual references, and well-defined constraints produce usable divergence that respects your brand identity. 

Here’s a workflow that protects creativity: Start by defining brand guardrails clearly—your color palette, typography rules, spatial hierarchy, and tone. Then use AI to generate multiple structural variations within those constraints. Review the batch to identify directional strengths. Maybe one layout’s grid system works better than others, or a particular hierarchy feels more balanced. Finally, refine manually, bringing your taste and brand knowledge to polish the direction that showed the most promise.

 

An example brand brief.

AI handles divergence by producing many options quickly. Humans handle convergence by deciding which option best serves users and brand goals. That division of labor is where creative advantage comes from now—not from generating raw assets, but from curating intelligently and knowing what makes one layout stronger than another. 

AI Design Tools Worth Exploring 

Rather than starting your tool search by chasing a specific brand, focus on categories that solve real workflow problems. 

Generative Image Tools: Adobe Firefly integrates into Creative Cloud for generative fill, background creation, and texture generation. Midjourney supports rapid conceptual exploration when you need to visualize abstract ideas quickly. Both have strengths—Firefly for production polish, Midjourney for conceptual divergence. 

Generative AI tools.

Layout Assistants Inside Platforms: Figma AI analyzes content blocks and suggests structural placement patterns based on design principles. That’s often what people mean when asking how does AI web design work in practice—the tool reads your content, understands hierarchy needs, and proposes layouts that respect proximity and visual weight. That saves time, but you still need to ensure those layouts adapt properly across breakpoints. AI suggests structure; it doesn’t validate responsive behavior automatically. 

Layout assistants.

UX Writing AI: Tools like ChatGPT help draft onboarding flows, empty states, error messages, and product explanations. You provide context about your product and tone, and the AI generates options that you refine. This is especially useful for non-writers on small teams who need functional copy quickly. 

An example wireframe.

Design System Scaling Tools: Some tools help propagate design tokens across files, update component variants systematically, and maintain consistency as design systems grow. These reduce manual maintenance overhead that slows teams down. 

Research Summarization Tools: AI accelerates theme extraction and clustering from qualitative research. Feed it interview transcripts or survey responses, and it groups similar feedback into themes. You still interpret what those themes mean strategically, but the initial organization happens faster. 

Design scaling tools.

Where AI Should Not Lead The Design Process 

AI lacks context beyond what you feed it. It doesn’t understand lived experience, cultural nuance, or accountability for its outputs. That creates clear boundaries for where it should support but not lead. 

Avoid delegating these decisions to AI:  

  • Brand strategy that defines who you are and why you matter 
  • Emotional storytelling that connects with users on a human level 
  • Cultural nuance that requires awareness of traditions and sensitivities  
  • Ethical tradeoffs where design choices affect vulnerable users  
  • Accessibility decisions that determine whether people with disabilities can use your product 

Accessibility is a good example of why AI needs human oversight. AI can flag contrast ratios, check color blindness simulations, and suggest alt text. But designing truly inclusive systems demands empathy, user testing with people who have disabilities, and deep knowledge of WCAG compliance. AI can assist; it cannot define inclusive strategy or make judgment calls about complex accessibility tradeoffs. 

The same applies to brand strategy. AI can generate tagline options or suggest positioning statements, but it cannot understand your company’s mission, competitive differentiation, or long-term vision without you providing that context—and even then, it cannot make strategic choices about where to compete or what to stand for. 

Building An AI-Supported Creative Workflow

Tools create speed, but systems create leverage. The difference matters because ad-hoc AI usage creates inconsistency, while structured workflows compound value over time. 

Start by defining when AI enters your process. Use it early for research synthesis, initial ideation, and variant generation. Avoid inserting it into final brand approvals or strategic presentations without clear human oversight. Create shared prompt libraries so your team develops consistent constraints that produce strong outputs. When someone writes a prompt that generates excellent results, save it. That institutional knowledge becomes valuable as your team scales. 

Add review checkpoints where every AI-assisted asset passes human critique before approval. This prevents generic work from slipping through. Someone with design judgment needs to evaluate whether the AI output serves the brief, matches brand standards, and solves the user problem effectively. 

For small teams, this might mean a simple checklist: “AI-generated assets reviewed for brand alignment, user clarity, and accessibility considerations.” For enterprise teams, document AI-assisted decisions for transparency, risk management, and consistency. Treat AI like a junior collaborator—fast, productive, capable of handling repetitive tasks, but requiring clear direction and quality review. 

Common Mistakes When Adding AI To Your Design Process 

Teams make predictable errors when integrating AI. The most common is accepting first outputs without iteration. AI drafts are starting points, not finished work. A layout generated in 30 seconds probably needs 30 minutes of refinement to match your brand and serve users well. 

Another mistake is skipping user validation. AI can generate beautiful interfaces that confuse real users. Always test AI-assisted designs with actual people before shipping. 

Treating AI drafts as final assets creates bland work. AI averages patterns from training data, which means outputs trend toward the middle. Your job is pushing past that average toward something distinctive. 

Letting brand consistency drift happens when different team members use AI with different constraints. Without shared guidelines, your visual identity fragments across projects. 

Ignoring intellectual property implications creates risk. Some AI-generated content may resemble copyrighted work from training data. Review outputs carefully and modify them enough that they’re clearly original. 

The fix for all of these is simple: use AI to explore options, validate with real users, and refine manually before considering anything done. 

The Future of AI in Design: Augmentation, Not Replacement

Creative roles are evolving rather than disappearing. As automation expands, human-centered skills—judgment, taste, strategic thinking, empathy—increase in value rather than decrease. 

Designers who thrive won’t resist AI tools or pretend they don’t exist. They’ll orchestrate AI effectively by directing it toward high-leverage tasks, setting strong constraints, and curating outputs intelligently. The competitive advantage shifts toward direction (knowing what to ask for), judgment (recognizing quality), and system-level thinking (building workflows that scale). 

Faster iteration cycles become normal. Teams that used to test three homepage variations now test thirty. That volume requires better curation skills, knowing which signals indicate a strong concept versus a mediocre one. Blended human-machine creativity becomes standard, where humans provide strategic direction and taste while AI handles speed and scale. 

The designers who struggle will be those who resist integration or, conversely, over-rely on AI without developing their own judgment. The ones who win will use AI to expand what’s possible while keeping human creativity and empathy at the center. 

FAQs

Is AI going to take over the graphic design industry? 

AI automates production-heavy tasks like resizing assets, generating variations, and creating placeholder content, but it can’t replace strategic thinking, brand leadership, or creative interpretation. Jobs evolve—designers spend less time on repetitive tasks and more time on strategy and creative direction. 

Will AI replace graphic designers? 

Design roles are changing while the profession itself remains strong. Designers who integrate AI effectively expand their capacity and iteration speed. The skills that become more valuable are judgment, curation, brand strategy, and user empathy—things AI cannot replicate. 

What are the benefits of AI web design?

AI accelerates layout generation, UX copy drafting, and behavioral analysis. It reduces bottlenecks that slow teams down and increases the number of experiments you can run. That means faster learning cycles and better-informed design decisions. 

Conclusion

AI is a multiplier that amplifies your capabilities rather than a mastermind that makes decisions. If you’re figuring out how to use AI for graphic design, start small and specific. Use it in research synthesis to save hours of manual theme clustering. Use it in early ideation to generate more options than you’d create manually. Measure where it reduces friction without hurting clarity or brand strength. 

Then build structure around what works—shared prompts, review checkpoints, clear documentation of AI-assisted outputs. 

Strong design still depends on human judgment and strategic direction. AI just increases the number of informed experiments you can run.  

The designers who win won’t resist AI. They’ll direct it intelligently while keeping human creativity and judgment at the center of every decision. 

Google Ads API enforces daily minimum budget for Demand Gen campaigns

Google will begin enforcing a minimum daily budget for Demand Gen campaigns starting April 1, 2026.

What’s happening: The Google Ads API will require a minimum daily budget of $5 USD (or local equivalent) for all Demand Gen campaigns. The change is designed to help campaigns move through the “cold start” phase with enough spend for Google’s models to learn and optimize effectively. The update will roll out as an unversioned API change, applying across all buying paths.

Technical details:

  • In API v21 and above, campaigns set below the threshold will trigger a BUDGET_BELOW_DAILY_MINIMUM error, with additional details available in the error metadata.
  • In API v20, advertisers will receive a generic UNKNOWN error, with the specific validation failure referenced in the unpublished error code field.

The rule applies when modifying budgets, start dates, or end dates in ways that push daily spend below the $5 floor — covering both daily and flighted budgets.

Impact on existing campaigns. Current Demand Gen campaigns running below the minimum will continue serving. However, any future edits to budgets or scheduling will require compliance with the new floor.

Why we care. For advertisers and developers, this adds a new compliance layer to campaign management workflows. Systems will need updating to catch and handle the new validation errors before deployment.

The bottom line. Google is standardizing a minimum investment threshold for Demand Gen — prioritizing performance stability, while requiring advertisers to adjust budgets and automation accordingly.

The AI engine pipeline: 10 gates that decide whether you win the recommendation

AI recommendations are inconsistent for some brands and reliable for others because of cascading confidence: entity trust that accumulates or decays at every stage of an algorithmic pipeline.

Addressing that reality requires a discipline that spans the full algorithmic trinity through assistive agent optimization (AAO). It also demands three structural shifts: the funnel moves inside the agent, the push layer returns, and the web index loses its monopoly.

The mechanics behind that shift sit inside the AI engine pipeline. Here’s how it works.

The AI engine pipeline: 10 gates and a feedback loop

Every piece of digital content passes through 10 gates before it becomes an AI recommendation. I call this the AI engine pipeline, DSCRI-ARGDW, which stands for:

  • Discovered: The bot finds you exist.
  • Selected: The bot decides you’re worth fetching.
  • Crawled: The bot retrieves your content.
  • Rendered: The bot translates what it fetched into what it can read.
  • Indexed: The algorithm commits your content to memory.
  • Annotated: The algorithm classifies what your content means across dozens of dimensions.
  • Recruited: The algorithm pulls your content to use.
  • Grounded: The engine verifies your content against other sources.
  • Displayed: The engine presents you to the user.
  • Won: The engine gives you the perfect click at the zero-sum moment in AI.

After “won” comes an 11th gate that belongs to the brand, not the engine: served. What happens after the decision feeds back into the AI engine pipeline as entity confidence, making the next cycle stronger or weaker.

DSCRI is absolute. Are you creating a friction-free path for the bots?

ARGDW is relative. How do you compare to your competition? Are you creating a situation in which you’re relatively more “tasty” to the algorithms?

Cascading confidence is multiplicative

Both sides of the AI engine pipeline are sequential. Each gate feeds the next.

Content entering DSCRI through the traditional pull path passes through every gate. Content entering through structured feeds or direct data push can skip some or all of the infrastructure gates entirely, arriving at the competitive phase with minimal attenuation.

Skipped gates are a huge win, so take that option wherever and whenever you can. You “jump the queue” and start at a later stage without the degraded confidence of the previous ones. That changes the economics of the entire pipeline, and I’ll come back to why.
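The multiplicative claim is easy to see with numbers. A toy illustration with invented per-gate confidences (the figures carry no empirical meaning; they only show how one weak gate caps the whole pipeline, and why skipping the infrastructure gates changes the economics):

```python
from math import prod

# Made-up pass confidences for the 10 gates. "rendered" is deliberately weak
# to show how a single leaky gate drags down the cascaded total.
gates = {
    "discovered": 0.95, "selected": 0.90, "crawled": 0.95, "rendered": 0.60,
    "indexed": 0.90, "annotated": 0.85, "recruited": 0.80,
    "grounded": 0.85, "displayed": 0.80, "won": 0.70,
}

# Traditional pull path: confidence multiplies through every gate.
full_path = prod(gates.values())

# Push path (structured feed / direct data push): the five DSCRI
# infrastructure gates are skipped, so only the competitive ARGDW
# gates attenuate confidence.
competitive_only = prod(
    gates[g] for g in ("annotated", "recruited", "grounded", "displayed", "won")
)

print(f"pull path: {full_path:.3f}")
print(f"push path: {competitive_only:.3f}")
```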

Why the four-step model falls short

The four-step model the SEO industry inherited from 1998 — crawl, index, rank, display — collapses five distinct infrastructure processes into “crawl and index” and five distinct competitive processes into “rank and display.”

It might feel like I’m overcomplicating this, but I’m not. Each gate has nuance that merits its standalone position. If you have empathy for the bots, algorithms, and engines, remove friction, and make the content digestible, they’ll move you through each gate cleanly and without losing speed.

Each gate is an opportunity to fail, and each point of potential failure needs a different diagnosis. The industry has been optimizing a four-room house when it lives in a 10-room building, and the rooms it never enters are the ones where the pipes leak the worst.

Most SEO advice operates at the selection, crawling, and rendering gates. Most GEO advice operates at “displayed” and “won,” which is why I’m not a fan of the term. 

Most teams aren’t yet working on annotation and recruitment, which are actually where the biggest structural advantages are created.

Three audiences you need to cater to and three acts you need to master

The AI engine pipeline has an entry condition — discovery — and nine processing gates organized in three acts of three, each with a different primary audience.

Act I: Retrieval (selection, crawling, rendering)

  • The primary audience is the bot, and the optimization objective is frictionless accessibility.

Act II: Storage (indexing, annotation, recruitment)

  • The primary audience is the algorithm, and the optimization objective is being worth remembering: verifiably relevant, confidently annotated, and worth recruiting over the competition.

Act III: Execution (grounding, display, won)

  • The primary audience is the engine and, by extension, the person using the engine, where the optimization objective is being convincing enough that the engine chooses and the person acts.

Frictionless for bots, worth remembering for algorithms, and convincing for people. Content must pass every machine gate and still persuade a human at the end.

The audiences are nested, not parallel. Content can only reach the algorithm through the bot and can only reach the person through the algorithm. You can have the most impeccable expertise and authority credentials in the world. If the bot can’t process your page cleanly, the algorithm will never see it.

This is the nested audience model: bot, then algorithm, then person. Every optimization strategy should start by identifying which audience it serves and whether the upstream audiences are already satisfied.

Discovery: The system learns you exist

Discovery is binary. Either the system has encountered your URL or it hasn’t. Fabrice Canel, principal program manager at Microsoft responsible for Bing’s crawling infrastructure, confirmed:

  • “You want to be in control of your SEO. You want to be in control of a crawler. And IndexNow, with sitemaps, enable this control.”

The entity home website, the canonical web property you control, is the primary discovery anchor. The system doesn’t just ask, “Does this URL exist?” It asks, “Does this URL belong to an entity I already trust?” Content without entity association arrives as an orphan, and orphans wait at the back of the queue.

The push layer — IndexNow, MCP, structured feeds — changes the economics of this gate entirely. A later piece in this series is dedicated to what changes when you stop waiting to be found.

Act I: The bot decides whether to fetch your content

Selection: The system decides whether your content is worth crawling

Not everything that’s discovered gets crawled. The system makes a triage decision based on countless signals, including entity authority, freshness, crawl budget, perceived value, and predicted cost.

Selection is where entity confidence first translates into a concrete pipeline advantage. The system already has an opinion about you before it crawls a single page. That opinion determines how many of your pages it bothers to look at.

Crawling: The bot arrives and fetches your content

Every technical SEO understands this gate. Server response time, robots.txt, redirect chains. Foundational, but not differentiating.

What most practitioners miss is that the bot doesn’t arrive in a vacuum. Canel confirmed that context from the referring page can be carried forward during crawling. With highly relevant links, the bot carries more context than it would from a link on an unrelated directory.

Rendering: The bot builds the page the algorithm will see

This is where everything changes and where most teams aren’t yet paying attention. The bot executes JavaScript if it chooses to, builds the Document Object Model (DOM), and produces the full rendered page. 

But here’s a question you probably haven’t considered: how much of your published content does the bot actually see after this step? If bots don’t execute your code, your content is invisible. More subtly, if they can’t parse your DOM cleanly, that content loses significant value.

Google and Bing have extended a favor for years: they render JavaScript. Most AI agent bots don’t. If your content sits behind client-side rendering, a growing proportion of the systems that matter simply never see it.

Representatives from both Google and Bing have also discussed the efforts they make to interpret messy HTML. Here’s one way to look at it: search was built on favors, and those favors aren’t being offered by the new players in AI.

Importantly, content lost at rendering can’t be recovered at any downstream gate. Every annotation, grounding decision, and display outcome depends on what survives rendering. If rendering is your weakest gate, it’s your F on the report card. Everything downstream inherits that grade.

Act II: The algorithm decides whether your content is worth remembering

This is where most brands are losing out because most optimization advice doesn’t address the next two gates. And remember, if your content fails to pass any single gate, it’s no longer in the race.

Indexing: Where HTML stops being HTML

Rendering produces the full page as the bot sees it. Indexing then transforms that DOM into something the system can store. Two things happen here that the industry often misses:

  • The system strips the navigation, header, footer, and sidebar — elements that repeat across multiple pages on your site. These aren’t stored per page. The system’s primary goal is to identify the core content. This is why I’ve talked about the importance of semantic HTML5 for years. It matters at a mechanical level: <nav>, <header>, <footer>, <aside>, <main>, and <article> tell the system where to cut. Without semantic markup, it has to guess. Gary Illyes confirmed at BrightonSEO in 2017, possibly 2018, that this was one of the hardest problems they had at the time.
  • The system chunks and converts. The core content is broken into blocks or passages of text, images with associated text, video, and audio. Each chunk is transformed into a proprietary internal format. Illyes described the result as something like a folder with subfolders, each containing a typed chunk. The page becomes a hierarchical structure of typed content blocks.
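The mechanical role of semantic markup can be sketched in a few lines. This toy extractor assumes nothing about any real engine's implementation; it only shows why explicit boilerplate tags make the cut points unambiguous instead of forcing the system to guess:

```python
# Toy sketch of the "strip, then chunk" step using the stdlib HTML parser.
# With semantic HTML5, boilerplate boundaries are explicit tags.
from html.parser import HTMLParser

BOILERPLATE = {"nav", "header", "footer", "aside"}

class CoreContentExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.skip_depth = 0          # >0 while inside a boilerplate element
        self.chunks: list[str] = []  # surviving core-content text blocks

    def handle_starttag(self, tag, attrs):
        # Enter skip mode at a boilerplate tag; track nesting while inside one.
        if tag in BOILERPLATE or self.skip_depth:
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        text = data.strip()
        if text and not self.skip_depth:
            self.chunks.append(text)

page = """<body><header>Site name</header><nav>Home | Blog</nav>
<main><article><h1>Core headline</h1><p>Core paragraph.</p></article></main>
<footer>Copyright</footer></body>"""

extractor = CoreContentExtractor()
extractor.feed(page)
print(extractor.chunks)  # only the <main>/<article> text survives the strip
```

Without those tags, the same extraction would need heuristics, and every heuristic miss is lost conversion fidelity.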

I call this conversion fidelity: how much semantic information survives the strip, chunk, convert, and store sequence. Rendering fidelity (Gate 3) measures whether the bot could consume your content. Conversion fidelity (Gate 4) measures whether the system preserved it accurately when filing it away.

Both fidelity losses are irreversible, but they fail differently. Rendering fidelity fails when JavaScript doesn’t execute or content is too difficult for the bot to parse. Conversion fidelity fails when the system can’t identify which parts of your page are core content, when your structure doesn’t chunk cleanly, or when semantic relationships between elements don’t survive the format conversion.

Something we often overlook is that even after a successful crawl, indexing isn’t guaranteed. Content that passes through crawl and render may still not be indexed.

That might sound bad enough, but here’s a distinction that should concern you: indexing and annotation are separate processes. Content may be indexed but poorly annotated — stored in the system but semantically misclassified. Non-indexed content is invisible. Misannotated content actively confuses the system about who you are, which can be worse.

Annotation: Where entity confidence is built or broken

This is the gate most of the industry has yet to address.

Think of annotations as sticky notes on the indexed “folders” created at the indexing gate. Indexing algorithms add multiple annotations to every piece of content in the index.

I identified 24 annotation dimensions I felt confident sharing with Canel. When I asked him, his response was, “Oh, there is definitely more.” 

Those 24 dimensions were organized across five annotation layers: 

  • Gatekeepers (scope classification).
  • Core identity (semantic extraction).
  • Selection filters (content categorization).
  • Confidence multipliers (reliability assessment).
  • Extraction quality (usability evaluation).

There are certainly more layers, and each layer likely includes more dimensions than I’ve mapped. Hundreds, probably thousands. This is an open model. The community is invited to map the dimensions I’ve missed.
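To make the "sticky notes" idea concrete, here is one way to model annotation records grouped by those five layers. The layer and dimension names follow the taxonomy above; the values and confidence figures are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    layer: str         # one of the five annotation layers listed above
    dimension: str     # e.g. "language", "entity", "author_expertise"
    value: object
    confidence: float  # the system's confidence in this classification

@dataclass
class IndexedChunk:
    chunk_id: str
    annotations: list[Annotation] = field(default_factory=list)

    def by_layer(self, layer: str) -> list[Annotation]:
        return [a for a in self.annotations if a.layer == layer]

chunk = IndexedChunk("page-42/chunk-3")
chunk.annotations += [
    Annotation("gatekeepers", "language", "en", 0.99),
    Annotation("core_identity", "entity", "ExampleCo", 0.72),
    Annotation("confidence_multipliers", "author_expertise", "high", 0.64),
]

# A chunk can be indexed yet effectively misannotated: low-confidence
# classifications are the "semantically misclassified" failure mode.
shaky = [a for a in chunk.annotations if a.confidence < 0.7]
print([a.dimension for a in shaky])
```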

Annotation is where the system decides the facts: 

  • What your content is about.
  • Where it fits into the wider world.
  • How useful it is.
  • Which entity it belongs to.
  • What claims it makes.
  • How those claims relate to claims from other sources. 

Credibility signals — notability, experience, expertise, authority, trust, transparency — are evaluated here. Topical authority is assessed here, too, along with much more.

Annotation operates on what survives rendering and conversion. If critical information was lost at either gate, the annotation system is working with degraded raw material. It annotates what the annotation engine received, not what you originally published.

Canel confirmed a principle I suggested that should reshape how we think about this gate: “The bot tags without judging. Filtering happens at query time.” Annotation quality determines your eligibility for every downstream triage.

I have a full piece coming on annotation alone. For now, annotation is the gate where most brands silently lose and the one most worth working on.

Recruitment: Where the algorithmic trinity decides whether to absorb you

This is the first explicitly competitive gate. After annotation, the pipeline feeds into three systems simultaneously. 

  • Search engines recruit content for results pages (the document graph). 
  • Knowledge graphs recruit structured facts for entity representation (the entity graph). 
  • Large language models recruit patterns for training data and grounding retrieval (the concept graph).

Before recruitment, the system found, crawled, stored, and classified your content. At recruitment, it decides whether your content is worth keeping over alternatives that serve the same purpose.

Being recruited by all three elements of the algorithmic trinity gives you a disproportionate advantage at grounding because the grounding system can find you through multiple retrieval paths, and at display because there are multiple opportunities for visibility.

Recruitment is the structural advantage that separates brands with consistent AI visibility from brands that appear inconsistently.

Act III: The engine presents and the decision-maker commits

Grounding: Where AI checks its confidence in the content against real-time evidence

This is the gate that separates traditional search from AI recommendations.

Ihab Rizk, who works on Microsoft’s Clarity platform, described the grounding lifecycle this way:

  • The user asks a question. 
  • The LLM checks its internal confidence. If it’s insufficient, it sends cascading queries, multiple angles of intent designed to triangulate the answer, which many people call fan-out queries. 
  • Bots are dispatched to scrape selected pages in real time. 
  • The answer is generated from a combination of training data and fresh retrieval.
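That lifecycle can be sketched as a simple control flow. The confidence threshold, the fan-out expansion, and the return shape are all invented placeholders, not any engine's real behavior:

```python
# Schematic sketch of the grounding lifecycle: answer from training data when
# internal confidence is high; otherwise fan out queries and retrieve fresh.
CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff, purely illustrative

def fan_out(question: str) -> list[str]:
    # Cascading queries: multiple angles of intent that triangulate the answer.
    return [question, f"{question} comparison", f"{question} reviews"]

def answer(question: str, internal_confidence: float) -> dict:
    if internal_confidence >= CONFIDENCE_THRESHOLD:
        return {"source": "training data", "queries": []}
    queries = fan_out(question)
    # Bots would be dispatched here to scrape the selected pages in real time.
    # Pages outside the candidate pool (not indexed, poorly annotated, low
    # entity confidence) are never fetched at all.
    return {"source": "training data + fresh retrieval", "queries": queries}

print(answer("best running shoes", internal_confidence=0.4))
```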

But grounding isn’t just search results, as many people believe. The other two technologies in the algorithmic trinity play a role.

The knowledge graph is used to ground facts. AI Overviews explicitly showed information grounded in the knowledge graph. It’s reasonable to assume specialized small language models are used to ground user-facing large language models.

The takeaway is that your content’s performance from discovery through recruitment determines whether your pages are in the candidate pool when grounding begins. If your content isn’t indexed, isn’t well annotated, or isn’t associated with a high-confidence entity, it won’t be in the retrieval set for any part of the trinity. The engine will ground its answer on someone else’s content instead.

You can’t optimize for grounding if your content never reaches the grounding stage.

Display: The output of the pipeline

Display is where most AI tracking tools operate. They measure what AI says about you. But by the time you’re measuring display, the decisions were already made upstream, from discovery through grounding.

Brands with high cascading confidence appear consistently. Brands with low cascading confidence appear intermittently, the same phenomenon Rand Fishkin demonstrated.

Display is where AI meets the user. It also covers the acquisition funnel, which is easy to understand and meaningful for marketers. This is where most businesses focus because it’s visible and sits just before the click. I’ll write a full article on that later in this series.

Won: The moment the decision-maker commits

Won is the terminal processing gate in the AI engine pipeline. Ten gates of processing, three acts of audience satisfaction, and it comes down to this: Did the system trust you enough to commit?

The accumulated confidence at this gate is called “won probability,” the system’s calculated likelihood that committing to you is the right decision. Three resolutions are possible, and they form a spectrum. To understand why that spectrum matters, you need to understand the 95/5 rule.

Professor John Dawes at the Ehrenberg-Bass Institute demonstrated that at any given moment, only about 5% of potential buyers are actively in-market. The other 95% aren’t ready to purchase. You sell to the 5%, but the real job of marketing is staying top of mind for the other 95% so that when they decide to move to purchase, on their schedule, not yours, you’re the brand they think of.

The three scenarios that follow show how AI takes over the job of being top of mind at the critical moment for the 95%. I call this top of algorithmic mind.

  • The imperfect click: The person browses a list of options, pogo-sticks between results, and decides. Traditional search and what Google called the zero moment of truth. The system doesn’t know who is ready. It shows everyone the same list and hopes. The 95/5 efficiency is low. You’re hitting and hoping, and so is the engine.
  • The perfect click: The AI recommends one solution and the person takes it. I call this the zero-sum moment in AI. This is where we are right now with assistive engines like ChatGPT, Perplexity, and AI Mode. The system has filtered for intent, context, and readiness. It presents one answer to a person moving from the 95% into the 5% with much higher precision.
  • The agential click: The agent commits, either after pausing for human approval, “Shall I book this?” or autonomously. The agent caught the moment of readiness, did the work, and closed it. Maximum precision. This is the ultimate solution to the 95/5 problem: AI catches the exact moment and acts.

The Won Spectrum

Search won’t disappear. Most people will always want to browse some of the time. Window shopping is fun, and emotionally charged decisions aren’t something people will always delegate.

The trajectory, however, moves from imperfect to perfect to agential. Brands need to optimize for all three outcomes on that spectrum, starting now. Optimizing for agents should already be part of your strategy, as should optimizing for assistive engines and search engines. AAO covers them all.

Search engines, AI assistive engines, and assistive agents are your untrained salesforce. Your job is to train them well enough that you’re top of algorithmic mind at the moment the 95% become the 5%, and the AI either:

  • Offers you as an option.
  • Recommends you as the best solution.
  • Actively makes the conversion for you.

Dig deeper: SEO in the age of AI: Becoming the trusted answer

Served: The pipeline remembers

After conversion, the brand takes over. You should optimize the post-won feedback gate. The processing pipeline, the DSCRI-ARGDW spine, gets you to the decision. Served sits outside that spine as the gate that closes the loop, turning the line into a circle.

Every “won” that produces a positive outcome strengthens the next cycle’s cascading confidence. Every “won” that produces a negative outcome weakens it. Ten gates get you to the decision. The 11th, served, determines whether the decision repeats and your advantage compounds.

This is where the business lives. Acquisition without retention is a leak, both directly and indirectly through the AI engine pipeline feedback loop.

Brands that engineer their post-won experience to generate positive evidence, reviews, repeat engagement, low return rates, and completion signals, build a flywheel. Brands that neglect post-won burn confidence with every cycle.

Diagnosing failure in the pipeline

The three acts describe who you’re speaking to: the bot, the algorithm, and the engine presenting to a person. The two phases describe what kind of test you’re taking.

  • Phase 1: Infrastructure, discovery through indexing
    • Absolute tests. You either pass or fail. A page that can’t be rendered doesn’t get partially indexed. Infrastructure gates are binary: pass or stall.
  • Phase 2: Competitive, annotation through won
    • Relative tests. Winning depends not just on how good your content is but on how good the competition is at the same gate.

The practical implication is infrastructure first, competitive second. If your content isn’t being found, rendered, or indexed correctly, fixing annotation quality is wasted effort. You’re decorating a room the building inspector hasn’t cleared.
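The distinction can be modeled directly: infrastructure gates contribute a 0 or a 1 to the multiplicative chain, while competitive gates contribute a fraction. A minimal sketch with invented scores:

```python
# Illustrative model: infrastructure gates are pass/fail (0 or 1),
# competitive gates contribute a fractional confidence score.
INFRA = ["discovered", "selected", "crawled", "rendered", "indexed"]
COMPETITIVE = ["annotated", "recruited", "grounded", "displayed", "won"]

def surviving_signal(gates):
    """Multiply every gate's confidence; one zero kills the whole chain."""
    signal = 1.0
    for name in INFRA + COMPETITIVE:
        signal *= gates[name]
    return signal

site = {name: 1.0 for name in INFRA}           # every infrastructure check passes
site.update({name: 0.7 for name in COMPETITIVE})
print(f"{surviving_signal(site):.1%}")          # 0.7 ** 5 = 16.8%

site["rendered"] = 0.0                          # one infrastructure failure
print(f"{surviving_signal(site):.1%}")          # 0.0%: downstream work is wasted
```

This is the "infrastructure first" rule in miniature: no amount of competitive-phase quality recovers a zero in the infrastructure phase.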

In practice, brands tend to fail in three predictable ways.

  • Opportunity cost (Act I: Bot failures)
    • Your content isn’t in the system, so you have zero opportunity. Cheapest to fix, most expensive to ignore.
  • Competitive loss (Act II: Algorithm failures) 
    • Your content is in the system, but competitors’ content is preferred. The brand believes it’s doing everything right while AI systems consistently choose a competitor at recruitment, grounding, and display.
  • Conversion leak (Act III: Engine failures)
    • Your content is presented, but the system hedges or fumbles the recommendation. In short, you lose the sale.

The AI engine pipeline - DSCRI-ARGDW-Sv

Every gate you pass still costs you signal

In 2019, I published “How Google Universal Search Ranking Works: Darwinism in Search,” based on a direct explanation from Google’s Gary Illyes of how Google calculates ranking bids by multiplying individual factor scores. A zero on any factor kills the entire bid.

Darwin’s natural selection works the same way: fitness is the product across all dimensions, and a single zero kills the organism. Brent D. Payne made this analogy: “Better to be a straight C student than three As and an F.” 

As with Google’s bidding system, cascading confidence is multiplicative, not additive. Here’s what that means:

Per-gate confidence    Surviving signal at the won gate
90%                    34.9%
80%                    10.7%
70%                    2.8%
60%                    0.6%
50%                    0.1%

Illustrative math, not a measurement. The principle is what matters: strengths don’t compensate for weaknesses in a multiplicative chain.

A single weak gate destroys everything. Nine gates at 90% plus one at 50% drops you from 34.9% to 19.4%. If that gate drops to 10%, it kills the surviving signal entirely. A near-zero anywhere in a multiplicative chain makes the whole chain near-zero.
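The table is just a uniform per-gate confidence raised to the 10th power, and the nine-plus-one example is the same product with one term swapped. A few lines of Python reproduce both:

```python
# Reproduce the illustrative multiplicative math from the table above.
def surviving(confidences):
    """Product of per-gate confidences: the signal that survives all gates."""
    signal = 1.0
    for c in confidences:
        signal *= c
    return signal

for per_gate in (0.9, 0.8, 0.7, 0.6, 0.5):
    print(f"{per_gate:.0%} per gate -> {surviving([per_gate] * 10):.1%}")

# Nine gates at 90% and one at 50%: the single weak gate dominates.
print(f"{surviving([0.9] * 9 + [0.5]):.1%}")   # 19.4%
```

Swap the 50% term for 10% and the product collapses toward zero, which is the near-zero-anywhere point in code form.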

This is competitive math. If your competitors are all at 50% per gate and you’re at 60%, you win: 0.6% surviving signal against their 0.1%. Not because you’re excellent, but because you’re less bad. 

Most brands aren’t at 90%. The worse your gates are, the bigger the gap a small improvement opens. Here’s an example.

Gates: D (Discovered), S (Selected), C (Crawled), R (Rendered), I (Indexed), A (Annotated), Re (Recruited), G (Grounded), Di (Displayed), W (Won)

Brand        D    S    C    R    I    A    Re   G    Di   W    Surviving signal
Your Brand   75%  80%  70%  85%  75%  5%   80%  70%  75%  80%  0.4%
Competitor   65%  60%  65%  70%  60%  60%  65%  60%  65%  60%  1.0%

I chose annotated as the “F” grade in this example for demonstrative purposes.
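Because the chain is just multiplication, the comparison is easy to recompute from the per-gate values in the table:

```python
# Recompute both brands' surviving signal from the per-gate values above.
your_brand = [0.75, 0.80, 0.70, 0.85, 0.75, 0.05, 0.80, 0.70, 0.75, 0.80]
competitor = [0.65, 0.60, 0.65, 0.70, 0.60, 0.60, 0.65, 0.60, 0.65, 0.60]

def surviving(scores):
    signal = 1.0
    for s in scores:
        signal *= s
    return signal

brand_signal = surviving(your_brand)   # the single 5% gate caps everything
rival_signal = surviving(competitor)   # no gate below 60%, so the product holds up
# The straight-C competitor out-survives the near-straight-A brand with one F.
print(f"Your brand: {brand_signal:.2%}, competitor: {rival_signal:.2%}")
```

Nine strong gates cannot buy back the one near-zero: that is the Payne analogy made concrete.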

Annotation is the phase-boundary gate. It’s the hinge of the whole pipeline. If the system doesn’t understand what your content is, nothing downstream matters.

Applying this Darwinian principle across a 10-gate pipeline, where confidence is measurable at every transition, is my diagnostic model. I recently filed a patent for the mechanical implementation.

Improving gates versus skipping them

There are two ways to increase your surviving signal through the pipeline, and they aren’t equal.

Improving your gates

Better rendering, cleaner markup, faster servers, and schema help the system classify your content more accurately. These are real gains, single-digit to low double-digit percentage improvements in surviving signal.

For many brands and SEOs, this is maintenance rather than transformation. It matters, and most brands aren’t doing it well, but it’s incremental.

Skipping gates entirely

Structured feeds, such as Google Merchant Center and the OpenAI Product Feed Specification, bypass discovery, selection, crawling, and rendering altogether, delivering your content to the competitive phase with minimal attenuation.

MCP (Model Context Protocol) connections skip even further, making data available from recruitment onward with triple-digit percentage advantages over the pull path.

If you’re only improving gates, you’re leaving an order of magnitude on the table.

The highest-value target is always the weakest gate

Improving your best gate from 95% to 98% is nearly invisible in the pipeline math. Improving your worst gate from 50% to 80% transforms your entire surviving signal. That’s the Darwinian principle at work: fitness is multiplicative, the weakest dimension determines the outcome, and strengths elsewhere can’t compensate.
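The claim checks out with invented numbers: with nine gates at 95% and one at 50%, lifting the best gate to 98% barely moves the product, while lifting the worst gate to 80% raises it by 60% in relative terms.

```python
# Hypothetical pipeline: nine strong gates and one weak one.
def surviving(gates):
    signal = 1.0
    for g in gates:
        signal *= g
    return signal

baseline = [0.95] * 9 + [0.50]
best_gate_fix = [0.98] + [0.95] * 8 + [0.50]   # best gate 95% -> 98%
worst_gate_fix = [0.95] * 9 + [0.80]           # worst gate 50% -> 80%

print(f"baseline:      {surviving(baseline):.2%}")
print(f"improve best:  {surviving(best_gate_fix):.2%}")   # ~3% relative gain
print(f"improve worst: {surviving(worst_gate_fix):.2%}")  # ~60% relative gain
```

In a multiplicative chain, the relative gain from improving a gate is simply new score over old score, which is why the weakest gate is always the highest-value target.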

Most teams are optimizing the wrong gate. Technical SEO, content marketing, and GEO each address different gates. Each is necessary, but none is sufficient because the pipeline requires all 10 to perform. Teams pouring budget into the two or three gates they understand are ignoring the ones that are actually killing their signal.

Then there’s the single-system mistake. At recruitment, the pipeline feeds into three graphs, the algorithmic trinity. Missing one graph means one entire retrieval path doesn’t include you.

You can be perfectly optimized for search engine recruitment and completely absent from the knowledge graph and the LLM training corpus. In a multiplicative system, that gap compounds with every cycle.

Most of the AI tracking industry is measuring outputs without diagnosing inputs, tracking what AI says about you at display when the decisions were already made upstream. That’s like checking your blood pressure without diagnosing the underlying condition.

The tools to do this properly are emerging. Authoritas, for example, can inspect the network requests behind ChatGPT to understand which content is actually formulating answers. But the real work is at the gates upstream of display, where your content either passed or stalled before the engine ever opened its mouth.


Audit your pipeline: Earliest failure first

The correct audit order is pipeline order. Start at discovery and work forward.

If content isn’t being discovered, nothing downstream matters. If it’s discovered but not selected for crawling, rendering fixes are wasted effort. If it’s crawled but renders poorly, every annotation and grounding decision downstream inherits that degradation.

This is your new plan: Find the weakest gate. Fix it. Repeat.
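The plan can be written as a loop. Gate names follow the DSCRI-ARGDW spine; the scores, the 10% floor, and the `next_fix` helper are all invented for illustration.

```python
# Sketch of the audit loop: pipeline order first, then weakest gate.
PIPELINE = ["discovered", "selected", "crawled", "rendered", "indexed",
            "annotated", "recruited", "grounded", "displayed", "won"]

def next_fix(scores, floor=0.10):
    """Earliest near-zero gate wins; otherwise target the weakest gate."""
    for gate in PIPELINE:              # pipeline order is the audit order
        if scores[gate] < floor:
            return gate                # a hard blocker outranks everything downstream
    return min(PIPELINE, key=lambda g: scores[g])

scores = {g: 0.75 for g in PIPELINE}
scores["rendered"] = 0.05              # an infrastructure blocker
scores["annotated"] = 0.40             # a weak competitive gate
print(next_fix(scores))                # rendered comes first

scores["rendered"] = 0.80              # after that fix...
print(next_fix(scores))                # ...annotated becomes the target
```

Each pass through the loop raises the floor of the whole product, which is why the fixes compound rather than merely add up.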

The inconsistency Fishkin documented is a training deficit. The AI engine pipeline is trainable. The training compounds. The walled gardens increase their lock-in with every cycle.

The brand that trains its AI salesforce better than the competition doesn’t just win the next recommendation. It makes the next one easier to win, and the one after that, until the gap widens to the point where competitors can’t close it without starting from scratch.

Without entity understanding, nothing else in this pipeline works. The system needs to know who you are before it can evaluate what you publish. Get that right, build from the brand up through the funnel, and the compounding does the rest.

Next: The five infrastructure gates the industry compressed into ‘crawl and index’

The next piece opens the infrastructure gates in full: rendering fidelity, conversion fidelity, JavaScript as a favor, not a standard, structured data as the native language of the infrastructure phase, and the investment comparison that puts numbers on improving gates versus skipping them entirely. 

The sequential audit shows where your content is dying before the algorithm ever sees it, and once you see the leaks, you can start plugging them in the order that moves your surviving signal the most.

This is the third piece in my AI authority series. The first, “Rand Fishkin proved AI recommendations are inconsistent – here’s why and how to fix it,” introduced cascading confidence. The second, “AAO: Why assistive agent optimization is the next evolution of SEO” named the discipline. 
