Posts

Why does having insights across multiple LLMs matter for brand visibility?

Search today looks very different from what it did even a few years ago. Users are no longer browsing through SERPs to make up their own minds; instead, they are asking AI tools for conclusions, summaries, and recommendations. This shift changes how visibility is earned, how trust is formed, and how brands are evaluated during discovery. In AI-driven search, large language models interpret information, decide what matters, and present a narrative on behalf of the user.

Key takeaways

  • Search has evolved; users now rely on AI for conclusions instead of traditional SERPs
  • Conversational AI serves as a new discovery layer where users expect quick answers and insights
  • Brands must navigate varied interpretations of their presence across different LLMs
  • Yoast AI Brand Insights helps track brand mentions and identify gaps in AI visibility across models
  • Understanding LLM brand visibility is crucial for modern brand strategy and perception

The rise of conversational AI as a discovery layer

“Assistant engines and wider LLMs are the new gatekeepers between our content and the person discovering that content – our potential new audience.” — Alex Moss

Search is no longer confined to typing queries into a search engine and scanning a list of links. Today’s discovery journey frequently begins with a conversation, whether that’s a typed question in a chatbot, a voice prompt to an AI assistant, or an embedded AI feature inside a platform people use every day.

This shift has made conversational AI a new layer of discovery, where users expect direct answers, recommendations, and curated insights that help them make decisions and build brand perception more quickly and confidently.

Discovery is happening everywhere

Users are now encountering AI-powered discovery across a range of interfaces:

AI chat interfaces

Tools like ChatGPT allow users to ask open-ended questions and follow up in a conversational manner. These interfaces interpret intent and tailor responses in a way that feels natural, making them a go-to for exploratory search.

Also read: What is search intent and why is it important for SEO?

Answer engines

Platforms such as Perplexity synthesize information from multiple sources and often cite them. They act as research helpers, offering concise summaries or explanations to complex queries.

Embedded AI experiences

AI is increasingly built directly into search and discovery environments that people already use. Examples include AI-assisted summaries within search results, such as Google’s AI Overviews, as well as AI features embedded in browsers, operating systems, and apps. In these moments, users may not even think of themselves as “using AI,” yet AI is already influencing what information is surfaced first and how it is interpreted.

This broad distribution of AI discovery surfaces means users now expect accessibility of information regardless of where they are, whether in a chat, an app, or embedded in the places they work, shop, and explore online.

How people are using AI in their day-to-day discovery

Users interact with conversational AI for a wide range of purposes beyond traditional search. These models increasingly guide decisions, comparisons, and exploration, often earlier in the journey than classic search engines.

Here are some prominent ways people use LLMs today:

Product comparisons

ChatGPT gives a detailed brand comparison

Rather than visiting multiple sites and aggregating reviews, 54% of users ask AI to compare products or services directly, for example, “How does Brand A compare to Brand B?” or “What are the pros and cons of X vs Y?” AI synthesizes information into a concise summary that often feels more efficient than browsing search results.

“Best tools for…” queries

Result by ChatGPT for “best crm software for smbs.”

Did you know 47% of consumers have used AI to help make a purchase decision?

AI users frequently ask for ranked suggestions or curated lists such as “best SEO tools for small businesses” or “top content optimization software.” These queries serve as discovery moments, where brands can be suggested alongside context and reasoning.

Trust and validation checks

Many users prompt AI models to validate decisions or confirm perceptions, for example, “Is Brand X reputable?” or “What do people say about Service Y?” AI responses blend sentiment, context, and summarization into one narrative, affecting how trust is formed.

Also read: Why is summarizing essential for modern content?

Idea generation and research exploration

A study by Yext found that 42% of users employ AI for early-stage exploration, such as brainstorming topics, gathering potential search intents, or understanding broad categories before narrowing down specifics. AI user archetypes range from creators who use AI for ideation to explorers seeking deeper discovery.

Local discovery and service search

ChatGPT recommendations for “best cheesecake places in Lucknow, India.”

AI is also used for local searches. For example, many users turn to AI tools to research local products or services, such as finding nearby businesses, comparing local options, or understanding community reputations. In a recent AI usage study by Yext, 68% of consumers reported using tools like ChatGPT to research local products or services, even as trust in AI for local information remains lower than traditional search.

In each of these moments, conversational AI doesn’t just surface brands; it frames them by summarizing strengths, weaknesses, use cases, and comparisons in a single response. These narratives become part of how users interpret relevance, trust, and fit far earlier in the decision-making process than in traditional search.

Not all LLMs interpret brands the same way

As conversational AI becomes a discovery layer, one assumption often sneaks in quietly: if your brand shows up well in one AI model, it must be showing up everywhere. In reality, that’s rarely the case. Large language models interpret, retrieve, and present brand information differently, which means relying on a single AI platform can give a very incomplete picture of your brand’s visibility.

To understand why, it helps to look at how some of the most widely used models approach answers and brand mentions.

How ChatGPT interprets brands

ChatGPT is often used as a general-purpose assistant. People turn to it for explanations, comparisons, brainstorming, and decision support. When it mentions brands, it tends to focus on contextual understanding rather than explicit sourcing. Brand mentions are frequently woven into explanations, recommendations, or summaries, sometimes without clear attribution.

From a visibility perspective, this means brands may appear:

  • As examples in broader explanations
  • As recommendations in “best tools” or comparison-style prompts
  • As part of a narrative rather than a cited source

The challenge is that brand mentions can feel correct and authoritative, while still being outdated, incomplete, or inconsistent, depending on how the prompt is phrased.

How Gemini interprets brands

Gemini is deeply connected to Google’s ecosystem, which influences how it understands and surfaces brand information. It leans more heavily on entities, structured data, and authoritative sources, and its outputs often reflect signals familiar to traditional SEO teams.

For brands, this means:

  • Visibility is closely tied to how well the brand is understood as an entity
  • Clear, consistent information across the web plays a bigger role
  • Mentions often align more closely with established sources

Gemini can feel more predictable in some cases, but that predictability depends on strong foundational signals and accurate brand representation across trusted platforms.
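To illustrate the entity signals described above, here is a minimal sketch of schema.org Organization markup, generated as JSON-LD from Python. The brand name, URLs, and description are placeholders, not a real brand; on a live site, the resulting JSON would be embedded in a script tag of type application/ld+json.

```python
import json

# A minimal, hypothetical Organization entity in schema.org JSON-LD.
# Every value here is a placeholder for illustration only.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "https://www.example.com",
    "logo": "https://www.example.com/logo.png",
    # sameAs links tie the entity to trusted profiles across the web,
    # which supports the consistency signals discussed above.
    "sameAs": [
        "https://www.linkedin.com/company/example-brand",
        "https://en.wikipedia.org/wiki/Example_Brand",
    ],
    "description": "Example Brand makes content optimization software.",
}

print(json.dumps(organization, indent=2))
```

Consistent markup like this, matched by consistent descriptions on other trusted platforms, is the kind of foundational signal the section above refers to.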

How Perplexity interprets brands

Perplexity positions itself as an answer engine rather than a general assistant. It emphasizes citations and source-backed responses, which makes it popular for research and comparison queries. When brands appear in Perplexity answers, they are often tied directly to cited articles, reviews, or documentation.

This creates a different visibility dynamic:

  • Brands may be surfaced only if they are referenced in cited sources
  • Freshness and topical relevance matter more
  • Competitors with stronger editorial or PR coverage may appear more often

Here, brand presence is tightly coupled with external content and how frequently that content is used as a reference.

How these models differ at a glance

  • ChatGPT: brands surfaced through contextual mentions within explanations and recommendations; visibility influenced by prompt phrasing, training data, and general relevance
  • Gemini: brands surfaced through entity-driven mentions aligned with authoritative sources; visibility influenced by structured data, brand consistency, and trusted signals
  • Perplexity: brands surfaced through citation-based mentions tied to sources; visibility influenced by content coverage, freshness, and external references

Why do brands need insights across multiple LLMs?

Once you see how differently large language models interpret brands, one thing becomes clear: looking at just one AI model gives you an incomplete picture. AI-driven discovery does not produce a single, consistent version of your brand. It produces multiple interpretations, shaped by the model, its data sources, and users’ interactions with it.

Must read: When AI gets your brand wrong: Real examples and how to fix it

Therefore, tracking your brand across multiple LLMs is essential because:

Brand visibility is fragmented by default

Across different LLMs, the same brand can show up in very different ways:

  • Correctly represented in one model, where information is accurate and well-contextualized
  • Completely missing in another, even for relevant queries
  • Partially outdated or misrepresented in a third, depending on the sources being used

This fragmentation happens because each model processes and prioritizes information differently. Without visibility across models, it’s easy to assume your brand is ‘covered’ when, in reality, it may only be visible in one corner of the AI ecosystem.

Different audiences use different AI tools

AI usage is not concentrated in a single platform. People choose tools based on intent:

  • Some use conversational assistants for exploration and ideation
  • Others rely on citation-led answer engines for research
  • Many encounter AI passively through search or embedded experiences

If your brand appears in only one environment, you are effectively visible only to a subset of your audience. This mirrors challenges SEO teams already recognize from traditional search, where performance varies by device, location, and search feature. The difference is that with AI, these variations are less obvious and more challenging to track without dedicated insights.

Blind spots create real business risks

Limited visibility across LLMs doesn’t just affect awareness; it also limits what teams can learn. Over time, it can lead to:

  • Inconsistent brand narratives, where AI tools describe your brand differently depending on where users ask
  • Missed demand, especially for comparison or “best tools for” queries
  • Competitors recommended instead, simply because they are more visible or better understood by a specific model

These outcomes are rarely intentional, but they can quietly influence brand perception and decision-making long before users reach your website.

All of this points to one thing: a broader, multi-model view helps build a more complete understanding of brand visibility.

The challenge: LLM visibility is hard to measure

As brands start paying attention to how they appear in AI-generated content, a new problem becomes obvious: LLM visibility doesn’t behave like traditional search visibility. The signals are fragmented, opaque, and constantly changing, which makes tracking and understanding brand presence across AI models far more complex than tracking rankings or traffic.

Below are some key challenges brand marketers might face when trying to understand how their brand appears to large language models.

1. Lack of visibility across AI platforms

Different LLMs, such as ChatGPT, Gemini, and Perplexity, rely on different data sources, retrieval methods, and citation logic. As a result, the same brand may be mentioned prominently in one model, inconsistently in another, or not at all elsewhere.

Without a unified view, it’s difficult to answer basic questions like where your brand shows up, which AI tools mention it, and where the gaps are. This fragmentation makes it easy to overestimate visibility based on a single platform.

2. No clear insight into how AI describes your brand

AI models often mention brands as part of explanations, comparisons, or recommendations, but traditional analytics tools don’t capture how those brands are described. Teams lack visibility into tone, context, sentiment, or whether mentions are positive, neutral, or misleading.

This makes it hard to understand whether AI is reinforcing your intended brand positioning or subtly reshaping it in ways you can’t see.

3. No structured way to measure change over time

AI-generated answers are inherently dynamic. Small changes in prompts, updates to models, or shifts in underlying data can all influence how brands appear. Without consistent, longitudinal tracking, it’s nearly impossible to tell whether visibility is improving, declining, or simply fluctuating.

One-off checks may offer snapshots, but they don’t reveal trends or patterns that matter for long-term strategy.
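As a toy illustration of what longitudinal tracking involves (not any particular tool’s method), the sketch below re-samples the same prompt set over time and computes the share of answers that mention a brand in each period. The prompts, answers, and brand names are all invented:

```python
from collections import defaultdict

def mention_rate_by_period(snapshots, brand):
    """Share of sampled AI answers mentioning `brand`, per period.

    `snapshots` is a list of (period, answer_text) pairs collected by
    re-running the same prompts on a schedule.
    """
    totals, hits = defaultdict(int), defaultdict(int)
    for period, answer in snapshots:
        totals[period] += 1
        if brand.lower() in answer.lower():
            hits[period] += 1
    return {p: hits[p] / totals[p] for p in totals}

# Invented data: the same three prompts sampled in two different months.
snapshots = [
    ("2025-01", "Popular options include Acme and others."),
    ("2025-01", "Consider Acme for small teams."),
    ("2025-01", "Top picks: BrandB and BrandC."),
    ("2025-02", "Top picks: BrandB and BrandC."),
    ("2025-02", "BrandB is the most cited choice."),
    ("2025-02", "Many guides recommend BrandB."),
]
print(mention_rate_by_period(snapshots, "Acme"))
# → {'2025-01': 0.6666666666666666, '2025-02': 0.0}
```

Even this crude rate separates a genuine decline from ordinary fluctuation once enough periods accumulate, which is exactly what a one-off check cannot do.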

4. Limited ability to benchmark against competitors

Seeing your brand mentioned in AI answers is a start, but it doesn’t tell you the whole story. The real question is what’s happening around it: which competitors appear more often, how they’re described, and who AI recommends when users are ready to decide.

Without comparative insights, teams struggle to understand whether AI visibility represents a competitive advantage or a missed opportunity.

5. Missing attribution and source clarity

Some AI models summarize or paraphrase information without clearly attributing sources. When brands are mentioned, it’s not always obvious which pages, articles, or properties influenced the response.

This lack of source visibility makes it difficult to connect AI mentions back to specific content efforts, PR coverage, or SEO work, leaving teams guessing what is actually driving brand representation.

6. Existing tools weren’t built for AI visibility

Traditional SEO and analytics platforms are designed around clicks, impressions, and rankings. They don’t capture AI-powered mentions, sentiment, or visibility trends because AI platforms don’t expose those signals in a structured way.

As a result, teams are left without reliable reporting for one of the fastest-growing discovery channels.

Together, these challenges point to a clear gap: brands need a new way to understand visibility that reflects how AI models surface and interpret information. This is where tools explicitly designed for AI-driven discovery, such as Yoast AI Brand Insights, come into play.

How does Yoast AI Brand Insights help?

AI-driven brand discovery can be fragmented and opaque, which leads to a practical question: how do brand marketing teams actually make sense of it?

Traditional SEO tools weren’t built to answer that, which is where Yoast AI Brand Insights comes in. It’s designed to help users understand how brands appear in AI-generated answers and is available as part of Yoast SEO AI+.

Rather than focusing on rankings or clicks, Yoast AI Brand Insights focuses on visibility and interpretation across large language models.

Track brand mentions across multiple AI models

One of the biggest gaps in AI visibility is fragmentation. Brands may appear in one AI model but not in another, without any obvious signal to explain why. Yoast AI Brand Insights addresses this by tracking brand mentions across multiple AI platforms, including ChatGPT, Gemini, and Perplexity.

This gives teams a clearer view of where their brand appears, rather than relying on isolated checks or assumptions based on a single model.
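To make the underlying idea concrete, here is a minimal sketch of cross-model mention tracking (not Yoast’s implementation). The `ask_model` function is a hypothetical stand-in for each platform’s real API client and returns canned answers; the brands and prompts are invented:

```python
def ask_model(model, prompt):
    # Stand-in for a real API call to each platform (via each vendor's
    # official client). Canned answers for illustration only.
    canned = {
        "chatgpt": "For small teams, Acme and BrandB are common picks.",
        "gemini": "BrandB is frequently recommended for this use case.",
        "perplexity": "According to recent reviews, Acme leads here.",
    }
    return canned[model]

def visibility_report(models, prompts, brand):
    """For each model, record which prompts surfaced the brand."""
    report = {}
    for model in models:
        mentioned = [p for p in prompts
                     if brand.lower() in ask_model(model, p).lower()]
        report[model] = {
            "mentioned_in": mentioned,
            "gap": len(mentioned) == 0,  # brand absent on this model
        }
    return report

report = visibility_report(
    ["chatgpt", "gemini", "perplexity"],
    ["best project tools for small teams"],
    "Acme",
)
print([m for m, r in report.items() if r["gap"]])
# → ['gemini']
```

Running the same prompts against every model and flagging where the brand is absent is what turns isolated spot checks into a map of gaps.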

Identify gaps, inconsistencies, and opportunities

AI-generated answers don’t just mention brands; they frame them. Yoast AI Brand Insights helps surface patterns in how a brand is described, making it easier to spot:

  • Where mentions are missing altogether
  • Where descriptions feel outdated or incomplete
  • Where competitors appear more frequently or more favorably

These insights turn AI visibility into something teams can actually act on, rather than a black box.

Shared insights for SEO, PR, and content teams

AI-driven discovery sits at the intersection of SEO, content, and brand communication. One of the strengths of Yoast AI Brand Insights is that it provides a shared view of AI visibility that multiple teams can use. SEO teams can connect AI mentions back to site signals, content teams can understand how messaging is interpreted, and PR or brand teams can see how external coverage influences AI narratives.

Instead of working in silos, teams get a common reference point for how the brand appears across AI-driven search experiences.

A natural extension of Yoast’s SEO philosophy

Yoast AI Brand Insights builds on principles Yoast has long emphasized: clarity, consistency, and understanding how search systems interpret content. As AI becomes part of how people discover brands, those same principles now apply beyond traditional search results and into AI-generated answers.

In that sense, Yoast AI Brand Insights isn’t about chasing AI trends. It’s about giving teams a more straightforward way to understand how their brand is represented, where discovery is increasingly happening.

From rankings to representation in AI-driven search

AI-driven discovery is no longer an edge case. It’s becoming a regular part of how people explore options, validate decisions, and form opinions about brands. As large language models continue to evolve, the question for brands is not whether they appear in AI-generated answers, but whether they understand how they appear, where they appear, and what story is being told on their behalf. Gaining visibility into that layer is quickly becoming a foundational part of modern brand and search strategy.

The post Why does having insights across multiple LLMs matter for brand visibility? appeared first on Yoast.


From searching to delegating: Adapting to AI-first search behavior


AI Overviews, which place generated answers directly at the top of search results, are improving the search experience for users. 

For businesses that rely on content to drive traffic from search engines, the impact is far less positive.

Google has been moving toward more “helpful” results for years, and zero-click searches are nothing new. 

AI Overviews accelerate that shift, absorbing much of the traffic opportunity that search has historically provided.

How AI changes the work of search

For years, search followed a familiar pattern:

  • A user entered a short query, such as “team building companies.”
  • Google returned a page of paid and organic results.
  • The user did the work of reviewing and refining.

Most of the effort happened at the end of the process. 

Google organized results based on intent and behavioral signals, but users still had to click through listings, conduct follow-up searches, and piece together an answer.

AI reverses that flow:

  • The user asks a more detailed question.
  • AI runs multiple searches and processes the results.
  • AI delivers a summarized response.

Traditional search allows for refinement, but each new query effectively resets the experience. 

AI, by contrast, is conversational. Each interaction builds on the last, narrowing in on what the user actually wants.

The result is a faster, cleaner path to an answer – with far less effort required from the user.

The path of least resistance

This shift matters because it aligns with a basic human tendency.

People generally choose the easiest available option. If something is easier and produces a better result, adoption follows quickly.

This is how search replaced older marketing channels such as the Yellow Pages.

Seeking the path of least resistance is an evolutionary trait that likely served humans well in earlier eras. 

Today, however, it often shapes behavior in less intentional ways, including how people interact with ads and information.

AI is not perfect, but it is typically faster, easier, and more effective than digging through traditional search results. 

That advantage makes widespread adoption inevitable, especially as AI continues to be integrated into the websites, apps, and devices people already use.

What does this mean for search marketing?

Recent studies have shown that more users are beginning their research with AI tools rather than search engines. 

These studies always have their critics, but the debate is largely moot: AI is everywhere.

AI is now so integrated into the tools people already use that it is becoming the default. 

Search engines, messaging platforms like WhatsApp, and mobile devices are all moving in this direction, and this is just the beginning. 

With Google having signed a multiyear deal with Apple, Google AI will power a significant share of mobile devices, accelerating the shift toward AI-first experiences.

It’s easy to envision an AI-first future, much like the shift from desktop to mobile and then mobile-first.


What this change actually looks like

Generative answers are shifting where users enter the funnel, with engagement increasingly starting mid-funnel around content that demonstrates experience and expertise.

This is the type of content users historically would only engage with on a company’s website, or through other owned channels such as YouTube.

This does not mean top-of-the-funnel content is no longer important. Blogs, guides, and videos still matter, especially videos. However, it may be worth reconsidering how that content is distributed rather than relying solely on traditional organic search.

With the rise of AI tools such as Gemini and ChatGPT, users can now handle much of this comparison work through AI, saving significant time.

For example, the shift looks like this:

  • From “Mid-market ERP platforms,” where the user must sift through results, compare options, build spreadsheets, and conduct extensive manual review.
  • To “Which mid-market ERP platforms work best for manufacturing firms, integrate with our existing stack of X, Y, and Z, and won’t collapse during implementation?”

This changes where the user must exert effort.

A more detailed question or input produces a far stronger response or output.

You could argue that traditional search had degraded into a form of garbage in, garbage out (GIGO), where short, generic queries produced ad-heavy, blended results that were time-consuming to mine for real answers.

The result is user fatigue. Endless clicking, avoiding ads, and sorting through widely varying content has become a chore.

And the experience often does not improve once users reach the destination. Traffic-starved, ad-heavy websites can be just as difficult to navigate and extract useful information from.

AI offers a cleaner, faster, and less cluttered experience, delivering summarized pros, cons, and supporting evidence at each stage of the decision-making process.

All of this can happen inside an AI tool, without the user ever needing to visit the site where the content originated.

AI is increasingly becoming the default interface for information. These are still early days, and the experience will continue to improve, becoming faster, smoother, and more effective over time.

The crux of the SEO vs. GEO/AEO/AIO conversation is often that, despite a changing landscape, SEO and GEO are largely the same.

This is broadly true and, if anything, feels similar to the early days of SEO, when long-tail opportunities were real. 

You can now go much deeper with mid-funnel content because it no longer requires humans to read it all. 

Instead, AI can consume it and summarize the relevant parts.

The tactics are largely the same. Much of AI still sits on top of traditional search, but SEO strategies and execution may need adjustment to ensure all bases are covered.

It’s also important not to throw the baby out with the bathwater. 

SEO, PPC, and related channels all retain value in the age of AI.

Dig deeper: SEO, GEO, or ASO? What to call the new era of brand visibility in AI [Research]

How to adapt in an AI-first search environment

The game has changed. Planning for 2026 and beyond requires accepting that change and making practical adjustments to thrive in the age of AI search.

Website

In traditional SEO and PPC models, users often land on the most relevant page for their query. 

That may be upper-funnel marketing content that leads deeper into the journey or directly to product or service pages.

This still happens, but there is now a noticeable increase in homepage visits driven by brand searches after AI-based research.

As a result, website navigation and messaging must be exceptionally clear. 

You need to understand user needs and make the path to relevant content as simple as possible.

The ALCHEMY website planning framework can help restructure sites around the expectations of an AI-savvy user.

Content 

In the age of AI, the devil is in the details.

If you want AI to recommend your brand or include it in increasingly nuanced research, your most important content must be visible and accessible so it can be retrieved and used to generate AI answers through retrieval-augmented generation, or RAG.

Frameworks such as “They Ask, You Answer” (TAYA) by Marcus Sheridan are particularly effective here. 

The premise is simple: If customers ask the question, you should answer it.

The framework focuses on five core areas, identified through extensive research, that address customer needs, drive engagement, and provide AI with the detailed information it needs to map to real user questions.

This approach works because it makes sense. It benefits users, improves visibility, drives leads, and supports sales. It is not an abstract AI strategy. It is good marketing.

These are the five key areas that TAYA focuses on:

  • Pricing and cost: If users search for pricing and cannot find it, they do not assume they should call for details. They often assume the product is too expensive or that information is being withheld, and they move on, or ask AI for a competitor’s pricing. Even when pricing is custom, you should explain the factors that influence cost.
  • Problems: Address the obvious issues. This includes problems with your product, your industry, and the drawbacks of specific solutions. Being transparent about limitations builds trust more effectively than excessive positivity.
  • Versus and comparisons: Buyers are choosing between alternatives. If you do not create comparison content, someone else will. Be objective. If a competitor is better for a specific use case, say so and focus on your ideal customer profile.
  • Reviews and ratings: People look for the best options and trust peer opinions more than brand claims. Create honest reviews of products and services in your space, including competitors. This process is informative for both users and brands.
  • Best in class: Users frequently search for “best” solutions. Lists such as “Top AI marketing agencies in [city]” are effective, even when they include competitors. Including alternatives demonstrates that customer fit matters more than self-promotion.

From an AI and SEO perspective in 2026, these five topics represent some of the highest-value data points for RAG systems.

Tools such as the Value Proposition Canvas and SCAMPER can support ideation and content variation, helping AI better understand your offerings.

Checklist: RAG-friendly formatting tips

Do not break content into meaningless fragments. Instead, use formatting that helps RAG systems navigate comprehensive resources:

  • Use question-based headers: Mirror real user questions in H2s and H3s, such as “How much does X cost?”
  • Lead with the answer: Apply the inverted pyramid. Start with the direct response, then add context.
  • Use bulleted lists for attributes: Bullets help RAG systems extract structured information.
  • Define key terms: Provide clear, one-sentence definitions for industry jargon.
  • Link to evidence: Cite sources for statistics and results to support credibility.

Treat blog posts as a knowledge base for AI. The clearer and more specific the information, the more retrievable your brand becomes.
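To see why question-based headers pay off, consider a toy sketch of how a retrieval pipeline might split a well-structured post into header-anchored chunks. The markdown content, pricing figure, and product names below are invented for illustration:

```python
def chunk_by_headers(markdown_text):
    """Split markdown into (header, body) chunks at '## ' headings."""
    chunks, header, body = [], None, []
    for line in markdown_text.splitlines():
        if line.startswith("## "):
            if header is not None:
                chunks.append((header, "\n".join(body).strip()))
            header, body = line[3:].strip(), []
        elif header is not None:
            body.append(line)
    if header is not None:
        chunks.append((header, "\n".join(body).strip()))
    return chunks

# A hypothetical post that follows the checklist: question headers,
# answer-first bodies.
post = """## How much does X cost?
Pricing starts at $29/month; implementation fees vary by team size.

## Which integrations are supported?
X connects to tools A, B, and C via a REST API.
"""

for question, answer in chunk_by_headers(post):
    print(question, "->", answer)
```

Because each header mirrors a real user question and each body leads with the answer, every chunk a retriever hands to the model is self-contained, with no need to fragment the page itself.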

Write for humans, not for bots

It bears repeating: Content should not be simplified solely for AI. 

Google Search Liaison Danny Sullivan has clarified that Google does not want content rewritten into bite-sized chunks for AI consumption.

Modern search systems and RAG pipelines can extract relevant information from well-structured, long-form content. 

There is no need to dilute expertise or create multiple versions of the same page.

A familiar example is being deep-linked to a specific section of a page from search results. This is established behavior, not new technology.

Some formats, such as FAQs, naturally benefit from concise structure. Use judgment based on the question being answered.

SEO v2026.0

These are positive changes. SEO is becoming more closely aligned with marketing and less of a fringe discipline.

The environment is shifting, and new tools are changing how people find information and make decisions. Yet many fundamentals remain.

SEO tactics still apply, but AI now acts as a superconsumer and summarizer of the information that influences choice.

The task is to identify, create, and structure that information so that when users ask a question, you have already answered it and are part of the conversation.


What Is Google AI Mode and How Does It Work?

Does Google’s AI Mode mark a real shift in how search works? There’s a strong case that it does. And all businesses with an online presence need to pay attention, not just SEO folks. 

Given how big the change is, you likely have a lot of questions. 

What does AI Mode mean for your site traffic? How do you get featured? Do you need to change your content strategy? What happens to organic visibility as AI-generated answers become more common?

If you’re feeling uncertain, don’t worry. This guide breaks down what Google AI Mode actually is, how it works, and what it means for your site.

Key Takeaways

  • Google AI Mode is a search experience that builds on AI Overviews, offering deeper answers, reasoning, and more personalized responses.
  • AI Mode is currently available in English, with rollout expanding beyond early U.S. testing.
  • Users can access AI Mode directly from the Google homepage, where it functions through a conversational, ChatGPT-style interface.
  • Appearing in AI Mode is largely driven by strong SEO fundamentals, but brand mentions, structured data, and off-site signals play a growing role.
  • While AI Mode changes how results are presented, early data suggests users still click through to source content, especially for complex or high-consideration topics.

What Is Google’s AI Mode?

AI Mode is a search feature from Google designed to give direct, well-reasoned answers to complex queries. It builds on AI Overviews, using a similar process that combines AI-generated responses with content from traditional search results and the Knowledge Graph (Google’s database of factual information). 

It runs on a modified version of Gemini, Google’s core AI model, and analyzes information from multiple sources. It then synthesizes this information into a clear, concise answer that prioritizes reasoning and context, rather than just summarizing pages.

The interface feels a lot like an AI Overview—same layout and a similar answer—but with a box to ask follow-up questions at the bottom.

Google AI Mode example with the definition of what Google AI Mode is.

Here’s what Robby Stein, Google’s VP of Search, said about AI Mode in a post on The Keyword:

“Using a custom version of Gemini 2.0, AI Mode is particularly helpful for questions that need further exploration, comparisons and reasoning. You can ask nuanced questions that might have previously taken multiple searches — like exploring a new concept or comparing detailed options — and get a helpful AI-powered response with links to learn more.”

AI Mode integrates several elements from traditional search engine results pages (SERPs), such as Shopping listings and Maps.

Google AI Mode with a map of New York pizza places.

Finally, Google has said that it will continue to add new features. These include agentic workflows in conjunction with Project Mariner, increasing levels of personalization, and even custom charts and graphs. 

AI Mode Is Becoming an Interactive Application Layer

Google is actively turning AI Mode into a more interactive part of search, not just a place to read AI-generated answers.

Recent updates already point to deeper personalization, richer inline links, and more interactive result formats, including charts, comparisons, and visual outputs. With Gemini 3 now integrated directly into AI Mode, those interfaces are becoming more dynamic and tool-driven instead of purely informational.

 “We spend a ton of time focused on this question of when and how to show links, and how we can really make the web shine. It will continue to be an ongoing effort as AI Mode and the Search Results Page evolves,” says Stein.

Links in a Google AI Mode result.

This shift matters. Rather than sending users to external calculators, templates, or apps, Google is starting to surface that functionality directly inside search. For certain queries, AI Mode can simulate outcomes, compare options, or guide users through multi-step decisions without requiring a click to another site.

A graphic in a Google AI Mode result.

Over time, this opens the door to agent-driven experiences. In those scenarios, AI Mode does not just explain an answer. It helps users complete tasks, from planning and analysis to evaluation and execution, inside the search interface itself.

As Gemini becomes more tightly integrated across Search, AI Mode is moving closer to a default experience. For brands, this raises the bar. Content that wins in AI-first search needs defensible value, interactive depth, or proprietary insight, not just basic information.

How to Access Google’s AI Mode and Availability

Google AI Mode is now available beyond early U.S.-only testing, with a broader global rollout underway. Users accessing Google in supported regions can enter AI Mode directly from the Google homepage, where it appears alongside the main search experience rather than as an experimental feature.

Screenshot of the main Google search page.

When users tap “show more” on certain AI-generated results, the AI Overview expands. Once in the expanded AI Overview, users can click “Dive Deeper in AI Mode” to enter AI Mode. This signals a shift toward AI Mode acting as a default exploration layer, not a separate destination.

Diving deeper in an AI Mode result.

Once inside AI Mode, users can interact with responses conversationally, asking follow-up questions that carry context forward. Links to supporting pages remain available, and users can access their AI Mode history to continue conversations they previously started.

AI Mode history.

Google has moved away from positioning AI Mode as a Labs experiment, and there is no longer a separate opt-in process. Access is tied to Google’s standard search interface, and availability is expanding as Google refines performance, localization, and personalization features.

Timeline of Google AI Mode

While most people think of AI as starting with ChatGPT, Google’s been building AI tools for decades. 

AI Mode is part of Google’s broader family of AI tools, which include Veo, a video generator; Imagen, a text-to-image model; Project Mariner, an agent that can automate tasks; and others.

Here’s a short timeline that puts AI Mode in context:

  • May 2017: CEO Sundar Pichai announces the launch of a dedicated AI division called Google AI at I/O, the company’s annual developer conference. 
  • March 2023: Google opens up early access to Bard, its first generative AI chatbot. Global availability follows later that year.
  • December 2023: Google announces Gemini, a multimodal LLM that can work with different content inputs (images, voice, and text).
  • February 2024: Bard is coupled with Duet AI, Google’s Workplace AI assistant, and rebranded to Gemini.
  • May 2024: AI Overviews, initially called Search Generative Experience, are first released. The feature reaches broad availability later in the year, combining generative AI with Google’s traditional information retrieval systems.
  • May 2025: Google releases AI Mode, a ChatGPT-style interface available on its homepage. It builds on the core functionality of AI Overviews and is initially available only in the U.S. Early access is limited, but usage expands rapidly.
  • August 2025: Google begins a more comprehensive global rollout of AI Mode, signaling its transition from a test experience to a core part of Search. Google also announces that it is increasing the number of links in AI Mode. Searchers begin to see inline link carousels and contextual introductions explaining why a link might be useful to visit.
  • November 2025: Google integrates Gemini 3.0 and Nano Banana in AI Mode.

Using AI Mode: AI Overviews vs. AI Mode

Time for the unboxing. To illustrate how AI Mode differs from AI Overviews, consider a simple comparison scenario.

First, a general query is entered into standard Google Search: “What will be the most popular spring break destinations this year.” This triggers an AI Overview.

Google search results for "What will be the most popular spring break destinations this year."

AI Overview analyzes the query, considers general context such as location, and pulls information from multiple sources, stitched together into a quick summary. 

Next, the query becomes a bit more specific: “what will be the most popular spring break destinations this year with a 6-month-old baby.”

AI Overview adjusts the response based on the added constraint, returning suggestions that better match the scenario while still relying on summarization.

Google search results for "what will be the most popular spring break destinations this year with a 6-month-old baby."

The same queries are then entered into Google’s AI Mode using the dedicated prompt box.

The initial response looks similar, but with a subtle shift. Instead of simply summarizing existing information, AI Mode applies additional reasoning to evaluate suitability and trade-offs.

Google AI Mode results for "What will be the most popular spring break destinations this year."

A follow-up question is then added without restating the full context.

AI Mode retains the earlier details, understands the added nuance, and returns a more detailed, logically structured set of recommendations. This ability to carry context forward highlights one of the key differences between AI Mode and AI Overviews.

Google AI Mode results for "what will be the most popular spring break destinations this year with a 6-month-old baby."

How Is AI Mode Different from AI Overviews and Gemini?

Simply put, AI Mode is an expanded version of AI Overviews. It incorporates and builds on their features, and both run on Gemini, Google’s core model.

Here’s how AI Mode compares to AI Overviews:

  • More advanced reasoning: While AI Overview summarizes information from across sources, AI Mode interprets that information, connects related concepts, and surfaces conclusions based on reasoning rather than aggregation alone.
  • Multimodal understanding: In the Google app (on Android and iOS), AI Mode can also answer questions based on photos and images. 
Meet AI Mode landing page.
  • Better handling of complex questions: AI Overview works well for simple, fact-based queries, but AI Mode is designed for nuanced, multi-layered, or exploratory questions that benefit from context and comparison.
  • Follow-ups: You can ask follow-up questions, and the AI will respond based on the ongoing context in a conversational style.

AI Mode is also evolving in how it presents sources. Searchers increasingly see inline links, carousels, and contextual explanations that clarify why a particular source may be useful, rather than a static list of citations.

Research conducted by NP Digital shows that these features match emerging user demand. We found, for example, that 72% of people are inputting very precise, “exactly what I want” queries. And 76% are opting for more human-like and conversational interactions. 

NP Digital Graph showing search trends by generative AI.

What Is the Technology Behind AI Mode?

LLMs are vastly complex systems, and Gemini, the model that powers AI Mode, is no different. However, three main technologies separate AI Mode from standard gen AI bots and AI Overviews.

Here are the three core processes that power AI Mode: 

  • AI Mode uses a query fan-out technique. This involves breaking a query into subtopics and researching them in parallel. It then combines dozens of information points into a single answer. 
  • Structured logic is a key part of how AI Mode works. It takes a query, creates a reasoning chain (e.g., “user is looking for a water bottle for hiking, therefore features should include durability and size, therefore a minimum capacity of 3 liters is needed”), and then validates answers against these steps to determine suitable outcomes.
  • Personal context plays a significant role. This means that AI Mode records conversations over time and builds a picture of individual user preferences, adjusting responses based on past inputs. It does this by creating a sort of digital ID—called a vector embedding—that is included in the answer generation process. This is a form of background memory that works in much the same way as ChatGPT.
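The fan-out idea described above can be sketched in a few lines of Python. This is purely illustrative: the subtopics, the `decompose` and `research` helpers, and the parallelism are hypothetical stand-ins, not Google’s actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch of query fan-out: split a query into subtopics,
# research each subtopic in parallel, then collect the findings.
# The decompose/research helpers are invented placeholders.

def decompose(query: str) -> list[str]:
    """Stand-in for a model that breaks a query into subtopics."""
    return [f"{query} ({facet})" for facet in ("reviews", "pricing", "alternatives")]

def research(subquery: str) -> dict:
    """Stand-in for retrieving information points for one subtopic."""
    return {"subquery": subquery, "facts": [f"fact about {subquery}"]}

def fan_out(query: str) -> list[dict]:
    """Run all subtopic lookups concurrently and return their results."""
    subqueries = decompose(query)
    with ThreadPoolExecutor(max_workers=len(subqueries)) as pool:
        return list(pool.map(research, subqueries))

results = fan_out("best hiking water bottle")
print(len(results))  # one result set per subtopic
```

A real system would combine the per-subtopic results into a single synthesized answer; the sketch stops at the parallel-research step.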

How to Optimize Your Site for AI Mode

So-called GEO—generative engine optimization—is big business at the moment. However, there’s still a lot of uncertainty about what directly influences visibility in AI Mode, and many claims go beyond what Google has actually confirmed.

Rather than chasing shortcuts, the clearer pattern is that AI Mode rewards the same fundamentals Google has emphasized for years — with a few emerging signals becoming more important as AI-generated results mature.

Let’s look at what we actually know about “ranking” in AI Mode.

1. Traditional SEO principles still apply

Google has been pretty unequivocal about this. Traditional SEO is still the most important activity for appearing in AI Overviews and AI Mode.

As long as you follow SEO basics—create useful content, generate natural backlinks, and optimize technical health—you’re ahead of 90% of the competition. 

Research also backs this up. Ziptie, for example, found that sites with a number one ranking in traditional search results are 25% more likely to be featured in AI Overviews. 

2. Indexed web pages are eligible to appear in AI Mode

On the technical front, there’s good news. As long as a page is indexed, it’s eligible to appear in AI Mode. There are no other requirements. You can check whether your pages are indexed using the URL Inspection tool in Search Console.

If you’re having issues, check that you’re adhering to Google Search’s technical requirements. Make sure Googlebot isn’t blocked, pages return 200 success codes, and content doesn’t violate spam policies.
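As a quick sanity check on the crawl-blocking point, Python’s standard library can evaluate a robots.txt ruleset against Googlebot. A minimal sketch, with made-up rules and paths for illustration:

```python
from urllib.robotparser import RobotFileParser

def googlebot_allowed(robots_txt: str, path: str) -> bool:
    """Check whether a robots.txt ruleset allows Googlebot to crawl a path."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch("Googlebot", path)

# Hypothetical robots.txt content, for illustration only.
rules = """User-agent: Googlebot
Disallow: /private/
"""

print(googlebot_allowed(rules, "/blog/post"))     # True: not disallowed
print(googlebot_allowed(rules, "/private/page"))  # False: blocked for Googlebot
```

In practice you would fetch your live robots.txt (and confirm pages return 200) before relying on a check like this.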

3. Forum and discussion board citations matter

Recent analysis across multiple large language models shows that discussion forums and Q&A platforms are frequently referenced when generating explanatory or opinion-based answers, particularly for queries that benefit from lived experience or peer discussion.

Reddit, in particular, continues to surface prominently across AI-generated responses, in part due to its scale, freshness, and breadth of first-hand commentary. However, the weighting of any single forum is dynamic and continues to evolve as Google refines how AI Mode sources and cites content.

Given Reddit and Google’s partnership, it’s likely that well-moderated, high-signal community content remains an important input for Gemini-powered experiences.

If you haven’t already, build up a presence on Reddit and other similar forums and discussion boards. This can help reinforce topical authority and increase the likelihood of being referenced in AI-generated answers.

4. Schema markup (structured data) gives you a boost

Schema markup, also called structured data, is a type of code that you add to your content. It gives search engines and AI systems additional information to help them understand what the content is about. One simple example of schema markup is identifying a recipe as “@type”: “Recipe.”

Research by Aiso has shown that LLMs extract more accurate data from pages with schema markup, with a 30% improvement in quality. 

Using schema markup helps reduce ambiguity for AI-generated answers and increases the likelihood that your content is interpreted correctly. Fortunately, adding schema to your web page is relatively straightforward.
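To make the Recipe example concrete, here is a minimal sketch that assembles JSON-LD in Python. The recipe details are invented for illustration; the serialized output is what you would embed in a `<script type="application/ld+json">` tag on the page.

```python
import json

# Invented recipe details, for illustration only.
recipe_schema = {
    "@context": "https://schema.org",
    "@type": "Recipe",
    "name": "Classic Margherita Pizza",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "prepTime": "PT20M",   # ISO 8601 duration: 20 minutes
    "cookTime": "PT10M",
    "recipeIngredient": ["pizza dough", "tomato sauce", "mozzarella", "basil"],
}

# Serialize for embedding in a <script type="application/ld+json"> tag.
print(json.dumps(recipe_schema, indent=2))
```

In practice most CMS plugins generate this markup for you; the point is that the `@type` and property names come from the schema.org vocabulary, which is what gives AI systems the unambiguous signal described above.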

5. Digital PR is important

LLMs access information in two ways. They are initially trained on a large amount of information—called training data—and they can also access new online content, such as news articles. 

Digital PR is all about acquiring mentions and backlinks from reputable third-party sources, especially media websites. 

Brand mentions boost visibility in LLM training materials and strengthen topical associations (a measure of the number of times you’re cited in relation to a specific subject), meaning you’re more likely to appear in responses. 

Digital PR involves creating share-worthy content and contacting journalists and site admins to ask them to feature you. Our research shows that original research and tools are especially good at encouraging people to talk about your brand. 

NP Digital graph showing how different content formats are proven to generate links.

6. Be Ready To Test and Track AI Visibility

As AI Mode becomes more integrated into the search experience, visibility is no longer limited to rankings alone. Brands need ways to measure whether — and how often — their content appears in AI-generated answers.

New AI visibility platforms, such as Writesonic and Profound, are emerging to help track citations, brand mentions, and source inclusion across large language models. These tools provide early signals about which content formats, topics, and entities are being surfaced by AI systems.

Monitoring this data allows teams to validate whether SEO, digital PR, and structured data efforts are translating into real AI exposure. It also makes it easier to spot gaps, test changes, and adapt as Google continues to evolve AI Mode.

Treat AI visibility tracking as a complement to traditional performance metrics, not a replacement. Both matter.

What Does AI Mode Mean for the Future of Search?

There are a lot of unknowns about how increased use of AI tools will affect the way people look for information. That said, emerging usage patterns are already pointing to meaningful shifts in how AI SEO is evolving.

With that in mind, here are five implications for the future of search as AI Mode becomes more prominent:

Searchers will still click through to websites: Early performance data from AI-generated results shows that clicks are reduced for some informational queries, but not eliminated. Users continue to seek out original content, particularly for complex decisions, comparisons, and high-consideration topics.

NP Digital graph showing the impact on clicks to websites from Google integrating AI.

Long-play brand building will become more common: LLMs use third-party brand mentions to measure the authority of publishers. Popular brands are cited more by gen AI search tools and, as such, long-term brand building with an outlook of five years and above will become much more common. 

NP Digital graphic showing the length of time to build a recognizable brand.

Marketing strategies will become more omnichannel: As AI Mode absorbs more discovery queries, brands will need visibility across multiple platforms, not just Google’s traditional results. This reinforces a broader “search everywhere” approach, where discovery happens across AI tools, social platforms, and communities.

NP Digital graph showing the number of daily searches per platform.

People will favor AI for more specific searches: Analysis of large query sets shows that AI-generated results appear more frequently for longer, more specific searches. Short, navigational queries may still rely on traditional results, while nuanced questions increasingly trigger AI Mode.

NP Digital graph showing the frequency of AI overviews by search query length.

Trust in AI will continue to grow: Hallucinations are a known problem with AI Overviews, and AI Mode also makes mistakes, according to user reports. With that said, user adoption and satisfaction with AI-powered search tools are trending upward. As Google refines AI Mode, usage is likely to grow alongside improvements in reliability and transparency.

NP Digital graph showing the user satisfaction with AI overviews over time.

FAQs

What is Google AI Mode?

Google AI Mode is a conversational search experience powered by Gemini, Google’s core AI model. It provides more detailed, context-aware answers to search queries, similar in format to tools like ChatGPT, but integrated directly into Google Search.

Instead of returning a list of links first, AI Mode synthesizes information from multiple sources and presents a reasoned response, with links available for deeper exploration. Users can ask follow-up questions, and the system carries context forward, making the interaction feel more like an ongoing conversation.

AI Mode builds on AI Overviews but goes further by handling complex, multi-step, or exploratory queries more effectively.

How do you use Google AI Mode?

In supported regions, users can access AI Mode directly from the Google homepage. On some AI-generated results, selecting “show more” will also open AI Mode automatically, allowing users to continue their search without returning to traditional results.

Once inside AI Mode, questions can be entered conversationally, and follow-ups don’t require repeating the original context. Users can still click through to source pages or switch back to standard search results at any point.

AI Mode is no longer accessed through Google Labs, and there is no separate opt-in process.

How do you optimize your website for Google AI Mode?

Start with strong SEO fundamentals, which Google has confirmed remain the primary eligibility signals. Beyond that, sites that appear most often in AI-generated answers tend to share a few traits:

  • Create useful, high-quality content that fully addresses search intent.
  • Make sure pages are indexed and technically accessible
  • Use schema markup to clarify meaning and structure
  • Earn third-party brand mentions from trusted publishers and communities
  • Build topical authority through consistent, focused publishing

Visibility in AI Mode is not guaranteed, but sites that are trusted, well-structured, and frequently cited are more likely to be referenced in AI-generated responses.

Search Is Changing but the Fundamentals Still Apply

The way people search is changing, and Google AI Mode is accelerating that shift.

People are finding information across a host of different platforms, not just Google. AI-generated answers are reducing clicks. And traditional content publishers are under pressure as gen AI eats up demand. 

At the same time, AI Mode doesn’t discard the fundamentals that have always mattered. Google is still prioritizing relevance, authority, and usefulness — it’s just surfacing them in new ways. Sites that understand search intent, build credibility beyond their own domains, and structure content clearly are better positioned to stay visible as AI Mode expands.

From the very start, Google had one aim: to solve users’ needs. That’s also what AI tools seek to do, and their models will continuously be designed to that end. 

Understanding your customers—and providing what they want through high-quality, useful content—is the best way of futureproofing your business and ensuring long-term visibility in LLMs.


How Marketers Are Spending in 2026

Marketing budgets aren’t collapsing in 2026, but they are making a shift. That’s the part many teams miss.

That distinction matters. Rising media costs, weaker attribution, privacy changes, and AI-driven search shifts have created real pressure, but the data shows budgets are still moving into marketing. They’re just moving with more intent.

Our latest NP Digital research on how marketers are spending their money in 2026 shows a clear pattern: teams are reallocating toward channels that defend ROI, compound value, and hold up under volatility. This article breaks down what’s changing, why it’s happening, and how to think about your own marketing budget for 2026 without relying on outdated assumptions.

Key Takeaways

  • Marketing budgets in 2026 are not shrinking. They’re being consolidated around confidence, efficiency, and defensibility. 
  • Channels tied directly to conversion, retention, and owned data are absorbing spend, while those with declining signal quality or unclear ROI are losing ground. 
  • SEO and content are not disappearing, but expectations have shifted toward extractability, authority, and measurable downstream impact. 
  • Paid media still plays a critical role, but marginal efficiency now determines where dollars stay or move. 
  • Teams that can reallocate budget quickly, based on real performance signals, are gaining a structural advantage.

The State of the Marketing Budget in 2026

Let’s start with the context that’s shaping every budget decision this year.

Media costs continue rising across search and social. CPCs aren’t coming down, and competition for attention keeps intensifying. At the same time, privacy changes have reduced signal quality, making it harder to target precisely and measure accurately.

Economic uncertainty is pushing marketers to defend ROI more aggressively than ever. Every dollar needs a clear path to revenue, and channels that can’t prove their value are getting cut.

AI adoption has accelerated faster than most teams can operationalize. Nearly everyone is experimenting, but few have figured out how to turn that experimentation into systematic advantage. The gap between “using AI” and “getting results from AI” is wider than you’d think.

Here’s the good news: budgets are not disappearing. They are being reallocated with intent. The marketers who understand where efficiency lives and where it’s eroding are the ones capturing share.

What’s Driving Budget Decisions

The shift in spending comes down to a few core factors:

Purchase journeys are more complex. 94% of purchase journeys now involve multiple touchpoints. Search and social are the most influential, appearing in 79% and 73% of journeys respectively. But they rarely operate in isolation. Budgets are being distributed to support visibility across the full path to purchase, not just the final click.

Information about purchase journeys.

Attribution is noisier. Third-party signals keep degrading, so budgets are following channels that stay measurable. Paid search, email, and CRO all offer clearer attribution than many emerging channels. In uncertain conditions, that clarity matters.

Organic reach is declining. Zero-click searches now account for roughly 58-60% of Google searches. Organic listings are being pushed below the fold by AI Overviews, ads, and SERP features. This is reducing organic click opportunities and increasing reliance on paid coverage.

Efficiency matters more than volume. When media costs rise and margins compress, growth comes from doing more with what you have. That’s why CRO, lifecycle marketing, and retention are getting more investment even as some acquisition channels face cuts.

The marketers who are winning in 2026 understand that budget decisions aren’t about chasing trends. They’re about matching investment to where performance can be proven and defended.

Common themes across budget reallocations

Where Budgets Are Growing, Holding, and Declining

Let’s look at the actual spending patterns across channels. We’ll start with the big picture, then break down what’s happening in each major category.

Overall Marketing Budget Direction

61% of B2B marketers are increasing overall spend this year, with 20% holding flat and 19% decreasing. B2C is slightly more cautious: 57% are increasing, 32% holding flat, and 11% decreasing.

The takeaway? Growth budgets still exist, but they’re being deployed more carefully than in previous years.

The Biggest Budget Shifts Since 2025

Here’s where the reallocation is happening:

SEO spend has rebounded sharply. After a softer 2025, 61% of marketers are now increasing SEO budgets (up from 44% last year). The return of confidence in organic search reflects a few things: better AI tools for content production, clearer ROI measurement, and recognition that organic visibility still matters even in a zero-click environment.

AI SEO investment is accelerating dramatically. 98% of marketers plan to increase AI SEO spend in 2026. This isn’t just hype. Teams have figured out that AI can accelerate research, content production, and optimization cycles without sacrificing quality.

CRO and UX remain a priority. 52% are increasing spend, and only 25% are planning decreases. When traffic is harder to earn, you optimize what you have. CRO delivers measurable improvements regardless of where visitors come from.

Content creation growth has slowed. Only 32% plan increases, while 31% plan to reduce spend. This reflects a shift away from volume-based content strategies toward fewer, higher-quality assets that can be repurposed across channels.

Organic social media is facing the steepest pullback. 64% of marketers are planning budget decreases. Organic reach has declined to the point where most brands treat social as a support channel, not a growth engine.

Email and lifecycle budgets have stabilized. 60% are keeping spend flat and 23% are increasing. Email remains one of the most reliable channels for retention and conversion, especially as first-party data becomes more valuable.

The pattern across all of this? Increased focus on channels tied to conversion and retention. Reduced investment in traditional advertising channels with declining efficiency signals. And a shift away from broad content volume toward targeted execution. 

Channel-by-Channel Breakdown

Now let’s get specific. Here’s what’s happening in each major channel category.

SEO and Organic Search

Information about SEO and Organic Search Budget Trends.

SEO budgets are rebounding, but the strategy is changing. Digital channels now represent 61.1% of total marketing spend, and organic search remains a major piece. But zero-click searches and AI Overviews are changing how value gets captured.

Search is becoming answer-first. Google increasingly resolves intent directly in the SERP through AI Overviews, featured snippets, and knowledge panels. This means fewer clicks but doesn’t make SEO irrelevant, just less predictable on its own. SEO needs to optimize for visibility and citation, not just click-through.

Treat rankings as one output among several that matter. Visibility in AI Overviews and featured snippets matters as much as position one. Prioritize topics tied to revenue intent and customer lifecycle stages. Build content that can win both ways: clicks and citations. Measure organic success across visibility, assisted conversion, and brand lift. More brands are pairing search with other channels, like community, that capture attention off the SERP.

AI systems increasingly resolve intent directly in the SERP, which concentrates click opportunities into fewer, higher-intent moments. Brands that show up consistently in AI-generated answers are building trust and authority even when users don’t click.

Content and Thought Leadership

Content budgets are being reallocated toward assets that influence discovery, trust, and conversion across channels. Thought leadership is increasingly used to earn inclusion in search results and AI-generated answers.

Content still fuels discovery, even when the click doesn’t happen immediately. Strong content is what AI systems summarize, cite, and pull into answers. In a noisy market, a differentiated perspective is one of the few advantages you can own.

Design content for multiple outputs: search, AI summaries, social, sales. Prioritize fewer topics with deeper authority and a clearer point of view. Shift from publishing volume to publishing leverage. Use AI for research acceleration and synthesis, but keep humans in charge of insight, brand voice, and editorial judgment.

Creators especially matter here as a result. They help brands move beyond renting attention and toward building long-term loyalty that holds up even as platforms and algorithms change. This is important because things like original insight, point of view, brand voice, and credibility are not things AI can manufacture on its own. Editorial judgment and prioritization are still very human decisions.

AI can help scale content, but the trust, experience, and perspective that influencers, creators, and SMEs offer gives content weight and relevance with an audience.

Paid Search

A graphic about paid search budgets.

Paid search remains a core demand capture channel, but expectations have reset. CPC inflation and competition continue to compress efficiency. Reduced organic click availability increases reliance on paid coverage.

Shift from keyword expansion to coverage efficiency. Prioritize high-intent, defensible queries over volume. Use fewer keywords with tighter control. Coordinate more closely with SEO and CRO. Put higher emphasis on marginal ROI rather than raw spend growth.

AI and automation now control bidding, targeting, and pacing by default. Competitive advantage shifts to inputs: structure, data quality, conversion signals.

Paid Social

Paid social remains the most flexible scaled reach channel. Platform-level shifts show TikTok leading growth at 57%, YouTube at 53%, and Instagram at 46%. Facebook is under pressure, with 36% decreasing spend and only 18% increasing.

Creative velocity matters more than audience hacks. Message clarity beats novelty. Platform-native formats outperform repurposed ads. Measurement focuses on incremental lift, not just ROAS. Close alignment with lifecycle and email capture turns paid social prospects into owned relationships.

Organic Social

A graphic about organic social media budget direction.

Some cuts are dramatic—and predictable.

  • Organic social: 64% decreasing investment. 
  • Content creation volume: Only 32% increasing; 31% decreasing. 
  • Traditional display: Banner ads are essentially frozen (63% flat). 
  • Facebook paid: 36% decreasing. 

The pattern is clear:
Teams are cutting channels with declining reach, opaque ROI, or inflated costs.

But that doesn’t mean content or social isn’t important—it simply means they’re no longer funded as volume engines. The strategy is changing, not disappearing.

Influencer Marketing

Community building is one of the strongest growth areas in 2026 budgets, with 69% of marketers increasing spend. Influencer marketing is seeing even stronger growth at 78%. These channels support retention, referrals, and brand defensibility.

Referrals from friends and direct traffic drive more conversions than any paid channel. Don’t just focus on the channels that cause direct conversions. Focus on the channels that create brand awareness and influence purchase decisions earlier in the journey.

Email + Lifecycle

A graphic about email and lifecycle marketing budget momentum.

Email and lifecycle budgets remain resilient because performance is driven by trust, relevance, and timing. 60% are keeping spend flat and 23% are increasing. First-party data enables consistent message delivery when paid reach and signal quality decline.

Customer acquisition isn’t the only scalable lever anymore. Retention is the controllable one. Retention programs stabilize margins as media costs, auctions, and platforms stay volatile.

AI enables real-time message sequencing based on behavior, dynamic content assembly across email and SMS, and faster iteration without rebuilding entire lifecycle programs.

CRO and UX

CRO, UX, and First-Party Data investment trends.

CRO and UX are treated as defensive investments that improve performance regardless of traffic source. 52% are increasing spend. Traffic is harder to earn and easier to lose. Fewer clicks mean every visit carries more revenue weight.

AI-assisted test generation allows faster signal detection across variants and continuous optimization tied to real behavior. Competitive advantage shifts to inputs: structure, data quality, and conversion signals.

A Simple Framework: How to Build a Smarter 2026 Marketing Budget

A framework on building 2026 marketing budgets.

Here’s a practical framework for budget agility.

Anchor spend in proven demand. Protect budgets tied directly to revenue and high-intent activity. These are your foundation channels.

Build flexibility around performance signals. Shift dollars based on real outcomes. Don’t lock yourself into annual commitments for channels that aren’t delivering.

Separate experimentation from core investment. Test intentionally without destabilizing what works. Set aside 10-15% of budget for testing new channels and tactics.

Reallocate faster than your competitors. Speed of adjustment becomes a competitive advantage in volatile conditions. Review performance monthly and be willing to move budget mid-quarter.

The winners in 2026 will be faster, not just bigger. Budgets are consolidating around fewer, higher-confidence channels. Efficiency and retention now matter as much as acquisition. AI is reshaping how value is captured, not just how work gets done. Visibility, conversion, and experience must be planned together.

Conclusion

Marketing in 2026 requires a different approach to budgeting. The channels that worked three years ago still work, but they work differently. The measurement that mattered in 2023 doesn’t tell the full story anymore. The strategies that justified budget in 2024 need updating for how search, social, and AI have evolved.

The marketers who thrive this year will be the ones who allocate budget where performance is provable, build systems that compound value over time, and move faster than their competitors when signals change.

If you need help translating these budget signals into a channel-specific growth plan, aligning SEO, paid media, content, and lifecycle into one system, or building measurement models that reflect zero-click and AI-driven behavior, we can help. Reach out to discuss your 2026 strategy.


7 real-world AI failures that show why adoption keeps going wrong


AI has quickly risen to the top of the corporate agenda. Despite this, 95% of businesses struggle with adoption, MIT research found.

Those failures are no longer hypothetical. They are already playing out in real time, across industries, and often in public. 

For companies exploring AI adoption, these examples highlight what not to do and why AI initiatives fail when systems are deployed without sufficient oversight.

1. Chatbot participates in insider trading, then lies about it

In an experiment driven by the UK government’s Frontier AI Taskforce, ChatGPT placed illegal trades and then lied about it.

Researchers prompted the AI bot to act as a trader for a fake financial investment company. 

They told the bot that the company was struggling, and they needed results. 

They also fed the bot insider information about an upcoming merger, and the bot affirmed that it should not use this in its trades. 

The bot still made the trade anyway, citing that “the risk associated with not acting seems to outweigh the insider trading risk,” then denied using the insider information.  

Marius Hobbhahn, CEO of Apollo Research (the company that conducted the experiment), said that helpfulness “is much easier to train into the model than honesty,” because “honesty is a really complicated concept.”

He says that current models are not powerful enough to be deceptive in a “meaningful way,” a claim some researchers dispute.

However, he warns that it’s “not that big of a step from the current models to the ones that I am worried about, where suddenly a model being deceptive would mean something.”

AI has been operating in the financial sector for some time, and this experiment highlights the potential for not only legal risks but also risky autonomous actions on the part of AI.  

Dig deeper: AI-generated content: The dangers of overreliance

2. Chevy dealership chatbot sells SUV for $1 in ‘legally binding’ offer

An AI-powered chatbot for a local Chevrolet dealership in California sold a vehicle for $1 and said it was a legally binding agreement. 

In an experiment that went viral across web forums, several people toyed with the local dealership’s chatbot, getting it to respond to a variety of non-car-related prompts.

One user convinced the chatbot to sell him a vehicle for just $1, and the chatbot confirmed it was a “legally binding offer – no takesies backsies.”

Fullpath, the company that provides AI chatbots to car dealerships, took the system offline once it became aware of the issue.

The company’s CEO told Business Insider that despite viral screenshots, the chatbot resisted many attempts to provoke misbehavior.

Still, while the car dealership didn’t face any legal liability from the mishap, some argue that the chatbot agreement in this case may be legally enforceable. 

3. Supermarket’s AI meal planner suggests poison recipes and toxic cocktails

A New Zealand supermarket chain’s AI meal planner suggested unsafe recipes after certain users prompted the app to use non-edible ingredients. 

Recipes like bleach-infused rice surprise, poison bread sandwiches, and even a chlorine gas mocktail were created before the supermarket caught on.

A spokesperson for the supermarket said they were disappointed to see that “a small minority have tried to use the tool inappropriately and not for its intended purpose,” according to The Guardian.

The supermarket said it would continue to fine-tune the technology for safety and added a warning for users. 

That warning stated that recipes are not reviewed by humans and do not guarantee that “any recipe will be a complete or balanced meal, or suitable for consumption.”

Critics of AI technology argue that chatbots like ChatGPT are nothing more than improvisational partners, building on whatever you throw at them. 

Because of the way these chatbots are wired, they could pose a real safety risk for certain companies that adopt them.  


4. Air Canada held liable after chatbot gives false policy advice

An Air Canada customer was awarded damages in court after the airline’s AI chatbot assistant made false claims about its policies.

The customer inquired about the airline’s bereavement rates via its AI assistant after the death of a family member. 

The chatbot responded that the airline offered discounted bereavement rates for upcoming travel or for travel that has already occurred, and linked to the company’s policy page. 

Unfortunately, the actual policy was the opposite, and the airline did not offer reduced rates for bereavement travel that had already happened. 

In court, the airline argued that the chatbot had linked to the policy page containing the correct information.

However, the tribunal (a small claims-type court in Canada) did not side with the defendant. As reported by Forbes, the tribunal called the scenario “negligent misrepresentation.”

Christopher C. Rivers, Civil Resolution Tribunal Member, said this in the decision:

  • “Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives – including a chatbot. It does not explain why it believes that is the case. In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.”

This is just one of many examples where people have been dissatisfied with chatbots due to their technical limitations and propensity for misinformation – a trend that is sparking more and more litigation. 

Dig deeper: 5 SEO content pitfalls that could be hurting your traffic

5. Australia’s largest bank replaces call center with AI, then apologizes and rehires staff

The largest bank in Australia replaced its call center team with AI voicebots with the promise of boosted efficiency, but admitted it made a big mistake. 

The Commonwealth Bank of Australia (CBA) believed the AI voicebots could reduce call volume by 2,000 calls per week. But it didn’t.

Instead, left without its 45-person call center, the bank scrambled to offer overtime to remaining workers to keep up with the calls, and pulled managers in to answer calls as well.

Meanwhile, the Finance Sector Union, which represented the displaced workers, escalated the dispute on their behalf.

It was only one month after CBA replaced workers that it issued an apology and offered to hire them back.

CBA said in a statement that it did not “adequately consider all relevant business considerations and this error meant the roles were not redundant.”

Other U.S. companies have faced PR nightmares as well when attempting to replace human roles with AI.

Perhaps that’s why certain brands have deliberately gone in the opposite direction, making sure people remain central to every AI deployment.

Nevertheless, the CBA debacle shows that replacing people with AI without fully weighing the risks can backfire quickly and publicly.

6. New York City’s chatbot advises employers to break labor and housing laws

New York City launched an AI chatbot to provide information on starting and running a business, and it advised people to carry out illegal activities.

Just months after its launch, people started noticing the inaccuracies provided by the Microsoft-powered chatbot.

The chatbot offered unlawful guidance across the board, from telling bosses they could pocket employees’ tips and skip notifying staff about schedule changes to suggesting that landlords could turn away certain tenants and that stores could refuse cash.

“NYC’s AI Chatbot Tells Businesses to Break the Law,” The Markup

This is despite the city’s initial announcement promising that the chatbot would provide trusted information on topics such as “compliance with codes and regulations, available business incentives, and best practices to avoid violations and fines.” 

Still, then-mayor Eric Adams defended the technology, saying: 

  • “Anyone that knows technology knows this is how it’s done,” and that “only those who are fearful sit down and say, ‘Oh, it is not working the way we want, now we have to run away from it all together.’ I don’t live that way.” 

Critics called his approach reckless and irresponsible. 

This is yet another cautionary tale in AI misinformation and how organizations can better handle the integration and transparency around AI technology. 

Dig deeper: SEO shortcuts gone wrong: How one site tanked – and what you can learn

7. Chicago Sun-Times publishes fake book list generated by AI

The Chicago Sun-Times ran a syndicated “summer reading” feature that included false, made-up details about books after the writer relied on AI without fact-checking the output. 

King Features Syndicate, a unit of Hearst, created the special section for the Chicago Sun-Times.  

Not only were the book summaries inaccurate, but some of the books were entirely fabricated by AI. 

“Syndicated content in Sun-Times special section included AI-generated misinformation,” Chicago Sun-Times

The author, hired by King Features Syndicate to create the book list, admitted to using AI to put the list together, as well as for other stories, without fact-checking. 

And the publisher was left trying to determine the extent of the damage. 

The Chicago Sun-Times said print subscribers would not be charged for the edition, and it put out a statement reiterating that the content was produced outside the newspaper’s newsroom. 

Meanwhile, the Sun-Times said it is reviewing its relationship with King Features, which fired the writer.

Oversight matters

The examples outlined here show what happens when AI systems are deployed without sufficient oversight. 

When left unchecked, the risks can quickly outweigh the rewards, especially as AI-generated content and automated responses are published at scale.

Organizations that rush into AI adoption without fully understanding those risks often stumble in predictable ways. 

In practice, AI succeeds only when tools, processes, and content outputs keep humans firmly in the driver’s seat.


Yext’s Visibility Brief: Your guide to brand visibility in AI search by Yext

Search visibility isn’t what it used to be. Rankings still matter, but they’re no longer the whole story. 

Today, discovery happens across traditional search results, local listings, brand knowledge panels, and increasingly, AI-driven experiences that surface answers without a click. For marketers, that makes visibility harder to measure — and easier to lose.

SEO teams now operate in a landscape where accuracy, consistency, and trust signals matter as much as keywords. Business information, reviews, and brand authority determine whether a brand shows up at all, especially as AI-powered search reshapes how results are generated and displayed. As a result, many brands think they’re visible — until they look closer.

The Visibility Brief was created to show you what’s really happening. Built on real data from thousands of brands, it provides a practical view of how visibility plays out across today’s search and discovery ecosystem.

Instead of focusing on a single channel or metric, it takes a broader view. The content highlights where brands are gaining ground, where gaps appear, and which trends are shaping performance.

You’ll see how traditional search and AI-driven discovery now overlap, why data accuracy has become a baseline requirement, and where brands are losing exposure without realizing it. 

The goal is simple: help you understand how visibility is changing and what to focus on now.

Watch or listen to the Visibility Brief to get a clearer view of today’s search landscape — and what it means for your brand’s visibility.

Subscribe to the Visibility Brief on Spotify or Apple Podcasts.


Google expands Shopping promotion rules ahead of 2026


Google is broadening what counts as an eligible promotion in Shopping, giving merchants more flexibility heading into next year.

Driving the news. Google is updating its Shopping promotion policies to support additional promotion types, including subscription discounts, common promo abbreviations and, in Brazil, payment-method-based offers.

Why we care. Promotions are a key lever for visibility and conversion in Shopping results. These changes unlock more promotion formats that reflect how consumers actually buy today, especially subscriptions and cashback offers. Greater flexibility in promotion types and language reduces disapprovals and makes Shopping ads more competitive at key decision moments.

For retailers relying on subscriptions or local payment incentives, this update creates new ways to drive visibility and conversion on Google Shopping.

What’s changing. Google will now allow promotions tied to subscription fees, including free trials and percent- or amount-off discounts. Merchants can set these up by selecting “Subscribe and save” in Merchant Center or by using the subscribe_and_save redemption restriction in promotion feeds. Examples include a free first month on a premium subscription or a steep discount for the first few billing cycles.
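As a sketch, a subscription promotion like the one described above could be represented as a promotions-feed entry. Only the subscribe_and_save value comes from Google’s announcement; the other attribute names (promotion_id, offer_type, and so on) follow common Merchant Center promotion-feed conventions and should be verified against the current spec.

```python
# Hypothetical "Subscribe and save" promotion entry, modeled as a dict
# mirroring a Merchant Center promotions feed row. Attribute names other
# than the subscribe_and_save restriction are illustrative, not confirmed.
subscription_promo = {
    "promotion_id": "first_month_free",       # merchant-defined identifier
    "product_applicability": "ALL_PRODUCTS",  # applies store-wide
    "offer_type": "NO_CODE",                  # applied without a coupon code
    "long_title": "First month free on premium subscriptions",
    "redemption_restriction": "subscribe_and_save",  # from Google's update
}
```

The same restriction can reportedly be configured in the Merchant Center UI by selecting “Subscribe and save,” so the feed route is only needed for bulk or automated setups.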

Google is also loosening restrictions on language. Common promotional abbreviations like BOGO, B1G1, MRP and MSRP are now supported, making it easier for retailers to mirror real-world retail messaging without risking disapproval.

In Brazil only, Google will now support promotions that require a specific payment method, including cashback offers tied to digital wallets. Merchants must select “Forms of payment” in Merchant Center or use the forms_of_payment redemption restriction. Google says there are no immediate plans to expand this change to other markets.

Between the lines. These updates signal Google’s intent to better align Shopping promotions with modern retail models — especially subscriptions and localized payment behaviors — while reducing friction for merchants.

The bottom line. By expanding eligible promotion types, Google is giving advertisers more room to compete on value, not just price, when Shopping policies update in January 2026.


Google to require separate product IDs for multi-channel items


Starting in March 2026, Google Merchant Center will enforce a new system for multi-channel products — items sold both online and in physical stores — requiring advertisers to use separate product IDs when those products differ by channel.

What’s changing. Under the new approach, online product attributes will become the default. If a product’s in-store details differ, advertisers will need to create a second version with a distinct product ID and manage it independently in their feeds.

What advertisers should do. Google has started emailing affected accounts, flagging products that need updates ahead of the March deadline. Retailers should review their product data feeds now to ensure online and in-store items are properly segmented — especially if they rely on Local Inventory Ads or sell across multiple Google surfaces.

Why we care. Many retailers currently manage online and in-store versions of the same product under a single ID. Google’s update changes that assumption, pushing advertisers to explicitly separate products when attributes like price, availability, or condition aren’t identical.

The big picture. This update gives Google cleaner, more consistent product data across channels, but shifts more feed management responsibility onto advertisers — particularly large retailers with complex inventories.
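A minimal sketch of the split described above, with illustrative IDs and attribute values rather than a Google specification: when the in-store version differs on any attribute, it gets its own product ID and is managed as a separate feed item.

```python
# Online attributes become the default under the new system.
online_offer = {
    "id": "SKU123-online",      # illustrative ID scheme, not Google's
    "title": "Trail Running Shoe",
    "price": "89.99 USD",
    "availability": "in_stock",
}

# The in-store version differs on price, so it needs a distinct ID
# and is managed independently in the feed.
in_store_offer = {
    **online_offer,
    "id": "SKU123-local",
    "price": "94.99 USD",
}

feed = [online_offer, in_store_offer]
```

If the two channels were truly identical on every attribute, a single ID would still suffice; the split is only required where they diverge.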

First seen. News of Google’s communications was first reported by PPC News Feed founder Hana Kobzová.

Bottom line. If your online and in-store products aren’t truly identical, Google will soon require you to treat them as separate items, or risk issues with visibility and eligibility.

Dig Deeper. Update of multi-channel product system from Google.


Google to allow Prediction Markets ads under strict rules


Google is updating its advertising policies to allow ads for Prediction Markets in the U.S. starting January 21st — but only for federally regulated entities.

Who qualifies. Eligibility is limited to entities authorized by the Commodity Futures Trading Commission (CFTC) as Designated Contract Markets (DCMs) whose primary business is listing exchange-listed event contracts, or brokerages registered with the National Futures Association (NFA) that offer access to products listed by qualifying DCMs. Advertisers must also apply for Google certification to run ads in the U.S.

Why we care. Prediction markets have long been restricted on Google Ads. This change opens a new advertising channel while keeping tight controls around compliance and regulation. The narrow eligibility and certification requirements mean only compliant, federally regulated players can participate, potentially reducing competition. For qualifying advertisers, this offers earlier access to a high-intent audience within a tightly controlled ad environment.

The fine print. All ads, products, and landing pages must comply with applicable local laws, financial regulations, industry standards, and Google Ads policies. The new policy will appear in the Advertising Policies Help Center, with references in the Financial Services and Gambling and Games sections, and is available now for preview.

The big picture. Google is cautiously expanding access for prediction markets by recognizing them as regulated financial products — while continuing to block unregulated platforms.

Bottom line. Prediction market ads are coming to Google, but only for advertisers that meet strict federal and platform-level requirements.


A 90-day SEO playbook for AI-driven search visibility


SEO now sits at an uncomfortable intersection at many organizations.

Leadership wants visibility in AI-driven search experiences. Product teams want clarity on which narratives, features, and use cases are being surfaced. Sales still depends on pipeline.

Meanwhile, traditional rankings, traffic, and conversions continue to matter. What has changed is the surface area of search.

Pages are now summarized, excerpted, and cited in environments where clicks are optional and attribution is selective. 

When a generative AI summary appears on the SERP, users click traditional result links only about 8% of the time.

As a result, SEO teams need a clearer playbook for earning visibility inside generative outputs, not just around them.

This 90-day action plan outlines how to achieve this in a phased, weekly execution, with practical adjustments tailored to the specific purpose of the website.

Phase 1: Foundation (Weeks 1-2)

Define your ‘AI search topics’

Keywords still matter. But AI systems organize information around entities, topics, and questions, not just query strings.

The first step is to decide what you want AI tools to associate your brand with.

Action steps

  • Identify 5-10 core topics you want to be known for.
  • For each topic, map:
    • The questions users ask most often
    • The comparisons they evaluate
    • “Best,” “how,” and “why” queries that indicate decision-making intent

Example:

  • Topic: AI SEO tools
  • Mapped query types:
    • Core questions: What are the best AI SEO tools? How does AI improve SEO?
    • Comparisons: AI SEO tools vs traditional SEO tools.
    • Intent signals: Best AI SEO tools for content optimization.
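The mapping above can be kept as working data rather than a slide. A minimal sketch using the example topic, with all names and structure illustrative:

```python
# Topic-to-query map for the "AI search topics" exercise.
# Buckets follow the article: core questions, comparisons, intent signals.
ai_search_topics = {
    "AI SEO tools": {
        "core_questions": [
            "What are the best AI SEO tools?",
            "How does AI improve SEO?",
        ],
        "comparisons": ["AI SEO tools vs traditional SEO tools"],
        "intent_signals": ["Best AI SEO tools for content optimization"],
    },
}

# Flatten the map into a single review list for content planning.
all_queries = [
    query
    for topic in ai_search_topics.values()
    for bucket in topic.values()
    for query in bucket
]
```

Repeating this for each of the 5-10 core topics yields a query inventory that content briefs can be checked against.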

Where this shifts by website type

  • Content hubs (media brands, publishers, research orgs) should prioritize mapping educational breadth – covering a topic comprehensively so AI systems see the site as a reference source, not a transactional endpoint.
  • Services/lead gen sites (agencies, consultants, local businesses) should map problem-solution queries prospects ask before converting, especially comparison and “how does this work?” questions.
  • Product and ecommerce sites (DTC brands, marketplaces, subscription ecommerce, retailers) should map topics to use cases, alternatives, and comparisons – not just product names or category terms.
  • Commercial, long-funnel sites (B2B SaaS, fintech, healthcare) should anchor topics to category leadership – the “what is,” “how it works,” and “why it matters” content buyers research long before demos.

If you can’t clearly articulate what you want AI systems to associate you with, neither can they.

Dig deeper: Chunk, cite, clarify, build: A content framework for AI search

Create AI-friendly content structure

Generative engines consistently surface content that is easy to extract, summarize, and reuse. 

In practice, that favors pages where answers are clearly framed, front-loaded, and supported by scannable structure.

High-performing pages tend to follow a predictable pattern.

AI-friendly content structures include: 

  • A short intro (2-3 lines) that establishes scope.
  • A direct answer placed immediately after the header, written to stand alone if excerpted.
  • Bulleted lists or numbered steps that break down the explanation.
  • A concise FAQ section at the bottom that reinforces key queries.

This increases the likelihood your content is:

  • Quoted in AI Overviews.
  • Used in ChatGPT or Perplexity answers.
  • Surfaced for voice and conversational search.
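Put together, the four elements above suggest a page skeleton along these lines. This is a template, not a prescription; the headings and topic are placeholders.

```markdown
Short intro (2-3 lines) establishing what this page covers and for whom.

## What is [topic]?

[Topic] is [a direct, one- or two-sentence answer written to stand
alone if excerpted].

### How it works

1. Step one, stated plainly.
2. Step two.
3. Step three.

### FAQ

**Is [topic] different from [adjacent topic]?**
A concise answer that reinforces the key query.
```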

For ecommerce and services sites in particular, this is often where internal resistance shows up. Teams worry that answering questions too directly will reduce conversion opportunities. 

In AI-driven search, the opposite is usually true: pages that make answers easy to extract are more likely to be surfaced, cited, and revisited when users move from research to decision-making.

Dig deeper: Organizing content for AI search: A 3-level framework

Phase 2: Generative engine optimization (Weeks 3-6)

Optimize for AI answers (GEO/AEO)

In generative search, content that gets surfaced typically resolves the core question immediately, then provides context and depth. 

For many commercial teams, that requires rethinking how early pages prioritize explanation versus persuasion – a shift that’s increasingly necessary to earn visibility at all.

This is where GEO (generative engine optimization) and AEO (answer engine optimization) move from theory into page-level execution.

  • Add a 1–2 sentence TL;DR under key H2s that can stand on its own if excerpted.
  • Use explicit, question-based headers:
    • “What is…”
    • “How does…”
    • “Why does…”
  • Include clear, plain-language definitions before introducing nuance or positioning.

Example:

What is generative engine optimization?

Generative engine optimization (GEO) helps content get selected as a source in AI-generated answers.

In practice, GEO is the process of structuring and optimizing content so AI tools like ChatGPT and Google AI Overviews can interpret, evaluate, and reference it when responding to user queries.

How does answer-first structure change by site type?

  • Publishers benefit from definitional clarity because it increases citation frequency.
  • Lead gen sites see stronger mid-funnel engagement when prospects get clear answers upfront.
  • Product sites reduce friction by addressing comparison and “is it right for me?” questions early.
  • B2B platforms establish category authority long before a buyer ever hits a pricing page.

Add structured data (high impact, often underused)

Structured data remains one of the clearest ways to signal meaning and credibility to AI-driven search systems. 

It helps generative engines quickly identify the source, scope, and authority behind a piece of content – especially when deciding what to cite.

At a minimum, most sites should implement:

  • Article schema to clarify content type and topical focus.
  • Organization schema to establish the publishing entity.
  • Author or Person schema to surface expertise and accountability.

FAQ schema, where it reflects genuine question-and-answer content, can still reinforce structure and intent – but it should be used selectively, not as a default.

This matters differently by site type:

  • Content hubs benefit when author and publication signals reinforce editorial credibility and reference value.
  • Lead gen and services sites use schema to connect expertise to specific problem areas and queries.
  • Product and ecommerce sites help AI systems distinguish between informational content and transactional pages.
  • Commercial, long-funnel sites rely on schema to support trust signals alongside relevance in high-stakes categories.

Structured data doesn’t guarantee inclusion – but in generative search environments, its absence makes exclusion more likely.
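As a sketch, the baseline Article, Organization, and Person markup might be assembled like this; every name, URL, and value below is a placeholder, not a real entity.

```python
import json

# Minimal JSON-LD covering the three baseline schema types:
# Article (content type), Person (author), Organization (publisher).
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "A 90-day SEO playbook for AI-driven search visibility",
    "author": {
        "@type": "Person",        # surfaces expertise and accountability
        "name": "Jane Example",
        "jobTitle": "Head of SEO",
    },
    "publisher": {
        "@type": "Organization",  # establishes the publishing entity
        "name": "Example Media",
        "url": "https://example.com",
    },
}

# Serialize for embedding in a <script type="application/ld+json"> tag.
jsonld = json.dumps(article_schema, indent=2)
```

Validating the output against a schema testing tool before deployment catches type and property mismatches early.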


Phase 3: Authority and trust (Weeks 7-10)

Strengthen E-E-A-T signals

As generative systems decide which sources to reference, demonstrated experience increasingly outweighs polish alone. 

Pages that surface consistently tend to show clear evidence that the content comes from real people with real expertise. 

In other words, signals associated with E-E-A-T – experience, expertise, authoritativeness, and trust – remain central to how generative systems decide which sources to reference.

Key signals to reinforce:

  • Clear author bios that establish credentials, role, or subject-matter relevance.
  • First-hand experience statements that indicate direct involvement (“We tested…”, “In our experience…”).
  • Original visuals, screenshots, data, or case studies that can’t be inferred or synthesized.

This is where generic, AI-generated content reliably falls short. 

Without visible signals of experience and accountability, AI systems struggle to distinguish authoritative sources from interchangeable ones.

How different site types should demonstrate experience and authority

  • Media and research sites should reinforce editorial standards, sourcing, and author attribution to support citation trust.
  • Agencies and consultants benefit from foregrounding lived client experience and specific outcomes, not abstract expertise.
  • Ecommerce brands earn trust through real-world product usage, testing, and visual proof.
  • High-ACV B2B companies stand out by showcasing practitioner insight and operational knowledge rather than marketing language alone.

If your content reads like it could belong to anyone, AI systems will treat it that way.

Dig deeper: User-first E-E-A-T: What actually drives SEO and GEO

Build ‘citation-worthy’ pages

Certain page types are more likely to be cited in AI-generated answers because they organize information in ways that are easy to extract, compare, and reference. 

These pages are designed to serve as reference material – resolving common questions clearly and completely, rather than advancing a particular perspective.

Formats that consistently perform well include:

  • Ultimate guides that consolidate a topic into a single, authoritative resource.
  • Comparison tables that make differences explicit and scannable.
  • Statistics pages that centralize data points AI systems can reference.
  • Glossaries that define terms clearly and consistently.

Pages with titles such as “AI SEO Statistics (2025)” or “Best AI SEO Tools Compared” are frequently surfaced because they signal completeness, recency, and reference value at a glance.

For commercial sites, citation-worthy pages don’t replace conversion-focused assets. 

They support them by capturing early-stage, informational demand – and positioning the brand as a credible source long before a buyer enters the funnel.

Dig deeper: How generative engines define and rank trustworthy content

Phase 4: Multimodal SEO (Weeks 11-12)

Optimize beyond text

Generative systems increasingly synthesize signals across text, images, and video when assembling answers. 

Content that performs well in AI-driven search is often reinforced across formats, not confined to a single page or medium.

  • Add descriptive, specific alt text that explains what an image shows and why it’s relevant.
  • Create short-form videos paired with transcripts that mirror on-page explanations.
  • Repurpose core content into formats AI systems can encounter and contextualize elsewhere:
    • YouTube videos.
    • LinkedIn carousels.
    • X threads.

How this supports different site goals

  • Publishers extend the reach and reference value of core reporting and explainers.
  • Services and B2B sites reinforce expertise by repeating the same answers across multiple surfaces.
  • Ecommerce brands support discovery by contextualizing products beyond traditional listings and category pages.

Track AI visibility – not just traffic

As generative results absorb more of the discovery layer, traditional click-based metrics capture only part of search performance. 

AI visibility increasingly shows up in how often – and where – a brand’s content is referenced, summarized, or surfaced without a click.

With 88% of businesses worried about losing organic visibility in the world of AI-driven search, tracking these signals is essential for demonstrating continued influence and reach.

Signals worth monitoring include:

  • Featured snippet ownership, which often feeds AI-generated summaries.
  • Appearances within AI Overviews and similar answer experiences.
  • Brand mentions inside AI tools during exploratory queries.
  • Search Console impressions, even when clicks don’t follow.

For long sales cycles in particular, these signals act as early indicators of influence. 

AI citations and impressions often precede direct engagement, shaping consideration well before a buyer enters the funnel.
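One way to make “impressions without clicks” concrete is to summarize a performance export. The sketch below assumes a Search Console-style CSV with `query`, `clicks`, and `impressions` columns (the column names and sample data are illustrative) and pulls out the queries where your content was surfaced but never clicked.

```python
import csv
import io


def visibility_summary(csv_text: str) -> dict:
    """Summarize impressions that never turned into clicks.

    Assumes a CSV export with 'query', 'clicks', and 'impressions'
    columns; adjust the field names to match your actual export.
    """
    total_impressions = 0
    total_clicks = 0
    zero_click_queries = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        clicks = int(row["clicks"])
        impressions = int(row["impressions"])
        total_clicks += clicks
        total_impressions += impressions
        if impressions > 0 and clicks == 0:
            zero_click_queries.append(row["query"])
    return {
        "impressions": total_impressions,
        "clicks": total_clicks,
        "zero_click_queries": zero_click_queries,
    }


if __name__ == "__main__":
    sample = (
        "query,clicks,impressions\n"
        "best ai seo tools,12,480\n"
        "ai seo statistics 2025,0,350\n"
    )
    print(visibility_summary(sample))
```

Queries in `zero_click_queries` are candidates for the “visibility without a click” bucket: the content is being surfaced (and possibly summarized by AI systems), even though traditional traffic metrics show nothing.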

Dig deeper: LLM optimization in 2026: Tracking, visibility, and what’s next for AI discovery

Recommended tools

These tools support different parts of an SEO-for-AI workflow, from topic research and content structure to schema implementation and visibility tracking.

  • Content and AI SEO 
    • Surfer, Clearscope, Frase
    • Used to identify gaps in topical coverage and evaluate whether content resolves questions clearly enough to be excerpted in AI-generated answers.
  • Schema and structured data 
    • RankMath, Yoast, Schema App
    • Useful for implementing and maintaining schema that helps AI systems interpret content, authorship, and organizational credibility.
  • Visibility and performance tracking 
    • Google Search Console, Ahrefs
    • Essential for monitoring impressions, query patterns, and how content surfaces in search – including cases where visibility doesn’t result in a click.
  • AI research and validation 
    • ChatGPT, Perplexity, Gemini
    • Helpful for testing how topics are summarized, which sources are cited, and where your content appears (or doesn’t) in AI-driven responses.
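To illustrate the schema and structured data row above, here is a minimal sketch of a schema.org `Article` JSON-LD block built in Python. The `@type`, `author`, `publisher`, and `datePublished` properties are standard schema.org vocabulary; the helper function, field values, and output format are illustrative, and tools like Yoast or Schema App generate equivalent markup for you.

```python
import json


def article_jsonld(headline, author_name, org_name, date_published):
    """Build a minimal schema.org Article JSON-LD object.

    Values here are placeholders; real markup would typically also
    include url, image, and dateModified.
    """
    return {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "datePublished": date_published,
        "author": {"@type": "Person", "name": author_name},
        "publisher": {"@type": "Organization", "name": org_name},
    }


if __name__ == "__main__":
    snippet = json.dumps(
        article_jsonld("AI SEO Statistics (2025)", "Jane Doe", "Example Media", "2025-01-15"),
        indent=2,
    )
    # Embedded in a page as a <script type="application/ld+json"> block.
    print(snippet)
```

Markup like this gives AI systems an unambiguous, machine-readable statement of authorship and organizational credibility, reinforcing the experience signals discussed earlier.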

The rule that matters most

AI systems tend to favor content that provides definitive answers to questions. 

If your content can’t answer a question clearly in 30 seconds, it’s unlikely to be selected for AI-generated answers.

What separates teams succeeding in this environment isn’t experimentation with new tactics, but consistency in execution. 

Pages built to be understandable, referenceable, and trustworthy are the ones generative systems return to.
