What Is Google AI Mode and How Does It Work?

Does Google’s AI Mode mark a real shift in how search works? There’s a strong case that it does. And all businesses with an online presence need to pay attention, not just SEO folks. 

Given how big the change is, you likely have a lot of questions. 

What does AI Mode mean for your site traffic? How do you get featured? Do you need to change your content strategy? What happens to organic visibility as AI-generated answers become more common?

If you’re feeling uncertain, don’t worry. This guide breaks down what Google AI Mode actually is, how it works, and what it means for your site.

Key Takeaways

  • Google AI Mode is a search experience that builds on AI Overviews, offering deeper answers, reasoning, and more personalized responses.
  • AI Mode is currently available in English, with rollout expanding beyond early U.S. testing.
  • Users can access AI Mode directly from the Google homepage, where it functions through a conversational, ChatGPT-style interface.
  • Appearing in AI Mode is largely driven by strong SEO fundamentals, but brand mentions, structured data, and off-site signals play a growing role.
  • While AI Mode changes how results are presented, early data suggests users still click through to source content, especially for complex or high-consideration topics.

What Is Google’s AI Mode?

AI Mode is a search feature from Google designed to give direct, well-reasoned answers to complex queries. It builds on AI Overviews, using a similar process that combines AI-generated responses with content from traditional search results and the Knowledge Graph (Google’s database of factual information). 

It runs on a modified version of Gemini, Google’s core AI model, and analyzes information from multiple sources. It then synthesizes this information into a clear, concise answer that prioritizes reasoning and context, rather than just summarizing pages.

The interface feels a lot like an AI Overview—same layout and a similar answer—but with a box to ask follow-up questions at the bottom.

Google AI Mode example with the definition of what Google AI Mode is.

Here’s what Robby Stein, Google’s VP of Search, said about AI Mode in a post on The Keyword:

“Using a custom version of Gemini 2.0, AI Mode is particularly helpful for questions that need further exploration, comparisons and reasoning. You can ask nuanced questions that might have previously taken multiple searches — like exploring a new concept or comparing detailed options — and get a helpful AI-powered response with links to learn more.”

AI Mode integrates several elements from traditional search engine results pages (SERPs), such as Shopping listings and Maps.

Google AI Mode with a map of New York pizza places.

Finally, Google has said that it will continue to add new features. These include agentic workflows in conjunction with Project Mariner, increasing levels of personalization, and even custom charts and graphs. 

AI Mode Is Becoming an Interactive Application Layer

Google is actively turning AI Mode into a more interactive part of search, not just a place to read AI-generated answers.

Recent updates already point to deeper personalization, richer inline links, and more interactive result formats, including charts, comparisons, and visual outputs. With Gemini 3 now integrated directly into AI Mode, those interfaces are becoming more dynamic and tool-driven instead of purely informational.

 “We spend a ton of time focused on this question of when and how to show links, and how we can really make the web shine. It will continue to be an ongoing effort as AI Mode and the Search Results Page evolves,” says Stein.

Links in a Google AI Mode result.

This shift matters. Rather than sending users to external calculators, templates, or apps, Google is starting to surface that functionality directly inside search. For certain queries, AI Mode can simulate outcomes, compare options, or guide users through multi-step decisions without requiring a click to another site.

A graphic in a Google AI Mode result.

Over time, this opens the door to agent-driven experiences. In those scenarios, AI Mode does not just explain an answer. It helps users complete tasks, from planning and analysis to evaluation and execution, inside the search interface itself.

As Gemini becomes more tightly integrated across Search, AI Mode is moving closer to a default experience. For brands, this raises the bar. Content that wins in AI-first search needs defensible value, interactive depth, or proprietary insight, not just basic information.

How to Access Google’s AI Mode and Availability

Google AI Mode is now available beyond early U.S.-only testing, with a broader global rollout underway. Users accessing Google in supported regions can enter AI Mode directly from the Google homepage, where it appears alongside the main search experience rather than as an experimental feature.

Screenshot of the main Google search page.

When users tap “show more” on certain AI-generated results, the AI Overview expands. From there, users can click “Dive Deeper in AI Mode” to enter AI Mode. This signals a shift toward AI Mode acting as a default exploration layer, not a separate destination.

Diving deeper in an AI Mode result.

Once inside AI Mode, users can interact with responses conversationally, asking follow-up questions that carry context forward. Links to supporting pages remain available, and users can open their AI Mode history to continue conversations they previously started. 

AI Mode history.

Google has moved away from positioning AI Mode as a Labs experiment, and there is no longer a separate opt-in process. Access is tied to Google’s standard search interface, and availability is expanding as Google refines performance, localization, and personalization features.

Timeline of Google AI Mode

While most people think of AI as starting with ChatGPT, Google’s been building AI tools for decades. 

AI Mode is part of Google’s broader family of AI tools, which includes Veo, a video generation model; Imagen, a text-to-image model; Project Mariner, an agent that can automate tasks; and others. 

Here’s a short timeline that puts AI Mode in context:

  • May 2017: CEO Sundar Pichai announces the launch of a dedicated AI division called Google AI at I/O, the company’s annual developer conference. 
  • March 2023: Google opens early access to Bard, its first generative AI chatbot. Global availability follows later that year.
  • December 2023: Google announces Gemini, a multimodal LLM that can work with different content inputs (images, voice, and text). 
  • February 2024: Bard is coupled with Duet AI, Google’s Workspace AI assistant, and rebranded to Gemini.
  • May 2024: AI Overviews, which evolved from the Search Generative Experience, are first released. The feature reaches broad availability later in the year, combining generative AI with Google’s traditional information retrieval systems.
  • May 2025: Google releases AI Mode, a ChatGPT-style interface available on its homepage. It builds on the core functionality of AI Overviews and is initially available only in the U.S. Early access is limited, but usage expands rapidly.
  • August 2025: Google begins a more comprehensive global rollout of AI Mode, signaling its transition from a test experience to a core part of Search. Google also increases the number of links shown in AI Mode. Searchers begin to see inline link carousels and contextual introductions explaining why a link might be useful to visit.
  • November 2025: Google integrates Gemini 3 and Nano Banana into AI Mode.

Using AI Mode: AI Overviews vs. AI Mode

Time for the unboxing. To illustrate how AI Mode differs from AI Overviews, consider a simple comparison scenario.

First, a general query is entered into standard Google Search: “What will be the most popular spring break destinations this year.” This triggers an AI Overview.

Google search results for "What will be the most popular spring break destinations this year."

AI Overview analyzes the query, considers general context such as location, and pulls information from multiple sources, stitched together into a quick summary. 

Next, the query becomes a bit more specific: “what will be the most popular spring break destinations this year with a 6-month-old baby.”

AI Overview adjusts the response based on the added constraint, returning suggestions that better match the scenario while still relying on summarization.

Google search results for "what will be the most popular spring break destinations this year with a 6-month-old baby."

The same queries are then entered into Google’s AI Mode using the dedicated prompt box.

The initial response looks similar, but with a subtle shift. Instead of simply summarizing existing information, AI Mode applies additional reasoning to evaluate suitability and trade-offs.

Google AI Mode results for "What will be the most popular spring break destinations this year."

A follow-up question is then added without restating the full context.

AI Mode retains the earlier details, understands the added nuance, and returns a more detailed, logically structured set of recommendations. This ability to carry context forward highlights one of the key differences between AI Mode and AI Overviews.

Google AI Mode results for "what will be the most popular spring break destinations this year with a 6-month-old baby."

How Is AI Mode Different from AI Overviews and Gemini?

Simply put, AI Mode is an expanded version of AI Overviews. It incorporates and builds on their features, and both run on Gemini, Google’s core model. 

Here’s how AI Mode compares to AI Overviews:

  • More advanced reasoning: While AI Overview summarizes information from across sources, AI Mode interprets that information, connects related concepts, and surfaces conclusions based on reasoning rather than aggregation alone.
  • Multimodal understanding: In the Google app (on Android and iOS), AI Mode can also answer questions based on photos and images. 
Meet AI Mode landing page.
  • Better handling of complex questions: AI Overview works well for simple, fact-based queries, but AI Mode is designed for nuanced, multi-layered, or exploratory questions that benefit from context and comparison.
  • Follow-ups: You can ask follow-up questions, and the AI will respond based on the ongoing context in a conversational style.

AI Mode is also evolving in how it presents sources. Searchers increasingly see inline links, carousels, and contextual explanations that clarify why a particular source may be useful, rather than a static list of citations.

Research conducted by NP Digital shows that these features match emerging user demand. We found, for example, that 72% of people are inputting very precise, “exactly what I want” queries. And 76% are opting for more human-like and conversational interactions. 

NP Digital Graph showing search trends by generative AI.

What Is the Technology Behind AI Mode?

LLMs are vastly complex systems, and Gemini, the model that powers AI Mode, is no different. However, three main technologies separate AI Mode from standard gen AI bots and AI Overviews. 

Here are the three core processes that power AI Mode: 

  • AI Mode uses a query fan-out technique. This involves breaking a query into subtopics and researching them in parallel. It then combines dozens of information points into a single answer. 
  • Structured logic is a key part of how AI Mode works. It takes a query, creates a reasoning chain (e.g., “the user is looking for a water bottle for hiking, therefore features should include durability and size, therefore a minimum capacity of 3 liters is needed,” and so on), and then validates answers against these steps to determine suitable outcomes. 
  • Personal context plays a significant role. AI Mode records conversations over time and builds a picture of individual user preferences, adjusting responses based on past inputs. It does this by creating a sort of digital ID—called a vector embedding—that is included in the answer generation process. This is a form of background memory that works in much the same way as ChatGPT’s memory feature.
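To make the fan-out idea concrete, here is a toy sketch in Python. It is not Google’s implementation—the subtopics, the mini “knowledge base,” and the synthesis step are all invented for illustration—but it shows the basic shape: split a query into subtopics, research them in parallel, then merge the findings into one answer.

```python
# Toy illustration of query fan-out: break a query into subtopics,
# research them in parallel, and merge the results. All data below
# is invented purely for this sketch.
from concurrent.futures import ThreadPoolExecutor

# Hypothetical mini knowledge base standing in for live retrieval
KNOWLEDGE = {
    "durability": "Stainless steel bottles survive drops better than plastic.",
    "capacity": "Day hikes typically call for 2-3 liters of water.",
    "weight": "Lighter bottles reduce pack load on long trails.",
}

def fan_out(query: str) -> list[str]:
    """Split a query into subtopics (hardcoded here for illustration)."""
    return ["durability", "capacity", "weight"]

def research(subtopic: str) -> str:
    """Stand-in for one parallel sub-search."""
    return KNOWLEDGE.get(subtopic, "no data")

def answer(query: str) -> str:
    subtopics = fan_out(query)
    with ThreadPoolExecutor() as pool:  # run sub-searches concurrently
        findings = list(pool.map(research, subtopics))
    return " ".join(findings)           # naive synthesis step

print(answer("best water bottle for hiking"))
```

The real system presumably generates subtopics with an LLM and retrieves from the live index, but the parallel-then-merge structure is the core of the technique.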

How to Optimize Your Site for AI Mode

So-called GEO—generative engine optimization—is big business at the moment. However, there’s still a lot of uncertainty about what directly influences visibility in AI Mode, and many claims go beyond what Google has actually confirmed.

Rather than chasing shortcuts, the clearer pattern is that AI Mode rewards the same fundamentals Google has emphasized for years — with a few emerging signals becoming more important as AI-generated results mature.

Let’s look at what we actually know about “ranking” in AI Mode.

1. Traditional SEO principles still apply

Google has been pretty unequivocal about this. Traditional SEO is still the most important activity for appearing in AI Overviews and AI Mode. 

As long as you follow SEO basics—create useful content, generate natural backlinks, and optimize technical health—you’re ahead of 90% of the competition. 

Research also backs this up. Ziptie, for example, found that sites with a number one ranking in traditional search results are 25% more likely to be featured in AI Overviews. 

2. Indexed web pages are eligible to appear in AI Mode

On the technical front, there’s good news. As long as a page is indexed, it’s eligible to appear in AI Mode. There are no other requirements. You can check that your pages are indexed using the URL Inspection tool in Search Console. 

If you’re having issues, check that you’re adhering to Google Search’s technical requirements. Make sure Googlebot isn’t blocked, pages return 200 success codes, and content doesn’t violate spam policies.
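One quick sanity check you can run yourself is whether your robots.txt blocks Googlebot. The sketch below uses Python’s standard-library robots.txt parser; the robots.txt content and the example.com URLs are placeholders—in practice you would fetch your own site’s file.

```python
# Check whether Googlebot is allowed to crawl given paths, using a
# sample robots.txt (inlined here; fetch your own site's in practice).
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: Googlebot
Disallow: /admin/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Public content should be crawlable; the admin area should not be.
print(rp.can_fetch("Googlebot", "https://example.com/blog/post"))      # True
print(rp.can_fetch("Googlebot", "https://example.com/admin/settings")) # False
```

If a page you want in AI Mode comes back `False` here, fixing the robots.txt rule is the first step before re-requesting indexing.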

3. Forum and discussion board citations matter

Recent analysis across multiple large language models shows that discussion forums and Q&A platforms are frequently referenced when generating explanatory or opinion-based answers, particularly for queries that benefit from lived experience or peer discussion.

Reddit, in particular, continues to surface prominently across AI-generated responses, in part due to its scale, freshness, and breadth of first-hand commentary. However, the weighting of any single forum is dynamic and continues to evolve as Google refines how AI Mode sources and cites content.

Given Reddit and Google’s partnership, it’s likely that well-moderated, high-signal community content remains an important input for Gemini-powered experiences.

If you haven’t already, build up a presence on Reddit and other similar forums and discussion boards. This can help reinforce topical authority and increase the likelihood of being referenced in AI-generated answers.

4. Schema markup (structured data) gives you a boost

Schema markup, also called structured data, is a type of code that you add to your content. It gives search engines and AI systems additional information to help them understand what the content is about. One simple example is marking a recipe page with “@type”: “Recipe”.

Research by Aiso has shown that LLMs extract more accurate data from pages with schema markup, with a 30% improvement in quality. 

Using schema markup helps reduce ambiguity for AI-generated answers and increases the likelihood that your content is interpreted correctly. Fortunately, adding schema to your web page is relatively straightforward.
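Here is a minimal sketch of what a Recipe schema block looks like, generated in Python and wrapped in the JSON-LD script tag that Google recommends for structured data. The recipe name, author, and ingredients are invented for the example.

```python
# Build a Recipe structured-data block (JSON-LD) and wrap it in the
# <script> tag used to embed it in a page's HTML. Values are made up.
import json

recipe_schema = {
    "@context": "https://schema.org",
    "@type": "Recipe",
    "name": "Classic Banana Bread",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "prepTime": "PT15M",  # ISO 8601 duration: 15 minutes
    "cookTime": "PT1H",   # 1 hour
    "recipeIngredient": ["3 ripe bananas", "2 cups flour", "1 cup sugar"],
}

# Embed in your page's <head> as a JSON-LD script tag
html_snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(recipe_schema, indent=2)
    + "\n</script>"
)
print(html_snippet)
```

After adding a block like this, you can validate it with Google’s Rich Results Test to confirm the markup is read correctly.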

5. Digital PR is important

LLMs access information in two ways. They are initially trained on a large amount of information—called training data—and they can also access new online content, such as news articles. 

Digital PR is all about acquiring mentions and backlinks from reputable third-party sources, especially media websites. 

Brand mentions boost visibility in LLM training materials and strengthen topical associations (a measure of the number of times you’re cited in relation to a specific subject), meaning you’re more likely to appear in responses. 

Digital PR involves creating share-worthy content and contacting journalists and site admins to ask them to feature you. Our research shows that original research and tools are especially good at encouraging people to talk about your brand. 

NP Digital graph showing how different content formats are proven to generate links.

6. Be ready to test and track AI visibility

As AI Mode becomes more integrated into the search experience, visibility is no longer limited to rankings alone. Brands need ways to measure whether — and how often — their content appears in AI-generated answers.

New AI visibility platforms, such as Writesonic and Profound, are emerging to help track citations, brand mentions, and source inclusion across large language models. These tools provide early signals about which content formats, topics, and entities are being surfaced by AI systems.

Monitoring this data allows teams to validate whether SEO, digital PR, and structured data efforts are translating into real AI exposure. It also makes it easier to spot gaps, test changes, and adapt as Google continues to evolve AI Mode.

Treat AI visibility tracking as a complement to traditional performance metrics, not a replacement. Both matter.

What Does AI Mode Mean for the Future of Search?

There are a lot of unknowns about how increased use of AI tools will affect the way people look for information. That said, emerging usage patterns are already pointing to meaningful shifts in how AI SEO is evolving.

With that in mind, here are five implications for the future of search as AI Mode becomes more prominent:

Searchers will still click through to websites: Early performance data from AI-generated results shows that clicks are reduced for some informational queries, but not eliminated. Users continue to seek out original content, particularly for complex decisions, comparisons, and high-consideration topics.

NP Digital graph showing the impact on clicks to websites from Google integrating AI.

Long-play brand building will become more common: LLMs use third-party brand mentions to measure the authority of publishers. Popular brands are cited more by gen AI search tools and, as such, long-term brand building with an outlook of five years and above will become much more common. 

NP Digital graphic showing the length of time to build a recognizable brand.

Marketing strategies will become more omnichannel: As AI Mode absorbs more discovery queries, brands will need visibility across multiple platforms, not just Google’s traditional results. This reinforces a broader “search everywhere” approach, where discovery happens across AI tools, social platforms, and communities.

NP Digital graph showing the number of daily searches per platform.

People will favor AI for more specific searches: Analysis of large query sets shows that AI-generated results appear more frequently for longer, more specific searches. Short, navigational queries may still rely on traditional results, while nuanced questions increasingly trigger AI Mode.

NP Digital graph showing the frequency of AI overviews by search query length.

Trust in AI will continue to grow: Hallucinations remain a problem with AI Overviews, and AI Mode also makes mistakes, according to user reports. That said, user adoption and satisfaction with AI-powered search tools are trending upward. As Google refines AI Mode, usage is likely to grow alongside improvements in reliability and transparency.

NP Digital graph showing the user satisfaction with AI overviews over time.

FAQs

What is Google AI Mode?

Google AI Mode is a conversational search experience powered by Gemini, Google’s core AI model. It provides more detailed, context-aware answers to search queries, similar in format to tools like ChatGPT, but integrated directly into Google Search.

Instead of returning a list of links first, AI Mode synthesizes information from multiple sources and presents a reasoned response, with links available for deeper exploration. Users can ask follow-up questions, and the system carries context forward, making the interaction feel more like an ongoing conversation.

AI Mode builds on AI Overviews but goes further by handling complex, multi-step, or exploratory queries more effectively.

How do you use Google AI Mode?

In supported regions, users can access AI Mode directly from the Google homepage. On some AI-generated results, expanding the AI Overview and selecting “Dive Deeper in AI Mode” opens the same conversational experience, allowing users to continue their search without returning to traditional results.

Once inside AI Mode, questions can be entered conversationally, and follow-ups don’t require repeating the original context. Users can still click through to source pages or switch back to standard search results at any point.

AI Mode is no longer accessed through Google Labs, and there is no separate opt-in process.

How do you optimize your website for Google AI Mode?

Start with strong SEO fundamentals, which Google has confirmed remain the primary eligibility signals. Beyond that, sites that appear most often in AI-generated answers tend to share a few traits:

  • Create useful, high-quality content that fully addresses search intent.
  • Make sure pages are indexed and technically accessible
  • Use schema markup to clarify meaning and structure
  • Earn third-party brand mentions from trusted publishers and communities
  • Build topical authority through consistent, focused publishing

Visibility in AI Mode is not guaranteed, but sites that are trusted, well-structured, and frequently cited are more likely to be referenced in AI-generated responses. 

Search Is Changing but the Fundamentals Still Apply

The way people search is changing, and Google AI Mode is accelerating that shift.

People are finding information across a host of different platforms, not just Google. AI-generated answers are reducing clicks. And traditional content publishers are under pressure as gen AI eats up demand. 

At the same time, AI Mode doesn’t discard the fundamentals that have always mattered. Google is still prioritizing relevance, authority, and usefulness — it’s just surfacing them in new ways. Sites that understand search intent, build credibility beyond their own domains, and structure content clearly are better positioned to stay visible as AI Mode expands.

From the very start, Google had one aim: to solve users’ needs. That’s also what AI tools seek to do, and their models will continuously be designed to that end. 

Understanding your customers—and providing what they want through high-quality, useful content—is the best way of futureproofing your business and ensuring long-term visibility in LLMs.


How Marketers Are Spending in 2026

Marketing budgets aren’t collapsing in 2026, but they are shifting. That’s the part many teams miss.

That distinction matters. Rising media costs, weaker attribution, privacy changes, and AI-driven search shifts have created real pressure, but the data shows budgets are still moving into marketing. They’re just moving with more intent.

Our latest NP Digital research on how marketers are spending their money in 2026 shows a clear pattern: teams are reallocating toward channels that defend ROI, compound value, and hold up under volatility. This article breaks down what’s changing, why it’s happening, and how to think about your own marketing budget for 2026 without relying on outdated assumptions.

Key Takeaways

  • Marketing budgets in 2026 are not shrinking. They’re being consolidated around confidence, efficiency, and defensibility. 
  • Channels tied directly to conversion, retention, and owned data are absorbing spend, while those with declining signal quality or unclear ROI are losing ground. 
  • SEO and content are not disappearing, but expectations have shifted toward extractability, authority, and measurable downstream impact. 
  • Paid media still plays a critical role, but marginal efficiency now determines where dollars stay or move. 
  • Teams that can reallocate budget quickly, based on real performance signals, are gaining a structural advantage.

The State of the Marketing Budget in 2026

Let’s start with the context that’s shaping every budget decision this year.

Media costs continue rising across search and social. CPCs aren’t coming down, and competition for attention keeps intensifying. At the same time, privacy changes have reduced signal quality, making it harder to target precisely and measure accurately.

Economic uncertainty is pushing marketers to defend ROI more aggressively than ever. Every dollar needs a clear path to revenue, and channels that can’t prove their value are getting cut.

AI adoption has accelerated faster than most teams can operationalize. Nearly everyone is experimenting, but few have figured out how to turn that experimentation into systematic advantage. The gap between “using AI” and “getting results from AI” is wider than you’d think.

Here’s the good news: budgets are not disappearing. They are being reallocated with intent. The marketers who understand where efficiency lives and where it’s eroding are the ones capturing share.

What’s Driving Budget Decisions

The shift in spending comes down to a few core factors:

Purchase journeys are more complex. 94% of purchase journeys now involve multiple touchpoints. Search and social are the most influential, appearing in 79% and 73% of journeys respectively. But they rarely operate in isolation. Budgets are being distributed to support visibility across the full path to purchase, not just the final click.

Information about purchase journeys.

Attribution is noisier. Third-party signals keep degrading, so budgets are following channels that stay measurable. Paid search, email, and CRO all offer clearer attribution than many emerging channels. In uncertain conditions, that clarity matters.

Organic reach is declining. Zero-click searches now account for roughly 58-60% of Google searches. Organic listings are being pushed below the fold by AI Overviews, ads, and SERP features. This is reducing organic click opportunities and increasing reliance on paid coverage.

Efficiency matters more than volume. When media costs rise and margins compress, growth comes from doing more with what you have. That’s why CRO, lifecycle marketing, and retention are getting more investment even as some acquisition channels face cuts.

The marketers who are winning in 2026 understand that budget decisions aren’t about chasing trends. They’re about matching investment to where performance can be proven and defended.

Common themes across budget reallocations

Where Budgets Are Growing, Holding, and Declining

Let’s look at the actual spending patterns across channels. We’ll start with the big picture, then break down what’s happening in each major category.

Overall Marketing Budget Direction

61% of B2B marketers are increasing overall spend this year, with 20% holding flat and 19% decreasing. B2C is slightly more cautious: 57% are increasing, 32% holding flat, and 11% decreasing.

The takeaway? Growth budgets still exist, but they’re being deployed more carefully than in previous years.

The Biggest Budget Shifts Since 2025

Here’s where the reallocation is happening:

SEO spend has rebounded sharply. After a softer 2025, 61% of marketers are now increasing SEO budgets (up from 44% last year). The return of confidence in organic search reflects a few things: better AI tools for content production, clearer ROI measurement, and recognition that organic visibility still matters even in a zero-click environment.

AI SEO investment is accelerating dramatically. 98% of marketers plan to increase AI SEO spend in 2026. This isn’t just hype. Teams have figured out that AI can accelerate research, content production, and optimization cycles without sacrificing quality.

CRO and UX remain a priority. 52% are increasing spend, and only 25% are planning decreases. When traffic is harder to earn, you optimize what you have. CRO delivers measurable improvements regardless of where visitors come from.

Content creation growth has slowed. Only 32% plan increases, while 31% plan to reduce spend. This reflects a shift away from volume-based content strategies toward fewer, higher-quality assets that can be repurposed across channels.

Organic social media is facing the steepest pullback. 64% of marketers are planning budget decreases. Organic reach has declined to the point where most brands treat social as a support channel, not a growth engine.

Email and lifecycle budgets have stabilized. 60% are keeping spend flat and 23% are increasing. Email remains one of the most reliable channels for retention and conversion, especially as first-party data becomes more valuable.

The pattern across all of this? Increased focus on channels tied to conversion and retention. Reduced investment in traditional advertising channels with declining efficiency signals. And a shift away from broad content volume toward targeted execution. 

Channel-by-Channel Breakdown

Now let’s get specific. Here’s what’s happening in each major channel category.

SEO and Organic Search

Information about SEO and Organic Search Budget Trends.

SEO budgets are rebounding, but the strategy is changing. Digital channels now represent 61.1% of total marketing spend, and organic search remains a major piece. But zero-click searches and AI Overviews are changing how value gets captured.

Search is becoming answer-first. Google increasingly resolves intent directly in the SERP through AI Overviews, featured snippets, and knowledge panels. This means fewer clicks but doesn’t make SEO irrelevant, just less predictable on its own. SEO needs to optimize for visibility and citation, not just click-through.

Treat rankings as one output among several that matter. Visibility in AI Overviews and featured snippets matters as much as position one. Prioritize topics tied to revenue intent and customer lifecycle stages. Build content that can win both ways: clicks and citations. Measure organic success across visibility, assisted conversion, and brand lift. More brands are pairing search with other channels, like community, that capture attention off the SERP.

AI systems increasingly resolve intent directly in the SERP, which concentrates click opportunities into fewer, higher-intent moments. Brands that show up consistently in AI-generated answers are building trust and authority even when users don’t click.

Content and Thought Leadership

Content budgets are being reallocated toward assets that influence discovery, trust, and conversion across channels. Thought leadership is increasingly used to earn inclusion in search results and AI-generated answers.

Content still fuels discovery, even when the click doesn’t happen immediately. Strong content is what AI systems summarize, cite, and pull into answers. In a noisy market, a differentiated perspective is one of the few advantages you can own.

Design content for multiple outputs: search, AI summaries, social, sales. Prioritize fewer topics with deeper authority and a clearer point of view. Shift from publishing volume to publishing leverage. Use AI for research acceleration and synthesis, but keep humans in charge of insight, brand voice, and editorial judgment.

Creators especially matter here as a result. They help brands move beyond renting attention and toward building long-term loyalty that holds up even as platforms and algorithms change. This is important because things like original insight, point of view, brand voice, and credibility are not things AI can manufacture on its own. Editorial judgment and prioritization are still very human decisions.

AI can help scale content, but the trust, experience, and perspective that influencers, creators, and SMEs offer gives content weight and relevance with an audience.

Paid Search

A graphic about paid search budgets.

Paid search remains a core demand capture channel, but expectations have reset. CPC inflation and competition continue to compress efficiency. Reduced organic click availability increases reliance on paid coverage.

Shift from keyword expansion to coverage efficiency. Prioritize high-intent, defensible queries over volume. Use fewer keywords with tighter control. Coordinate more closely with SEO and CRO. Put higher emphasis on marginal ROI rather than raw spend growth.

AI and automation now control bidding, targeting, and pacing by default. Competitive advantage shifts to inputs: structure, data quality, conversion signals.

Paid Social

Paid social remains the most flexible scaled reach channel. Platform-level shifts show TikTok leading growth at 57%, YouTube at 53%, and Instagram at 46%. Facebook is under pressure, with 36% decreasing spend and only 18% increasing.

Creative velocity matters more than audience hacks. Message clarity beats novelty. Platform-native formats outperform repurposed ads. Measurement focuses on incremental lift, not just ROAS. Close alignment with lifecycle and email capture turns paid social prospects into owned relationships.

Organic Social

A graphic about organic social media budget direction.

Some cuts are dramatic—and predictable.

  • Organic social: 64% decreasing investment. 
  • Content creation volume: Only 32% increasing; 31% decreasing. 
  • Traditional display: Banner ads are essentially frozen (63% flat). 
  • Facebook paid: 36% decreasing. 

The pattern is clear: teams are cutting channels with declining reach, opaque ROI, or inflated costs.

But that doesn’t mean content or social isn’t important—it simply means they’re no longer funded as volume engines. The strategy is changing, not disappearing.

Influencer Marketing

Community building is one of the strongest growth areas in 2026 budgets, with 69% of marketers increasing spend. Influencer marketing is seeing even stronger growth at 78%. These channels support retention, referrals, and brand defensibility.

Referrals from friends and direct traffic drive more conversions than any paid channel. Don’t focus only on the channels that capture the final conversion; invest in the ones that create brand awareness and influence purchase decisions earlier in the journey.

Email + Lifecycle

A graphic about email and lifecycle marketing budget momentum.

Email and lifecycle budgets remain resilient because performance is driven by trust, relevance, and timing. 60% are keeping spend flat and 23% are increasing. First-party data enables consistent message delivery when paid reach and signal quality decline.

Customer acquisition isn’t the only scalable lever anymore. Retention is the controllable one. Retention programs stabilize margins as media costs, auctions, and platforms stay volatile.

AI enables real-time message sequencing based on behavior, dynamic content assembly across email and SMS, and faster iteration without rebuilding entire lifecycle programs.

CRO and UX

CRO, UX, and First-Party Data investment trends.

CRO and UX are treated as defensive investments that improve performance regardless of traffic source. 52% are increasing spend. Traffic is harder to earn and easier to lose. Fewer clicks mean every visit carries more revenue weight.

AI-assisted test generation allows faster signal detection across variants and continuous optimization tied to real behavior.

A Simple Framework: How to Build a Smarter 2026 Marketing Budget

A framework on building 2026 marketing budgets.

Here’s a practical framework for budget agility.

Anchor spend in proven demand. Protect budgets tied directly to revenue and high-intent activity. These are your foundation channels.

Build flexibility around performance signals. Shift dollars based on real outcomes. Don’t lock yourself into annual commitments for channels that aren’t delivering.

Separate experimentation from core investment. Test intentionally without destabilizing what works. Set aside 10-15% of budget for testing new channels and tactics.

Reallocate faster than your competitors. Speed of adjustment becomes a competitive advantage in volatile conditions. Review performance monthly and be willing to move budget mid-quarter.

The winners in 2026 will be faster, not just bigger. Budgets are consolidating around fewer, higher-confidence channels. Efficiency and retention now matter as much as acquisition. AI is reshaping how value is captured, not just how work gets done. Visibility, conversion, and experience must be planned together.

Conclusion

Marketing in 2026 requires a different approach to budgeting. The channels that worked three years ago still work, but they work differently. The measurement that mattered in 2023 doesn’t tell the full story anymore. The strategies that justified budget in 2024 need updating for how search, social, and AI have evolved.

The marketers who thrive this year will be the ones who allocate budget where performance is provable, build systems that compound value over time, and move faster than their competitors when signals change.

If you need help translating these budget signals into a channel-specific growth plan, aligning SEO, paid media, content, and lifecycle into one system, or building measurement models that reflect zero-click and AI-driven behavior, we can help. Reach out to discuss your 2026 strategy.


Google Shopping API cutoff looms, putting ad delivery at risk


Google Shopping API migration deadlines are approaching, and advertisers who don’t act risk disrupted Shopping and Performance Max campaigns.

What’s happening. Google is sunsetting older API versions and pushing all merchants toward the Merchant API as the single source of truth for Shopping Ads. Advertisers can confirm which API they’re using in Merchant Center Next by checking the “Source” column under Settings > Data sources, where any listing marked “Content API” requires action.

Why we care. Google is actively reminding advertisers to migrate to the new Merchant API, with beta users required to complete the switch by Feb. 28, and Content API users by Aug. 18. If feeds aren’t properly reconnected, campaigns that rely on product data — especially those using feed labels — may stop serving altogether.

The risk. Feed labels don’t automatically carry over during migration. If advertisers don’t update their campaign and feed configurations in Google Ads, Shopping and Performance Max setups that depend on those labels for structure or bidding logic can quietly break.

What to do now. Google recommends completing the migration well ahead of the deadline, reviewing feed labels, and validating campaign delivery after reconnecting feeds. The transition was first outlined in mid-2024, but enforcement is now imminent as Google moves closer to fully retiring legacy APIs.

Bottom line. This isn’t a cosmetic backend change — it’s a technical cutoff that can directly impact revenue if ignored.

First seen. This update was spotted by Google Shopping Specialist Emmanuel Flossie, who shared the warnings he received on LinkedIn.


Does llms.txt matter? We tracked 10 sites to find out


The debate around llms.txt has become one of the most polarized topics in web optimization.

Some treat llms.txt as foundational infrastructure, while many SEO veterans dismiss it as speculative theater. Platform tools flag missing llms.txt files as site issues, yet server logs show that AI crawlers rarely request them.

Google even adopted it. Sort of. In December, the company added llms.txt files across many developer and documentation sites.

The signal seemed clear: if the company behind the sitemap standard is implementing llms.txt, it likely matters.

Except Google pulled it from its Search developer docs within 24 hours.

Google’s John Mueller said the change came from a sitewide CMS update that many content teams didn’t realize was happening. When asked why the files still exist on other Google properties, Mueller said they aren’t “findable by default because they’re not at the top-level” and “it’s safe to assume they’re there for other purposes,” not discovery.

The llms.txt research

We wanted data, not debates.

So we tracked llms.txt adoption across 10 sites in finance, B2B SaaS, ecommerce, insurance, and pet care — 90 days before implementation and 90 days after.

We measured AI crawl frequency, traffic from ChatGPT, Claude, Perplexity, and Gemini, and what else these sites changed during the same window.

The results:

  • Two of the 10 sites saw AI traffic increases of 12.5% and 25%, but llms.txt wasn’t the cause.
  • Seven sites saw no measurable change.
  • One site declined by 19.7%.

The 2 ‘success’ stories weren’t about the file

The Neobank: 25% growth

This digital banking platform implemented llms.txt early in Q3 2025. Ninety days later, AI traffic was up 25%.

Here’s what else happened in that window:

  • A PR campaign around its banking license, with coverage in major national publications.
  • Product pages restructured with extractable comparison tables for interest rates, fees, and minimums.
  • Twelve new FAQ pages optimized for extraction.
  • A rebuilt resource center with new banking information and concepts.
  • Technical SEO issues, like header structures, fixed. 

When a company gets Bloomberg coverage the same month it launches optimized content and fixes crawl errors, you can’t isolate llms.txt as the growth driver.

The B2B SaaS platform: 12.5% growth

This workflow automation company saw traffic jump 12.5% two weeks after implementing llms.txt.

Perfect timing. Case closed. Except…

Three weeks earlier, the company published 27 downloadable AI templates covering project management frameworks, financial models, and workflow planners. Functional tools, not content marketing, drove the engagement behind the spike.

Google organic traffic to the templates rose 18% during the same period and continued climbing throughout the 90 days we measured.

Search engines and AI models surfaced the templates because they solved real problems and formed an entirely new site section — not because they were listed in an llms.txt file.

The 8 sites where nothing happened after uploading llms.txt

Of the remaining eight sites, seven saw no measurable change and one declined by 19.7%.

The decline came from an insurance site that implemented llms.txt in early September. The drop likely had nothing to do with the file.

The same pattern showed up across all traffic channels. The llms.txt file neither prevented the decline nor created any advantage.

The other seven sites — ecommerce (pet supplies, home goods, fashion), B2B SaaS (HR tech, marketing analytics), finance, and pet care — all documented their best existing content in llms.txt. That included product pages, case studies, API docs, and buying guides.

Ninety days later, nothing changed. Traffic stayed flat. Crawl frequency was identical. The content was already indexed and discoverable, and the file didn’t alter that.

Sites that launched new, functional content saw gains. Sites that documented existing content saw no gains.

Why the disconnect?

No major LLM provider has officially committed to parsing llms.txt. Not OpenAI. Not Anthropic. Not Google. Not Meta.

Google’s Mueller put it plainly:

  • “None of the AI services have said they’re using llms.txt, and you can tell when you look at your server logs that they don’t even check for it.”

That’s the reality. The file exists. The advocacy exists. Platform adoption doesn’t, at least not yet.
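Mueller’s server-log point is easy to verify on your own site. A minimal sketch, assuming a standard combined-format access log (the sample lines, log format, and regex are illustrative, not tied to any particular server):

```python
import re

# Combined log format puts the request line in quotes, e.g.:
# 1.2.3.4 - - [10/Jan/2026:00:00:00 +0000] "GET /llms.txt HTTP/1.1" 200 512 "-" "GPTBot"
REQUEST_RE = re.compile(r'"(?:GET|HEAD) /llms\.txt[ ?"]')

def count_llms_txt_hits(log_lines):
    """Count log lines whose request line targets /llms.txt."""
    return sum(1 for line in log_lines if REQUEST_RE.search(line))

# Illustrative sample; in practice you would iterate over your real log file.
sample = [
    '1.2.3.4 - - [...] "GET /llms.txt HTTP/1.1" 200 512 "-" "GPTBot"',
    '1.2.3.4 - - [...] "GET /index.html HTTP/1.1" 200 9000 "-" "Mozilla/5.0"',
]
```

Run this over a few weeks of logs and filter by AI user agents (GPTBot, ClaudeBot, PerplexityBot) to see whether any of them ever request the file.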

The token efficiency argument (and its limits)

The strongest case for llms.txt is about efficiency. Markdown saves time and tokens when AI agents parse documentation. Clean structure instead of complex HTML with navigation, ads, and JavaScript.
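As a rough illustration of that gap, this sketch uses Python’s stdlib HTML parser to strip page chrome and compares the HTML and markdown versions of the same content (both page snippets are hypothetical):

```python
from html.parser import HTMLParser

class VisibleText(HTMLParser):
    """Collect visible text, skipping <script> and <style> contents."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if self._skip_depth == 0:
            self.chunks.append(data)

def extract_text(html):
    parser = VisibleText()
    parser.feed(html)
    # Collapse whitespace so only the content itself is counted.
    return " ".join("".join(parser.chunks).split())

# Hypothetical page: two sentences wrapped in typical page chrome.
html_page = (
    '<html><head><style>nav { color: red }</style></head><body>'
    '<nav><a href="/">Home</a><a href="/docs">Docs</a></nav>'
    '<h1>Testing</h1><p>Use test mode to simulate payments.</p>'
    '<script>analytics.track("view");</script></body></html>'
)
markdown_page = "# Testing\n\nUse test mode to simulate payments."
```

The markdown version carries the same information in a fraction of the bytes, which is roughly what “saves tokens” means for an AI agent; on real pages with full navigation, ads, and script bundles, the ratio is far larger.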

Vercel says 10% of its signups come from ChatGPT. Its llms.txt includes contextual API descriptions that help agents decide what to fetch.

This matters — but almost exclusively for developer tools and API documentation. If your audience uses AI coding assistants like Cursor or GitHub Copilot to interact with your product, token efficiency improves integration.

For ecommerce selling pet supplies, insurance explaining coverage, or B2B SaaS targeting nontechnical buyers, token efficiency doesn’t translate into traffic.

llms.txt is a sitemap, not a strategy

The most accurate comparison is a sitemap.

Sitemaps are valuable infrastructure. They help search engines discover and index content more efficiently. But no one credits traffic growth to adding a sitemap. The sitemap documents what exists; the content drives discovery.

The llms.txt file works the same way. It may help AI models parse your site more efficiently if they choose to use it, but it doesn’t make your content more useful, authoritative, or likely to answer user queries.

In our analysis, the sites that grew did so because they:

  • Created functional assets like downloadable templates, comparison tables, and structured data.
  • Earned external visibility through press and backlinks.
  • Fixed technical barriers such as crawl and indexing issues.
  • Published content optimized for extraction, including FAQs and structured comparisons.

The llms.txt file documented those efforts. It didn’t drive them.

What actually works

The two successful sites show what matters:

  • Create functional, extractable assets. The SaaS platform built 27 downloadable templates that users could deploy immediately. AI models surfaced these because they solved real problems, not because they were listed in a markdown file.
  • Structure content for extraction. The neobank rebuilt product pages with comparison tables covering interest rates, fees, and account minimums. This is data AI models can pull directly into answers without interpretation.
  • Fix technical barriers first. The neobank fixed crawl errors that had blocked content for months. If AI models can’t access your content, no amount of documentation helps.
  • Earn external validation. Coverage from Bloomberg and other major publications drove referral traffic, branded searches, and likely influenced how AI models assess authority.
  • Optimize for user intent. Both sites answered specific queries: “best project management templates” and “how do [brand] interest rates compare?” Models surface content that maps to what users are asking, not content that’s merely well documented.

None of this requires llms.txt. All of it drives results.

Should you implement an llms.txt file?

If you’re a developer tool where AI coding assistants are a primary distribution channel, then yes — token efficiency matters. Your audience is already using agents to interact with documentation.

For everyone else, treat llms.txt like a sitemap: useful infrastructure, not a growth lever.

It’s good practice to have. It won’t hurt. But the hour spent implementing llms.txt is often better spent restructuring product pages with extractable data, publishing functional assets, fixing technical SEO issues, creating FAQ content, or earning press coverage.

Those tactics have shown real ROI in AI discovery. The llms.txt file hasn’t — at least not yet.

The lesson isn’t that llms.txt is bad. It’s that we’re reaching for control in a system where the rules aren’t written yet. The file offers that comfort: something concrete, actionable, and familiar, shaped like the web standards we already know.

But looking like infrastructure isn’t the same as functioning like infrastructure.

Focus on what actually works:

  • Create useful content.
  • Structure it for extraction.
  • Make it technically accessible.
  • Earn external validation.

Platforms and formats will change. The fundamentals won’t.


7 real-world AI failures that show why adoption keeps going wrong


AI has quickly risen to the top of the corporate agenda. Despite this, 95% of businesses struggle with adoption, MIT research found.

Those failures are no longer hypothetical. They are already playing out in real time, across industries, and often in public. 

For companies exploring AI adoption, these examples highlight what not to do and why AI initiatives fail when systems are deployed without sufficient oversight.

1. Chatbot participates in insider trading, then lies about it

In an experiment driven by the UK government’s Frontier AI Taskforce, ChatGPT placed illegal trades and then lied about it.

Researchers prompted the AI bot to act as a trader for a fake financial investment company. 

They told the bot that the company was struggling, and they needed results. 

They also fed the bot insider information about an upcoming merger, and the bot affirmed that it should not use this in its trades. 

The bot still made the trade anyway, citing that “the risk associated with not acting seems to outweigh the insider trading risk,” then denied using the insider information.  

Marius Hobbhahn, CEO of Apollo Research (the company that conducted the experiment), said that helpfulness “is much easier to train into the model than honesty,” because “honesty is a really complicated concept.”

He says that current models are not powerful enough to be deceptive in a “meaningful way,” a claim some researchers dispute.

However, he warns that it’s “not that big of a step from the current models to the ones that I am worried about, where suddenly a model being deceptive would mean something.”

AI has been operating in the financial sector for some time, and this experiment highlights the potential for not only legal risks but also risky autonomous actions on the part of AI.  

Dig deeper: AI-generated content: The dangers of overreliance

2. Chevy dealership chatbot sells SUV for $1 in ‘legally binding’ offer

An AI-powered chatbot for a local Chevrolet dealership in California sold a vehicle for $1 and said it was a legally binding agreement. 

In an experiment that went viral across web forums, several people toyed with the dealership’s chatbot, coaxing it into responding to a variety of non-car-related prompts.

One user convinced the chatbot to sell him a vehicle for just $1, and the chatbot confirmed it was a “legally binding offer – no takesies backsies.”

Fullpath, the company that provides AI chatbots to car dealerships, took the system offline once it became aware of the issue.

The company’s CEO told Business Insider that despite viral screenshots, the chatbot resisted many attempts to provoke misbehavior.

Still, while the car dealership didn’t face any legal liability from the mishap, some argue that the chatbot agreement in this case may be legally enforceable. 

3. Supermarket’s AI meal planner suggests poison recipes and toxic cocktails

A New Zealand supermarket chain’s AI meal planner suggested unsafe recipes after certain users prompted the app to use non-edible ingredients. 

Recipes like bleach-infused rice surprise, poison bread sandwiches, and even a chlorine gas mocktail were created before the supermarket caught on.

A spokesperson for the supermarket said they were disappointed to see that “a small minority have tried to use the tool inappropriately and not for its intended purpose,” according to The Guardian.

The supermarket said it would continue to fine-tune the technology for safety and added a warning for users. 

That warning stated that recipes are not reviewed by humans and do not guarantee that “any recipe will be a complete or balanced meal, or suitable for consumption.”

Critics of AI technology argue that chatbots like ChatGPT are nothing more than improvisational partners, building on whatever you throw at them. 

Because of the way these chatbots are wired, they could pose a real safety risk for certain companies that adopt them.  

4. Air Canada held liable after chatbot gives false policy advice

An Air Canada customer was awarded damages in court after the airline’s AI chatbot assistant made false claims about its policies.

The customer inquired about the airline’s bereavement rates via its AI assistant after the death of a family member. 

The chatbot responded that the airline offered discounted bereavement rates for upcoming travel or for travel that has already occurred, and linked to the company’s policy page. 

Unfortunately, the actual policy was the opposite, and the airline did not offer reduced rates for bereavement travel that had already happened. 

In court, the airline argued that the chatbot had linked to the policy page containing the correct information.

However, the tribunal (a small claims-type court in Canada) did not side with the defendant. As reported by Forbes, the tribunal called the scenario “negligent misrepresentation.”

Christopher C. Rivers, Civil Resolution Tribunal Member, said this in the decision:

  • “Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives – including a chatbot. It does not explain why it believes that is the case. In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.”

This is just one of many examples where people have been dissatisfied with chatbots due to their technical limitations and propensity for misinformation – a trend that is sparking more and more litigation. 

Dig deeper: 5 SEO content pitfalls that could be hurting your traffic

5. Australia’s largest bank replaces call center with AI, then apologizes and rehires staff

The largest bank in Australia replaced its call center team with AI voicebots with the promise of boosted efficiency, but admitted it made a big mistake. 

The Commonwealth Bank of Australia (CBA) believed the AI voicebots could reduce call volume by 2,000 calls per week. But it didn’t.

Instead, left without its 45-person call center, the bank scrambled to keep up with calls, offering overtime to remaining workers and pulling in managers to answer phones, too.

Meanwhile, the Finance Sector Union, which represented the displaced workers, escalated the dispute to Australia’s workplace tribunal, the Fair Work Commission.

It was only one month after CBA replaced workers that it issued an apology and offered to hire them back.

CBA said in a statement that it did not “adequately consider all relevant business considerations and this error meant the roles were not redundant.”

Other U.S. companies have faced PR nightmares as well when attempting to replace human roles with AI.

Perhaps that’s why certain brands have deliberately gone in the opposite direction, making sure people remain central to every AI deployment.

Nevertheless, the CBA debacle shows that replacing people with AI without fully weighing the risks can backfire quickly and publicly.

6. New York City’s chatbot advises employers to break labor and housing laws

New York City launched an AI chatbot to provide information on starting and running a business, and it advised people to carry out illegal activities.

Just months after its launch, people started noticing the inaccuracies provided by the Microsoft-powered chatbot.

The chatbot offered unlawful guidance across the board, from telling bosses they could pocket employees’ tips and skip notifying staff about schedule changes to tenant discrimination and cashless stores.

“NYC’s AI Chatbot Tells Businesses to Break the Law,” The Markup

This is despite the city’s initial announcement promising that the chatbot would provide trusted information on topics such as “compliance with codes and regulations, available business incentives, and best practices to avoid violations and fines.” 

Still, then-mayor Eric Adams defended the technology, saying: 

  • “Anyone that knows technology knows this is how it’s done,” and that “only those who are fearful sit down and say, ‘Oh, it is not working the way we want, now we have to run away from it all together.’ I don’t live that way.” 

Critics called his approach reckless and irresponsible. 

This is yet another cautionary tale in AI misinformation and how organizations can better handle the integration and transparency around AI technology. 

Dig deeper: SEO shortcuts gone wrong: How one site tanked – and what you can learn

7. Chicago Sun-Times publishes fake book list generated by AI

The Chicago Sun-Times ran a syndicated “summer reading” feature that included false, made-up details about books after the writer relied on AI without fact-checking the output. 

King Features Syndicate, a unit of Hearst, created the special section for the Chicago Sun-Times.  

Not only were the book summaries inaccurate, but some of the books were entirely fabricated by AI. 

“Syndicated content in Sun-Times special section included AI-generated misinformation,” Chicago Sun-Times

The author, hired by King Features Syndicate to create the book list, admitted to using AI to put the list together, as well as for other stories, without fact-checking. 

And the publisher was left trying to determine the extent of the damage. 

The Chicago Sun-Times said print subscribers would not be charged for the edition, and it put out a statement reiterating that the content was produced outside the newspaper’s newsroom. 

Meanwhile, the Sun-Times said it is reviewing its relationship with King Features. As for the writer, King Features fired him.

Oversight matters

The examples outlined here show what happens when AI systems are deployed without sufficient oversight. 

When left unchecked, the risks can quickly outweigh the rewards, especially as AI-generated content and automated responses are published at scale.

Organizations that rush into AI adoption without fully understanding those risks often stumble in predictable ways. 

In practice, AI succeeds only when tools, processes, and content outputs keep humans firmly in the driver’s seat.


Why LLM-only pages aren’t the answer to AI search


With new updates in the search world stacking up in 2026, content teams are trying a new strategy to rank: LLM pages.

They’re building pages that no human will ever see: markdown files, stripped-down JSON feeds, and entire /ai/ versions of their articles.

The logic seems sound: if you make content easier for AI to parse, you’ll get more citations in ChatGPT, Perplexity, and Google’s AI Overviews.

Strip out the ads. Remove the navigation. Serve bots pure, clean text.

Industry experts such as Malte Landwehr have documented sites creating .md copies of every article or adding llms.txt files to guide AI crawlers.

Teams are even building entire shadow versions of their content libraries.

Google’s John Mueller isn’t buying it.

  • “LLMs have trained on – read and parsed – normal web pages since the beginning,” he said in a recent discussion on Bluesky. “Why would they want to see a page that no user sees?”

His comparison was blunt: LLM-only pages are like the old keywords meta tag. Available for anyone to use, but ignored by the systems they’re meant to influence.

So is this trend actually working, or is it just the latest SEO myth?

The rise of ‘LLM-only’ web pages

The trend is real. Sites across tech, SaaS, and documentation are implementing LLM-specific content formats.

The question isn’t whether adoption is happening; it’s whether these implementations are driving the AI citations teams hoped for.

Here’s what content and SEO teams are actually building.

llms.txt files

A markdown file at your domain root listing key pages for AI systems.

The format was proposed in 2024 by Jeremy Howard, co-founder of Answer.AI, to help AI systems discover and prioritize important content. 

Plain text lives at yourdomain.com/llms.txt with an H1 project name, brief description, and organized sections linking to important pages.

Stripe’s implementation at docs.stripe.com/llms.txt shows the approach in action:

# Stripe Documentation

> Build payment integrations with Stripe APIs

## Testing

- [Test mode](https://docs.stripe.com/testing): Simulate payments

## API Reference

- [API docs](https://docs.stripe.com/api): Complete API reference

The payment processor’s bet is simple: if ChatGPT can parse its documentation cleanly, developers will get better answers when they ask, “How do I implement Stripe?”

They’re not alone. Current adopters include Cloudflare, Anthropic, Zapier, Perplexity, Coinbase, Supabase, and Vercel.

Markdown (.md) page copies

Sites are creating stripped-down markdown versions of their regular pages.

The implementation is straightforward: just add .md to any URL. Stripe’s docs.stripe.com/testing becomes docs.stripe.com/testing.md.

Everything gets stripped out except the actual content. No styling. No menus. No footers. No interactive elements. Just pure text and basic formatting.

The thinking: if AI systems don’t have to wade through CSS and JavaScript to find the information they need, they’re more likely to cite your page accurately.

/ai and similar paths

Some sites are building entirely separate versions of their content under /ai/, /llm/, or similar directories.

You might find /ai/about living alongside the regular /about page, or /llm/products as a bot-friendly alternative to the main product catalog. 

Sometimes these pages have more detail than the originals. Sometimes they’re just reformatted.

The idea: give AI systems their own dedicated content that’s built for machine consumption, not human eyes. 

If a person accidentally lands on one of these pages, they’ll find something that looks like a website from 2005.

JSON metadata files

Dell took this approach with their product specs.

Instead of creating separate pages, they built structured data feeds that live alongside their regular ecommerce site.

The files contain clean JSON – specs, pricing, and availability.

Everything an AI needs to answer “what’s the best Dell laptop under $1000” without having to parse through product descriptions written for humans.

You’ll typically find these files as /llm-metadata.json or /ai-feed.json in the site’s directory.

# Dell Technologies

> Dell Technologies is a leading technology provider, specializing in PCs, servers, and IT solutions for businesses and consumers.

## Product and Catalog Data

- [Product Feed - US Store](https://www.dell.com/data/us/catalog/products.json): Key product attributes and availability.

- [Dell Return Policy](https://www.dell.com/return-policy.md): Standard return and warranty information.

## Support and Documentation

- [Knowledge Base](https://www.dell.com/support/knowledge-base.md): Troubleshooting guides and FAQs.

This approach makes the most sense for ecommerce and SaaS companies that already keep their product data in databases. 

They’re just exposing what they already have in a format AI systems can easily digest.
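For teams that already hold product data in a database, generating such a feed is a small serialization step. A minimal sketch, with hypothetical records and field names rather than Dell’s actual schema:

```python
import json

# Hypothetical product records, as they might come out of an
# ecommerce database. Field names are illustrative only.
products = [
    {"sku": "LT-1432", "name": "Example Laptop 14", "price_usd": 899.00,
     "in_stock": True, "specs": {"ram_gb": 16, "storage_gb": 512}},
    {"sku": "LT-1544", "name": "Example Laptop 15", "price_usd": 1249.00,
     "in_stock": False, "specs": {"ram_gb": 32, "storage_gb": 1024}},
]

# Serialize the feed an AI agent could fetch from a path
# like /llm-metadata.json.
feed_json = json.dumps({"version": "1.0", "products": products}, indent=2)
```

The point is that nothing new is authored: the same records that render the product pages are exposed once more in a structure an AI system can consume without parsing HTML.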

Dig deeper: LLM optimization in 2026: Tracking, visibility, and what’s next for AI discovery

Real-world citation data: What actually gets referenced

The theory sounds good. The adoption numbers look impressive. 

But do these LLM-optimized pages actually get cited?

The individual analysis

Landwehr, CPO and CMO at Peec AI, ran targeted tests on five websites using these tactics. He crafted prompts specifically designed to surface their LLM-friendly content.

Some queries even contained explicit 20+ word quotes designed to trigger specific sources.

Landwehr - LLM experiment 1

Across nearly 18,000 citations, here’s what he found.

llms.txt: 0.03% of citations

Out of 18,000 citations, only six pointed to llms.txt files. 

The six that did work had something in common: they contained genuinely useful information about how to use an API and where to find additional documentation. 

The kind of content that actually helps AI systems answer technical questions. The “search-optimized” llms.txt files, the ones stuffed with content and keywords, received zero citations.

Markdown (.md) pages: 0% of citations

Sites using .md copies of their content got cited 3,500+ times. None of those citations pointed to the markdown versions. 

The one exception: GitHub, where .md files are the standard URLs. 

They’re linked internally, and there’s no HTML alternative. But these are just regular pages that happen to be in markdown format.

/ai pages: 0.5% to 16% of citations

Results varied wildly depending on implementation. 

One site saw 0.5% of its citations point to its /ai pages. Another hit 16%.

The difference? 

The higher-performing site put significantly more information in its /ai pages than existed anywhere else on the site. 

Keep in mind, these prompts were specifically asking for information contained in these files. 

Even with prompts designed to surface this content, most queries ignored the /ai versions.

JSON metadata: 5% of citations

One brand saw 85 out of 1,800 citations (5%) come from their metadata JSON file. 

The critical detail here is that the file contained information that didn’t exist anywhere else on the website. 

Once again, the query specifically asked for those pieces of information.


The large-scale analysis

SE Ranking took a different approach.

Instead of testing individual sites, they analyzed 300,000 domains to see if llms.txt adoption correlated with citation frequency at scale.

Only 10.13% of domains, or 1 in 10, had implemented llms.txt. 

For context, that’s nowhere near the universal adoption of standards like robots.txt or XML sitemaps.

During the study, an interesting relationship between adoption rates and traffic levels emerged.

Sites with 0-100 monthly visits adopted llms.txt at 9.88%. 

Sites with 100,001+ visits? Just 8.27%. 

The biggest, most established sites were actually slightly less likely to use the file than mid-tier ones.

But the real test was whether llms.txt impacted citations. 

SE Ranking built a machine learning model using XGBoost to predict citation frequency based on various factors, including the presence of llms.txt.

The result: removing llms.txt from the model actually improved its accuracy. 

The file wasn’t helping predict citation behavior; it was adding noise.
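SE Ranking’s actual model used XGBoost over 300,000 real domains. As a toy illustration of the ablation idea only, here is a minimal, dependency-free sketch with synthetic data: fit a model with and without a candidate feature and compare error on held-out data. None of this is their code or data:

```python
import random

random.seed(0)

def fit(xs, ys, steps=1500, lr=0.1):
    """Fit linear weights by gradient descent on mean squared error."""
    k, n = len(xs[0]), len(ys)
    w = [0.0] * k
    for _ in range(steps):
        grads = [0.0] * k
        for x, y in zip(xs, ys):
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            for j in range(k):
                grads[j] += 2.0 * err * x[j] / n
        w = [wi - lr * g for wi, g in zip(w, grads)]
    return w

def mse(w, xs, ys):
    """Mean squared error of the linear model w on (xs, ys)."""
    return sum((sum(wi * xi for wi, xi in zip(w, x)) - y) ** 2
               for x, y in zip(xs, ys)) / len(ys)

# Synthetic "domains": citations depend on content quality only;
# the llms.txt flag is pure noise by construction.
n = 300
quality = [random.random() for _ in range(n)]
llms_flag = [float(random.choice([0, 1])) for _ in range(n)]
cites = [10.0 * q + random.gauss(0.0, 1.0) for q in quality]

train_sl, test_sl = slice(0, 200), slice(200, n)
X_full = [[q, f] for q, f in zip(quality, llms_flag)]
X_reduced = [[q] for q in quality]

w_full = fit(X_full[train_sl], cites[train_sl])
w_reduced = fit(X_reduced[train_sl], cites[train_sl])

print("test MSE with llms.txt feature:   ", round(mse(w_full, X_full[test_sl], cites[test_sl]), 3))
print("test MSE without llms.txt feature:", round(mse(w_reduced, X_reduced[test_sl], cites[test_sl]), 3))
```

With enough held-out data, a feature that carries no signal tends to add variance rather than accuracy, which is the pattern SE Ranking reported at scale.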

The pattern

Both analyses point to the same conclusion: LLM-optimized pages get cited when they contain unique, useful information that doesn’t exist elsewhere on your site.

The format doesn’t matter. 

Landwehr’s conclusion was blunt: “You could create a 12345.txt file and it would be cited if it contains useful and unique information.”

A well-structured about page achieves the same result as an /ai/about page. API documentation gets cited whether it’s in llms.txt or buried in your regular docs.

The files themselves get no special treatment from AI systems. 

The content inside them might, but only if it’s actually better than what already exists on your regular pages.

SE Ranking’s data backs this up at scale. There’s no correlation between having llms.txt and getting more citations. 

The presence of the file made no measurable difference in how AI systems referenced domains.

Dig deeper: 7 hard truths about measuring AI visibility and GEO performance

What Google and AI platforms actually say

No major AI company has confirmed using llms.txt files in their crawling or citation processes.

Google’s John Mueller made the sharpest critique in April 2025, comparing llms.txt to the obsolete keywords meta tag: 

  • “[As far as I know], none of the AI services have said they’re using LLMs.TXT (and you can tell when you look at your server logs that they don’t even check for it).”

Google’s Gary Illyes reinforced this at the July 2025 Search Central Deep Dive in Bangkok, explicitly stating Google “doesn’t support LLMs.txt and isn’t planning to.”

Google Search Central’s documentation is equally clear: 

  • “The best practices for SEO remain relevant for AI features in Google Search. There are no additional requirements to appear in AI Overviews or AI Mode, nor other special optimizations necessary.”

OpenAI, Anthropic, and Perplexity all maintain their own llms.txt files for their API documentation to make it easy for developers to load into AI assistants. 

But none have announced their crawlers actually read these files from other websites.

The consistent message from every major platform: standard web publishing practices drive visibility in AI search. 

No special files, no new markup, and no separate versions needed.

What this means for SEO teams

The evidence points to a single conclusion: stop building content that only machines will see.

Mueller’s question cuts to the core issue: 

  • “Why would they want to see a page that no user sees?” 

If AI companies needed special formats to generate better responses, they would tell you. As he noted:

  • “AI companies aren’t really known for being shy.” 

The data proves him right. 

Across Landwehr’s nearly 18,000 citations, LLM-optimized formats showed no advantage unless they contained unique information that didn’t exist anywhere else on the site. 

SE Ranking’s analysis of 300,000 domains found that llms.txt actually added confusion to their citation prediction model rather than improving it.

Instead of creating shadow versions of your content, focus on what actually works.

Build clean HTML that both humans and AI can parse easily. 

Reduce JavaScript dependencies for critical content, which Mueller identified as the real technical barrier: 

  • “Excluding JS, which still seems hard for many of these systems.” 

Heavy client-side rendering creates actual problems for AI parsing.

Use structured data when platforms have published official specifications, such as OpenAI’s ecommerce product feeds.
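One widely documented option is schema.org product markup. The sketch below builds a minimal schema.org `Product` object and serializes it to JSON-LD; the `@context`, `@type`, and property names come from the public schema.org vocabulary, while the product itself is a made-up example:

```python
import json

# Minimal schema.org Product markup, serialized as JSON-LD. The @context,
# @type, and property names are from the public schema.org vocabulary;
# the product values are invented for illustration.
product_ld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Laptop 14",
    "description": "14-inch laptop with 16 GB RAM and 512 GB SSD.",
    "offers": {
        "@type": "Offer",
        "price": "899.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# This JSON would typically be embedded in the page's HTML inside a
# <script type="application/ld+json"> tag.
jsonld = json.dumps(product_ld, indent=2)
print(jsonld)
```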

Improve your information architecture so key content is discoverable and well-organized.

The best page for AI citation is the same page that works for users: well-structured, clearly written, and technically sound. 

Until AI companies publish formal requirements stating otherwise, that’s where your optimization energy belongs.

Dig deeper: GEO myths: This article may contain lies


SEO in 2026: What will stay the same


Around the turn of the year, search industry media fills up with reviews and predictions. Bold, disruptive ideas steal the spotlight and trigger a sense of FOMO (fear of missing out).

However, sustainable online sales growth doesn’t come from chasing the next big trend. In SEO, what truly matters stays the same.

FOMO is bad for you 

We regularly get excited about the next big thing. Each new idea is framed as a disruptive force that will level the playing field.

Real shifts do happen, but they are rare. More often, the promised upheaval fades into a storm in a teacup.

Over the years, search has introduced many innovations that now barely raise an eyebrow. Just a few examples:

  • Voice search.
  • Universal Search.
  • Google Instant.
  • The Knowledge Graph.
  • HTTPS as a ranking signal.
  • RankBrain.
  • Mobile-first indexing.
  • AMP.
  • Featured snippets and zero-click searches.
  • E-A-T and E-E-A-T.
  • Core Web Vitals.
  • Passage indexing.
  • AI Overviews.

Some claimed these developments would revolutionize SEO or wipe it out entirely. That never happened.

The latest addition to the SEO hype cycle, LLMs and AI, fits neatly into this list. After the initial upheaval, the excitement has already started to fade.

The benefits of LLMs are clear in some areas, especially coding and software development. AI tools boost efficiency and significantly shorten production cycles.

In organic search, however, their impact remains limited, despite warnings from attention-seeking doomsayers. No AI-driven challenger has captured meaningful search market share.

Beyond ethical concerns about carbon footprint and extreme energy use, accuracy remains the biggest hurdle. Because they rely on unverified inputs, LLM-generated answers often leave users more confused than informed.

AI-driven platforms still depend on crawling the web and using core SEO signals to train models and answer queries. Like any bot, they need servers and content to be accessible and crawlable.
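One concrete baseline check is making sure robots.txt isn’t locking these crawlers out. Python’s standard library can verify this directly; the robots.txt content below is an invented example, while GPTBot (OpenAI) and PerplexityBot (Perplexity) are real crawler user agents:

```python
from urllib.robotparser import RobotFileParser

# Sanity-check that robots.txt doesn't lock out AI crawlers. The file
# content here is an invented example; GPTBot and PerplexityBot are
# real crawler user agents, ExampleBlockedBot is made up.
robots_txt = """\
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ExampleBlockedBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("GPTBot", "/products/laptops"))            # True
print(rp.can_fetch("ExampleBlockedBot", "/products/laptops")) # False
```

Running the same check against your live /robots.txt (via `rp.set_url(...)` and `rp.read()`) takes a few seconds and rules out the most basic access problem.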

Without strong quality controls, low-quality inputs produce inconsistent and unreliable outputs. This is just one reason why Google’s organic search market share remains close to 90%.

It also explains why Google is likely to remain the dominant force in ecommerce search for the foreseeable future. For now, a critical mass of users will continue to rely on Google as their search engine of choice.

It’s all about data 

Fundamentally, it makes little difference whether a business focuses on Google, LLM-based alternatives, or both. All search systems depend on crawled data, and that won’t change.

Fast, reliable, and trustworthy indexing signals sit at the core of every ranking system. Instead of chasing hype, brands and businesses are better served by focusing on two core areas: their customers’ needs and the crawlability of their web platforms.

Customer needs always come first.

Most users do not care whether a provider uses the latest innovation. They care about whether expectations are met and promises are kept. That will not change.

Meeting user expectations will remain a core objective of SEO.

Crawlability is just as critical. A platform that cannot be properly crawled or indexed has no chance in competitive sectors such as retail, travel, marketplaces, news, or affiliate marketing.

Making sure bots can crawl a site, and algorithms can clearly understand the unique value of its content, will remain a key success factor in both SEO and GEO for the foreseeable future.

Won’t change: Uncrawled content won’t rank

Other factors are unlikely to change as well, including brand recognition, user trust, ease of use, and fast site performance.

These factors have always mattered and will continue to do so. They only support SEO and GEO if a platform can be properly crawled and understood. That is why regular reviews of technical signals are a critical part of a successful online operation.

Won’t change: server errors prevent indexing by any bot

At the start of a new year, you should resist the fear of missing out on the latest novelty. Following the herd rarely helps anyone stand out.

A better approach is to focus on what is certain to remain consistent in 2026 and beyond.

What to do next

Publishers can breathe a sigh of relief. There is no need to rush into a new tool just because everyone else is. Adopt it if it makes sense, but no tool alone will make a business thrive.

Focus on what you do best and make it even better. Your customers will notice and appreciate it.

At the same time, make sure your web platform is fast and reliable, that your most relevant content is regularly re-crawled, and that bots clearly understand its purpose. These are the SEO and GEO factors that will endure.

Holistic SEO is both an art and a science. While it is far more complex in 2026, it is the unchanging foundational signals that matter most.


Yext’s Visibility Brief: Your guide to brand visibility in AI search by Yext

Search visibility isn’t what it used to be. Rankings still matter, but they’re no longer the whole story. 

Today, discovery happens across traditional search results, local listings, brand knowledge panels, and increasingly, AI-driven experiences that surface answers without a click. For marketers, that makes visibility harder to measure — and easier to lose.

SEO teams now operate in a landscape where accuracy, consistency, and trust signals matter as much as keywords. Business information, reviews, and brand authority determine whether a brand shows up at all, especially as AI-powered search reshapes how results are generated and displayed. As a result, many brands think they’re visible — until they look closer.

The Visibility Brief was created to show you what’s really happening. Built on real data from thousands of brands, it provides a practical view of how visibility plays out across today’s search and discovery ecosystem.

Instead of focusing on a single channel or metric, it takes a broader view. The content highlights where brands are gaining ground, where gaps appear, and which trends are shaping performance.

You’ll see how traditional search and AI-driven discovery now overlap, why data accuracy has become a baseline requirement, and where brands are losing exposure without realizing it. 

The goal is simple: help you understand how visibility is changing and what to focus on now.

Watch or listen to the Visibility Brief to get a clearer view of today’s search landscape — and what it means for your brand’s visibility.

Subscribe to the Visibility Brief on Spotify or Apple Podcasts.



Some Google AI Overviews now use Gemini 3 Pro

Google now uses Gemini 3 Pro to generate some AI Overviews in Google Search. Google said Gemini 3 Pro is used for AI Overviews on more complex queries.


This was previously announced for AI Mode results back in November, and in December Google began using Gemini 3 Flash for AI Mode. Now, Google is bringing Gemini 3 Pro to AI Overviews for complex queries.

Gemini 3 Pro is used to generate AI Overviews for complex queries in English, globally for Google AI Pro & Ultra subscribers.

What Google said. Robby Stein, VP of Product at Google Search, wrote:

  • “Update: AI Overviews now tap into Gemini 3 Pro for complex topics.”
  • “Behind the scenes, Search will intelligently route your toughest Qs to our frontier model (just like we do in AI Mode) while continuing to use faster models for simpler tasks.”
  • “Live in English globally for Google AI Pro & Ultra subs.”

Why we care. AI Overviews may be very different today than they were a week or so ago. Google will continue to improve its Gemini models and work those upgraded models into Google Search, including AI Overviews and AI Mode.


Perplexity AI User and Revenue Statistics

Founded in 2022, Perplexity offers an AI-powered search engine.

AI tools offer a new way to search for factual information, and Perplexity stands out by combining large language models with real-time web search.

With a valuation of $20 billion and a growing user base of 30 million monthly active users, Perplexity is one of the fastest-growing tech startups challenging Google’s dominance.

From the number of Perplexity active users to company revenue, we’ll cover the latest stats about the popular AI search engine on this page.

Key Perplexity Stats

  • Perplexity has 30 million monthly active users.
  • Perplexity processes around 600 million search queries a month.
  • Lifetime downloads of Perplexity mobile apps reached 80.5 million to date.
  • Perplexity’s annualized recurring revenue reportedly reached nearly $200 million.

Perplexity Monthly Active Users

According to the latest data, Perplexity AI has around 30 million monthly active users worldwide as of April 2025.


That’s up from 10 million monthly active users reported in January 2024.

Here’s a table with Perplexity AI’s monthly active user base since March 2023:

| Date | Perplexity AI Monthly Active Users |
|---|---|
| March 2023 | 2 million |
| January 2024 | 10 million |
| April 2025 | 30 million |

Sources: The Verge, Perplexity AI, Perplexity AI

Perplexity Search Volume

According to Perplexity AI’s CEO, the search engine processes around 600 million queries per month as of April 2025. That’s an increase from the 400 million reported in October 2024.


Here’s an overview of Perplexity AI’s monthly search volume over time:

| Date | Perplexity AI Monthly Search Queries |
|---|---|
| July 2024 | 250 million |
| October 2024 | 400 million |
| April 2025 | 600 million |

Sources: The Verge, TechCrunch

Perplexity Website Traffic

According to the latest estimates, the Perplexity AI website received 239.7 million visits worldwide in November 2025, showing a 13.21% decrease compared to October 2025.


Here’s a website traffic breakdown of the Perplexity AI website since September 2025:

| Date | Perplexity AI Website Traffic |
|---|---|
| September 2025 | 194.37 million |
| October 2025 | 276.5 million |
| November 2025 | 239.97 million |

Source: Semrush

Perplexity App Downloads

According to recent estimates, Perplexity AI app downloads across Google Play and the App Store have reached 80.5 million lifetime installs to date, including 5.1 million in November 2025 alone.


Perplexity AI had the highest number of app downloads in October 2025, with 15.5 million monthly installs worldwide.

Here’s a table with Perplexity AI app downloads over time since January 2024:

| Date | Perplexity AI App Downloads |
|---|---|
| January 2024 | 0.98 million |
| February 2024 | 0.84 million |
| March 2024 | 0.75 million |
| April 2024 | 0.63 million |
| May 2024 | 0.75 million |
| June 2024 | 0.79 million |
| July 2024 | 0.72 million |
| August 2024 | 0.8 million |
| September 2024 | 1 million |
| October 2024 | 1.27 million |
| November 2024 | 1.73 million |
| December 2024 | 1.6 million |
| January 2025 | 1.82 million |
| February 2025 | 2.88 million |
| March 2025 | 4 million |
| April 2025 | 2.94 million |
| May 2025 | 2.56 million |
| June 2025 | 2.62 million |
| July 2025 | 5.52 million |
| August 2025 | 8.84 million |
| September 2025 | 11.98 million |
| October 2025 | 15.45 million |
| November 2025 | 5.1 million |

Source: Appfigures

Perplexity Revenue

Perplexity’s annual recurring revenue reportedly reached nearly $200 million as of September 2025, up from $100 million in March 2025.


Sources: TechCrunch, Perplexity

Perplexity Funding

Perplexity raised a total of $1.22 billion across 7 publicly disclosed funding rounds to date.


Here’s a table with information on Perplexity AI’s latest funding rounds to date:

| Date | Funding Round | Amount |
|---|---|---|
| March 28, 2023 | Series A | $28.8 million |
| January 4, 2024 | Series B | $73.6 million |
| April 23, 2024 | Series C | $63 million |
| August 9, 2024 | Series C | $250 million |
| December 18, 2024 | Series D | $500 million |
| July 18, 2025 | Series E | $100 million |
| September 10, 2025 | Series E | $200 million |

Source: Tracxn
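The reported $1.22 billion total checks out against the disclosed rounds; a quick arithmetic verification, with figures taken from the table above in millions USD:

```python
# Disclosed Perplexity funding rounds from the table above, in millions USD.
rounds = [
    ("Series A, Mar 2023", 28.8),
    ("Series B, Jan 2024", 73.6),
    ("Series C, Apr 2024", 63.0),
    ("Series C, Aug 2024", 250.0),
    ("Series D, Dec 2024", 500.0),
    ("Series E, Jul 2025", 100.0),
    ("Series E, Sep 2025", 200.0),
]
total = sum(amount for _, amount in rounds)
print(f"${total / 1000:.2f} billion")  # → $1.22 billion
```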

The post Perplexity AI User and Revenue Statistics appeared first on Backlinko.
