Digital PR is about to matter more than ever. Not because it’s fashionable, or because agencies have rebranded link building with a shinier label, but because the mechanics of search and discovery are changing.
Brand mentions, earned media, and the wider PR ecosystem are now shaping how both search engines and large language models understand brands. That shift has serious implications for how SEO professionals should think about visibility, authority, and revenue.
At the same time, informational search traffic is shrinking. Fewer people are clicking through long blog posts written to target top-of-funnel keywords.
The commercial value in search is consolidating around high-intent queries and the pages that serve them: product pages, category pages, and service pages. Digital PR sits right at the intersection of these changes.
What follows are seven practical, experience-led secrets that explain how digital PR actually works when it’s done well, and why it’s becoming one of the most important tools in SEOs’ toolkit.
Secret 1: Digital PR can be a direct sales activation channel
Digital PR is usually described as a link tactic, a brand play or, more recently, as a way to influence generative search and AI outputs.
All of that’s true. What’s often overlooked is that digital PR can also drive revenue directly.
When a brand appears in a relevant media publication, it’s effectively placing itself in front of buyers while they are already consuming related information.
This is not passive awareness. It’s targeted exposure during a moment of consideration.
Platforms like Google are exceptionally good at understanding user intent, interests and recency. Anyone who has looked at their Discover feed after researching a product category has seen this in action.
Digital PR taps into the same behavioral reality. You are not broadcasting randomly. You are appearing where buyers already are.
Two things tend to happen when this is executed well.
If your site already ranks for a range of relevant queries, your brand gains additional recognition in nontransactional contexts. Readers see your name attached to a credible story or insight. That familiarity matters.
More importantly, that exposure drives brand search and direct clicks. Some readers click straight through from the article. Others search for your brand shortly after. In both cases, they enter your marketing funnel with a level of trust that generic search traffic rarely has.
This effect is driven by basic behavioral principles such as recency and familiarity. While it’s difficult to attribute cleanly in analytics, the commercial impact is very real.
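Although clean attribution is hard, a rough pre/post comparison of branded search volume around a coverage window can hint at the effect. Below is a minimal sketch, assuming you have exported daily branded-query counts from your analytics tool; all figures and the coverage day are hypothetical:

```python
from statistics import mean

def brand_search_lift(daily_counts, coverage_day, window=7):
    """Compare mean branded-search volume in the `window` days
    before vs. after a piece of coverage went live.

    daily_counts: list of daily branded-query counts, oldest first.
    coverage_day: index into daily_counts when the article published.
    Returns the relative lift (e.g. 0.25 == +25%).
    """
    before = daily_counts[max(0, coverage_day - window):coverage_day]
    after = daily_counts[coverage_day:coverage_day + window]
    base = mean(before)
    return (mean(after) - base) / base

# Hypothetical daily branded searches around a placement on day 7
counts = [100, 102, 98, 101, 99, 103, 100, 130, 128, 135, 140, 132, 138, 129]
print(round(brand_search_lift(counts, coverage_day=7), 2))
```

This is deliberately crude: it ignores seasonality and other campaigns running in the same window, which is exactly why the commercial impact is easier to feel than to prove.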
We see this most clearly in direct-to-consumer, finance, and health markets, where purchase cycles are active and intent is high.
Digital PR is not just about supporting sales. In the right conditions, it’s part of the sales engine.
Secret 2: The mere exposure effect is one of digital PR’s biggest advantages
One of the most consistent patterns in successful digital PR campaigns is repetition.
When a brand appears again and again in relevant media coverage, tied to the same themes, categories, or areas of expertise, it builds familiarity.
That familiarity turns into trust, and trust turns into preference. This is known as the mere exposure effect, and it’s fundamental to how brands grow.
In practice, this often happens through syndicated coverage. A strong story picked up by regional or vertical publications can lead to dozens of mentions across different outlets.
Historically, many SEOs undervalued this type of coverage because the links were not always unique or powerful on their own.
That misses the point.
What this repetition creates is a dense web of co-occurrences. Your brand name repeatedly appears alongside specific topics, products, or problems. This influences how people perceive you, but it also influences how machines understand you.
For search engines and large language models alike, frequency and consistency of association matter.
An always-on digital PR approach, rather than sporadic big hits, is one of the fastest ways to increase both human and algorithmic familiarity with a brand.
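That "dense web of co-occurrences" can be made concrete by counting how often a brand name appears alongside given topic terms across a set of coverage snippets. A toy sketch follows; the brand, topics, and snippets are all invented:

```python
from collections import Counter

def cooccurrence_counts(snippets, brand, topics):
    """Count, per topic, how many snippets mention both the brand
    and that topic term (case-insensitive substring match)."""
    counts = Counter()
    for text in snippets:
        low = text.lower()
        if brand.lower() in low:
            for topic in topics:
                if topic.lower() in low:
                    counts[topic] += 1
    return counts

# Invented coverage snippets for a hypothetical brand "Acme Loans"
snippets = [
    "Acme Loans released new data on refinancing rates.",
    "Experts at Acme Loans say auto refinancing is rebounding.",
    "A new survey covers mortgage trends, with no brand quoted.",
]
print(cooccurrence_counts(snippets, "Acme Loans", ["refinancing", "mortgage"]))
```

Real entity association in search engines and LLMs is far more sophisticated, but the principle is the same: repeated brand-plus-topic pairings are what this kind of tally surfaces.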
Secret 3: Big campaigns come with big risk, so diversification matters
Large, creative digital PR campaigns are attractive. They are impressive, they generate internal excitement, and they often win industry praise. The problem is that they also concentrate risk.
A single large campaign can succeed spectacularly, or it can fail quietly. From an SEO perspective, many widely celebrated campaigns underperform because they do not generate the links or mentions that actually move rankings.
This happens for a simple reason. What marketers like is not always what journalists need.
Journalists are under pressure to publish quickly, attract attention, and stay relevant to their audience.
If a campaign is clever but difficult to translate into a story, it will struggle. If all your budget’s tied up in one idea, you have no fallback.
A diversified digital PR strategy spreads investment across multiple smaller campaigns, reactive opportunities, and steady background activity.
This increases the likelihood of consistent coverage and reduces dependence on any single idea working perfectly.
In digital PR, reliability often beats brilliance.
Secret 4: The journalist is the customer
One of the most common mistakes in digital PR is forgetting who the gatekeeper is.
From a brand’s perspective, the goal might be links, mentions, or authority.
From a journalist’s perspective, the goal is to write a story that interests readers and performs well. These goals overlap, but they are not the same.
The journalist decides whether your pitch lives or dies. In that sense, they are the customer.
Effective digital PR starts by understanding what makes a journalist’s job easier.
That means providing clear angles, credible data, timely insights, and fast responses. Think about relevance before thinking about links.
When you help journalists do their job well, they reward you with exposure.
That exposure carries weight in search engines and in the training data that informs AI systems. The exchange is simple: value for value.
Treat journalists as partners, not as distribution channels.
Secret 5: Product and category page links are where SEO value is created
Not all links are equal.
From an SEO standpoint, links to product, category, and core service pages are often far more valuable than links to blog content. Unfortunately, they are also the hardest links to acquire through traditional outreach.
This is where digital PR excels.
Because PR coverage is contextual and editorial, it allows links to be placed naturally within discussions of products, services, or markets. When done correctly, this directs authority to the pages that actually generate revenue.
As informational content becomes less central to organic traffic growth, this matters even more.
Ranking improvements on high-intent pages can have a disproportionate commercial impact.
A relatively small number of high-quality, relevant links can outperform a much larger volume of generic links pointed at top-of-funnel content.
Digital PR should be planned with these target pages in mind from the outset.
Secret 6: Entity lifting is now a core outcome of digital PR
Search engines have long made it clear that context matters. The text surrounding a link, and the way a brand is described, help define what that brand represents.
This has become even more important with the rise of large language models. These systems process information in chunks, extracting meaning from surrounding text rather than relying solely on links.
When your brand is mentioned repeatedly in connection with specific topics, products, or expertise, it strengthens your position as an entity in that space. This is what’s often referred to as entity lifting.
The effect goes beyond individual pages. Brands see ranking improvements for terms and categories that were not directly targeted, simply because their overall authority has increased.
At the same time, AI systems are more likely to reference and summarize brands that are consistently described as relevant sources.
Digital PR is one of the most scalable ways to build this kind of contextual understanding around a brand.
Secret 7: Authority comes from relevant sources and relevant sections
Former Google engineer Jun Wu discusses this in his book “The Beauty of Mathematics in Computer Science,” explaining that authority emerges from being recognized as a source within specific informational hubs.
In practical terms, this means that where you are mentioned matters as much as how big the site is.
A link or mention from a highly relevant section of a large publication can be more valuable than a generic mention on the homepage. For example, a targeted subfolder on a major media site can carry strong authority, even if the domain as a whole covers many subjects.
Effective digital PR focuses on two things:
Publications that are closely aligned with your industry and sections.
Subfolders that are tightly connected to the topic you want to be known for.
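The two criteria above can be encoded as a simple filter over a prospect list: keep only placements whose URL path sits in a topically relevant section. A sketch with made-up URLs and section names:

```python
from urllib.parse import urlparse

def relevant_prospects(urls, relevant_sections):
    """Keep URLs whose first path segment matches a relevant
    section (subfolder), e.g. /personal-finance/... ."""
    kept = []
    for url in urls:
        path = urlparse(url).path.strip("/")
        section = path.split("/")[0] if path else ""
        if section in relevant_sections:
            kept.append(url)
    return kept

# Hypothetical prospect list for a finance brand
urls = [
    "https://bignews.example/personal-finance/refi-guide",
    "https://bignews.example/sports/match-report",
    "https://tradepub.example/loans/rate-roundup",
]
print(relevant_prospects(urls, {"personal-finance", "loans"}))
```

In practice, you would combine this with domain-level metrics, but a section filter like this is a quick first pass at the "relevant subfolder" principle.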
This is how authority is built in a way that search engines and AI systems both recognize.
Digital PR is no longer a supporting act to SEO. It’s becoming central to how brands are discovered, understood, and trusted.
As informational traffic declines and high-intent competition intensifies, the brands that win will be those that combine relevance, repetition, and authority across earned media.
[End of article: "7 digital PR secrets behind strong SEO performance," published 2026-02-03]
I’ve spent over 20 years in companies where SEO sat in different corners of the organization – sometimes as a full-time role, other times as a consultant called in to “find what’s wrong.” Across those roles, the same pattern kept showing up.
The technical fix was rarely what unlocked performance. It revealed symptoms, but it almost never explained why progress stalled.
No governance
The real constraints showed up earlier, long before anyone read my weekly SEO reports. They lived in reporting lines, decision rights, hiring choices, and in what teams were allowed to change without asking permission.
When SEO struggled, it was usually because nobody clearly owned the CMS templates, priorities conflicted across departments, or changes were made without anyone considering how they affected discoverability.
I did not have a word for the core problem at the time, but now I do – it’s governance, usually manifested by its absence.
Two workplaces in my career had the conditions that allowed SEO to work as intended. Ownership was clear.
Release pathways were predictable. Leaders understood that visibility was something you managed deliberately, not something you reacted to when traffic dipped.
Everywhere else, metadata and schema were not the limiting factor. Organizational behavior was.
Once sales pressures dominate each quarter, even technically strong sites undergo small, reasonable changes:
Navigation renamed by a new UX hire.
Wording adjusted by a new hire on the content team.
Templates adjusted for a marketing campaign.
Titles “cleaned up” by someone outside the SEO loop.
None of these changes looks dangerous in isolation – at least not if you hear about them before they ship.
Over time, they add up. Performance slides, and nobody can point to a single release or decision where things went wrong.
This is the part of SEO most industry commentary skips. Technical fixes are tangible and teachable. Organizational friction is not. Yet that friction is where SEO outcomes are decided, usually months before any visible decline.
SEO loses power when it lives in the wrong place
I’ve seen this drift hurt rankings, with SEO taking the blame. In one workplace, leadership brought in an agency to “fix” the problem, only for it to confirm what I’d already found: a lack of governance caused the decline.
Where SEO sits on the org chart determines whether you see decisions early or discover them after launch. It dictates whether changes ship in weeks or sit in the backlog for quarters.
I have worked with SEO embedded under marketing, product, IT, and broader omnichannel teams. Each placement created a different set of constraints.
When SEO sits too low, decisions that reshape visibility ship first and get reviewed later — if they are reviewed at all.
Engineering adjusted components to support a new security feature. In one workplace, a new firewall meant to stop scraping also blocked our own SEO crawling tools.
Product reorganized navigation to “simplify” the user journey. No one asked SEO how it would affect internal PageRank.
Marketing “refreshed” content to match a campaign. Each change shifted page purpose, internal linking, and consistency — the exact signals search engines and AI systems use to understand what a site is about.
Without a seat at the right table, SEO becomes a cleanup function.
When one operational unit owns SEO, the work starts to reflect that unit’s incentives.
Under marketing, it becomes campaign-driven and short-term.
Under IT, it competes with infrastructure work and release stability.
Under product, it gets squeezed into roadmaps that prioritize features over discoverability.
The healthiest performance I’ve seen came from environments where SEO sat close enough to leadership to see decisions early, yet broad enough to coordinate with content, engineering, analytics, UX, and legal.
In one case, I was a high-priced consultant, and every recommendation was implemented. I haven’t repeated that experience since, but it made one thing clear: VP-level endorsement was critical. That client doubled organic traffic in eight months and tripled it over three years.
Unfortunately, the in-house SEO team is just another team that might not get the chance to excel. Placement is not everything, but it is the difference between influencing the decision and fixing the outcome.
Hiring gaps
The second pattern that keeps showing up is hiring – and it surfaces long before any technical review.
Many SEO programs fail because organizations staff strategically important roles for execution, when what they really need is judgment and influence. This isn't a talent shortage. It's a screening problem.
The SEO manager often wears multiple hats, with SEO as a minor one. When they don’t understand SEO requirements, they become a liability, and the C-suite rarely sees it.
Across many engagements, I watched seasoned professionals passed over for younger candidates who interviewed well, knew the tool names, and sounded confident.
HR teams defaulted to “team fit” because it was easier to assess than a candidate’s ability to handle ambiguity, challenge bad decisions, or influence work across departments.
SEO excellence depends on lived experience. Not years on a résumé, but having seen the failure modes up close:
Migrations that wiped out templates.
Restructures that deleted category pages.
“Small” navigation changes that collapsed internal linking.
Those experiences build judgment. Judgment is what prevents repeat mistakes. Often, that expertise is hard to put in a résumé.
Without SEO domain literacy, hiring becomes theater. But we can’t blame HR, which has to hire people for all parts of the business. Its only expertise is HR.
Governance needs to step in.
One of the most reliable ways to improve recruitment outcomes is simple: let the SEO leader control the shortlist.
Fit still matters. Competence matters first. When the person accountable for results shapes the hiring funnel, the best candidates are chosen.
SEO roles require the ability to change decisions, not just diagnose problems. That skill does not show up in a résumé keyword scan.
Every department in a large organization has legitimate goals.
Product wants momentum.
Engineering wants predictable releases.
Marketing wants campaign impact.
Legal wants risk reduction.
Each team can justify its decisions – and SEO still absorbs the cost.
I have seen simple structural improvements delayed because engineering was focused on a different initiative.
At one workplace, I was asked how much sales would increase if my changes were implemented.
I have seen content refreshed for branding reasons that weakened high-converting pages. Each decision made sense locally. Collectively, they reshaped the site in ways nobody fully anticipated.
Today, we face an added risk: AI systems now evaluate content for synthesis. When content changes materially, an LLM may stop citing us as an authority on that topic.
Strong visibility governance can prevent that.
The organizations that struggled most weren’t the ones with conflict. They were the ones that failed to make trade-offs explicit.
What are we giving up in visibility to gain speed, consistency, or safety? When that question is never asked, SEO degrades quietly.
What improved outcomes was not a tool. It was governance: shared expectations and decision rights.
When teams understood how their work affected discoverability, alignment followed naturally. SEO stopped being the team that said “no” and became the function that clarified consequences.
International SEO improves when teams stop shipping locally good changes that are globally damaging. Local SEO improves when there is a single source of location truth.
Ownership gaps
Many SEO problems trace back to ownership gaps that only become visible once performance declines.
Who owns the CMS templates?
Who defines metadata standards?
Who maintains structured data? Who approves content changes?
When these questions have no clear answer, decisions stall or happen inconsistently. The site evolves through convenience rather than intent.
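A lightweight way to make those ownership questions explicit is to maintain a simple asset-to-owner map and flag anything unowned before it becomes a problem. A sketch, with illustrative asset names and teams:

```python
def ownership_gaps(assets, owners):
    """Return the assets that have no assigned owner."""
    return [a for a in assets if not owners.get(a)]

assets = ["cms_templates", "metadata_standards", "structured_data", "content_approval"]
owners = {
    "cms_templates": "engineering",
    "metadata_standards": "seo",
    # structured_data and content_approval deliberately unassigned
}
print(ownership_gaps(assets, owners))
```

A spreadsheet does the same job; the point is not the tooling but forcing each "who owns this?" question to have an answer on record.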
In contrast, the healthiest organizations I worked with shared one trait: clarity.
People knew which decisions they owned and which ones required coordination. They did not rely on committees or heavy documentation because escalation paths were already understood.
When ownership is clear, decisions move. When ownership is fragmented, even straightforward SEO work becomes difficult.
Across my career, the strongest results came from environments where SEO had:
Early involvement in upcoming changes.
Predictable collaboration with engineering.
Visibility into product goals.
Clear authority over content standards.
Stable templates and definitions.
A reliable escalation path when priorities conflicted.
Leaders who understood visibility as a long-term asset.
These organizations were not perfect. They were coherent.
People understood why consistency mattered. SEO was not a reactive service. It was part of the infrastructure.
What leaders can do now
If you lead SEO inside a complex organization, the most effective improvements come from small, deliberate shifts in how decisions get made:
Place SEO where it can see and influence decisions early.
Let SEO leaders – not HR – shape candidate shortlists.
Hire for judgment and influence, not presentation.
Create predictable access to product, engineering, content, analytics, and legal.
Stabilize page purpose and structural definitions.
Make the impact of changes visible before they ship.
These shifts do not require new software. They require decision clarity, discipline, and follow-through.
Visibility is an organizational outcome
SEO succeeds when an organization can make and enforce consistent decisions about how it presents itself. Technical work matters, but it can’t offset structures pulling in different directions.
The strongest SEO results I’ve seen came from teams that focused less on isolated optimizations and more on creating conditions where good decisions could survive change. That’s visibility governance.
When SEO performance falters, the most durable fixes usually start inside the organization.
[End of article: "Why most SEO failures are organizational, not technical," published 2026-02-03]
Is traditional SEO dead? Not exactly. But it's definitely evolving. Google still controlled a whopping 89% of all U.S. web traffic in 2025. It's still a search powerhouse, no doubt, but it isn't the only show in town anymore.
SEO as we know it is no more. The way people find information is changing dramatically.
Google’s rolling out 12-plus algorithm changes per day. At the same time, platforms like TikTok, Amazon, and generative AI tools like ChatGPT and Claude are becoming major players in the search game.
Let’s face it. Traditional SEO tactics aren’t always the best option.
Let’s dig into the data for a pulse check on SEO in 2026.
Key Takeaways
SEO isn’t dead, but traditional tactics alone won’t cut it. To stay visible, your strategy must account for AI Overviews, zero-click searches, and shifting user behavior across platforms.
AI Overviews and SERP features now dominate page one. If your content isn’t cited or structured for AI, you risk being invisible—no matter your ranking.
Brand signals like search volume, authority, and trust matter for AI visibility. Google favors entities, not just pages. Build real-world credibility if you want to rank.
Optimize for LLMs and SEO at the same time. Clear formatting, concise answers, and fact-rich content help you rank and get quoted in generative results.
Search is no longer just on Google. Users discover content through social media, marketplaces, and AI engines—your optimization strategy needs to reach beyond traditional search.
Is SEO Dead?
Google doesn't share its search volume data. However, estimates place it somewhere north of 15 billion searches per day.
This shows that SEO still holds weight, but AI and LLM searches are growing. Currently, these platforms account for about 6% of global search volume, which doesn’t seem like much. But when you consider that the number is about triple what it was a year ago, it makes marketers start to take notice.
According to SmartInsights, the top 3 positions carry double-digit click-through rates, but these drop drastically for positions lower down the page. Just look at the chart below:
This drastic drop highlights how Google’s been steadily moving toward a “zero-click” search experience.
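To make that click-through drop concrete, you can estimate expected clicks from a position-by-CTR curve. The CTR values below are illustrative placeholders, not SmartInsights' actual figures:

```python
def expected_clicks(impressions, position, ctr_by_position):
    """Estimate clicks for a ranking position given an assumed
    CTR curve (position -> click-through rate)."""
    # positions beyond the table get a small default CTR
    ctr = ctr_by_position.get(position, 0.01)
    return impressions * ctr

# Illustrative CTR curve: steep decay after the top 3 positions
ctr_curve = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.06, 5: 0.04}

for pos in (1, 3, 5, 8):
    print(pos, expected_clicks(10_000, pos, ctr_curve))
```

Even with these made-up numbers, the arithmetic shows why a slide from position 1 to position 8 can wipe out the bulk of a page's traffic.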
Does this mean AI Overviews are surely going to kill SEO? Well, no, but they’re definitely shaking things up. In fact, Google’s been moving toward its “answer engine” model and its new AI mode for a while now.
Features like featured snippets and answer boxes already provide concise information directly on the search results page, reducing the need for users to click through to websites.
This trend is driven by the rise of “zero-click content”—content that’s so comprehensive and informative that it satisfies user intent right on the search engine results page (SERP).
Essentially, users can find their answers without visiting a website.
AI Overviews take the zero-click approach to a whole new level, providing even more content directly in the search results.
So, how do we come to grips with both truths—that zero-click search directly results in less engagement with SEO results and that organic search is still a significant driver of traffic?
A common concern for marketers is that emerging AI engines, like ChatGPT, will kill the industry as we know it. But consider this: AI search engines still rely on Google and other algorithm-driven engines for information.
Instead of assuming SEO is dead, we should consider how SEO works today in conjunction with these trends.
The Face of the New SEO Campaign
To understand what success looks like in the new world of search, let’s look at a successful campaign of one of our NP Digital clients, RefiJet.
RefiJet has quickly become a leader in the motorcycle and auto loan refinancing space over the last decade. But to grow further, they needed to differentiate themselves from competitors and grow their digital footprint, all while AI search was changing the very way the game is played. Their company also faced macroeconomic challenges as high interest rates pushed many borrowers to the sidelines.
Our strategy for them blended new AI search principles with traditional SEO best practices. We focused on traditional technical SEO aspects such as crawlability, site speed, and structured data optimization. These moves boosted RefiJet’s inclusion in AI overviews.
Next, we launched on-page optimization tactics. These were aimed at catching traditional long-tail, high-intent search queries. We also leveraged retrieval-augmented generation (RAG) to showcase RefiJet’s authority in its space and boost citations across the web.
This blended approach helped RefiJet achieve some pretty eye-catching results:
Their SERP features increased 30,800% (that’s not a typo) since May 2024
Their rankings in the highly coveted 1-3 slots in Google increased 522% year-over-year
Traffic from LLMs is up 2,012% and site-wide page views from LLMs are up 7,144% year over year.
Most importantly, RefiJet’s funded loans from organic search and LLMs are up 178% year over year.
So, no. Traditional SEO is not dead. The “new” strategy just takes a blended approach to modern search problems.
SEO Isn’t Dying (It’s Just Changing)
So, is SEO dead? At this point, I think you know my answer.
That would be a resounding no.
SEO isn’t going anywhere. However, for brands to find success with SEO strategies, there are specific things to keep in mind when developing campaigns.
We know Google functions more as a discovery engine, but here is what else you need to know to dominate SERPs.
AI Is Taking Up A Larger Portion of the SERPs
If you’ve searched for anything on Google lately, you’ve probably seen it. That big, AI-generated box right at the top—pushing organic results further down the page.
Google’s AI Overviews are live, and they’re eating up prime SERP real estate. For certain keywords, especially broad informational ones, they dominate. And if your content doesn’t get cited in those summaries? You might not even show above the fold.
But it’s not just AI Overviews. Google has been quietly expanding other SERP features too, like interactive knowledge panels, visual product listings, “Discussions and Forums,” and even its experimental AI mode inside Search Labs. The days of ten blue links are long gone.
This shift doesn’t mean SEO is over. It means we have to rethink how we optimize. Your content still needs to be the best answer, but now it also needs to be the kind of content Google’s AI is willing to quote.
If you haven’t yet, start digging into how AI Overviews work. See which types of pages Google is pulling from. Understand the patterns.
SEO isn’t dying. But the way we earn visibility is shifting. Fast.
Technical Fundamentals Still Matter
Google’s focus isn’t backlinks, keyword density, or a specific SEO metric. Instead, the focus is on a seamless and enjoyable user experience.
What metrics does Google use to gauge user experience?
Using a clear navigation structure is a good place to start. If you want people to spend a lot of time on your site, you need to understand how users navigate. This includes using a clear URL structure, enabling breadcrumbs, and linking internally.
Core Web Vitals—a set of standardized metrics Google uses to measure real-world page performance—is another good launchpad. These include:
Largest Contentful Paint (LCP): The time from when a user starts loading a page until the largest image or text block is visible in the viewport. Goal: 2.5 seconds or less.
Interaction to Next Paint (INP): The time from a user interaction, like a click or key press, until the next frame is painted in response. Goal: 200 milliseconds or less.
Cumulative Layout Shift (CLS): How much a webpage’s layout unexpectedly shifts during loading. Goal: A CLS score of less than 0.1.
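Those thresholds can be checked programmatically. The sketch below applies the "good" cutoffs listed above; note that Google also defines intermediate "needs improvement" bands, which are simplified away here:

```python
def cwv_passes(lcp_seconds, inp_ms, cls_score):
    """Return which Core Web Vitals meet the 'good' thresholds:
    LCP <= 2.5 s, INP <= 200 ms, CLS < 0.1."""
    return {
        "LCP": lcp_seconds <= 2.5,
        "INP": inp_ms <= 200,
        "CLS": cls_score < 0.1,
    }

# Example measurements from a hypothetical field-data report
print(cwv_passes(lcp_seconds=2.1, inp_ms=240, cls_score=0.05))
```

In this invented example, the page passes LCP and CLS but fails INP, which points the optimization effort at interaction responsiveness rather than load speed.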
Other important user experience metrics include dwell time, time spent on page, bounce rate, and exit rate. You can find these metrics in Google Analytics.
So, how can you improve user experience? There are a few steps you can take that will positively impact the metrics mentioned above:
Improve site speed: The faster your site loads, the better experience the user will have (We saw the impact this can have in our RefiJet example). You can gauge site speed with tools like PageSpeed Insights and Pingdom.
Optimize for mobile: You can’t afford to not optimize for mobile, as it accounts for more than 50% of web traffic. Tools like PageSpeed Insights can give you the information you need to start, like eliminating render-blocking resources or reducing unused code. You will also want to consider a responsive design if you’re not already using one.
Social Search Is Taking A Larger Share
Google remains a powerful tool, but, as we’ve established, it’s no longer the sole player in search and discovery.
Platforms like TikTok, Reddit, and even voice search engines—such as Alexa and Siri—are reshaping SEO. The question is: Are you reshaping your strategies to match them?
When Google is deciding what to rank and where to rank it, it looks past its own dataset toward other spots online, like the platforms mentioned above.
All these platforms have one thing in common: They cater to users who prefer quick, conversational, or visual content.
So, what does optimizing your content strategy to leverage these platforms look like in practice? Each app has its own wrinkles you need to consider to maximize your performance across channels:
Reddit: Participate in relevant subreddits and provide value without overtly promoting.
YouTube: Create a combination of long-form and short-form videos, targeting different users on the platform.
Voice search: Focus on conversational keywords and provide clear answers to common questions.
You may be asking, why don’t users just use those platforms to find what they need?
They do, but before you say “SEO does not matter,” remember that while Google is a search engine, it can surface results from other platforms, making them relevant. We’ve seen this recently with Reddit results surging to the top of the SERPs.
As younger audiences use social media or videos more for discovery, Google will continue to update and adapt to meet user needs. And since Google pulls from so many different spaces (not just social), it still offers more reliable results on topics people want to find.
Take Reddit, for example. It shows up in a whopping 97.5 percent of Google search queries for product reviews.
Google Loves Brands
As your brand grows, you’ll find your rankings climb, because Google takes authority, trustworthiness, and relevance into account. Well-established brands typically carry higher authority and trust.
Branded search volume is the number of searches for keywords containing your brand name on a search engine. It is a useful metric for tracking growth because it reflects users’ interest in and awareness of your brand.
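Branded search volume is straightforward to track from query-level data. A sketch that computes the branded share of total search volume; the query export below is invented:

```python
def branded_share(query_volumes, brand_terms):
    """Fraction of total search volume coming from queries that
    contain any brand term (case-insensitive)."""
    total = sum(query_volumes.values())
    branded = sum(
        vol for query, vol in query_volumes.items()
        if any(term.lower() in query.lower() for term in brand_terms)
    )
    return branded / total if total else 0.0

# Hypothetical query-level export for a made-up brand "Acme"
volumes = {
    "acme running shoes": 500,
    "best running shoes": 2000,
    "acme store near me": 300,
    "running shoe reviews": 1200,
}
print(round(branded_share(volumes, ["acme"]), 2))
```

Watching this share trend upward over quarters is one of the simpler proxies for growing brand awareness.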
Let’s consider what happens when you type “men’s running shoes” into Google’s search bar. Here is an example of what you might get:
Brands, brands, and more brands.
If you search my name, Google assumes you want to look at my website, businesses, and information about me or my social accounts.
Often, Google assumes that people searching for these terms already know what they want (and likely plan to make a purchase). This is especially the case if a customer is searching for an already well-established brand.
So, how do you establish your brand?
Aligning with the E-E-A-T framework is a good start. When your brand exudes Expertise, Experience, Authoritativeness, and Trustworthiness, Google will notice (and so will users).
To build those signals:
Create content that shows off your real-world experience and authority. Think in-depth tutorials or original research. Customer stories help, too.
Earn mentions and backlinks from reputable sources. Digital PR matters here.
Foster community. Social proof, like reviews, forum engagement, or user-generated content, tells Google and users that your brand is alive and active.
While some argue SEO is dead, building brand authority proves otherwise. In the age of AI and zero-click searches, it’s your ticket to higher rankings and increased visibility.
To be clear, E-E-A-T doesn’t only help for branded terms. Be sure to optimize for branded and non-branded terms to get in front of the most users.
Intent Is More Important Than Ever
Google is getting better at understanding intent, and users expect results that feel tailored to what they actually want, not just what they typed. If someone searches “best running shoes,” are they looking to buy now, compare options, or read reviews? If your page doesn’t match that intent, it’s not going to rank or convert.
It’s not just about categories like “informational” or “transactional” anymore, either. Google’s updates and AI enhancements have made search more personalized. Things like location and device type all influence which results appear and in what format.
That means one-size-fits-all content just won’t cut it. You need to build pages that solve specific problems for specific searchers, and make it obvious within the first few seconds that your content delivers the answer.
Look at the top results for your target terms. What kind of experience is Google rewarding? Long guides? Product roundups? Local directories?
When you align with intent, you’re not just improving your SEO, you’re giving users what they came for. And that’s how you win in the long run.
Create Content That’s Friendly For LLMs and SEO (there’s crossover)
LLM tools like ChatGPT and Perplexity are changing how people search. Users can ask a question and get an instant summary, which means your content has to be worth referencing in this zero-click layer of search.
This is where LLMO (large language model optimization) comes in. It overlaps with SEO in a lot of ways. Clear structure and concise answers are good for both. But there are key differences.
LLMs don’t care about keyword density; they care about relevance and clarity. They’re more likely to pull from content that is well organized (read: easy to parse) and rich in facts. Formatting matters. Use short copy blocks and bulleted lists to increase readability, and, on the technical side, clean HTML and schema markup help machines understand your content even more.
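As a concrete example of the schema markup mentioned above, here is a sketch that emits a JSON-LD Article block of the kind a CMS template would render into a page's head. The schema.org Article type and its properties are real; the field values here are placeholders.

```python
# Sketch: emitting JSON-LD schema markup for an article page.
# The schema.org "Article" type is real; the values are placeholders.
import json

article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Is SEO Dead in 2026?",
    "author": {"@type": "Person", "name": "Jane Example"},
    "datePublished": "2026-02-02",
}

# Wrap in the script tag a template would place in the page <head>.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(article_schema, indent=2)
    + "\n</script>"
)
print(snippet)
```

Machine-readable markup like this gives both crawlers and LLM pipelines an unambiguous statement of what the page is and who wrote it.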
When it comes to backlinks, they still matter for SEO, but LLMs are more influenced by how well your content explains a topic.
If you want to future-proof your content, think about both: ranking high in search and being the source that LLMs pull from when people skip the SERPs entirely. In the example below, well-known and well-reviewed medical sources appear for a medical question.
Smart content creators are already optimizing for both worlds. Don’t get left behind.
User-Generated Content/Original Content Matters
Have you noticed that when you search on Google, your results differ from someone else’s? That’s partly because Google is focusing on rewarding content that actually shows experience.
That’s why original content, especially from real users, is more valuable than ever. Some good examples of this type of content are:
Customer reviews
Community Q&As
Case studies
Proprietary research
Photos from your team
Here’s an example of a successful UGC campaign from GoPro:
These types of content act as trust signals that feed directly into Google’s E-E-A-T framework (experience, expertise, authoritativeness, and trustworthiness).
If you haven’t already, get familiar with E-E-A-T. It’s the lens Google uses to figure out if your content deserves to rank. And in a world where LLMs are regurgitating the same surface-level info, showing firsthand knowledge is how you stand out.
User-generated content helps with that. So does publishing original insights—whether that’s internal data, lessons learned, or your unique take on industry trends. This is the kind of material Google can’t find anywhere else. It’s also what LLMs prefer to cite when pulling answers.
If you’re just rephrasing what’s already out there, you’re invisible. But if you create something worth referencing, both humans and machines will take notice.
Start building a content library that’s not just SEO-optimized, but undeniably your brand.
Focus Metrics Are Changing
Clicks and rankings used to be the gold standard in SEO. Not anymore.
Today, traditional SEO numbers like clicks and rankings only tell part of the story. With AI Overviews and zero-click searches taking over the SERPs, it’s possible to “rank” without getting any traffic. In this new environment, the way we measure success needs to evolve.
Instead of obsessing over position one, look at visibility across AI and SERP features. Are you showing up in AI summaries? In featured snippets? In the “People Also Ask” box? These touchpoints matter more now because they shape user behavior before a click even happens.
Engagement metrics are shifting, too. Scroll depth, dwell time, and interaction with on-page elements can reveal more about content quality than bounce rate ever did. The same goes for branded search volume and return visits—both strong signs that your content is resonating.
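To illustrate the engagement metrics named above, here is a minimal sketch that summarizes dwell time and scroll depth per page from event logs. The event records and page paths are invented placeholders; real data would come from your analytics export.

```python
# Sketch: summarizing engagement signals (dwell time, scroll depth) per page.
# The event log below is a hypothetical placeholder for real analytics data.
from collections import defaultdict

events = [
    {"page": "/guide", "dwell_s": 210, "scroll_pct": 90},
    {"page": "/guide", "dwell_s": 45, "scroll_pct": 30},
    {"page": "/pricing", "dwell_s": 20, "scroll_pct": 55},
]

by_page = defaultdict(list)
for e in events:
    by_page[e["page"]].append(e)

for page, evs in by_page.items():
    avg_dwell = sum(e["dwell_s"] for e in evs) / len(evs)
    max_scroll = max(e["scroll_pct"] for e in evs)
    print(f"{page}: avg dwell {avg_dwell:.0f}s, max scroll {max_scroll}%")
```

Summaries like this say more about whether content is resonating than a raw bounce rate ever did.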
Search Everywhere Optimization Has Taken Center Stage
Search engines no longer corner the market on search. Non-search platforms, like social media and generative AI engines, are increasingly being used for search and discovery, disrupting traditional SEO norms.
This is what search everywhere optimization is all about.
You can no longer assume that users are only using search engines to find services and products that they need. They’re also using marketplaces (e.g., Amazon, Walmart), social media (e.g., TikTok, Pinterest), and generative AI (e.g., ChatGPT).
This means you need to expand your search optimization efforts, well, everywhere! Here’s how:
Social media: Platforms like TikTok and Instagram prioritize engaging, visual content. Optimize by using trending hashtags, creating shareable posts, and collaborating with influencers. Forums like Reddit are also highly cited in LLM results.
Generative AI engines: Tools like ChatGPT are shaping search behavior by delivering conversational and context-aware responses. Businesses should focus on producing concise, relevant, and authoritative content to rank within these engines.
Marketplaces: Amazon and similar sites act as search engines for product discovery. Ensuring optimized product titles, descriptions, and reviews is crucial.
We’ve seen this trend of the expanded search surface for a while now, but in 2026, your audience can be found across more platforms than ever. That’s why finding where your audience hangs out, and spending time to see how they’re interacting within the community, is such an important part of modern marketing strategy.
The traditional direct path of top-of-funnel to mid-funnel to bottom-of-funnel doesn’t play anymore. Your audience can convert from virtually anywhere in today’s market. Understand where they are, understand how they’re interacting, and understand the nuances of marketing on each platform, and you’ll be good to go.
FAQs
Is local SEO dead?
Not even close. Local SEO remains essential for businesses that rely on local customers. In fact, features like the local pack, Google Business Profiles, and map results are highly influential, especially on mobile. What’s changing is how users find you. Optimize for reviews and local content to stay competitive.
How long will SEO exist?
SEO is here to stay, but it continues to evolve. As long as people use search engines, social platforms, and AI tools to discover information, SEO will exist, even if tactics evolve.
Conclusion
SEO isn’t dead; it’s adapting to how people search today.
With AI reshaping the SERPs and user behavior shifting fast, what worked five years ago won’t cut it now. But the fundamentals still matter: create useful content, match search intent, and build trust with your audience.
If you’re unsure where to start, look at your content strategy. Are you prioritizing originality and structure? That’s what both Google and LLMs are rewarding.
Now’s also the time to rethink how you measure success. Traffic is great—but brand signals like engagement and trust are carrying more weight, and will only grow in importance in the future.
Want more tactical advice? Check out our guides on search engine trends and how to improve your SEO rankings.
Modern platforms like AI didn’t kill SEO; they just made it smarter. And we all need to follow suit.
Is SEO Dead in 2026? — 2026-02-02
Google’s been quietly upgrading Search Console and Google Analytics with AI. No fanfare, just better data filtering inside platforms you already use, changing how data is surfaced, filtered, and interpreted.
These updates don’t power AI Overviews or conversational search. They work behind the scenes: Google is using AI to reduce manual analysis, surface issues faster, and help marketers understand complex datasets without exporting everything to spreadsheets.
Indexing patterns and performance trends are easier to spot, even if the underlying work still requires human judgment. Google’s automating the diagnostics. You still handle the strategy.
Key Takeaways
Google’s embedding AI into Search Console and Analytics 4 to cut down on manual data analysis. The AI handles filtering and pattern detection—you still make the decisions.
AI-powered features focus on filtering, pattern detection, and prioritization rather than execution.
Google Search Console AI helps surface performance insights faster.
Google Analytics 4 uses AI for anomaly detection, predictive metrics, and guided analysis.
Predictive metrics in GA4 (like churn probability) give you directional guidance, not guarantees. Use them to build hypotheses, not to replace analysis.
Why Google Is Embedding AI in SEO Tools
Google’s SEO tools have always produced more data than most teams can realistically analyze. As sites grow, so do performance reports and behavioral metrics. AI helps Google address that scale problem.
The main shift is from reactive analysis to proactive surfacing of insights. Instead of expecting marketers to manually filter reports, compare date ranges, and segment data, Google is using AI to highlight patterns and outliers automatically.
Search Console now groups issues more intelligently, with clearer prioritization, and more context around what matters. Analytics delivers automated insights, anomaly detection, and predictive metrics.
The most practical benefit is time savings. AI-powered filtering lets you type what you want to see instead of clicking through multiple dropdowns. You can ask for specific trends, segments, or anomalies and let the system do the slicing for you. That alone removes a lot of friction from daily SEO work.
Your SEO expertise still matters. AI just handles the mechanical steps that used to slow you down. Google’s goal is to help marketers spend less time finding the signal and more time deciding what to do with it. For teams managing complex sites, this automation is table stakes.
If you want to understand how AI fits into broader SEO workflows, check out our guide on AI SEO.
AI Features in Google Search Console
Google Search Console has gradually introduced AI-assisted functionality that focuses on diagnostics and data interpretation rather than automation.
As a start, Search Console’s performance reporting benefits from smarter analysis. The platform highlights notable changes in clicks, impressions, and rankings without requiring manual comparison. This helps teams catch traffic drops or unexpected gains earlier, before they become larger problems.
Conversational-style filtering saves even more time. Instead of manually applying multiple filters, marketers can describe what they want to see, and Search Console narrows the data automatically. This reduces the time spent digging through reports just to answer basic questions.
Here’s how it works in practice: Instead of clicking Performance > Filters > Query > Contains > ‘product name’ > Apply, you type ‘show me queries for product pages with declining CTR.’ The AI interprets your request, applies the right filters, and shows you the data. That’s the time savings—going from five clicks to one typed question.
Note: Conversational filtering is rolling out gradually and may not be available in all Search Console accounts yet.
AI won’t fix your indexing issues or update your site. It finds problems faster so you can fix them yourself. The value comes from speed and clarity, not automation. For SEO teams, this shortens the path between detection and action without removing human oversight.
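The click-path-to-typed-question translation described above can be sketched as a toy function: a natural-language request becomes a structured filter applied to report rows. This is an illustration of the concept only, not Google's implementation, and the sample rows are invented.

```python
# Toy sketch of the idea behind conversational filtering: a typed request
# becomes a structured filter applied to report rows. Illustration only,
# not Google's implementation; the rows below are invented.

def apply_filter(rows, contains, max_ctr_change):
    """Keep queries containing a substring whose CTR change is at or below a threshold."""
    return [
        r for r in rows
        if contains in r["query"] and r["ctr_change"] <= max_ctr_change
    ]

rows = [
    {"query": "acme product review", "ctr_change": -0.8},
    {"query": "acme product pricing", "ctr_change": +0.3},
    {"query": "seo basics", "ctr_change": -1.2},
]

# "show me queries for product pages with declining CTR" translates to:
declining = apply_filter(rows, contains="product", max_ctr_change=0.0)
print([r["query"] for r in declining])  # prints ['acme product review']
```

The point is the interface change: the filter logic is the same as before, but the user expresses it in one typed sentence instead of five clicks.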
AI Features in Google Analytics 4
This is partly because GA4 handles more complex event-based data and cross-device behavior.
Analytics Advisor is the most visible AI feature. Currently in beta and not yet available to everyone, it automatically flags unusual patterns, such as sudden traffic spikes, drops, or changes in engagement. These insights appear without manual configuration and are designed to draw attention to potential issues or opportunities.
To access Analytics Advisor, click the lightbulb icon in the top right corner of any GA4 property. The insights refresh daily and highlight metrics that deviate from your baseline. You might see ‘Pageviews from organic search increased 47% compared to last week’ with a link to explore the affected pages. That’s faster than manually comparing week-over-week reports.
Predictive metrics add another layer. Examples include purchase probability, churn probability, and revenue prediction for eligible properties. These metrics help teams forecast outcomes based on historical behavior rather than relying purely on past performance.
Predictive metrics require at least 1,000 positive and 1,000 negative examples of the target event over 28 days. If your site doesn’t meet that threshold, you won’t see predictions for purchase probability or churn. This makes the feature more useful for high-traffic e-commerce sites than small content publishers.
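The eligibility rule stated above is easy to express as a quick check. The threshold (1,000 positive and 1,000 negative examples of the target event over the lookback window) comes from the text; the event counts below are hypothetical.

```python
# Sketch: checking the stated data threshold for GA4 predictive metrics
# (at least 1,000 positive and 1,000 negative examples of the target event).
# The counts in the usage lines are hypothetical.

def prediction_eligible(positives, negatives, threshold=1000):
    """True if both outcome classes meet the minimum example count."""
    return positives >= threshold and negatives >= threshold

# Hypothetical 28-day counts for a "purchase" event:
print(prediction_eligible(positives=1350, negatives=48200))  # prints True
print(prediction_eligible(positives=410, negatives=9800))    # prints False
```

A quick check like this tells you up front whether purchase or churn predictions will ever appear for a property, before you go looking for them.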
Another important use of AI in GA4 is automated anomaly detection. The platform monitors metrics continuously and alerts users when behavior deviates from expected patterns. This can surface tracking issues, campaign impacts, or site problems more quickly than manual review.
GA4’s AI points you toward what matters. You still handle the investigation. Teams still need to validate data quality, understand context, and decide how insights should influence strategy.
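To show the kind of baseline-deviation logic behind automated anomaly alerts, here is a deliberately simple sketch: flag a day whose metric sits more than three standard deviations from the trailing week. GA4's actual models are more sophisticated; the pageview numbers are invented.

```python
# Sketch: a simple baseline-deviation check of the kind behind anomaly
# alerts. Illustrative only; GA4's detection models are more complex.
from statistics import mean, stdev

def is_anomaly(history, today, z_threshold=3.0):
    """Flag today's value if it deviates strongly from the trailing baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > z_threshold

pageviews_last_week = [1180, 1240, 1195, 1210, 1260, 1225, 1205]
print(is_anomaly(pageviews_last_week, today=1790))  # prints True (a spike)
print(is_anomaly(pageviews_last_week, today=1230))  # prints False (normal range)
```

The value of automating this is continuous coverage: the check runs every day on every metric, which is exactly the manual review work that doesn't scale.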
Other Google Tools Getting Smarter With AI
Beyond Search Console and GA4, several other Google tools marketers use regularly now rely on machine learning to guide decisions and reduce manual work.
Google Analytics 4’s predictive metrics extend beyond reporting. They influence how audiences are built and activated, especially when connected to Google Ads. This allows marketers to target users based on likely future behavior rather than past actions alone.
Google Ads leans on machine learning to suggest budget shifts, adjust bids automatically, and test creative variations. These systems focus on optimization suggestions rather than forced changes: you can accept or reject them, and final control stays with the advertiser.
Here’s the distinction that matters: diagnostic AI explains what is happening now and why; predictive AI estimates what might happen next. Both influence how marketers act, but they serve different purposes. Understanding which type of insight a tool provides helps teams decide how much weight to give its recommendations.
This changes your daily workflow. Instead of checking reports manually and looking for problems, you respond to flagged issues. Instead of building audience segments from scratch, you refine AI-generated segments. The shift is from ‘find the problem’ to ‘validate the finding.’ That’s faster, but it requires trust in the system’s baseline accuracy.
Should You Trust AI to Support Your Reporting?
Google’s using AI to decide what you see first in your reports, what gets flagged, and what feels urgent. That raises control questions.
Trust the insights. Verify the recommendations. AI supports reporting by prioritizing information, not by defining truth. Understanding its role helps teams use it effectively without losing oversight.
Is AI Taking Too Much Control?
One concern is that AI-driven data points could push marketers into autopilot mode. When tools highlight issues automatically, it’s tempting to assume they reflect the full picture.
AI helps you see more. It surfaces technical problems and data anomalies that teams often miss because they’re buried in reports or obscured by volume, reducing the chance that important issues stay hidden.
Don’t follow every data point blindly. AI recommendations are based on models and thresholds that may not reflect business context. Treat insights as starting points, not final answers. Validation still matters.
Who Really Gets the Advantage?
People assume big brands with more data get better AI insights. Not true. Everyone has access to the same tools.
The advantage goes to teams that actually use the insights. A local contractor who spots a data anomaly flagged by Search Console and acts on it outranks a national franchise that ignores the same alert.
AI lowers the barrier to analysis, but it doesn’t guarantee better outcomes. Interpretation and execution still determine results.
FAQs
Does AI in GA4 replace manual analysis?
No. AI highlights anomalies and predictions, but analysts still need to validate findings and decide how to act.
Are predictive metrics in GA4 always accurate?
Predictive metrics are estimates based on historical data. They provide directional guidance, not certainty.
Conclusion
AI makes Google’s SEO tools more efficient. It doesn’t replace the need for strategy. You still need to validate insights, understand your business context, and decide how to act on recommendations. The teams winning with these tools treat AI as an assistant, not an autopilot.
They use automated insights to find problems faster, then apply their own expertise to fix them. That combination (AI-powered detection plus human strategy) is what drives results. Start by exploring the AI features already available in your Search Console and GA4 accounts. Check what Analytics Advisor has flagged. Look at how Search Console groups your indexing issues.
See if the insights align with what you’re already tracking manually. Then decide where automation saves you real time.
AI-Powered Functionality in Google’s SEO Tools — 2026-01-29
Search today looks very different from what it did even a few years ago. Users are no longer browsing through SERPs to make up their own minds; instead, they are asking AI tools for conclusions, summaries, and recommendations. This shift changes how visibility is earned, how trust is formed, and how brands are evaluated during discovery. In AI-driven search, large language models interpret information, decide what matters, and present a narrative on behalf of the user.
Search has evolved; users now rely on AI for conclusions instead of traditional SERPs
Conversational AI serves as a new discovery layer; users expect quick answers and insights
Brands must navigate varied interpretations of their presence across different LLMs
Yoast AI Brand Insights helps track brand mentions and identify gaps in AI visibility across models
Understanding LLM brand visibility is crucial for modern brand strategy and perception
The rise of conversational AI as a discovery layer
“Assistant engines and wider LLMs are the new gatekeepers between our content and the person discovering that content – our potential new audience.” — Alex Moss
Search is no longer confined to typing queries into a search engine and scanning a list of links. Today’s discovery journey frequently begins with a conversation, whether that’s a typed question in a chatbot, a voice prompt to an AI assistant, or an embedded AI feature inside a platform people use every day.
This shift has made conversational AI a new layer of discovery, where users expect direct answers, recommendations, and curated insights that help them make decisions and build brand perception more quickly and confidently.
Discovery is happening everywhere
Users are now encountering AI-powered discovery across a range of interfaces:
AI chat interfaces
Tools like ChatGPT allow users to ask open-ended questions and follow up in a conversational manner. These interfaces interpret intent and tailor responses in a way that feels natural, making them a go-to for exploratory search.
Platforms such as Perplexity synthesize information from multiple sources and often cite them. They act as research helpers, offering concise summaries or explanations to complex queries.
Embedded AI experiences
AI is increasingly built directly into search and discovery environments that people already use. Examples include AI-assisted summaries within search results, such as Google’s AI Overviews, as well as AI features embedded in browsers, operating systems, and apps. In these moments, users may not even think of themselves as “using AI,” yet AI is already influencing what information is surfaced first and how it is interpreted.
This broad distribution of AI discovery surfaces means users now expect accessibility of information regardless of where they are, whether in a chat, an app, or embedded in the places they work, shop, and explore online.
How people are using AI in their day-to-day discovery
Users interact with conversational AI for a wide range of purposes beyond traditional search. These models increasingly guide decisions, comparisons, and exploration, often earlier in the journey than classic search engines.
Here are some prominent ways people use LLMs today:
Product comparisons
ChatGPT gives a detailed brand comparison
Rather than visiting multiple sites and aggregating reviews, 54% of users ask AI to compare products or services directly, for example, “How does Brand A compare to Brand B?” and “What are the pros and cons of X vs Y?” AI synthesizes information into a concise summary that often feels more efficient than browsing search results.
“Best tools for…” queries
Result by ChatGPT for “best crm software for smbs.”
Did you know 47% of consumers have used AI to help make a purchase decision?
AI users frequently ask for ranked suggestions or curated lists such as “best SEO tools for small businesses” or “top content optimization software.” These queries serve as discovery moments, where brands can be suggested alongside context and reasoning.
Trust and validation checks
Many users prompt AI models to validate decisions or confirm perceptions, for example, “Is Brand X reputable?” or “What do people say about Service Y?” AI responses blend sentiment, context, and summarization into one narrative, affecting how trust is formed.
A study by Yext found that 42% of users employ AI for early-stage exploration, such as brainstorming topics, gathering potential search intents, or understanding broad categories before narrowing down specifics. AI user archetypes range from creators who use AI for ideation to explorers seeking deeper discovery.
Local discovery and service search
ChatGPT recommendations for “best cheesecake places in Lucknow, India.”
AI is also used for local searches. For example, many users turn to AI tools to research local products or services, such as finding nearby businesses, comparing local options, or understanding community reputations. In a recent AI usage study by Yext, 68% of consumers reported using tools like ChatGPT to research local products or services, even as trust in AI for local information remains lower than traditional search.
In each of these moments, conversational AI doesn’t just surface brands; it frames them by summarizing strengths, weaknesses, use cases, and comparisons in a single response. These narratives become part of how users interpret relevance, trust, and fit far earlier in the decision-making process than in traditional search.
Not all LLMs interpret brands the same way
As conversational AI becomes a discovery layer, one assumption often sneaks in quietly: if your brand shows up well in one AI model, it must be showing up everywhere. In reality, that’s rarely the case. Large language models interpret, retrieve, and present brand information differently, which means relying on a single AI platform can give a very incomplete picture of your brand’s visibility.
To understand why, it helps to look at how some of the most widely used models approach answers and brand mentions.
How ChatGPT interprets brands
ChatGPT is often used as a general-purpose assistant. People turn to it for explanations, comparisons, brainstorming, and decision support. When it mentions brands, it tends to focus on contextual understanding rather than explicit sourcing. Brand mentions are frequently woven into explanations, recommendations, or summaries, sometimes without clear attribution.
From a visibility perspective, this means brands may appear:
As examples in broader explanations
As recommendations in “best tools” or comparison-style prompts
As part of a narrative rather than a cited source
The challenge is that brand mentions can feel correct and authoritative, while still being outdated, incomplete, or inconsistent, depending on how the prompt is phrased.
How Gemini interprets brands
Gemini is deeply connected to Google’s ecosystem, which influences how it understands and surfaces brand information. It leans more heavily on entities, structured data, and authoritative sources, and its outputs often reflect signals familiar to traditional SEO teams.
For brands, this means:
Visibility is closely tied to how well the brand is understood as an entity
Clear, consistent information across the web plays a bigger role
Mentions often align more closely with established sources
Gemini can feel more predictable in some cases, but that predictability depends on strong foundational signals and accurate brand representation across trusted platforms.
How Perplexity interprets brands
Perplexity positions itself as an answer engine rather than a general assistant. It emphasizes citations and source-backed responses, which makes it popular for research and comparison queries. When brands appear in Perplexity answers, they are often tied directly to cited articles, reviews, or documentation.
This creates a different visibility dynamic:
Brands may be surfaced only if they are referenced in cited sources
Freshness and topical relevance matter more
Competitors with stronger editorial or PR coverage may appear more often
Here, brand presence is tightly coupled with external content and how frequently that content is used as a reference.
How these models differ at a glance
ChatGPT: brands are surfaced through contextual mentions woven into explanations and recommendations; visibility is influenced by how the prompt is phrased and the surrounding context.
Gemini: brands are surfaced through mentions aligned with established, authoritative sources; visibility is influenced by entity signals, structured data, and consistent information across the web.
Perplexity: brands are surfaced through citation-backed mentions tied to specific sources; visibility is influenced by freshness, topical relevance, and editorial or PR coverage.
Once you see how differently large language models interpret brands, one thing becomes clear: looking at just one AI model gives you an incomplete picture. AI-driven discovery does not produce a single, consistent version of your brand. It produces multiple interpretations, shaped by the model, its data sources, and users’ interactions with it.
Therefore, tracking your brand across multiple LLMs is essential because:
Brand visibility is fragmented by default
Across different LLMs, the same brand can show up in very different ways:
Correctly represented in one model, where information is accurate and well-contextualized
Completely missing in another, even for relevant queries
Partially outdated or misrepresented in a third, depending on the sources being used
This fragmentation happens because each model processes and prioritizes information differently. Without visibility across models, it’s easy to assume your brand is ‘covered’ when, in reality, it may only be visible in one corner of the AI ecosystem.
Different audiences use different AI tools
AI usage is not concentrated in a single platform. People choose tools based on intent:
Some use conversational assistants for exploration and ideation
Others rely on citation-led answer engines for research
Many encounter AI passively through search or embedded experiences
If your brand appears in only one environment, you are effectively visible only to a subset of your audience. This mirrors challenges SEO teams already recognize from traditional search, where performance varies by device, location, and search feature. The difference is that with AI, these variations are less obvious and more challenging to track without dedicated insights.
Blind spots create real business risks
Limited visibility across LLMs doesn’t just affect awareness; it also impairs learning. Over time, it can lead to:
Inconsistent brand narratives, where AI tools describe your brand differently depending on where users ask
Missed demand, especially for comparison or “best tools for” queries
Competitors being recommended instead, simply because they are more visible or better understood by a specific model
These outcomes are rarely intentional, but they can quietly influence brand perception and decision-making long before users reach your website.
All of these points lead to one conclusion: a broader, multi-model view builds a more complete understanding of brand visibility.
The challenge: LLM visibility is hard to measure
As brands start paying attention to how they appear in AI-generated content, a new problem becomes obvious: LLM visibility doesn’t behave like traditional search visibility. The signals are fragmented, opaque, and constantly changing, which makes tracking and understanding brand presence across AI models far more complex than tracking rankings or traffic.
Below are some key challenges brand marketers might face when trying to understand how their brand appears to large language models.
1. Lack of visibility across AI platforms
Different LLMs, such as ChatGPT, Gemini, and Perplexity, rely on various data sources, retrieval methods, and citation logic. As a result, the same brand may be mentioned prominently in one model, inconsistently in another, or not at all elsewhere.
Without a unified view, it’s difficult to answer basic questions like where your brand shows up, which AI tools mention it, and where the gaps are. This fragmentation makes it easy to overestimate visibility based on a single platform.
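A first step toward the unified view described above is simply sampling answers from each model and checking where your brand never appears. Here is a minimal sketch of that idea; the model names reflect the tools discussed in the text, but the sampled responses and the "Acme"/"RivalCo" brands are invented placeholders.

```python
# Sketch: spotting coverage gaps by checking which AI models' sampled
# answers never mention a brand. The responses and brand names are
# invented placeholders for illustration.

responses = {
    "ChatGPT":    ["Acme and RivalCo are both popular options here."],
    "Gemini":     ["RivalCo leads this category for most teams."],
    "Perplexity": ["According to recent reviews, Acme scores well."],
}

def mention_gaps(responses, brand):
    """Return models whose sampled answers never mention the brand."""
    return sorted(
        model for model, answers in responses.items()
        if not any(brand.lower() in a.lower() for a in answers)
    )

print(mention_gaps(responses, "Acme"))  # prints ['Gemini']
```

Even a naive substring check like this answers the basic questions the text raises: where does the brand show up, and where are the gaps? A production tool would add prompt variation, sentiment, and tracking over time.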
2. No clear insight into how AI describes your brand
AI models often mention brands as part of explanations, comparisons, or recommendations, but traditional analytics tools don’t capture how those brands are described. Teams lack visibility into tone, context, sentiment, or whether mentions are positive, neutral, or misleading.
This makes it hard to understand whether AI is reinforcing your intended brand positioning or subtly reshaping it in ways you can’t see.
3. No structured way to measure change over time
AI-generated answers are inherently dynamic. Small changes in prompts, updates to models, or shifts in underlying data can all influence how brands appear. Without consistent, longitudinal tracking, it’s nearly impossible to tell whether visibility is improving, declining, or simply fluctuating.
One-off checks may offer snapshots, but they don’t reveal trends or patterns that matter for long-term strategy.
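As an illustration of what longitudinal tracking adds over one-off checks, the sketch below turns periodic snapshots into a simple trend signal. The weekly rates are invented numbers for the example, and the half-versus-half comparison is just one crude way to separate a trend from noise:

```python
# Hypothetical weekly snapshots: fraction of tracked prompts where the
# brand appeared in a model's answers that week.
weekly_rates = [0.20, 0.25, 0.22, 0.30, 0.35, 0.33]

def trend(rates):
    """Compare the average of the later half to the earlier half.
    Positive = visibility improving, negative = declining."""
    mid = len(rates) // 2
    early = sum(rates[:mid]) / mid
    late = sum(rates[mid:]) / (len(rates) - mid)
    return round(late - early, 3)

print(trend(weekly_rates))  # positive here: visibility is trending up
```

A single snapshot from this series (say, week 3 at 0.22) would have suggested stagnation; the series shows improvement.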
4. Limited ability to benchmark against competitors
Seeing your brand mentioned in AI answers is a start, but it doesn’t tell you the whole story. The real question is what’s happening around it: which competitors appear more often, how they’re described, and who AI recommends when users are ready to decide.
Without comparative insights, teams struggle to understand whether AI visibility represents a competitive advantage or a missed opportunity.
5. Missing attribution and source clarity
Some AI models summarize or paraphrase information without clearly attributing sources. When brands are mentioned, it’s not always obvious which pages, articles, or properties influenced the response.
This lack of source visibility makes it difficult to connect AI mentions back to specific content efforts, PR coverage, or SEO work, leaving teams guessing what is actually driving brand representation.
6. Existing tools weren’t built for AI visibility
Traditional SEO and analytics platforms are designed around clicks, impressions, and rankings. They don’t capture AI-powered mentions, sentiment, or visibility trends because AI platforms don’t expose those signals in a structured way.
As a result, teams are left without reliable reporting for one of the fastest-growing discovery channels.
Together, these challenges point to a clear gap: brands need a new way to understand visibility that reflects how AI models surface and interpret information. This is where tools explicitly designed for AI-driven discovery, such as Yoast AI Brand Insights, come into play.
How does Yoast AI Brand Insights help?
AI-driven brand discovery, as we've seen, is fragmented and opaque. That leads to a practical question: how do brand marketing teams actually make sense of it?
Traditional SEO tools weren’t built to answer that, which is where Yoast AI Brand Insights comes in. It’s designed to help users understand how brands appear in AI-generated answers and is available as part of Yoast SEO AI+.
Rather than focusing on rankings or clicks, Yoast AI Brand Insights focuses on visibility and interpretation across large language models.
Track brand mentions across multiple AI models
One of the biggest gaps in AI visibility is fragmentation. Brands may appear in one AI model but not in another, without any obvious signal to explain why. Yoast AI Brand Insights addresses this by tracking brand mentions across multiple AI platforms, including ChatGPT, Gemini, and Perplexity.
This gives teams a clearer view of where their brand appears, rather than relying on isolated checks or assumptions based on a single model.
Identify gaps, inconsistencies, and opportunities
AI-generated answers don’t just mention brands; they frame them. Yoast AI Brand Insights helps surface patterns in how a brand is described, making it easier to spot:
Where mentions are missing altogether
Where descriptions feel outdated or incomplete
Where competitors appear more frequently or more favorably
These insights turn AI visibility into something teams can actually act on, rather than a black box.
Shared insights for SEO, PR, and content teams
AI-driven discovery sits at the intersection of SEO, content, and brand communication. One of the strengths of Yoast AI Brand Insights is that it provides a shared view of AI visibility that multiple teams can use. SEO teams can connect AI mentions back to site signals, content teams can understand how messaging is interpreted, and PR or brand teams can see how external coverage influences AI narratives.
Instead of working in silos, teams get a common reference point for how the brand appears across AI-driven search experiences.
A natural extension of Yoast’s SEO philosophy
Yoast AI Brand Insights builds on principles Yoast has long emphasized: clarity, consistency, and understanding how search systems interpret content. As AI becomes part of how people discover brands, those same principles now apply beyond traditional search results and into AI-generated answers.
In that sense, Yoast AI Brand Insights isn’t about chasing AI trends. It’s about giving teams a more straightforward way to understand how their brand is represented, where discovery is increasingly happening.
From rankings to representation in AI-driven search
AI-driven discovery is no longer an edge case. It’s becoming a regular part of how people explore options, validate decisions, and form opinions about brands. As large language models continue to evolve, the question for brands is not whether they appear in AI-generated answers, but whether they understand how they appear, where they appear, and what story is being told on their behalf. Gaining visibility into that layer is quickly becoming a foundational part of modern brand and search strategy.
Why does having insights across multiple LLMs matter for brand visibility?
Discord has moved far beyond its gaming roots. Today, it’s becoming a direct access channel for brands that care about real engagement and meaningful digital PR outcomes.
This isn’t a Discord 101 guide. Most marketers already understand what the platform is and how servers work. Far fewer know how to use Discord for engagement and PR, even as email pitches fail and social algorithms tank reach.
Discord matters now because it removes friction. Brands get real-time access to fans, creators, journalists, and niche communities without algorithmic interference. Over 200 million people use Discord monthly, and brands from Shopify to The New York Times now run active servers. Conversations happen in the open, persist over time, and create context that traditional channels struggle to replicate.
Brands can show up consistently in spaces people actually want to join. That changes how relationships form and stories emerge.
In this article, we’ll break down how marketers and PR teams can use Discord to drive engagement, support press outreach, host event-style campaigns, and turn community activity into earned media.
Key Takeaways
Discord works best as a relationship channel, not a broadcast platform. Engagement comes from participation, not posting frequency.
PR teams can use Discord to build trust and shared context before any formal outreach happens.
Features like roles, private channels, and stages support controlled access for media and creators.
Event-driven engagement inside Discord often creates moments journalists and creators want to reference.
Earned media from Discord grows out of visible conversation, not promotional messaging.
Most brands fail on Discord by broadcasting instead of conversing. The platform rewards brands that facilitate discussion, respond quickly, and give members real access to decision-makers.
Why Discord Is More Than Just a Community Platform
Discord gets grouped with other community tools, but that undersells what it actually does.
Discord is an owned communication layer. Members opt in. Conversations persist. There’s no feed to fight and no algorithm deciding who sees what. Engagement teams tired of declining social reach find that valuable.
The platform has also expanded into professional and brand-led use cases. B2B companies, SaaS platforms, media brands, and creator-led businesses now use Discord to host product discussions, feedback loops, and industry conversations. These servers often function as always-on focus groups where insight flows both directions.
Shopify hosts channels for developers and partners. Notion uses Discord for product feedback and feature requests. These aren’t gaming communities—they’re professional spaces where brands get direct access to customers, partners, and media without paying for ads or fighting algorithms.
For PR teams, Discord introduces something email can’t replicate: visible context. Journalists and creators don’t just receive a message. They see how a brand responds to questions, explains decisions, and engages with its community over time.
A tech journalist following a SaaS brand’s Discord sees how they handle bug reports, communicate delays, and support users. That context makes it easier to cover the company fairly when news breaks. Email alone can’t build that kind of ongoing visibility.
That ongoing presence builds familiarity before coverage is ever discussed. Discord blends access, continuity, and transparency into a single environment, which sets the foundation for both engagement and digital PR.
Core Features That Make Discord Ideal for Engagement and PR
Discord’s strength is ongoing conversation, not one-way distribution. That distinction changes how engagement and digital PR teams plan campaigns.
Chat channels stick around. Conversations don’t disappear after a day or get buried by new posts. A strong AMA thread, product debate, or media Q&A can remain active and searchable for weeks, giving journalists and creators extended context without repeated outreach.
Roles and access control make Discord viable for PR use cases. Teams can create press-only channels, creator lounges, or embargoed spaces tied to launches. Access feels intentional rather than promotional, which increases participation and trust.
Here’s how that works in practice: You can create a #press-only channel where journalists see embargoed announcements, background context, and Q&A access before public launches. A #creators channel might include early product access, collaboration opportunities, and direct messaging with your team. Fans see neither of these spaces—they get their own channels focused on community discussion and support. That segmentation makes Discord feel exclusive and valuable to each group.
Events, stages, and AMAs introduce timed engagement bursts. Moderated formats work well for leadership conversations, briefings, and launches. These events concentrate attention while still allowing real interaction.
Stages support up to 1,000 listeners with interactive Q&A. That’s enough for most brand events without requiring webinar software or event platforms. The recording stays in the channel afterward, so people who missed the live session can still participate in the discussion.
Integrations extend Discord’s usefulness. Feedback tools, shared resource hubs, and workflow automations connect Discord activity to broader marketing and PR efforts. Instead of living in a silo, Discord becomes part of day-to-day operations.
The key advantage is flexibility. Discord lets teams design micro-environments around how people actually communicate.
Using Discord to Build Journalist and Creator Relationships
Most PR teams still rely on cold email, despite falling response rates. Journalists and creators increasingly prefer communication that feels conversational and contextual rather than transactional.
Discord makes non-pitch engagement possible. Skip the ask. Invite journalists and creators into private or semi-private channels first. These spaces offer early context, background discussion, or access to subject-matter experts without pressure.
Buffer runs a Discord server where journalists can ask the CEO or product team questions directly. No PR gatekeepers. No scheduling calls. Just post a question in the #media channel and get a response within hours. That accessibility makes Buffer easier to cover than competitors who require formal interview requests and two-week lead times.
Direct access to decision-makers changes expectations. Journalists can ask follow-up questions, clarify details, or observe how a brand thinks before deciding whether a story fits. Creators can explore ideas collaboratively rather than responding to a single brief.
Here’s a simple journalist outreach flow:
Create a private #press channel with embargoed access
Invite 10-15 journalists who cover your industry (not thousands)
Share early context on product launches, company updates, or industry insights
Let them ask follow-up questions async
When a story fits, the relationship already exists
Over time, transparency and responsiveness in chat build trust faster than long email threads. When a pitch does make sense, the relationship already exists.
This approach works particularly well for tech, SaaS, and creator-driven industries where speed, access, and nuance influence coverage decisions.
This approach doesn’t work for every brand. Mass consumer brands or highly regulated industries might struggle with open-channel discussions. But for companies selling to creators, developers, or digital professionals, Discord shortens the relationship-building cycle from months to weeks.
Event-Based Engagement: How to Use Discord for Launches, AMAs, and More
Smart brands treat Discord like a live venue, not a static community.
Product launches often include countdown channels, staged reveals, and post-drop discussion. Leadership teams host AMAs. Engineers, designers, and product managers run Q&A sessions that surface both feedback and insight.
Good events take prep work. Clear goals, advance question collection, and active moderation improve outcomes and keep discussions focused.
Effective Discord events typically include:
Before the event:
Announce 3-5 days early with clear agenda
Create dedicated event channel
Collect questions in advance via Google Form or channel thread
Assign at least 2 moderators
Test Stage or voice channel setup
During the event:
Pin the event agenda
Start with 3-5 pre-submitted questions to build momentum
Let mods filter and prioritize live questions
Keep responses under 3 minutes each
Screenshot strong quotes for later use
After the event:
Post a recap with key quotes, decisions, or takeaways
Thank participants by name
Share recap as blog post or social content
Leave the channel open for continued discussion
Most effective Discord events run 45-60 minutes. Longer sessions lose energy. Shorter sessions feel rushed. Plan for 10-12 questions max, with flexibility for strong follow-ups.
Events focused on audience value beat pure announcements every time. These moments also create reusable assets. Quotes, insights, and screenshots often become blog content, social posts, or supporting material for PR outreach.
Driving Earned Media Through Discord Engagement
Growing your Discord server matters less than what happens inside it.
Active communities generate stories organically. Journalists reference AMA insights. Industry newsletters cite ongoing discussions. Blogs quote real community sentiment.
Community-driven narratives often outperform traditional press releases because they show participation rather than positioning. Readers trust stories that reflect real dialogue.
A transparent Q&A or high-energy discussion thread can become the foundation for coverage. Discord surfaces narratives that feel timely, authentic, and grounded in lived interaction.
To maximize earned media potential from Discord:
Make conversations screenshot-friendly. Clear usernames, well-formatted responses, and threaded discussions make it easier for journalists to reference your server.
Highlight notable members. When industry experts or recognizable creators participate in your Discord, that increases media appeal.
Track quotable moments. Assign someone to screenshot strong quotes, insights, or exchanges during active discussions. These become PR assets.
Pitch the conversation, not just the product. Send journalists a link to an active discussion thread, not a press release. Let them see the community energy firsthand.
Common Mistakes When Using Discord for PR and Engagement
The biggest mistake? Treating Discord like a broadcast channel.
Post links without conversation and your server dies. Members expect response and interaction, not scheduled promotion.
Another issue is weak moderation. Servers without clear purpose or active moderators lose focus fast, which discourages journalists and creators from participating.
PR teams also create friction when they treat creators or journalists like captive audiences. Discord works because participation is voluntary and collaborative.
Guide discussion. Share insider context. Show up consistently. Respect the community’s time.
Mistake #1: Broadcasting Without Responding
Posting “Check out our new blog post!” and disappearing doesn’t work. People expect you to discuss the post, answer questions, or explain why it matters. If you’re not ready to engage, don’t post.
Mistake #2: No Clear Server Purpose
Servers that try to be everything—community hub, support forum, news feed, social network—confuse members. Pick 2-3 core functions and build around those. Zapier’s Discord focuses on automation discussion and customer success. That’s it.
Mistake #3: Treating Journalists Like Fans
Journalists don’t want hype. They want context, access, and honesty. A press channel filled with marketing language gets ignored. Background information, data, and direct responses get used.
Mistake #4: Inconsistent Presence
Posting daily for two weeks, then ghosting for a month, breaks trust. If you can’t maintain active engagement, don’t launch a server. Better to have no Discord than an abandoned one.
Mistake #5: Over-Moderation or Under-Moderation
Too many rules kill discussion. No rules create chaos. Find the balance: clear guidelines, active mods who participate (not just police), and flexibility for organic conversation.
Tools, Bots, and Setups to Maximize PR ROI
The right setup makes Discord manageable for small teams and scalable for larger ones.
Roles segment audiences cleanly. Press, creators, and fans shouldn’t share the same access paths. Clear onboarding channels explain where to engage and what matters.
Bots support efficiency:
Event scheduling and reminders
Moderation and automation
Engagement and activity tracking
Larger teams often use ticket-style workflows to route media requests or creator inquiries without cluttering channels.
The goal is structure without rigidity. Discord should feel organized, not over-engineered.
For Events: Sesh – Schedules events with automatic reminders. Members RSVP directly in Discord, and the bot pings them 15 minutes before start time.
For Moderation: MEE6 – Auto-moderates spam, assigns roles based on activity, and sends custom welcome messages to new members. Free tier handles most small-to-mid sized servers.
For Analytics: Statbot – Tracks message volume, active members, peak engagement times, and channel-level activity. Shows which conversations generate the most participation—useful for PR teams measuring impact.
For Workflow: Zapier’s Discord integration – Connects Discord to Google Sheets, Notion, or your CRM. Auto-post media inquiries to a tracking sheet or notify your team in Slack when someone joins your press channel.
For Ticketing: Ticket Tool – Creates private support threads for media requests, creator pitches, or partnership inquiries. Keeps channels clean while routing requests to the right team member.
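As a rough sketch of the workflow automation described above, the snippet below formats a media inquiry and posts it to a Discord channel through Discord's incoming-webhook endpoint, which accepts a JSON payload with a `content` field. The webhook URL is a placeholder and the inquiry fields are assumptions for illustration; in practice a Zapier zap or ticket bot could sit in this seam instead of a manual POST:

```python
import json
from urllib import request

# Placeholder -- replace with a real webhook URL created in your server's
# channel settings (Integrations > Webhooks).
WEBHOOK_URL = "https://discord.com/api/webhooks/<id>/<token>"

def build_inquiry_alert(name, outlet, topic):
    """Format a media inquiry as a Discord webhook payload."""
    return {"content": f"Media inquiry from {name} ({outlet}): {topic}"}

def send_alert(payload, url=WEBHOOK_URL):
    """POST the payload to a Discord incoming webhook (needs a real URL)."""
    req = request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)

payload = build_inquiry_alert("Dana", "TechWeekly", "launch timeline")
print(payload["content"])
# send_alert(payload)  # uncomment once WEBHOOK_URL is real
```

The same payload shape works for join notifications or quotable-moment logging; only the `content` string changes.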
FAQs
How do you engage a Discord community?
Run regular events like AMAs, Q&As, or feedback sessions. Assign roles that give members status and access (not just colors). Recognize active contributors publicly. Create channels for member-led discussions, not just brand announcements. Give people reasons to return daily, like ongoing conversations or exclusive content drops.
What’s the minimum team size needed to run a Discord server effectively?
You can run a small server (under 500 members) with one dedicated person spending 30-60 minutes daily. Larger servers need at least 2-3 moderators to handle different time zones and maintain a consistent presence. Consider community volunteers once your server reaches 1,000+ active members.
How long does it take to see PR results from Discord?
Expect 3-6 months before Discord activity generates measurable earned media. Relationships take time. The first month focuses on setup and onboarding. Months 2-3 build conversation patterns. Months 4-6 typically produce quotable moments and media references. Results accelerate after you establish a consistent presence and trust.
Conclusion
Discord rewards dialogue over distribution. That makes it a natural fit for engagement and digital PR teams focused on relationships rather than reach.
Brands that use Discord well create space for trust, transparency, and real participation. Those signals translate into earned media, stronger creator relationships, and long-term community value that social platforms can’t replicate.
Start small. Launch a focused server with clear purpose, maybe a press channel and a creator lounge. Host one monthly AMA or live event. See what surfaces organically before scaling up. The platform rewards consistent, genuine engagement more than polished campaigns.
Discord won’t replace your email list or social media presence. But for building the kind of relationships that lead to coverage, partnerships, and authentic advocacy, it’s one of the most effective channels available right now.
Discord as an Engagement and Digital PR Platform
AI Max is Google’s latest foray into semi-keywordless targeting.
While you need keywords for the system to have a starting place, Google uses signals beyond keywords in deciding how to show ads to searchers.
In accounts with a strong history of broad match success, AI Max can be highly effective at finding new conversions.
If accounts are not well-optimized or have not been successful with broad match, AI Max can be a huge money pit.
To clear up a rumor before we get into the data: you do not have to use AI Max to have ads appear in AI Overviews.
Broad match keywords can show ads in AI Overviews regardless of your AI Max usage.
We’re looking at AI Max as a conversion expansion option, not just an option to show in AI Overviews.
This article examines the review steps you should take before you decide to test AI Max.
What to check before enabling AI Max
Accurate conversion tracking
Your conversion tracking must be accurate, deduplicated, and focused on business outcomes. AI Max optimizes toward what you have defined as success.
If you aren’t tracking all your conversions, or if your conversions are inflated, AI Max will be working from inaccurate data and making poor decisions.
Automated bidding with a conversion-focused strategy
Broad match only works well when you have a bid strategy focused on conversions, such as Target CPA, Target ROAS, Maximize conversions, or Maximize conversion value.
Our experiments with AI Max have shown that it is much more predictable with one of the target options (Target CPA or Target ROAS) than with the max bid options (Maximize conversion value or Maximize conversions).
Since the Max conversion options are meant to get you the most possible, regardless of the CPA or ROAS, they will often continue to spend your budget when the next set of conversions could have exceptionally high CPAs or very low ROAS.
If you use AI Max with one of the max bid options, pay close attention to your budget and the AI Max data.
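To see why the uncapped options can overspend, here is a toy simulation with invented numbers: each successive tier of spend buys conversions at a worse marginal CPA, and only a target-based strategy stops before the expensive tiers. The tier values are illustrative assumptions, not Google Ads data:

```python
# Hypothetical auction tiers: each tier of extra spend buys conversions
# at a progressively worse marginal CPA (illustrative numbers only).
tiers = [(1000, 25), (1000, 40), (1000, 90), (1000, 250)]  # (spend, CPA)

def results(tiers, target_cpa=None):
    """Return (total spend, conversions, blended CPA).
    With target_cpa set, skip tiers whose marginal CPA exceeds the target
    (roughly how Target CPA caps expansion); without it, spend everything
    (roughly how Maximize conversions behaves)."""
    spend = conv = 0.0
    for tier_spend, cpa in tiers:
        if target_cpa is not None and cpa > target_cpa:
            break
        spend += tier_spend
        conv += tier_spend / cpa
    return spend, round(conv, 1), round(spend / conv, 2)

print(results(tiers))                 # spends the full budget
print(results(tiers, target_cpa=50))  # stops before the expensive tiers
```

In this toy model, the uncapped run burns the last $1,000 on conversions costing $250 each, dragging the blended CPA well above what the target-capped run achieves, which mirrors the budget-watching advice above.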
Conversion volume
Technically, you can enable AI Max without any conversions for a campaign.
However, with under 30 conversions per month, AI Max has been highly erratic.
At over 100 conversions per month, it has done well more often than not, assuming you have had success with broad match in the past.
In general, you will want to test AI Max in campaigns that have at least 30 conversions per month.
If you are going to test AI Max, starting with non-brand campaigns that have a high conversion volume will usually give you a better introduction to AI Max’s possibilities for your account.
No impression share lost due to budget
If you’re already losing impressions due to your budget, your handpicked keywords will receive even less budget if you enable AI Max.
The goal is to spend as much as you can on your top keywords, then let AI Max experiment with the budget you can’t spend there.
If you are already losing impressions due to your budget, then enabling AI Max usually results in poorer performance.
Have proven broad match success
AI Max will treat all of your keywords as broad match, and then expand even further than your broad match keywords.
If you haven’t successfully used broad match, then enabling AI Max will be a waste of money.
You should first ensure that broad match can work for you, which might require reorganizing ad groups, testing new ads, and optimizing your landing pages.
Only after you have consistently seen good results with broad match should you try AI Max.
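The pre-test checks above can be summarized in a simple readiness sketch. The account fields and thresholds mirror this article's guidance, but the function itself and the field names are hypothetical, meant only to show the gating logic:

```python
def ai_max_ready(acct):
    """Evaluate the pre-test checks described above; return the failed ones.
    `acct` is a hypothetical dict of account facts."""
    checks = {
        "accurate conversion tracking": acct["tracking_accurate"],
        "conversion-focused bid strategy": acct["bid_strategy"] in ("tCPA", "tROAS"),
        "30+ conversions/month": acct["conversions_per_month"] >= 30,
        "no impression share lost to budget": not acct["budget_limited"],
        "proven broad match success": acct["broad_match_success"],
    }
    return [name for name, ok in checks.items() if not ok]

account = {
    "tracking_accurate": True,
    "bid_strategy": "tCPA",
    "conversions_per_month": 45,
    "budget_limited": True,     # still losing impressions to budget
    "broad_match_success": True,
}
print(ai_max_ready(account))  # -> ['no impression share lost to budget']
```

An empty list means every check passed; anything returned is a reason to fix the account before testing AI Max.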
URL expansion
When you enable AI Max, you can expand URLs to other pages on your website.
This means that Google can pick any page of your website to use as a landing page when AI Max triggers an ad.
Google allows you to exclude URLs. Most sites should exclude:
Help files and support pages.
Pages not built for conversions.
Pages that do not have conversion tracking enabled.
FAQs.
Blogs.
Old landing page tests that are still live.
Old website designs that are still live.
A few people have found success using AI Max with blogs and support pages. However, these seem to be exceptions rather than the rule.
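For sites with clean URL structures, assembling the exclusion list can start from the sitemap. The path patterns below are illustrative assumptions based on the categories listed above, not a definitive list, and the sample URLs are placeholders:

```python
# Illustrative path patterns only -- adjust to your own site structure.
EXCLUDE_PATTERNS = ("/blog/", "/support/", "/faq", "/help/", "/lp-test-")

def split_urls(urls):
    """Partition sitemap URLs into (keep, exclude) for AI Max URL expansion."""
    keep, exclude = [], []
    for url in urls:
        (exclude if any(p in url for p in EXCLUDE_PATTERNS) else keep).append(url)
    return keep, exclude

sitemap = [
    "https://example.com/pricing",
    "https://example.com/blog/ai-max-tips",
    "https://example.com/support/reset-password",
    "https://example.com/services/audit",
]
keep, exclude = split_urls(sitemap)
print(exclude)  # candidates for the URL exclusion list
```

The `exclude` list then becomes the starting point for the exclusions you enter in Google Ads, reviewed by hand before submitting.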
AI Max has struggled when there are many geographic landing pages.
We’ve seen accounts that target different geographies by campaign, and each campaign has its own set of landing pages.
AI Max has routinely mismatched the campaign’s geographic target with landing pages intended for other geographies.
For example, a California campaign might send all of its traffic to landing pages dedicated to Texas.
If you want to use AI Max URL expansion, and you have landing pages dedicated to various geographies, you will need to exclude all the landing pages that are irrelevant to the geography of your campaign.
For companies that create dedicated landing pages for each campaign or ad group, I have yet to see an example of AI Max finding better landing pages.
In every example, AI Max’s URL expansion has needed to be turned off. Eventually, this option might work for advertisers, but I have yet to see that happen.
You can review the URLs that Google is using and exclude them. If you turn on URL expansion, you will want to regularly review these URLs.
Automatically created assets
My great hope for AI Max is the automatically created assets.
I wish I could enable this only for extensions. AI Max can help you scale messaging tremendously.
It can go through all of your ad groups and automatically create sitelinks and callouts at the ad group level.
This level of customization is one that many advertisers never have time to fully explore.
We had a client who enabled this feature, and suddenly, all their sitelinks linked to pages that were irrelevant to the keywords.
We’ve seen other clients use this feature, and their callouts improved dramatically.
Google still has a ways to go in how they auto-create assets, but this is a feature I have high hopes for.
Unfortunately, you can’t enable this feature for only ad assets (extensions). If you enable automatically created assets, Google will create additional RSA assets for you.
These assets can cause customer confusion by:
Making promises your brand doesn’t meet.
Using messaging that isn’t compliant with the law for regulated industries or doesn’t follow your brand guidelines.
You can write guidelines for how you want your ads to appear and rules on what shouldn’t be used.
If you’re going to have Google automatically create assets, you’ll want to add guidance on how the ads should be created.
Note that term exclusions and text guidelines (Google’s official names for these features) don’t appear to be enabled in all accounts right now and may still be rolling out to advertisers.
Overall, Google’s auto-generated RSA assets have a poor track record, and if you enable them, you will want to regularly review what Google is creating on your behalf.
How to test AI Max
Since Google has a history of matching broad match keywords to other brands and generic keywords, AI Max has been very inconsistent with brand keywords.
I’d suggest starting with your top non-brand keywords to test AI Max.
For most brands, there are more conversions to be had in non-brand expansion than in finding more people who are already searching for your brand.
AI Max can be enabled at the campaign or ad group level.
One of the best ways to run a limited test with AI Max is to enable it only in a few ad groups that have a lot of conversion data and a successful history with broad match.
In the interface, enabling AI Max for only a few ad groups is painfully slow.
You have to enable AI Max at the campaign level, then go into every ad group and turn it off where you don’t want it enabled.
The Google Ads Editor lets you turn AI Max on or off at the ad group level.
If you want to test AI Max in only a few ad groups, then use the editor for your initial setup.
Is your account ready for Google AI Max? A pre-test checklist
Does Google’s AI Mode mark a real shift in how search works? There’s a strong case that it does. And all businesses with an online presence need to pay attention, not just SEO folks.
Given how big the change is, you likely have a lot of questions.
What does AI Mode mean for your site traffic? How do you get featured? Do you need to change your content strategy? What happens to organic visibility as AI-generated answers become more common?
If you’re feeling uncertain, don’t worry. This guide breaks down what Google AI Mode actually is, how it works, and what it means for your site.
Key Takeaways
Google AI Mode is a search experience that builds on AI Overviews, offering deeper answers, reasoning, and more personalized responses.
AI Mode is currently available in English, with rollout expanding beyond early U.S. testing.
Users can access AI Mode directly from the Google homepage, where it functions through a conversational, ChatGPT-style interface.
Appearing in AI Mode is largely driven by strong SEO fundamentals, but brand mentions, structured data, and off-site signals play a growing role.
While AI Mode changes how results are presented, early data suggests users still click through to source content, especially for complex or high-consideration topics.
What Is Google’s AI Mode?
AI Mode is a search feature from Google designed to give direct, well-reasoned answers to complex queries. It builds on AI Overviews, using a similar process that combines AI-generated responses with content from traditional search results and the Knowledge Graph (Google’s database of factual information).
It runs on a modified version of Gemini, Google’s core AI model, and analyzes information from multiple sources. It then synthesizes this information into a clear, concise answer that prioritizes reasoning and context, rather than just summarizing pages.
The interface feels a lot like an AI Overview—same layout and a similar answer—but with a box to ask follow-up questions at the bottom.
Here’s what Robby Stein, Google’s VP of Search, said about AI Mode in a post on The Keyword:
“Using a custom version of Gemini 2.0, AI Mode is particularly helpful for questions that need further exploration, comparisons and reasoning. You can ask nuanced questions that might have previously taken multiple searches — like exploring a new concept or comparing detailed options — and get a helpful AI-powered response with links to learn more.”
AI Mode integrates several elements from traditional search engine results pages (SERPs), such as Shopping listings and Maps.
Finally, Google has said that it will continue to add new features. These include agentic workflows in conjunction with Project Mariner, increasing levels of personalization, and even custom charts and graphs.
AI Mode Is Becoming an Interactive Application Layer
Google is actively turning AI Mode into a more interactive part of search, not just a place to read AI-generated answers.
Recent updates already point to deeper personalization, richer inline links, and more interactive result formats, including charts, comparisons, and visual outputs. With Gemini 3 now integrated directly into AI Mode, those interfaces are becoming more dynamic and tool-driven instead of purely informational.
“We spend a ton of time focused on this question of when and how to show links, and how we can really make the web shine. It will continue to be an ongoing effort as AI Mode and the Search Results Page evolves,” says Stein.
This shift matters. Rather than sending users to external calculators, templates, or apps, Google is starting to surface that functionality directly inside search. For certain queries, AI Mode can simulate outcomes, compare options, or guide users through multi-step decisions without requiring a click to another site.
Over time, this opens the door to agent-driven experiences. In those scenarios, AI Mode does not just explain an answer. It helps users complete tasks, from planning and analysis to evaluation and execution, inside the search interface itself.
As Gemini becomes more tightly integrated across Search, AI Mode is moving closer to a default experience. For brands, this raises the bar. Content that wins in AI-first search needs defensible value, interactive depth, or proprietary insight, not just basic information.
How to Access Google’s AI Mode and Availability
Google AI Mode is now available beyond early U.S.-only testing, with a broader global rollout underway. Users in supported regions can enter AI Mode directly from the Google homepage, where it appears alongside the main search experience rather than as an experimental feature.
When users tap “Show more” on certain AI-generated results, the AI Overview expands. From the expanded Overview, users can click “Dive deeper in AI Mode” to enter AI Mode. This signals a shift toward AI Mode acting as a default exploration layer, not a separate destination.
Once inside AI Mode, users can interact with responses conversationally, asking follow-up questions that carry context forward. Links to supporting pages remain available, and users can access their AI Mode history to continue conversations they started earlier.
Google has moved away from positioning AI Mode as a Labs experiment, and there is no longer a separate opt-in process. Access is tied to Google’s standard search interface, and availability is expanding as Google refines performance, localization, and personalization features.
Timeline of Google AI Mode
While most people think of AI as starting with ChatGPT, Google’s been building AI tools for decades.
AI Mode is part of Google’s broader family of AI tools, which include Veo, a video generator; Imagen, a text-to-image model; Project Mariner, an agent that can automate tasks; and others.
Here’s a short timeline that puts AI Mode in context:
May 2017: CEO Sundar Pichai announces the launch of a dedicated AI division called Google AI at I/O, the company’s annual developer conference.
March 2023: Google opens early access to Bard, its first generative AI chatbot. Global availability follows later that year.
December 2023: Google announces Gemini, a multimodal LLM that can work with different content inputs (images, voice, and text).
February 2024: Bard is coupled with Duet AI, Google’s Workplace AI assistant, and rebranded to Gemini.
May 2024: AI Overviews, initially called the Search Generative Experience, are first released. The feature reaches broad availability later in the year, combining generative AI with Google’s traditional information retrieval systems.
May 2025: Google releases AI Mode, a ChatGPT-style interface available on its homepage, building on the core functionality of AI Overviews. Initially it is available only in the U.S., and early access is limited, but usage expands rapidly.
August 2025: Google begins a more comprehensive global rollout of AI Mode, signaling its transition from a test experience to a core part of Search. Google also announces an increase in the number of links shown in AI Mode, and searchers begin to see inline link carousels and contextual introductions explaining why a link might be useful to visit.
November 2025: Google integrates Gemini 3.0 and Nano Banana into AI Mode.
Using AI Mode: AI Overviews vs. AI Mode
To illustrate how AI Mode differs from AI Overviews, consider a simple comparison scenario.
First, a general query is entered into standard Google Search: “What will be the most popular spring break destinations this year.” This triggers an AI Overview.
The AI Overview analyzes the query, considers general context such as location, and pulls information from multiple sources, stitching it together into a quick summary.
Next, the query becomes a bit more specific: “what will be the most popular spring break destinations this year with a 6-month-old baby.”
AI Overview adjusts the response based on the added constraint, returning suggestions that better match the scenario while still relying on summarization.
The same queries are then entered into Google’s AI Mode using the dedicated prompt box.
The initial response looks similar, but with a subtle shift: instead of simply summarizing existing information, AI Mode applies additional reasoning to evaluate suitability and trade-offs.
A follow-up question is then added without restating the full context.
AI Mode retains the earlier details, understands the added nuance, and returns a more detailed, logically structured set of recommendations. This ability to carry context forward highlights one of the key differences between AI Mode and AI Overviews.
How Is AI Mode Different from AI Overviews and Gemini?
Simply put, AI Mode is an expanded version of AI Overviews. It incorporates and builds on AI Overviews’ features, and both run on Gemini, Google’s core model.
Here’s how AI Mode compares to AI Overviews:
More advanced reasoning: While AI Overview summarizes information from across sources, AI Mode interprets that information, connects related concepts, and surfaces conclusions based on reasoning rather than aggregation alone.
Multimodal understanding: In the Google app (on Android and iOS), AI Mode can also answer questions based on photos and images.
Better handling of complex questions: AI Overview works well for simple, fact-based queries, but AI Mode is designed for nuanced, multi-layered, or exploratory questions that benefit from context and comparison.
Follow-ups: You can ask follow-up questions, and the AI will respond based on the ongoing context in a conversational style.
AI Mode is also evolving in how it presents sources. Searchers increasingly see inline links, carousels, and contextual explanations that clarify why a particular source may be useful, rather than a static list of citations.
Research conducted by NP Digital shows that these features match emerging user demand. We found, for example, that 72% of people are inputting very precise, “exactly what I want” queries. And 76% are opting for more human-like and conversational interactions.
What Is the Technology Behind AI Mode?
LLMs are vastly complex entities, and Gemini, the model that powers AI Mode, is no different. However, three main technologies separate AI Mode from standard gen AI bots and AI overviews.
Here are the three core processes that power AI Mode:
AI Mode uses a query fan-out technique. This involves breaking a query into subtopics and researching them in parallel. It then combines dozens of information points into a single answer.
Structured logic is a key part of how AI Mode works. It takes a query, creates a reasoning chain (e.g., “the user is looking for a water bottle for hiking, therefore features should include durability and size, therefore a minimum capacity of 3 liters is needed,” and so on), and then validates answers against these steps to determine suitable outcomes.
Personal context plays a significant role. AI Mode records conversations over time and builds a picture of individual user preferences, adjusting responses based on past inputs. It does this by creating a sort of digital ID—called a vector embedding—that is included in the answer generation process. This is a form of background memory that works in much the same way as ChatGPT’s memory feature.
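To make the first of these ideas concrete, here is a purely conceptual sketch of query fan-out: splitting a query into subtopics, researching each in parallel, and merging the findings. The `subtopics` list and the `retrieve()` stub are stand-ins for illustration only; Google has not published the actual pipeline.

```python
from concurrent.futures import ThreadPoolExecutor

def retrieve(subquery: str) -> str:
    # Placeholder for a real retrieval call (search index, Knowledge Graph, etc.).
    return f"findings for: {subquery}"

def fan_out(query: str, subtopics: list[str]) -> str:
    # Break the query into subqueries, one per subtopic.
    subqueries = [f"{query} {topic}" for topic in subtopics]
    # Research the subqueries in parallel.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(retrieve, subqueries))
    # A real system would synthesize these with an LLM; here we just join them.
    return "\n".join(results)

print(fan_out("best hiking water bottle", ["durability", "capacity", "price"]))
```

The point is the shape of the process, not the implementation: dozens of narrow lookups feed one synthesized answer.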
How to Optimize Your Site for AI Mode
So-called GEO—generative engine optimization—is big business at the moment. However, there’s still a lot of uncertainty about what directly influences visibility in AI Mode, and many claims go beyond what Google has actually confirmed.
Rather than chasing shortcuts, the clearer pattern is that AI Mode rewards the same fundamentals Google has emphasized for years — with a few emerging signals becoming more important as AI-generated results mature.
Let’s look at what we actually know about “ranking” in AI Mode.
1. Traditional SEO principles still apply
Google has been pretty unequivocal about this. Traditional SEO optimization is still the most important activity for appearing in AI Overviews and AI Mode.
As long as you follow SEO basics—create useful content, generate natural backlinks, and optimize technical health—you’re ahead of 90% of the competition.
Research also backs this up. Ziptie, for example, found that sites with a number one ranking in traditional search results are 25% more likely to be featured in AI Overviews.
2. Indexed web pages are eligible to appear in AI Mode
On the technical front, there’s good news. As long as a page is indexed, it’s eligible to appear in AI Mode. There are no other requirements. You can check that your pages are indexed using the URL Inspection tool in Search Console.
If you’re having issues, check that you’re adhering to Google Search’s technical requirements. Make sure Googlebot isn’t blocked, pages return 200 success codes, and content doesn’t violate spam policies.
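As a quick sanity check on the first of those requirements, you can test your robots.txt rules offline with Python’s standard library. The rules and URLs below are hypothetical examples:

```python
import urllib.robotparser

# Parse a hypothetical robots.txt and check what Googlebot may fetch.
rules = [
    "User-agent: *",
    "Disallow: /private/",
]
rp = urllib.robotparser.RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("Googlebot", "https://example.com/blog/post"))  # True: crawlable
print(rp.can_fetch("Googlebot", "https://example.com/private/x"))  # False: blocked
```

Pointing `RobotFileParser` at your live file (via `set_url()` and `read()`) works the same way, and the URL Inspection tool remains the authoritative check for indexing itself.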
3. Forum and discussion board citations matter
Recent analysis across multiple large language models shows that discussion forums and Q&A platforms are frequently referenced when generating explanatory or opinion-based answers, particularly for queries that benefit from lived experience or peer discussion.
Reddit, in particular, continues to surface prominently across AI-generated responses, in part due to its scale, freshness, and breadth of first-hand commentary. However, the weighting of any single forum is dynamic and continues to evolve as Google refines how AI Mode sources and cites content.
Given Reddit and Google’s partnership, it’s likely that well-moderated, high-signal community content remains an important input for Gemini-powered experiences.
If you haven’t already, build up a presence on Reddit and other similar forums and discussion boards. This can help reinforce topical authority and increase the likelihood of being referenced in AI-generated answers.
4. Schema markup (structured data) gives you a boost
Schema markup, also called structured data, is a type of code that you add to your content. It gives search engines and AI systems additional information to help them understand what the content is about. One simple example of schema markup is identifying a recipe as "@type": "Recipe".
Research by Aiso has shown that LLMs extract more accurate data from pages with schema markup, with a 30% improvement in quality.
Using schema markup helps reduce ambiguity for AI-generated answers and increases the likelihood that your content is interpreted correctly. Fortunately, adding schema to your web page is relatively straightforward.
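For instance, here is a minimal sketch of what that Recipe markup might look like as JSON-LD, the format Google recommends embedding in a page’s HTML. The recipe details are invented for illustration:

```python
import json

# A minimal, hypothetical Recipe object following schema.org vocabulary.
recipe = {
    "@context": "https://schema.org",
    "@type": "Recipe",
    "name": "Classic Banana Bread",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "prepTime": "PT15M",
    "recipeYield": "1 loaf",
}

# JSON-LD is embedded in a <script> tag, usually in the page <head>.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(recipe, indent=2)
    + "\n</script>"
)
print(snippet)
```

In practice you would paste the generated `<script>` block into your page template and verify it with Google’s Rich Results Test.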
5. Digital PR is important
LLMs access information in two ways. They are initially trained on a large amount of information—called training data—and they can also access new online content, such as news articles.
Digital PR is all about acquiring mentions and backlinks from reputable third-party sources, especially media websites.
Brand mentions boost visibility in LLM training materials and strengthen topical associations (a measure of the number of times you’re cited in relation to a specific subject), meaning you’re more likely to appear in responses.
Digital PR involves creating share-worthy content and contacting journalists and site admins to ask them to feature you. Our research shows that original research and tools are especially good at encouraging people to talk about your brand.
6. Be ready to test and track AI visibility
As AI Mode becomes more integrated into the search experience, visibility is no longer limited to rankings alone. Brands need ways to measure whether — and how often — their content appears in AI-generated answers.
New AI visibility platforms, such as Writesonic and Profound, are emerging to help track citations, brand mentions, and source inclusion across large language models. These tools provide early signals about which content formats, topics, and entities are being surfaced by AI systems.
Monitoring this data allows teams to validate whether SEO, digital PR, and structured data efforts are translating into real AI exposure. It also makes it easier to spot gaps, test changes, and adapt as Google continues to evolve AI Mode.
Treat AI visibility tracking as a complement to traditional performance metrics, not a replacement. Both matter.
What Does AI Mode Mean for the Future of Search?
There are a lot of unknowns about how increased use of AI tools will affect the way people look for information. That said, emerging usage patterns are already pointing to meaningful shifts in how AI SEO is evolving.
With that in mind, here are five implications for the future of search as AI Mode becomes more prominent:
Searchers will still click through to websites: Early performance data from AI-generated results shows that clicks are reduced for some informational queries, but not eliminated. Users continue to seek out original content, particularly for complex decisions, comparisons, and high-consideration topics.
Long-play brand building will become more common: LLMs use third-party brand mentions to measure the authority of publishers. Popular brands are cited more by gen AI search tools and, as such, long-term brand building, with an outlook of five years or more, will become much more common.
Marketing strategies will become more omnichannel: As AI Mode absorbs more discovery queries, brands will need visibility across multiple platforms, not just Google’s traditional results. This reinforces a broader “search everywhere” approach, where discovery happens across AI tools, social platforms, and communities.
People will favor AI for more specific searches: Analysis of large query sets shows that AI-generated results appear more frequently for longer, more specific searches. Short, navigational queries may still rely on traditional results, while nuanced questions increasingly trigger AI Mode.
Trust in AI will continue to grow: Hallucinations are a big problem with AI Overviews and AI Mode also makes mistakes, according to user reports. With that said, user adoption and satisfaction with AI-powered search tools are trending upward. As Google refines AI Mode, usage is likely to grow alongside improvements in reliability and transparency.
FAQs
What is Google AI Mode?
Google AI Mode is a conversational search experience powered by Gemini, Google’s core AI model. It provides more detailed, context-aware answers to search queries, similar in format to tools like ChatGPT, but integrated directly into Google Search.
Instead of returning a list of links first, AI Mode synthesizes information from multiple sources and presents a reasoned response, with links available for deeper exploration. Users can ask follow-up questions, and the system carries context forward, making the interaction feel more like an ongoing conversation.
AI Mode builds on AI Overviews but goes further by handling complex, multi-step, or exploratory queries more effectively.
How do you use Google AI Mode?
In supported regions, users can access AI Mode directly from the Google homepage. On some AI-generated results, selecting “show more” will also open AI Mode automatically, allowing users to continue their search without returning to traditional results.
Once inside AI Mode, questions can be entered conversationally, and follow-ups don’t require repeating the original context. Users can still click through to source pages or switch back to standard search results at any point.
AI Mode is no longer accessed through Google Labs, and there is no separate opt-in process.
How do you optimize your website for Google AI Mode?
Start with strong SEO fundamentals, which Google has confirmed remain the primary eligibility signals. Beyond that, sites that appear most often in AI-generated answers tend to share a few traits:
Create useful, high-quality content that fully addresses search intent.
Make sure pages are indexed and technically accessible.
Use schema markup to clarify meaning and structure.
Earn third-party brand mentions from trusted publishers and communities.
Build topical authority through consistent, focused publishing.
Visibility in AI Mode is not guaranteed, but sites that are trusted, well-structured, and frequently cited are more likely to be referenced in AI-generated responses.
Search Is Changing but the Fundamentals Still Apply
The way people search is changing, and Google AI Mode is accelerating that shift.
People are finding information across a host of different platforms, not just Google. AI-generated answers are reducing clicks. And traditional content publishers are under pressure as gen AI eats up demand.
At the same time, AI Mode doesn’t discard the fundamentals that have always mattered. Google is still prioritizing relevance, authority, and usefulness — it’s just surfacing them in new ways. Sites that understand search intent, build credibility beyond their own domains, and structure content clearly are better positioned to stay visible as AI Mode expands.
From the very start, Google had one aim: to solve users’ needs. That’s also what AI tools seek to do, and their models will continuously be designed to that end.
Understanding your customers—and providing what they want through high-quality, useful content—is the best way of futureproofing your business and ensuring long-term visibility in LLMs.
Published 2026-01-21: What Is Google AI Mode and How Does It Work?
The debate around llms.txt has become one of the most polarized topics in web optimization.
Some treat llms.txt as foundational infrastructure, while many SEO veterans dismiss it as speculative theater. Platform tools flag missing llms.txt files as site issues, yet server logs show that AI crawlers rarely request them.
Google even adopted it. Sort of. In December, the company added llms.txt files across many developer and documentation sites.
The signal seemed clear: if the company behind the sitemap standard is implementing llms.txt, it likely matters.
Except Google pulled it from its Search developer docs within 24 hours.
Google’s John Mueller said the change came from a sitewide CMS update that many content teams didn’t realize was happening. When asked why the files still exist on other Google properties, Mueller said they aren’t “findable by default because they’re not at the top-level” and “it’s safe to assume they’re there for other purposes,” not discovery.
The llms.txt research
We wanted data, not debates.
So we tracked llms.txt adoption across 10 sites in finance, B2B SaaS, ecommerce, insurance, and pet care — 90 days before implementation and 90 days after.
We measured AI crawl frequency, traffic from ChatGPT, Claude, Perplexity, and Gemini, and what else these sites changed during the same window.
The results:
Two of the 10 sites saw AI traffic increases of 12.5% and 25%, but llms.txt wasn’t the cause.
Eight sites saw no measurable change.
One site declined by 19.7%.
The 2 ‘success’ stories weren’t about the file
The Neobank: 25% growth
This digital banking platform implemented llms.txt early in Q3 2025. Ninety days later, AI traffic was up 25%.
Here’s what else happened in that window:
A PR campaign around its banking license, with coverage in major national publications.
Product pages restructured with extractable comparison tables for interest rates, fees, and minimums.
Twelve new FAQ pages optimized for extraction.
A rebuilt resource center with new banking information and concepts.
Technical SEO issues, like header structures, fixed.
When a company gets Bloomberg coverage the same month it launches optimized content and fixes crawl errors, you can’t isolate the llms.txt as the growth driver.
The B2B SaaS platform: 12.5% growth
This workflow automation company saw traffic jump 12.5% two weeks after implementing llms.txt.
Perfect timing. Case closed. Except…
Three weeks earlier, the company published 27 downloadable AI templates covering project management frameworks, financial models, and workflow planners. Functional tools, not content marketing, drove the engagement behind the spike.
Google organic traffic to the templates rose 18% during the same period and continued climbing throughout the 90 days we measured.
Search engines and AI models surfaced the templates because they solved real problems and launched an entirely new site section — not because they were listed in an llms.txt file.
The 8 sites where nothing happened after uploading llms.txt
Eight sites saw no measurable change. One declined by 19.7%.
The decline came from an insurance site that implemented llms.txt in early September. The drop likely had nothing to do with the file.
The same pattern showed up across all traffic channels: llms.txt neither prevented the decline nor created any advantage.
The other seven sites — ecommerce (pet supplies, home goods, fashion), B2B SaaS (HR tech, marketing analytics), finance, and pet care — all documented their best existing content in llms.txt. That included product pages, case studies, API docs, and buying guides.
Ninety days later, nothing changed. Traffic stayed flat. Crawl frequency was identical. The content was already indexed and discoverable, and the file didn’t alter that.
Sites that launched new, functional content saw gains. Sites that documented existing content saw no gains.
Why the disconnect?
No major LLM provider has officially committed to parsing llms.txt. Not OpenAI. Not Anthropic. Not Google. Not Meta.
“None of the AI services have said they’re using llms.txt, and you can tell when you look at your server logs that they don’t even check for it.”
That’s the reality. The file exists. The advocacy exists. Evidence that AI platforms actually use it doesn’t, at least not yet.
The token efficiency argument (and its limits)
The strongest case for llms.txt is about efficiency. Markdown saves time and tokens when AI agents parse documentation. Clean structure instead of complex HTML with navigation, ads, and JavaScript.
This matters — but almost exclusively for developer tools and API documentation. If your audience uses AI coding assistants like Cursor or GitHub Copilot to interact with your product, token efficiency improves integration.
For ecommerce selling pet supplies, insurance explaining coverage, or B2B SaaS targeting nontechnical buyers, token efficiency doesn’t translate into traffic.
llms.txt is a sitemap, not a strategy
The most accurate comparison is a sitemap.
Sitemaps are valuable infrastructure. They help search engines discover and index content more efficiently. But no one credits traffic growth to adding a sitemap. The sitemap documents what exists; the content drives discovery.
The llms.txt file works the same way. It may help AI models parse your site more efficiently if they choose to use it, but it doesn’t make your content more useful, authoritative, or likely to answer user queries.
In our analysis, the sites that grew did so because they:
Created functional assets like downloadable templates, comparison tables, and structured data.
Earned external visibility through press and backlinks.
Fixed technical barriers such as crawl and indexing issues.
Published content optimized for extraction, including FAQs and structured comparisons.
The llms.txt file documented those efforts. It didn’t drive them.
What actually works
The two successful sites show what matters:
Create functional, extractable assets. The SaaS platform built 27 downloadable templates that users could deploy immediately. AI models surfaced these because they solved real problems, not because they were listed in a markdown file.
Structure content for extraction. The neobank rebuilt product pages with comparison tables with interest rates, fees, and account minimums. This is data AI models can pull directly into answers without interpretation.
Fix technical barriers first. The neobank fixed crawl errors that had blocked content for months. If AI models can’t access your content, no amount of documentation helps.
Earn external validation. Coverage from Bloomberg and other major publications drove referral traffic, branded searches, and likely influenced how AI models assess authority.
Optimize for user intent. Both sites answered specific queries: “best project management templates” and “how do [brand] interest rates compare?” Models surface content that maps to what users are asking, not content that’s merely well documented.
None of this requires llms.txt. All of it drives results.
Should you implement an llms.txt file?
If you’re a developer tool where AI coding assistants are a primary distribution channel, then yes — token efficiency matters. Your audience is already using agents to interact with documentation.
For everyone else, treat llms.txt like a sitemap: useful infrastructure, not a growth lever.
It’s good practice to have. It won’t hurt. But the hour spent implementing llms.txt is often better spent restructuring product pages with extractable data, publishing functional assets, fixing technical SEO issues, creating FAQ content, or earning press coverage.
Those tactics have shown real ROI in AI discovery. The llms.txt file hasn’t, at least not yet.
The lesson isn’t that llms.txt is bad. It’s that we’re reaching for control in a system where the rules aren’t written yet. The file offers comfort: something concrete, actionable, and familiar, shaped like the web standards we already know.
But looking like infrastructure isn’t the same as functioning like infrastructure.
Focus on what actually works:
Create useful content.
Structure it for extraction.
Make it technically accessible.
Earn external validation.
Platforms and formats will change. The fundamentals won’t.
Published 2026-01-20: Does llms.txt matter? We tracked 10 sites to find out
“LLMs have trained on – read and parsed – normal web pages since the beginning,” he said in a recent discussion on Bluesky. “Why would they want to see a page that no user sees?”
His comparison was blunt: LLM-only pages are like the old keywords meta tag. Available for anyone to use, but ignored by the systems they’re meant to influence.
So is this trend actually working, or is it just the latest SEO myth?
The rise of ‘LLM-only’ web pages
The trend is real. Sites across tech, SaaS, and documentation are implementing LLM-specific content formats.
The question isn’t whether adoption is happening, it’s whether these implementations are driving the AI citations teams hoped for.
Here’s what content and SEO teams are actually building.
llms.txt files
A markdown file at your domain root listing key pages for AI systems.
The format was proposed in 2024 by Jeremy Howard of Answer.AI to help AI systems discover and prioritize important content.
The plain-text file lives at yourdomain.com/llms.txt with an H1 project name, a brief description, and organized sections linking to important pages.
Stripe’s implementation at docs.stripe.com/llms.txt shows the approach in action:
```markdown
# Stripe Documentation
> Build payment integrations with Stripe APIs

## Testing
- [Test mode](https://docs.stripe.com/testing): Simulate payments

## API Reference
- [API docs](https://docs.stripe.com/api): Complete API reference
```
The payment processor’s bet is simple: if ChatGPT can parse their documentation cleanly, developers will get better answers when they ask, “How do I implement Stripe?”
They’re not alone. Current adopters include Cloudflare, Anthropic, Zapier, Perplexity, Coinbase, Supabase, and Vercel.
Markdown (.md) page copies
Sites are creating stripped-down markdown versions of their regular pages.
The implementation is straightforward: just add .md to any URL. Stripe’s docs.stripe.com/testing becomes docs.stripe.com/testing.md.
Everything gets stripped out except the actual content. No styling. No menus. No footers. No interactive elements. Just pure text and basic formatting.
The thinking: if AI systems don’t have to wade through CSS and JavaScript to find the information they need, they’re more likely to cite your page accurately.
/ai and similar paths
Some sites are building entirely separate versions of their content under /ai/, /llm/, or similar directories.
You might find /ai/about living alongside the regular /about page, or /llm/products as a bot-friendly alternative to the main product catalog.
Sometimes these pages have more detail than the originals. Sometimes they’re just reformatted.
The idea: give AI systems their own dedicated content that’s built for machine consumption, not human eyes.
If a person accidentally lands on one of these pages, they’ll find something that looks like a website from 2005.
JSON metadata feeds
Instead of creating separate pages, some brands, Dell among them, built structured data feeds that live alongside their regular ecommerce sites.
The files contain clean JSON – specs, pricing, and availability.
Everything an AI needs to answer “what’s the best Dell laptop under $1000” without having to parse through product descriptions written for humans.
You’ll typically find these files as /llm-metadata.json or /ai-feed.json in the site’s directory.
```markdown
# Dell Technologies
> Dell Technologies is a leading technology provider, specializing in PCs, servers, and IT solutions for businesses and consumers.

## Product and Catalog Data
- [Product Feed - US Store](https://www.dell.com/data/us/catalog/products.json): Key product attributes and availability.
- [Dell Return Policy](https://www.dell.com/return-policy.md): Standard return and warranty information.

## Support and Documentation
- [Knowledge Base](https://www.dell.com/support/knowledge-base.md): Troubleshooting guides and FAQs.
```
This approach makes the most sense for ecommerce and SaaS companies that already keep their product data in databases.
They’re just exposing what they already have in a format AI systems can easily digest.
Real-world citation data: What actually gets referenced
The theory sounds good. The adoption numbers look impressive.
But do these LLM-optimized pages actually get cited?
The individual analysis
Landwehr, CPO and CMO at Peec AI, ran targeted tests on five websites using these tactics. He crafted prompts specifically designed to surface their LLM-friendly content.
Some queries even contained explicit 20+ word quotes designed to trigger specific sources.
Across nearly 18,000 citations, here’s what he found.
llms.txt: 0.03% of citations
Out of 18,000 citations, only six pointed to llms.txt files.
The six that did work had something in common: they contained genuinely useful information about how to use an API and where to find additional documentation.
The kind of content that actually helps AI systems answer technical questions. The “search-optimized” llms.txt files, the ones stuffed with content and keywords, received zero citations.
Markdown (.md) pages: 0% of citations
Sites using .md copies of their content got cited 3,500+ times. None of those citations pointed to the markdown versions.
The one exception: GitHub, where .md files are the standard URLs.
They’re linked internally, and there’s no HTML alternative. But these are just regular pages that happen to be in markdown format.
/ai pages: 0.5% to 16% of citations
Results varied wildly depending on implementation.
One site saw 0.5% of its citations point to its /ai pages. Another hit 16%.
The difference?
The higher-performing site put significantly more information in its /ai pages than existed anywhere else on the site.
Keep in mind, these prompts were specifically asking for information contained in these files.
Even with prompts designed to surface this content, most queries ignored the /ai versions.
JSON metadata: 5% of citations
One brand saw 85 out of 1,800 citations (roughly 5%) come from its metadata JSON file.
The critical detail here is that the file contained information that didn’t exist anywhere else on the website.
Once again, the query specifically asked for those pieces of information.
The large-scale analysis
SE Ranking took the opposite approach. Instead of testing individual sites, it analyzed 300,000 domains to see if llms.txt adoption correlated with citation frequency at scale.
Only 10.13% of domains, or 1 in 10, had implemented llms.txt.
For context, that’s nowhere near the universal adoption of standards like robots.txt or XML sitemaps.
During the study, an interesting relationship between adoption rates and traffic levels emerged.
Sites with 0-100 monthly visits adopted llms.txt at 9.88%.
Sites with 100,001+ visits? Just 8.27%.
The biggest, most established sites were actually slightly less likely to use the file than mid-tier ones.
But the real test was whether llms.txt impacted citations.
SE Ranking built a machine learning model using XGBoost to predict citation frequency based on various factors, including the presence of llms.txt.
The result: removing llms.txt from the model actually improved its accuracy.
The file wasn’t helping predict citation behavior; it was adding noise.
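The ablation logic behind that finding can be sketched in a few lines. This is a toy version under stated assumptions: synthetic data and a simple nearest-neighbor predictor stand in for SE Ranking's real dataset and XGBoost model. The method is the same: score the model with and without the suspect feature and compare.

```python
import random

random.seed(42)

# Synthetic domains: citation count depends on content uniqueness only;
# the llms.txt flag is independent noise (mirroring the study's finding).
def make_domain():
    uniqueness = random.random()
    has_llms_txt = 1.0 if random.random() < 0.10 else 0.0  # ~10% adoption
    citations = 100 * uniqueness + random.gauss(0, 5)
    return (uniqueness, has_llms_txt, citations)

data = [make_domain() for _ in range(2000)]
train, test = data[:1500], data[1500:]

def mse(feature_idx):
    """Mean squared error of a 1-nearest-neighbor predictor that only
    looks at the given feature columns."""
    total = 0.0
    for row in test:
        nearest = min(train, key=lambda r: sum((r[i] - row[i]) ** 2 for i in feature_idx))
        total += (nearest[2] - row[2]) ** 2
    return total / len(test)

print(f"MSE with llms.txt feature:    {mse([0, 1]):.1f}")
print(f"MSE without llms.txt feature: {mse([0]):.1f}")
```

If dropping a feature leaves the error unchanged or lower, that feature was contributing noise, not signal: the same test that sank llms.txt in the real model.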
The pattern
Both analyses point to the same conclusion: LLM-optimized pages get cited when they contain unique, useful information that doesn’t exist elsewhere on your site.
The format doesn’t matter.
Landwehr’s conclusion was blunt: “You could create a 12345.txt file and it would be cited if it contains useful and unique information.”
A well-structured about page achieves the same result as an /ai/about page. API documentation gets cited whether it’s in llms.txt or buried in your regular docs.
The files themselves get no special treatment from AI systems.
The content inside them might, but only if it’s actually better than what already exists on your regular pages.
SE Ranking’s data backs this up at scale. There’s no correlation between having llms.txt and getting more citations.
The presence of the file made no measurable difference in how AI systems referenced domains.
No major AI company has confirmed using llms.txt files in their crawling or citation processes.
Google’s John Mueller made the sharpest critique in April 2025, comparing llms.txt to the obsolete keywords meta tag:
“[As far as I know], none of the AI services have said they’re using LLMs.TXT (and you can tell when you look at your server logs that they don’t even check for it).”
Google’s Gary Illyes reinforced this at the July 2025 Search Central Deep Dive in Bangkok, explicitly stating Google “doesn’t support LLMs.txt and isn’t planning to.”
Google Search Central’s documentation is equally clear:
“The best practices for SEO remain relevant for AI features in Google Search. There are no additional requirements to appear in AI Overviews or AI Mode, nor other special optimizations necessary.”
OpenAI, Anthropic, and Perplexity all maintain their own llms.txt files for their API documentation to make it easy for developers to load into AI assistants.
But none have announced their crawlers actually read these files from other websites.
The consistent message from every major platform: standard web publishing practices drive visibility in AI search.
No special files, no new markup, and no separate versions needed.
What this means for SEO teams
The evidence points to a single conclusion: stop building content that only machines will see.
As Mueller put it: “Why would they want to see a page that no user sees?”
If AI companies needed special formats to generate better responses, they would tell you. As he noted:
“AI companies aren’t really known for being shy.”
The data proves him right.
Across Landwehr’s nearly 18,000 citations, LLM-optimized formats showed no advantage unless they contained unique information that didn’t exist anywhere else on the site.
SE Ranking’s analysis of 300,000 domains found that llms.txt actually added confusion to their citation prediction model rather than improving it.
Instead of creating shadow versions of your content, focus on what actually works.
Build clean HTML that both humans and AI can parse easily.
Reduce JavaScript dependencies for critical content, which Mueller identified as the real technical barrier:
“Excluding JS, which still seems hard for many of these systems.”
Heavy client-side rendering creates actual problems for AI parsing.
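To see why, compare what a crawler that doesn't execute JavaScript extracts from server-rendered HTML versus a client-rendered app shell. The two pages below are invented examples, and the extractor is a deliberately minimal stand-in for a real crawler's parser:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text and skips <script> bodies - roughly what a
    crawler that does not execute JavaScript would see."""
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True
    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False
    def handle_data(self, data):
        if not self.in_script and data.strip():
            self.chunks.append(data.strip())

# Server-rendered page: content is in the HTML itself.
STATIC_HTML = "<html><body><h1>Return policy</h1><p>30-day returns on laptops.</p></body></html>"
# Client-rendered shell: content only appears after JavaScript runs.
SPA_SHELL = '<html><body><div id="root"></div><script>render("Return policy...")</script></body></html>'

def visible_text(html):
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

print(visible_text(STATIC_HTML))        # prints the policy text
print(repr(visible_text(SPA_SHELL)))    # prints '' - nothing to parse
```

The server-rendered page yields its full content; the shell yields an empty string. Any AI system that skips JavaScript execution sees exactly that difference.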
(Image: John Mueller’s reply to Lily Ray on Bluesky.)
Why LLM-only pages aren’t the answer to AI search – Jan. 20, 2026