Web Design and Development San Diego

Why most SEO failures are organizational, not technical

I’ve spent over 20 years in companies where SEO sat in different corners of the organization – sometimes as a full-time role, other times as a consultant called in to “find what’s wrong.” Across those roles, the same pattern kept showing up.

The technical fix was rarely what unlocked performance. Audits revealed symptoms, but they almost never explained why progress stalled.

No governance

The real constraints showed up earlier, long before anyone read my weekly SEO reports. They lived in reporting lines, decision rights, hiring choices, and in what teams were allowed to change without asking permission. 

When SEO struggled, it was usually because nobody clearly owned the CMS templates, because priorities conflicted across departments, or because changes shipped without anyone considering how they affected discoverability.

I did not have a word for the core problem at the time, but now I do: governance – or, more often, its absence.

Two workplaces in my career had the conditions that allowed SEO to work as intended. Ownership was clear.

Release pathways were predictable. Leaders understood that visibility was something you managed deliberately, not something you reacted to when traffic dipped.

Everywhere else, metadata and schema were not the limiting factor. Organizational behavior was.

Dig deeper: How to build an SEO-forward culture in enterprise organizations

Beware of drift

Once sales pressures dominate each quarter, even technically strong sites undergo small, reasonable changes:

  • Navigation renamed by a new UX hire.
  • Wording adjusted by a new hire on the content team.
  • Templates adjusted for a marketing campaign.
  • Titles “cleaned up” by someone outside the SEO loop.

None of these changes looks dangerous in isolation – provided you know about them before they happen.

Over time, they add up. Performance slides, and nobody can point to a single release or decision where things went wrong.

This is the part of SEO most industry commentary skips. Technical fixes are tangible and teachable. Organizational friction is not. Yet that friction is where SEO outcomes are decided, usually months before any visible decline.

SEO loses power when it lives in the wrong place

I’ve seen this drift hurt rankings, with SEO taking the blame. In one workplace, leadership brought in an agency to “fix” the problem, only for it to confirm what I’d already found: a lack of governance caused the decline.

Where SEO sits on the org chart determines whether you see decisions early or discover them after launch. It dictates whether changes ship in weeks or sit in the backlog for quarters.

I have worked with SEO embedded under marketing, product, IT, and broader omnichannel teams. Each placement created a different set of constraints.

When SEO sits too low, decisions that reshape visibility ship first and get reviewed later — if they are reviewed at all.

  • Engineering adjusted components to support a new security feature. In one workplace, a new firewall meant to stop scraping also blocked our own SEO crawling tools.
  • Product reorganized navigation to “simplify” the user journey. No one asked SEO how it would affect internal PageRank.
  • Marketing “refreshed” content to match a campaign. Each change shifted page purpose, internal linking, and consistency — the exact signals search engines and AI systems use to understand what a site is about.

Dig deeper: SEO stakeholders: Align teams and prove ROI like a pro

Positioning the SEO function

Without a seat at the right table, SEO becomes a cleanup function.

When one operational unit owns SEO, the work starts to reflect that unit’s incentives.

  • Under marketing, it becomes campaign-driven and short-term.
  • Under IT, it competes with infrastructure work and release stability.
  • Under product, it gets squeezed into roadmaps that prioritize features over discoverability.

The healthiest performance I’ve seen came from environments where SEO sat close enough to leadership to see decisions early, yet broad enough to coordinate with content, engineering, analytics, UX, and legal.

In one case, I was a high-priced consultant, and every recommendation was implemented. I haven’t repeated that experience since, but it made one thing clear: VP-level endorsement was critical. That client doubled organic traffic in eight months and tripled it over three years.

Unfortunately, the in-house SEO team is just another team that might not get the chance to excel. Placement is not everything, but it is the difference between influencing the decision and fixing the outcome.

Hiring mistakes

The second pattern that keeps showing up is hiring – and it surfaces long before any technical review.

Many SEO programs fail because organizations staff strategically important roles for execution, when what they really need is judgment and influence. This isn't a talent shortage. It's a screening problem.

The SEO manager often wears multiple hats, with SEO as a minor one. When they don’t understand SEO requirements, they become a liability, and the C-suite rarely sees it.

Across many engagements, I watched seasoned professionals passed over for younger candidates who interviewed well, knew the tool names, and sounded confident.

HR teams defaulted to “team fit” because it was easier to assess than a candidate’s ability to handle ambiguity, challenge bad decisions, or influence work across departments.

SEO excellence depends on lived experience. Not years on a résumé, but having seen the failure modes up close:

  • Migrations that wiped out templates.
  • Restructures that deleted category pages.
  • “Small” navigation changes that collapsed internal linking.

Those experiences build judgment, and judgment is what prevents repeat mistakes. That kind of expertise is hard to capture on a résumé.

Without SEO domain literacy, hiring becomes theater. But we can’t blame HR, which has to hire people for all parts of the business. Its only expertise is HR.

Governance needs to step in.

One of the most reliable ways to improve recruitment outcomes is simple: let the SEO leader control the shortlist.

Fit still matters, but competence comes first. When the person accountable for results shapes the hiring funnel, the odds of choosing the right candidate improve dramatically.

SEO roles require the ability to change decisions, not just diagnose problems. That skill does not show up in a résumé keyword scan.

Dig deeper: The top 5 strategic SEO mistakes enterprises make (and how to avoid them)

When priorities pull in different directions

Every department in a large organization has legitimate goals.

  • Product wants momentum.
  • Engineering wants predictable releases.
  • Marketing wants campaign impact.
  • Legal wants risk reduction.

Each team can justify its decisions – and SEO still absorbs the cost.

I have seen simple structural improvements delayed because engineering was focused on a different initiative.

At one workplace, I was asked how much sales would increase if my changes were implemented.

I have seen content refreshed for branding reasons that weakened high-converting pages. Each decision made sense locally. Collectively, they reshaped the site in ways nobody fully anticipated.

Today, we face an added risk: AI systems now evaluate content for synthesis. When content changes materially, an LLM may stop citing us as an authority on that topic.

Strong visibility governance can prevent that.

The organizations that struggled most weren’t the ones with conflict. They were the ones that failed to make trade-offs explicit.

What are we giving up in visibility to gain speed, consistency, or safety? When that question is never asked, SEO degrades quietly.

What improved outcomes was not a tool. It was governance: shared expectations and decision rights.

When teams understood how their work affected discoverability, alignment followed naturally. SEO stopped being the team that said “no” and became the function that clarified consequences.

International SEO improves when teams stop shipping locally good changes that are globally damaging. Local SEO improves when there is a single source of location truth.

Ownership gaps

Many SEO problems trace back to ownership gaps that only become visible once performance declines.

  • Who owns the CMS templates?
  • Who defines metadata standards?
  • Who maintains structured data? Who approves content changes?

When these questions have no clear answer, decisions stall or happen inconsistently. The site evolves through convenience rather than intent.

In contrast, the healthiest organizations I worked with shared one trait: clarity.

People knew which decisions they owned and which ones required coordination. They did not rely on committees or heavy documentation because escalation paths were already understood.

When ownership is clear, decisions move. When ownership is fragmented, even straightforward SEO work becomes difficult.

Dig deeper: How to win SEO allies and influence the brand guardians

Healthy environments for SEO to succeed

Across my career, the strongest results came from environments where SEO had:

  • Early involvement in upcoming changes.
  • Predictable collaboration with engineering.
  • Visibility into product goals.
  • Clear authority over content standards.
  • Stable templates and definitions.
  • A reliable escalation path when priorities conflicted.
  • Leaders who understood visibility as a long-term asset.

These organizations were not perfect. They were coherent.

People understood why consistency mattered. SEO was not a reactive service. It was part of the infrastructure.

What leaders can do now

If you lead SEO inside a complex organization, the most effective improvements come from small, deliberate shifts in how decisions get made:

  • Place SEO where it can see and influence decisions early.
  • Let SEO leaders – not HR – shape candidate shortlists.
  • Hire for judgment and influence, not presentation.
  • Create predictable access to product, engineering, content, analytics, and legal.
  • Stabilize page purpose and structural definitions.
  • Make the impact of changes visible before they ship.

These shifts do not require new software. They require decision clarity, discipline, and follow-through.

Visibility is an organizational outcome

SEO succeeds when an organization can make and enforce consistent decisions about how it presents itself. Technical work matters, but it can’t offset structures pulling in different directions.

The strongest SEO results I’ve seen came from teams that focused less on isolated optimizations and more on creating conditions where good decisions could survive change. That’s visibility governance.

When SEO performance falters, the most durable fixes usually start inside the organization.

Dig deeper: What 15 years in enterprise SEO taught me about people, power, and progress

Google Ads API update cracks open Performance Max by channel

As part of the v23 Ads API launch, Performance Max campaigns can now be reported by channel, including Search, YouTube, Display, Discover, Gmail, Maps, and Search Partners. Previously, performance data was largely grouped into a single mixed category.

The change under the hood. Earlier API versions typically returned a MIXED value for the ad_network_type segment in Performance Max campaigns. With v23, those responses now break out into specific channel enums — a meaningful shift for reporting and optimization.

Why we care. Google Ads API v23 doesn’t just add features — it changes how advertisers understand Performance Max. The update introduces channel-level reporting, giving marketers long-requested visibility into where PMax ads actually run.

How advertisers can use it. Channel-level data is available at the campaign, asset group, and asset level, allowing teams to see how individual creatives perform across Google properties. When combined with v22 segments like ad_using_video and ad_using_product_data, advertisers can isolate results such as video performance on YouTube or Shopping ads on Search.

For developers. Upgrading to v23 will surface more detailed reporting than before. Reporting systems that relied on the legacy MIXED value will need to be updated to handle the new channel enums.
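Reporting code can defend against both the legacy value and any enums it doesn't recognize yet. A minimal Python sketch of that pattern – the channel names and row shapes here are illustrative, not the actual API types:

```python
from collections import defaultdict

# Hypothetical channel enum strings; the exact v23 enum names may differ.
KNOWN_CHANNELS = {
    "SEARCH", "YOUTUBE", "DISPLAY", "DISCOVER",
    "GMAIL", "MAPS", "SEARCH_PARTNERS",
}

def bucket_for(ad_network_type: str) -> str:
    """Map a segment value to a reporting bucket, tolerating legacy MIXED
    rows and any future enums we don't know about yet."""
    if ad_network_type in KNOWN_CHANNELS:
        return ad_network_type
    return "UNATTRIBUTED"

def cost_by_channel(rows):
    """Aggregate cost (micros) per channel from (ad_network_type, cost_micros) rows."""
    totals = defaultdict(int)
    for network_type, cost_micros in rows:
        totals[bucket_for(network_type)] += cost_micros
    return dict(totals)

# Pre-June-2025 rows may still carry the legacy MIXED value.
rows = [
    ("SEARCH", 1_000_000),
    ("YOUTUBE", 500_000),
    ("MIXED", 250_000),
    ("SEARCH", 750_000),
]
print(cost_by_channel(rows))
```

Falling back to a catch-all bucket instead of raising keeps dashboards working across the date boundary where legacy and channel-level rows mix.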

What to watch:

  • Channel data is only available for dates starting June 1, 2025.
  • Asset group–level channel reporting remains API-only and won’t appear in the Google Ads UI.

Bottom line. The latest Google Ads API release quietly delivers one of the biggest Performance Max updates yet — turning a black-box campaign type into something advertisers can finally analyze by channel.

How to build a modern Google Ads targeting strategy like a pro

Search marketing is still as powerful as ever. Google recently surpassed $100 billion in ad revenue in a single quarter, with more than half coming from search. But search alone can no longer deliver the same results most businesses expect.

As Google Ads Coach Jyll Saskin Gales showed at SMX Next, real performance now comes from going beyond traditional search and using it to strengthen a broader PPC strategy.

The challenge with traditional search marketing

As search marketers, we’re great at reaching people who are actively searching for what we sell. But we often miss people who fit our ideal audience and aren’t searching yet.

The real opportunity sits at the intersection of intent and audience fit.

Take the search [vacation packages]. That query could come from a family with young kids, a honeymooning couple, or a group of retirees. The keyword is the same, but each audience needs a different message and a different offer.

Understanding targeting capabilities in Google Ads

There are two main types of targeting:

  • Content targeting shows ads in specific places.
  • Audience targeting shows ads to specific types of people.

For example, targeting [flights to Paris] is content targeting. Targeting people who are “in-market for trips to Paris” is audience targeting. Google builds in-market audiences by analyzing behavior across multiple signals, including searches, browsing activity, and location.

The three types of content targeting

  • Keyword targeting: Reach people when they search on Google, including through dynamic ad groups and Performance Max.
  • Topic targeting: Show ads alongside content related to specific topics in display and video campaigns.
  • Placement targeting: Put ads on specific websites, apps, YouTube channels, or videos where your ideal customers already spend time.

The four types of audience targeting

  • Google’s data: Prebuilt segments include detailed demographics (such as parents of toddlers vs. teens), affinity segments (interests like vegetarianism), in-market segments (people actively researching purchases), and life events (graduating or retiring). Any advertiser can use these across most campaign types.
  • Your data: Target website visitors, app users, people who engaged with your Google content (YouTube viewers or search clickers), and customer lists through Customer Match. Note that remarketing is restricted for sensitive interest categories.
  • Custom segments: Turn content targeting into audience targeting by building segments based on what people search for, their interests, and the websites or apps they use. These go by different names depending on campaign type—“custom segments” in most campaigns and “custom search terms” in video campaigns.
  • Automated targeting: This includes optimized targeting (finding people similar to your converters), audience expansion in video campaigns, audience signals and search themes in Performance Max, and lookalike segments that model new users from your seed lists.

Building your targeting strategy

To build a modern targeting strategy, you need to answer two questions:

  • How can I sell my offer with Google Ads?
  • How can I reach a specific kind of person with Google Ads?

For example, to reach Google Ads practitioners for lead gen software, you could build custom segments that target people who use the Google Ads app, visit industry sites like searchengineland.com, or search for Google Ads–specific terms such as “Performance Max” or “Smart Bidding.”

You can also layer in content targeting, like YouTube placements on industry educator channels and topic targeting around search marketing.

Strategies for sensitive interest categories

If you work in a restricted category such as legal or healthcare and can’t use custom segments or remarketing, use non-linear targeting. Ignore the offer and focus on the audience. Choose any Google data audience with potential overlap, even if it’s imperfect, and let your creative do the filtering.

Use industry-specific jargon, abbreviations, and imagery that only your target audience will recognize and value. Everyone else will scroll past.

Remember: High CPCs aren’t the enemy

Low-quality traffic is the real problem. You’re better off paying $10 per click with a 10% conversion rate than $1 per click with a 0.02% conversion rate.

When evaluating targeting strategies, focus on conversion rate and cost per acquisition, not just cost per click.
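The arithmetic behind that claim is simple: cost per acquisition is cost per click divided by conversion rate. A quick sketch of the comparison above:

```python
def cost_per_acquisition(cpc: float, conversion_rate: float) -> float:
    """CPA = cost per click / conversion rate."""
    return cpc / conversion_rate

# $10 clicks at a 10% conversion rate vs. $1 clicks at 0.02%
expensive_clicks = cost_per_acquisition(10.0, 0.10)   # roughly $100 per conversion
cheap_clicks = cost_per_acquisition(1.0, 0.0002)      # roughly $5,000 per conversion
print(round(expensive_clicks, 2), round(cheap_clicks, 2))
```

The "cheap" clicks cost fifty times more per conversion, which is why CPC alone is a misleading metric.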

Search alone can’t deliver the results you’re used to

By expanding beyond traditional search keywords and using content and audience targeting, you can reach the right people and keep driving strong results.

Watch: How to build a modern targeting strategy like a pro + Live Q&A

What is NLWeb (Natural Language Web)?

Natural language is quickly becoming the default way people interact with online tools. Instead of typing a few keywords, users now ask full questions, give detailed instructions, and are starting to expect clear, conversational answers. So, how can you make sure your content provides the answer to their question? Or better yet, how can you make it possible for them to interact with your website in a similar way? That’s where Microsoft’s NLWeb comes in. 

Meet NLWeb, Microsoft’s new open project

NLWeb, short for Natural Language Web, is an open project recently launched by Microsoft. The aim of this project is to bring conversational interfaces directly to websites, rather than users having to use an external chatbot that’s in control of what’s shown. Instead of relying on traditional navigation or search bars, NLWeb is designed to allow users to ask questions and explore content in a more personal, conversational way. 

At its core, NLWeb connects website content to AI-powered tools. It enables AI to understand what a website is about, what information it contains, and how that information should be interpreted for the purpose of returning personalized results. With this project, Microsoft is moving toward a more interoperable, standards-based, and open web that allows everyone to prepare their website for the future of search.  

This project was initiated and realized by R.V. Guha, CVP and Technical Fellow at Microsoft. Guha is one of the creators of widely used web standards such as RSS and Schema.org.  

How NLWeb works

NLWeb works by combining structured data, standardized APIs and AI models capable of understanding natural language. Every NLWeb instance acts as a Model Context Protocol (MCP) server, which makes your content discoverable for all the agents operating in the MCP ecosystem. This makes it easy for these agents to find your website.  

Using structured data, website owners then present their content in a machine-readable way. AI applications can consume this data and answer user questions accurately by matching them to the most relevant information. The result is a conversational experience powered by existing content – an interface that serves both human users and AI agents, whether directly on a website or through an external search tool.

An important thing to note is that NLWeb is an open project. It’s not a closed ecosystem, meaning that Microsoft wants to make it accessible to everyone. The idea is to make it easy for any website owner to create an intelligent, natural language experience for their site, while also preparing their content to interact with and be discovered by other online agents, such as AI tools and search engines.  
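In practice, "machine-readable" usually means Schema.org JSON-LD embedded in the page – the kind of markup NLWeb builds on. A minimal, illustrative sketch (the field values here are invented for this example):

```python
import json

# Illustrative Schema.org JSON-LD for an article page.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "What is NLWeb (Natural Language Web)?",
    "description": "How conversational interfaces connect website content to AI tools.",
    "publisher": {"@type": "Organization", "name": "Example Publisher"},
}

# This string would be embedded in a <script type="application/ld+json"> tag.
json_ld = json.dumps(article, indent=2)
print(json_ld)
```

Because the markup declares what the page is about in a standard vocabulary, an AI agent can match a user's question to the right content without scraping and guessing.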

How does natural language work? 

Natural language simply refers to the way we speak and write: full sentences that leave room for intent, context, and nuance. More than keywords or short commands, natural language reflects how people think and exactly what they are looking for.

To give you an example: a focus keyphrase might be running shoes trail. But using natural language, the request would look more like this: What are the best running shoes for trail running in wet conditions? 

Natural language in AI tools 

Modern AI tools are designed to understand this kind of input. The large language models behind these tools can analyze intent and context to generate responses that fulfill the given request. This is why conversational interfaces feel more intuitive than traditional search or forms. 

Tools like AI chat assistants, voice search, and even traditional search engines rely heavily on natural language understanding and users have quickly adapted to it. 

The current state of search 

The way people find information online is changing fast, a shift heavily influenced by AI-powered tools. We now expect personalized answers instead of a list of results to sort through ourselves. AI chatbots also let us follow up on our original query, turning search into a conversation instead of a series of clicks.

Research from McKinsey & Company shows that AI adoption and natural language interfaces are becoming mainstream, with 50% of consumers already using AI-driven tools for information discovery. The majority even say it’s the top digital source they use to make buying decisions. As these habits continue to grow, websites that aren’t optimized for natural language risk becoming invisible in AI-generated answers. 

Why this is interesting for you 

The shift to natural language isn’t just a technical trend. As discussed above, it directly impacts your online visibility and competitive position. 

If users ask an AI system for information, only a handful of sources will be referenced in the response. This is because, like search engines, AI platforms also need to be able to read the information on your website. Being one of those sources can be the difference between being discovered or being overlooked. 

NLWeb collaborates with Yoast 

With NLWeb, you are communicating your website’s content clearly and in a standardized way. That means your brand, products, or expertise can appear in AI-powered answers instead of your competitors. To help as many website owners as possible benefit from this shift, Yoast is collaborating with NLWeb.   

The best part? If you're a user of any of our Yoast plans designed for WordPress, you're well ahead here. Yoast's integration with NLWeb will roll out in phases, starting with functionality that helps our WordPress users express their content in ways AI systems can interpret accurately, without any additional setup required. So sit tight and let us help you prepare your website for the new world of search!

NLWeb aims to make your content understandable not just for people, but for the AI systems that are increasingly relevant to your website’s discovery. 

Read more: Yoast collaborates with Microsoft to help AI understand Open Web »

The post What is NLWeb (Natural Language Web)? appeared first on Yoast.

Recap: The January 2026 SEO Update by Yoast

The January 2026 SEO Update by Yoast is part of our monthly webinar series covering the latest developments in search and AI. In each session, we review the most important news from the past month and explore what it means for your search strategy. Hosted by Carolyn Shelby and Alex Moss, this month’s update looks at key industry shifts and practical takeaways for staying competitive. Below is a recap of the topics discussed and what they mean for your strategy.

Here’s the recap video on YouTube

Watch the full recap on YouTube to hear Carolyn and Alex dive deeper into these topics, answer audience questions, and provide additional examples of how these changes could affect your work.

SEO and AI news from January 2026

SEO is shifting from rankings to selection

Microsoft’s recent guide on AEO (Agentic Engine Optimization) and GEO (Generative Engine Optimization) highlights a major change: the goal isn’t just to rank, but to be chosen by AI and users. Tools like Gemini and ChatGPT don’t just match keywords; they evaluate brand authority, structured data, and real-world mentions. If your content isn’t clear, well-organized, or trustworthy, AI may overlook it, even if it performs well in traditional search. To stay competitive, focus on structured data, fast-loading pages, and strong brand signals.

Agentic commerce is on the rise

Google’s Universal Commerce Protocol (UCP) is an open-source framework designed to help AI handle purchases. This means AI won’t just recommend products, but could also buy them for users. For businesses, optimizing for AI “selection” is now as important as ranking. If you sell products, prioritize product schema, fast load times, and a strong brand presence to ensure AI picks you.

Google’s core updates continue to reshape publishing

The December 2025 core update hit news publishers hard, particularly those relying on prediction-based content (like “2026 Oscar predictions”). Google is favoring original, authoritative reporting over speculative or AI-generated content. If you’re in publishing, EEAT (Experience, Expertise, Authoritativeness, Trustworthiness) remains critical.

YouTube is a growing force in AI search

Gemini is now pulling YouTube videos into its responses, even for non-video queries. If you’re not repurposing content for YouTube, you’re missing an opportunity. Optimize video titles, descriptions, and transcripts so AI can find and cite your work.

New tools are changing how we work

Anthropic’s Claude CoWork can organize files and automate tasks, while open-source tools like Moltbot (formerly Clawdbot) let you run AI agents locally. These tools aren’t just novelties, but signs of how quickly AI is integrating into workflows. For SEO, staying adaptable and testing new tools will be key.

Yoast is helping AI work for everyone

Yoast is building on Microsoft’s NLWeb framework to help AI systems better understand web content. The goal is to ensure small publishers and businesses aren’t left behind as AI-driven discovery grows. If you’re using WordPress, Yoast SEO’s existing tools—like schema markup and readability checks—already support this effort. We’ve also added Gemini and Perplexity to our AI Brand Insights tool, so you can track how AI models perceive your brand.

What to focus on in 2026

  • Structure your content so AI can parse it easily (schema markup helps)
  • Build brand authority across channels—social media, PR, email, and YouTube all send signals AI notices
  • Understand agentic commerce if you sell products. Fast, well-structured pages will help AI “select” you
  • Avoid AI-generated slop. AI can help draft content, but human insight and expertise are irreplaceable

Sign up for the next SEO Update by Yoast

The next SEO Update by Yoast is on February 24, 2026, at 4 PM CET (10 AM EST). Sign up to join the live discussion or get the recording. Don’t miss it!

New to Yoast SEO for Shopify: Enhanced pricing visibility in product schema 

We are excited to announce an update to our Offer schema within Yoast SEO for Shopify. This update introduces a more robust way to communicate pricing to search engines, specifically introducing sale price strikethroughs.

What’s new? 

Previously, communicating a “sale” was often limited to showing a single price. With this update, we’ve refined how our schema handles the Offer object. You can now clearly define: 

  • The original price: The “base” price before any discounts. 
  • The sale price: The current active price the customer pays. 

Why this matters 

When search engines understand the relationship between your original and sale prices, they can better represent your deals in search results. This update is designed to help trigger those eye-catching strikethrough price treatments in Google Shopping and organic snippets, improving your click-through rate by visually highlighting the value you’re offering. 

Organic search results for 'cable knit hat' showing how the structured data appears on Google.

How to use it 

The schema automatically bridges the gap between your product data and the structured data output. Simply ensure your product’s “Regular Price” and “Sale Price” are populated, and our updated schema handles the rest. For more information about the structured data included with all our products, check out our structured data feature page.
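For the curious, here is a hand-written sketch of what such Offer markup can look like, using the ListPrice priceSpecification pattern Google documents for strikethrough pricing. The values are invented; the plugin generates the real markup from your Shopify product data automatically.

```python
import json

# Illustrative Offer markup: the sale price in "price", the original
# ("regular") price expressed as a ListPrice priceSpecification.
# Values are invented for this example.
offer = {
    "@context": "https://schema.org",
    "@type": "Offer",
    "price": "39.99",                 # current sale price the customer pays
    "priceCurrency": "USD",
    "priceSpecification": {
        "@type": "UnitPriceSpecification",
        "priceType": "https://schema.org/ListPrice",
        "price": "59.99",             # original price, eligible for strikethrough
        "priceCurrency": "USD",
    },
    "availability": "https://schema.org/InStock",
}
print(json.dumps(offer, indent=2))
```

Declaring both prices explicitly is what lets search engines understand the relationship between them, rather than seeing a single number.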

Get started

If you are a Yoast SEO for Shopify customer, you can access your product schema by opening a product in the Yoast product editor in your Shopify store. If you are not a customer and want to learn more, you can start a 14-day free trial of Yoast SEO for Shopify from the Shopify App Store.

What is the open web?

The open web is the part of the internet built on open standards that anyone can use. This concept creates a democratic digital space where people can build on each other's work without restrictions, just like how WordPress.org is built. For website owners, understanding and leveraging the open web is increasingly crucial, especially with the rise of AI-powered systems and the general direction online search is taking. So, let's explore what the open web is and what it means for your website.

What is the open web?

The open web refers to the part of the internet built on open, shared standards that are available to everyone. It's powered by technologies like HTTP, HTML, RSS, and Schema.org, which make it easy for websites and online systems to interact with each other. But it is more than just technical protocols. It also includes open-source code, public APIs, and the free flow of data and content across sites, services, and devices, creating a democratic digital space where people can build on each other's work without heavy restrictions.

Because these standards are not owned or patented, the open web remains largely decentralized. This allows content to be accessed, understood, and reused across devices and platforms. This not only encourages innovation but also ensures that information is discoverable without being locked behind proprietary ecosystems.

The benefits of an open web

The open web is built on publicly available protocols that enable access, collaboration, and innovation at a global scale. 

The most important benefits include:

  • Collaboration and innovation: Open protocols enable developers to build on each other’s work without proprietary restrictions.
  • Accessibility: Users and AI agents alike can access and interact with web content regardless of device, platform, or underlying technology.
  • Democratization: No single company controls access to information, giving publishers greater autonomy.
  • Inclusion: The open web creates a more level playing field, where everyone gets a chance to participate in the digital economy.

The open web vs the deep web

To give you a better idea of what the open web is, it helps to know about the “deep web” and closed or “walled garden” platforms. The deep web covers content not indexed by search engines, while closed systems or walled gardens restrict access and keep data siloed.

On the open web, anyone can access information freely. Wikipedia is a good example: it’s accessible to anyone looking for information on a topic, and to anyone who wants to contribute to its content. Closed-off platforms, like proprietary apps or social media ecosystems, only make content available if you pay or use a specific service. Well-known examples are social media platforms such as Facebook and Instagram, or a news website that requires a paid subscription.

In essence, the open web keeps information discoverable, accessible, and interoperable – instead of locked inside a handful of platforms.

AI and the open web

The popularity of AI-powered search makes open web principles more important than ever. Decentralized and accessible information allows AI tools to interact with content directly and use it freely to generate an answer for a user. 

“We believe the future of AI is grounded in the open web.” 

Ramanathan Guha, CVP and Technical Fellow at Microsoft. 

Microsoft’s open project NLWeb is a prime example. It provides a standardized layer that enables AI agents to discover, understand, and interact with websites efficiently, without needing separate integrations for every platform. 

What this means for website owners

For website owners, including small business owners, embracing the open web means making your content freely available in ways that AI can interpret. By using structured data standards like Schema.org, your website becomes discoverable to AI tools, increasing your reach and ensuring that your content remains part of the future of search.
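As a minimal sketch of what Schema.org structured data can look like in practice, the snippet below builds a basic Organization description and serializes it to JSON-LD, the format typically embedded in a page inside a script tag of type application/ld+json. The business name and URLs are placeholders, not real values.

```python
# Hypothetical example of Schema.org structured data for a small business.
# All details below are placeholders, not real values.
import json

organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Bakery",          # placeholder business name
    "url": "https://www.example.com",  # placeholder website URL
    "sameAs": [                        # other profiles for the same entity
        "https://www.facebook.com/examplebakery",
    ],
}

# Serialize to the JSON-LD that would be embedded in the page's HTML.
print(json.dumps(organization, indent=2))
```

Tools that consume structured data, whether search engines or AI agents, can then recognize the site as a well-defined entity rather than unstructured text.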

Yoast and Microsoft: collaborating towards a more open web

Yoast is proud to collaborate with NLWeb, a Microsoft project that makes your content easier for AI agents to understand without extra effort from website owners. This allows your content to remain discoverable, reach a wider audience, and show up in AI-powered search results.

The open web strives towards an accessible web where content is available to everyone, no matter how big your website or marketing budget is, giving everyone the chance to be found and represented in AI-powered search. NLWeb helps turn this vision into reality by connecting today’s open web with tomorrow’s AI-driven search ecosystem.

Read on: Yoast collaborates with Microsoft to help AI understand Open Web »

The post What is the open web? appeared first on Yoast.


Why does having insights across multiple LLMs matter for brand visibility?

Search today looks very different from what it did even a few years ago. Users are no longer browsing through SERPs to make up their own minds; instead, they are asking AI tools for conclusions, summaries, and recommendations. This shift changes how visibility is earned, how trust is formed, and how brands are evaluated during discovery. In AI-driven search, large language models interpret information, decide what matters, and present a narrative on behalf of the user.

Key takeaways

  • Search has evolved; users now rely on AI for conclusions instead of traditional SERPs
  • Conversational AI serves as a new discovery layer; users expect quick answers and insights
  • Brands must navigate varied interpretations of their presence across different LLMs
  • Yoast AI Brand Insights helps track brand mentions and identify gaps in AI visibility across models
  • Understanding LLM brand visibility is crucial for modern brand strategy and perception

The rise of conversational AI as a discovery layer

“Assistant engines and wider LLMs are the new gatekeepers between our content and the person discovering that content – our potential new audience.” — Alex Moss

Search is no longer confined to typing queries into a search engine and scanning a list of links. Today’s discovery journey frequently begins with a conversation, whether that’s a typed question in a chatbot, a voice prompt to an AI assistant, or an embedded AI feature inside a platform people use every day.

This shift has made conversational AI a new layer of discovery, where users expect direct answers, recommendations, and curated insights that help them make decisions and build brand perception more quickly and confidently.

Discovery is happening everywhere

Users are now encountering AI-powered discovery across a range of interfaces:

AI chat interfaces

Tools like ChatGPT allow users to ask open-ended questions and follow up in a conversational manner. These interfaces interpret intent and tailor responses in a way that feels natural, making them a go-to for exploratory search.

Also read: What is search intent and why is it important for SEO?

Answer engines

Platforms such as Perplexity synthesize information from multiple sources and often cite them. They act as research helpers, offering concise summaries or explanations to complex queries.

Embedded AI experiences

AI is increasingly built directly into search and discovery environments that people already use. Examples include AI-assisted summaries within search results, such as Google’s AI Overviews, as well as AI features embedded in browsers, operating systems, and apps. In these moments, users may not even think of themselves as “using AI,” yet AI is already influencing what information is surfaced first and how it is interpreted.

This broad distribution of AI discovery surfaces means users now expect accessibility of information regardless of where they are, whether in a chat, an app, or embedded in the places they work, shop, and explore online.

How people are using AI in their day-to-day discovery

Users interact with conversational AI for a wide range of purposes beyond traditional search. These models increasingly guide decisions, comparisons, and exploration, often earlier in the journey than classic search engines.

Here are some prominent ways people use LLMs today:

Product comparisons

ChatGPT gives a detailed brand comparison

Rather than visiting multiple sites and aggregating reviews, 54% of users ask AI to compare products or services directly, for example, “How does Brand A compare to Brand B?” or “What are the pros and cons of X vs Y?” AI synthesizes information into a concise summary that often feels more efficient than browsing search results.

“Best tools for…” queries

Result by ChatGPT for “best crm software for smbs.”

Did you know 47% of consumers have used AI to help make a purchase decision?

AI users frequently ask for ranked suggestions or curated lists such as “best SEO tools for small businesses” or “top content optimization software.” These queries serve as discovery moments, where brands can be suggested alongside context and reasoning.

Trust and validation checks

Many users prompt AI models to validate decisions or confirm perceptions, for example, “Is Brand X reputable?” or “What do people say about Service Y?” AI responses blend sentiment, context, and summarization into one narrative, affecting how trust is formed.

Also read: Why is summarizing essential for modern content?

Idea generation and research exploration

In a study by Yext, it was found that 42% of users employ AI for early-stage exploration, such as brainstorming topics, gathering potential search intents, or understanding broad categories before narrowing down specifics. AI user archetypes range from creators who use AI for ideation to explorers seeking deeper discovery.

Local discovery and service search

ChatGPT recommendations for “best cheesecake places in Lucknow, India.”

AI is also used for local searches. For example, many users turn to AI tools to research local products or services, such as finding nearby businesses, comparing local options, or understanding community reputations. In a recent AI usage study by Yext, 68% of consumers reported using tools like ChatGPT to research local products or services, even as trust in AI for local information remains lower than traditional search.

In each of these moments, conversational AI doesn’t just surface brands; it frames them by summarizing strengths, weaknesses, use cases, and comparisons in a single response. These narratives become part of how users interpret relevance, trust, and fit far earlier in the decision-making process than in traditional search.

Not all LLMs interpret brands the same way

As conversational AI becomes a discovery layer, one assumption often sneaks in quietly: if your brand shows up well in one AI model, it must be showing up everywhere. In reality, that’s rarely the case. Large language models interpret, retrieve, and present brand information differently, which means relying on a single AI platform can give a very incomplete picture of your brand’s visibility.

To understand why, it helps to look at how some of the most widely used models approach answers and brand mentions.

How ChatGPT interprets brands

ChatGPT is often used as a general-purpose assistant. People turn to it for explanations, comparisons, brainstorming, and decision support. When it mentions brands, it tends to focus on contextual understanding rather than explicit sourcing. Brand mentions are frequently woven into explanations, recommendations, or summaries, sometimes without clear attribution.

From a visibility perspective, this means brands may appear:

  • As examples in broader explanations
  • As recommendations in “best tools” or comparison-style prompts
  • As part of a narrative rather than a cited source

The challenge is that brand mentions can feel correct and authoritative, while still being outdated, incomplete, or inconsistent, depending on how the prompt is phrased.

How Gemini interprets brands

Gemini is deeply connected to Google’s ecosystem, which influences how it understands and surfaces brand information. It leans more heavily on entities, structured data, and authoritative sources, and its outputs often reflect signals familiar to traditional SEO teams.

For brands, this means:

  • Visibility is closely tied to how well the brand is understood as an entity
  • Clear, consistent information across the web plays a bigger role
  • Mentions often align more closely with established sources

Gemini can feel more predictable in some cases, but that predictability depends on strong foundational signals and accurate brand representation across trusted platforms.

How Perplexity interprets brands

Perplexity positions itself as an answer engine rather than a general assistant. It emphasizes citations and source-backed responses, which makes it popular for research and comparison queries. When brands appear in Perplexity answers, they are often tied directly to cited articles, reviews, or documentation.

This creates a different visibility dynamic:

  • Brands may be surfaced only if they are referenced in cited sources
  • Freshness and topical relevance matter more
  • Competitors with stronger editorial or PR coverage may appear more often

Here, brand presence is tightly coupled with external content and how frequently that content is used as a reference.

How these models differ at a glance

  • ChatGPT: Contextual mentions within explanations and recommendations. Visibility is influenced by prompt phrasing, training data, and general relevance.
  • Gemini: Entity-driven mentions aligned with authoritative sources. Visibility is influenced by structured data, brand consistency, and trusted signals.
  • Perplexity: Citation-based mentions tied to cited sources. Visibility is influenced by content coverage, freshness, and external references.

Why do brands need insights across multiple LLMs?

Once you see how differently large language models interpret brands, one thing becomes clear: looking at just one AI model gives you an incomplete picture. AI-driven discovery does not produce a single, consistent version of your brand. It produces multiple interpretations, shaped by the model, its data sources, and users’ interactions with it.

Must read: When AI gets your brand wrong: Real examples and how to fix it

Therefore, tracking your brand across multiple LLMs is essential because:

Brand visibility is fragmented by default

Across different LLMs, the same brand can show up in very different ways:

  • Correctly represented in one model, where information is accurate and well-contextualized
  • Completely missing in another, even for relevant queries
  • Partially outdated or misrepresented in a third, depending on the sources being used

This fragmentation happens because each model processes and prioritizes information differently. Without visibility across models, it’s easy to assume your brand is ‘covered’ when, in reality, it may only be visible in one corner of the AI ecosystem.

Different audiences use different AI tools

AI usage is not concentrated in a single platform. People choose tools based on intent:

  • Some use conversational assistants for exploration and ideation
  • Others rely on citation-led answer engines for research
  • Many encounter AI passively through search or embedded experiences

If your brand appears in only one environment, you are effectively visible only to a subset of your audience. This mirrors challenges SEO teams already recognize from traditional search, where performance varies by device, location, and search feature. The difference is that with AI, these variations are less obvious and more challenging to track without dedicated insights.

Blind spots create real business risks

Limited visibility across LLMs doesn’t just affect awareness; it also impairs learning. Over time, it can lead to:

  • Inconsistent brand narratives, where AI tools describe your brand differently depending on where users ask
  • Missed demand, especially for comparison or “best tools for” queries
  • Competitors being recommended instead, simply because they are more visible or better understood by a specific model

These outcomes are rarely intentional, but they can quietly influence brand perception and decision-making long before users reach your website.

All of this points to one thing: a broader, multi-model view helps build a more complete understanding of brand visibility.

The challenge: LLM visibility is hard to measure

As brands start paying attention to how they appear in AI-generated content, a new problem becomes obvious: LLM visibility doesn’t behave like traditional search visibility. The signals are fragmented, opaque, and constantly changing, which makes tracking and understanding brand presence across AI models far more complex than tracking rankings or traffic.

Below are some key challenges brand marketers might face when trying to understand how their brand appears to large language models.

1. Lack of visibility across AI platforms

Different LLMs, such as ChatGPT, Gemini, and Perplexity, rely on various data sources, retrieval methods, and citation logic. As a result, the same brand may be mentioned prominently in one model, inconsistently in another, or not at all elsewhere.

Without a unified view, it’s difficult to answer basic questions like where your brand shows up, which AI tools mention it, and where the gaps are. This fragmentation makes it easy to overestimate visibility based on a single platform.
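To make the idea of a unified view concrete, here is a small, hypothetical sketch in Python. It assumes answers have already been collected from each model; the model names and sample answers below are invented placeholders, and a real monitoring setup would query each provider’s API and run checks like this repeatedly over time.

```python
# Hypothetical sketch: given answers already collected from several AI models,
# build a simple per-model view of whether a brand is mentioned at all.
import re

def brand_mentions(answers: dict, brand: str) -> dict:
    """Map each model name to whether its answer mentions the brand."""
    pattern = re.compile(re.escape(brand), re.IGNORECASE)
    return {model: bool(pattern.search(text)) for model, text in answers.items()}

# Placeholder answers, standing in for real model output.
answers = {
    "ChatGPT": "Popular options include Yoast and several other SEO plugins.",
    "Gemini": "Well-known tools in this space include Rank Math and others.",
    "Perplexity": "According to cited reviews, Yoast SEO is widely used [1].",
}

print(brand_mentions(answers, "Yoast"))
# {'ChatGPT': True, 'Gemini': False, 'Perplexity': True}
```

Even this toy version surfaces the fragmentation problem: the same brand can be present in one model’s answer and absent from another’s, which is exactly the gap a single-platform check would miss.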

2. No clear insight into how AI describes your brand

AI models often mention brands as part of explanations, comparisons, or recommendations, but traditional analytics tools don’t capture how those brands are described. Teams lack visibility into tone, context, sentiment, or whether mentions are positive, neutral, or misleading.

This makes it hard to understand whether AI is reinforcing your intended brand positioning or subtly reshaping it in ways you can’t see.

3. No structured way to measure change over time

AI-generated answers are inherently dynamic. Small changes in prompts, updates to models, or shifts in underlying data can all influence how brands appear. Without consistent, longitudinal tracking, it’s nearly impossible to tell whether visibility is improving, declining, or simply fluctuating.

One-off checks may offer snapshots, but they don’t reveal trends or patterns that matter for long-term strategy.

4. Limited ability to benchmark against competitors

Seeing your brand mentioned in AI answers is a start, but it doesn’t tell you the whole story. The real question is what’s happening around it: which competitors appear more often, how they’re described, and who AI recommends when users are ready to decide.

Without comparative insights, teams struggle to understand whether AI visibility represents a competitive advantage or a missed opportunity.

5. Missing attribution and source clarity

Some AI models summarize or paraphrase information without clearly attributing sources. When brands are mentioned, it’s not always obvious which pages, articles, or properties influenced the response.

This lack of source visibility makes it difficult to connect AI mentions back to specific content efforts, PR coverage, or SEO work, leaving teams guessing what is actually driving brand representation.

6. Existing tools weren’t built for AI visibility

Traditional SEO and analytics platforms are designed around clicks, impressions, and rankings. They don’t capture AI-powered mentions, sentiment, or visibility trends because AI platforms don’t expose those signals in a structured way.

As a result, teams are left without reliable reporting for one of the fastest-growing discovery channels.

Together, these challenges point to a clear gap: brands need a new way to understand visibility that reflects how AI models surface and interpret information. This is where tools explicitly designed for AI-driven discovery, such as Yoast AI Brand Insights, come into play.

How does Yoast AI Brand Insights help?

AI-driven brand discovery is clearly fragmented and opaque, which leads to the next practical question: how do brand marketing teams actually make sense of it?

Traditional SEO tools weren’t built to answer that, which is where Yoast AI Brand Insights comes in. It’s designed to help users understand how brands appear in AI-generated answers and is available as part of Yoast SEO AI+.

Rather than focusing on rankings or clicks, Yoast AI Brand Insights focuses on visibility and interpretation across large language models.

Track brand mentions across multiple AI models

One of the biggest gaps in AI visibility is fragmentation. Brands may appear in one AI model but not in another, without any obvious signal to explain why. Yoast AI Brand Insights addresses this by tracking brand mentions across multiple AI platforms, including ChatGPT, Gemini, and Perplexity.

This gives teams a clearer view of where their brand appears, rather than relying on isolated checks or assumptions based on a single model.

Identify gaps, inconsistencies, and opportunities

AI-generated answers don’t just mention brands; they frame them. Yoast AI Brand Insights helps surface patterns in how a brand is described, making it easier to spot:

  • Where mentions are missing altogether
  • Where descriptions feel outdated or incomplete
  • Where competitors appear more frequently or more favorably

These insights turn AI visibility into something teams can actually act on, rather than a black box.

Shared insights for SEO, PR, and content teams

AI-driven discovery sits at the intersection of SEO, content, and brand communication. One of the strengths of Yoast AI Brand Insights is that it provides a shared view of AI visibility that multiple teams can use. SEO teams can connect AI mentions back to site signals, content teams can understand how messaging is interpreted, and PR or brand teams can see how external coverage influences AI narratives.

Instead of working in silos, teams get a common reference point for how the brand appears across AI-driven search experiences.

A natural extension of Yoast’s SEO philosophy

Yoast AI Brand Insights builds on principles Yoast has long emphasized: clarity, consistency, and understanding how search systems interpret content. As AI becomes part of how people discover brands, those same principles now apply beyond traditional search results and into AI-generated answers.

In that sense, Yoast AI Brand Insights isn’t about chasing AI trends. It’s about giving teams a more straightforward way to understand how their brand is represented, where discovery is increasingly happening.

From rankings to representation in AI-driven search

AI-driven discovery is no longer an edge case. It’s becoming a regular part of how people explore options, validate decisions, and form opinions about brands. As large language models continue to evolve, the question for brands is not whether they appear in AI-generated answers, but whether they understand how they appear, where they appear, and what story is being told on their behalf. Gaining visibility into that layer is quickly becoming a foundational part of modern brand and search strategy.

The post Why does having insights across multiple LLMs matter for brand visibility? appeared first on Yoast.


Bing Webmaster Tools testing new AI Performance report

Microsoft has been promising data on the performance of websites mentioned in AI results within Bing and Copilot since February 2023, and again in April 2023. But it then let us down, lumping that data together with web queries and not giving us a clear view of how our sites perform within Bing’s AI experiences.

Now Bing is reportedly testing showing a new report within Bing Webmaster Tools named AI Performance report.

AI Performance report. This report is currently in a super limited beta – Microsoft has not announced anything about it publicly. But a source told us it shows citation data from both Microsoft Copilot and partners, including the number of citations and the number of cited pages by day.

You can see how many times Copilot cited your website and across how many pages. It does not show you how many people clicked from those citations on Copilot to your site.

It also lets you see the data listed by “grounding queries” and “pages.” A grounding query is likely not the full query entered into the search box on Copilot, but how Bing interprets that query. Plus, it will show you the “intent” behind the query, whether it is navigational, informational, or another form of query.

The report also shows you the specific pages cited by Copilot.

ETA. Again, Microsoft has not announced this report yet but some are seeing it go live within Bing Webmaster Tools under the Search Performance report named “AI Performance.” I do not know when you or I will gain access to the report.

Why we care. It is great to see more AI performance reporting coming from Bing Webmaster Tools, but I really do wish for click data. Every publisher, content creator, and site owner wants to know how the click-through rate from AI experiences compares to web search.

It just feels like all the search engines are deliberately hiding this data from us.


Google AI Overviews follow up questions jump you directly to AI Mode

Google will now jump you directly into AI Mode when you do a follow-up question from AI Overviews within Google Search. This makes the “transition to a conversation even more seamless,” Robby Stein, VP of Product, Google Search wrote.

Plus, Google AI Overviews are powered by Gemini 3 by default, globally.

AI Overviews jumping to AI Mode. We covered when Google was officially testing this back in December, and also before Google confirmed the test in October 2025. Asking a follow-up question within Google Search’s AI Overviews will now jump you into a conversation directly in AI Mode.

Google said this is about “making the transition to a conversation even more seamless,” within Google Search.

Why is Google doing this? Google said that during its testing, it “found that people prefer an experience that flows naturally into a conversation – and that asking follow-up questions while keeping the context from AI Overviews makes Search more helpful.”

Here is how it works:

When you click on “Show more,” Google will overlay AI Mode directly over the search results. You can click the X at the top right of the screen to go back to the search results. And all the sources are removed from this view; so much for sending more traffic to publishers and content creators…

Note, this is live on mobile only right now.

Gemini powering AI Overviews. Google also said that it is rolling out Gemini 3 as the default model for AI Overviews globally. Robby Stein said, “we’re making Gemini 3 the new default model for AI Overviews globally, so you get a best-in-class AI response right on the search results page, for questions where it’s helpful.”

This differs from his announcement about a week ago, where Gemini 3 Pro would power AI Overviews for complex English-language queries globally for Google AI Pro and Ultra subscribers.

Now, Gemini 3 is the default model used for AI Overviews globally.

Why we care. While Gemini 3 may provide better quality responses for AI Overviews, the bigger news is that Google is officially rolling out follow-up questions that go from Google Search’s AI Overviews into AI Mode.

This is a big deal, because this will likely result in even fewer clicks from Google Search to publishers and instead will drive more searchers into AI Mode.

AI Overviews show up at the top of the search results for many queries. It is hard enough to get clicks from those citation cards now, and it will be even harder as this new follow-up experience rolls out. Google is actively pushing those searchers from Search into AI Mode and not to your website.
