SEO in 2026 is expanding, not changing. Traditional search still matters, but SEO now also includes AI-driven discovery, social platforms, and chatbots. The principles remain the same: clarity, structure, authority, and relevance. It’s the platforms that are multiplying. We surveyed 59 SEOs to see how they’re handling these changes.
Some have less than a year of experience. Others have been in the field for over a decade. Their answers show an industry figuring things out. A few are ahead of the curve, but most are still catching up.
The best SEOs aren’t just reacting to AI. They’re using it to strengthen what already works: technical foundations, high-quality content, and real authority. Others are stuck debating whether SEO should even keep its name.
Here’s what stood out, and where Yoast fits into the conversation of what SEO means in 2026.
You can find the full results, with more questions and deeper insights from Yoast’s principal SEOs, Carolyn Shelby and Alex Moss, in a downloadable PDF. Sign up below!
Download the PDF report now
Enter your email address below. We’ll send you a download link to the full Yoast Perspective PDF report. Check your inbox, as it’ll arrive in minutes.
1. SEO is evolving, not declining
51% of respondents consider SEO to be “evolving”. 33% say it’s “thriving”. Only 10% think it’s “declining”.
This is an interesting divide, but it’s not random. In the results, those with 10+ years of experience say SEO is thriving, while newcomers say it is not. It might be that experts know the landscape better and see change as a constant.
Alex Moss’s take:“SEO has always adapted to changes in the SERP, and now it’s adapting again. The traditional SERP is gone, but SEO isn’t.”
Carolyn Shelby’s take:“SEO is evolving, but not because its fundamentals are breaking. The interfaces between users and information are changing. Search is no longer confined to ten blue links, but the need for structured, relevant, trustworthy content hasn’t diminished.”
The Yoast Perspective: We think SEO isn’t going anywhere, but there are changes happening. Traditional search from Google and Bing still drives traffic, but AI-driven discovery from LLM-powered assistants shapes perception and discovery. Therefore, the best SEOs don’t choose sides in this fight; they are mastering both directions.
2. Keep the name Search Engine Optimization
39% say SEO should be relabeled “Search Everywhere Optimization”. Only 32% want to keep “Search Engine Optimization”.
There’s big support for relabeling SEO: even among veterans, 41% prefer Search Everywhere Optimization. Of course, that doesn’t mean we should.
Alex Moss’s take:“The term ‘SEO’ will stay. The role will widen to include AI and other disciplines, but the name doesn’t need to change.”
Carolyn Shelby’s take:“The term ‘SEO’ still holds shared meaning, credibility, and market recognition. There’s no strong evidence that rebranding the discipline itself is necessary or beneficial. Responses favoring ‘Search Everywhere Optimization’ reflect where SEO outcomes now surface, not a fundamentally different practice.”
The Yoast Perspective: We at Yoast don’t think the term SEO is broken. Yes, there is a lot of change happening, especially in search, with AI overviews, chatbots, and social media platforms, but what about the core SEO work? You still have to focus on technical foundations, content quality, brand building, and authority.
‘Search Everywhere Optimization’ might describe where SEO happens, but it doesn’t change what SEO is. The name ‘SEO’ still works, but we just need to explain how it applies to AI and social platforms.
3. Good SEO is LLM optimization
64% agree LLM optimization is essentially the same as traditional SEO. 59% aren’t even actively optimizing for LLMs.
You might call this laziness, but you could also call it efficiency. It often comes down to the same thing.
There’s also the 9% who strongly disagree with this statement. These respondents say LLMs prioritize synthesis over rankings, so focusing on structured data and brand mentions makes more sense for them. Of course, they are not wrong, but they don’t contradict what others have said. LLMs don’t require new tactics; they just reward the same SEO principles more strictly.
Alex Moss’s take:“If you’re undertaking good SEO, you’re already optimizing well for LLMs. The tactics don’t change—just the audience.”
Carolyn Shelby’s take:“The same practices that make content discoverable and trustworthy for search engines also make it usable for LLMs. The confusion arises when people treat LLMs as a completely separate system. In reality, LLM visibility rewards clarity, relevance, and authority—all long-standing SEO principles.”
The Yoast Perspective: LLM optimization isn’t a separate discipline; it’s SEO for AI. The same principles apply: clarity, structure, and authority. The difference? AI systems are less forgiving of mediocre content, so the bar for quality is higher.
4. Rankings still matter, but not like they used to
52% say rankings are “equally important” as before. 30% say they’re “less important”.
This is a sensible shift. Google’s AI overviews and other zero-click results mean visibility does not equal traffic. For AI systems, rankings are still an authority signal.
Alex Moss’s take:“Traditional rankings are still important because agents still search the web to ingest information. If you aren’t visible there, it’s less likely an agent will identify and select you into their responses.”
Carolyn Shelby’s take:“Rankings still matter, but they are no longer the end goal. They are a proxy for visibility, not a guarantee of impact.”
The Yoast Perspective: Stop obsessing over ranking number one and start tracking visibility and presence. Check whether you are cited in AI-driven answers, and try to be mentioned in industry discussions. AI visibility and citations are the new rankings.
5. Organic traffic is still king, but for how long?
55% say “organic traffic” is their top metric. Yet 49% cite “reducing organic clicks” as their biggest challenge.
We see this as the great paradox of 2026. Traffic is down, but the value of that traffic could be up. You might get fewer clicks, but the ones that do happen carry stronger intent.
Carolyn Shelby’s take: “As AI reduces the need for some visits, success looks like being represented correctly rather than merely visited. Visibility in AI overviews doesn’t always drive clicks, but it builds legitimacy. Being included signals that you’re a credible source, even when users don’t click.”
Our advice:
Work on AI visibility, as this is the new SEO metric. Just as rankings show your visibility in traditional search, citations in AI overviews show your authority in AI-driven discovery. Track it alongside rankings and traffic
Keep an eye on branded search volume to learn whether people are looking for you by name
Monitor citations to see if others are referencing your content online
6. Content saturation is a big threat
39% say “competing with AI-generated content” is their top challenge. Only 4% cite a “talent gap.”
We know AI can write bad content. The bigger challenge is AI writing good-enough content at scale, flooding the web with noise that’s hard to cut through.
Alex Moss’s take:“AI-generated content is artificial. Humans connect with stories, not regurgitated lists.”
Carolyn Shelby’s take:“AI doesn’t change what good content is; it just raises the bar. Mediocrity doesn’t just rank lower; it disappears.”
Our advice:
Focus on building your EEAT, because AI can’t fake real-world expertise and authority
Prioritize quality over quantity, as a single great piece of content can beat ten average ones
Use AI, but be careful and always use it as a tool, not as a replacement
7. Most SEOs are ignoring a fast-growing search channel
Traditional search (Google/Bing) is still #1. But TikTok search ranks #5, lower than Amazon.
This might be something of a blind spot for many. Younger generations use TikTok and other video platforms for entertainment, recommendations, tutorials, and even B2B advice.
Alex Moss’s take:“Social platforms influence how LLMs perceive freshness and authority. Ignoring them means missing out on signals that AI systems value.”
Carolyn Shelby’s take:“You don’t need to rank on TikTok, but you do need to be discoverable there. LLMs scrape social platforms for real-world signals.”
The Yoast Perspective: SEO now includes social platforms like TikTok. You don’t need to rank there, but you do need to be discoverable, because LLMs scrape these platforms for fresh, authoritative content. A great video channel can boost your authority in AI responses.
Our advice:
Repurpose content for video platforms like TikTok and YouTube
Check brand mentions on these platforms
Improve your video SEO in general
What Yoast’s experts really think
The data shows trends, but the real wisdom comes from Yoast’s SEO leaders, Carolyn Shelby and Alex Moss. Here is a small peek at the insights they share about the various debates:
On “Search Everywhere Optimization”:
Alex: “The term ‘SEO’ will stay. The role will widen, but the name doesn’t need to change.”
Carolyn: “Rebranding risks fragmenting understanding. ‘SEO’ is already well-established outside the industry.”
On the future of SEO metrics:
Alex: “As we move from being seen to being selected, visits don’t hold the same value they used to. The business goal should be the most important metric.”
Carolyn: “Visibility in AI overviews doesn’t always drive clicks, but it builds legitimacy. Being included signals that you’re a credible source.”
On rankings vs. influence:
Alex: “Rankings still matter because agents search the web to ingest information.”
Carolyn: “Rankings are a proxy for visibility, not a guarantee of impact. Focus on presence.”
On the role of SEOs in 2026:
Alex: “100% all three: marketers, brand builders, and SEO specialists. Brand and marketing have become intertwined with SEO as our role expands.”
Carolyn: “A blended mindset is essential. SEO can’t operate in isolation from brand, product, or communications.”
Do you want to read the full story?
These insights are just a small taster for you. In the full Yoast SEO report, you’ll find much more:
The full answers to all 25 questions
In-depth commentary from Yoast’s SEO experts, Carolyn Shelby and Alex Moss
The Yoast Perspective 2026: 7 things we learned from the SEO industry (published 2026-04-21)
At one end: a human asks an AI a question and gets a fast, generated response.
At the other: an AI receives a goal and browses the web on a human’s behalf. It evaluates your brand, makes a decision, and leaves no trace in your analytics.
That’s agentic search.
And it’s already emerging.
ChatGPT’s deep research, Gemini’s agentic mode, and Perplexity’s research features are early expressions of it. Shopping within ChatGPT and booking tables without ever visiting a website are where it’s heading.
AI systems are already running multi-step evaluations with less human direction at each step.
The brands that show up in those evaluations aren’t waiting to see how this develops.
They’re optimizing for it now.
By the end of this guide, you’ll know what agentic search is, how it differs from typical AI search, and how you can prepare your brand for it.
What Agentic Search Actually Is
Agentic search is AI that searches and acts on your behalf — not just composing an answer from its training data, but going out to find information, use tools, and complete tasks.
At the simpler end of the agentic search spectrum, the AI retrieves sources and synthesizes a response.
At the more complex end, the AI agent receives a search goal, breaks it into sub-tasks, searches across multiple sources, cross-references what it finds, and takes action, without waiting for your input at each stage.
Examples of Agentic Search in Action
At the simpler end of the agentic search spectrum, you give an AI tool a prompt like “Which project management software is best for a remote team of ten?”
It won’t just produce an answer from its training data. It’ll go online, search for comparison articles, pull pricing and feature information from review platforms, and synthesize a recommendation.
Move further along the AI search spectrum and the behavior gets more complex.
For instance, imagine you ask the AI to research the competitive landscape in your market. It formulates a plan, then runs multiple searches across different source types — news coverage, review platforms, company pages, industry analysis.
It cross-references what it finds, and you get a structured report.
You’re still the one taking action based on this report, but this is a step up from the fairly simple, synthesized response we’re now used to.
Further still: some agents don’t need a prompt at all. Configured with a recurring search task, like monitoring competitor pricing, flagging new entrants, or summarizing industry news weekly, they run on a schedule.
And at the furthest end of the agentic search spectrum, the AI finds the right option, evaluates it against alternatives, and completes a transaction on your behalf. You asked for a recommendation. It booked the table.
Both OpenAI and Google have published open protocols specifically designed to make this possible (more on them soon).
Why This Is Different from What SEOs Already Know
Agentic search challenges some of the core assumptions SEO has operated on for years.
Here are the three that matter most.
Rankings Matter Less Than Before for Overall Visibility
AI tools are built to pull from a deliberately diverse range of sources, not just the highest-ranking pages.
A single search query triggers retrieval across multiple source types: editorial content, review platforms, community forums, company pages. No single ranking position dominates that process.
AI tools also weigh content and brand relevance heavily when forming responses, compared with factors like overall website authority, which carries more weight in traditional SEO.
That doesn’t mean backlinks don’t matter — they do. But topical depth and relevance to the searcher’s intent are the focus in these tools.
Finally, when an AI tool processes a search, it generates multiple related sub-queries, pulling from the results of each. This is called query fan-out.
Your ranking for the original keyword is just one input into a much wider retrieval process. This makes broader topical coverage a key component of AI search in general. This is how you show AI agents that you’re worth citing, recommending, and taking action on.
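A rough sketch of the fan-out idea, with invented sub-query templates and a toy in-memory index (real AI systems generate sub-queries dynamically and retrieve from live sources):

```python
# Illustrative sketch of query fan-out. The sub-query templates and the toy
# "index" below are invented for demonstration only.

def fan_out(query: str) -> list[str]:
    """Expand one query into related sub-queries (simplified templates)."""
    return [query, f"{query} comparison", f"{query} reviews", f"{query} pricing"]

def retrieve(sub_query: str, index: dict[str, list[str]]) -> list[str]:
    """Toy retrieval: return the pages indexed under a sub-query."""
    return index.get(sub_query, [])

# Only one page ranks for the original keyword, but pages with broader
# topical coverage still surface through the fanned-out sub-queries.
index = {
    "best crm": ["bigbrand.example/crm"],
    "best crm comparison": ["reviews.example/crm-comparison"],
    "best crm reviews": ["g2.example/crm", "reviews.example/crm-comparison"],
    "best crm pricing": ["smallbrand.example/pricing"],
}

retrieved: list[str] = []
for sub in fan_out("best crm"):
    for page in retrieve(sub, index):
        if page not in retrieved:  # de-duplicate across sub-queries
            retrieved.append(page)
print(retrieved)
```

Note that smallbrand.example never ranked for the original keyword, yet it still enters the retrieval pool via the pricing sub-query; that is the practical payoff of broader topical coverage.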
Your Content Depth Is Now a Competitive Advantage
As Crystal Carter, Head of AI Search & SEO at Wix, puts it: “LLMs don’t get tired of reading 45 pages about your business.”
The average user won’t read countless pages of product documentation. But an agent will — and it’ll use what it finds to make a recommendation.
FAQs, knowledge base articles, documentation, case studies — content that might rarely surface in a standard browsing session becomes evidence in an agentic evaluation.
Crystal gives Levi’s sustainability documentation as an example.
A human visitor might not find it. If you were wondering whether Levi’s is sustainable, you’d probably look the brand up on a single trusted site.
Compare that with what Perplexity AI does to answer the question “Are Levi’s sustainable?”
It conducts a deep dive into Levi’s site.
It evaluates evidence from 15 different sources.
It reads multiple pages from Levi’s own site, including their sustainability report, details on the sustainability of their fibers, their stance on human rights, and a modern slavery statement from a domain in a separate geography (Levi’s UK).
To succeed in agentic search, you need to make sure agents can answer any questions about your brand your users may have.
Your Brand Is Audited Across Sources, Not Ranked Once
AI systems don’t simply retrieve results. They actively research, compare, and filter brands before a human ever sees a recommendation.
Your brand isn’t being ranked once. It’s being audited across sources.
If we take the Levi’s example again, ChatGPT doesn’t just look at Levi’s own content to answer the sustainability question.
It also looks at official rating bodies, third-party research, and media publications. It acts more like a professional researcher than a human running a low-stakes product search.
An agentic system evaluates brands through layered filters like:
Can it find you clearly?
Does it understand you correctly?
Are you validated elsewhere?
Does it trust you enough to recommend you?
If you fail any of those layers, you can disappear entirely from the final answer.
Your Site Needs to Be Usable By Agents, Not Just People
Increasingly, AI agents interact with businesses through structured agentic protocols designed for machine-to-machine communication.
Instead of just relying on what’s in a page’s HTML, AI agents are moving toward standardized protocols, like the Agentic Commerce Protocol (ACP) and Natural Language Web (NLWeb).
This changes what “being accessible” actually means.
Content that only exists inside a visual interface — FAQs that expand on click, pricing tables rendered dynamically, product comparisons loaded via JavaScript — may never exist in the structured layer agents rely on to retrieve and execute actions.
And if they can’t access it, they can’t use it.
That matters because AI agents are increasingly the ones deciding what to include in their recommendations and what to ignore. The human only sees your site if you’re in those recommendations.
So the question is no longer just: “Can people find my website?”
It’s: “Can AI systems clearly understand and use my business information without friction?”
Because in this new system, if your business isn’t easy for AI to access and act on, you may not show up at all.
Where Agents Look When Evaluating Your Brand
An agent evaluating your brand might find everything it needs on a single page of your website.
But when it does go looking further, it’s not just gathering information. It’s also checking whether the rest of its sources agree.
An agent corroborates, actively checking whether the picture is consistent across everything it finds.
Here are some of the key places agents look:
Your Website
Agents are likely to prioritize sites that are easy to parse and extract from. They look for:
Clear, up-to-date pricing in plain HTML (not hidden behind interactions).
Feature descriptions that explain capabilities — not just marketing claims.
Positioning that makes it obvious who the product is for (and who it isn’t).
Review Platforms (G2, Capterra, Trustpilot)
Agents read review content for specificity, covering things like use case, company size, outcomes, and integrations.
Community Signals (Reddit & Other Forums)
Agents look at user sentiment on community platforms to cross-check vendor claims.
A brand that talks about itself one way and gets discussed differently in communities creates a consistency gap that leaves agents hesitant to recommend your brand (at least without caveats).
Third-Party Editorial
Agents also look at comparison articles, analyst coverage, and industry endorsements.
Appearing consistently in credible “best X for Y” content is a positive signal.
7 Things to Do Before Agentic Search Goes Mainstream
Agentic search isn’t fully mainstream yet, but the infrastructure is being built now.
The brands that will be well-positioned are the ones that start taking action before their rivals are even aware of what agentic search is.
Here’s how to make sure you’re one of those brands.
1. Run a Cross-Source Consistency Audit
Check your pricing, features, and positioning across your own site, your G2 and Capterra profiles (or any other platforms your target audience uses), and comparison articles where your brand appears.
Flag and correct every contradiction.
Make this a recurring part of your workflow. Old positioning language lingers in third-party content long after you’ve updated your own pages.
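The audit can be as simple as lining up the facts each source states and flagging disagreements. A minimal sketch, with made-up source names and values:

```python
# Hypothetical sketch of a cross-source consistency audit: collect the facts
# each source states about your brand and flag any fact whose value differs
# between sources. Source names and values below are invented.

def audit(sources: dict[str, dict[str, str]]) -> list[str]:
    """Return one flag per fact that is contradicted across sources."""
    flags = []
    facts = {fact for claims in sources.values() for fact in claims}
    for fact in sorted(facts):
        values = {name: claims[fact]
                  for name, claims in sources.items() if fact in claims}
        if len(set(values.values())) > 1:  # more than one distinct value
            flags.append(f"{fact}: {values}")
    return flags

sources = {
    "own_site": {"starting_price": "$29/mo", "positioning": "for small teams"},
    "g2_profile": {"starting_price": "$25/mo", "positioning": "for small teams"},
    "comparison_article": {"starting_price": "$29/mo"},
}
for flag in audit(sources):
    print(flag)  # flags the stale $25/mo price on the G2 profile
```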
2. Build Hub Pages for Your Highest-Value Queries
If you don’t have them already, create new standalone pages that fully answer the key questions: what you do, who it’s for, how it compares to other solutions, what it costs, and what customers say.
3. Pressure-Test Your Declared Audience
Pull up your homepage, pricing page, and top comparison content.
Ask: can an agent clearly extract who this is for, what problem it solves, and what makes it right for a specific profile?
To make this concrete, paste the content into an AI tool and use this prompt:
“You are an AI agent evaluating this company. Based only on the content provided, extract: (1) who this product is for, (2) what problem it solves, (3) key use cases, and (4) what differentiates it from alternatives. Then highlight any ambiguity or contradictions.”
If the output is vague or generic, your positioning is too.
4. Ask Customers for More Detailed Reviews
Most reviews are vague: “Great product, really helpful team.”
That doesn’t help AI systems understand when your product is actually a good fit.
Instead, ask customers to be more specific about how they use it and what changed.
For example, in your review requests, you can say:
“If you’re happy to leave a review, it would be really helpful if you could include:
What you use the product for
Your company size or team type
The problem you were trying to solve
The outcome or result you saw
Any tools you integrate with”
5. Check Your Accessibility
Make sure your pricing, FAQs, and feature comparisons are in plain HTML.
Also check your forms and CTAs. If an agent needs to book, enquire, or transact on a user’s behalf, it needs to be able to find and use the form. So don’t hide them behind JavaScript.
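One quick way to test this is to check whether a phrase exists in the raw HTML at all, before any JavaScript runs. A minimal sketch using Python's standard-library HTML parser (the page snippets are invented):

```python
# Checks whether a phrase is present in the raw HTML an agent would fetch,
# ignoring script/style content. JavaScript-injected text will not be found.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the text that exists in the raw HTML, skipping scripts."""
    def __init__(self):
        super().__init__()
        self._skip = False
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True
    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def visible_in_raw_html(html: str, phrase: str) -> bool:
    parser = TextExtractor()
    parser.feed(html)
    return phrase.lower() in " ".join(parser.chunks).lower()

# A pricing line in plain HTML is found; one rendered by JavaScript is not.
static_page = "<main><h2>Pricing</h2><p>Pro plan: $29/month</p></main>"
js_page = "<main><div id='pricing'></div><script>render('$29/month')</script></main>"
print(visible_in_raw_html(static_page, "$29/month"))  # True
print(visible_in_raw_html(js_page, "$29/month"))      # False
```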
6. Implement Agentic Search Protocols
While agentic search protocols are still new and being actively developed, understanding how they work and implementing them on your site can help you prepare for wider rollouts.
For more information on which protocols matter and what they do, read our guide to agentic search protocols.
7. Monitor Your AI Footprint
Right now, here are two things you can actually track to monitor your AI footprint:
Run Regular Brand Queries
Open ChatGPT, Perplexity, and Google AI Mode, and search for your brand by name.
Then search for the category queries a buyer would use — “best [product type] for [your target audience].”
In both cases, document what comes back. Is your brand mentioned? Is what’s being said accurate? Is it consistent with your current positioning?
Do this monthly and track how things change over time.
If your positioning is wrong or outdated, update your homepage, pricing, and comparison pages first (these are usually the sources AI systems rely on most).
If competitors are being favoured, strengthen your comparison content and aim to get more third-party reviews.
If you’re missing entirely, check whether your key pages are crawlable, indexable, and clearly describe your use case.
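To keep these monthly checks comparable over time, it helps to log them in a fixed format. A minimal sketch that records observations as CSV (the field names and example rows are invented for illustration):

```python
# Hypothetical monthly AI-visibility log: per platform and query, record
# whether the brand was mentioned and whether the description was accurate.
import csv
from datetime import date
from io import StringIO

FIELDS = ["date", "platform", "query", "brand_mentioned", "accurate"]

def log_snapshot(rows, out):
    """Write one monthly snapshot of observations as CSV."""
    writer = csv.DictWriter(out, fieldnames=FIELDS)
    writer.writeheader()
    for row in rows:
        writer.writerow(row)

# Example observations from one monthly check (values are made up).
snapshot = [
    {"date": date(2026, 4, 1).isoformat(), "platform": "ChatGPT",
     "query": "best CRM for small teams", "brand_mentioned": True, "accurate": True},
    {"date": date(2026, 4, 1).isoformat(), "platform": "Perplexity",
     "query": "best CRM for small teams", "brand_mentioned": False, "accurate": ""},
]

buffer = StringIO()  # in practice, append to a file instead
log_snapshot(snapshot, buffer)
print(buffer.getvalue())
```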
Agentic search is already here. And as time goes on, complex agentic tasks — like signing up for a tool or buying on behalf of the user — will only become more common. That’s why it’s worth preparing for full agentic search right now.
Start by figuring out where you stand currently.
Tools like Semrush’s AI Visibility Toolkit show you how AI systems currently perceive your brand across platforms. That’s your baseline before you tackle anything else. Learn how to use it in our Semrush AI visibility guide.
What Is Agentic Search? (And Why SEOs Need to Pay Attention) (published 2026-04-20)
AI is changing search and rewriting the rules. If your brand isn’t visible in AI-generated answers, you have a bigger problem than just traffic. You’re missing out on trust, credibility, and customers who now expect AI to recommend the best options everywhere.
We see that traditional SEO isn’t enough anymore. Today, it’s possible to rank #1 on Google and still be invisible in the AI responses people now often turn to for recommendations.
Yoast AI Brand Insights is a tool that shows you exactly how your brand appears in AI-generated answers from ChatGPT, Perplexity, and Google Gemini. It tracks sentiment and benchmarks your brand against competitors. What’s more, it doesn’t just help build your AI visibility; it also helps you control your brand’s narrative.
Key takeaways
AI visibility matters; brands absent in AI responses lose trust and customers.
Yoast AI Brand Insights helps track brand mentions, sentiment, and credibility across AI platforms.
Modern SEO now focuses on AI visibility, moving beyond traditional search engines.
To improve AI brand visibility, brands should publish authoritative content and optimize for AI citations.
Active participation in online communities enhances brand visibility on AI platforms.
Why modern SEO is about AI visibility
People are no longer just searching on Google. Every day, more people are asking AI tools and Large Language Models (LLMs) like ChatGPT, Gemini, and Perplexity for recommendations. Unlike classic search engines, these tools don’t just list links; they curate answers by combining trained knowledge with information they’ve learned from the web.
AI platforms combine information from multiple sources to provide a single, context-aware, and custom answer. People even start treating these AI answers as personal advice, not just generic search results. This will happen more and more as search engines like Google increasingly integrate AI into their search results. As a result, the boundaries between traditional search and AI-generated answers are blurring.
AI search is a blind spot for most
Classic SEO tools track rankings, but they don’t track how your brand appears in AI answers. This leads to blind spots where your competitors might be all over the AI recommendations in your market without you realizing it.
What’s more, you might rank well on Google, but you could be invisible to a growing audience if AI systems ignore your brand. Your competitors can appear more often or more positively in AI recommendations. Or there’s negative sentiment in AI responses that can harm your reputation without you even knowing.
Controlling the narrative of your brand
AI platforms like ChatGPT, Perplexity, and Gemini piece together your brand’s story from scattered sources, like reviews, news articles, social media, and your own content. If these send mixed signals, the answers an AI gives will too. That’s why you need to send a unified, consistent message. This is one of the most effective ways to reinforce your narrative across every platform.
Repeat your main message, whether that’s “affordable luxury” or “sustainable innovation,” everywhere, from your site content to press releases and from social media to external interviews.
Quickly address misinformation and respond to inaccurate reviews by publishing clarifications online. By doing this, you prevent the AI from amplifying outdated or incorrect details.
Support your brand’s most important attributes with structured data. Add the awards your brand won, or its unique selling points, so you can give the AI platform an all-encompassing framework to reference.
Remember, consistency is about repeating your most important brand aspects everywhere. Shape the narrative in such a way that the AI has no choice but to reflect the brand the way you want it to project.
Yoast AI Brand Insights is here to help
Yoast AI Brand Insights is a helpful tool that tracks how your brand appears in AI answers. It provides a clear, actionable view of your brand’s visibility, sentiment, and credibility across major AI platforms.
Yoast AI Brand Insights helps you:
Understand if and how your brand is mentioned in AI responses
Track sentiment and see if AI platforms describe your brand positively or negatively
Identify the sources to see what AI references when mentioning your brand
Benchmark against competitors to see how you stack up
We didn’t build this just to hand you data; we built it to turn the AI black box into actionable insights.
The main page of Yoast AI Brand Insights shows your main metrics; you can delve deeper into your analysis by going to Analysis details.
Understanding the AI visibility metrics
Using the Yoast AI Brand Insights metrics helps you measure and improve your brand’s visibility in AI platforms. To make the most of it, you have to understand what the metrics mean and why they matter.
AI Visibility Index (AIVI)
The AI Visibility Index (AIVI) scores, on a scale of 0-100, how visible your brand is on AI platforms such as ChatGPT, Perplexity, and Gemini. It consists of the following metrics:
Mentions, or how often your brand is cited in AI answers
Citations, or the number of authoritative sources referencing your brand
Sentiment, or the rate of positive vs. negative keywords associated with your brand
Rankings, or the relative position of your brand mentions compared to your competitors
The higher the AIVI score (on a scale of 0-100), the more visible your brand is in AI search results for the tracked terms. If you find that your score is low, you should focus on getting more mentions and citations. You should also work on positive sentiment around your business.
You build your relevance by publishing authoritative content. Try to get featured on relevant sites and monitor and improve negative sentiment around your brand. Learn more about how AI shapes brand perception.
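To make the composition concrete, here is a minimal sketch of how four 0-1 component rates could be combined into a single 0-100 score. The equal weighting is an assumption for illustration only; Yoast's actual AIVI formula is not public.

```python
# Illustrative composite visibility score. The equal weights are an
# assumption for demonstration; the real AIVI weighting is not public.

def visibility_score(mention_rate, citation_rate, positive_sentiment_rate, avg_rank_score):
    """Combine four 0-1 component rates into a 0-100 score (equal weights assumed)."""
    components = [mention_rate, citation_rate, positive_sentiment_rate, avg_rank_score]
    return round(100 * sum(components) / len(components), 1)

# Example: mentioned in 12 of 40 tracked queries, cited by 6 of 20 sources,
# 80% positive sentiment, and a mid-pack ranking position.
score = visibility_score(12 / 40, 6 / 20, 0.8, 0.5)
print(score)  # 47.5
```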
Mentions
The Mentions section tracks the specific queries for which your brand appears in AI responses. So, if someone asks, “What is the best low-cost CRM system for small businesses?” and your brand is in the results, that is a mention.
It’s not hard to understand why this is important. More mentions generally lead to greater visibility. If you don’t show up for the terms and queries relevant to your brand, you need to start improving your content.
Use the built-in AI-generated brand queries to find high-intent questions and write content that answers those questions thoroughly. These could be blog posts or FAQ pages, or whatever makes sense. Also optimize for conversational queries, such as “Is brand X good for startups?”
Sentiment
Sentiment measures the percentage of negative vs. positive words in the query results associated with your brand. So, if the AI describes your brand as “innovative” or “reliable”, that counts as positive sentiment. However, if they use terms like “overpriced” or “unreliable”, that’s negative sentiment.
Positive sentiment helps build trust, while negative sentiment can drive potential customers away. That’s why you should always actively address negative sentiment online. Don’t leave bad online reviews unanswered. You can also publish testimonials on your site to amplify positive voices, and do the same in your marketing messaging by talking about “a brand loved by thousands” or “award-winning” products.
Keep an eye on trends in your online sentiment and catch and fix issues early.
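As a toy illustration of this kind of measurement, the sketch below tallies positive versus negative signal words in an AI answer. The tiny keyword lists are stand-ins for a real sentiment lexicon, not Yoast's actual method:

```python
# Minimal keyword-based sentiment rate over AI answer text. The word lists
# are tiny stand-ins for a real sentiment lexicon.

POSITIVE = {"innovative", "reliable", "helpful"}
NEGATIVE = {"overpriced", "unreliable", "slow"}

def sentiment_rate(answer: str) -> float:
    """Share of sentiment-bearing words that are positive (0-1)."""
    words = [w.strip(".,").lower() for w in answer.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return pos / total if total else 0.5  # neutral when no signal words

answer = "Brand X is innovative and reliable, though some call it overpriced."
print(sentiment_rate(answer))  # 2 positive words vs. 1 negative word
```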
Citations
Citations refer to the sources that AI platforms explicitly reference when generating an answer, not the brands mentioned within those sources. For example, if Gemini answers a query about “the best credit cards” and cites a New York Times article about best credit cards, that New York Times page is the citation. Even if the article includes brands like American Express or Chase, the citation is attributed to the publisher, not to the individual brands.
That said, appearing in those cited sources still matters a great deal. If your brand is consistently featured in relevant, high-authority publications like The New York Times, it increases the likelihood that AI systems will surface your brand in their responses over time. In other words, you may not receive a direct citation, but you benefit from being part of the content that AI platforms trust and rely on.
Over time, your brand (say, American Express or Chase) becomes more likely to be included in AI responses to queries like “best credit cards,” especially if it consistently appears in trusted sources.
AI platforms use citations to validate their answers. Citations from top sources, such as industry publications, enhance credibility. Find where there’s a natural match between your customers and their audience, and publish the type of content people will want to link to.
Citations refer to the sources that AI platforms explicitly reference when generating an answer
5 Ways to improve your AI brand visibility
Now that you understand the metrics, here’s how to use insights from Yoast AI Brand Insights to improve your AI visibility.
Optimize for AI citations
AI platforms like Gemini, Perplexity, and ChatGPT use citations to validate their responses. Citations therefore increase the likelihood of your brand being included and trusted in AI-generated answers.
Try to get featured on relevant, authoritative sites and publish guest posts on industry sites, news sites, or educational domains. Get mentioned in roundup articles, like “Top 10 tools for doing X”. Ask customers to write reviews on platforms like Capterra, G2, and Trustpilot. All of these tactics can act as proof that your brand is a well-trusted source. Remember: the citations must be relevant.
Make sure your content is structured so the AI can read it easily. Use clear, hierarchical headings and bullet points to make the content easy to scan. Add FAQs and publish direct answers to common questions. It is also a good idea to add schema markup to help the AI crawlers understand your content.
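For example, FAQ content can be marked up with schema.org’s FAQPage structured data. Here’s a minimal sketch in Python that generates the JSON-LD; the questions, answers, and function name are placeholders, not a Yoast API.

```python
import json

def faq_schema(pairs):
    """Build FAQPage JSON-LD (schema.org) from (question, answer) pairs."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }, indent=2)

# Example: one high-intent brand question.
print(faq_schema([("Is Brand X good for startups?",
                   "Yes. Brand X offers a free tier aimed at small teams.")]))
```

The resulting JSON-LD goes in a `<script type="application/ld+json">` tag on the FAQ page; plugins like Yoast SEO can generate this markup for you.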
Don’t forget to update old content regularly. The AI platforms prioritize fresh, up-to-date information when retrieving sources, so refresh your content regularly to stay relevant.
Monitor and improve brand sentiment
When AI platforms mention your brand, they also shape how people see it. If the sentiment in their answers is negative, it can hurt your trustworthiness and cost you conversions. Persistent negative sentiment could even signal the need to reconsider broader business priorities.
If you find that AI platforms associate your brand with negative terms (like “slow customer service”), address the issue publicly. For instance, you could contact customers on review sites to resolve complaints. You can also publish case studies and testimonials to steer the AI towards positive perceptions.
In your monitoring, you’ll also find the positive terms AI platforms associate with your brand, such as “trusted” or “innovative”. Use these terms in your marketing, in your site content, and on social media.
The weekly scans in Yoast AI Brand Insights track sentiment shifts for your queries over time. If sentiment drops, investigate the cause, like a recent PR issue or a product recall.
Benchmark against competitors
AI visibility is also about how you compare to the competition. If they are mentioned more often or in a better light than you, they will appear more often in recommendations made by AI platforms.
See how your brand stacks up against competitors. Use Yoast’s Competitor ranking tab to see which brands show up a lot in AI answers. Analyze their content strategy. Do they publish more case studies? Are they active on review sites?
This tool shows how AI describes your brand compared to others in your market. For example, if you’re a coffee company like Taylor’s of Harrogate, you might find that Lavazza is consistently labeled as “the Italian espresso expert.” Now you know exactly what to highlight, whether it’s your heritage, roasting process, or sustainability, to stand out. Use these insights to sharpen your messaging and compete more effectively.
Don’t forget to check your weekly competitor analyses to see if your AI visibility is improving. Double down on the strategy that works for you. The tool also includes a historical view. This lets you look back at earlier analyses by selecting a past date, helping you compare visibility and sentiment across different points in time.
For each tracked query, Yoast AI Brand Insights gives specific insights into how your brand performs versus the competition
Answer brand-specific questions
AI platforms are very good at answering specific questions, such as “Is brand X reliable?” or “What’s the best tool to do Y?” You’re missing out on a lot of potential customers when your brand isn’t in these answers.
Yoast AI Brand Insights suggests queries you should monitor based on your input, such as “Is [Your Brand] good for small businesses?” In addition, do deep research into the common questions asked in your industry using tools like AnswerThePublic, AlsoAsked, or simply by checking Google’s People Also Ask section.
With the insights gathered, publish blog posts, FAQs, or landing pages and directly answer those brand-related queries. Support the content with properly structured data, such as FAQ and how-to schema, to give AI platforms more tools to understand your content.
In Yoast AI Brand Insights, track which questions get the most mentions from AI platforms. Don’t forget to keep your content up to date to keep it accurate and relevant.
During the setup, Yoast AI Brand Insights generates five highly relevant queries based on your input. You can change them if you like
Track progress with the AI Visibility Index
Improving the AI visibility of your brand isn’t a one-time task, but a recurring effort. Luckily, Yoast’s AI Visibility Index gives you an easy-to-understand metric that you can use to track your progress over time.
Run your first scan to establish the starting point for your AI Visibility Index. Note which areas, like citations or sentiment, are strongest and weakest.
Yoast AI Brand Insights runs weekly scans; review them to track progress. Check the historical view, keeping in mind that weeks can’t be viewed side by side: select the week before and then reselect this week to spot changes. Look for trends, such as improvements in sentiment or a sudden increase in citations.
If your score doesn’t improve, revisit the strategies above, such as optimizing for citations and improving sentiment. Be sure to experiment with new tactics and publish original research to secure more earned media.
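Conceptually, tracking the index over time is just comparing weekly scores. A minimal sketch, using hypothetical weekly values (the index scale and this helper are illustrative, not part of the product):

```python
def index_trend(weekly_scores):
    """Classify the direction of a visibility index across weekly scans."""
    if len(weekly_scores) < 2:
        return "not enough data"
    delta = weekly_scores[-1] - weekly_scores[0]
    if delta > 0:
        return "improving"
    if delta < 0:
        return "declining"
    return "flat"

# E.g. scores from the last three weekly scans:
print(index_trend([42, 44, 48]))  # improving
```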
How to influence LLMs to mention your brand
Imagine this: A potential customer asks ChatGPT, “What’s the best CRM for small businesses?” If your brand isn’t mentioned in the answer, you’ve lost a customer before they even knew you existed.
LLMs like ChatGPT, Gemini, and Perplexity don’t just pull answers out of thin air. They rely on data, citations, and patterns to generate responses. If your brand isn’t part of those patterns, it’s far less likely to be mentioned, no matter how well you rank on Google.
Publish authoritative content
LLMs are looking for well-structured, factually accurate content. These AI platforms love sources that provide unique insights or expert opinions, so be sure to focus on that.
Start with original research. Publish surveys, case studies, or industry reports with unique data. For example, “2026 State of [Your Industry] Report: Key Trends and Insights” positions your brand as an authority and gives AI platforms a reason to cite you.
Use the proven inverted pyramid structure in your content. Start with the most important information, like key findings and conclusions, follow with supporting details, and end with background information. This makes it easier for AI to extract, digest, and use your content.
Don’t forget to optimize for facts. Include statistics, quotes from experts, and actionable insights. For example, instead of “Our tool is great for marketers,” say “Our tool increased conversion rates by 30% for 500+ marketers in 2025, according to our latest case study.”
For example, HubSpot built its authority by publishing ultimate guides, like “The Ultimate Guide to Inbound Marketing.” These guides became go-to resources for marketers, earning backlinks from industry blogs, news sites, and even competitors. As a result, HubSpot is now frequently cited in AI responses about marketing tools.
Get mentioned on relevant, high-authority sites
LLMs trust reputable sources like industry publications, news sites, and review platforms. The more your brand is mentioned on these sites, the more likely it is to appear in AI responses. Please keep in mind that relevance is key here. For instance, if Yoast gets mentioned in Gardeners’ World or Home and Garden, it will do little to nothing for our brand. Find the most important and relevant sources and focus on those.
Pitch stories to journalists or industry blogs. For example, try to get featured in “Top 10 [Your Industry] Tools in 2026” lists.
Encourage customers to leave reviews on G2, Capterra, Trustpilot, or Google Reviews. Don’t forget to respond to (negative) reviews to show engagement and transparency.
If possible, try to reach out to sites like HubSpot, Search Engine Journal, or industry-specific blogs and ask to write for them. Be sure to include a bio with your brand name to reinforce recognition.
Optimize for conversational queries
LLMs are designed to answer natural language questions. This means you have to optimize your content for conversational queries. Conversational queries are things like “What’s the best CRM for startups?” rather than “best CRM”.
In your content, you should use question-focused headings. For example, answer the question “Is [Your Brand] good for small businesses?” directly in the first paragraph to make it clear and easy to understand.
LLMs often answer long-tail questions, so you should target long-tail keywords. For example, instead of “project management tool,” target “best project management tool for remote teams in 2026.”
In support of all of this, create FAQ pages with schema markup to help AI better understand your content.
Build citations
Build up a network of high-quality mentions that reinforce your brand’s authority. The more high-quality, relevant citations you have, the more likely LLMs are to mention your brand.
Publish assets like ultimate guides, templates, or tools that others want to reference and link to. For example, “The Ultimate Guide to [Your Industry] in 2026.”
Reach out to bloggers, journalists, and influencers to reference your content. For example, “We noticed you mentioned [Competitor] in your article. Here’s why [Your Brand] might be a better fit.”
Get featured in press releases, podcasts, or webinars. For example, “[Your Brand] Announces Groundbreaking Feature for [Industry].”
Make sure AI crawlers can reach your site
It’s important to ensure that AI crawlers can discover and index your content. If your site is invisible to them for whatever reason, your brand won’t appear in AI responses.
Your site should be technically sound, but you can also help LLMs in other ways. Make sure your site loads fast and is mobile-friendly. Use clean URLs, good meta tags and descriptions, and alt text for images. Also, use schema on your site to help AI crawlers understand what your site is about and how it all ties together.
In Yoast SEO, you can activate an llms.txt file. This proposed standard helps point AI crawlers to your most important content. Also, check whether your robots.txt file inadvertently blocks AI crawlers from accessing your content.
The llms.txt file in Yoast SEO helps point AI crawlers to your most important content
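For reference, the llms.txt proposal is a plain markdown file served at your site root (`/llms.txt`): an H1 with the site name, a short blockquote summary, and sections of annotated links. A minimal sketch, with placeholder names and URLs:

```markdown
# Example Brand

> Example Brand makes a low-cost CRM for small businesses.

## Key pages
- [Is Example Brand good for startups?](https://example.com/startups): a direct answer page
- [Pricing explained](https://example.com/pricing): plans and costs

## Docs
- [Getting started guide](https://example.com/docs/start): setup walkthrough
```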
Be active in online communities
LLMs are trained on, and can retrieve information from, forums, social media, and community platforms such as Reddit, Quora, and LinkedIn. Participating there can improve your brand’s visibility on AI platforms.
Answer questions on Quora and Reddit. Provide valuable, non-promotional answers that naturally mention your brand. For example, “As a [Your Industry] expert, I recommend [Your Brand] because…”
Join discussions on Slack, Discord, or niche forums. Share insights and link to your content when relevant. Post thought leadership content on LinkedIn, Twitter, or Facebook. For example, “Here’s why [Your Industry] is changing in 2026, and how [Your Brand] is leading the way.”
The future of brand visibility is AI-driven
We’ve seen that AI is changing how people discover brands. There’s a simple rule: if your brand isn’t visible in AI responses, you are missing out on an ever-growing audience.
Luckily, Yoast AI Brand Insights gives you the tools to:
Track mentions, sentiment, and citations across AI platforms
Benchmark against competitors to identify gaps
Optimize for high-intent queries to capture more attention
A major shift is underway in digital advertising: Meta Platforms is projected to generate more ad revenue than Google in 2026, signaling how marketers are increasingly favoring automated, performance-driven platforms.
Driving the news. According to Emarketer, Meta is expected to bring in $243.46 billion in global ad revenue this year, narrowly topping Google’s projected $239.54 billion.
Meta is forecast to capture 26.8% of global ad spend.
Google is projected to take 26.4%.
It would be the first time Google has lost the top spot in digital ad revenue.
Why we care. Meta’s growth suggests brands are getting more value from automated, performance-focused tools, which could influence how they split budgets between Meta and Google. It’s also a reminder that platform dynamics are changing fast, so media strategies need to stay flexible.
Catch up quick: Google has long dominated digital advertising through Search ads, Display ads across the web, and YouTube.
But its core ad business is growing more slowly than in previous years.
Meanwhile, Meta has benefited from AI-powered ad automation, stronger performance measurement tools, and continued scale across Facebook, Instagram, and WhatsApp.
Why Meta is winning now. Advertisers are increasingly prioritizing platforms that can deliver both reach and measurable return.
Meta’s advantage has been its ability to automate creative and targeting faster, optimize campaigns with less manual input, and make it easier for brands to prove ROI.
That’s especially appealing in a tighter economic environment where marketers are under pressure to do more with less.
Yes, but. Google is still enormous — and still growing.
Its search business remains one of the most profitable ad engines in the world, and YouTube continues to attract brand budgets. But the company faces mounting pressure from AI search disruption, antitrust scrutiny, and slowing growth in traditional search advertising.
A growing number of advertisers say their Google Ads campaigns were suddenly hit with mass disapprovals tied to DNS and 500 server errors — even when their sites appeared to be working normally. The issue is raising fresh concerns about platform reliability and the risk of sudden performance disruptions.
Driving the news. PPC advertisers began flagging widespread problems this week across Google Ads accounts, with multiple agency leaders saying clients were affected at the same time.
Ryan Berry, Managing Director at Cornerhouse Media, said more than 1,500 ads were disapproved in a single account around 1:30 p.m. UTC.
Others said they received overnight emails warning that ads had been disapproved.
Why we care. Sudden mass disapprovals can instantly pause traffic, leads, and revenue — even if nothing is actually wrong with the advertiser’s website. If Google’s systems are incorrectly flagging DNS or server errors, brands could lose performance and spend valuable time troubleshooting an issue they didn’t cause. It also highlights the need for closer monitoring and faster escalation when platform glitches happen.
What advertisers are seeing:
DNS errors, even when internal IT teams found no website issue.
Google Ads trainer Charlotte Osborne said she saw two separate cases this week — one tied to a DNS error and another to a 500 error — with no issues found on the client side.
Google Advertising specialist Joshua Barr said he received “lots of emails overnight” about disapproved ads and has been dealing with similar problems for weeks.
Several Paid Search experts also said they were seeing the same issue across accounts.
What’s likely happening. Google’s ad review systems use automated crawlers to test landing pages. If Googlebot encounters temporary server issues, DNS lookup failures, redirects, or timeout errors, ads can be automatically disapproved under the platform’s “destination not working” policy.
That means advertisers can be penalized even if:
their site is live for users,
the issue is temporary,
or the problem is on Google’s crawler side.
What to do now:
Check Google Ads policy manager for exact disapproval reasons.
Test landing pages using multiple locations and devices.
Review DNS uptime, redirects, and CDN/firewall settings.
Submit appeals for clearly incorrect disapprovals.
Document account-level impacts in case the issue proves platform-wide.
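To see roughly what an automated reviewer might see, you can reproduce the DNS and HTTP checks yourself. Here’s a rough sketch using Python’s standard library; the classification labels are our own for illustration, not Google’s policy codes.

```python
import socket
import urllib.error
import urllib.parse
import urllib.request

def check_landing_page(url: str, timeout: float = 10.0) -> str:
    """Approximate an ad-review crawler's checks: DNS first, then HTTP."""
    host = urllib.parse.urlparse(url).hostname or ""
    try:
        socket.gethostbyname(host)  # DNS resolution
    except socket.gaierror:
        return "dns_error"
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return "ok" if resp.status < 400 else f"http_{resp.status}"
    except urllib.error.HTTPError as exc:
        return f"http_{exc.code}"  # e.g. http_500, as advertisers reported
    except (urllib.error.URLError, TimeoutError):
        return "unreachable"
```

Running this from multiple locations (or a cloud host) helps distinguish a genuine outage from a problem visible only to Google’s crawler.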
The bottom line. For advertisers, this is a reminder that campaign performance can be derailed by platform glitches as much as by strategy — and when Google’s systems misfire, spend and leads can disappear fast.
Google’s legal troubles over its search and ad tech businesses are entering a new phase — one that could expose the company to billions in payouts from advertisers seeking damages after U.S. courts found it illegally monopolized key digital ad markets.
Driving the news. A growing group of advertisers is preparing to file mass arbitration claims against Google, according to attorney Ashley Keller, who said the first filings are expected this week.
Keller says he has already signed up a “significant number” of advertisers.
He estimates potential claims tied to online search and display advertising could exceed $218 billion, based on economic analysis his firm commissioned.
Similar mass arbitration cases typically take 12 to 24 months to resolve.
Catch up quick. Courts in 2024 dealt Google major antitrust blows.
Why we care. This case could open a path to recover money advertisers believe they overpaid for search and display ads due to Google’s alleged monopoly power. Mass arbitration may give businesses more leverage than individual claims and could pressure Google into settlements.
It also signals growing legal scrutiny of the digital ad market, which could eventually lead to more competition and lower costs.
Why arbitration matters. Most advertisers can’t simply sue Google in court because their contracts require disputes to go through arbitration.
That usually favors large companies when claims are handled one by one. But mass arbitration — which bundles 25 or more similar claims — can shift leverage back toward claimants.
It increases pressure to settle.
It can lower legal costs for smaller businesses.
It allows companies with relatively modest individual claims to pursue damages collectively.
What’s new. This case could break new ground because most mass arbitrations to date have involved consumers or workers — not corporate plaintiffs.
A large-scale advertiser action against Google would be among the first major efforts to use the strategy for business-to-business claims.
What Google says. In a recent filing, Google said it faces private damages claims tied to global antitrust cases but cannot yet estimate potential losses.
The company said it believes it has “strong arguments” and plans to defend itself aggressively.
Topical authority is a key concept in SEO, but it doesn’t account for how search and AI systems choose between competing sources.
The missing layer isn’t in content or structure. It’s in the signals that determine selection once a topic is understood — the difference between being eligible and being chosen.
Topical authority explains content, not selection
Topical authority is foundational for SEO and now AEO and AAO. But the framework the industry calls topical authority is incomplete. It covers semantics, content, and structure, but that’s just one part of a three-row, nine-cell model that defines topical ownership.
Topical authority describes what you’ve built. Topical ownership describes whether the system picks you.
Search and AI systems don’t reward content for existing. They reward content for winning a selection process. At Recruitment (Gate 6 in the AI engine pipeline), the system selects candidate answers from everything it has indexed.
Topical ownership has three layers: coverage, architecture, and position.
Everything in this article builds on Koray Tuğberk GÜBÜR’s foundation. He has engineered a rigorous methodology for building content architecture that signals genuine expertise to search engines, and his case studies prove it produces measurable results.
He coined “topical map” as a standard SEO deliverable, engineered the semantic content network methodology, and brought mathematical rigor to what had been vague advice about writing comprehensively.
His own formula (topical authority equals topical coverage plus historical data) already acknowledges the temporal dimension I’ll expand below. He’s the authority on this subject. The expanded framework names the cells he already recognized and adds the one row he hasn’t yet formalized.
Topical authority, fully defined, is a three-by-three matrix.
As with everything in this series, the “straight C” principle applies. To compete in any algorithmic selection process, you can’t afford a failing grade in any of the criteria that are being evaluated.
Excellence in some dimensions doesn’t compensate for absence in others. The system requires a passing grade for each criterion. The three rows aren’t equally weighted above that floor, and position is the dominant row, as we’ll see.
Row 1: Coverage is the entry ticket, not the destination
Coverage in one sentence: Go deep enough that nothing’s left to add, cover every adjacent angle, and bring a perspective nobody else has.
Coverage describes the content itself.
Depth is vertical exhaustiveness and is often underestimated.
Breadth is the horizontal range across subtopics and adjacent areas. GÜBÜR’s topical map concept is the engineering discipline that makes breadth systematic rather than accidental.
Original thought is the dimension that is almost always overlooked. Pushing the boundaries of a topic is what makes your coverage non-interchangeable.
An entity that covers a topic with perfect depth and breadth but says nothing new is an encyclopedia: comprehensive, correct, and structurally identical to any other comprehensive source. That’s an advantage you will lose over time, because sooner or later it becomes prior knowledge in the AI’s training data. At that point you’re no longer needed and won’t be cited.
Original thought is the key to retaining the AI’s attention. A new framework, a novel angle, or a perspective no one else has articulated gives it a reason to come back again and again, and ultimately to cite you.
Importantly, original thought doesn’t require being revolutionary, nor do you need to be original on every page. Often it will be as simple as a fresh way of framing a familiar concept.
Define your brand’s specific perspective, in your own specific vocabulary. When done properly, that’s enough.
There are two kinds of original thought, and they carry different risk profiles.
Reframing connects two existing validated truths that nobody has explicitly joined before. Both components are already corroborated; the system can verify them independently, and the originality lives in the framing.
True invention is different. There’s nothing for the system to cross-reference and nothing that’s already established to anchor the new claim. The result is that you look fringe until the world catches up.
The window between being right and being recognized can be long and uncomfortable. To take that risk credibly, you need absolute conviction, not only that you’re right but that you’ll be proven right, and the patience to survive looking wrong in the meantime.
The reframe carries a fraction of that risk: the source truths are already verifiable, so the connection is credible from the moment it’s published.
Row 2: All architecture decisions begin with source context
Architecture in one sentence: Write sentences clearly, make your content flow in a logical manner, and link intelligently.
The three cells in the architecture row are GÜBÜR’s terms, and I’m using them as he defined them.
Source context determines everything that follows:
The publisher’s angle.
The identity and purpose that shapes what the topical map should contain.
How the semantic network should be constructed.
GÜBÜR’s insight that a casino affiliate and a casino technology provider need fundamentally different topical maps for the same subject captures the principle: structure follows identity.
Topical map is the structural design of the content: core sections and outer sections, which attributes become standalone pages and which merge together, the direction of internal linking, and the identification and elimination of information gaps.
Semantic network is the interconnected execution that makes the structure machine-readable: contextual flow between sentences and paragraphs, semantic distance minimized between related concepts, and cost of retrieval optimized so that the system can extract facts without unnecessary computational effort.
Good architecture makes coverage legible to the system. You can have thorough coverage that the algorithm can’t parse, and the result is the same as not having the content at all. Architecture is the bridge between what exists and what the system understands.
Where architecture falls short as a complete model is that it’s entirely within what you control. It describes how to organize your own house. It doesn’t address who the neighborhood knows you as.
Row 3: Position is why two equally thorough sources produce different results
Position in one sentence: Be first to stake the claim, be recognized by others as the best at what you do, and do things that ensure you are the person everyone refers to when they talk about your topic.
Position is the competitive layer. It’s the only row that describes the entity rather than the content. That distinction makes it the dominant row, for the same structural reason links were the dominant signal in traditional SEO: external validation at the entity level breaks ties that content quality alone can’t.
Because you’re building entity reputation, the position row requires the greatest investment of resources and must be maintained over time. Because most brands are looking for quick, easy wins and are unwilling to commit to long-term investment in their position, this is where your competitive advantage lies and where you’ll see a real difference.
Two entities can have identical coverage and architecture, and yet one will be treated as the authority and the other won’t. The current definition of topical authority can’t explain why. Position is the huge missing piece.
Temporal position is about when you said it. The source that established a claim, coined a term, or described a mechanism before anyone else has a structurally different relationship to that topic than a source that repeated it later.
GÜBÜR’s formula already acknowledges this: “Historical data” in his equation is the accumulated proof of chronological priority. First-mover advantage in knowledge graphs is an architectural phenomenon we see over and over in our data.
Hierarchical position is about dominance: being recognized by others as the top voice on the topic. Primary sources, practitioners who work in the field, researchers who run studies, and experts who generate knowledge. This isn’t self-declared. Others assign it. When Matt Diggity describes GÜBÜR as “one of the most knowledgeable people” in semantic SEO, that’s a hierarchical position being conferred by a peer.
Narrative position is about centrality: being the person everyone refers to when they talk about the topic. The journalist credits you, the researcher cites you, and the conference features you as the reference voice.
All roads lead to Rome, and you’re Rome. The system reads these co-citation patterns and builds a picture of where you sit in the source landscape.
Narrative position can’t be manufactured with first-party content. It’s earned by doing things in the world that others find worth referencing.
Topical authority, N-E-E-A-T-T, and topical ownership
N-E-E-A-T-T — Google’s experience, expertise, authoritativeness, and trustworthiness (E-E-A-T) framework, extended with notability and transparency — describes the credibility signals that drive algorithmic confidence and are rightly a huge focus of the industry.
N-E-E-A-T-T describes inputs, not structure. Those signals don’t exist in a vacuum. They attach to an entity that the system has already understood.
I made this argument in a Semrush webinar with Lily Ray, Nik Ranger, and Andrea Volpini in 2020, when we were still talking about E-A-T: entity understanding is a prerequisite to leveraging credibility signals, not an optional layer on top.
The nine-cell matrix shows where each signal lands.
The coverage row provides the source material for AI to evaluate your knowledge on your claimed topic.
The architecture row is where your content gets classified and positioned relative to a topic.
The position row is where strong N-E-E-A-T-T signals translate into a competitive advantage because N-E-E-A-T-T is an entity framework: it measures the publisher and author, not the content. Position is the entity row.
Note on the diagram: It could be argued that the four gaps in the diagram are partially covered by inference.
Expertise implies the knowledge to build a topical map and the depth that produces original thought.
Experience implies the first-hand involvement that creates temporal priority.
Transparency implies the clear structural identity that shapes a semantic network.
Those arguments aren’t wrong. N-E-E-A-T-T evaluates the person primarily — what they built is an indirect signal.
N-E-E-A-T-T maps onto two of the three position dimensions.
Hierarchical position is, in structural terms, what authoritativeness and expertise measure — your level of knowledge and peer recognition of your standing on a topic.
Narrative position is what notability captures. The co-citation patterns that tell the system you’re the reference voice.
Temporal position sits outside N-E-E-A-T-T. No credibility signal changes just because you said something first.
Original thought sits outside it, too. The framework that’s supposed to reward quality has no mechanism for recognizing originality — at least not in the short term. It can reward reframing immediately, because both source truths are already verifiable.
True invention only registers retroactively, once corroboration has accumulated to the point where assertion becomes position.
That structural gap points to a practical problem. Most practitioners build N-E-E-A-T-T credibility as a general brand exercise: demonstrate expertise, earn trust, and accumulate signals. However, credibility without topical position is a credential without context. The fix is to audit all nine dimensions and focus your N-E-E-A-T-T credibility work on the weakest ones.
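To make that audit concrete, here is a minimal self-scoring sketch for the nine-cell matrix. The row names (coverage, architecture, position) and the architecture and position cell labels follow this series; the coverage cell labels and every score are placeholders you would replace with your own assessment.

```python
# Minimal self-audit sketch for the nine-cell topical-ownership matrix.
# All scores below are hypothetical placeholders (0-5 scale).
scores = {
    "coverage/breadth": 4,                 # illustrative cell label
    "coverage/depth": 3,                   # illustrative cell label
    "coverage/freshness": 5,               # illustrative cell label
    "architecture/source context": 4,
    "architecture/topical map": 2,
    "architecture/semantic network": 3,
    "position/temporal": 5,
    "position/hierarchical": 3,
    "position/narrative": 1,
}

def weakest_cells(scores: dict[str, int], n: int = 3) -> list[tuple[str, int]]:
    """Return the n lowest-scoring cells, i.e. where to focus credibility work."""
    return sorted(scores.items(), key=lambda kv: kv[1])[:n]

for cell, score in weakest_cells(scores):
    print(f"{cell}: {score}/5")
```

With these placeholder numbers, the audit surfaces narrative position and the topical map as the cells most in need of work, which is exactly the kind of targeting a general "build more E-E-A-T" effort misses.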
My own situation is a good example of the difficulties of original thought:
Temporal position is well-documented. Brand SERP in 2012, entity home in 2015, answer engine optimization in 2017, the algorithmic trinity and the untrained salesforce in 2024, and now assistive agent optimization in 2025. The chronological priority is established and verifiable.
Hierarchical position has partial coverage. I’m recognized within specific circles as the reference voice on brand SERPs and algorithmic brand optimization, but not yet broadly enough to call it dominance.
Narrative position is the biggest gap. Many people use the terms I coined, but few third-party sources cite me unprompted, and more articles on my own properties won’t change that. The fix I am implementing is doing things in the world that others find worth referencing: keynotes, independent collaborations, corroboration with partners, and articles like this one.
This is why crediting GÜBÜR for source context, topical map, and semantic network is intentional. Accurate attribution from a credible source builds the narrative position of the person being credited (GÜBÜR), and giving credit accurately signals to the system that my own claims are likely to be equally well-founded.
Crediting well is a position signal, and it's one most practitioners consistently underuse. My take is that citing the original source is the same as linking out. For years, people resisted linking out to protect the mysterious "link juice," but it's now accepted that linking out to provide supporting evidence is worth more than the PageRank cost. The same logic applies to citations: the value a citation brings you is greater than the loss.
This article is itself a demonstration.
GÜBÜR’s architecture framework is validated and extensively corroborated.
The AI engine pipeline argument runs across the previous eight articles in this series.
The nine-cell connection is new.
For the original thought in this article, I’m using the safer form of original thought: the reframe-cite-and-add technique. I invite you to do the same.
Recruitment (Gate 6) is where position determines the winner
Article 8 in this series covered annotation (Gate 5) — the gate where you’re alone with the machine, where the system classifies your content based on your signals alone, and with no competitor in the frame. Annotation is the last absolute gate. From recruitment onward, you’re always being compared with your competition.
So, recruitment (Gate 6) is where the game changes. Every source that reaches recruitment has cleared the infrastructure gates and survived annotation (hopefully in a healthy, competition-ready state). Now the system is selecting between candidates, and it’s selecting based on relative standing, not absolute quality.
This is the moment the entire matrix resolves into a single question: when the algorithm culls candidates at the recruitment gate, is your entity’s position strong enough to be one of the survivors in that selection?
In my three-by-three topical ownership grid, coverage gets you into the candidate pool, architecture makes the system confident it understands your content, and position determines whether it picks you ahead of the competition.
Coverage and architecture are content rows. They describe what you published. Position is the entity row. It describes who published it.
At recruitment, the system evaluates the content, and selection is heavily influenced by its assessment of the entity in the context of the topic. You can rewrite the content, but you can’t quickly rewrite who you are.
Darwin described natural selection as the mechanism by which organisms best adapted to their environment survive. An entity that occupies a strong position is an entity best adapted to the system’s selection criteria: temporal priority, hierarchical standing, and narrative centrality.
The system isn’t being arbitrary when it selects one well-structured, comprehensive source over another equally well-structured, equally comprehensive one. It’s selecting the entity best adapted to the query’s requirements, and best adapted means best positioned, not best written.
The signals behind each row have never been equally weighted, and entity is the clearest illustration of that. In traditional SEO, inbound links were the dominant signal. They could sometimes overcome weakness in every other area, and they were almost a guarantee of victory when everything else was roughly equal.
That dominance gradually diminished as links became one signal among many, table stakes rather than differentiator. Entity has followed the inverse trajectory. It began as a minor signal with the introduction of the knowledge graph and knowledge panels, and has grown steadily in structural importance ever since.
N-E-E-A-T-T attaches to an entity. Topical ownership attaches to an entity. Agential behavior requires a resolvable entity to function. Co-citation and co-occurrence patterns are only meaningful when the system has an entity to attach them to.
The AI engine pipeline stalls at the annotation stage (Gate 5) without a resolved entity. That gate is entity classification, and everything downstream depends on it. Brand SERPs, Knowledge panels, and AI résumés are entity constructs. Without a resolved entity, they don’t exist in a meaningful way.
The future will be more entity-dependent, not less, and the gap between brands that have invested in their entity and those that haven’t will compound. Entity is no longer simply a signal. It’s the substrate that other signals require to operate, and the most important single investment you can make in your long-term search and AI strategy.
To update a common saying: the best time to start was 10 years ago, the next best time is today, and the time it won’t be worth starting is tomorrow.
Topical ownership requires all nine cells, all three rows
Topical ownership is the state where an entity dominates all nine cells of the matrix for a given topic. Not just comprehensive, not just well-structured, but the entity others reference when they write about the subject — ideally the one that got there first, and the one peers defer to by name.
Coverage tells the system you’re eligible.
Architecture tells the system you’re legible.
Position tells the system you’re the right answer.
The industry has been actively optimizing for six of those nine cells.
Understandability work builds the entity. N-E-E-A-T-T builds credibility. But the position row — the one that determines who wins at recruitment — has been built largely without intent. Practitioners accumulate N-E-E-A-T-T signals as a general credibility exercise and assume that covers the entity layer.
Position requires deliberate engineering of temporal, hierarchical, and narrative standing on specific topics. Being intentional about all nine, knowing which row each piece of work serves and why, is where the competitive advantage lives now.
Simply becoming conscious of the grid and the three rows will make your topical ownership, SEO, and N-E-E-A-T-T work more purposeful across all nine cells, because you will implement each signal with specific intent rather than general ambition.
The brands AI consistently recommends aren’t just covering their topics well. They own them.
This is the ninth piece in my AI authority series.
Despite all the shiny new capabilities at our disposal, many professionals seem stuck in a cycle of “AI Groundhog Day.”
You open a chat window, carefully craft a prompt, paste in your context, and get a great result. An hour later, you do it all over again. If this is how you use AI to automate, you’re still doing manual work — you’re just doing it in a chat box.
To move from using AI to building with it, you need to shift from a human doer to a true human orchestrator. That means stopping one-off prompts and starting to build systems. In this new phase of AI automation, what you really need are AI skills.
I explore this shift in my new book, “The AI Amplified Marketer,” where I look at how the human element of marketing remains vital even as new AI tools and shifting expectations evolve at a breakneck pace.
Below, I’ll show how to use Skills, a newer AI capability, to make you more efficient when managing PPC.
What’s a Claude Skill?
While many marketers have used ChatGPT’s Custom Instructions to set a general approach for how their AI works, a Skill is a more rigorous definition of how the AI needs to do things. These instructions can help it deliver more predictable outcomes that fit your expectations.
For example, I recently used a standard chat to rate search terms. While the AI’s logic was sound, the output was inconsistent: one session returned letter grades, another gave a percentage out of 100, and a third used a 1-10 scale.
In a professional setting, this inconsistency is a problem. It makes it difficult to integrate that prompt into a larger workflow where unpredictable grading might confuse other tools or team members.
A Skill solves this by providing a reusable set of instructions. It defines which tools and logic to use for a complex task and ensures the results are formatted exactly the same way every time.
It’s what turns the AI from a temperamental assistant into a reliable professional teammate.
And thanks to more recent agentic capabilities in Claude, a Skill is like turning your best multi-step PPC playbook into something an AI can execute on demand by delegating the various tasks to the right tools and subagents.
Whether it’s your agency’s proprietary account audit checklist or your framework for mining search query reports, a Skill encodes that process. It turns your PPC expertise into a scalable system that anyone on your team can use with their AI.
How to build your first AI Skill
Creating a Skill is more straightforward than it might sound, and you can do it through a simple chat session with your AI. Provide Claude with an account audit checklist, a standard operating procedure (SOP) from your team, or a blueprint, then ask it to convert that process into the formal structure of a Skill.
Interestingly, when you ask Claude to help build a Skill, it uses a specialized Skill-building protocol. This ensures your final output is structured correctly, follows best practices, and remains consistent with Anthropic’s underlying architecture.
Technically, a Skill is saved as a Markdown (.md) file that contains the playbook for the task at hand.
This file can be stored locally on your computer if you’re concerned about data privacy. Alternatively, you can share it in a central cloud repository. This makes it easy for your team to update and deploy best practices across your entire organization.
You don’t have to start from zero. Many pre-built Skills are available on platforms like GitHub. You can find examples for various marketing tasks, download them, and adapt them to fit your specific needs and workflows.
How to use a Skill in PPC
To use a Skill, first make sure some are available in your account.
Then, just tell the AI the task you want to do.
The AI will look through connected Skills and, if it finds one that matches the task, it will use those instructions to perform the work.
Sidenote: It's important not to have competing Skills in your account. If two Skills both perform Google Ads audits, the AI may pick either one and do the work in different ways as a result, so you lose the predictability a Skill was supposed to give you in the first place.
A Skill provides powerful logic, but without access to live account data, it remains theoretical.
A Skill can define an analysis, such as “review search terms from the last 14 days with costs over $50 and zero conversions.” However, it doesn’t know how to pull that data from Google Ads on its own.
In the past, the workaround was to manually download static data, like a CSV from the Google Ads interface or a Google Ads Editor file. You would then feed this file to the AI as context. This works, but it’s slow, manual, and the data is outdated the moment you download it.
A more modern approach uses a Model Context Protocol (MCP) to connect your AI and its Skills to other systems, such as live data sources. For example, using the Optmyzr MCP, your Skill can dynamically pull the exact Google Ads data it needs, when it needs it. This connection turns a static set of instructions into a living, responsive tool. (Disclosure: I’m the cofounder and CEO of Optmyzr.)
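As a rough sketch of the analysis rule above ("search terms from the last 14 days with costs over $50 and zero conversions"), here is the filter a Skill would encode. The field names and sample rows are hypothetical, not the real Google Ads or Optmyzr schema:

```python
from dataclasses import dataclass

@dataclass
class SearchTerm:
    term: str
    cost: float        # spend in account currency over the lookback window
    conversions: int

def negative_candidates(rows: list[SearchTerm], min_cost: float = 50.0) -> list[str]:
    """Terms with spend over the threshold and zero conversions:
    the wasted-spend rule a Skill might define for negative keywords."""
    return [r.term for r in rows if r.cost > min_cost and r.conversions == 0]

# Illustrative data only.
rows = [
    SearchTerm("emergency plumber near me", 120.0, 3),
    SearchTerm("free plumbing advice", 85.5, 0),   # wasted spend, over threshold
    SearchTerm("diy pipe repair video", 12.0, 0),  # zero conversions, under threshold
]
print(negative_candidates(rows))  # → ['free plumbing advice']
```

With an MCP connection, the `rows` list would be fetched live instead of pasted in from a CSV, which is exactly the difference between the old workaround and the modern approach.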
Skills tell AI how to do things; tools and MCP let it do those things reliably
Combining a Skill with a tool like an MCP is where the real transformation happens. Your AI moves from being an assistant that requires constant direction to a system that can manage a process. It transitions from giving you ideas to executing your vision.
Let’s look at a common PPC task:
Task: Search Term Analysis to Eliminate Irrelevant Clicks
A Skill without tools is a task-oriented assistant: It might instruct you: “Paste in your search term report as a CSV, and I will identify potential negative keywords.” You’re still the one doing the grunt work of retrieving data and implementing the findings.
A Skill with tools acts as a junior manager for that specific process: It can be configured to: “Pull the search term report for the last 7 days via the MCP, identify terms with high spend and no conversions, and apply them as exact match negatives to the appropriate campaign.” The entire workflow is handled, and your role shifts to one of oversight.
When you combine structured logic (Skills) with live data and execution capabilities (tools), you’re building more than a chatbot; you’re building a reliable teammate. It’s a grounded, practical system that handles defined tasks, freeing you up to be the orchestrator of your strategy.
To move from theory to practice, let’s look at four concrete examples of PPC Skills. In each case, notice how connecting these Skills to live tools transforms the AI from a passive analyst into an active participant.
1. Search term mining
This Skill’s logic guides the AI to analyze a search query report to find wasted spend and opportunities.
Without tools: You provide a CSV. The Skill returns a structured list of recommended negative keywords and new keyword ideas. You have to implement them manually.
With tools (MCP): The Skill automatically pulls the latest search query report data, identifies the negative keywords, and uses a tool function to apply them directly to your Google Ads account.
2. Ad copy generation
This Skill takes a landing page URL and target keywords to generate ad copy variations based on value propositions and user intent.
Without tools: The Skill produces headlines and descriptions in a text format. You copy and paste them into Google Ads.
With tools (MCP): The Skill finds underperforming ad assets in your account, and then generates the ad copy and pushes the new ads directly into the correct ad groups, potentially even setting up a new ad experiment.
3. Account auditing
This Skill runs a predefined checklist against an account, looking for issues like missing ad extensions, campaigns limited by budget, or ad groups with low CTR.
Without tools: The Skill generates a report that lists all the problems it found. You then have to log in to the account and fix each one.
With tools (MCP): The Skill not only identifies that an ad group is missing a callout extension but can also apply a relevant, pre-approved extension from extensions used elsewhere in the account. It doesn’t just report the problem; it fixes it.
4. Budget reallocation
This Skill analyzes campaign performance data to find opportunities to shift budget from underperforming campaigns to those with higher potential returns.
Without tools: The Skill provides a recommendation, such as: “Decrease Campaign A’s budget by 20% and increase Campaign B’s budget by 15%.”
With tools (MCP): The Skill performs a dynamic analysis, pulling in exactly the right data with the appropriate lookback and time segmentation, and then executes the budget change directly, ensuring budgets are optimized as soon as the opportunity is identified.
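The recommendation logic behind this Skill can be sketched as a simple heuristic. The 20% shift, the ROAS threshold, and the even split among winners are all illustrative assumptions for this sketch, not how any real bid-management tool works:

```python
def reallocate(budgets: dict[str, float], roas: dict[str, float],
               shift: float = 0.20, threshold: float = 1.0) -> dict[str, float]:
    """Hypothetical heuristic: move `shift` of each underperformer's budget
    (ROAS below threshold) evenly to campaigns at or above the threshold."""
    winners = [c for c in budgets if roas[c] >= threshold]
    if not winners:
        return dict(budgets)  # nothing to shift budget toward
    new = dict(budgets)
    freed = 0.0
    for c in budgets:
        if roas[c] < threshold:
            cut = budgets[c] * shift
            new[c] -= cut
            freed += cut
    for c in winners:
        new[c] += freed / len(winners)
    return new

# Illustrative numbers: campaign A is underperforming, B is converting well.
budgets = {"A": 100.0, "B": 100.0}
roas = {"A": 0.6, "B": 2.4}
print(reallocate(budgets, roas))  # → {'A': 80.0, 'B': 120.0}
```

The point of encoding it as a Skill is that the thresholds and lookback become explicit, reviewable guardrails rather than whatever the model improvises in a given chat session.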
The future of your role: From PPC doer to PPC designer
The combination of Skills and tools enables you to move from playing with AI to having AI do meaningful work. For years, AI has been good at generating ideas but weak at executing them inside the ad platforms. This solves the “last mile problem” by giving AI the logic, data, and permissions to act.
This also signals a change in the role of the PPC professional. Your job will shift from doing the repetitive work to designing the systems that do the work. Instead of manually analyzing reports and making changes, you will spend more time designing Skills, defining the rules and guardrails for automation, and reviewing the outcomes.
We’re at a point where the large language models are capable, the tools for connecting them to platforms are available, and the interfaces make it possible for non-developers to build. It’s time to rethink your processes and get AI to be a real teammate.
The end of endless prompting
The cycle of endless prompting is a dead end. It keeps you in the role of a manual operator when you should be a systems designer. By embracing Claude Skills, you’re doing more than just working faster; you’re changing the very nature of your job. You’re moving from “doing PPC work” to “designing the PPC systems” that perform that work with predictability and at scale.
This is the ultimate expression of the AI-amplified marketer: building a true partner that codifies your expertise into a reliable, efficient engine.
The first step is to look at your daily tasks through the lens of a designer. What repetitive process is ready to be turned into your first Skill?
Google’s Ask Maps feature does more than help users find nearby businesses.
Based on hands-on testing of local service queries for plumbers, electricians, and HVAC companies, Ask Maps often narrows the field, interprets user intent, and frames businesses around qualities such as responsiveness, specialization, honesty, and repair-first thinking.
In more complex prompts, it sometimes provides guidance before recommending businesses. This shows Google Maps moving beyond simple local retrieval and toward a more recommendation-driven experience.
To evaluate that shift, we tested Ask Maps across five levels of local intent — starting with simple category searches and progressing toward conversational prompts involving uncertainty, trust, and decision-making.
A clear pattern emerged. As query nuance increased, Ask Maps shifted from listing businesses to interpreting which businesses fit and why.
This article draws from hands-on testing across a limited set of local service queries in one geographic area. Treat these findings as an early directional view, not a comprehensive representation across all markets or query types.
The testing framework
To evaluate progression, we built a five-level intent model based on how homeowners and local service customers actually search. Instead of organizing around traditional keyword categories, we structured the framework from simple retrieval toward conversational decision-making.
Level 1 focused on basic requests with minimal context.
Example: “Looking for an HVAC company near me.”
Level 2 introduced more service specificity.
Example: “I need an electrician to upgrade my panel in an older home.”
Level 3 moved into situational queries, where the user described a problem.
Example: “My furnace is making a loud banging noise and I’m not sure if it needs to be replaced or repaired.”
Level 4 introduced trust and decision concerns.
Example: “I think my furnace might need to be replaced, but I don’t want to get overcharged. Who is honest about that?”
Level 5 combined those elements into fully conversational prompts asking for guidance, validation, and recommendations in the same search.
Example: “I was told I need a full furnace replacement, but it feels expensive. How do I know if that’s actually necessary, and who should I call for a second opinion in my area?”
This framework allowed us to evaluate:
Which businesses appeared.
How Ask Maps interpreted prompts.
What attributes it emphasized.
When results started to resemble guided recommendations rather than search results.
Ask Maps narrows the field and adds interpretation
One of the clearest patterns across the testing was that Ask Maps consistently returned a relatively small set of businesses while increasing the amount of interpretation as the user’s search intent became more complex.
At Level 1, the average number of businesses shown was 3.6. Level 2 rose to 4.3. Level 3 dropped slightly to 3.3. Level 4 averaged 5.0, and Level 5 averaged 4.6. Across the full set, the range remained fairly tight, generally between three and eight businesses.
That’s a different experience from traditional Maps, where a user can scroll through a much broader set of options and do more of the evaluation work themselves.
Ask Maps narrows choices early and spends more effort explaining why those businesses fit the prompt, but stops short of being fully action-oriented. Even when a phone number is shown, there’s no clickable call button directly in the Ask Maps response.
To call or access the full set of contact options, the user still has to click into the business’s Google Business Profile. That matters because while Ask Maps is becoming more interpretive, the underlying GBP is still where action happens.
As prompts become more nuanced, uncertain, or trust-sensitive, Ask Maps draws on a broader range of sources. It shows fewer businesses, replacing breadth with interpretation.
Even the simplest queries don’t behave like a traditional Maps result.
At the baseline level, Ask Maps still relies heavily on Google Business Profile data, including:
Business descriptions.
Review content.
Ratings.
Hours.
In some cases, posts.
Website influence is minimal here, and there’s little evidence of outside sourcing. But even within that mostly closed ecosystem, it goes beyond listing nearby businesses.
Instead of just showing names, ratings, and locations, Ask Maps:
Generates narrative summaries based on information in the Google Business Profile.
Describes businesses in terms of responsiveness, experience, specialization, or the kinds of situations they seem well-suited for.
Draws on reviews when framing businesses.
Even at the most basic level, Ask Maps isn’t neutral. It’s beginning to interpret businesses for the user.
As queries become more specific, Ask Maps starts matching capability
Once the prompt shifts from a general service search to a specific type of job, Ask Maps becomes more selective in how it matches businesses to the request.
A query about an electrical panel upgrade doesn’t behave the same way as a query about urgent AC repair.
Replacement-oriented prompts emphasize installation and system expertise.
Repair-oriented prompts emphasize speed, availability, and responsiveness.
Queries tied to older homes or higher-risk work call for more evidence of specialization.
At this level, Google Business Profile and reviews still carry much of the weight, but websites matter more when the job is more complex or costly. A panel upgrade query produces stronger external link usage than a more straightforward AC repair prompt.
That doesn’t mean websites are always heavily used. It shows more selectivity. As decisions become more complex, Google looks for more supporting evidence before recommending businesses.
The more noticeable shift begins once the prompts move from service categories to real-world scenarios.
At Level 3, the user is no longer looking for a plumber, electrician, or HVAC company. Instead, they’re describing a problem, such as a loud banging furnace, outdated electrical in an older home, or an AC unit that has stopped working during extreme heat. In those cases, Ask Maps increasingly interprets the problem before introducing businesses.
Some responses provide guidance or context first. Others identify the provider and clarify the work before making recommendations. The businesses that follow aren’t framed as generic providers. They’re framed as possible solutions to the situation.
Review content becomes important here. Rather than simply supporting a business’s credibility, reviews act as evidence that the company has handled similar situations before. Fast arrival times, experience with older homes, communication during stressful repairs, and problem-solving ability all become more meaningful when describing businesses.
This is the point where Ask Maps moves more clearly from retrieval to interpretation.
Trust-oriented queries change what gets emphasized
When the prompts introduce fear, skepticism, or concern about making the wrong decision, Ask Maps changes again.
At Level 4, the focus is less on the service need itself and more on the emotional context around it. The user is worried about being overcharged, being pushed into unnecessary replacement, or hiring someone who would cut corners.
Ask Maps doesn’t just return businesses capable of doing the work. It organizes businesses around trust-related qualities such as honesty, transparency, careful workmanship, fairness, and second-opinion value.
This is one of the strongest patterns in the research. At this stage, review language is the primary signal shaping how businesses are framed. Specific phrases and anecdotes matter, elevating businesses that explain options clearly, don’t upsell, offer honest assessments, or deliver careful, professional work.
External sources become more relevant here. In addition to GBP information and reviews, Ask Maps shows more willingness to pull from company websites, testimonials, third-party platforms, and educational resources when the user’s concern involves decision risk rather than just service need.
Once the query becomes trust-driven, the recommendation no longer appears to be based only on who can do the job. It reflects who is most likely to handle the situation in a way that the user feels good about.
The strongest example of this progression came at Level 5. These are prompts where the user combines a problem, uncertainty, and a request for recommendations in a single query.
For example, someone might say they were told they needed a full furnace replacement but were unsure whether that was really necessary and wanted to know who to call for a second opinion. In these cases, Ask Maps moves most clearly into a decision-support role.
Instead of leading with local businesses, it often starts with an explanation, introducing frameworks, safety context, or ways to think about the decision.
Only after that does it recommend businesses, and those businesses are often grouped not just by rating or proximity, but by approach. Some are framed as repair-first options. Others are framed as second-opinion experts or safety-focused specialists.
This is where Ask Maps feels least like a directory and most like an advisor. The structure of the response looks more like a guided decision process than a traditional local search result.
That doesn’t mean the system is flawless or that every answer is equally strong. But it does suggest that when a prompt includes uncertainty and a need for validation, Ask Maps is trying to do more than match a category. It’s trying to help the user think through what to do next.
Across the testing, several source patterns appear repeatedly, and the mix appears to shift depending on the type of query.
At the foundation, Google Business Profile does much of the early work. Business categories, service descriptions, hours, ratings, and review counts help determine which businesses are eligible to appear and how they are initially framed. In some cases, Ask Maps also pulls from GBP services and products, business descriptions, and occasionally posts when those help reinforce what the business does.
Reviews seem to be one of the most important inputs across nearly every query type: not just the ratings themselves, but the way review language shapes the summary.
Ask Maps often draws on review themes tied to:
Responsiveness.
Honesty.
Professionalism.
Fast arrival times.
Work on older homes.
Repair-versus-replace situations.
Whether customers feel the company explains options clearly or avoids unnecessary upselling.
In other words, reviews support reputation and help define how a business is positioned in the response.
Business websites matter more once the query becomes more specific, higher-stakes, or more tied to decision-making. In those cases, Ask Maps seems more likely to pull in service pages, testimonial pages, or other on-site business information that helps reinforce specialization, repair-first positioning, second-opinion value, or experience with a particular type of job.
That’s more noticeable in queries tied to things like panel upgrades, replacement decisions, or older-home electrical concerns than in simpler “near me” searches.
External sources are the most selective layer, but they become more visible when the query involves safety, diagnosis, pricing uncertainty, or broader decision support.
In those cases, Ask Maps pulls in:
Educational content around issues like repair-versus-replace decisions, quote validation, and electrical safety.
Third-party review and directory platforms such as Angi, HomeAdvisor, YouTube, and Facebook.
Other publicly available business information, when it helps reinforce trust, workmanship, or reputation.
In some of the trust-oriented electrician queries in particular, this outside sourcing is more prominent than in simpler local lookups, suggesting Google may broaden its evidence base when evaluating how a business is likely to operate, not just what services it offers.
Ask Maps isn’t relying on a single source of truth. It appears to be constructing an answer from a mix of Google Business Profile data, review language, business website content, and selectively chosen outside sources, with the balance shifting based on what the user is actually asking.
What this may mean for local visibility
If Ask Maps continues to develop in this direction, it could have meaningful implications for local visibility in Google Maps.
Inclusion alone may matter less than interpretation. If Ask Maps is consistently showing a smaller set of businesses and adding more explanation around them, the question is no longer just whether a business appears. It’s also how that business is framed and whether Google has enough confidence to position it as a good fit for the situation.
Review content is becoming more important than many businesses realize. The language within reviews appears to influence not just credibility, but the actual way a business is described and recommended.
Website content plays a more targeted role than many local businesses assume. It may not be equally important for every prompt, but it matters more when the service is complex, expensive, or tied to greater uncertainty.
More broadly, Ask Maps points toward a version of local search in which retrieval, evaluation, and decision support occur much more closely together. Instead of searching, comparing, researching, and then deciding across several steps, the user may increasingly be guided through much of that process within a single AI-mediated Maps experience.
What businesses and SEOs should tighten up now
If Ask Maps continues moving in this direction, the practical response isn’t to chase a new tactic or treat it like a separate channel. It’s to make the business easier for Google to understand and easier for customers to trust.
Keep the Google Business Profile current and specific
A Google Business Profile may play a bigger role when Ask Maps is trying to decide what a business does, what kinds of jobs it handles, and whether it fits a more nuanced prompt.
Review primary and secondary categories to make sure they reflect the core work accurately.
Tighten the business description so it clearly explains the services offered, the types of jobs handled, and any specialties or areas of focus.
Make sure hours, service areas, and contact details are complete and current.
Add photos that reinforce the kinds of jobs the business wants to be associated with.
Treat posts and profile updates as another way to reinforce services and activity, not just as optional extras.
Use the Services and Products sections fully, adding clear descriptions that reflect the specific jobs, specialties, and situations the business wants to be known for.
Pay closer attention to review language
If Ask Maps uses review language to shape how businesses are positioned, then the wording in reviews may matter more than many businesses realize.
Look beyond review volume and average rating.
Pay attention to whether reviews naturally mention specific jobs, customer concerns, and outcomes.
Watch for language around responsiveness, honesty, professionalism, repair-first thinking, and clear communication.
Encourage reviews that reflect real experiences rather than generic praise.
Use review trends to understand how the business is likely being framed by Google.
Revisit website content for higher-consideration services
Website content appears more likely to matter when the query is more complex, more expensive, or tied to more uncertainty.
Strengthen service pages for the higher-value or higher-risk work the business wants to be known for.
Add FAQs that address real decision points, not just basic definitions.
Include examples of the kinds of jobs handled, especially where context matters.
Reinforce trust signals such as experience, process, reviews, and proof of work.
Use language that helps explain situations like repair versus replace, older-home work, or second-opinion scenarios.
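One concrete way to make those service pages easier for Google to interpret is schema.org structured data, such as LocalBusiness and FAQPage markup. This is our suggestion, not something the Ask Maps testing above measured directly, and the business name, URL, and FAQ text below are placeholders. Note that schema.org defines an Electrician type as a LocalBusiness subtype, which fits the example queries in this research.

```html
<!-- Hypothetical example: business details below are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Electrician",
  "name": "Example Electric Co.",
  "url": "https://www.example.com/",
  "areaServed": "Springfield",
  "description": "Repair-first electrical service specializing in older homes, panel upgrades, and second opinions on replacement quotes."
}
</script>
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Should I repair or replace my electrical panel?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "It depends on the panel's age, capacity, and condition. We inspect first and recommend repair when it is safe and cost-effective."
    }
  }]
}
</script>
```

Markup like this mirrors the decision-point language the article recommends (repair versus replace, older-home work, second opinions) in a machine-readable form, which may help when systems like Ask Maps pull on-site content for higher-consideration queries.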
Think beyond ranking for a phrase
There’s a broader strategic shift here for local SEO. The question may no longer be only whether a business can rank for a phrase. It may also be whether Google has enough evidence to recommend that business in response to a real-world question.
Evaluate whether the business is easy to understand across GBP, reviews, website content, and broader digital mentions.
Look at whether the business is clearly associated with the jobs and situations it wants to win.
Think about trust and decision support, not just service relevance.
Focus on making the business more legible to both Google and potential customers.
Treat local optimization less like keyword matching alone and more like building a clear, consistent business profile across sources.
The direction of Ask Maps is becoming clearer
The main question behind this research was when Ask Maps stops behaving like a directory and starts behaving more like a recommendation engine. Based on this testing, that shift starts earlier than many might expect.
Even at the most basic level, Ask Maps narrows, summarizes, and interprets. As prompts become more specific, situational, and trust-driven, its responses move further toward guided recommendations. At the highest level of complexity, it begins to look less like traditional local search and more like a system designed to help users make decisions.
That doesn’t mean Google Maps has fully changed into something else. But it does suggest the direction is becoming clearer. For local businesses and the people who support them, that makes this worth watching closely. Visibility inside Maps may increasingly depend not just on being present, but on being understood well enough for Google to explain why the business fits the user’s needs.
Dubado Solutions, 2026-04-14: Google Ask Maps is moving from listings to recommendations