Google gives local businesses two main ways to generate PPC leads online: Local Services Ads (LSAs) and Search campaigns.
LSAs are pay-per-lead campaigns – for actions such as calls, messages, or booked appointments – with a quick setup process that involves verifying your business. After that, Google automates most of the ad and keyword setup.
Search campaigns are more complex but offer far greater control over ad copy, keywords, and optimization.
Understanding how each format works – and when to use them – can help you get more qualified leads and make smarter use of your ad budget.
Most advertisers use both and shift budgets based on which delivers better long-term results.
Getting started with Google Local Services Ads
LSAs work for businesses of all sizes, not just those with small budgets.
For small business owners, LSAs offer an easy way to set up and run ads quickly.
This is one of the few ad formats where following Google’s setup instructions can actually work well.
That’s not the case for Google Search campaigns, which are far more complex and often waste spend when relying on Google’s automated suggestions.
Small businesses can prepay a few hundred dollars to test results.
While LSAs offer fewer options for control, customization, or optimization, they can work well for very small budgets.
They don’t require as much active management as Search campaigns – though they aren’t completely “set it and forget it” either.
Larger companies can also benefit from testing LSAs alongside other ad formats to compare results.
However, not all industries are eligible, so always confirm availability before allocating budget.
During setup, review all details carefully – including company information, service areas, and specific services – rather than assuming Google configured them correctly.
You have limited control over ad copy and keywords, since Google automatically determines relevant terms.
As Google’s documentation notes, “there is no need to do keyword research as relevant keywords are automatically determined by Google.”
This can work in your favor – or lead to irrelevant traffic – because you can’t define your own keywords.
Reviews are especially important in this format, as they appear prominently and heavily influence results. Collecting legitimate, high-quality reviews is critical for success.
To evaluate performance, connect third-party tools to track and qualify leads.
A basic CRM can help you measure how many leads convert into customers.
Platforms like HouseCall Pro and ServiceTitan can also integrate booking features, letting customers schedule appointments directly through your LSAs.
Getting started with Google Search campaigns
Google Search campaigns are more complex but offer a wider range of features for setup and optimization.
On top of setting business hours, target areas, and other details, Search campaigns give you greater control over ad testing, assets, keywords, match types, bidding strategies, and more.
Testing with just a few hundred dollars is not recommended. These campaigns require active monitoring and frequent optimization to perform well over time.
Unlike LSAs, you can add negative keywords and test a wide range of terms to identify which are most effective and profitable.
A/B testing ad copy and landing pages is also possible, giving Search campaigns much more scalability.
When starting, test a small budget using phrase and exact match keywords only, potentially with manual CPC bidding to cap your maximum bid per click.
This offers tight control for new accounts, though it’s typically a temporary setup before switching to automated bidding and broader match types.
From there, you can move to broad match keywords with a Maximize Conversions bid strategy, then add a target CPA (tCPA) once performance data builds.
In industries with high CPCs, set up portfolio bidding to include both a tCPA and a maximum CPC bid.
Microsoft Ads includes this option natively in its tCPA setting, so portfolio bidding isn’t required there.
After running a Search campaign for two to three months, begin expanding and refining based on performance.
Add new campaigns and ad groups to test additional keyword and ad combinations, aligning each with specific landing pages to maximize lead generation – something not possible with LSAs.
Combining LSAs and Search campaigns for stronger results
As with any advertising channel, it’s essential to regularly evaluate lead quality using a CRM and call tracking tools, such as CallTrackingMetrics or CallRail.
When running both LSAs and Search ads, compare leads from each to assess performance.
LSAs often face lead quality issues, despite being pay-per-lead campaigns.
Google continues improving spam filtering and invalid lead detection for LSAs, but the system still isn’t perfect. Invalid leads can be disputed.
Ad positioning also differs between the two formats. LSAs typically appear at the top of the page, though fewer of them are shown compared to Search ads.
Showing in multiple placements isn’t a problem, but you should continually evaluate cost per lead, lead quality, and lead-to-customer conversion rates for both formats.
For larger budgets, several other Google Ads campaign types are worth testing. These can support lead generation directly or help build local brand awareness.
Display, Video, and Demand Gen campaigns can generate leads on their own or build brand awareness for top-of-funnel audiences.
They work well for higher-priced products or services with longer sales cycles, and for lower-priced services that rely on staying top-of-mind – such as plumbing or AC repair.
Performance Max campaigns can also deliver strong lead volume.
However, because they extend beyond Search, it’s essential to monitor lead quality through your CRM and compare it against Search and LSA performance.
With Google Analytics and Google Ads tracking multiple touchpoints before a conversion, you may see fractional conversions – for example, 0.5 for a Video campaign and 0.5 for a Search campaign – indicating that both contributed to a single lead.
While not a perfect system, this data provides useful context for how different campaigns interact across the customer journey.
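As a minimal sketch of how those fractional credits aggregate (campaign names and credit values here are hypothetical, not Google's actual attribution output):

```python
from collections import defaultdict

def total_conversions(touchpoint_credits):
    """Sum fractional conversion credit per campaign.

    touchpoint_credits: list of (campaign, credit) pairs, where the
    credits for any single lead sum to 1.0 across campaigns.
    """
    totals = defaultdict(float)
    for campaign, credit in touchpoint_credits:
        totals[campaign] += credit
    return dict(totals)

# One lead split evenly between a Video and a Search touchpoint,
# plus a second lead attributed entirely to Search.
credits = [("Video", 0.5), ("Search", 0.5), ("Search", 1.0)]
print(total_conversions(credits))  # {'Video': 0.5, 'Search': 1.5}
```

Summed this way, 1.5 Search conversions out of 2.0 total tells you Search touched most leads, even though no single lead belonged to it exclusively.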
Test and compare
Both small and large businesses can benefit from testing LSAs, and all should consider running them alongside Search campaigns to compare results.
There’s no one-size-fits-all approach – both formats can be profitable when properly tracked and optimized.
Google has expanded the What’s happening feature within Google Business Profiles to restaurants and bars in the United Kingdom, Canada, Australia, and New Zealand. It is now available for multi-location restaurants, not just single-location restaurants.
The What’s happening feature launched back in May as a way for some businesses to highlight events, deals, and specials prominently at the top of your Google Business Profile. Now, Google is bringing it to more countries.
What Google said. Google’s Lisa Landsman wrote on LinkedIn:
How do you promote your “Taco Tuesday” in Toledo and your “Happy Hour” in Houston… right when locals are searching for a place to go?
I’m excited to share that the Google Business Profile feature highlighting what’s happening at your business, such as timely events, specials and deals, has now rolled out for multi-location restaurants & bars across the US, UK, CA, AU & NZ! (It was previously only available for single-location restaurants)
This is a great option for driving real-time foot traffic. It automatically surfaces the unique specials, live music, or events you’re already promoting at a specific location, catching customers at the exact moment they’re deciding where to eat or grab a cocktail.
What it looks like. Here is a screenshot of this feature:
More details. Google’s Lisa Landsman added, “We’ve already seen excellent results from testing and look forward to hearing how this works for you!”
Availability. This feature is only available for restaurants & bars. Google said it hopes to expand to more categories soon. It is also only available in the United States, United Kingdom, Canada, Australia, and New Zealand.
The initial launch covered single-location Food and Drink businesses in the U.S., UK, Australia, Canada, and New Zealand; this expansion adds multi-location support.
Why we care. If you manage restaurants and/or bars, this may be a new way to get more attention and visitors to your business from Google Search. Now, if you manage multi-location restaurants or bars, you can leverage this feature.
Marketing, technology, and business leaders today are asking an important question: how do you optimize for large language models (LLMs) like ChatGPT, Gemini, and Claude?
LLM optimization is taking shape as a new discipline focused on how brands surface in AI-generated results and what can be measured today.
For decision makers, the challenge is separating signal from noise – identifying the technologies worth tracking and the efforts that lead to tangible outcomes.
The discussion comes down to two core areas – and the timeline and work required to act on them:
Tracking and monitoring your brand’s presence in LLMs.
Improving visibility and performance within them.
Tracking: The foundation of LLM optimization
Just as SEO evolved through better tracking and measurement, LLM optimization will only mature once visibility becomes measurable.
We’re still in a pre-Semrush/Moz/Ahrefs era for LLMs.
Tracking is the foundation of identifying what truly works and building strategies that drive brand growth.
Without it, everyone is shooting in the dark, hoping great content alone will deliver results.
The core challenges are threefold:
LLMs don’t publish query frequency or “search volume” equivalents.
Their responses vary subtly (or not so subtly) even for identical queries, due to probabilistic decoding and prompt context.
They depend on hidden contextual features (user history, session state, embeddings) that are opaque to external observers.
Why LLM queries are different
Traditional search behavior is repetitive – millions of identical phrases drive stable volume metrics. LLM interactions are conversational and variable.
People rephrase questions in different ways, often within a single session. That makes pattern recognition harder with small datasets but feasible at scale.
These structural differences explain why LLM visibility demands a different measurement model.
This variability requires a different tracking approach than traditional SEO or marketing analytics.
The leading method uses a polling-based model inspired by election forecasting.
The polling-based model for measuring visibility
A representative sample of 250–500 high-intent queries is defined for your brand or category, functioning as your population proxy.
These queries are run daily or weekly to capture repeated samples from the underlying distribution of LLM responses.
Tracking tools record when your brand and competitors appear as citations (linked sources) or mentions (text references), enabling share of voice calculations across all competitors.
Over time, aggregate sampling produces statistically stable estimates of your brand visibility within LLM-generated content.
Early tools providing this capability include:
Profound.
Conductor.
OpenForge.
Consistent sampling at scale transforms apparent randomness into interpretable signals – much like political polls deliver reliable forecasts despite individual variation.
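The share-of-voice calculation behind this polling model reduces to a simple proportion over sampled responses. A minimal sketch (queries, brands, and panel size are made up for illustration):

```python
def share_of_voice(samples, brand):
    """Fraction of sampled LLM responses that mention or cite the brand.

    samples: one dict per (query, run) pair in the polling panel,
    each recording which brands appeared in that response.
    """
    if not samples:
        return 0.0
    hits = sum(1 for s in samples if brand in s["brands_seen"])
    return hits / len(samples)

# Hypothetical panel: four sampled responses across two queries.
panel = [
    {"query": "best crm for plumbers", "brands_seen": {"AcmeCRM", "RivalCRM"}},
    {"query": "best crm for plumbers", "brands_seen": {"RivalCRM"}},
    {"query": "crm with call tracking", "brands_seen": {"AcmeCRM"}},
    {"query": "crm with call tracking", "brands_seen": {"AcmeCRM", "RivalCRM"}},
]
print(share_of_voice(panel, "AcmeCRM"))  # 0.75
```

A real panel of 250-500 queries sampled daily smooths out the run-to-run variability that makes any single response unreliable.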
Building a multi-faceted tracking framework
While share of voice paints a picture of your presence in the LLM landscape, it doesn’t tell the complete story.
Just as keyword rankings show visibility but not clicks, LLM presence doesn’t automatically translate to user engagement.
Brands need to understand how people interact with their content to build a compelling business case.
Because no single tool captures the entire picture, the best current approach layers multiple tracking signals:
Share of voice (SOV) tracking: Measure how often your brand appears as mentions and citations across a consistent set of high-value queries. This provides a benchmark to track over time and compare against competitors.
Referral tracking in GA4: Set up custom dimensions to identify traffic originating from LLMs. While attribution remains limited today, this data helps detect when direct referrals are increasing and signals growing LLM influence.
Branded homepage traffic in Google Search Console: Many users discover brands through LLM responses, then search directly in Google to validate or learn more. This two-step discovery pattern is critical to monitor. When branded homepage traffic increases alongside rising LLM presence, it signals a strong causal connection between LLM visibility and user behavior. This metric captures the downstream impact of your LLM optimization efforts.
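The referral-tracking signal above can start as a simple hostname check on session referrers before any custom dimensions exist. The hostnames below are assumptions to verify against your own analytics data, not an official list:

```python
from urllib.parse import urlparse

# Hypothetical LLM referrer hostnames - confirm against your GA4 reports.
LLM_REFERRERS = {"chatgpt.com", "chat.openai.com", "perplexity.ai",
                 "gemini.google.com", "copilot.microsoft.com"}

def is_llm_referral(referrer_url):
    """Rough check for whether a session referrer looks like an LLM surface."""
    host = (urlparse(referrer_url).hostname or "").lower()
    if host.startswith("www."):
        host = host[4:]
    return host in LLM_REFERRERS

print(is_llm_referral("https://chatgpt.com/"))            # True
print(is_llm_referral("https://www.google.com/search"))   # False
```

Tagging sessions this way gives you a directional LLM-referral count to watch over time, even before attribution matures.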
Nobody has complete visibility into LLM impact on their business today, but these methods cover all the bases you can currently measure.
Be wary of any vendor or consultant promising complete visibility. That simply isn’t possible yet.
Understanding these limitations is just as important as implementing the tracking itself.
Because no perfect models exist yet, treat current tracking data as directional – useful for decisions, but not definitive.
Measuring LLM impact is one thing. Identifying which queries and topics matter most is another.
Compared to SEO or PPC, marketers have far less visibility. While no direct search volume exists, new tools and methods are beginning to close the gap.
The key shift is moving from tracking individual queries – which vary widely – to analyzing broader themes and topics.
The real question becomes: which areas is your site missing, and where should your content strategy focus?
To approximate relative volume, consider three approaches:
Correlate with SEO search volume
Start with your top-performing SEO keywords.
If a keyword drives organic traffic and has commercial intent, similar questions are likely being asked within LLMs. Use this as your baseline.
Layer in industry adoption of AI
Estimate what percentage of your target audience uses LLMs for research or purchasing decisions:
High AI-adoption industries: Assume 20-25% of users leverage LLMs for decision-making.
Slower-moving industries: Start with 5-10%.
Apply these percentages to your existing SEO keyword volume. For example, a keyword with 25,000 monthly searches could translate to 1,250-6,250 LLM-based queries in your category.
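That arithmetic is easy to wrap in a helper so you can rerun it per keyword. The adoption percentages are the assumptions from above, not measured values:

```python
def llm_query_estimate(monthly_searches, adoption_low, adoption_high):
    """Translate SEO search volume into a rough LLM query range,
    given assumed low/high LLM adoption rates for the audience."""
    return (monthly_searches * adoption_low,
            monthly_searches * adoption_high)

# 25,000 monthly searches at an assumed 5-25% LLM adoption.
low, high = llm_query_estimate(25_000, 0.05, 0.25)
print(low, high)  # 1250.0 6250.0
```

Treat the output as an order-of-magnitude estimate for prioritization, not a forecast.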
Use emerging inferential tools
New platforms are beginning to track query data through API-level monitoring and machine learning models.
Accuracy isn’t perfect yet, but these tools are improving quickly. Expect major advancements in inferential LLM query modeling within the next year or two.
Improving visibility and performance in LLMs
The technologies that help companies identify what to improve are evolving quickly.
While still imperfect, they’re beginning to form a framework that parallels early SEO development, where better tracking and data gradually turned intuition into science.
Optimization breaks down into two main questions:
What content should you create or update, and should you focus on quality content, entities, schema, FAQs, or something else?
How should you align these insights with broader brand and SEO strategies?
Identify what content to create or update
One of the most effective ways to assess your current position is to take a representative sample of high-intent queries that people might ask an LLM and see how your brand shows up relative to competitors. This is where the Share of Voice tracking tools we discussed earlier become invaluable.
These same tools can help answer your optimization questions:
Track who is being cited or mentioned for each query, revealing competitive positioning.
Identify which queries your competitors appear for that you don’t, highlighting content gaps.
Show which of your own queries you appear for and which specific assets are being cited, pinpointing what’s working.
From this data, several key insights emerge:
Thematic visibility gaps: By analyzing trends across many queries, you can identify where your brand underperforms in LLM responses. This paints a clear picture of areas needing attention. For example, you’re strong in SEO but not in PPC content.
Third-party resource mapping: These tools also reveal which external resources LLMs reference most frequently. This helps you build a list of high-value third-party sites that contribute to visibility, guiding outreach or brand mention strategies.
Blind spot identification: When cross-referenced with SEO performance, these insights highlight blind spots – topics or sources where your brand’s credibility and representation could improve.
Understand the overlap between SEO and LLM optimization
LLMs may be reshaping discovery, but SEO remains the foundation of digital visibility.
Across five competitive categories, brands ranking on Google’s first page appeared in ChatGPT answers 62% of the time – a clear but incomplete overlap between search and AI results.
That correlation isn’t accidental.
Many retrieval-augmented generation (RAG) systems pull data from search results and expand it with additional context.
The more often your content appears in those results, the more likely it is to be cited by LLMs.
Brands with the strongest share of voice in LLM responses are typically those that invested in SEO first.
Strong technical health, structured data, and authority signals remain the bedrock for AI visibility.
What this means for marketers:
Don’t over-focus on LLMs at the expense of SEO. AI systems still rely on clean, crawlable content and strong E-E-A-T signals.
Keep growing organic visibility through high-authority backlinks and consistent, high-quality content.
Use LLM tracking as a complementary lens to understand new research behaviors, not a replacement for SEO fundamentals.
Redefine on-page and off-page strategies for LLMs
Just as SEO has both on-page and off-page elements, LLM optimization follows the same logic – but with different tactics and priorities.
Off-page: The new link building
Most industries show a consistent pattern in the types of resources LLMs cite:
Wikipedia is a frequent reference point, making a verified presence there valuable.
Reddit often appears as a trusted source of user discussion.
Review websites and “best-of” guides are commonly used to inform LLM outputs.
Citation patterns across ChatGPT, Gemini, Perplexity, and Google’s AI Overviews show consistent trends, though each engine favors different sources.
This means that traditional link acquisition strategies – guest posts, PR placements, or brand mentions in review content – will likely evolve.
Instead of chasing links anywhere, brands should increasingly target:
Pages already being cited by LLMs in their category.
Reviews or guides that evaluate their product category.
Articles where branded mentions reinforce entity associations.
The core principle holds: brands gain the most visibility by appearing in sources LLMs already trust – and identifying those sources requires consistent tracking.
On-page: What your own content reveals
The same technologies that analyze third-party mentions can also reveal which first-party assets – content on your own website – are being cited by LLMs.
This provides valuable insight into what type of content performs well in your space.
For example, these tools can identify:
What types of competitor content are being cited (case studies, FAQs, research articles, etc.).
Where your competitors show up but you don’t.
Which of your own pages exist but are not being cited.
From there, three key opportunities emerge:
Missing content: Competitors are cited because they cover topics you haven’t addressed. This represents a content gap to fill.
Underperforming content: You have relevant content, but it isn’t being referenced. Optimization – improving structure, clarity, or authority – may be needed.
Content enhancement opportunities: Some pages only require inserting specific Q&A sections or adding better-formatted information rather than full rewrites.
Leverage emerging technologies to turn insights into action
The next major evolution in LLM optimization will likely come from tools that connect insight to action.
Early solutions already use vector embeddings of your website content to compare it against LLM queries and responses. This allows you to:
Detect where your coverage is weak.
See how well your content semantically aligns with real LLM answers.
Identify where small adjustments could yield large visibility gains.
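The embedding comparison these tools perform can be sketched in a few lines. The page paths, three-dimensional vectors, and query embedding below are toy assumptions (real embeddings have hundreds of dimensions and come from an embedding model):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (math.sqrt(sum(x * x for x in a)) *
            math.sqrt(sum(y * y for y in b)))
    return dot / norm if norm else 0.0

def weakest_coverage(page_embeddings, query_embedding, k=2):
    """Return the k pages least semantically aligned with an LLM query."""
    scored = sorted(page_embeddings.items(),
                    key=lambda kv: cosine_similarity(kv[1], query_embedding))
    return [page for page, _ in scored[:k]]

# Hypothetical site pages embedded in a toy 3-d space.
pages = {
    "/pricing": [0.9, 0.1, 0.0],
    "/guide":   [0.1, 0.9, 0.2],
    "/about":   [0.0, 0.0, 1.0],
}
query = [0.1, 0.95, 0.1]  # e.g., an informational "how do I...?" query
print(weakest_coverage(pages, query))  # ['/about', '/pricing']
```

Pages that score lowest against high-intent query embeddings are the first candidates for the content gaps and enhancements described above.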
Current tools mostly generate outlines or recommendations.
The next frontier is automation – systems that turn data into actionable content aligned with business goals.
Timeline and expected results
While comprehensive LLM visibility typically builds over 6-12 months, early results can emerge faster than traditional SEO.
The advantage: LLMs can incorporate new content within days rather than waiting months for Google’s crawl and ranking cycles.
However, the fundamentals remain unchanged.
Quality content creation, securing third-party mentions, and building authority still require sustained effort and resources.
Think of LLM optimization as having a faster feedback loop than SEO, but requiring the same strategic commitment to content excellence and relationship building that has always driven digital visibility.
From SEO foundations to LLM visibility
LLM traffic remains small compared to traditional search, but it’s growing fast.
A major shift in resources would be premature, but ignoring LLMs would be shortsighted.
The smartest path is balance: maintain focus on SEO while layering in LLM strategies that address new ranking mechanisms.
Like early SEO, LLM optimization is still imperfect and experimental – but full of opportunity.
Brands that begin tracking citations, analyzing third-party mentions, and aligning SEO with LLM visibility now will gain a measurable advantage as these systems mature.
In short:
Identify the third-party sources most often cited in your niche and analyze patterns across AI engines.
Map competitor visibility for key LLM queries using tracking tools.
Audit which of your own pages are cited (or not) – high Google rankings don’t guarantee LLM inclusion.
Continue strong SEO practices while expanding into LLM tracking – the two work best as complementary layers.
Approach LLM optimization as both research and brand-building.
Don’t abandon proven SEO fundamentals. Rather, extend them to how AI systems discover, interpret, and cite information.
AI tools can help teams move faster than ever – but speed alone isn’t a strategy.
As more marketers rely on LLMs to help create and optimize content, credibility becomes the true differentiator.
And as AI systems decide which information to trust, quality signals like accuracy, expertise, and authority matter more than ever.
It’s not just what you write but how you structure it. AI-driven search rewards clear answers, strong organization, and content it can easily interpret.
This article highlights key strategies for smarter AI workflows – from governance and training to editorial oversight – so your content remains accurate, authoritative, and unmistakably human.
Your organization will benefit from clear boundaries and expectations. Creating policies for AI use ensures consistency and accountability.
Only 7% of companies using genAI in marketing have a full-blown governance framework, according to SAS.
However, 63% invest in creating policies that govern how generative AI is used across the organization.
Source: “Marketers and GenAI: Diving Into the Shallow End,” SAS
Even a simple, one-page policy can prevent major mistakes and unify efforts across teams that may be doing things differently.
As Cathy McPhillips, chief growth officer at the Marketing Artificial Intelligence Institute, puts it:
“If one team uses ChatGPT while others work with Jasper or Writer, for instance, governance decisions can become very fragmented and challenging to manage. You’d need to keep track of who’s using which tools, what data they’re inputting, and what guidance they’ll need to follow to protect your brand’s intellectual property.”
Drafting an internal policy sets expectations for AI use across the organization (or at least within the creative teams).
When creating a policy, consider the following guidelines:
What the review process for AI-created content looks like.
When and how to disclose AI involvement in content creation.
How to protect proprietary information (not uploading confidential or client information into AI tools).
Which AI tools are approved for use, and how to request access to new ones.
How to log or report problems.
Logically, the policy will evolve as the technology and regulations change.
Keep content anchored in people-first principles
It can be easy to fall into the trap of believing AI-generated content is good because it reads well.
LLMs are great at predicting the next best sentence and making it sound convincing.
But reviewing each sentence, paragraph, and the overall structure with a critical eye is absolutely necessary.
Think: Would an expert say it like that? Would you normally write like that? Does it offer the depth of human experience that it should?
“People-first content,” as Google puts it, is really just thinking about the end user and whether what you are putting into the world is adding value.
Any LLM can create mediocre content, and any marketer can publish it. And that’s the problem.
People-first content aligns with Google’s E-E-A-T framework, which outlines the characteristics of high-quality, trustworthy content.
E-E-A-T isn’t a novel idea, but it’s increasingly relevant in a world where AI systems need to determine if your content is good enough to be included in search.
Evidence presented in U.S. v. Google LLC confirms that quality remains central to ranking:
“RankEmbed and its later iteration RankEmbedBERT are ranking models that rely on two main sources of data: [redacted]% of 70 days of search logs plus scores generated by human raters and used by Google to measure the quality of organic search results.”
Source: U.S. v. Google LLC court documentation
It suggests that the same quality factors reflected in E-E-A-T likely influence how AI systems assess which pages are trustworthy enough to ground their answers.
So what does E-E-A-T look like practically when working with AI content? You can:
Review Google’s list of questions related to quality content: Keep these in mind before and after content creation.
Demonstrate firsthand experience through personal insights, examples, and practical guidance: Weave these insights into AI output to add a human touch.
Use reliable sources and data to substantiate claims: If you’re using LLMs for research, fact-check in real time to ensure the best sources.
Insert authoritative quotes either from internal stakeholders or external subject matter experts: Quoting internal folks builds brand credibility while external sources lend authority to the piece.
Create detailed author bios: Include:
Relevant qualifications, certifications, awards, and experience.
Links to social media, academic papers (if relevant), or other authoritative works.
Add schema markup to articles to clarify the content further: Schema can clarify content in a way that AI-powered search can better understand.
Become the go-to resource on the topic: Create a depth and breadth of material on the website that’s organized in a search-friendly, user-friendly manner. You can learn more in my article on organizing content for AI search.
Source: “Creating helpful, reliable, people-first content,” Google Search Central
Create a style guide
A documented style guide keeps AI output consistent with your brand. It might include:
The do’s and don’ts of phrases and language to use.
Formatting rules such as SEO-friendly headers, sentence length, paragraph length, bulleted list guidelines, etc.
You can refresh this as needed and use it to further train the model over time.
Build a prompt kit
Put together a packet of instructions that prompts the LLM. Here are some ideas to start with:
The style guide
This covers everything from the audience personas to the voice style and formatting.
If you’re training a custom GPT, you don’t need to do this every time, but it may need tweaking over time.
A content brief template
This can be an editable document that’s filled in for each content project and includes things like:
The goal of the content.
The specific audience.
The style of the content (news, listicle, feature article, how-to).
The role (who the LLM is writing as).
The desired action or outcome.
Content examples
Upload a handful of the best content examples you have to train the LLM. This can be past articles, marketing materials, transcripts from videos, and more.
If you create a custom GPT, you’ll do this at the outset, but additional examples of content may be uploaded, depending on the topic.
Sources
Train the model on the preferred third-party sources of information you want it to pull from, in addition to its own research.
For example, if you want it to source certain publications in your industry, compile a list and upload it to the prompt.
As an additional layer, prompt the model to automatically include any third-party sources after every paragraph to make fact-checking easier on the fly.
SEO prompts
Consider building SEO into the structure of the content from the outset.
Early observations of Google’s AI Mode suggest that clearly structured, well-sourced content is more likely to be referenced in AI-generated results.
With that in mind, you can put together a prompt checklist that includes:
Crafting a direct answer in the first one to two sentences, then expanding with context.
Covering the main question, but also potential subquestions (“fan-out” queries) that the system may generate (for example, questions related to comparisons, pros/cons, alternatives, etc.).
Chunking content into many subsections, with each subsection answering a potential fan-out query to completion.
Being an expert source of information in each individual section of the page, meaning it’s a passage that can stand on its own.
Providing clear citations and semantic richness (synonyms, related entities) throughout.
A custom GPT is a personalized version of ChatGPT that’s trained on your materials so it can better create in your brand voice and follow brand rules.
It mostly remembers tone and format, but that doesn’t guarantee the accuracy of output beyond what’s uploaded.
Some companies are exploring RAG (retrieval-augmented generation) to further train LLMs on the company’s own knowledge base.
RAG connects an LLM to a private knowledge base, retrieving relevant documents at query time so the model can ground its responses in approved information.
While custom GPTs are easy, no-code setups, RAG implementation is more technical – though vendors and frameworks exist that make it easier to adopt.
That’s why GPTs tend to work best for small or medium-scale projects or for non-technical teams focused on maintaining brand consistency.
Create a custom GPT in ChatGPT
RAG, on the other hand, is an option for enterprise-level content generation in industries where accuracy is critical and information changes frequently.
Run an automated self-review
Create parameters so the model can self-assess the content before further editorial review. You can build a checklist of prompts for this step.
For example:
“Is the advice helpful, original, people-first?” (Perhaps using Google’s list of questions from its helpful content guidance.)
“Are the tone and voice completely aligned with the style guide?”
Have an established editing process
Even the best AI workflow still depends on trained editors and fact-checkers. This human layer of quality assurance protects accuracy, tone, and credibility.
Writers and editors will need to keep upskilling in the coming year – according to Microsoft’s 2025 annual Work Trend Index, AI skilling is the top priority.
Source: 2025 Microsoft Work Trend Index Annual Report
Professional training creates baseline knowledge, so your team gets up to speed faster and can handle outputs confidently and consistently.
This includes training on how to effectively use LLMs and how to best create and edit AI content.
In addition, training content teams on SEO helps them build best practices into prompts and drafts.
Editorial procedures
Ground your AI-assisted content creation in editorial best practices to ensure the highest quality.
This might include:
Identifying the parts of the content creation workflow that are best suited for LLM assistance.
Conducting an editorial meeting to sign off on topics and outlines.
Drafting the content.
Performing the structural edit for clarity and flow, then copyediting for grammar and punctuation.
Getting sign-off from stakeholders.
AI editorial process
The AI editing checklist
Build a checklist to use during the review process for quality assurance. Here are some ideas to get you started:
Every claim, statistic, quote, or date is accompanied by a citation for fact-checking accuracy.
All facts are traceable to credible, approved sources.
Outdated statistics (more than two years old) are replaced with fresh insights.
Draft meets the style guide’s voice guidelines and tone definitions.
Content adds valuable, expert insights rather than being vague or generic.
For thought leadership, ensure the author’s perspective is woven throughout.
Draft is run through an AI detector, aiming for a conservative score of 5% or less flagged as AI-generated.
Draft aligns with brand values and meets internal publication standards.
Final draft includes explicit disclosure of AI involvement when required (client-facing/regulatory).
Grounding AI content in trust and intent
AI is transforming how we create, but it doesn’t change why we create.
Every policy, workflow, and prompt should ultimately support one mission: to deliver accurate, helpful, and human-centered content that strengthens your brand’s authority and improves your visibility in search.
[End of article: “How to balance speed and credibility in AI-assisted content creation,” published 2025-10-28]
Structured data helps search engines, Large Language Models (LLMs), AI assistants, and other tools understand your website. Using Schema.org and JSON-LD, you make your content clearer and easier to use across platforms. This guide explains what structured data is, why it matters today, and how you can set it up the right way.
Structured data helps search engines and AI better understand your website, enhancing visibility and eligibility for rich results.
Using Schema.org and JSON-LD improves content clarity and connects different pieces of information into a graph.
Implementing structured data today prepares your content for future technologies and AI applications.
Yoast SEO simplifies structured data implementation by automatically generating schema for various content types.
Focus on key elements like business details and products to maximize the impact of your structured data.
What is structured data?
Structured data is a way to tell computers exactly what’s on your web page. Using a standard set of tags from Schema.org, you can identify important details, like whether a page is about a product, a review, an article, an event, or something else.
This structured format helps search engines, AI assistants, LLMs, and other tools understand your content quickly and accurately. As a result, your site may qualify for special features in search results and can be recognized more easily by digital assistants or new AI applications.
Structured data is written in code, with JSON-LD being the most common format. Adding it to your pages gives your content a better chance to be found and understood, both now and as new technologies develop.
Below is a simple example of structured data using Schema.org in JSON-LD format. This is a basic schema for a product with review properties. This code tells search engines that the page is a product (Product). It provides the name and description of the product, pricing information, the URL, plus product ratings and reviews. This allows search engines to understand your products and present your content in search results.
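A minimal sketch of such Product markup, matching that description (all names, prices, and ratings here are placeholder values), might look like this:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Coffee Grinder",
  "description": "A compact burr grinder for home brewing.",
  "url": "https://www.example.com/products/coffee-grinder",
  "offers": {
    "@type": "Offer",
    "price": "49.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "87"
  },
  "review": {
    "@type": "Review",
    "reviewRating": { "@type": "Rating", "ratingValue": "5" },
    "author": { "@type": "Person", "name": "Jane Doe" }
  }
}
```

Each property maps directly to something a search engine can display: the offer feeds price and availability, while the rating and review objects power star snippets.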
Structured data gives computers a clear map of what’s on your website. It spells out details about your products, reviews, events, and much more in a format that’s easy for search engines and other systems to process.
This clarity leads to better visibility in search, including features like star ratings, images, or additional links. But the impact reaches further now. Structured data also helps AI assistants, voice search tools, and new web platforms like chatbots powered by Large Language Models understand and represent your content with greater accuracy.
New standards, such as NLWeb (Natural Language Web) and MCP (Model Context Protocol), are emerging to help different systems share and interpret web content consistently. Adding structured data today not only gives your site an advantage in search but also prepares it for a future where your content will flow across more platforms and digital experiences.
The effort you put into structured data now sets up your content to be found, used, and displayed in many places where people search and explore online.
Is structured data important for SEO?
Structured data plays a key role in how your website appears in search results. It helps search engines understand and present your content with extra features, such as review stars, images, and additional links. These enhanced listings can catch attention and drive more clicks to your site.
While using structured data doesn’t directly increase your rankings, it does make your site eligible for these rich results. That alone can set you apart from competitors. As search engines evolve and adopt new standards, well-structured data ensures your content stays visible and accessible in the latest search features.
For SEO, structured data is about making your site stand out, improving user experience, and giving your content the best shot at being discovered, both now and as search technology changes.
Structured data can lead to rich results
By describing your site for search engines, you allow them to do exciting things with your content. Schema.org and its support are constantly developing, improving, and expanding. As structured data forms the basis for many new developments in the SEO world, more will surely follow. Below is an overview of the rich search results available; examples are in Google’s Search Gallery.
Structured data type – example use/description:
Article: News, blog, or sports article
Breadcrumb: Navigation showing page position
Carousel: Gallery/list from one site (combined with Recipe, Course, Movie, or Restaurant)
Course list: Lists of educational courses
Dataset: Large datasets (Google Dataset Search)
Discussion forum: User-generated forum content
Education Q&A: Education flashcard Q&As
Employer aggregate rating: Ratings about employers in job search results
Event: Concerts, festivals, and other events
FAQ: Frequently asked questions pages
Image metadata: Image creator, credit, and license details
Job posting: Listings for job openings
Local business: Business details such as hours, directions, and ratings
Math solver: Structured data for math problems
Movie: Lists of movies, movie details
Organization: About your company – name, logo, contact, etc.
Practice problem: Education practice problems for students
Product: Product listings with price, reviews, and more
Profile page: Info on a single person or organization
Q&A: Pages with a single question and answers
Recipe: Cooking recipes, steps, and ingredients
Review snippet: Short review/rating summaries
Software app: Ratings and details on apps or software
Speakable: Content for text-to-speech on Google Assistant
Subscription and paywalled content: Mark articles/content behind a paywall
Vacation rental: Details about vacation property listings
Video: Video info, segments, and live content
The rich results formerly known as rich snippets
You might have heard the term “rich snippets” before. Google now calls these enhancements “rich results.” Rich results are improved search listings that use structured data to show extra information, like images, reviews, product details, or FAQs, directly in search.
For example, a product page marked up with structured data can show its price, whether it’s in stock, and customer ratings right below the search listing, even before someone clicks. Here’s what that might look like:
Some listings offer extra information, like star ratings or product details
With rich results, users see helpful details up front – such as a product’s price, star ratings, or stock status. This can make your listing stand out and attract more clicks.
Keep in mind, valid structured data increases your chances of getting rich results, but display is controlled by Google’s systems and is never guaranteed.
Results like this often appear more prominently on mobile devices. Search listings with structured data can display key information, like product prices, ratings, recipes, or booking options, in a mobile-friendly format. Carousels, images, and quick actions are designed for tapping and swiping with your finger.
For example, searching for a recipe on your phone might bring up a swipeable carousel showing photos, cooking times, and ratings for each dish. Product searches can highlight prices, availability, and reviews right in the results, helping users make decisions faster.
Many people now use mobile search as their default search method. Well-implemented structured data not only improves your visibility on mobile but can also make your content easier for users to explore and act on from their phones. To stay visible and competitive, regularly check your markup and make sure it works smoothly on mobile devices.
Knowledge Graph Panel
A knowledge panel
The Knowledge Graph Panel shows key facts about businesses, organizations, or people beside search results on desktop and at the top on mobile. It can include your logo, business description, location, contact details, and social profiles.
Using structured data, especially Organization, LocalBusiness, or Person markup with current details, helps Google recognize and display your entity accurately. Include recommended fields like your official name, logo, social links (using sameAs), and contact info.
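A sketch of Organization markup along those lines (every name, URL, and phone number here is a placeholder) might be:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Company",
  "url": "https://www.example.com/",
  "logo": "https://www.example.com/logo.png",
  "sameAs": [
    "https://www.linkedin.com/company/example",
    "https://x.com/example"
  ],
  "contactPoint": {
    "@type": "ContactPoint",
    "telephone": "+1-555-0100",
    "contactType": "customer service"
  }
}
```

The sameAs links tie your site to your official social profiles, helping Google connect your entity’s presence across the web.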
Entity verification is becoming more important. Claim your Knowledge Panel through Google, and make sure your information is consistent across your website, social media, and trusted directories. Major search engines and AI assistants use this entity data for results, summaries, and answers, not just in search but also in AI-powered interfaces and smart devices.
While Google decides who appears in the Knowledge Panel and what details are shown, reliable structured data, verified identity, and a clear online presence give you the best chance of being featured.
Different kinds of structured data
Schema.org includes many types of structured data. You don’t need to use them all, just focus on what matches your site’s content. For example:
If you sell products, use product schema
For restaurant or local business sites, use local business schema
Recipe sites should add recipe schema
Before adding structured data, decide which parts of your site you want to highlight. Check Google’s or other search engines’ documentation to see which types are supported and what details they require. This helps ensure you are using the markup that will actually make your content stand out in search and other platforms.
How Yoast SEO helps with structured data
Yoast SEO automatically adds structured data to your site using smart defaults, making it easier for search engines and platforms to understand your content. The plugin supports a wide range of content types, like articles, products, local businesses, and FAQs, without the need for manual schema coding.
With Yoast SEO, you can:
With a few clicks, set the right content type for each page (such as ContactPage, Product, or Article)
Use built-in WordPress blocks for FAQs and How-tos, which generate valid schema automatically
Link related entities across your site, such as authors, brands, and organizations, to help search engines see the big picture
Adjust schema details per page or post through the plugin’s settings
Yoast SEO also offers an extensible structured data platform. Developers can build on top of Yoast’s schema framework, add custom schema types, or connect other plugins. This helps advanced users or larger sites tailor their structured data for specific content, integrations, or new standards.
Yoast keeps pace with updates to structured data guidelines, so your markup stays aligned with what Google and other platforms support. This makes it easier to earn rich results and other search enhancements.
Yoast SEO helps you fine-tune your schema structured data settings per page
Which structured data types matter most?
When adding structured data, focus first on the types that have the biggest impact on visibility and features in Google Search. These forms of schema are widely supported, trigger rich results, and apply to most kinds of sites:
Most important structured data types
Article: For news sites, blogs, and sports publishers. Adding Article schema can enable rich results like Top Stories, article carousels, and visual enhancements
Product: Essential for ecommerce. Product schema helps show price, stock status, ratings, and reviews right in search. This type is key for online stores and retailers
Event: For concerts, webinars, exhibitions, or any scheduled events. Event schema can display dates, times, and locations directly in search results, making it easier for people to find and attend
Recipe: This is for food blogs and cooking sites. The recipe schema supports images, cooking times, ratings, and step-by-step instructions as rich results, giving your recipes extra prominence in search
FAQPage: For any page with frequently asked questions. This markup can expand your search listing with Q&A drop-downs, helping users get answers fast
QAPage: For online communities, forums, or support sites. QAPage schema helps surface full question-and-answer threads in search
ReviewSnippet: This markup is for feedback on products, books, businesses, or services. It can display star ratings and short excerpts, adding trust signals to your listings
LocalBusiness: Vital for local shops, restaurants, and service providers. It supplies address, hours, and contact info, supporting your visibility in the map pack and Knowledge Panel
Organization: Use this to describe your brand or company with a logo, contact details, and social profiles. Organization schema feeds into Google’s Knowledge Panel and builds your online presence
Video: Mark up video content to enable video previews, structured timestamps (key moments), and improved video visibility
Breadcrumb: This feature shows your site’s structure within Google’s results, making navigation easier and your site look more reputable
Other valuable or sector-specific types:
Course: Highlight educational course listings and details for training providers or schools
JobPosting: Share open roles in job boards or company careers pages, making jobs discoverable in Google’s job search features
SoftwareApp: For software and app details, including ratings and download links
Movie: Used for movies and film listings, supporting carousels in entertainment searches and extra movie details
Dataset: Makes large sets of research or open data discoverable in Google Dataset Search
DiscussionForum: Surfaces user-generated threads in dedicated “Forums” search features
ProfilePage: Used for pages focused on an individual (author profiles, biographies) or organization
EmployerAggregateRating: Displays company ratings and reviews in job search results
PracticeProblem: For educational sites offering practice questions or test prep
VacationRental: Displays vacation property listings and details in travel results
Special or supporting types:
Person: This helps Google recognize and understand individual people for entity and Knowledge Panel purposes (it does not create a direct rich result)
Book: Can improve book search features, usually through review or product snippets
Speakable: Reserved for news sites and voice assistant features; limited support
Image metadata, Math Solver, Subscription/Paywalled content: Niche markups that help Google properly display, credit, or flag special content
Carousel: Used in combination with other types (like Recipe or Movie) to display a list or gallery format in results
When choosing which schema to add, always select types that match your site’s actual content. Refer to Google’s Search Gallery for the latest guidance and requirements for each type.
Adding the right structured data makes your pages eligible for rich results, enhances your visibility, and prepares your content for the next generation of search features and AI-powered platforms.
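As a concrete illustration, the FAQPage type from the list above can be sketched like this (the question and answer text are placeholders):

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is structured data?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Structured data is a standardized format for describing a page's content to machines."
      }
    },
    {
      "@type": "Question",
      "name": "Does structured data improve rankings?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Not directly, but it makes your pages eligible for rich results."
      }
    }
  ]
}
```

Each Question/acceptedAnswer pair can surface as an expandable Q&A drop-down under your search listing.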
Voice search remains important, with a significant share of online queries now coming from voice-enabled devices. Structured data helps content be understood and, in some cases, selected as an answer for voice results.
The Speakable schema (for marking up sections meant to be read aloud by voice assistants) is still officially supported, but adoption is mostly limited to news content. Google and other assistants also use a broader mix of signals, like content clarity, authority, E-E-A-T, and traditional structured data, to power their spoken answers.
If you publish news or regularly answer concise, fact-based questions, consider using Speakable markup. For other content types, focus on structured data and well-organized, user-focused pages to improve your chances of being chosen by voice assistants. Voice search and voice assistants continue to draw on featured snippets, clear Q&A, and trusted sources.
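A minimal Speakable sketch (the URL and CSS selectors here are placeholders) could look like:

```json
{
  "@context": "https://schema.org",
  "@type": "WebPage",
  "name": "Example news article",
  "url": "https://www.example.com/news/story",
  "speakable": {
    "@type": "SpeakableSpecification",
    "cssSelector": ["#headline", "#summary"]
  }
}
```

The cssSelector property points voice assistants at the sections of the page best suited to being read aloud.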
Structured data uses Schema.org’s hierarchy. This vocabulary starts with broad types like Thing and narrows down to specific ones, such as Product, Movie, or LocalBusiness. Every type has its own properties, and more specific types inherit from their ancestors. For example, a Movie is a type of CreativeWork, which is a type of Thing.
When adding structured data, select the most specific type that fits your content. For a movie, this means using the Movie schema. For a local company, choose the type of business that best matches your offering under LocalBusiness.
Properties
Every Schema.org type includes a range of properties. While you can add many details, focus on the properties that Google or other search engines require or recommend for rich results. For example, a LocalBusiness should include your name, address, phone number, and, if possible, details such as opening hours, geo-coordinates, website, and reviews. Our Local SEO plugin (available in Yoast SEO Premium) is very helpful for setting up local business markup.
Properties: name, address, phone, email, openingHours, geo, review, logo
The more complete and accurate your markup, the greater your chances of being displayed with enhanced features like Knowledge Panels or map results. For details on recommended properties, always check Google’s up-to-date structured data documentation.
In the local business example, you’ll see that Google lists several required properties, like your business’s NAP (Name, Address, Phone) details. There are also recommended properties, like URLs, geo-coordinates, opening hours, etc. Try to fill out as many of these as possible, because search engines will only show the full, enhanced presentation when your markup is complete.
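Putting required and recommended properties together, a LocalBusiness sketch (all business details here are placeholders) might look like:

```json
{
  "@context": "https://schema.org",
  "@type": "Restaurant",
  "name": "Example Bistro",
  "url": "https://www.example-bistro.com/",
  "telephone": "+1-555-0123",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Main Street",
    "addressLocality": "Springfield",
    "postalCode": "12345",
    "addressCountry": "US"
  },
  "geo": {
    "@type": "GeoCoordinates",
    "latitude": 40.7128,
    "longitude": -74.0060
  },
  "openingHours": "Mo-Sa 11:00-22:00",
  "image": "https://www.example-bistro.com/photo.jpg"
}
```

Note that Restaurant is a specific subtype of LocalBusiness – choosing the most specific type that matches your business gives search engines the clearest signal.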
Structured data should be a graph
When you add structured data to your site, you’re not just identifying individual items, but you’re building a data graph. A graph in this context is a web of connections between all the different elements on your site, such as articles, authors, organizations, products, and events. Each entity is linked to others with clear relationships. For instance, an article can be marked as written by a certain author, published by your organization, and referencing a specific product. These connections help search engines and AI systems see the bigger picture of how everything on your site fits together.
Creating a fully connected data graph removes ambiguity. It allows search engines to understand exactly who created content, what brand a product belongs to, or where and when an event takes place, rather than making assumptions based on scattered information. This detailed understanding increases the chances that your site will qualify for rich results, Knowledge Panels, and other enhanced features in search. As your website grows, a well-connected graph also makes it easier to add new content or expand into new areas, since everything slots into place in a way that search engines can quickly process and understand.
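Such a connected graph can be sketched in JSON-LD using @id references, so each entity is defined once and linked everywhere else (all URLs and names here are placeholders):

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "@id": "https://www.example.com/#organization",
      "name": "Example Company",
      "logo": "https://www.example.com/logo.png"
    },
    {
      "@type": "Person",
      "@id": "https://www.example.com/#author-jane",
      "name": "Jane Doe"
    },
    {
      "@type": "Article",
      "@id": "https://www.example.com/blog/post#article",
      "headline": "An example article",
      "author": { "@id": "https://www.example.com/#author-jane" },
      "publisher": { "@id": "https://www.example.com/#organization" }
    }
  ]
}
```

Because the author and publisher point at stable @id nodes rather than repeating the data, there is no ambiguity about which person or organization is meant, and new entities slot into the same graph as the site grows.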
Yoast SEO builds a graph
With Yoast SEO, many of the key connections are generated automatically, giving your site a solid foundation. Still, understanding the importance of building a connected data graph helps you make better decisions when structuring your own content or customizing advanced schema. A thoughtful, well-linked graph sets your site up for today’s search features, while making it more adaptable for the future.
Your schema should be a well-formed graph for easier understanding by search engines and AI
Beyond search: AI, assistants, and interoperability
Structured data isn’t just about search results. It’s a map that helps AI assistants, knowledge graphs, and cross-platform apps understand your content. It’s not just about showing a richer listing; it’s about enabling reliable AI interpretation and reuse across contexts.
Today, the primary payoff is still better search experiences. Tomorrow, AI systems and interoperable platforms will rely on clean, well-defined data to summarize, reason about, and reuse your content. That shift makes data quality more important than ever.
Practical steps for today
Keep your structured data clean with a few simple habits:
Use the same names for people, organizations, and products every time they appear across your site.
Connect related information so search engines can see the links – for example, tie each article to its author or a product to its brand.
Fill in all the key details for your main schema types and make sure nothing is missing.
After making changes or adding new content, run your markup through a validation tool.
If you add any custom fields or special schema, write down what they do so others can follow along later.
Doing quick checks now and then keeps your data accurate and ready for both search engines and AI.
Interoperability, MCP, and the role of structured data
More and more, AI systems and search tools are looking for websites that are easy to understand, not just for people but also for machines. The Model Context Protocol (MCP) is gaining ground as a way for language models like Google Gemini and ChatGPT to use the structured data already present on your website. MCP draws on formats like Schema.org and JSON-LD to help AI match up the connections between things such as products, authors, and organizations.
Another initiative, NLWeb (Natural Language Web), an open project developed by Microsoft, aims to make web content easier for AI to use in conversation and summaries. NLWeb builds on concepts like MCP but hasn’t become a standard yet. For now, most progress and adoption are happening around MCP, and large language models are focusing their efforts there.
Using Schema.org and JSON-LD to keep your structured data clean (no duplicate entities), complete (all indexable content included), and connected (relationships preserved) will prepare you for search engines and new AI-driven features appearing across the web.
Schema.org and JSON-LD: the foundation you can trust
Schema.org and JSON-LD remain the foundation for structured data on the web. They enable today’s rich results in search and form the basis for how AI systems will interpret web content in the future. JSON-LD should be your default format for new markup, allowing you to build structured data graphs that are clean, accurate, and easy to maintain. Focus on accuracy in your markup rather than unnecessary complexity.
To future-proof your data, prioritize stable identifiers such as @id and use clear types to reduce ambiguity. Maintain strong connections between related entities across your pages. If you develop custom extensions to your structured data, document them thoroughly so both your team and automated tools can understand their purpose.
Design your schema so that components can be added or removed without disrupting the entire graph. Make a habit of running validations and audits after you change your site’s structure or content.
Finally, stay current by following guidance and news from official sources, including updates about standards such as NLWeb and MCP, to ensure your site remains compatible with both current search features and new interoperability initiatives.
What do you need to describe for search engines?
To get the most value from structured data, focus first on the most important elements of your site. Describe the details that matter most for users and for search, such as your business information, your main products or services, reviews, events, or original articles. These core pieces of information are what search engines look for to understand your site and display enhanced results.
Rather than trying to mark up everything, start with the essentials that best match your content. As your experience grows, you can build on this foundation by adding more detail and creating links between related entities. Accurate, well-prioritized markup is both easier to maintain and more effective in helping your site stand out in search results and across new AI-driven features.
How to implement structured data
We’d like to remind you that Yoast SEO comes with an excellent structured data implementation. It automatically handles the most pressing structured data needs of most sites. Of course, as described below, you can extend our structured data framework as your needs grow.
Do the Yoast SEO configuration and get your site’s structured data set up in a few clicks! The configuration is available for all Yoast SEO users to help you get your plugin configured correctly. It’s quick, it’s easy, and doing it will pay off. Plus, if you’re using the new block editor in WordPress you can also add structured data to your FAQ pages and how-to articles using our structured data content blocks.
Thanks to JSON-LD, there’s nothing scary about adding the data to your pages anymore. This JSON-based format makes structured data much easier to add, since it forms a single block of code and is no longer embedded throughout the HTML of your page. That makes it easier to write and maintain, and both humans and machines understand it better. If you need help implementing JSON-LD structured data, you can enroll in our free Structured Data for Beginners course, our Understanding Structured Data course, or read Google’s introduction to structured data.
Structured data with JSON-LD
JSON-LD is the recommended way to add structured data to your site. All major search engines, including Google and Bing, now fully support this format. JSON-LD is easy to implement and maintain, as it keeps your structured data separate from the main HTML.
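In practice, that separation means the markup lives in a single script block rather than being woven through the page’s visible HTML – for example, a hypothetical minimal snippet:

```html
<!-- Placed in the page's <head> or <body>; invisible to visitors -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "An example headline",
  "author": { "@type": "Person", "name": "Jane Doe" }
}
</script>
```

Because the block is self-contained, you can update or validate it without touching the page’s layout or templates.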
Yoast SEO automatically creates a structured data graph for every page, connecting key elements like articles, authors, products, and organizations. This approach helps search engines and AI systems understand your site’s structure. Our developer resources include detailed Schema documentation and example graphs, making it straightforward to extend or customize your markup as your site grows.
Tools for working with structured data
Yoast SEO automatically handles much of the structured data in the background. You can extend our Schema framework, of course – see the next chapter – but if adding code by hand seems daunting, you could try some of the tools listed below. If you’re unsure how to proceed, ask your web developer for help. They can often fix this for you in a couple of minutes.
Yoast SEO uses JSON-LD to add Schema.org information about your site search, your site name, your logo, images, articles, social profiles, and a lot more to your web pages. We ask if your site represents a person or an organization and adapt our structured data based on that. Also, our structured data content blocks for the WordPress block editor make it easy to add structured data to your FAQs and How-Tos. Check out the structured data features in Yoast SEO.
The Yoast SEO Schema structured data framework
Implementing structured data has always been challenging, and the results of many implementations left room for improvement. At Yoast, we set out to enhance the Schema output for millions of sites. For this, we built a Schema framework that anyone can adapt and extend. We combined all those loose bits and pieces of structured data that appear on many sites, improved them, and put them in a graph. By interconnecting all these bits, we offer search engines all your connections on a silver platter.
See this video for more background on the schema graph.
Of course, there’s a lot more to it. We can also extend Yoast SEO’s output by adding specific Schema pieces, like how-tos or FAQs. We built structured data content blocks for use in the WordPress block editor. We’ve also enabled other WordPress plugins to integrate with our structured data framework, like Easy Digital Downloads, The Events Calendar, Seriously Simple Podcasting, and WP Recipe Maker, with more to come. Together, these integrations remove barriers for search engines and users, making structured data – long a challenge to work with – much easier to implement.
Expanding your structured data implementation
A structured and focused approach is key to successful Schema.org markup on your website. Start by understanding Schema.org and how structured data can influence your site’s presence in search and beyond. Resources like Yoast’s developer portal offer useful insights into building flexible and future-proof markup.
Always use JSON-LD as recommended by Google, Bing, and Yoast. This format is easy to maintain and works well with modern websites. To maximize your implementation, use tools and frameworks that allow you to add, customize, and connect Schema.org data efficiently. Yoast SEO’s structured data framework, for example, enables seamless schema integration and extensibility across your site.
Validate your structured data regularly with tools like the Rich Results Test or Schema Markup Validator and monitor Google Search Console’s Enhancements reports for live feedback. Reviewing your markup helps you fix issues early and spot opportunities for richer results as search guidelines change. Periodically revisiting your strategy keeps your markup accurate and effective as new types and standards emerge.
Read up
By following the guidelines and adopting a comprehensive approach, you can successfully add structured data to your pages and make your schema.org markup deliver robust SEO performance. Read the Yoast SEO Schema documentation to learn how Yoast SEO works with structured data, how you can extend it via an API, and how you can integrate it into your work.
Several WordPress plugins already integrate their structured data into the Yoast SEO graph
Structured data has become an essential part of building a visible, findable, and adaptable website. Using Schema.org and JSON-LD not only helps search engines understand your content but also sets your site up for better performance in new AI-driven features, rich results, and across platforms.
Start by focusing on the most important parts of your site, like business information, products, articles, or events, and grow your structured data as your needs evolve. Connected, well-maintained markup now prepares your site for search, AI, and whatever comes next in digital content.
Explore our documentation and training resources to learn more about best practices, advanced integrations, or how Yoast SEO can simplify structured data. Investing the time in good markup today will help your content stand out wherever people (or algorithms) find it.
Structured data with schema for search and AI – published 2025-10-28
The conversation around artificial intelligence (AI) has been dominated by “replacement theory” headlines. From front-line service roles to white-collar knowledge work, there’s a growing narrative that human capital is under threat.
Economic anxiety has fueled research and debate, but many of the arguments remain narrow in scope.
Stanford’s Digital Economy Lab found that since generative AI became widespread, early-career workers in the most exposed jobs have seen a 13% decline in employment.
This fear has spread into higher-paid sectors as well, with hedge fund managers and CEOs predicting large-scale restructuring of white-collar roles over the next decade.
However, much of this narrative is steeped in speculation rather than the fundamental, evolving dynamics of skilled work.
Yes, we’ve seen layoffs, hiring slowdowns, and stories of AI automating tasks. But this is happening against the backdrop of high interest rates, shifts in global trade, and post-pandemic over-hiring.
As the global talent thought-leader Josh Bersin argues, claims of mass job destruction are “vastly over-hyped.” Many roles will transform, not vanish.
What this means for SEO
For the SEO discipline, the familiar refrain “SEO is dead” is just as overstated.
Yes, the nature of the SEO specialist is changing. We’ve seen fewer leadership roles, a contraction in content and technical positions, and cautious hiring. But the function itself is far from disappearing.
In fact, SEO job listings remain resilient in 2025 and mid-level roles still comprise nearly 60% of open positions. Rather than declining, the field is being reshaped by new skill demands.
Don’t ask, “Will AI replace me?” Ask instead, “How can I use AI to multiply my impact?”
Think of AI not as the jackhammer replacing the hammer but as the jackhammer amplifying its effect. SEOs who can harness AI through agents, automation, and intelligent systems will deliver faster, more impactful results than ever before.
“AI is a tool. We can make it or teach it to do whatever we want…Life will go on, economies will continue to be driven by emotion, and our businesses will continue to be fueled by human ideas, emotion, grit, and hard work,” Bersin said.
Rewriting the SEO narrative
As an industry, it’s time to change the language we use to describe SEO’s evolution.
Too much of our conversation still revolves around loss. We focus on lost clicks, lost visibility, lost control, and loss of num=100.
That narrative doesn’t serve us anymore.
We should be speaking the language of amplification and revenue generation. SEO has evolved from “optimizing for rankings” to driving measurable business growth through organic discovery, whether that happens through traditional search, AI Overviews, or the emerging layer of Generative Engine Optimization (GEO).
AI isn’t the villain of SEO; it’s the force multiplier.
When harnessed effectively, AI scales insight, accelerates experimentation, and ties our work more directly to outcomes that matter:
Pipeline.
Conversions.
Revenue.
We don’t need to fight the dystopian idea that AI will replace us. We need to prove that AI-empowered SEOs can help businesses grow faster than ever before.
The new language of SEO isn’t about survival, it’s about impact.
The team landscape has already shifted
For years, marketing and SEO teams grew headcount to scale output.
Today, the opposite is true. Hiring freezes, leaner budgets, and uncertainty around the role of SEO in an AI-driven world have forced leaders to rethink team design.
A recent Search Engine Land report noted that remote SEO roles dropped to 34% of listings in early 2025, while content-focused SEO positions declined by 28%. A separate LinkedIn survey found a 37% drop in SEO job postings in Q1 compared to the previous year.
This signals two key shifts:
Specialized roles are disappearing. “SEO writers” and “link builders” are being replaced by versatile strategists who blend technical, analytical, and creative skill sets.
Leadership is demanding higher ROI per role. Headcount is no longer the metric of success – capability is.
What it means for SEO leadership
If your org chart still looks like a pyramid, you’re behind.
The new landscape demands flexibility, speed, and cross-functional integration with analytics, UX, paid media, and content.
It’s time to design teams around capabilities, not titles.
Rethinking SEO talent
The best SEO leaders aren’t hiring specialists, they’re hiring aptitude. Modern SEO organizations value people who can think across disciplines, not just operate within one.
The strongest hires we’re seeing aren’t traditional technical SEOs focused on crawl analysis or schema. They’re problem solvers – marketers who understand how search connects to the broader growth engine and who have experience scaling impact across content, data, and product.
Progressive leaders are also rethinking resourcing. The old model of a technical SEO paired with engineering support is giving way to tech SEOs working alongside AI product managers and, in many cases, vibe coding solutions. This model moves faster, tests bolder, and builds systems that drive real results.
For SEO leaders, rethinking team architecture is critical. The right question isn’t “Who should I hire next?” It’s “What critical capability must we master to stay competitive?”
Once that’s clear, structure your people and your agents around that need. The companies that get this right during the AI transition will be the ones writing the playbook for the next generation of search leadership.
The new human-led, agent-empowered team
The future of SEO teams will be defined by collaboration between humans and agents.
These agents are AI-enabled systems like automated content refreshers, site-health bots, or citation-validation agents that work alongside human experts.
The human role? To define, train, monitor, and QA their output.
Why this matters
Agents handle high-volume, repeatable tasks (e.g., content generation, basic auditing, link-score filtering) so humans can focus on strategy, insight, and business impact.
The cost of building AI agents can range from $20,000 to $150,000, depending on the complexity of the system, integrations, and the specialized work required across data science, engineering, and human QA teams, according to RTS Labs.
A single human manager might oversee 10-20 agents, shifting the traditional pyramid and echoing the “short pyramid” or “rocket ship” structure explored by Tomasz Tunguz.
The future: teams built around agents and empowered humans.
Real-world archetypes
SaaS companies: Develop a bespoke “onboarding agent” that reads product data, builds landing pages, and runs first-pass SEO audits; a human strategist refines the output.
Marketplace brands (e.g., upcoming seasonal trend): Use an “Audience Discovery Agent” that taps customer and marketplace data, but the human team writes the narrative and guides the vertical direction.
Enterprise content hubs: Deploy “Content Refresh Agents” that identify high-value pages, suggest optimizations, and push drafts that editors review and finalize.
Integration is key
These new teams succeed when they don’t live in silos. The SEO/GEO squad must partner with paid search, analytics, revenue ops, and UX – not just serve them.
Agents create capacity; humans create alignment and amplification.
A call to SEO practitioners
Building the SEO community of the future will require change.
The pace of transformation has never been faster, and it’s created a dangerous dependence on third-party “AI tools” as the answer to the unknown.
But the true AI story doesn’t begin with a subscription. It begins inside your team.
If the only AI in your workflow is someone else’s product, you’re giving up your competitive edge. The future belongs to teams that build, not just buy.
Here’s how to start:
Build your own agent frameworks, designed with human-in-the-loop oversight to ensure accuracy, adaptability, and brand alignment.
Partner with experts who co-create, not just deliver. The most successful collaborations help your team learn how to manage and scale agents themselves.
Evolve your team structure, move beyond the pyramid mentality, and embrace a “rocket ship” model where humans and agents work in tandem to multiply output, insights, and results.
The future of SEO starts with building smarter teams. It’s humans working with agents. It’s capability uplift. And if you lead that charge, you’ll not only adapt to the next generation of search, you’ll be the ones designing it.
The future of SEO teams is human-led and agent-powered – published 2025-10-27
Google added Query groups to the Search Console Insights report. Query groups cluster similar search queries together so you can quickly see the main topics your audience searches for.
What Google said. Google wrote, “We are excited to announce Query groups, a powerful Search Console Insights feature that groups similar search queries.”
“Query groups solve this problem by grouping similar queries. Instead of a long, cluttered list of individual queries, you will now see lists of queries representing the main groups that interest your audience. The groups are computed using AI; they may evolve and change over time. They are designed for providing a better high level perspective of your queries and don’t affect ranking,” Google added.
What it looks like. Here is a sample screenshot of this new Query groups report:
You can see that Google is lumping together “search engine optimization, seo optimization, seo website, seo optimierung, search engine optimization (seo), search …” into the “seo” query group in the second line. This shows the site overall is getting 9% fewer clicks on SEO-related queries than it did previously.
Availability. Google said query groups will be rolling out gradually over the coming weeks. It is a new card in the Search Console Insights report. Plus, query groups are available only to properties that have a large volume of queries, as the need to group queries is less relevant for sites with fewer queries.
Why we care. Many SEOs have been grouping these queries into clusters manually or through their own tools. Now, Google will do it for you, making the data easier for novice and beginner SEOs to understand.
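The manual clustering this feature replaces can be sketched as simple keyword matching – a rough illustration only, not Google’s AI-based method, and the topic terms below are hypothetical:

```python
from collections import defaultdict

def group_queries(queries, topics):
    """Assign each query to the first topic term it contains.

    A rough stand-in for the clustering many SEOs did by hand or in
    spreadsheets; Google's Query groups use AI instead of substrings.
    """
    groups = defaultdict(list)
    for q in queries:
        key = next((t for t in topics if t in q.lower()), "other")
        groups[key].append(q)
    return dict(groups)

queries = [
    "search engine optimization",
    "seo optimization",
    "seo website",
    "ppc budget tips",
]
grouped = group_queries(queries, topics=["seo", "ppc"])
# "search engine optimization" lacks the literal substring "seo",
# so it falls into "other" - exactly the gap AI-based grouping closes.
```

The miss on “search engine optimization” shows why an AI grouping that understands meaning, rather than substrings, is the more useful version of this report.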
Every year, Search Engine Land is delighted to celebrate the best of search marketing by rewarding the agencies, in-house teams, and individuals worldwide for delivering exceptional results.
Today, I’m excited to announce all 18 winners of the 11th annual Search Engine Land Awards.
The 2025 Search Engine Land Awards winners
Best Use Of AI Technology In Search Marketing
15x ROAS with AI: How CAMP Digital Redefined Paid Search for Home Services
ATRA & Jason Stone Injury Lawyers – Leveraging CRM Data to Scale Case Volume
Best Commerce Search Marketing Initiative – PPC
Adwise & Azerty – 126% uplift in profit from paid advertising & 1 percentage point net margin business uplift by advanced cross-channel bucketing
Best Local Search Marketing Initiative – PPC
How We Crushed Belron’s Lead Target by 238% With an AI-Powered Local Strategy (Adviso)
Best B2B Search Marketing Initiative – PPC
Blackbird PPC and Customer.io: Advanced Data Integration to Drive 239% Revenue Increase with 12% Greater Lead Efficiency, with MMM Future-Proofing 2025 Growth
Best Integration Of Search Into Omnichannel Marketing
How NBC used search to drive +2,573 accounts in a Full-Funnel Media Push (Adviso)
Best Overall SEO Initiative – Small Business
Digital Hitmen & Elite Tune: The Toyota Shift That Delivered 678% SEO ROI
Best Overall SEO Initiative – Enterprise
825 Million Clicks, Zero Content Edits: How Amsive Engineered MSN’s Technical SEO Turnaround
Best Commerce Search Marketing Initiative – SEO
Scaling Non-Branded SEO for Assouline to Drive +26% Organic Revenue Uplift (Block & Tam)
Best Local Search Marketing Initiative – SEO
Building an Unbeatable Foundation for Success: Using Hyperlocal SEO to Build Exceptional ROI (Digital Hitmen)
Best B2B Search Marketing Initiative – SEO
Page One, Pipeline Won: The B2B SEO Playbook That Turned 320 Visitors into $10.75M in Pipeline (LeadCoverage)
Agency Of The Year – PPC
Driving Growth Where Search Happens: Stella Rising’s Paid Search Transformation
Agency Of The Year – SEO
How Amsive Rescued MSN’s Global Visibility Through Enterprise Technical SEO at Scale
In-House Team Of The Year – SEO
How the American Cancer Society’s Lean SEO Team Drove Enterprise-Wide Consolidation and AI Search Visibility Gains for Cancer.org
Search Marketer Of The Year
Mike King, founder and CEO of iPullRank
Small Agency Of The Year – PPC
ATRA & Jason Stone Injury Lawyers – Leveraging CRM Data to Scale Case Volume
Small Agency Of The Year – SEO
From Zero to Top of the Leaderboard: Bloom Digital Drives Big Growth With Small SEO Budgets
“I’m going to SMX Next!”
Select winners of the 2025 Search Engine Land Awards will be invited to speak live at SMX Next during our two ask-me-anything-style sessions. Bring your burning SEO and PPC questions to ask this award-winning panel of search marketers!
Congrats again to all the winners. And huge thank yous to everyone who entered the 2025 Search Engine Land Awards, the finalists, and our fantastic panel of judges for this year’s awards.
Search Engine Land Awards 2025: And the winners are… – published 2025-10-27
Many PPC advertisers obsess over click-through rates, using them as a quick measure of ad performance.
But CTR alone doesn’t tell the whole story – what matters most is what happens after the click. That’s where many campaigns go wrong.
The problem with chasing high CTRs
Most advertisers assume the ad with the highest CTR is the best: it should have a high Quality Score and attract lots of clicks.
However, lower-CTR ads often outperform higher-CTR ads in terms of total conversions and revenue.
If all I cared about was CTR, then I could write an ad:
“Free money.”
“Claim your free money today.”
“No strings attached.”
That ad would get an impressive CTR for many keywords, and I’d go out of business pretty quickly, giving away free money.
When creating ads, we must consider:
Type of searchers we want to attract.
Ensure the users are qualified.
Set expectations for the landing page.
I can take my free money ad and refine it:
“Claim your free money.”
“Explore college scholarships.”
“Download your free guide.”
I’ve now:
Told searchers they can get free money for college through scholarships if they download a guide.
Narrowed down my audience to people who are willing to apply for scholarships and willing to download a guide, presumably in exchange for some information.
If you focus solely on CTR and don’t consider attracting the right audience, your advertising will suffer.
While this sentiment applies to both B2C and B2B companies, B2B companies must be exceptionally aware of how their ads appear to consumers versus business searchers.
B2B companies must pre-qualify searchers
If you are advertising for a B2B company, you’ll often notice that CTR and conversion rates have an inverse relationship. As CTR increases, conversion rates decrease.
The most common reason for this phenomenon is that consumers and businesses can search for many B2B keywords.
B2B companies must try to show that their products are for businesses, not consumers.
For instance, “safety gates” is a common search term.
The majority of people looking to buy a safety gate are consumers who want to keep pets or babies out of rooms or away from stairs.
However, safety gates and railings are important for businesses with factories, plants, or industrial sites.
These two ads are both for companies that sell safety gates. The first ad’s headlines for Uline could be for a consumer or a business.
It’s not until you look at the description that you realize this is for mezzanines and catwalks, which is something consumers don’t have in their homes.
As many searchers do not read descriptions, this ad will attract both B2B and B2C searchers.
The second ad mentions Industrial in the headline and follows that up with a mention of OSHA compliance in the description and the sitelinks.
While both ads promote similar products, the second one will achieve a better conversion rate because it speaks to a single audience.
We have a client who specializes in factory parts, and when we graph their conversion rates by Quality Score, we can see that as their Quality Score increases, their conversion rates decrease.
They review their keywords and ads whenever a term that both businesses and consumers search reaches a Quality Score of 5 or higher.
This same logic does not apply to B2B-only search terms.
Searchers looking for B2B services and products often use jargon or qualifying statements in those terms.
For those terms, B2B advertisers don’t have to spend ad copy characters weeding out B2C consumers and can focus their ads solely on B2B searchers.
How to balance CTR and conversion rates
As you are testing various ads to find your best pre-qualifying statements, it can be tricky to examine the metrics. Which one of these would be your best ad?
15% CTR, 3% conversion rate.
10% CTR, 7% conversion rate.
5% CTR, 11% conversion rate.
When examining mixed metrics like CTR and conversion rate, we can use additional metrics to identify our best ads. My two favorites are:
Conversions per impression (CPI): This is a simple formula dividing your conversions by the number of impressions (conversions/impressions).
Revenue per impression (RPI): If you have variable checkout amounts, you can instead use your revenue metrics to decide your best ads by dividing your revenue by your impressions (revenue/impressions).
You can also multiply the results by 1,000 to make the numbers easier to digest instead of working with many decimal points. So, we might write:
CPI = (conversions/impressions) x 1,000
By using impression metrics, you can find the opportunity for a given set of impressions.
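As a quick sketch, the three hypothetical ads above can be compared by deriving clicks and conversions from their impressions, then applying the CPI formula:

```python
def conversions_per_impression(conversions, impressions):
    # CPI = (conversions / impressions) x 1,000
    return conversions / impressions * 1000

# (CTR, conversion rate, impressions) for the three example ads.
ads = [
    (0.15, 0.03, 5000),
    (0.10, 0.07, 4000),
    (0.05, 0.11, 4500),
]

results = []
for ctr, cvr, impressions in ads:
    clicks = impressions * ctr          # e.g. 5,000 x 15% = 750
    conversions = clicks * cvr          # e.g. 750 x 3% = 22.5
    results.append(conversions_per_impression(conversions, impressions))

best = results.index(max(results))  # index 1: the 10% CTR / 7% CVR ad
```

Despite having neither the highest CTR nor the highest conversion rate, the middle ad wins on CPI, which is the whole point of judging ads on impression-based metrics.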
| CTR | Conversion rate | Impressions | Clicks | Conversions | CPI |
| --- | --- | --- | --- | --- | --- |
| 15% | 3% | 5,000 | 750 | 22.5 | 4.5 |
| 10% | 7% | 4,000 | 400 | 28 | 7 |
| 5% | 11% | 4,500 | 225 | 24.75 | 5.5 |
By doing some simple math, we can see that option 2, with a 10% CTR and a 7% conversion rate, gives us the most total conversions. As you create your ads, ask yourself:
How do you dissuade users who don’t fit your audience from clicking on your ads?
How do you attract your qualified audience?
Are your ads setting proper landing page expectations?
By considering each of these questions as you create ads, you can find ads that speak to the type of users you want to attract to your site.
These ads are rarely your best CTRs. These ads balance the appeal of high CTRs with pre-qualifying statements that ensure the clicks you receive have the potential to turn into your next customer.
Why a lower CTR can be better for your PPC campaigns – published 2025-10-27
The web’s purpose is shifting. Once a link graph – a network of pages for users and crawlers to navigate – it’s rapidly becoming a queryable knowledge graph.
For technical SEOs, that means the goal has evolved from optimizing for clicks to optimizing for visibility and even direct machine interaction.
Enter NLWeb – Microsoft’s open-source bridge to the agentic web
At the forefront of this evolution is NLWeb (Natural Language Web), an open-source project developed by Microsoft.
NLWeb simplifies the creation of natural language interfaces for any website, allowing publishers to transform existing sites into AI-powered applications where users and intelligent agents can query content conversationally – much like interacting with an AI assistant.
Developers suggest NLWeb could play a role similar to HTML in the emerging agentic web.
Its open-source, standards-based design makes it technology-agnostic, ensuring compatibility across vendors and large language models (LLMs).
This positions NLWeb as a foundational framework for long-term digital visibility.
Schema.org is your knowledge API: Why data quality is the NLWeb foundation
NLWeb proves that structured data isn’t just an SEO best practice for rich results – it’s the foundation of AI readiness.
Its architecture is designed to convert a site’s existing structured data into a semantic, actionable interface for AI systems.
In the age of NLWeb, a website is no longer just a destination. It’s a source of information that AI agents can query programmatically.
The NLWeb data pipeline
The technical requirements confirm that a high-quality schema.org implementation is the primary key to entry.
Data ingestion and format
The NLWeb toolkit begins by crawling the site and extracting the schema markup.
The schema.org JSON-LD format is the preferred and most effective input for the system.
This means the protocol consumes every detail, relationship, and property defined in your schema, from product types to organization entities.
For any data not in JSON-LD, such as RSS feeds, NLWeb is engineered to convert it into schema.org types for effective use.
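The extraction step can be sketched with Python’s standard library: collect every `application/ld+json` script block from a page’s HTML and parse it. This is a simplified stand-in for NLWeb’s actual crawler, and the sample page is hypothetical:

```python
import json
from html.parser import HTMLParser

class JSONLDExtractor(HTMLParser):
    """Collect the contents of <script type="application/ld+json"> tags."""
    def __init__(self):
        super().__init__()
        self.in_jsonld = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self.in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_jsonld = False

    def handle_data(self, data):
        # Parse only the text inside a JSON-LD script tag.
        if self.in_jsonld and data.strip():
            self.blocks.append(json.loads(data))

html = """<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Product", "name": "Safety Gate"}
</script>
</head><body>...</body></html>"""

parser = JSONLDExtractor()
parser.feed(html)
schema_items = parser.blocks
```

Whatever ends up in `schema_items` is all the protocol has to work with – which is why the completeness of the markup itself matters so much.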
Semantic storage
Once collected, this structured data is stored in a vector database. This element is critical because it moves the interaction beyond traditional keyword matching.
Vector databases represent text as mathematical vectors, allowing the AI to search based on semantic similarity and meaning.
For example, the system can understand that a query using the term “structured data” is conceptually the same as content marked up with “schema markup.”
This capacity for conceptual understanding is absolutely essential for enabling authentic conversational functionality.
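The underlying mechanism is cosine similarity between embedding vectors. The toy 3-dimensional vectors below are invented for illustration – real embedding models produce hundreds or thousands of dimensions – but the comparison works the same way:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: near 1.0 = similar meaning."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: conceptually related phrases point in similar directions.
structured_data = [0.90, 0.80, 0.10]
schema_markup = [0.85, 0.75, 0.20]
pizza_recipe = [0.10, 0.20, 0.95]

related = cosine_similarity(structured_data, schema_markup)
unrelated = cosine_similarity(structured_data, pizza_recipe)
# related is close to 1.0; unrelated is much lower.
```

A vector database ranks stored content by exactly this kind of score, which is how a query for “structured data” can surface content marked up as “schema markup” even though the strings never match.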
Every NLWeb instance operates as an MCP (Model Context Protocol) server, an emerging standard for packaging and consistently exchanging data between various AI systems and agents.
MCP is currently the most promising path forward for ensuring interoperability in the highly fragmented AI ecosystem.
The ultimate test of schema quality
Since NLWeb relies entirely on crawling and extracting schema markup, the precision, completeness, and interconnectedness of your site’s content knowledge graph determine success.
The key challenge for SEO teams is addressing technical debt.
Custom, in-house solutions to manage AI ingestion are often high-cost, slow to adopt, and create systems that are difficult to scale or incompatible with future standards like MCP.
NLWeb addresses the protocol’s complexity, but it cannot fix faulty data.
If your structured data is poorly maintained, inaccurate, or missing critical entity relationships, the resulting vector database will store flawed semantic information.
This leads inevitably to suboptimal outputs, potentially resulting in inaccurate conversational responses or “hallucinations” by the AI interface.
Robust, entity-first schema optimization is no longer just a way to win a rich result; it is the fundamental barrier to entry for the agentic web.
By leveraging the structured data you already have, NLWeb allows you to unlock new value without starting from scratch, thereby future-proofing your digital strategy.
NLWeb vs. llms.txt: Protocol for action vs. static guidance
The need for AI crawlers to process web content efficiently has led to multiple proposed standards.
A comparison between NLWeb and the proposed llms.txt file illustrates a clear divergence between dynamic interaction and passive guidance.
The llms.txt file is a proposed static standard designed to improve the efficiency of AI crawlers by:
Providing a curated, prioritized list of a website’s most important content – typically formatted in markdown.
Attempting to solve the legitimate technical problems of complex, JavaScript-loaded websites and the inherent limitations of an LLM’s context window.
In sharp contrast, NLWeb is a dynamic protocol that establishes a conversational API endpoint.
Its purpose is not just to point to content, but to actively receive natural language queries, process the site’s knowledge graph, and return structured JSON responses using schema.org.
NLWeb fundamentally changes the relationship from “AI reads the site” to “AI queries the site.”
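Conceptually, “AI queries the site” means an endpoint that accepts a natural language question and answers with schema.org-typed JSON. The sketch below is deliberately simplified – plain keyword overlap stands in for the real vector search, and the knowledge items are hypothetical – but it shows the shape of the exchange:

```python
import json

# Hypothetical site knowledge: items already described in schema.org terms.
KNOWLEDGE = [
    {"@type": "Article", "name": "Schema basics", "keywords": ["schema", "json-ld"]},
    {"@type": "Product", "name": "SEO toolkit", "keywords": ["seo", "audit"]},
]

def handle_query(question):
    """Return schema.org-typed JSON for a natural language question.

    Real NLWeb instances answer via semantic search over a vector
    database; keyword intersection is used here for illustration only.
    """
    words = set(question.lower().split())
    matches = [item for item in KNOWLEDGE if words & set(item["keywords"])]
    return json.dumps({"@context": "https://schema.org", "results": matches})

response = json.loads(handle_query("how do I write json-ld schema markup?"))
```

The key contrast with llms.txt is that nothing here is a static file: the agent sends a question and gets back structured, typed data it can reason over or act on.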
| Attribute | NLWeb | llms.txt |
| --- | --- | --- |
| Primary goal | Enables dynamic, conversational interaction and structured data output | Improves crawler efficiency and guides static content ingestion |
| Operational model | API/protocol (active endpoint) | Static text file (passive guidance) |
| Data format used | Schema.org JSON-LD | Markdown |
| Adoption status | Open project; connectors available for major LLMs, including Gemini, OpenAI, and Anthropic | Proposed standard; not adopted by Google, OpenAI, or other major LLMs |
| Strategic advantage | Unlocks existing schema investment for transactional AI uses, future-proofing content | Reduces computational cost for LLM training/crawling |
The market’s preference for dynamic utility is clear. Despite addressing a real technical challenge for crawlers, llms.txt has failed to gain traction so far.
NLWeb’s functional superiority stems from its ability to enable richer, transactional AI interactions.
It allows AI agents to dynamically reason about and execute complex data queries using structured schema output.
The strategic imperative: Mandating a high-quality schema audit
While NLWeb is still an emerging open standard, its value is clear.
It maximizes the utility and discoverability of specialized content that often sits deep in archives or databases.
This value is realized through operational efficiency and stronger brand authority, rather than immediate traffic metrics.
Several organizations are already exploring how NLWeb could let users ask complex questions and receive intelligent answers that synthesize information from multiple resources – something traditional search struggles to deliver.
The ROI comes from reducing user friction and reinforcing the brand as an authoritative, queryable knowledge source.
For website owners and digital marketing professionals, the path forward is undeniable: mandate an entity-first schema audit.
Because NLWeb depends on schema markup, technical SEO teams must prioritize auditing existing JSON-LD for integrity, completeness, and interconnectedness.
Publishers should ensure their schema accurately reflects the relationships among all entities, products, services, locations, and personnel to provide the context necessary for precise semantic querying.
The transition to the agentic web is already underway, and NLWeb offers the most viable open-source path to long-term visibility and utility.
It’s a strategic necessity to ensure your organization can communicate effectively as AI agents and LLMs begin integrating conversational protocols for third-party content interaction.
The agentic web is here: Why NLWeb makes schema your greatest SEO asset – published 2025-10-27