Google is now promoting its own AI features inside Google Ads — a rare move that inserts marketing directly into advertisers’ workflow.
What’s happening. Users are seeing promotional messages for AI Max for Search campaigns when they open campaign settings panels.
The notifications appear during routine account audits and updates, essentially serving as an in-platform advertisement for Google’s own tooling.
Why we care. The in-platform placement signals Google is pushing to accelerate AI adoption among advertisers, moving from optional rollouts to active promotion. While Google often introduces AI-driven features, promoting them directly within existing workflows marks a more aggressive adoption strategy.
What to watch. Whether this promotional approach expands to other Google Ads features — and how advertisers respond to marketing within their management interface.
First seen. Julie Bacchini, president and founder of Neptune Moon, spotted the notification and shared it on LinkedIn. She wrote: “Nothing like Google Ads essentially running an ad for AI Max in the settings area of a campaign.”
Microsoft today launched AI Performance in Bing Webmaster Tools in beta. AI Performance lets you see where, and how often, your content is cited in AI-generated answers across Microsoft Copilot, Bing’s AI summaries, and select partner integrations, the company said.
AI Performance in Bing Webmaster Tools shows which URLs are cited, which queries trigger those citations, and how citation activity changes over time.
What’s new. AI Performance is a new, dedicated dashboard inside Bing Webmaster Tools. It tracks citation visibility across supported AI surfaces. Instead of measuring clicks or rankings, it shows whether your content is used to ground AI-generated answers.
Microsoft framed the launch as an early step toward Generative Engine Optimization (GEO) tooling, designed to help publishers understand how their content shows up in AI-driven discovery.
What it looks like. Microsoft shared this image of AI Performance in Bing Webmaster Tools:
What the dashboard shows. The AI Performance dashboard introduces metrics focused specifically on AI citations:
Total citations: How many times a site is cited as a source in AI-generated answers during a selected period.
Average cited pages: The daily average number of unique URLs from a site referenced across AI experiences.
Grounding queries: Sample query phrases AI systems used to retrieve and cite publisher content.
Page-level citation activity: Citation counts by URL, highlighting which pages are referenced most often.
Visibility trends over time: A timeline view showing how citation activity rises or falls across AI experiences.
These metrics only reflect citation frequency. They don’t indicate ranking, prominence, or how a page contributed to a specific AI answer.
Why we care. It’s good to know where and how your content gets cited, but Bing Webmaster Tools still won’t reveal how those citations translate into clicks, traffic, or any real business outcome. Without click data, publishers still can’t tell if AI visibility delivers value.
How to use it. Microsoft said publishers can use the data to:
Confirm which pages are already cited in AI answers.
Identify topics that consistently appear across AI-generated responses.
Improve clarity, structure, and completeness on indexed pages that are cited less often.
The guidance mirrors familiar best practices: clear headings, evidence-backed claims, current information, and consistent entity representation across formats.
What’s next. Microsoft said it plans to “improve inclusion, attribution, and visibility across both search results and AI experiences,” and continue to “evolve these capabilities.”
B2B advertising faces a distinct challenge: most automation tools weren’t built for lead generation.
Ecommerce campaigns benefit from hundreds of conversions that fuel machine learning. B2B marketers don’t have that luxury. They deal with lower conversion volume, longer sales cycles, and no clear cart value to guide optimization.
The good news? Automation can still work.
Melissa Mackey, Head of Paid Search at Compound Growth Marketing, says the right strategy and signals can turn automation into a powerful driver of B2B leads. Below is a summary of the key insights and recommendations she shared at SMX Next.
The fundamental challenge: Why automation struggles with lead gen
Automation systems are built for ecommerce success, which creates three core obstacles for B2B marketers:
Customer journey length: Automation performs best with short journeys. A user visits, buys, and checks out within minutes. B2B journeys can last 18 to 24 months. Offline conversions only look back 90 days, leaving a large gap between early engagement and closed revenue.
Conversion volume requirements: Google’s automation works best with about 30 leads per campaign per month. Google says it can function with fewer, but performance is often inconsistent below that level. Ecommerce campaigns easily hit hundreds of monthly conversions. B2B lead gen rarely does.
The cart value problem: In ecommerce, value is instant and obvious. A $10 purchase tells the system something very different than a $100 purchase. Lead generation has no cart. True value often isn’t clear until prospects move through multiple funnel stages — sometimes months later.
The solution: Sending the right signals
Despite these challenges, proven strategies can make automation work for B2B lead generation.
Offline conversions: Your number one priority
Connecting your CRM to Google Ads or Microsoft Ads is essential for making automation work in lead generation. This isn’t optional. It’s the foundation. If you haven’t done this yet, stop and fix it first.
In Google Ads’ Data Manager, you’ll find hundreds of CRM integration options. The most common B2B setups include:
HubSpot and Salesforce: Both offer native, seamless integrations with Google Ads. Setup is simple. Once connected, customer stages and CRM data flow directly into the platform.
Other CRMs: If you don’t use HubSpot or Salesforce, you can build a custom data table with only the fields you want to share. Use connectors like Snowflake to send that data to Google Ads while protecting user privacy and still supplying strong automation signals.
Third-party integrations: If your CRM doesn’t integrate directly, tools like Zapier can connect almost anything to Google Ads. There’s a cost, but the performance gains typically pay for it many times over.
Embrace micro conversions with strategic values
Micro conversions signal intent. They show a “hand raiser” — someone engaged on your site who isn’t an MQL yet but is clearly interested.
The key is assigning relative value to these actions, even when you don’t know their exact revenue impact. Use a simple hierarchy to train automation what matters most:
Video views (value: 1): Shows curiosity, but qualification is unclear.
Form fills (value: 100): Reflects meaningful commitment and willingness to share personal information.
Marketing qualified leads (value: 1,000): The highest-value signal and top optimization priority.
This value structure tells automation that one MQL matters more than 999 video views. Without these distinctions, campaigns chase impressive conversion rates driven by low-value actions — while real leads slip through the cracks.
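The hierarchy above can be sketched in a few lines. This is a minimal illustration using the relative values from the list, not data from any real account:

```python
# Hypothetical illustration of the micro-conversion value hierarchy.
# The values are the relative weights suggested in the text.
CONVERSION_VALUES = {
    "video_view": 1,
    "form_fill": 100,
    "mql": 1000,
}

def total_value(events):
    """Sum the relative value of a list of conversion events."""
    return sum(CONVERSION_VALUES[e] for e in events)

# 500 video views are worth half of one MQL under this scheme, so
# automation learns to chase lead quality rather than raw volume.
print(total_value(["video_view"] * 500))  # 500
print(total_value(["mql"]))               # 1000
```

Without a value structure like this, every conversion action counts equally, and the cheapest actions dominate optimization.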
Making Performance Max work for lead generation
You might dismiss Performance Max (PMax) for lead generation — and for good reason. Run it on a basic maximize conversions strategy, and it usually produces junk leads and wastes budget.
But PMax can deliver exceptional results when you combine conversion values and offline conversion data with a Target ROAS bid strategy.
One real client example shows what’s possible. They tracked three offline conversion actions — leads, opportunities, and customers — and valued customers at 50 times a lead. The results were dramatic:
Leads increased 150%
Opportunities increased 350%
Closed deals increased 200%
Closed deals became the campaign’s top-performing metric because they reflected real, paying customers. The key difference? Using conversion values with a Target ROAS strategy instead of basic maximize conversions.
Campaign-specific goals: An underutilized feature
Campaign-specific goals let you optimize campaigns for different conversion actions, giving you far more control and flexibility.
You can set conversion goals at the account level or make them campaign-specific. With campaign-specific goals, you can:
Run a mid-funnel campaign optimized only for lead form submissions using informational keywords.
Build audiences from those form fills to capture engaged prospects.
Launch a separate campaign optimized for qualified leads, targeting that warm audience with higher-value offers like demos or trials.
This approach avoids asking someone to “marry you on the first date.” It also keeps campaigns from competing against themselves by trying to optimize for conflicting goals.
Portfolio bidding: Reaching the data threshold faster
Portfolio bidding groups similar campaigns so you can reach the critical 30-conversions-per-month threshold faster.
For example, four separate campaigns might generate 12, 11, 0, and 15 conversions. On their own, none qualify. Grouped into a single portfolio, they total 38 conversions — giving automation far more data to optimize against.
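The arithmetic behind that example can be sketched as follows; the campaign names and counts mirror the hypothetical figures above:

```python
# Rough monthly conversion level the text cites for stable automation.
THRESHOLD = 30

campaigns = {"Campaign A": 12, "Campaign B": 11,
             "Campaign C": 0, "Campaign D": 15}

# Individually, no campaign clears the threshold...
individually_qualified = [name for name, conv in campaigns.items()
                          if conv >= THRESHOLD]

# ...but pooled into one portfolio bid strategy, they do.
portfolio_total = sum(campaigns.values())

print(individually_qualified)          # []
print(portfolio_total)                 # 38
print(portfolio_total >= THRESHOLD)    # True
```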
You may still need separate campaigns for valid reasons — regional reporting, distinct budgets, or operational constraints. Portfolio bidding lets you keep that structure while still feeding the system enough volume to perform.
Bonus benefit: Portfolio bidding lets you set maximum CPCs. This prevents runaway bids when automation aggressively targets high-propensity users. This level of control is otherwise only available through tools like SA360.
First-party audiences: Powerful targeting signals
First-party audiences send strong signals about who you want to reach, which is critical for AI-powered campaigns.
If HubSpot or Salesforce is connected to Google Ads, you can import audiences and use them strategically:
Customer lists: Use them as exclusions to avoid paying for existing customers, or as lookalikes in Demand Gen campaigns.
Contact lists: Use them for observation to signal ideal audience traits, or for targeting to retarget engaged users.
Audiences make it much easier to trust broad match keywords and AI-driven campaign types like PMax or AI Max — approaches that often feel too loose for B2B without strong audience signals in place.
Leveraging AI for B2B lead generation
AI tools can significantly improve B2B advertising efficiency when you use them with intent. The key is remembering that most AI is trained on consumer behavior, not B2B buying patterns.
The essential B2B prompt addition
Always tell the AI you’re selling to other businesses. Start prompts with clear context, like: “You’re a SaaS company that sells to other businesses.” That single line shifts the AI’s lens away from consumer assumptions and toward B2B realities.
Client onboarding and profile creation
Use AI to build detailed client profiles by feeding it clear inputs, including:
What you sell and your core value.
Your unique selling propositions.
Target personas.
Ideal customer profiles.
Create a master template or a custom GPT for each client. This foundation sharpens every downstream AI task and dramatically improves accuracy and relevance.
Competitor research in minutes, not hours
Competitive analysis that once took 20–30 hours can now be done in 10–15 minutes. Ask AI to analyze your competitors and break down:
Current offers
Positioning and messaging
Value propositions
Customer sentiment
Social proof
Pricing strategies
AI delivers clean, well-structured tables you can screenshot for client decks or drop straight into Google Sheets for sorting and filtering. Use this insight to spot gaps, uncover opportunities, and identify clear strategic advantages.
Competitor keyword analysis
Use tools like Semrush or SpyFu to pull competitor keyword lists, then let AI do the heavy lifting. Create a spreadsheet with columns for each competitor’s keywords alongside your client’s keywords. Then ask the AI to:
Identify keywords competitors rank for that you don’t, to uncover gaps to fill.
Identify keywords you own that competitors don’t, to surface unique advantages.
Group keywords by theme to reveal patterns and inform campaign structure.
What once took hours of pivot tables, filtering, and manual cleanup now takes AI about five minutes.
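As a rough sketch, with made-up keyword lists, the first two comparisons reduce to simple set operations:

```python
# Hypothetical keyword lists for illustration only — real lists would
# come from an export out of Semrush or SpyFu.
client_keywords = {"b2b crm", "lead scoring", "sales automation"}
competitor_keywords = {"b2b crm", "pipeline software", "lead scoring tools"}

gaps = competitor_keywords - client_keywords        # they rank, you don't
advantages = client_keywords - competitor_keywords  # you rank, they don't
shared = client_keywords & competitor_keywords      # contested terms

print(sorted(gaps))        # ['lead scoring tools', 'pipeline software']
print(sorted(advantages))  # ['lead scoring', 'sales automation']
print(sorted(shared))      # ['b2b crm']
```

Grouping by theme is the part AI genuinely speeds up; the gap and advantage lists themselves are one-liners.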
Automating routine tasks
Negative keyword review: Create an AI artifact that learns your filtering rules and decision logic. Feed it search query reports, and it returns clear add-or-ignore recommendations. You spend time reviewing decisions instead of doing first-pass analysis, which makes SQR reviews faster and easier to run more often.
Ad copy generation: Tools like RSA generators can produce headlines and descriptions from sample keywords and destination URLs. Pair them with your custom client GPT for even stronger starting points. Always review AI-generated copy, but refining solid drafts is far faster than writing from scratch.
Experiments: testing what works
The Experiments feature is widely underused. Put it to work by testing:
Different bid strategies, including portfolio vs. standard
Match types
Landing pages
Campaign structures
Google Ads automatically reports performance, so there’s no manual math. It even includes insight summaries that tell you what to do next — apply the changes, end the experiment, or run a follow-up test.
Solutions: Pre-built scripts made easy
Solutions are prebuilt Google Ads scripts that automate common tasks, including:
Reporting and dashboards
Anomaly detection
Link checking
Flexible budgeting
Negative keyword list creation
Instead of hunting down scripts and pasting code, you answer a few setup questions and the solution runs automatically. Use caution with complex enterprise accounts, but for simpler structures, these tools can save a significant amount of time.
Key takeaways
Automation wasn’t built for lead generation, but with the right strategy, you can still make it work for B2B.
Send the right signals: Offline conversions with assigned values aren’t optional. First-party audiences add critical targeting context. Together, these signals make AI-driven campaigns work for B2B.
AI is your friend: Use AI to automate repetitive work — not to replace people. Take 50 search query reports off your team’s plate so they can focus on strategy instead of tedious analysis.
Leverage platform tools: Experiments, Solutions, campaign-specific goals, and portfolio bidding are powerful features many advertisers ignore. Use what’s already built into your ad platforms to get more out of every campaign.
Watch: It’s time to embrace automation for B2B lead gen
Let me guess: you just spent three months building a perfectly optimized product taxonomy, complete with schema markup, internal linking, and killer metadata.
Then, the product team decided to launch a site redesign without telling you. Now half your URLs are broken, the new templates strip out your structured data, and your boss is asking why organic traffic dropped 40%.
Sound familiar?
Here’s the thing: this isn’t an SEO failure, but a governance failure. It’s costing you nights and weekends trying to fix problems that should never have happened in the first place.
This article covers why weak governance keeps breaking SEO, how AI has raised the stakes, and how a visibility governance maturity model helps SEO teams move from firefighting to prevention.
Governance isn’t bureaucracy – it’s your insurance policy
I know what you’re thinking. “Great, another framework that means more meetings and approval forms.” But hear me out.
The Visibility Governance Maturity Model (VGMM) isn’t about creating red tape. It’s about establishing clear ownership, documented processes, and decision rights that prevent your work from being accidentally destroyed by teams who don’t understand SEO.
Think of it this way: VGMM is the difference between being the person who gets blamed when organic traffic tanks versus being the person who can point to documentation showing exactly where the process broke down – and who approved skipping the SEO review.
This maturity model:
Protects your work from being undone by releases you weren’t consulted on.
Documents your standards so you’re not explaining canonical tags for the 47th time.
Establishes clear ownership so you’re not expected to fix everything across six different teams.
Gets you a seat at the table when decisions affecting SEO are being made.
Makes your expertise visible to leadership in ways they understand.
The real problem: AI just made everything harder
Remember when SEO was mostly about your website and Google? Those were simpler times.
Now you’re trying to optimize for:
AI Overviews that rewrite your content.
ChatGPT citations that may or may not link back.
Perplexity summaries that pull from competitors.
Voice assistants that only cite one source.
Knowledge panels that conflict with your site.
And you’re still dealing with:
Content teams who write AI-generated fluff.
Developers who don’t understand crawl budget.
Product managers who launch features that break structured data.
Marketing directors who want “just one small change” that tanks rankings.
Without governance, you’re the only person who understands how all these pieces fit together.
When something breaks, everyone expects you to fix it – usually yesterday. When traffic is up, it’s because marketing ran a great campaign. When it’s down, it’s your fault.
You become the hero the organization depends on, which sounds great until you realize you can never take a real vacation, and you’re working 60-hour weeks.
What VGMM actually measures – in terms you care about
VGMM doesn’t care about your keyword rankings or whether you have perfect schema markup. It evaluates whether your organization is set up to sustain SEO performance without burning you out. Below are the five maturity levels that translate to your daily reality:
Level 1: Unmanaged (your current nightmare)
Nobody knows who’s responsible for SEO decisions.
Changes happen without SEO review.
You discover problems after they’ve tanked traffic.
You’re constantly firefighting.
Documentation doesn’t exist or is ignored.
Level 2: Aware (slightly better)
Leadership admits SEO matters.
Some standards exist but aren’t enforced.
You have allies but no authority.
Improvements happen but get reversed next quarter.
You’re still the only one who really gets it.
Level 3: Defined (getting somewhere)
SEO ownership is documented.
Standards exist, and some teams follow them.
You’re consulted before major changes.
QA checkpoints include SEO review.
You’re working normal hours most weeks.
Level 4: Integrated (the dream)
SEO is built into release workflows.
Automated checks catch problems before they ship.
Cross-functional teams share accountability.
You can actually take a vacation without a disaster.
Your expertise is respected and resourced.
Level 5: Sustained (unicorn territory)
SEO survives leadership changes.
Governance adapts to new AI surfaces automatically.
Problems are caught before they impact traffic.
You’re doing strategic work, not firefighting.
The organization values prevention over reaction.
Most organizations sit at Level 1 or 2. That’s not your fault – it’s a structural problem that VGMM helps diagnose and fix.
VGMM coordinates multiple domain-specific maturity models. Think of it as a health checkup that looks at all your vital signs, not just one metric.
It evaluates maturity across domains like:
SEO governance: Your core competency.
Content governance: Are writers following standards?
Performance governance: Is the site actually fast?
Accessibility governance: Is the site inclusive?
Workflow governance: Do processes exist and work?
Each domain gets scored independently, then VGMM looks at how they work together. Because excellent SEO maturity doesn’t matter if the performance team deploys code that breaks the site every Tuesday or if the content team publishes AI-generated nonsense that tanks your E-E-A-T signals.
VGMM produces a 0–100% score based on:
Domain scores: How mature is each area?
Weighting: Which domains matter most for your business?
Dependencies: Are weaknesses in one area breaking strengths in another?
Coherence: Do decision rights and accountability actually align?
The final score isn’t about effort – it’s about whether governance actually works.
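As a hedged illustration only, that scoring logic might look like the sketch below. The domain scores, weights, and dependency cap are invented for the example; the actual VGMM formula isn’t published here.

```python
# Illustrative VGMM-style scoring. All numbers are hypothetical.
domain_scores = {   # maturity per domain, 0.0-1.0
    "seo": 0.8, "content": 0.6, "performance": 0.4,
    "accessibility": 0.5, "workflow": 0.3,
}
weights = {         # business-specific weighting, sums to 1.0
    "seo": 0.30, "content": 0.25, "performance": 0.20,
    "accessibility": 0.10, "workflow": 0.15,
}

# Weighted average of domain maturity...
base = sum(domain_scores[d] * weights[d] for d in domain_scores)

# ...then capped relative to the weakest domain, reflecting the idea
# that weakness in one area undermines strength in another.
overall = min(base, min(domain_scores.values()) + 0.3)

print(f"Base score: {base:.0%}")
print(f"Overall score: {overall:.0%}")
```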
You don’t need to explain VGMM theory. You need to connect it to problems leadership already knows exist.
Frame it as risk reduction: “We’ve had three major traffic drops this year from changes that SEO didn’t review. VGMM helps us identify where our process breaks down so we can prevent this.”
Frame it as efficiency: “I’m spending 60% of my time firefighting problems that could have been prevented. VGMM establishes processes so I can focus on growth opportunities instead.”
Frame it as a competitive advantage: “Our competitors are getting cited in AI Overviews, and we’re not. VGMM evaluates whether we have the governance structure to compete in AI-mediated search.”
Frame it as scalability: “Right now, our SEO capability depends entirely on me. If I get hit by a bus tomorrow, nobody knows how to maintain what we’ve built. VGMM establishes documentation and processes that make our SEO sustainable.”
The ask: “I’d like to conduct a VGMM assessment to identify where our processes need strengthening.”
What success actually looks like
Organizations with higher VGMM maturity experience measurably better outcomes:
Fewer unexplained traffic drops because changes are reviewed.
More stable AI citations because content quality is governed.
Less rework after launches because SEO is built into workflows.
Clearer accountability because ownership is documented.
Better resource allocation because gaps are visible to leadership.
But the real win for you personally:
You stop being the hero who saves the day and become the strategist who prevents disasters.
Your expertise is recognized and properly resourced.
You can take actual vacations.
You work normal hours most of the time.
Your job becomes about building and improving, not constantly fixing.
Getting started: Practical next steps
Step 1: Self-assessment
Look at the five maturity levels. Where is your organization honestly sitting? If you’re at Level 1 or 2, you have evidence for why governance matters.
Step 2: Document current-state pain
Make a list of the last six months of SEO incidents:
Changes that weren’t reviewed.
Traffic drops from preventable problems.
Time spent fixing avoidable issues.
Requests that had to be explained multiple times.
This becomes your business case.
Step 3: Start with one domain
You don’t need to implement full VGMM immediately. Start with the SEO governance maturity model (SEOGMM):
Document your standards.
Create a review checklist.
Establish who can approve exceptions.
Get stakeholder sign-off on the process.
Step 4: Show results
Track prevented problems. When you catch an issue before it ships, document it. When a process prevents a regression, quantify the impact. Build your case for expanding governance.
Step 5: Expand systematically
Once SEOGMM is working, expand to related domains (content, performance, accessibility). Show how integrated governance catches problems that individual domain checks miss.
Why governance determines whether SEO survives
Governance isn’t about making your job harder. It’s about making your organization work better so your job becomes sustainable.
VGMM gives you a framework for diagnosing why SEO keeps getting undermined by other teams and a roadmap for fixing it. It translates your expertise into language that leadership understands. It protects your work from accidental destruction.
Most importantly, it moves you from being the person who’s always fixing emergencies to being the person who builds systems that prevent them.
You didn’t become an SEO professional to spend your career firefighting. VGMM helps you get back to doing the work that actually matters – the strategic, creative, growth-focused work that attracted you to SEO in the first place.
If you’re tired of watching your best work get undone by teams who don’t understand SEO, if you’re exhausted from being the only person who knows how everything works, if you want your expertise to be recognized and protected – start the VGMM conversation with your leadership.
The framework exists. What’s missing is someone in your organization saying, “We need to govern visibility like we govern everything else that matters.”
It is no secret that publishing SEO-friendly blog posts is one of the easiest and most effective ways to drive organic traffic and improve SERP rankings. However, in the era of artificial intelligence, blog posts matter more than ever. They help establish brand authority by consistently delivering fresh, valuable content that can be cited in AI-generated answers.
In this guide, we will share a practical, detailed approach to writing SEO-friendly blog content that not only ranks on Google SERPs but is also surfaced by AI models.
An SEO-friendly blog post now means writing with search intent and making content clear and quotable for AI systems.
Key factors for SEO-friendly blog posts include trustworthiness, machine-readability, answer-first structure, and topical authority.
Conduct thorough keyword research and find readers’ questions to match search intent effectively.
Use clear headings, improve readability, use inclusive language, and add relevant media to engage readers.
Write compelling meta titles and descriptions, link to existing content, and focus on building authority to enhance visibility.
What does an SEO-friendly blog post mean in the AI era?
The way people search for information has changed, and with it, the meaning of an SEO-friendly blog post. Before the rise of generative AI, writing an SEO-friendly blog post mostly meant this:
‘Writing content with the intention of ranking highly in search engine results pages (SERPs). The content is optimized for specific target keywords, easy to read, and provides value to the reader.’
That definition is not wrong. But it is no longer complete.
In the AI era, an SEO-friendly blog post is written with search intent first, answering a user’s question clearly and efficiently. It is not just about placing keywords in the right spots. It is about creating an information-dense piece with accurate, well-structured, and quotable sentences that AI systems can confidently extract and surface as direct answers.
The new definition clearly shows that strong SEO foundations still matter, and they matter more than ever. What has changed is how content is evaluated and discovered. Search engines and AI models now look beyond clicks and rankings to understand whether your content is trustworthy, helpful, and easy to interpret.
Here are some key factors that play a key role in determining whether a blog post is truly SEO-friendly:
Trustworthiness (E-E-A-T): Demonstrating real-world experience, expertise, and credibility helps your content stand out from low-value AI-generated rehashes.
Machine-readability: Clear structure, clean HTML, and technical signals such as schema markup help search engines and AI systems understand what your content is about.
Answer-first structure: Placing concise, direct answers at the beginning of sections makes it easier for AI models to extract and reference your content.
Topical authority: Publishing interconnected, in-depth content around a subject is far more effective than creating isolated blog posts.
9 tips to write SEO-friendly blogs for LLM and SERP visibility
Now we get to the core of this guide. Below are some foundational tips to help you plan and write SEO-friendly blog posts that are genuinely helpful, easy to understand, and focused on solving real reader problems. When done right, these practices not only improve search visibility but also shape how your brand is perceived by both users and AI systems.
1. Conduct thorough keyword research
Before you start writing a single word, start with solid keyword research. This step helps you understand how people search for a topic, which terms carry demand, and how competitive those searches are. It also ensures your content aligns with real user intent instead of assumptions.
You can use tools like Google Keyword Planner, Ahrefs, or Semrush for this. Personally, I prefer using Semrush’s Keyword Magic Tool because it quickly surfaces thousands of relevant keyword ideas around a single topic.
Keyword Magic Tool by Semrush for the relevant keyword list
Here’s how I usually approach it. I enter a broad keyword related to my topic, for example, ‘SEO.’ The tool then returns an extensive list of related keywords along with important metrics. I mainly focus on three of them:
Search intent, to understand what the user is really looking for
Keyword Difficulty (KD%), to estimate how hard it is to rank
Search volume, to gauge demand
This combination helps me choose keywords that are realistic to rank for and meaningful for readers.
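A minimal sketch of that selection logic, with hypothetical sample data and thresholds rather than real Semrush output:

```python
# Made-up keyword rows for illustration; real rows would come from a
# Keyword Magic Tool export.
keywords = [
    {"phrase": "seo basics", "kd": 35, "volume": 1900},
    {"phrase": "seo", "kd": 92, "volume": 110000},
    {"phrase": "seo checklist for blogs", "kd": 22, "volume": 480},
]

MAX_KD = 40       # skip keywords that are too competitive to rank for
MIN_VOLUME = 100  # skip keywords with negligible demand

shortlist = [k["phrase"] for k in keywords
             if k["kd"] <= MAX_KD and k["volume"] >= MIN_VOLUME]

print(shortlist)  # ['seo basics', 'seo checklist for blogs']
```

The head term “seo” drops out despite its huge volume, which is exactly the trade-off the metrics are there to surface.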
If you use Yoast SEO, this process becomes even easier. Semrush is integrated into both the free and Premium versions of Yoast SEO, so you get keyword suggestions directly in the editor. With a single click, you can access relevant keyword data while writing, making it easier to create focused, useful content from the start.
2. Find your readers’ questions

Keyword research tells you what people search for. Questions tell you why they search.
When you actively look for the questions your audience is asking, you move closer to matching search intent. This is especially important in the AI era, where search engines and AI models prioritize clear, answer-driven content.
For example, consider these two queries:
What are the key features of good running shoes?
This shows informational intent. The searcher wants to understand what makes a running shoe good.
What are the best running shoes?
This suggests a transactional or commercial intent. The searcher is likely comparing options before making a purchase.
Both questions are valid, but they require very different content approaches.
There are two simple ways I usually find relevant questions. The first is by checking the People also ask section in Google search results. By typing in a broad keyphrase, you can see related questions that Google itself considers relevant.
The People also ask section showing questions related to the broad keyphrase ‘SEO’
The second method is to use the Questions filter in Semrush’s Keyword Magic Tool. This helps uncover question-based queries directly tied to your main topic.
Apart from these methods, I also like using Google’s AI Overview and AI Mode as a quick research layer. When I search for my main topic, I pay close attention to AI-cited sources, as they often surface the broad questions people are actively asking. The structured points and highlighted terms usually reflect the answers and subtopics that matter most to users. If I want to go deeper, I click “Show more,” which reveals additional angles and follow-up questions I might not have considered initially.
AI cited sources by Google AI Overview
Finding and answering these questions helps you do lightweight online audience research and create content that feels genuinely helpful. It also increases the chances of your blog post being referenced in AI-generated answers, since LLMs are designed to surface clear responses to specific questions.
3. Structure your content with headings and subheadings
In our 2026 SEO predictions, we highlighted that editorial quality is no longer just about good writing. It has become a machine-readability requirement. Content that is clearly structured is easier to understand, reuse, and surface across both search and AI-driven experiences.
How LLMs use headings
AI models rely on headings to identify topics, questions, and answers within a page. When your content is broken into clear sections, it becomes easier for them to extract key information and include it in AI-generated summaries.
Why headings still matter for SEO
Headings help search engines understand the hierarchy of your content and the main points you are trying to rank for. They also improve scannability and usability, especially on mobile devices, and increase the chances of earning featured snippets.
Good structure has always been a core SEO principle. In the AI era, it remains one of the simplest and most effective ways to improve visibility and discoverability.
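The machine-readability point above can be made concrete. The sketch below shows how a crawler or an LLM ingestion pipeline might pull a page's heading outline using only Python's standard library; the `HeadingOutline` class name and the sample HTML are illustrative, not from any real pipeline.

```python
from html.parser import HTMLParser

class HeadingOutline(HTMLParser):
    """Collects (level, text) pairs for h1-h6 tags, roughly the way
    a crawler or AI pipeline might sketch a page's structure."""

    def __init__(self):
        super().__init__()
        self._current = None  # heading level currently open, or None
        self.outline = []

    def handle_starttag(self, tag, attrs):
        if tag in {"h1", "h2", "h3", "h4", "h5", "h6"}:
            self._current = int(tag[1])

    def handle_data(self, data):
        # Only record text that appears inside an open heading tag.
        if self._current is not None and data.strip():
            self.outline.append((self._current, data.strip()))
            self._current = None

    def handle_endtag(self, tag):
        if tag in {"h1", "h2", "h3", "h4", "h5", "h6"}:
            self._current = None

html = """
<h1>SEO-friendly blog posts</h1>
<h2>Structure your content</h2>
<h2>Focus on readability</h2>
"""
parser = HeadingOutline()
parser.feed(html)
print(parser.outline)
# [(1, 'SEO-friendly blog posts'), (2, 'Structure your content'), (2, 'Focus on readability')]
```

A clean outline like this is exactly what good heading hierarchy produces; a page with skipped levels or decorative headings yields a muddled one.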
4. Focus on readability aspects
An SEO-friendly blog post should be easy to read before it can rank or get picked up by AI systems. Readability helps readers stay engaged and helps search engines and AI models better understand your content.
A few key readability aspects to focus on while writing:
Avoid passive voice where possible: active sentences are clearer and more direct. They make it easier for readers to understand who is doing what, and they reduce ambiguity for AI systems processing your content.
Use transition words: words like “because,” “for example,” and “however” guide readers through your content. They improve flow and make it easier to follow relationships between sentences and paragraphs.
Keep sentences and paragraphs short: long, complex sentences reduce clarity. Breaking content into shorter sentences and paragraphs improves scannability and comprehension.
Avoid starting consecutive sentences the same way: varying sentence structure keeps your writing engaging and prevents it from sounding repetitive or robotic.
The readability analysis in the Yoast SEO for WordPress metabox
If you are a WordPress or Shopify user, Yoast SEO (and Yoast SEO for Shopify for Shopify users) can help here. Its readability analysis checks for passive voice, transition words, sentence length, and other clarity signals while you write. If you prefer drafting in Google Docs, you can use the Yoast SEO Google Docs add-on to get the same readability feedback before publishing.
Good readability is not just about pleasing algorithms. It helps readers understand your message more quickly and makes your content easier to reuse in AI-generated responses.
5. Use inclusive language
Inclusive language helps ensure your content is respectful, clear, and welcoming to a broader audience. It avoids assumptions about gender, ability, age, or background, and focuses on people-first communication.
From an SEO and AI perspective, inclusive language also improves clarity. Content that avoids vague or biased terms is easier to interpret, digest, and trust. This directly supports brand perception, especially when your content is surfaced in AI-generated responses.
Yoast SEO supports this through its inclusive language check, which flags potentially non-inclusive terms and suggests better alternatives. This feature is available in Yoast SEO, Yoast SEO Premium, and in the Yoast SEO Google Docs add-on, making it easier to build inclusive habits directly into your writing workflow.
Inclusive language ensures your content is intentional, thoughtful, and clear, aligning closely with what modern SEO and AI systems value.
6. Add relevant media and interaction points
A well-written blog post should not feel like a long block of text. Adding the right media and interaction points helps guide readers through your content, keeps them engaged, and encourages them to take action.
Why media matters
Media elements such as images, videos, embeds, and infographics make your content easier to consume and more engaging. Blog posts that include images receive 94% more views than those without, likely because visuals break up large blocks of text and make pages easier to scan.
Video content plays an even bigger role. Embedded videos help explain complex ideas faster and can significantly improve organic visibility compared to text-only posts. Together, these elements encourage readers to stay longer on your page, which is a strong signal of content quality for search engines and AI systems alike.
Media also improves accessibility. Properly optimized images with descriptive alt text make content usable for screen readers, while original visuals, screenshots, or diagrams help reinforce credibility and expertise.
Use interaction points to guide and engage readers
Interaction does not always mean complex features. Even simple elements can significantly improve engagement when used well.
Table of contents and sidebar CTA used as interaction points in a Yoast blog post
A table of contents, for example, allows readers to jump directly to the section they care about most.
Other interaction points include clear calls to action (CTAs) that guide readers to the next step, relevant recommendations that encourage users to keep exploring your site, and social sharing buttons that make it easy to amplify your content. Interactive elements like polls, quizzes, or embedded tools further encourage participation and increase time on page.
7. Plan your content length
Content length still matters, but not in the way many people think it does.
A common question is what the ideal word count is for a blog post that performs well. A 2024 study by Backlinko found that while longer content tends to attract more backlinks, the average page ranking on Google’s first page contains around 1,500 words.
That said, this should not be treated as a fixed benchmark. The ideal length is the one that fully answers the user’s question. In an AI-driven era, publishing long content that adds little value or is padded with unnecessary fluff can do more harm than good.
If a topic genuinely requires a longer format, breaking the content into clear subheadings makes a big difference. I personally prefer structuring long articles this way because it improves readability, helps readers navigate the page more easily, and makes the content easier for search engines and AI systems to understand.
If you use Yoast SEO or Yoast SEO Premium, the text length check can help here. It exists to prevent pages from being too thin to provide real value. Pages with very low word counts often lack context and struggle to demonstrate relevance or expertise. Yoast SEO flags such cases as a warning, while clearly indicating that adding more words alone does not guarantee better rankings.
Think of word count as a guideline, not a goal. Your focus should always be on clarity, completeness, and usefulness.
8. Link to existing content
Internal linking is one of the most underrated SEO practices, yet it does a lot of heavy lifting behind the scenes.
By linking to relevant content within your site, you help readers discover additional resources and help search engines understand how your content is connected. Over time, this strengthens topical authority and signals that your site consistently covers a subject in depth.
Good internal linking follows a few simple principles:
Link only when it adds value and feels natural in context
Use clear, descriptive anchor text so users and search engines know what to expect
Avoid linking to outdated URLs or pages that redirect, as this wastes crawl signals
Internal links also keep readers engaged longer by guiding them to related articles. This improves overall site engagement while reinforcing your expertise on a topic.
From an AI and search perspective, internal linking plays an even bigger role. Modern search systems analyze content structure, metadata hierarchies, schema markup, and internal links to assess topical depth and clarity. Well-linked content clusters make it easier for search engines and AI systems to understand what your site is about and which pages are most important.
For WordPress users, Yoast SEO Premium offers internal linking suggestions directly in the editor. This makes it easier to spot relevant linking opportunities as you write, helping you build stronger content connections without interrupting your workflow.
9. Optimize your meta titles and descriptions
Meta titles and meta descriptions help users decide whether to click on your content. While meta descriptions are not a direct ranking factor, they strongly influence click-through rates, making them an essential part of writing SEO-friendly blog posts.
A good meta title clearly communicates what the page is about. Place your main keyword near the beginning, keep it concise, and aim for roughly 55-60 characters so it doesn’t get truncated in search results.
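To illustrate the character guideline, here is a minimal sketch of a word-boundary truncation check. The 60-character cap is a rule of thumb, not a spec: in practice search engines truncate by pixel width, so treat this only as a rough pre-publish sanity check.

```python
def truncate_title(title: str, limit: int = 60) -> str:
    """Trim a meta title to roughly `limit` characters at a word
    boundary, appending an ellipsis the way search snippets do.
    The limit is an approximation; real truncation is pixel-based."""
    if len(title) <= limit:
        return title
    cut = title[:limit].rsplit(" ", 1)[0]  # back up to the last full word
    return cut.rstrip(" ,;:") + "…"

print(truncate_title("Tips and tricks to write SEO-friendly blog posts in the AI era"))
```

If the function changes your title, the tail of it will likely be invisible in search results, which is a cue to front-load the keyword and shorten.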
Meta descriptions act like a short invitation. They should explain what the reader will gain from clicking and why it matters. Instead of stuffing keywords, focus on clarity and usefulness. Mention what aspects of the topic your content covers and how it helps the reader. Simple language works best.
Pro tip: Using action-oriented verbs such as “learn,” “discover,” or “read” can also encourage clicks and make your description more engaging.
If you use Yoast SEO Premium, this process becomes much easier. The AI-powered meta title and description generation feature helps you create relevant, well-structured metadata in just one click. It follows SEO best practices while producing descriptions and titles that are clear, engaging, and aligned with search intent.
Bonus tips
Once you have the fundamentals in place, a few extra refinements can go a long way. The following bonus tips help improve usability, clarity, and long-term discoverability. They are not mandatory, but when applied thoughtfully, they can make your blog posts more helpful for readers and easier to surface across search engines and AI-driven experiences.
1. Add a table of contents
A table of contents (TOC) helps readers quickly understand what your blog post covers and jump straight to the section they care about. This is especially useful for long-form content, where users often scan rather than scroll from top to bottom.
From an SEO perspective, a TOC improves structure and readability and can create jump links in search results, which may increase click-through rates. It reduces bounce rates by helping users find answers faster and improves accessibility by offering clear navigation.
By the way, did you know Yoast can help you here too? Yes, the Yoast SEO Internal linking blocks feature lets you add a TOC block to your blog post that automatically includes all the headings with just one click!
2. Add key takeaways
Key takeaways help readers quickly grasp the main points of your blog post without having to read the whole post. This is especially helpful for time-constrained users who want quick, actionable insights.
Summaries also support SEO by reinforcing topic relevance and improving content comprehension for search engines and AI systems. Well-written takeaways might increase visibility in featured snippets and “People also ask” results.
If you use Yoast SEO Premium, the Yoast AI Summarize feature can generate key takeaways for your content in just one click, making it easier to add concise summaries without extra effort.
3. Add an FAQ section
An FAQ section gives you space to answer specific questions your readers may still have after reading your post. This improves user experience by addressing concerns directly and building trust.
FAQs also help search engines better understand your content by clearly outlining common questions and answers related to your topic. While they can support rankings, their real value lies in reducing friction, improving clarity, and even supporting conversions by clearing doubts.
4. Short permalinks
A permalink is the permanent URL of your blog post. Short, descriptive permalinks are easier to read, easier to share, and more likely to be clicked.
Good permalinks clearly describe what the page is about, avoid unnecessary words, and include the main topic where relevant. They improve usability and help search engines understand page context at a glance.
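As a rough sketch of those principles, the function below builds a short, descriptive slug from a post title. The stop-word list and the six-word cap are illustrative choices, not a standard; most CMSs (WordPress included) generate slugs for you, so this only shows the idea.

```python
import re
import unicodedata

def slugify(title: str, max_words: int = 6) -> str:
    """A minimal slug generator: ASCII-fold, lowercase, strip
    punctuation, drop filler words, keep the first few that remain."""
    stop_words = {"a", "an", "the", "to", "in", "of", "and", "for"}
    # Fold accented characters to plain ASCII so URLs stay readable.
    ascii_title = (
        unicodedata.normalize("NFKD", title)
        .encode("ascii", "ignore")
        .decode("ascii")
    )
    words = re.findall(r"[a-z0-9]+", ascii_title.lower())
    kept = [w for w in words if w not in stop_words][:max_words]
    return "-".join(kept)

print(slugify("Tips and tricks to write SEO-friendly blog posts in the AI era"))
# → 'tips-tricks-write-seo-friendly-blog'
```

The output keeps the main topic visible at a glance, which is the whole point of a good permalink.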
5. Focus on building authority (EEAT aspect)
Building authority is critical, especially for sites that cover sensitive or high-impact topics. Demonstrating Experience, Expertise, Authoritativeness, and Trustworthiness (E-E-A-T) helps both users and search engines trust your content.
This includes citing reliable sources, showing real-world experience, maintaining consistent quality, and clearly communicating who is behind the content. Strong E-E-A-T signals are especially important for YMYL topics, where accuracy and credibility matter most.
6. Plan content distribution
Writing a great blog post is only half the work. Distribution helps your content reach the right audience.
Sharing posts on social media, repurposing key insights into newsletters, and earning backlinks from relevant sites can drive more traffic and visibility. Distribution also increases engagement signals and helps your content gain traction faster, which supports long-term SEO performance.
Target your readers, always!
In AI-driven search, retrieval beats ranking. Clarity, structure, and language alignment now decide if your content gets seen. – Carolyn Shelby
This perfectly sums up what writing SEO-friendly blog posts looks like today. Success is no longer just about rankings. It is about being clear, helpful, and easy to understand for both readers and AI systems.
Throughout this guide, we focused on the fundamentals that still matter: understanding search intent, structuring content well, improving readability, using inclusive language, and supporting your writing with media, internal links, and thoughtful metadata. These are not new tricks. They are strong SEO foundations, adapted for how search and discovery work in the AI era.
If there is one takeaway, it is this: always write for your readers first. When your content genuinely helps people, answers their questions, and respects how they search and read, it naturally becomes easier to surface across SERPs and AI-driven experiences.
Good SEO has not changed. It has simply become more human.
Published 2026-02-10: Tips and tricks to write SEO-friendly blog posts in the AI era
If you’ve been managing PPC accounts for any length of time, you don’t need a research report to tell you something has changed.
You see it in the day-to-day work:
GCLIDs missing from URLs.
Conversions arriving later than expected.
Reports that take longer to explain while still feeling less definitive than they used to.
When that happens, the reflex is to assume something broke – a tracking update, a platform change, or a misconfiguration buried somewhere in the stack.
But the reality is usually simpler. Many measurement setups still assume identifiers will reliably persist from click to conversion, and that assumption no longer holds consistently.
Measurement hasn’t stopped working. The conditions it depends on have been shifting for years, and what once felt like edge cases now show up often enough to feel like a systemic change.
Why this shift feels so disorienting
I’ve been close to this problem for most of my career.
Before Google Ads had native conversion tracking, I built my own tracking pixels and URL parameters to optimize affiliate campaigns.
Later, while working at Google, I was involved in the acquisition of Urchin as the industry moved toward standardized, comprehensive measurement.
That era set expectations that nearly everything could be tracked, joined, and attributed at the click level. Google made advertising feel measurable, controllable, and predictable.
As the ecosystem now shifts toward more automation, less control, and less data, that contrast can be jarring.
It has been for me. Much of what I once relied on to interpret PPC data no longer applies in the same way.
Making sense of today’s measurement environment requires rethinking those assumptions, not trying to restore the old ones. This is how I think about it now.
The old world: click IDs and deterministic matching
For many years, Google Ads measurement followed a predictable pattern.
A user clicked an ad.
A click ID, or gclid, was appended to the URL.
The site stored it in a cookie.
When a conversion fired, that identifier was sent back and matched to the click.
This produced deterministic matches, supported offline conversion imports, and made attribution relatively easy to explain to stakeholders.
As long as the identifier survived the journey, the system behaved in ways most advertisers could reason about.
We could literally see what happened with each click and which ones led to individual conversions.
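The click-to-conversion flow described above can be sketched in a few lines. A dict stands in for the cookie and the backend database, and the function names are hypothetical; the point is only to show how deterministic matching hinges on the gclid surviving from landing to conversion.

```python
from urllib.parse import urlparse, parse_qs

# In production the click ID lives in a first-party cookie plus a
# database; an in-memory dict stands in for both in this sketch.
click_store = {}

def record_landing(url: str, visitor_id: str) -> None:
    """On landing, pull the gclid off the URL (if it survived) and
    remember which visitor it belongs to."""
    params = parse_qs(urlparse(url).query)
    gclid = params.get("gclid", [None])[0]
    if gclid:
        click_store[visitor_id] = gclid

def record_conversion(visitor_id: str):
    """On conversion, return the stored click ID so it can be sent
    back to the ad platform; None means no deterministic match."""
    return click_store.get(visitor_id)

record_landing("https://example.com/?utm_source=google&gclid=Cj0abc123", "visitor-1")
record_landing("https://example.com/", "visitor-2")  # gclid stripped in transit

print(record_conversion("visitor-1"))  # Cj0abc123
print(record_conversion("visitor-2"))  # None: no identifier survived
```

visitor-2 is the case the rest of this article is about: nothing in the system is misconfigured, yet the deterministic join is simply impossible.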
That reliability depended on a specific set of conditions.
Browsers needed to allow parameters through.
Cookies had to persist long enough to cover the conversion window.
Users had to accept tracking by default.
Luckily, those conditions were common enough that the model worked really well.
Why that model breaks more often now
Browsers now impose tighter limits on how identifiers are stored and passed.
Apple’s Intelligent Tracking Prevention, Firefox’s Enhanced Tracking Protection, private browsing modes, and consent requirements all reduce how long tracking data persists, or whether it’s stored at all.
URL parameters may be stripped before a page loads. Cookies set via JavaScript may expire quickly. Consent banners may block storage entirely.
Click IDs sometimes never reach the site, or they disappear before a conversion occurs.
This is expected behavior in modern browser environments, not an edge case, so we have to account for it.
Trying to restore deterministic click-level tracking usually means working against the constant push toward more privacy and the resulting browser behaviors.
This is another of the many evolutions of online advertising we simply have to get on board with, and I’ve found that designing systems to function with partial data beats fighting the tide.
The adjustment isn’t just technical
On my own team, GA4 is a frequent source of frustration. Not because it’s incapable, but because it’s built for a world where some data will always be missing.
We hear the same from other advertisers: the data isn’t necessarily wrong, but it’s harder to reason about.
This is the bigger challenge. Moving from a world where nearly everything was observable to one where some things are inferred requires accepting that measurement now operates under different conditions.
That mindset shift has been uneven across the industry because measurement lives at the periphery of where many advertisers spend most of their time, working in ad platforms.
A lot of effort goes into optimizing ad platform settings when sometimes the better use of time might’ve been fixing broken data so better decisions could be made.
What still works: Client-side and server-side approaches
So what approaches hold up under current constraints? The answer involves both client-side and server-side measurement.
Pixels still matter, but they have limits
Client-side pixels, like the Google tag, continue to collect useful data.
They fire immediately, capture on-site actions, and provide fast feedback to ad platforms, whose automated bidding systems rely on this data.
But these pixels are constrained by the browser. Scripts can be blocked, execution can fail and consent settings can prevent storage. A portion of traffic will never be observable at the individual level.
When pixel tracking is the only measurement input, these gaps affect both reporting and optimization. Pixels haven’t stopped working. They just no longer cover every case.
Changing how pixels are delivered
Some responses to declining pixel data focus on the mechanics of how pixels are served rather than measurement logic.
Google Tag Gateway changes where tag requests are routed, sending them through a first-party, same-origin setup instead of directly to third-party domains.
This can reduce failures caused by blocked scripts and simplify deployment for teams using Google Cloud.
What it doesn’t do is define events, decide what data is collected, or correct poor tagging choices. It improves delivery reliability, not measurement logic.
This distinction matters when comparing Tag Gateway and server-side GTM.
Tag Gateway focuses on routing and ease of setup.
Server-side GTM enables event processing, enrichment, and governance. It requires more maintenance and technical oversight, but it provides more control.
The two address different problems.
Here’s the key point: better infrastructure affects how data moves, not what it means.
Event definitions, conversion logic, and consistency across systems still determine data quality.
A reliable pipeline delivers whatever it’s given: garbage in still comes out as garbage, just more dependably.
Offline conversion imports: Moving measurement off the browser
Offline conversion imports take a different approach, moving measurement away from the browser entirely. Conversions are recorded in backend systems and sent to Google Ads after the fact.
Because this process is server to server, it’s less affected by browser privacy restrictions. It works for longer sales cycles, delayed purchases, and conversions that happen outside the site.
This is why Google commonly recommends running offline imports alongside pixel-based tracking. The two cover different parts of the journey. One is immediate, the other persists.
Offline imports also align with current privacy constraints. They rely on data users provide directly, such as email addresses during a transaction or signup.
The data is processed server-side and aggregated, reducing reliance on browser identifiers and short-lived cookies.
Offline imports don’t replace pixels. They reduce dependence on them.
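To make the server-to-server idea concrete, here is a sketch that formats CRM-recorded conversions in the shape of Google Ads’ spreadsheet upload template for click conversions. The column names and timestamp format below match the template as commonly documented, but verify them against the current Google Ads documentation before uploading; the sample data is invented.

```python
import csv
import io

def build_offline_import(conversions):
    """Format recorded conversions as a CSV roughly matching Google
    Ads' click-conversion upload template (verify column names and
    time format against the current docs before using)."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(
        ["Google Click ID", "Conversion Name", "Conversion Time",
         "Conversion Value", "Conversion Currency"]
    )
    for c in conversions:
        writer.writerow(
            [c["gclid"], c["name"], c["time"], c["value"], c["currency"]]
        )
    return buf.getvalue()

rows = [{
    "gclid": "Cj0abc123",
    "name": "Signed contract",       # conversion action name in Google Ads
    "time": "2026-02-10 14:30:00",   # when the CRM recorded the sale
    "value": 1200,
    "currency": "USD",
}]
print(build_offline_import(rows))
```

Because the gclid was captured at click time and stored server-side, nothing in this flow depends on a cookie still existing weeks later when the deal closes.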
Even with pixels and offline imports working together, some conversions can’t be directly observed.
Matching when click IDs are missing
When click IDs are unavailable, Google Ads can still match conversions using other inputs.
This often begins with deterministic matching through hashed first-party identifiers such as email addresses, when those identifiers can be associated with signed-in Google users.
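The hashing step is simple enough to show directly. This mirrors the normalization Google describes for enhanced conversions (trim whitespace, lowercase, then SHA-256); Gmail-specific rules such as removing dots before the @ are deliberately omitted here, so treat this as a sketch rather than a complete implementation.

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    """Lowercase and trim the address, then SHA-256 it. Mirrors the
    basic normalization for enhanced conversions; Gmail-specific
    dot-removal rules are omitted in this sketch."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The raw address never leaves your systems; only the digest is sent.
print(normalize_and_hash("  Jane.Doe@Example.com "))
```

Because both sides hash the same normalized string, Google can match the digest against signed-in users without either party exchanging the plaintext address.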
When deterministic matching isn’t possible, the system relies on aggregated and validated signals rather than reconstructing individual click paths.
These can include session-level attributes and limited, privacy-safe IP information, combined with timing and contextual constraints.
This doesn’t recreate the old click-level model, but it allows conversions to be associated with prior ad interactions at an aggregate level.
One thing I’ve noticed: adding these inputs typically improves matching before it affects bidding.
Bidding systems account for conversion lag and validate new signals over time, which means imported or modeled conversions may appear in reporting before they’re fully weighted in optimization.
Matching, attribution, and bidding are related but separate processes. Improvements in one don’t immediately change the others.
Modeled conversions as a standard input
Modeled conversions are now a standard part of Google Ads and GA4 reporting.
They’re used when direct observation isn’t possible, such as when consent is denied or identifiers are unavailable.
These models are constrained by available data and validated through consistency checks and holdback experiments.
When confidence is low, modeling may be limited or not applied. Modeled data should be treated as an expected component of measurement rather than an exception.
Tools like Google Tag Gateway or Enhanced Conversions for Leads help recover measurement signal, but they don’t override user intent.
Routing data through a first-party domain doesn’t imply consent. Ad blockers and restrictive browser settings are explicit signals.
Overriding them may slightly increase the measured volume, but it doesn’t align with users’ expectations regarding how your organization uses their data.
Legal compliance and user intent aren’t the same thing. Measurement systems can respect both, but doing so requires deliberate choices.
Designing for partial data
Missing signals are normal. Measurement systems that assume full visibility will continue to break under current conditions.
Redundancy helps: pixels paired with hardened delivery, offline imports paired with enhanced identifiers, and multiple incomplete signals instead of a single complete one.
But here’s where things get interesting. Different systems will see different things, and this creates a tension many advertisers now face daily.
Some clients tell us their CRM data points clearly in one direction, while Google Ads automation, operating on less complete inputs, nudges campaigns another way.
In most cases, neither system is wrong. They’re answering different questions with different data, on different timelines. Operating in a world of partial observability means accounting for that tension rather than trying to eliminate it.
The shift toward privacy-first measurement changes how much of the user journey can be directly observed. That changes our jobs.
The goal is no longer perfect reconstruction of every click, but building measurement systems that remain useful when signals are missing, delayed, or inferred.
Different systems will continue to operate with different views of reality, and alignment comes from understanding those differences rather than trying to eliminate them.
In this environment, durable measurement depends less on recovering lost identifiers and more on thoughtful data design, redundancy, and human judgment.
Agentic AI is increasingly appearing in leadership conversations, often accompanied by big claims and unclear expectations. For SEO leaders working with ecommerce brands, this creates a familiar challenge.
Executives hear about autonomous agents, automated purchasing, and AI-led decisions, and they want to know what this really means for growth, risk, and competitiveness.
What they don’t need is more hype. They need clear explanations, grounded thinking, and practical guidance.
This is where SEO leaders can add real value, not by predicting the future, but by helping leadership understand what is changing, what isn’t, and how to respond without overreacting. Here’s how.
Start by explaining what ‘agentic’ actually means
A useful first step is to remove the mystery from the term itself. Agentic systems don’t replace customers, they act on behalf of customers. The intent, preferences, and constraints still come from a person.
What changes is who does the work.
Discovery, comparison, filtering, and sometimes execution are handled by software that can move faster and process more information than a human can.
When speaking to executive teams, a simple framing works best:
“We’re not losing customers, we’re adding a new decision-maker into the journey. That decision-maker is software acting as a proxy for the customer.”
Once this is clear, the conversation becomes calmer and more practical, and the focus moves away from fear and toward preparation.
Keep expectations realistic and avoid the hype
Another important role for SEO leaders is to slow the conversation down. Agentic behavior will not arrive everywhere at the same time. Its impact will be uneven and gradual.
Some categories will see change earlier because their products are standardized and data is already well structured. Others will move more slowly because trust, complexity, or regulation makes automation harder.
This matters because leadership teams often fall into one of two traps:
Panic, where plans are rewritten too quickly, budgets move too fast, and teams chase futures that may still be some distance away.
Dismissal, where nothing changes until performance clearly drops, and by then the response is rushed.
SEO leaders can offer a steadier view. Agentic AI accelerates trends that already exist. Personalized discovery, fewer visible clicks, and more pressure on data quality are not new problems.
Agents simply make them more obvious. Seen this way, agentic AI becomes a reason to improve foundations, not a reason to chase novelty.
Change the conversation from rankings to eligibility
One of the most helpful shifts in executive conversations is moving away from rankings as the main outcome of SEO. In an agent-led journey, the key question isn’t “do we rank well?” but “are we eligible to be chosen at all?”
Eligibility depends on clarity, consistency, and trust. An agent needs to understand what you sell, who it is for, how much it costs, whether it is available, and how risky it is to choose you on behalf of a user. This is a strong way to connect SEO to commercial reality.
Questions worth raising include whether product information is consistent across systems, whether pricing and availability are reliable, and whether policies reduce uncertainty or create it. Framed this way, SEO becomes less about chasing traffic and more about making the business easy to select.
Explain why SEO no longer sits only in marketing
Many executives still see SEO as a marketing channel, but agentic behavior challenges that view.
Selection by an agent depends on factors that sit well beyond marketing. Data quality, technical reliability, stock accuracy, delivery performance, and payment confidence all play a role.
SEO leaders should be clear about this. This isn’t about writing more content. It’s about making sure the business is understandable, reliable, and usable by machines.
Positioned correctly, SEO becomes a connecting function that helps leadership see where gaps in systems or data could prevent the brand from being selected. This often resonates because it links SEO to risk and operational health, not just growth.
Prepare for conversational discovery at the top of the funnel
For most ecommerce brands, the earliest impact of agentic systems will be at the top of the funnel. Discovery becomes more conversational and more personal.
Users describe situations, needs, and constraints instead of typing short search phrases, and the agent then turns that context into actions.
This reduces the value of simply owning category head terms. If an agent knows a user’s budget, preferences, delivery expectations, and past behavior, it doesn’t behave like a first-time visitor. It behaves like a well-informed repeat customer.
This creates a reporting challenge. Some SEO work will no longer look like direct demand creation, even though it still influences outcomes. Leadership teams need to be prepared for this shift.
Reframe consideration as filtering, not persuasion
The middle of the funnel also changes shape. Today, consideration often involves reading reviews, comparing options, and seeking reassurance.
In an agent-led journey, consideration becomes a filtering process, where the agent removes options it believes the user would reject and keeps those that fit.
This has clear implications. Generic content becomes less effective as a traffic driver because agents can generate summaries and comparisons instantly. Trust signals become structural, meaning claims need to be backed by consistent and verifiable information.
In many cases, a brand may be chosen without the user being consciously aware of it. That can be positive for conversion, but risky for long-term brand strength if recognition isn’t built elsewhere.
Executives care about measurement, and agentic AI makes this harder. As more discovery and consideration happen inside AI systems, fewer interactions leave clean attribution trails. Some impact will show up as direct traffic, and some will not be visible at all.
SEO leaders should address this early. This isn’t a failure of optimization. It reflects the limits of today’s analytics in a more mediated world.
The conversation should move toward directional signals and blended performance views, rather than precise channel attribution that no longer reflects how decisions are made.
Promote a proactive, low-risk response
The most important part of the leadership discussion is what to do next. The good news is that most sensible responses to agentic AI are low risk.
Improving product data quality, reducing inconsistencies across platforms, strengthening reliability signals, and fixing technical weaknesses all help today, regardless of how quickly agents mature.
Investing in brand demand outside search also matters. If agents handle more of the comparison work, brands that users already trust by name are more likely to be selected.
This reassures leaders that action doesn’t require dramatic change, only disciplined improvement.
Agentic AI changes the focus, not the fundamentals
For SEO leaders, agentic AI changes the focus of the role. The work shifts from optimizing pages to protecting eligibility, from chasing visibility to reducing ambiguity, and from reporting clicks to explaining influence.
This requires confidence, clear communication, and a willingness to challenge hype. Agentic AI makes SEO more strategic, not less important.
Agentic AI should not be treated as an immediate threat or a guaranteed advantage. It’s a shift in how decisions are made.
For ecommerce brands, the winners will be those that stay calm, communicate clearly, and adapt their SEO thinking from driving clicks to earning selection.
That is the conversation SEO leaders should be having now.
How SEO leaders can explain agentic AI to ecommerce executives (2026-02-10)
And it has big implications for how we should think about tracking AI visibility for brands.
In his research, he tested prompts asking for recommendations across all sorts of products and services, from chef’s knives to cancer care hospitals to Volvo dealerships in Los Angeles.
Basically, he found that:
AIs rarely recommend the same list of brands in the same order twice.
For a given topic (e.g., running shoes), AIs recommend a certain handful of brands far more frequently than others.
For my research, as always, I’m focusing exclusively on B2B use cases. Plus, I’m building on Fishkin’s work by addressing these additional questions:
Does prompt complexity affect the consistency of AI recommendations?
Does the competitiveness of the category affect the consistency of recommendations?
Methodology
To explore those questions, I first designed 12 prompts:
Competitive vs. niche: Six of the prompts are about highly competitive B2B software categories (e.g., accounting software), and the other six are about less crowded categories (e.g., user entity behavior analytics (UEBA) software). I identified the categories using Contender’s database, which tracks how many brands ChatGPT associates with 1,775 different software categories.
Simple vs. nuanced prompts: Within both the “competitive” and “niche” sets, half of the prompts are simple (“What’s the best accounting software?”) and the other half are nuanced prompts that include a persona and use case (“For a Head of Finance focused on ensuring financial reporting accuracy and compliance, what’s the best accounting software?”)
I ran the 12 prompts 100 times, each, through the logged-out, free version of ChatGPT at chatgpt.com (i.e., not the API). I used a different IP address for each of the 1,200 interactions to simulate 1,200 different users starting new conversations.
Limitations: This research only covers responses from ChatGPT. But given the patterns in Fishkin’s results and the similar probabilistic nature of LLMs, you can probably generalize the directional (not absolute value) findings below to most/all AIs.
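The core measurement behind this kind of study is simple to sketch: collect N responses to the same prompt, then count what share of responses mention each brand. The sketch below is a simplified illustration, not the author’s actual pipeline; the brand list is assumed to be known in advance, and matching by lowercase substring is a simplification (real matching would need to handle aliases, word boundaries, and brand-name collisions).

```python
from collections import Counter

def brand_mention_rates(responses, brands):
    """Given N response texts and a list of brand names to look for,
    return each brand's share of responses that mention it (0.0-1.0)."""
    counts = Counter()
    for text in responses:
        lowered = text.lower()
        for brand in brands:
            # Naive substring match; a real pipeline would be stricter.
            if brand.lower() in lowered:
                counts[brand] += 1
    n = len(responses)
    return {brand: counts[brand] / n for brand in brands}
```

With 100 responses per prompt, these rates are what the “mentioned 80%+ of the time” and “less than 20% of the time” figures below refer to.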
Findings
So what happens when 100 different people submit the same prompt to ChatGPT, asking for product recommendations?
How many ‘open slots’ in ChatGPT responses are available to brands?
On average, ChatGPT will mention 44 brands across 100 different responses. But one of the response sets included as many as 95 brands – it really depends on the category.
Competitive vs. niche categories
On that note, for prompts covering competitive categories, ChatGPT mentions about twice as many brands per 100 responses compared to the responses to prompts covering “niche” categories. (This lines up with the criteria I used to select the categories I studied.)
Simple vs. nuanced prompts
On average, ChatGPT mentioned slightly fewer brands in response to nuanced prompts. But this wasn’t a consistent pattern – for any given software category, sometimes nuanced questions ended up with more brands mentioned, and sometimes simple questions did.
This was a bit surprising, since I expected more specific requests (e.g., “For a SOC analyst needing to triage security alerts from endpoints efficiently, what’s the best EDR software?”) to consistently yield a narrower set of potential solutions from ChatGPT.
I think ChatGPT might not be better at tailoring a list of solutions to a specific use case because it doesn’t have a deep understanding of most brands. (More on this data in an upcoming note.)
Return of the ’10 blue links’
In each individual response, ChatGPT will, on average, mention only 10 brands.
There’s quite a range, though – a minimum of 6 brands per response and a maximum of 15 when averaging across response sets.
But a single response typically names about 10 brands regardless of category or prompt type.
The big difference is in how much the pool of brands rotates across responses – competitive categories draw from a much deeper bench, even though each individual response names a similar count.
Everything old (in SEO) truly is new again (in GEO/AEO). It reminds me of trying to get a placement in one of Google’s “10 blue links”.
How consistent are ChatGPT’s brand recommendations?
When you ask ChatGPT for a B2B software recommendation 100 different times, there are only ~5 brands, on average, that it’ll mention 80%+ of the time.
To put that in context, that’s just 11% of the ~44 brands it mentions across those 100 responses.
So it’s quite competitive to become one of the brands ChatGPT consistently mentions whenever someone asks for recommendations in your category.
As you’d expect, these “dominant” brands tend to be big, established brands with strong recognition. For example, the dominant brands in the accounting software category are QuickBooks, Xero, Wave, FreshBooks, Zoho, and Sage.
If you’re not a big brand, you’re better off being in a niche category:
When you operate in a niche category, not only are you literally competing with fewer companies, but there are also more “open slots” available to you to become a dominant brand in ChatGPT’s responses.
In niche categories, 21% of all the brands ChatGPT mentions are dominant brands, getting mentioned 80%+ of the time.
Compare this to just 7% of all brands being dominant in competitive categories, where the majority of brands (72%) are languishing in the long tail, getting mentioned less than 20% of the time.
A nuanced prompt doesn’t dramatically change the long tail of little-seen brands (with <20% visibility), but it does change the “winner’s circle.” Adding persona context to a prompt makes it a bit more difficult to reach the dominant tier – you can see the steeper “cliff” a brand has to climb in the “nuanced prompts” graph above.
This makes intuitive sense: when someone asks “best accounting software for a Head of Finance,” ChatGPT has a more specific answer in mind and commits a bit more strongly to fewer top picks.
Still, it’s worth noting that the overall pool doesn’t shrink much – ChatGPT mentions ~42 brands in 100 responses to nuanced prompts, just a handful fewer than the ~46 mentioned in response to simple prompts. If nuanced prompts make the winner’s circle a bit more exclusive, why don’t they also narrow the total field?
Partly, it could be that the “nuanced” questions we fed it weren’t meaningfully more narrow and specific than what was implied in the simple questions we asked.
But, based on other data I’m seeing, I think this is partly about ChatGPT not knowing enough about most brands to be more selective. I’ll share more on this in an upcoming note.
If you’re not a dominant brand, pick your battles – niche down
It’s never been more important to differentiate: in niche categories, 21% of mentioned brands reach dominant status, vs. just 7% in competitive ones.
Without time and a lot of money for brand marketing, an upstart tech company isn’t going to become a dominant brand in a broad, established category like accounting software.
But the field is less competitive when you lean into your unique, differentiating strengths. ChatGPT is more likely to treat you like a dominant brand if you work to make your product known as “the best accounting software for commercial real estate companies in North America.”
Most AI visibility tracking tools are grossly misleading
Given the inconsistency of ChatGPT’s recommendations, a single spot-check for any given prompt is nearly meaningless. Unfortunately, checking each prompt just once per time period is exactly what most AI visibility tracking tools do.
If you want anything approaching a statistically significant visibility score for any given prompt, you need to run the prompt at least dozens of times, even 100+ times, depending on how precise you need the data to be.
But that’s obviously not practical for most people, so my suggestion is: For the key, bottom-of-funnel prompts you’re tracking, run them each ~5 times whenever you pull data.
That’ll at least give you a reasonable sense of whether your brand tends to show up most of the time, some of the time, or never.
Your goal should be to have a confident sense of whether your brand is in the little-seen long tail, the visible middle, or the dominant top-tier for any given prompt. Whether you use my tiers of ‘under 20%’, ‘20–80%’, and ‘80%+’, or your own thresholds, this is the approach that follows the data and common sense.
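That tiering is easy to operationalize once you have a mention rate per brand. This sketch uses the article’s suggested 20%/80% cut-offs as defaults; the thresholds (and the tier names) are a judgment call, not a standard.

```python
def visibility_tier(mention_rate, low=0.20, high=0.80):
    """Bucket a brand's mention rate (0.0-1.0) into the three tiers
    described above. Cut-offs default to the article's 20%/80%."""
    if mention_rate >= high:
        return "dominant"
    if mention_rate >= low:
        return "visible middle"
    return "long tail"
```

Note that with only ~5 runs per prompt, rates come in coarse steps (0.0, 0.2, 0.4, …), so treat the tier as a directional read rather than a precise score.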
What’s next?
In future newsletters and LinkedIn posts, I’m going to build on these findings with new research:
How does ChatGPT talk about the brands it consistently recommends? Is it indicative of how much ChatGPT “knows” about brands?
Do different prompts with the same search intent tend to produce the same set of recommendations?
How consistent is “rank” in the responses? Do dominant brands tend to get mentioned first?
Eighty million people use Reddit search every week, Reddit said on its Q4 2025 earnings call last week. The increase followed a major change: Reddit merged its core search with its AI-powered Reddit Answers and began positioning the platform as a place where users can start — and finish — their searches.
Executives framed the move as a response to changing behavior. People are increasingly researching products and making decisions by asking questions within communities rather than relying solely on traditional search engines.
Reddit is betting it can keep more of that intent on-platform, rather than acting mainly as a source of links for elsewhere.
Why we care. Reddit is becoming a place where people start — and complete — their searches without ever touching Google. For brands, that means visibility on Reddit now matters as much as ranking in traditional and AI search for many buying decisions.
Reddit’s search ambitions. CEO Steve Huffman said Reddit made “significant progress” in Q4 by unifying keyword search with Reddit Answers, its AI-driven Q&A experience. Users can now move between standard search results and AI answers in a single interface, with Answers also appearing directly inside search results.
“Reddit is already where people go to find things,” Huffman said, adding the company wants to become an “end-to-end search destination.”
More than 80 million people searched Reddit weekly in Q4, up from 60 million a year earlier, as users increasingly come to the platform to research topics — not just scroll feeds or click through from Google.
Reddit Answers is growing. Reddit Answers is driving much of that growth. Huffman said Answers queries jumped from about 1 million a year ago to 15 million in Q4, while overall search usage rose sharply in parallel.
He said Answers performs best for open-ended questions—what to buy, watch, or try—where people want multiple perspectives instead of a single factual answer. Those queries align naturally with Reddit’s community-driven discussions.
Reddit is also expanding Answers beyond text. Huffman said the company is piloting “dynamic agentic search results” that include media formats, signaling a more interactive and immersive search experience ahead.
Search is a ‘big one’ for Reddit. Huffman said the company is testing new app layouts that give search prominent placement, including versions with a large, always-visible search bar at the top of the home screen.
COO Jennifer Wong said search and Answers represent a major opportunity, even though monetization remains early on some surfaces.
Wong described Reddit search behavior as “incremental and additive” to existing engagement and often tied to high-intent moments, such as researching purchases or comparing options.
AI answers make Reddit more important. Huffman also linked Reddit’s search push to its partnerships with Google and OpenAI. He said Reddit content is now the most-cited source in AI-generated answers, highlighting the platform’s growing influence on how people find information.
Reddit sees AI summaries as an opportunity — to move users from AI answers into Reddit communities, where they can read discussions, ask follow-up questions, and participate.
If someone asks what the best speaker is, he said, Reddit wants users to discover not just a summary, but the community where real people are actively debating the topic.
Reddit says 80 million people now use its search weekly (2026-02-09)
OpenAI confirmed today that it’s rolling out its first live test of ads in ChatGPT, showing sponsored messages directly inside the app for select users.
The details. The ads will appear in a clearly labeled section beneath the chat interface, not inside responses, keeping them visually separate from ChatGPT’s answers.
OpenAI will show ads to logged-in users on the free tier and its lower-cost Go subscription.
Advertisers won’t see user conversations or influence ChatGPT’s responses, even though ads will be tailored based on what OpenAI believes will be helpful to each user, the company said.
How ads are selected. During the test, OpenAI matches ads to conversation topics, past chats, and prior ad interactions.
For example: A user researching recipes might see ads for meal kits or grocery delivery. If multiple advertisers qualify, OpenAI shows the most relevant option first.
User controls. Users get granular controls over the experience. They can dismiss ads, view and delete separate ad history and interest data, and toggle personalization on or off.
Turning personalization off limits ads to the current chat.
Free users can also opt out of ads in exchange for fewer daily messages or upgrade to a paid plan.
Why we care. ChatGPT is one of the world’s largest consumer AI platforms. Even a limited ad rollout could mark a major shift in how conversational AI gets monetized — and how brands reach users.
Bottom line. OpenAI is officially moving into ads inside ChatGPT, testing how sponsored content can coexist with conversational AI at massive scale.