Why governance maturity is a competitive advantage for SEO

How SEO governance shifts teams from reaction to prevention

Let me guess: you just spent three months building a perfectly optimized product taxonomy, complete with schema markup, internal linking, and killer metadata. 

Then, the product team decided to launch a site redesign without telling you. Now half your URLs are broken, the new templates strip out your structured data, and your boss is asking why organic traffic dropped 40%.

Sound familiar?

Here’s the thing: this isn’t an SEO failure, but a governance failure. It’s costing you nights and weekends trying to fix problems that should never have happened in the first place.

This article covers why weak governance keeps breaking SEO, how AI has raised the stakes, and how a visibility governance maturity model helps SEO teams move from firefighting to prevention.

Governance isn’t bureaucracy – it’s your insurance policy

I know what you’re thinking. “Great, another framework that means more meetings and approval forms.” But hear me out.

The Visibility Governance Maturity Model (VGMM) isn’t about creating red tape. It’s about establishing clear ownership, documented processes, and decision rights that prevent your work from being accidentally destroyed by teams who don’t understand SEO.

Think of it this way: VGMM is the difference between being the person who gets blamed when organic traffic tanks versus being the person who can point to documentation showing exactly where the process broke down – and who approved skipping the SEO review.

This maturity model:

  • Protects your work from being undone by releases you weren’t consulted on.
  • Documents your standards so you’re not explaining canonical tags for the 47th time.
  • Establishes clear ownership so you’re not expected to fix everything across six different teams.
  • Gets you a seat at the table when decisions affecting SEO are being made.
  • Makes your expertise visible to leadership in ways they understand.


The real problem: AI just made everything harder

Remember when SEO was mostly about your website and Google? Those were simpler times.

Now you’re trying to optimize for:

  • AI Overviews that rewrite your content.
  • ChatGPT citations that may or may not link back.
  • Perplexity summaries that pull from competitors.
  • Voice assistants that only cite one source.
  • Knowledge panels that conflict with your site.

And you’re still dealing with:

  • Content teams who write AI-generated fluff.
  • Developers who don’t understand crawl budget.
  • Product managers who launch features that break structured data.
  • Marketing directors who want “just one small change” that tanks rankings.

Without governance, you’re the only person who understands how all these pieces fit together. 

When something breaks, everyone expects you to fix it – usually yesterday. When traffic is up, it’s because marketing ran a great campaign. When it’s down, it’s your fault.

You become the hero the organization depends on, which sounds great until you realize you can never take a real vacation, and you’re working 60-hour weeks.

Dig deeper: Why most SEO failures are organizational, not technical

What VGMM actually measures – in terms you care about

VGMM doesn’t care about your keyword rankings or whether you have perfect schema markup. It evaluates whether your organization is set up to sustain SEO performance without burning you out. Below are the five maturity levels that translate to your daily reality:

Level 1: Unmanaged (your current nightmare)

  • Nobody knows who’s responsible for SEO decisions.
  • Changes happen without SEO review.
  • You discover problems after they’ve tanked traffic.
  • You’re constantly firefighting.
  • Documentation doesn’t exist or is ignored.

Level 2: Aware (slightly better)

  • Leadership admits SEO matters.
  • Some standards exist but aren’t enforced.
  • You have allies but no authority.
  • Improvements happen but get reversed next quarter.
  • You’re still the only one who really gets it.

Level 3: Defined (getting somewhere)

  • SEO ownership is documented.
  • Standards exist, and some teams follow them.
  • You’re consulted before major changes.
  • QA checkpoints include SEO review.
  • You’re working normal hours most weeks.

Level 4: Integrated (the dream)

  • SEO is built into release workflows.
  • Automated checks catch problems before they ship.
  • Cross-functional teams share accountability.
  • You can actually take a vacation without a disaster.
  • Your expertise is respected and resourced.

Level 5: Sustained (unicorn territory)

  • SEO survives leadership changes.
  • Governance adapts to new AI surfaces automatically.
  • Problems are caught before they impact traffic.
  • You’re doing strategic work, not firefighting.
  • The organization values prevention over reaction.

Most organizations sit at Level 1 or 2. That’s not your fault – it’s a structural problem that VGMM helps diagnose and fix.

Dig deeper: SEO’s future isn’t content. It’s governance

How VGMM works: The less boring explanation

VGMM coordinates multiple domain-specific maturity models. Think of it as a health checkup that looks at all your vital signs, not just one metric.

It evaluates maturity across domains like:

  • SEO governance: Your core competency.
  • Content governance: Are writers following standards?
  • Performance governance: Is the site actually fast?
  • Accessibility governance: Is the site inclusive?
  • Workflow governance: Do processes exist and work?

Each domain gets scored independently, then VGMM looks at how they work together. Because excellent SEO maturity doesn’t matter if the performance team deploys code that breaks the site every Tuesday or if the content team publishes AI-generated nonsense that tanks your E-E-A-T signals.

VGMM produces a 0–100% score based on:

  • Domain scores: How mature is each area?
  • Weighting: Which domains matter most for your business?
  • Dependencies: Are weaknesses in one area breaking strengths in another?
  • Coherence: Do decision rights and accountability actually align?

The final score isn’t about effort – it’s about whether governance actually works.
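The article doesn't publish an exact formula, but the mechanics described above can be sketched in a few lines. Everything below (the domain names, weights, and 0.1 penalty factor) is an illustrative assumption, not VGMM's actual math:

```python
# Illustrative sketch only: VGMM does not publish a scoring formula,
# so the domains, weights, and penalty rule below are assumptions.

def vgmm_score(domain_scores, weights, dependency_pairs):
    """Combine per-domain maturity (0-100) into one governance score.

    domain_scores: {"seo": 70, "content": 40, ...}
    weights: relative importance per domain, summing to 1.0
    dependency_pairs: (upstream, downstream) pairs where a weak
        upstream domain undermines a strong downstream one.
    """
    base = sum(domain_scores[d] * weights[d] for d in domain_scores)
    # Penalize mismatches: a mature domain that depends on an
    # immature one can't deliver its full value.
    penalty = 0.0
    for upstream, downstream in dependency_pairs:
        gap = domain_scores[downstream] - domain_scores[upstream]
        if gap > 0:
            penalty += gap * 0.1  # assumed penalty factor
    return max(0.0, round(base - penalty, 1))

score = vgmm_score(
    {"seo": 70, "content": 40, "performance": 55},
    {"seo": 0.5, "content": 0.3, "performance": 0.2},
    [("content", "seo")],  # weak content governance drags down SEO
)
```

The point of the penalty term is the "dependencies" bullet above: a 70% SEO score sitting on 40% content governance is worth less than the weighted average suggests.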



What this means for your daily life

Before VGMM-style governance:

  • Product launches a redesign → You find out when traffic drops.
  • Content team uses AI → You discover thin content in Search Console.
  • Dev changes URL structure → You spend a week fixing redirects.
  • Marketing wants “quick changes” → You explain why it’s not quick (again).
  • Site goes down → Everyone asks why you didn’t catch it.

After governance maturity improves:

  • Product can’t launch without SEO sign-off.
  • Content AI usage has review checkpoints.
  • URL changes require documented SEO approval.
  • Marketing requests go through defined workflows.
  • Site monitoring includes automated SEO health checks.
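The "automated SEO health checks" above can be surprisingly simple. Here is a minimal sketch that could run in CI against rendered page templates, assuming you can produce each changed template as an HTML string. The specific rules are illustrative, not an exhaustive audit:

```python
# A minimal pre-release SEO check: feed it rendered HTML, fail the
# build if it reports issues. Rules below are illustrative only.
from html.parser import HTMLParser

class SEOAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.h1_count = 0
        self.has_title = False
        self.has_canonical = False
        self.has_meta_description = False
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "h1":
            self.h1_count += 1
        elif tag == "title":
            self._in_title = True
        elif tag == "link" and attrs.get("rel") == "canonical":
            self.has_canonical = True
        elif tag == "meta" and attrs.get("name") == "description":
            self.has_meta_description = bool(attrs.get("content"))

    def handle_data(self, data):
        if self._in_title and data.strip():
            self.has_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

def seo_issues(html):
    audit = SEOAudit()
    audit.feed(html)
    issues = []
    if not audit.has_title:
        issues.append("missing <title>")
    if not audit.has_meta_description:
        issues.append("missing meta description")
    if not audit.has_canonical:
        issues.append("missing canonical link")
    if audit.h1_count != 1:
        issues.append(f"expected one <h1>, found {audit.h1_count}")
    return issues
```

A check like this is exactly the governance shift the list describes: the redesign that strips your structured data gets caught by a failing build, not by a traffic drop three weeks later.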

You move from reactive firefighting to proactive prevention. Your weekends become yours again.

The supporting models: What they actually check

VGMM doesn’t score you on technical SEO execution. It checks whether the organization has processes in place to prevent SEO disasters.

SEO Governance Maturity Model (SEOGMM) asks:

  • Are there documented SEO standards?
  • Who can override them, and how?
  • Do templates enforce SEO requirements?
  • Are there QA checkpoints before releases?
  • Can SEO block launches that will cause problems?

Content Governance Maturity Model (CGMM) asks:

  • Are content quality standards documented?
  • Is AI-generated content reviewed?
  • Are writers trained on SEO basics?
  • Is there a process for updating outdated content?

Website Performance Maturity Model (WPMM) asks:

  • Are Core Web Vitals monitored?
  • Can releases be rolled back if they break performance?
  • Is there a performance budget?
  • Are third-party scripts governed?

You get the idea. Each domain has its own checklist, and VGMM shows leadership where gaps create risk.

Dig deeper: SEO execution: Understanding goals, strategy, and planning

How to pitch this to your boss

You don’t need to explain VGMM theory. You need to connect it to problems leadership already knows exist.

  • Frame it as risk reduction: “We’ve had three major traffic drops this year from changes that SEO didn’t review. VGMM helps us identify where our process breaks down so we can prevent this.”
  • Frame it as efficiency: “I’m spending 60% of my time firefighting problems that could have been prevented. VGMM establishes processes so I can focus on growth opportunities instead.”
  • Frame it as a competitive advantage: “Our competitors are getting cited in AI Overviews, and we’re not. VGMM evaluates whether we have the governance structure to compete in AI-mediated search.”
  • Frame it as scalability: “Right now, our SEO capability depends entirely on me. If I get hit by a bus tomorrow, nobody knows how to maintain what we’ve built. VGMM establishes documentation and processes that make our SEO sustainable.”
  • The ask: “I’d like to conduct a VGMM assessment to identify where our processes need strengthening.”

What success actually looks like

Organizations with higher VGMM maturity experience measurably better outcomes:

  • Fewer unexplained traffic drops because changes are reviewed.
  • More stable AI citations because content quality is governed.
  • Less rework after launches because SEO is built into workflows.
  • Clearer accountability because ownership is documented.
  • Better resource allocation because gaps are visible to leadership.

But the real win for you personally: 

  • You stop being the hero who saves the day and become the strategist who prevents disasters. 
  • Your expertise is recognized and properly resourced. 
  • You can take actual vacations. 
  • You work normal hours most of the time.

Your job becomes about building and improving, not constantly fixing.

Getting started: Practical next steps

Step 1: Self-assessment

Look at the five maturity levels. Where is your organization honestly sitting? If you’re at Level 1 or 2, you have evidence for why governance matters.

Step 2: Document current-state pain

Make a list of the last six months of SEO incidents:

  • Changes that weren’t reviewed.
  • Traffic drops from preventable problems.
  • Time spent fixing avoidable issues.
  • Requests that had to be explained multiple times.

This becomes your business case.

Step 3: Start with one domain

You don’t need to implement full VGMM immediately. Start with SEOGMM:

  • Document your standards.
  • Create a review checklist.
  • Establish who can approve exceptions.
  • Get stakeholder sign-off on the process.

Step 4: Show results 

Track prevented problems. When you catch an issue before it ships, document it. When a process prevents a regression, quantify the impact. Build your case for expanding governance.

Step 5: Expand systematically

Once SEOGMM is working, expand to related domains (content, performance, accessibility). Show how integrated governance catches problems that individual domain checks miss.


Why governance determines whether SEO survives

Governance isn’t about making your job harder. It’s about making your organization work better so your job becomes sustainable.

VGMM gives you a framework for diagnosing why SEO keeps getting undermined by other teams and a roadmap for fixing it. It translates your expertise into language that leadership understands. It protects your work from accidental destruction.

Most importantly, it moves you from being the person who’s always fixing emergencies to being the person who builds systems that prevent them.

You didn’t become an SEO professional to spend your career firefighting. VGMM helps you get back to doing the work that actually matters – the strategic, creative, growth-focused work that attracted you to SEO in the first place.

If you’re tired of watching your best work get undone by teams who don’t understand SEO, if you’re exhausted from being the only person who knows how everything works, if you want your expertise to be recognized and protected – start the VGMM conversation with your leadership.

The framework exists. What’s missing is someone in your organization saying, “We need to govern visibility like we govern everything else that matters.”

That someone is you.

Dig deeper: Why 2026 is the year the SEO silo breaks and cross-channel execution starts


Tips and tricks to write SEO-friendly blog posts in the AI era

It is no secret that publishing SEO-friendly blog posts is one of the easiest and most effective ways to drive organic traffic and improve SERP rankings. And in the era of artificial intelligence, blog posts matter more than ever. They help establish brand authority by consistently delivering fresh, valuable content that can be cited in AI-generated answers.

In this guide, we will share a practical, detailed approach to writing SEO-friendly blog content that not only ranks on Google SERPs but is also surfaced by AI models.

Key takeaways

  • An SEO-friendly blog post now means writing with search intent, ensuring content is clear and quotable for AI systems
  • Key factors for SEO-friendly blog posts include trustworthiness, machine-readability, answer-first structure, and topical authority
  • Conduct thorough keyword research and find readers’ questions to match search intent effectively
  • Use clear headings, improve readability, include inclusive language, and add relevant media to engage readers
  • Write compelling meta titles and descriptions, link to existing content, and focus on building authority to enhance visibility

What does an SEO-friendly blog post mean in the AI era?

The way people search for information has changed, and with it, the meaning of an SEO-friendly blog post. Before the rise of generative AI, writing an SEO-friendly blog post mostly meant this:

‘Writing content with the intention of ranking highly in search engine results pages (SERPs). The content is optimized for specific target keywords, easy to read, and provides value to the reader.’

That definition is not wrong. But it is no longer complete.

In the AI era, an SEO-friendly blog post is written with search intent first, answering a user’s question clearly and efficiently. It is not just about placing keywords in the right spots. It is about creating an information-dense piece with accurate, well-structured, and quotable sentences that AI systems can confidently extract and surface as direct answers.

The new definition clearly shows that strong SEO foundations still matter, and they matter more than ever. What has changed is how content is evaluated and discovered. Search engines and AI models now look beyond clicks and rankings to understand whether your content is trustworthy, helpful, and easy to interpret.

Here are some key factors that determine whether a blog post is truly SEO-friendly:

  • Trustworthiness (E-E-A-T): Demonstrating real-world experience, expertise, and credibility helps your content stand out from low-value AI-generated rehashes
  • Machine-readability: Clear structure, clean HTML, and technical signals such as schema markup help search engines and AI systems understand what your content is about
  • Answer-first structure: Placing concise, direct answers at the beginning of sections makes it easier for AI models to extract and reference your content
  • Topical authority: Publishing interconnected, in-depth content around a subject is far more effective than creating isolated blog posts
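The machine-readability factor above is the most concrete of the four. A common technical signal is schema.org structured data embedded as JSON-LD. Here is a minimal sketch of an Article snippet; all the values are placeholders:

```python
# Machine-readability in practice: a minimal schema.org Article
# object serialized as JSON-LD. All values below are placeholders.
import json

article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Tips to write SEO-friendly blog posts",
    "author": {"@type": "Person", "name": "Jane Example"},
    "datePublished": "2025-01-15",
    "dateModified": "2025-02-01",
}

# Embedded in the page head as:
# <script type="application/ld+json"> ... </script>
json_ld = json.dumps(article_schema, indent=2)
```

Most CMS SEO plugins (Yoast SEO included) generate this markup automatically, but knowing what it looks like helps you verify it survives template changes.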

9 tips to write SEO-friendly blogs for LLM and SERP visibility

Now we get to the core of this guide. Below are some foundational tips to help you plan and write SEO-friendly blog posts that are genuinely helpful, easy to understand, and focused on solving real reader problems. When done right, these practices not only improve search visibility but also shape how your brand is perceived by both users and AI systems.

1. Conduct thorough keyword research

Before you write a single word, start with solid keyword research. This step helps you understand how people search for a topic, which terms carry demand, and how competitive those searches are. It also ensures your content aligns with real user intent instead of assumptions.

You can use tools like Google Keyword Planner, Ahrefs, or Semrush for this. Personally, I prefer using Semrush’s Keyword Magic Tool because it quickly surfaces thousands of relevant keyword ideas around a single topic.

[Image: Keyword Magic Tool by Semrush showing a relevant keyword list]

Here’s how I usually approach it. I enter a broad keyword related to my topic, for example, ‘SEO.’ The tool then returns an extensive list of related keywords along with important metrics. I mainly focus on three of them:

  • Search intent, to understand what the user is really looking for
  • Keyword Difficulty (KD%), to estimate how hard it is to rank
  • Search volume, to gauge demand

This combination helps me choose keywords that are realistic to rank for and meaningful for readers.

If you use Yoast SEO, this process becomes even easier. Semrush is integrated into Yoast SEO (both free and Premium), giving you keyword suggestions directly in the plugin. With a single click, you can access relevant keyword data while writing, making it easier to create focused, useful content from the start.

Looking for keyphrase suggestions? When you’ve set a focus keyword in Yoast SEO, you can click on ‘Get related keyphrases’ and our Semrush integration will help you find high-performing keyphrases!

Also read: How to use the Semrush related keyphrases feature in Yoast SEO for WordPress

2. Find readers’ questions

Keyword research tells you what people search for. Questions tell you why they search.

When you actively look for the questions your audience is asking, you move closer to matching search intent. This is especially important in the AI era, where search engines and AI models prioritize clear, answer-driven content.

For example, consider these two queries:

What are the key features of good running shoes?

This shows informational intent. The searcher wants to understand what makes a running shoe good.

What are the best running shoes?

This suggests a transactional or commercial intent. The searcher is likely comparing options before making a purchase.

Both questions are valid, but they require very different content approaches.

There are two simple ways I usually find relevant questions. The first is by checking the People also ask section in Google search results. By typing in a broad keyphrase, you can see related questions that Google itself considers relevant.

[Image: The People also ask section showing questions related to the broad keyphrase ‘SEO’]

The second method is to use the Questions filter in Semrush’s Keyword Magic Tool. This helps uncover question-based queries directly tied to your main topic.

Apart from these methods, I also like using Google’s AI Overviews and AI Mode as a quick research layer. When I search for my main topic, I pay close attention to the AI-cited sources, as they often surface the broad questions people are actively asking. The structured points and highlighted terms usually reflect the answers and subtopics that matter most to users. If I want to go deeper, I click “Show more,” which reveals additional angles and follow-up questions I might not have considered initially.

[Image: Sources cited by Google’s AI Overview]

Finding and answering these questions helps you do lightweight online audience research and create content that feels genuinely helpful. It also increases the chances of your blog post being referenced in AI-generated answers, since LLMs are designed to surface clear responses to specific questions.

3. Structure your content with headings and subheadings

In our 2026 SEO predictions, we highlighted that editorial quality is no longer just about good writing. It has become a machine-readability requirement. Content that is clearly structured is easier to understand, reuse, and surface across both search and AI-driven experiences.

How LLMs use headings

AI models rely on headings to identify topics, questions, and answers within a page. When your content is broken into clear sections, it becomes easier for them to extract key information and include it in AI-generated summaries.

Why headings still matter for SEO

Headings help search engines understand the hierarchy of your content and the main points you are trying to rank for. They also improve scannability and usability, especially on mobile devices, and increase the chances of earning featured snippets.

Good structure has always been a core SEO principle. In the AI era, it remains one of the simplest and most effective ways to improve visibility and discoverability.

4. Focus on readability aspects

An SEO-friendly blog post should be easy to read before it can rank or get picked up by AI systems. Readability helps readers stay engaged and helps search engines and AI models better understand your content.

A few key readability aspects to focus on while writing:

  • Avoid passive voice where possible
    Active sentences are clearer and more direct. They make it easier for readers to understand who is doing what, and they reduce ambiguity for AI systems processing your content.
  • Use transition words
    Transition words like “because,” “for example,” and “however” guide readers through your content. They improve flow and make it easier to follow relationships between sentences and paragraphs.
  • Keep sentences and paragraphs short
    Long, complex sentences reduce clarity. Breaking content into shorter sentences and paragraphs improves scannability and comprehension.
  • Avoid consecutive sentences starting in the same way
    Varying sentence structure keeps your writing engaging and prevents it from sounding repetitive or robotic.

[Image: The readability analysis in the Yoast SEO for WordPress metabox]
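To make the checks above concrete, here is a rough sketch of the same kind of heuristics. The thresholds, transition-word list, and passive-voice pattern are illustrative assumptions, not Yoast's actual rules:

```python
# Rough readability heuristics: sentence length, transition words,
# and a naive passive-voice pattern. Thresholds and word lists are
# illustrative assumptions, not Yoast SEO's actual analysis.
import re

TRANSITIONS = {"because", "however", "for example", "therefore", "moreover"}

def readability_report(text, max_sentence_words=20):
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    long_sentences = [s for s in sentences
                      if len(s.split()) > max_sentence_words]
    lower = text.lower()
    has_transitions = any(t in lower for t in TRANSITIONS)
    # Naive passive-voice heuristic: a "to be" form followed by a
    # regular past participle (misses irregular verbs like "eaten").
    passive_hits = re.findall(
        r"\b(?:is|are|was|were|been|being)\s+\w+ed\b", lower)
    return {
        "sentences": len(sentences),
        "too_long": len(long_sentences),
        "has_transitions": has_transitions,
        "passive_hits": len(passive_hits),
    }
```

Even a crude report like this catches the most common problems: walls of long sentences and paragraphs where nothing connects.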

If you are a WordPress or Shopify user, Yoast SEO (or Yoast SEO for Shopify) can help here. Its readability analysis checks for passive voice, transition words, sentence length, and other clarity signals while you write. If you prefer drafting in Google Docs, you can use the Yoast SEO Google Docs add-on to get the same readability feedback before publishing.


Good readability is not just about pleasing algorithms. It helps readers understand your message more quickly and makes your content easier to reuse in AI-generated responses.

5. Use inclusive language

Inclusive language helps ensure your content is respectful, clear, and welcoming to a broader audience. It avoids assumptions about gender, ability, age, or background, and focuses on people-first communication.

From an SEO and AI perspective, inclusive language also improves clarity. Content that avoids vague or biased terms is easier to interpret, digest, and trust. This directly supports brand perception, especially when your content is surfaced in AI-generated responses.

Yoast SEO supports this through its inclusive language check, which flags potentially non-inclusive terms and suggests better alternatives. This feature is available in Yoast SEO, Yoast SEO Premium, and in the Yoast SEO Google Docs add-on, making it easier to build inclusive habits directly into your writing workflow.

Inclusive language ensures your content is intentional, thoughtful, and clear, aligning closely with what modern SEO and AI systems value.

6. Add relevant media and interaction points

A well-written blog post should not feel like a long block of text. Adding the right media and interaction points helps guide readers through your content, keeps them engaged, and encourages them to take action.

Why media matters

Media elements such as images, videos, embeds, and infographics make your content easier to consume and more engaging. Blog posts that include images receive 94% more views than those without, simply because visuals break up large blocks of text and make pages easier to scan.

Video content plays an even bigger role. Embedded videos help explain complex ideas faster and can significantly improve organic visibility compared to text-only posts. Together, these elements encourage readers to stay longer on your page, which is a strong signal of content quality for search engines and AI systems alike.

Media also improves accessibility. Properly optimized images with descriptive alt text make content usable for screen readers, while original visuals, screenshots, or diagrams help reinforce credibility and expertise.

Use interaction points to guide and engage readers

Interaction does not always mean complex features. Even simple elements can significantly improve engagement when used well.

[Image: Table of contents and sidebar CTA used as interaction points in a Yoast blog post]

A table of contents, for example, allows readers to jump directly to the section they care about most.

Other interaction points include clear calls to action (CTAs) that guide readers to the next step, relevant recommendations that encourage users to keep exploring your site, and social sharing buttons that make it easy to amplify your content. Interactive elements like polls, quizzes, or embedded tools further encourage participation and increase time on page.

7. Plan your content length

Content length still matters, but not in the way many people think it does.

A common question is what the ideal word count is for a blog post that performs well. A 2024 study by Backlinko found that while longer content tends to attract more backlinks, the average page ranking on Google’s first page contains around 1,500 words.

That said, this should not be treated as a fixed benchmark. The ideal length is the one that fully answers the user’s question. In an AI-driven era, publishing long content that adds little value or is padded with unnecessary fluff can do more harm than good.

If a topic genuinely requires a longer format, breaking the content into clear subheadings makes a big difference. I personally prefer structuring long articles this way because it improves readability, helps readers navigate the page more easily, and makes the content easier for search engines and AI systems to understand.

Must read: How to use headings on your site

If you use Yoast SEO or Yoast SEO Premium, the text length check can help here. This check exists to prevent pages from being too thin to provide real value. Pages with very low word counts often lack context and struggle to demonstrate relevance or expertise. Yoast SEO flags such cases as a warning, while clearly indicating that adding more words alone does not guarantee better rankings.

Think of word count as a guideline, not a goal. Your focus should always be on clarity, completeness, and usefulness.

8. Link to existing content

Internal linking is one of the most underrated SEO practices, yet it does a lot of heavy lifting behind the scenes.

By linking to relevant content within your site, you help readers discover additional resources and help search engines understand how your content is connected. Over time, this strengthens topical authority and signals that your site consistently covers a subject in depth.

Good internal linking follows a few simple principles:

  • Link only when it adds value and feels natural in context
  • Use clear, descriptive anchor text so users and search engines know what to expect
  • Avoid linking to outdated URLs or pages that redirect, as this wastes crawl budget and weakens link signals

Internal links also keep readers engaged longer by guiding them to related articles. This improves overall site engagement while reinforcing your expertise on a topic.

From an AI and search perspective, internal linking plays an even bigger role. Modern search systems analyze content structure, metadata hierarchies, schema markup, and internal links to assess topical depth and clarity. Well-linked content clusters make it easier for search engines and AI systems to understand what your site is about and which pages are most important.
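As a toy illustration of how linking suggestions can work (this is not Yoast's actual algorithm), you can scan a draft for topics you have already covered:

```python
# Toy internal-link discovery: flag published post titles that
# already appear verbatim in a draft. Illustrative only; real
# tools match on related phrases, not exact titles.
def link_opportunities(draft_text, published_posts):
    """published_posts: {title: url}. Returns candidate links."""
    text = draft_text.lower()
    return {
        title: url
        for title, url in published_posts.items()
        if title.lower() in text
    }
```

Even this naive version surfaces the low-hanging fruit: mentions of topics you have already written about that currently link nowhere.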

For WordPress users, Yoast SEO Premium offers internal linking suggestions directly in the editor. This makes it easier to spot relevant linking opportunities as you write, helping you build stronger content connections without interrupting your workflow.


9. Write compelling meta titles and descriptions

Meta titles and meta descriptions help users decide whether to click on your content. While meta descriptions are not a direct ranking factor, they strongly influence click-through rates, making them an essential part of writing SEO-friendly blog posts.

A good meta title clearly communicates what the page is about. Place your main keyword near the beginning, keep it concise, and aim for roughly 55–60 characters so it doesn’t get truncated in search results.

Meta descriptions act like a short invitation. They should explain what the reader will gain from clicking and why it matters. Instead of stuffing keywords, focus on clarity and usefulness. Mention what aspects of the topic your content covers and how it helps the reader. Simple language works best.

Pro tip: Using action-oriented verbs such as “learn,” “discover,” or “read” can also encourage clicks and make your description more engaging.
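The length guidance above is easy to automate. A caveat baked into this sketch: search engines actually truncate by rendered pixel width, so character counts (and the limits chosen here) are only approximations:

```python
# Quick metadata length check. The limits are common rules of
# thumb, not official values; truncation is really pixel-based.
def check_meta(title, description, title_max=60, desc_max=155):
    warnings = []
    if len(title) > title_max:
        warnings.append(
            f"title is {len(title)} chars; may be truncated")
    if len(description) > desc_max:
        warnings.append(
            f"description is {len(description)} chars; may be truncated")
    return warnings
```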

If you use Yoast SEO Premium, this process becomes much easier. The AI-powered meta title and description generation feature helps you create relevant, well-structured metadata in just one click. It follows SEO best practices while producing descriptions and titles that are clear, engaging, and aligned with search intent.

Bonus tips

Once you have the fundamentals in place, a few extra refinements can go a long way. The following bonus tips help improve usability, clarity, and long-term discoverability. They are not mandatory, but when applied thoughtfully, they can make your blog posts more helpful for readers and easier to surface across search engines and AI-driven experiences.

1. Add a table of contents

A table of contents (TOC) helps readers quickly understand what your blog post covers and jump straight to the section they care about. This is especially useful for long-form content, where users often scan rather than scroll from top to bottom.

From an SEO perspective, a TOC improves structure and readability and can create jump links in search results, which may increase click-through rates. It reduces bounce rates by helping users find answers faster and improves accessibility by offering clear navigation.

By the way, did you know Yoast can help you here too? Yes, the Yoast SEO Internal linking blocks feature lets you add a TOC block to your blog post that automatically includes all the headings with just one click!
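Under the hood, a TOC block is just jump links generated from your headings. A sketch of the idea, with a simplified slug rule (real implementations handle duplicates and non-ASCII characters):

```python
# Generate a Markdown TOC of jump links from (level, text) heading
# pairs. The slug rule is a simplified assumption.
import re

def slugify(text):
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

def table_of_contents(headings):
    """headings: list of (level, text) tuples, e.g. (2, 'Key takeaways')."""
    lines = []
    for level, text in headings:
        indent = "  " * (level - 2)  # h2 is the top TOC level
        lines.append(f"{indent}- [{text}](#{slugify(text)})")
    return "\n".join(lines)
```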

2. Add key takeaways

Key takeaways help readers quickly grasp the main points of your blog post without having to read the whole post. This is especially helpful for time-constrained users who want quick, actionable insights.

Summaries also support SEO by reinforcing topic relevance and improving content comprehension for search engines and AI systems. Well-written takeaways might increase visibility in featured snippets and “People also ask” results.

If you use Yoast SEO Premium, the Yoast AI Summarize feature can generate key takeaways for your content in just one click, making it easier to add concise summaries without extra effort.

3. Add an FAQ section

An FAQ section gives you space to answer specific questions your readers may still have after reading your post. This improves user experience by addressing concerns directly and building trust.

FAQs also help search engines better understand your content by clearly outlining common questions and answers related to your topic. While they can support rankings, their real value lies in reducing friction, improving clarity, and even supporting conversions by clearing doubts.

4. Short permalinks

A permalink is the permanent URL of your blog post. Short, descriptive permalinks are easier to read, easier to share, and more likely to be clicked.

Good permalinks clearly describe what the page is about, avoid unnecessary words, and include the main topic where relevant. They improve usability and help search engines understand page context at a glance.
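To make the idea concrete, here is a small, hypothetical Python helper that turns a post title into a short, descriptive slug by normalizing characters, dropping common filler words, and capping length. The stop-word list and six-word cap are illustrative choices, not a standard:

```python
import re
import unicodedata

def slugify(title: str, max_words: int = 6) -> str:
    """Turn a post title into a short, descriptive permalink slug."""
    # Normalize accented characters to plain ASCII
    text = unicodedata.normalize("NFKD", title).encode("ascii", "ignore").decode()
    # Lowercase and replace runs of non-alphanumeric characters with hyphens
    text = re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")
    # Drop common filler words to keep the slug short
    stop_words = {"a", "an", "the", "to", "of", "in", "and", "for"}
    words = [w for w in text.split("-") if w not in stop_words]
    return "-".join(words[:max_words])

print(slugify("Tips and tricks to write SEO-friendly blog posts in the AI era"))
# -> tips-tricks-write-seo-friendly-blog
```

Most CMSs generate slugs automatically, but they rarely trim filler words, so a quick manual pass on the permalink is still worth the few seconds it takes.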

5. Focus on building authority (EEAT aspect)

Building authority is critical, especially for sites that cover sensitive or high-impact topics. Demonstrating Experience, Expertise, Authoritativeness, and Trustworthiness (E-E-A-T) helps both users and search engines trust your content.

This includes citing reliable sources, showing real-world experience, maintaining consistent quality, and clearly communicating who is behind the content. Strong E-E-A-T signals are especially important for YMYL topics, where accuracy and credibility matter most.

6. Plan content distribution

Writing a great blog post is only half the work. Distribution helps your content reach the right audience.

Sharing posts on social media, repurposing key insights into newsletters, and earning backlinks from relevant sites can drive more traffic and visibility. Distribution also increases engagement signals and helps your content gain traction faster, which supports long-term SEO performance.

Always target your readers!

In AI-driven search, retrieval beats ranking. Clarity, structure, and language alignment now decide if your content gets seen. – Carolyn Shelby

This perfectly sums up what writing SEO-friendly blog posts looks like today. Success is no longer just about rankings. It is about being clear, helpful, and easy to understand for both readers and AI systems.

Throughout this guide, we focused on the fundamentals that still matter: understanding search intent, structuring content well, improving readability, using inclusive language, and supporting your writing with media, internal links, and thoughtful metadata. These are not new tricks. They are strong SEO foundations, adapted for how search and discovery work in the AI era.

If there is one takeaway, it is this: always write for your readers first. When your content genuinely helps people, answers their questions, and respects how they search and read, it naturally becomes easier to surface across SERPs and AI-driven experiences.

Good SEO has not changed. It has simply become more human.

The post Tips and tricks to write SEO-friendly blog posts in the AI era appeared first on Yoast.


Why PPC measurement feels broken (and why it isn’t)

Why PPC measurement works differently in a privacy-first world

If you’ve been managing PPC accounts for any length of time, you don’t need a research report to tell you something has changed. 

You see it in the day-to-day work: 

  • GCLIDs missing from URLs.
  • Conversions arriving later than expected.
  • Reports that take longer to explain while still feeling less definitive than they used to.

When that happens, the reflex is to assume something broke – a tracking update, a platform change, or a misconfiguration buried somewhere in the stack.

But the reality is usually simpler. Many measurement setups still assume identifiers will reliably persist from click to conversion, and that assumption no longer holds consistently.

Measurement hasn’t stopped working. The conditions it depends on have been shifting for years, and what once felt like edge cases now show up often enough to feel like a systemic change.

Why this shift feels so disorienting

I’ve been close to this problem for most of my career. 

Before Google Ads had native conversion tracking, I built my own tracking pixels and URL parameters to optimize affiliate campaigns. 

Later, while working at Google, I was involved in the acquisition of Urchin as the industry moved toward standardized, comprehensive measurement.

That era set expectations that nearly everything could be tracked, joined, and attributed at the click level. Google made advertising feel measurable, controllable, and predictable. 

As the ecosystem now shifts toward more automation, less control, and less data, that contrast can be jarring.

It has been for me. Much of what I once relied on to interpret PPC data no longer applies in the same way. 

Making sense of today’s measurement environment requires rethinking those assumptions, not trying to restore the old ones. This is how I think about it now.

Dig deeper: How to evolve your PPC measurement strategy for a privacy-first future

The old world: click IDs and deterministic matching

For many years, Google Ads measurement followed a predictable pattern. 

  • A user clicked an ad. 
  • A click ID, or gclid, was appended to the URL. 
  • The site stored it in a cookie. 
  • When a conversion fired, that identifier was sent back and matched to the click.

This produced deterministic matches, supported offline conversion imports, and made attribution relatively easy to explain to stakeholders. 

As long as the identifier survived the journey, the system behaved in ways most advertisers could reason about. 

We could literally see what happened with each click and which ones led to individual conversions.

That reliability depended on a specific set of conditions.

  • Browsers needed to allow parameters through. 
  • Cookies had to persist long enough to cover the conversion window. 
  • Users had to accept tracking by default. 

Luckily, those conditions were common enough that the model worked really well.

Why that model breaks more often now

Browsers now impose tighter limits on how identifiers are stored and passed.

Apple’s Intelligent Tracking Prevention, enhanced tracking protection, private browsing modes, and consent requirements all reduce how long tracking data persists, or whether it’s stored at all.

URL parameters may be stripped before a page loads. Cookies set via JavaScript may expire quickly. Consent banners may block storage entirely.

Click IDs sometimes never reach the site, or they disappear before a conversion occurs.

This is expected behavior in modern browser environments, not an edge case, so we have to account for it.

Trying to restore deterministic click-level tracking usually means working against the constant push toward more privacy and the resulting browser behaviors.

This is another of the many evolutions of online advertising we simply have to get on board with, and I’ve found that designing systems to function with partial data beats fighting the tide.

The adjustment isn’t just technical

On my own team, GA4 is a frequent source of frustration. Not because it’s incapable, but because it’s built for a world where some data will always be missing. 

We hear the same from other advertisers: the data isn’t necessarily wrong, but it’s harder to reason about.

This is the bigger challenge. Moving from a world where nearly everything was observable to one where some things are inferred requires accepting that measurement now operates under different conditions. 

That mindset shift has been uneven across the industry because measurement lives at the periphery of where many advertisers spend most of their time, working in ad platforms.

A lot of effort goes into optimizing ad platform settings when sometimes the better use of time might’ve been fixing broken data so better decisions could be made.

Dig deeper: Advanced analytics techniques to measure PPC

What still works: Client-side and server-side approaches

So what approaches hold up under current constraints? The answer involves both client-side and server-side measurement.

Pixels still matter, but they have limits

Client-side pixels, like the Google tag, continue to collect useful data.

They fire immediately, capture on-site actions, and provide fast feedback to ad platforms, whose automated bidding systems rely on this data.

But these pixels are constrained by the browser. Scripts can be blocked, execution can fail, and consent settings can prevent storage. A portion of traffic will never be observable at the individual level.

When pixel tracking is the only measurement input, these gaps affect both reporting and optimization. Pixels haven’t stopped working. They just no longer cover every case.

Changing how pixels are delivered

Some responses to declining pixel data focus on the mechanics of how pixels are served rather than measurement logic.

Google Tag Gateway changes where tag requests are routed, sending them through a first-party, same-origin setup instead of directly to third-party domains.

This can reduce failures caused by blocked scripts and simplify deployment for teams using Google Cloud.

What it doesn’t do is define events, decide what data is collected, or correct poor tagging choices. It improves delivery reliability, not measurement logic.

This distinction matters when comparing Tag Gateway and server-side GTM.

  • Tag Gateway focuses on routing and ease of setup.
  • Server-side GTM enables event processing, enrichment, and governance. It requires more maintenance and technical oversight, but it provides more control.

The two address different problems.

Here’s the key point: better infrastructure affects how data moves, not what it means.

Event definitions, conversion logic, and consistency across systems still determine data quality.

A reliable pipeline faithfully delivers whatever it's given, so garbage in still means garbage out, just delivered more dependably.

Offline conversion imports: Moving measurement off the browser

Offline conversion imports take a different approach, moving measurement away from the browser entirely. Conversions are recorded in backend systems and sent to Google Ads after the fact.

Because this process is server to server, it’s less affected by browser privacy restrictions. It works for longer sales cycles, delayed purchases, and conversions that happen outside the site. 

This is why Google commonly recommends running offline imports alongside pixel-based tracking. The two cover different parts of the journey. One is immediate, the other persists.

Offline imports also align with current privacy constraints. They rely on data users provide directly, such as email addresses during a transaction or signup.

The data is processed server-side and aggregated, reducing reliance on browser identifiers and short-lived cookies.

Offline imports don’t replace pixels. They reduce dependence on them.

Dig deeper: Offline conversion tracking: 7 best practices and testing strategies

How Google fills the gaps

Even with pixels and offline imports working together, some conversions can’t be directly observed.

Matching when click IDs are missing

When click IDs are unavailable, Google Ads can still match conversions using other inputs.

This often begins with deterministic matching through hashed first-party identifiers such as email addresses, when those identifiers can be associated with signed-in Google users.

This is what Enhanced Conversions help achieve.
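Here is a rough sketch of what hashed-identifier matching looks like on the advertiser's side. The normalization shown (trim whitespace and lowercase before SHA-256 hashing) is the common pattern, but check the ad platform's documentation for the exact rules it requires; the email addresses are made up:

```python
import hashlib

def hash_email(email: str) -> str:
    """Normalize then SHA-256 hash an email for privacy-safe matching.

    Normalization here is the common pattern (trim whitespace,
    lowercase); the exact rules a platform requires may differ,
    so consult its documentation before sending hashed identifiers.
    """
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The same person yields the same hash regardless of capitalization
# or stray whitespace, so the platform can match the conversion
# without ever seeing the raw email address.
print(hash_email("  Jane.Doe@example.com ") == hash_email("jane.doe@example.com"))  # -> True
```

Because hashing is one-way, the advertiser shares a stable fingerprint rather than the identifier itself, which is what makes this approach workable under current privacy constraints.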

When deterministic matching (a straightforward if-this-then-that join) isn’t possible, the system relies on aggregated and validated signals rather than reconstructing individual click paths.

These can include session-level attributes and limited, privacy-safe IP information, combined with timing and contextual constraints.

This doesn’t recreate the old click-level model, but it allows conversions to be associated with prior ad interactions at an aggregate level.

One thing I’ve noticed: adding these inputs typically improves matching before it affects bidding.

Bidding systems account for conversion lag and validate new signals over time, which means imported or modeled conversions may appear in reporting before they’re fully weighted in optimization.

Matching, attribution, and bidding are related but separate processes. Improvements in one don’t immediately change the others.

Modeled conversions as a standard input

Modeled conversions are now a standard part of Google Ads and GA4 reporting.

They’re used when direct observation isn’t possible, such as when consent is denied or identifiers are unavailable.

These models are constrained by available data and validated through consistency checks and holdback experiments.

When confidence is low, modeling may be limited or not applied. Modeled data should be treated as an expected component of measurement rather than an exception.

Dig deeper: Google Ads pushes richer conversion imports

Boundaries still matter

Tools like Google Tag Gateway or Enhanced Conversions for Leads help recover measurement signal, but they don’t override user intent. 

Routing data through a first-party domain doesn’t imply consent. Ad blockers and restrictive browser settings are explicit signals. 

Overriding them may slightly increase the measured volume, but it doesn’t align with users’ expectations regarding how your organization uses their data.

Legal compliance and user intent aren’t the same thing. Measurement systems can respect both, but doing so requires deliberate choices.

Designing for partial data

Missing signals are normal. Measurement systems that assume full visibility will continue to break under current conditions.

Redundancy helps: pixels paired with hardened delivery, offline imports paired with enhanced identifiers, and multiple incomplete signals instead of a single complete one.

But here’s where things get interesting. Different systems will see different things, and this creates a tension many advertisers now face daily.

Some clients tell us their CRM data points clearly in one direction, while Google Ads automation, operating on less complete inputs, nudges campaigns another way.

In most cases, neither system is wrong. They’re answering different questions with different data, on different timelines. Operating in a world of partial observability means accounting for that tension rather than trying to eliminate it.

Dig deeper: Auditing and optimizing Google Ads in an age of limited data

Making peace with partial observability

The shift toward privacy-first measurement changes how much of the user journey can be directly observed. That changes our jobs.

The goal is no longer perfect reconstruction of every click, but building measurement systems that remain useful when signals are missing, delayed, or inferred.

Different systems will continue to operate with different views of reality, and alignment comes from understanding those differences rather than trying to eliminate them.

In this environment, durable measurement depends less on recovering lost identifiers and more on thoughtful data design, redundancy, and human judgment.

Measurement is becoming more strategic than ever.


How SEO leaders can explain agentic AI to ecommerce executives

How to communicate agentic AI to ecommerce leadership without the hype

Agentic AI is increasingly appearing in leadership conversations, often accompanied by big claims and unclear expectations. For SEO leaders working with ecommerce brands, this creates a familiar challenge.

Executives hear about autonomous agents, automated purchasing, and AI-led decisions, and they want to know what this really means for growth, risk, and competitiveness.

What they don’t need is more hype. They need clear explanations, grounded thinking, and practical guidance. 

This is where SEO leaders can add real value, not by predicting the future, but by helping leadership understand what is changing, what isn’t, and how to respond without overreacting. Here’s how.

Start by explaining what ‘agentic’ actually means

A useful first step is to remove the mystery from the term itself. Agentic systems don’t replace customers, they act on behalf of customers. The intent, preferences, and constraints still come from a person.

What changes is who does the work.

Discovery, comparison, filtering, and sometimes execution are handled by software that can move faster and process more information than a human can.

When speaking to executive teams, a simple framing works best:

  • “We’re not losing customers, we’re adding a new decision-maker into the journey. That decision-maker is software acting as a proxy for the customer.” 

Once this is clear, the conversation becomes calmer and more practical, and the focus moves away from fear and toward preparation.


Keep expectations realistic and avoid the hype

Another important role for SEO leaders is to slow the conversation down. Agentic behavior will not arrive everywhere at the same time. Its impact will be uneven and gradual.

Some categories will see change earlier because their products are standardized and data is already well structured. Others will move more slowly because trust, complexity, or regulation makes automation harder.

This matters because leadership teams often fall into one of two traps:

  1. Panic, where plans are rewritten too quickly, budgets move too fast, and teams chase futures that may still be some distance away. 
  2. Dismissal, where nothing changes until performance clearly drops, and by then the response is rushed.

SEO leaders can offer a steadier view. Agentic AI accelerates trends that already exist. Personalized discovery, fewer visible clicks, and more pressure on data quality are not new problems. 

Agents simply make them more obvious. Seen this way, agentic AI becomes a reason to improve foundations, not a reason to chase novelty.

Dig deeper: Are we ready for the agentic web?

Change the conversation from rankings to eligibility

One of the most helpful shifts in executive conversations is moving away from rankings as the main outcome of SEO. In an agent-led journey, the key question isn’t “do we rank well?” but “are we eligible to be chosen at all?”

Eligibility depends on clarity, consistency, and trust. An agent needs to understand what you sell, who it is for, how much it costs, whether it is available, and how risky it is to choose you on behalf of a user. This is a strong way to connect SEO to commercial reality.

Questions worth raising include whether product information is consistent across systems, whether pricing and availability are reliable, and whether policies reduce uncertainty or create it. Framed this way, SEO becomes less about chasing traffic and more about making the business easy to select.

Explain why SEO no longer sits only in marketing

Many executives still see SEO as a marketing channel, but agentic behavior challenges that view.

Selection by an agent depends on factors that sit well beyond marketing. Data quality, technical reliability, stock accuracy, delivery performance, and payment confidence all play a role.

SEO leaders should be clear about this. This isn’t about writing more content. It’s about making sure the business is understandable, reliable, and usable by machines.

Positioned correctly, SEO becomes a connecting function that helps leadership see where gaps in systems or data could prevent the brand from being selected. This often resonates because it links SEO to risk and operational health, not just growth.

Dig deeper: How to integrate SEO into your broader marketing strategy

Be clear that discovery will change first

For most ecommerce brands, the earliest impact of agentic systems will be at the top of the funnel. Discovery becomes more conversational and more personal.

Users describe situations, needs, and constraints instead of typing short search phrases, and the agent then turns that context into actions.

This reduces the value of simply owning category head terms. If an agent knows a user’s budget, preferences, delivery expectations, and past behavior, it doesn’t behave like a first-time visitor. It behaves like a well-informed repeat customer.

This creates a reporting challenge. Some SEO work will no longer look like direct demand creation, even though it still influences outcomes. Leadership teams need to be prepared for this shift.

Reframe consideration as filtering, not persuasion

The middle of the funnel also changes shape. Today, consideration often involves reading reviews, comparing options, and seeking reassurance.

In an agent-led journey, consideration becomes a filtering process, where the agent removes options it believes the user would reject and keeps those that fit.

This has clear implications. Generic content becomes less effective as a traffic driver because agents can generate summaries and comparisons instantly. Trust signals become structural, meaning claims need to be backed by consistent and verifiable information.

In many cases, a brand may be chosen without the user being consciously aware of it. That can be positive for conversion, but risky for long-term brand strength if recognition isn’t built elsewhere.

Dig deeper: How to align your SEO strategy with the stages of buyer intent

Set honest expectations about measurement

Executives care about measurement, and agentic AI makes this harder. As more discovery and consideration happen inside AI systems, fewer interactions leave clean attribution trails. Some impact will show up as direct traffic, and some will not be visible at all.

SEO leaders should address this early. This isn’t a failure of optimization. It reflects the limits of today’s analytics in a more mediated world.

The conversation should move toward directional signals and blended performance views, rather than precise channel attribution that no longer reflects how decisions are made.

Promote a proactive, low-risk response

The most important part of the leadership discussion is what to do next. The good news is that most sensible responses to agentic AI are low risk.

Improving product data quality, reducing inconsistencies across platforms, strengthening reliability signals, and fixing technical weaknesses all help today, regardless of how quickly agents mature.

Investing in brand demand outside search also matters. If agents handle more of the comparison work, brands that users already trust by name are more likely to be selected.

This reassures leaders that action doesn’t require dramatic change, only disciplined improvement.


Agentic AI changes the focus, not the fundamentals

For SEO leaders, agentic AI changes the focus of the role. The work shifts from optimizing pages to protecting eligibility, from chasing visibility to reducing ambiguity, and from reporting clicks to explaining influence.

This requires confidence, clear communication, and a willingness to challenge hype. Agentic AI makes SEO more strategic, not less important.

Agentic AI should not be treated as an immediate threat or a guaranteed advantage. It’s a shift in how decisions are made.

For ecommerce brands, the winners will be those that stay calm, communicate clearly, and adapt their SEO thinking from driving clicks to earning selection.

That is the conversation SEO leaders should be having now.

Dig deeper: The future of search visibility: What 6 SEO leaders predict for 2026


What repeated ChatGPT runs reveal about brand visibility

What repeated ChatGPT runs reveal about brand visibility

We know AI responses are probabilistic – if you ask an AI the same question 10 times, you’ll get 10 different responses.

But how different are the responses?

That’s the question Rand Fishkin explored in some interesting research.

And it has big implications for how we should think about tracking AI visibility for brands.

In his research, he tested prompts asking for recommendations in all sorts of products and services, including everything from chef’s knives to cancer care hospitals and Volvo dealerships in Los Angeles.

Basically, he found that:

  • AIs rarely recommend the same list of brands in the same order twice.
  • For a given topic (e.g., running shoes), AIs recommend a certain handful of brands far more frequently than others.

For my research, as always, I’m focusing exclusively on B2B use cases. Plus, I’m building on Fishkin’s work by addressing these additional questions:

  • Does prompt complexity affect the consistency of AI recommendations?
  • Does the competitiveness of the category affect the consistency of recommendations?

Methodology

To explore those questions, I first designed 12 prompts:

  • Competitive vs. niche: Six of the prompts are about highly competitive B2B software categories (e.g., accounting software), and the other six are about less crowded categories (e.g., user entity behavior analytics (UEBA) software). I identified the categories using Contender’s database, which tracks how many brands ChatGPT associates with 1,775 different software categories.
  • Simple vs. nuanced prompts: Within both sets of “competitive” and “niche” prompts, half of the prompts are simple (“What’s the best accounting software?”) and the other half are nuanced prompts including a persona and use case (“For a Head of Finance focused on ensuring financial reporting accuracy and compliance, what’s the best accounting software?”)

I ran the 12 prompts 100 times, each, through the logged-out, free version of ChatGPT at chatgpt.com (i.e., not the API). I used a different IP address for each of the 1,200 interactions to simulate 1,200 different users starting new conversations.
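For readers who want to replicate this kind of analysis, the core tally is simple: for each brand, count the share of responses that mention it at least once. Here is a minimal sketch of that calculation, using hypothetical data (the brand lists below are invented, not taken from the study):

```python
from collections import Counter

def mention_rates(responses: list[list[str]]) -> dict[str, float]:
    """Share of responses in which each brand appears at least once."""
    counts = Counter()
    for brands in responses:
        counts.update(set(brands))  # count each brand once per response
    total = len(responses)
    return {brand: n / total for brand, n in counts.items()}

# Hypothetical data: four runs of the same prompt
runs = [
    ["QuickBooks", "Xero", "Wave"],
    ["QuickBooks", "Xero", "FreshBooks"],
    ["QuickBooks", "Sage"],
    ["QuickBooks", "Xero", "Wave"],
]
print(mention_rates(runs)["QuickBooks"])  # -> 1.0
```

The same per-brand rate is what the visibility percentages throughout the findings below refer to.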

Limitations: This research only covers responses from ChatGPT. But given the patterns in Fishkin’s results and the similar probabilistic nature of LLMs, you can probably generalize the directional (not absolute value) findings below to most/all AIs.


Findings

So what happens when 100 different people submit the same prompt to ChatGPT, asking for product recommendations?

How many ‘open slots’ in ChatGPT responses are available to brands?

On average, ChatGPT will mention 44 brands across 100 different responses. But one of the response sets included as many as 95 brands – it really depends on the category.

How many brands does ChatGPT draw from, on average?

Competitive vs. niche categories

On that note, for prompts covering competitive categories, ChatGPT mentions about twice as many brands per 100 responses compared to the responses to prompts covering “niche” categories. (This lines up with the criteria I used to select the categories I studied.)

Simple vs. nuanced prompts

On average, ChatGPT mentioned slightly fewer brands in response to nuanced prompts. But this wasn’t a consistent pattern – for any given software category, sometimes nuanced questions ended up with more brands mentioned, and sometimes simple questions did.

This was a bit surprising, since I expected more specific requests (e.g., “For a SOC analyst needing to triage security alerts from endpoints efficiently, what’s the best EDR software?”) to consistently yield a narrower set of potential solutions from ChatGPT.

I think ChatGPT might not be better at tailoring a list of solutions to a specific use case because it doesn’t have a deep understanding of most brands. (More on this data in an upcoming note.)

Return of the ’10 blue links’

In each individual response, ChatGPT will, on average, mention only 10 brands.

There’s quite a range, though – a minimum of 6 brands per response and a maximum of 15 when averaging across response sets.

How many brands per response, on average?

But a single response typically names about 10 brands regardless of category or prompt type.

The big difference is in how much the pool of brands rotates across responses – competitive categories draw from a much deeper bench, even though each individual response names a similar count.

Everything old (in SEO) truly is new again (in GEO/AEO). It reminds me of trying to get a placement in one of Google’s “10 blue links”.

Dig deeper: How to measure your AI search brand visibility and prove business impact

How consistent are ChatGPT’s brand recommendations?

When you ask ChatGPT for a B2B software recommendation 100 different times, there are only ~5 brands, on average, that it’ll mention 80%+ of the time.

To put it in context, that’s just 11% of all the 44 brands it’ll mention at all across those 100 responses.

ChatGPT knows ~44 brands in your category

So it’s quite competitive to become one of the brands ChatGPT consistently mentions whenever someone asks for recommendations in your category.

As you’d expect, these “dominant” brands tend to be big, established brands with strong recognition. For example, the dominant brands in the accounting software category are QuickBooks, Xero, Wave, FreshBooks, Zoho, and Sage.

If you’re not a big brand, you’re better off being in a niche category:

It's easier to get good AI visibility in niche categories

When you operate in a niche category, not only are you literally competing with fewer companies, but there are also more “open slots” available to you to become a dominant brand in ChatGPT’s responses.

In niche categories, 21% of all the brands ChatGPT mentions are dominant brands, getting mentioned 80%+ of the time.

Compare this to just 7% of all brands being dominant in competitive categories, where the majority of brands (72%) are languishing in the long tail, getting mentioned less than 20% of the time.

The responses to nuanced prompts are harder to dominate

A nuanced prompt doesn’t dramatically change the long tail of little-seen brands (with <20% visibility), but it does change the “winner’s circle.” Adding persona context to a prompt makes it a bit more difficult to reach the dominant tier – you can see the steeper “cliff” a brand has to climb in the “nuanced prompts” graph above.

This makes intuitive sense: when someone asks “best accounting software for a Head of Finance,” ChatGPT has a more specific answer in mind and commits a bit more strongly to fewer top picks.

Still, it’s worth noting that the overall pool doesn’t shrink much – ChatGPT mentions ~42 brands in 100 responses to nuanced prompts, just a handful fewer than the ~46 mentioned in response to simple prompts. If nuanced prompts make the winner’s circle a bit more exclusive, why don’t they also narrow the total field?

Partly, it could be that the “nuanced” questions we fed it weren’t meaningfully narrower or more specific than what was implied in the simple questions we asked.

But, based on other data I’m seeing, I think this is partly about ChatGPT not knowing enough about most brands to be more selective. I’ll share more on this in an upcoming note.

Dig deeper: 7 hard truths about measuring AI visibility and GEO performance

What does this mean for B2B marketers?

If you’re not a dominant brand, pick your battles – niche down

It’s never been more important to differentiate. 21% of mentioned brands reach dominant status in niche categories vs. 7% in competitive ones.

Without time and a lot of money for brand marketing, an upstart tech company isn’t going to become a dominant brand in a broad, established category like accounting software.

But the field is less competitive when you lean into your unique, differentiating strengths. ChatGPT is more likely to treat you like a dominant brand if you work to make your product known as “the best accounting software for commercial real estate companies in North America.”

Most AI visibility tracking tools are grossly misleading

Given the inconsistency of ChatGPT’s recommendations, a single spot-check for any given prompt is nearly meaningless. Unfortunately, checking each prompt just once per time period is exactly what most AI visibility tracking tools do.

If you want anything approaching a statistically significant visibility score for any given prompt, you need to run the prompt at least dozens of times, even 100+ times, depending on how precise you need the data to be.

But that’s obviously not practical for most people, so my suggestion is: for the key, bottom-of-funnel prompts you’re tracking, run each ~5 times whenever you pull data.

That’ll at least give you a reasonable sense of whether your brand tends to show up most of the time, some of the time, or never.

Your goal should be to have a confident sense of whether your brand is in the little-seen long tail, the visible middle, or the dominant top-tier for any given prompt. Whether you use my tiers of ‘under 20%’, ‘20–80%’, and ‘80%+’, or your own thresholds, this is the approach that follows the data and common sense.
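To make the sampling point concrete, here’s a minimal Python sketch (the function names and tier labels are mine, based on the thresholds above) that turns repeated runs of one prompt into a tier call, with a Wilson score interval to show how much uncertainty a small sample leaves:

```python
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a binomial proportion."""
    if trials == 0:
        return (0.0, 1.0)
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return (max(0.0, center - margin), min(1.0, center + margin))

def visibility_tier(mentions: list[bool]) -> str:
    """Classify a brand by how often it appeared across repeated runs of one prompt."""
    rate = sum(mentions) / len(mentions)
    if rate < 0.20:
        return "long tail"
    if rate >= 0.80:
        return "dominant"
    return "visible middle"
```

With 4 mentions in 5 runs, the 95% interval spans roughly 0.38 to 0.96: five runs can separate the coarse tiers, but they come nowhere near a precise visibility score.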


What’s next?

In future newsletters and LinkedIn posts, I’m going to build on these findings with new research:

  • How does ChatGPT talk about the brands it consistently recommends? Is it indicative of how much ChatGPT “knows” about brands?
  • Do different prompts with the same search intent tend to produce the same set of recommendations?
  • How consistent is “rank” in the responses? Do dominant brands tend to get mentioned first?

This article was originally published on Visible on beehiiv (as Most AI visibility tracking is misleading (here’s my new data)) and is republished with permission.


B2B Social Media Marketing: Build a Winning Strategy

While most direct-to-consumer brands are maximizing their social media presence with polished content and paid ads, many business-to-business companies (B2Bs) are still stuck in broadcast mode. They treat social like a checkbox or, worse, avoid it altogether. That’s a miss.

Your buyers are on these platforms every day, scrolling LinkedIn between meetings, watching YouTube explainers, and even picking up insights on TikTok.

The good news is that most of your competitors aren’t doing this well. And B2B social follows different rules. It’s less about selling, more about showing up with value and building trust over time.

This guide breaks down the platforms, strategy, and mistakes to avoid so you can stop blending in and start building something that drives real results.

Key Takeaways

  • Most B2B brands underperform on social because they focus on broadcasting, not solving problems or creating value.
  • LinkedIn leads for B2B, but platforms like YouTube, X, and even TikTok can work if you match the content to your audience.
  • B2B social content should educate, not sell. Use it to build trust and stay relevant throughout long sales cycles.
  • Build a strategy around real personas, funnel stages, and platform-specific content—not random posting or vanity metrics.
  • Avoid common mistakes like generic messaging and chasing impressions over actions like clicks or demo signups.

Why B2B Social Media Is (Still) Underrated

Many B2B companies still treat social media as an afterthought. They post a few updates, maybe recycle some blog content, and call it a day. But here’s the truth: social media isn’t just about brand awareness anymore. It plays a fundamental role in demand gen, and even sales.

Your buyers are on these platforms every day. LinkedIn? Still essential. YouTube? Massive for education. Twitter (X)? Great for thought leadership. Even TikTok is becoming a serious B2B player in some niches.

If you’re only thinking top-of-funnel, you’re missing the bigger picture. Social gives you direct access to influence buying decisions, build relationships, and stay top of mind during long sales cycles. It’s also a powerful signal for search. That’s why smart B2B brands treat social like a core channel, right alongside their email, paid, and B2B SEO strategies.

So yes, B2B social still flies under the radar but that’s your opportunity. While your competitors play it safe, you can build a strategy that actually drives the pipeline.

Top B2B Social Platforms

Not all platforms are worth your time, but these are a good starting point. Here’s a breakdown of the top B2B social channels and how to use each one to actually move the needle.

LinkedIn

B2B marketers love LinkedIn, with 97% of them using it for their content marketing strategy.

There’s a reason for this: LinkedIn is effective at securing leads.

The social goal of most B2Bs isn’t just traffic. It’s the right kind of traffic. More specifically, it’s leads from that traffic. That’s why LinkedIn has been the social media sweet spot of most B2Bs.

Social platforms compared in a graphic.

LinkedIn does for B2Bs what Facebook, X, and Pinterest have all failed to do. It forms professional connections based on a single goal.

It’s not just that Facebook, X, and the rest are more personal and less professional; LinkedIn deliberately brands itself as a professional networking site. On LinkedIn, you see fewer baby pictures, fewer cat videos, and nothing about “Dave just checked in at Downtown Bar.”

LinkedIn, devoid as it is of issues like “relationship status” and “favorite TV shows,” is much more appealing to the world of B2B exchanges.

A HubSpot LinkedIn Post.


X

X still punches above its weight for B2B if you use it right. It’s built for real-time conversations. That makes it great for PR moments and quick interactions with your audience or peers.

If you’re in tech or SaaS, this is where your buyers and early adopters are already talking. Threads and hot takes can build credibility fast, as long as you’re consistent and actually say something worth engaging with.

Just don’t expect conversions. X is a conversation starter, not a closer. Use it to build visibility, shape perception, and stay in the mix.

An Adobe X post.


YouTube

YouTube is a goldmine for B2B content that keeps working long after you hit publish. Think product demos, how-to explainers, or customer stories, anything that helps prospects see your value in action.

It’s perfect for long-form content with high evergreen potential. A solid video can rank in search, appear in recommended feeds, and continue to drive traffic for months (or even years). And because Google owns YouTube, it plays nice with your overall SEO strategy.

Use it to educate, build trust, and answer the questions your audience is already Googling. Just keep the production clean and the content useful.


TikTok + Instagram (Yes, Really)

These aren’t just playgrounds for influencers anymore. TikTok and Instagram can actually work for B2B if you play to their strengths. Short-form video is perfect for showing off your brand personality, simplifying complex ideas, or giving a behind-the-scenes look at your team.

They’re especially useful for building an audience that sees your brand as more than just a logo. Quick explainers and team moments go a long way here.

The key is to be intentional. You don’t need to chase every trend, but you do need to show up as a genuine person, not a corporate account.

A Zapier TikTok post.


A Shopify Instagram post.


How B2B Social Media Needs To Work Differently

Most B2B social strategies fall flat because they treat platforms like a digital brochure. Too much product pushing. Not enough problem-solving.

Your buyers don’t scroll through LinkedIn or YouTube looking for a sales pitch; they’re looking for answers. That’s your opportunity. When you lead with value, you earn attention. And in B2B, attention is the first step toward trust.

This isn’t about trying to “go viral.” It’s about consistently showing up with content that solves real problems. That might look like a short video explaining a common pain point or a post breaking down industry trends.

Educational content works because it positions you as a guide, not just a vendor. It says, “We get your world. Here’s how to navigate it better.” That’s way more powerful than just listing your features.

You also need to show up like a human. Buyers are smart. They can sniff out polished sales copy in seconds. What they actually want is an honest perspective, clear thinking, and content that feels like it came from someone who’s done the work. That’s how you build an audience that actually wants to hear from you, and buyers who remember your name when it’s time to act.

Building a B2B Social Media Strategy That Works

A solid B2B social strategy doesn’t mean posting constantly. It means making smarter posts. Here’s how to build a plan that actually drives results across the funnel.

Know Your Customer Profiles

Before planning content, you need to be clear about who you’re actually talking to. Who’s following you now, and who do you want to attract?

An ideal customer profile.

Source: The Smarketers

B2B audiences aren’t one-size-fits-all. A CMO wants high-level insights and strategic trends. A sales manager cares more about tactics and results. Founders might look for big-picture thinking or lessons from the trenches. If you post the same content to all of them, you’ll miss the mark every time.

Start by segmenting your audience. Review your analytics and consult with your sales team, then map out which personas matter most for your business and what they care about.

Also, know where they hang out. Your audience might be active on LinkedIn and totally absent on Instagram. Or maybe they’re watching explainers on YouTube but ignoring X. Match your platform and content format to what your ideal customer actually uses and engages with. That’s how you create content that lands.

Set The Right Goals/KPIs

If you don’t know what you’re aiming for, it’s easy to waste time chasing the wrong metrics. Start by defining what success actually looks like for your brand.

Is your focus on awareness? Then you’re tracking reach, impressions, and follower growth. Want to drive engagement? Look at comments, shares, and saves, not just likes. If lead gen is the goal, prioritize CTRs or traffic to high-intent landing pages.

You might also be building community or educating users on your product. In those cases, qualitative feedback can be a stronger signal than raw numbers.

The key is to tie your content back to goals that matter for the business—and track the right KPIs for each. Don’t get distracted by vanity metrics that look good but don’t move the needle. Set benchmarks, track consistently, and optimize based on what’s actually working.

Build A Content Marketing Calendar

An effective content calendar maps content to each stage of the funnel, so you’re guiding prospects from awareness to action and making the most of your B2B content strategy.

At the top of the funnel (TOFU), focus on educational content. Think industry stats and quick tips that stop the scroll and add value fast. For the middle (MOFU), shift to case studies and testimonials that build trust and show proof. Bottom-of-funnel (BOFU) content should drive action: think offers and clear calls to action (CTAs).

A LinkedIn Post from Neil Patel with a graphic.

A well-planned calendar also helps you stay consistent without burning out your team. You can batch content and avoid that last-minute “what do we post today?” panic.

Turn Employees and Executives Into Advocates

People trust people, not brands. That’s why employee advocacy is one of the most powerful (and underused) tools in B2B social.

When your team shares content, adds their take, or shows up in the comments, it expands your reach and adds credibility. Their networks are often full of the exact decision-makers you’re trying to reach. And posts from real people perform better than anything coming from a company page.

The same goes for your leadership team. Help your CEO or founder post in their own voice, not just polished PR copy. A short LinkedIn post sharing a real insight or lesson learned often lands better than a glossy video.

A LinkedIn Post from an NP Digital employee.

Make it easy for your team to participate. Share post templates, content ideas, or just ask them to weigh in on relevant threads. The goal isn’t to turn everyone into a creator—it’s to activate your people as trusted voices for your brand. The image above shows what this looks like in practice.

Measure, Learn, Optimize

If you’re not measuring, you’re just guessing. The best B2B social strategies are built on real data, not hunches.

Start with the basics: engagement rate, impressions, and click-throughs. Track how often people interact with your content and where they go next. Are they hitting your demo page? Signing up for a webinar? Those are signals your content is working.

Use tools like GA4 and each social platform’s native analytics to connect the dots. Don’t just track what performs best. Look at why. Was it the topic? The format? The tone?

Speaking of format, test everything. Short videos. Carousels. Polls. First-person posts. What works on LinkedIn might fall flat on X. What drives DMs might not drive clicks. The only way to know is to try.

Then optimize. Double down on what works. Cut what doesn’t. Keep tweaking until your content not only earns attention but drives action.

Additional Strategies For B2B Social Media

Once your core strategy’s in place, these advanced plays can help you scale faster, get more mileage from your content, and squeeze more value out of every post.

Figure Out a Non-boring Angle

A lot of B2Bs feel like they’re boring, and this perception of being a boring company becomes a self-fulfilling prophecy. Because they think they are boring, they write boring articles and make boring social media posts.

Let’s look at a company that sells project management software. On the surface, nothing is exciting about that product or industry, but when you start to look at how the product can help your customer, things become unboring very quickly.

A new project management platform can include cool features for collaboration. It could also increase productivity or help business teams achieve goals that previously seemed out of reach.

Your job is to “sell the sizzle.” Put yourself in your customer’s shoes and brainstorm the solutions your product or service can provide their business that will get them excited!

Each B2B with an unintelligible product or service needs to develop an angle that is both understandable and appealing to a broader audience. This will allow them to create an initiative or idea that can gain traction on social media.

You can find an unboring angle. Once you do, you’re ready to roll forward with your social media efforts.

Feature A Human Aspect

One of the major shortcomings of many B2Bs is the lack of a genuine human backing their efforts.

The lack of real people makes the B2B company seem so distant and unreal. It’s like talking to a robot. It just doesn’t feel right.

Every B2B needs to make an intense effort to humanize their brand tone and voice across social media and content marketing. Here’s what this looks like in practice:

  • Using first-person voice when writing updates and articles
  • Using a brand front person to tweet, post updates, and write articles
  • Using real people with their names in customer service
  • Initiating engagement and outreach from a real person

A HubSpot LinkedIn Post.


Hire The Right Person

B2Bs are often challenged in social media because they don’t hire the right person to manage their social media efforts.

Here are a few tips to help a B2B hire the right person for social media:

  • Hire an expert in social media. Look for someone who has social media success in a similar niche, but not necessarily in your own niche.
  • Hire a social media consulting company or agency, not just an individual. Companies often have more resources at their disposal. For a lower price, they can help you engage on multiple levels, such as creating social media graphics and writing content.

Anyone leading a social media initiative must have familiarity with the industry. But B2Bs also need someone who is a social media ninja. Why? Because B2B social media is a hard nut to crack. It’s not inherently sexy or awesome. It doesn’t automatically generate buzz. It takes a social media expert to really unleash the hidden power in B2B social media.

Brands need someone who can develop a social media movement, shaping the brand’s voice and expanding its reach. It’s not just status updates. It’s an entire identity creation.

If the first objective of social media is leads, then things have gotten off on the wrong foot. Leads don’t come first. Engagement and presence come first. Leads are a byproduct. This goes back to the “unboring angle” I mentioned above.

Back Your Social Media With Content Marketing

There is no such thing as a successful social media campaign without a successful content marketing campaign. They’re like two links in an indestructible chain.

Fortunately, about half of B2B companies understand the importance of content marketing, according to Statista. They realize it’s essential for customers to trust their brand, and they know how far content marketing can go in solidifying that trust. 

I’m convinced that the better a B2B company is at content marketing, the more effective they will be at social media.

This article is not the place to discuss the ins and outs of B2B content marketing. Instead, I’ll point out that the company should find the most engaging form of content and share it on social media.

Common B2B Social Mistakes

Most B2B social feeds feel like a wall of noise. Why? Because too many brands treat social like a megaphone instead of a conversation. Here are some of the biggest mistakes I see with B2B social accounts:

  • Constantly pushing products and making salesy updates, treating your account like a billboard. If your posts aren’t solving a problem or offering insight, don’t expect engagement.
  • Posting just to stay “active.” If your content calendar is driven by days of the week instead of strategy, your audience will feel it. Every post should aim to educate, engage, or move someone closer to buying.
  • The platform dilemma. What works on LinkedIn won’t work on TikTok. You need to adapt your message, tone, and format based on where you’re showing up and who you’re trying to reach.
  • Tracking the wrong metrics. Chasing impressions or vanity metrics won’t tell you what’s driving value. Prioritize metrics like click-throughs and demo page visits—things that tie back to real business outcomes.

Avoid these traps, and you’ll be in much better shape than most of your competition.

FAQs

Which social media platform is best for B2B marketing?

LinkedIn is the go-to platform for most B2B brands. It’s built for professional networking and decision-maker engagement, making it ideal for thought leadership and brand awareness. But depending on your audience, YouTube, X (Twitter), and even TikTok can play a role too.

How to use social media for B2B marketing?

Start by sharing content that solves real problems—think educational posts, customer stories, and product demos. Focus on building trust and staying visible across the buyer journey, not just selling. Then measure what works and keep improving.

What is B2B social media marketing?

B2B social media marketing uses platforms like LinkedIn, YouTube, and X to connect with business buyers. It’s about building relationships and sharing valuable insights as you guide potential customers through the sales funnel.

Conclusion

In the next few years, I predict that we’ll see more and more B2B marketers focus their time and energy on their social media skills. Already, there are a few bright spots on the B2B social horizon.

Using these tips is a great way to optimize your cross-channel marketing efforts. Becoming a platform ninja who understands social media trends and can incorporate them into the B2B marketing sales funnel is the clear path forward for today’s marketers.


Digital Marketing Trends & Predictions 2026

If 2025 taught us anything, it’s that AI is no longer just a side tool. It’s the engine running campaigns and reshaping how people discover brands.  

At the same time, platforms have declared war on the “click.” We’re seeing an aggressive push for native conversions, where the goal isn’t to drive traffic to the website but to close the deal right in the feed. 

That shift toward “frictionless” experiences, combined with the saturation of AI-generated noise, has forced another major change. Content with deep educational value is starting to outperform the high-volume, “101-level” content that simply fills space. 

As we get deeper into the new year, those shifts are accelerating. 

The top digital marketing trends for 2026 reflect this reality: Automation handles execution, while human elements like strategy and storytelling set the winners apart.  

If you want to stay relevant, abandon the old metrics of “rankings” and “reach.” They no longer guarantee relevance. Here’s what’s actually moving the needle in 2026 (and how the best digital marketers are keeping up). 

Key Takeaways

  • With the rise of agentic AI, machines can now handle the full campaign lifecycle, but human oversight is essential. 
  • User discovery spans platforms like TikTok, Reddit, YouTube, and Meta. Each one requires unique formats, signals, and intent-based optimization. 
  • Funnels are no longer static. AI personalizes journeys in real time based on user behavior, replacing manual segmentation and drip campaigns. 
  • Chat assistants recommend brands based on trust and content relevance. Consistency and large language model optimization (LLMO) are key to inclusion. 
  • Google’s traditional and AI systems (PMax, AI Overviews, Demand Gen, and Search) now operate as one. Aligning creative and goals across all touchpoints boosts results. 

AI Agents Take Over Execution

We’re already seeing AI streamline much of a marketing team’s content production. But the new flex is agentic AI. We’re talking about autonomous “team members” that can now handle your entire campaign workflow.  

According to PwC, nearly 80 percent of organizations have already adopted AI agents to some degree. And most plan to expand use as these systems move from experimentation into day-to-day operations. 

 AI agent adoption levels across organizations, with most reporting broad or limited adoption. 

This goes far beyond production and publishing. Large language models (LLMs) have advanced to the point that they can manage the full lifecycle. We’re talking about agents embedded into tools that can help: 

  • Manage your customer relationship management (CRM) data 
  • Analyze data performance 
  • Provide campaign insights 
  • Adjust ad bids for paid campaigns in real time 

This year, AI is going from writing your content to autonomous operations. It handles the execution while you focus on strategy and oversight. 

Search Everywhere Optimization Becomes Mandatory

For the last few years, “search everywhere” has been a catchy conference buzzword. In 2026, it’s a baseline for survival. 

The era of the “Google-default” mindset is over. Discovery now happens across platforms, feeds, and AI systems. Today’s SEO is drifting away from pure search engine optimization and toward search everywhere optimization. 

Your audience isn’t just “Googling it” anymore. They’re asking questions and validating purchases on the platforms they trust most. And each has its own algorithm, formats, and user behavior.  

For example: 

  • TikTok viewers want quick, visual tips.  
  • Reddit users want deep, authentic discussion.  
  • Pinterest needs eye-catching visuals with keyword-rich descriptions.  
  • YouTube demands longer, high-value content with tight intros and strong engagement. 

The most disruptive shift, however, is happening outside traditional feeds. Voice assistants like Alexa and Siri, and generative chat tools like ChatGPT, Gemini, or Claude are increasingly acting as answer engines.  

The numbers show where we’re headed. Nearly 1 in 5 people use voice search, and Statista predicts 36 percent of the global population will be searching via AI by 2028.  

Example of an AI chat assistant returning a summarized product recommendation list, showing how search increasingly happens inside answer engines.

Prompt-Driven Campaigns and Product Development

Digital marketers no longer need full engineering cycles to test new ideas.  

Prompt-driven tools now make it possible to prototype calculators, quizzes, internal tools, and campaign utilities in hours instead of weeks. 

Tools like Cursor and Replit let marketers translate plain-language instructions into working interfaces, lowering the barrier to experimentation. You still need engineering for production-scale products, but prompts now handle much of the early build and validation work. 

Base44 is another example of a “vibe coding” platform that can turn your detailed descriptions into functional tools, reinforcing the same idea: Prompts are becoming a new control layer.  

Everyone’s an engineer now. Look out, Silicon Valley!  

The game has changed. You can now test fast, learn faster, and skip the bottlenecks that used to slow everything down. 

Funnels Become Dynamic and Self-Optimizing

Static funnels are out. In 2026, customer journeys are becoming shorter and increasingly influenced in real time by AI systems. 

It may seem shocking at first, but it makes sense when you zoom out and think about it. We are no longer pushing users through a pre-set funnel. We’re letting AI agents build the funnel around the user in real time. 

In the early days of Google (and online shopping), a customer would have to visit several sites to research and read reviews—and, eventually, make a purchase. This is the classic marketing funnel we’re all familiar with. There’s a clearly defined top-of-funnel, mid-funnel, and bottom-of-funnel. 

With generative AI tools now offering in-platform purchases, that funnel shrinks significantly. Your typical user can now research, build trust, and make a purchase all within an LLM like ChatGPT.  

We’ve even begun to see major retailers like Walmart and Amazon move toward this model.  

Walmart’s Sparky assistant can answer user queries and pull in product recommendations for deeper questions. It even leads you to checkout when you’re ready to purchase.  

Walmart interface showing its AI shopping assistant answering product questions, comparing options, summarizing reviews, and guiding users toward checkout within a single on-platform experience. 


The same setup applies to Amazon Rufus, enabling customers to get details, get suggestions, get help, and get inspiration (and ultimately get stuff) all within one platform.  

Amazon’s Rufus AI assistant helping users research products, get recommendations, and shop without leaving Amazon


The result is higher engagement and faster conversions with far less manual work. These tools provide a hyper-personalized shopping experience faster than ever before. Platforms like Shopify and Etsy have also partnered with ChatGPT so customers can purchase products directly in the LLM. 

AI Attribution Connects Content to Revenue

Attribution isn’t new, but it’s getting more accurate. AI-powered attribution now connects every touchpoint—from the first video view to the final click—with real revenue outcomes. 

Platforms like Wicked Reports let marketers tie initial ad clicks to lifetime purchases, offering “first click” and “time decay” attribution tools to help you pinpoint the most successful starting point for your customers’ buying journeys. The app also provides revenue forecasting to help B2C and e-commerce businesses reliably predict and scale their growth.

Marketing analytics dashboard showing AI-driven measurement, signal correction, and performance insights used to connect campaigns to real revenue outcomes. 


Your latest blog post may not have converted immediately, but it made the visitor trust you enough to subscribe for email updates. That email is the next stop in their journey, pushing them to check out your pricing page. AI sees it all and assigns value accordingly. 
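The “time decay” model mentioned above is simple to sketch. Here is a generic Python illustration, not Wicked Reports’ actual implementation; the function name and the half-life parameter are my own assumptions:

```python
def time_decay_credit(touch_ages_days: list[float], half_life_days: float = 7.0) -> list[float]:
    """Split one conversion's credit across touchpoints, favoring recent touches.

    touch_ages_days: how many days before the conversion each touchpoint happened
    (0 = at the moment of conversion). A touch's weight halves for every
    `half_life_days` it sits further from the conversion.
    """
    weights = [0.5 ** (age / half_life_days) for age in touch_ages_days]
    total = sum(weights)
    return [w / total for w in weights]
```

For touches 14, 7, and 0 days before conversion (say, a blog visit, an email click, and a pricing-page visit) with a 7-day half-life, the credit split comes out to roughly 14% / 29% / 57%.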

With these new insights, you finally know which content moves the needle.  

And it’s having a real financial impact. Teams using AI-driven marketing analytics report return on investment (ROI) improvements of roughly 300 percent and customer acquisition costs dropping by more than 30 percent. 

Chat Assistants Reshape Discovery

We mentioned earlier how people’s search has evolved into asking AI chat tools like ChatGPT, Gemini, and Perplexity to answer their product questions. These platforms now include brand recommendations built right into the response, as well as the ability to shop for Shopify and Etsy products. 

This is the same dynamic powering tools like Walmart Sparky and Amazon Rufus, where research and recommendations happen within a single AI experience.  

These assistants don’t list 10 “sponsored” links, a la Google. They summarize what they trust. If they don’t mention your brand, you’re invisible in this new layer of discovery. 

AI answer engine Perplexity showing summarized recommendations for ‘best email marketing tools for SaaS,’ with brands cited directly in the response instead of traditional search links. 

It takes more than gaming keywords to show up on these platforms. It’s all about relevance and consistency.  

The more helpful, high-quality content you create around a topic, the more citations you’ll receive from users sharing it across the internet. Signals like structured content, schema markup, and consistent third-party validation help AI systems interpret your authority and decide when your brand is worth referencing. 

This shift has given rise to large language model optimization (LLMO), a new branch of SEO focused on training AI to recognize and recommend your brand. If you’re not already thinking about LLMO, it’s time to get caught up. 

The big takeaway here is that usefulness matters more than volume as discovery moves into AI systems. Provide enough high-quality answers to your audience’s questions, and the bots will start to bring your name up first. 

Content Structure Becomes Even More Important

Old-school SEO was all about keywords. In 2026, performance increasingly comes from covering topics in depth and structuring content so both people and machines can understand it. 

As we mentioned in the last section, search engines and AI assistants care more about how well you answer a question than how many times you use a keyword. That means your content needs to be thorough and easy to interpret at a glance, no matter who (or what) is doing the glancing. 

NerdWallet does this well by organizing credit card content into a clear hub, then breaking it into tightly related subtopics that cover a ton of topical ground. It’s no longer a game of relying on individual keyword pages. Notably, NerdWallet is one of the most frequently cited websites in LLMs. 

NerdWallet credit cards hub showing a structured topic cluster with subcategories like travel, cash back, balance transfer, and student cards organized under a single pillar. 

So, switch your strategy mindset from pages to topic clusters. Cover a topic from every angle across multiple assets. Use headers, FAQs, schema markup, and internal links to connect the dots.  
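To make the schema markup piece concrete, FAQ content can be marked up with schema.org’s FAQPage type. Here’s a minimal Python sketch that builds the JSON-LD; the question and answer text are placeholders, not content from any real page.

```python
import json

# Minimal schema.org FAQPage JSON-LD. Question/answer text is illustrative.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is a topic cluster?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "A pillar page plus tightly related subpages, linked together so the whole topic is covered.",
            },
        }
    ],
}

# Embed the output in a <script type="application/ld+json"> tag on the page.
print(json.dumps(faq_schema, indent=2))
```

The point isn’t the tooling; it’s that every question your cluster answers can also be expressed in a format machines parse without guessing.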

The better you structure your content, the easier it is for AI to find and promote it. 

Your target audience is searching across multiple channels in today’s environment. Focusing on individual keywords leaves a lot of opportunity on the table.  

Today’s rising search platforms, like social media apps and LLMs, revolve around semantic queries. 

People talk to these tools naturally and conversationally (some of them even use ChatGPT’s voice functionality). This means you can’t home in on a single keyword. A keyword cluster that covers the most popular phrasings customers use is a much better way to make sure you’re covering what people are asking, increasing your probability of being found.  

This query within Perplexity demonstrates how people interact with search tools. They’re not always typing keywords. They’re asking full, conversational questions and expecting a clear answer. 

AI answer engine responding to a conversational question, ‘Which is better for a headache, Tylenol or ibuprofen?,’ with a summarized comparison pulled from multiple medical sources. 

You also have to consider that many users never click through to your site. Zero-click searches are growing fast, which means your content needs to deliver value right in the SERP—or immediately on platforms like social, LLMs, and voice. 

If you’re still chasing individual keywords, you’re missing the bigger opportunity: becoming the trusted source on your topic. 

Brand Trust Is Measured in Citations and Sentiment

AI doesn’t care how loud you are. It cares how often others talk about you, and what they say when they do. 

Large language models prioritize brands with consistent, credible citations across the web. That includes mentions in blog posts, news articles, podcasts, reviews, and Reddit threads. The more quality signals you earn, the more likely AI is to recommend you.  

But the mentions are just the beginning. Your performance in 2026 really boils down to your audience’s perception of you. Sentiment analysis now plays a big role in ranking. Positive discussions boost your chances of surfacing in AI results, while negativity can drag you down. 

Until recently, this layer of discovery was almost impossible to measure. Traditional analytics don’t show when your brand is cited inside AI-generated answers. But a new class of AI visibility tools now tracks where and how often brands appear across platforms like ChatGPT, Perplexity, Claude, and Google’s AI Overviews, along with the surrounding context. So which types of brands are succeeding with this strategy? 

Brands like Patagonia and TOMS are shining examples of this. These companies leverage philanthropy to increase their goodwill and, in turn, their customers’ positive sentiment toward them.  

Leveraging elements like philanthropy the right way turns these brands’ audiences from customers into loyal supporters. 

Patagonia webpage outlining causes the company funds and does not fund, illustrating clear brand values and consistent public positioning. 

This shift rewards brands that build goodwill rather than just backlinks. If your strategy still centers on shouting the loudest, you’ll get buried by brands that are being talked about, and for the right reasons. 

A ChatGPT result talking about TOMS philanthropy efforts.

Trust is now your most important ranking factor. Earn it or fade out. 

Blogs Influence AI Models, Not Just Traffic

If you think blogs don’t “work” like they used to, you’re missing the bigger picture. They still do heavy lifting behind the scenes to shape AI output and position your brand as a go-to source. 

In modern search, everything you publish helps shape how AI models understand your brand. When you consistently cover a topic with depth and clarity, models start to associate your name with that subject.  

This new reality turns your blogs from content assets into signals of authority. 

Even if search traffic dips due to zero-click results or AI summaries, the long-term payoff is still there. The more high-quality content you create, the more likely your brand is to be cited by the higher-profile AI channels and included in trusted content roundups. 

Social Platforms Function as Search Engines 

As the search everywhere trend shows us, search behavior is spreading. And, according to Statista, nearly a quarter of U.S. adults treat social media as their starting point for search. 

People are searching TikTok to see how something works or whether a restaurant’s worth trying.  

TikTok search results for ‘best places to eat in Las Vegas,’ showing short videos answering a local restaurant query instead of traditional search links. 

They’re using YouTube to learn how to install software or compare skincare brands. Considering that this is the largest search engine after Google, it’s a great platform to focus efforts on. 

This matters because social search runs on a different logic than traditional SEO or AI answer engines. These platforms reward relevance through engagement. 

Each platform has its own discovery logic. TikTok rewards watch time and velocity. YouTube favors relevance and retention. Instagram leans on recency and interaction. 

Without optimizing for these platforms, you’re missing a huge part of the search pie. You should be treating social platforms like search engines, because your audience already does. 

This is where more traditional on-page SEO comes into play. That means digging into the types of questions your audience is asking and focusing on tried-and-true tactics like using clear, searchable titles and engaging hooks to “stop the scroll” and get your viewers’ attention in the first three seconds. 

Content Quality Outperforms Quantity Across Channels

Publishing more content won’t save you in 2026. 

Social platforms are flooded, and search is competitive. On top of that, AI is getting better every day at filtering out thin, repetitive, or regurgitated content.  

Consequently, original insights and pieces that actually teach something are rising to the top. 

We see this in emerging trends. For starters, the average number of posts per day among brands has decreased to 9.5. Engagement is moving in the opposite direction, with inbound interactions increasing by roughly 20 percent year over year.  

Instead of posting five times a day, focus on publishing things worth reading and sharing, even if it’s only one well-structured piece of content per week.  

A thoughtful video or long-form LinkedIn breakdown that sparks conversation will do much better than 100 pieces of AI-generated blogs that barely scratch the surface of a topic. 

Take National Geographic, for example. Rather than posting constantly, it focuses on educational storytelling. Check out its TikTok grid. 

National Geographic’s TikTok profile showcasing educational, documentary-style videos that prioritize learning and storytelling over high-volume posting. 

Content creators are experiencing the benefits of this strategy in real time.  

A recent survey finds that 35 percent of creators say they’re seeing higher potential ROI from longer-form content formats, with 39 percent saying they’re seeing better engagement. And almost half (49 percent) say that the choice to produce longer-form content is helping them reach a wider audience.  

If your strategy is still built around churning out content to “stay active,” it’s time to shift. Fewer pieces. Bigger impact. Better outcomes. 

That’s what wins in 2026. 

Conversion Happens On-Platform, Not On-Site 

The platforms people use every day are getting very good at keeping them there.  

Think about it: Nearly every social platform has lead forms and lets you shop inside the app. The goal of these features is to help you convert without ever leaving their platform. 

Instagram and TikTok, for example, have fully integrated shopping experiences. And it’s working. Sales through social media channels are forecast to account for nearly 21 percent of e-commerce sales in 2026. 

Google is even testing AI-generated product recommendations with built-in checkout links, much like Etsy’s integration with ChatGPT. The whole point is to remove friction and keep the experience seamless. 

That shift changes what a “landing page” even means. In many cases, it’s a native form, a product card, or an in-app checkout flow that closes the deal on the spot. 

Your website still matters, but forcing every conversion to happen there can introduce unnecessary drop-off. When users are ready to act, the simplest path usually wins. 

This shift is giving rise to what some teams now call checkout optimization, and it’s getting some pretty serious results. E-commerce brands with 1,000 to 2,000 orders per month are implementing checkout optimization and seeing measurable gains in shipping revenue and order totals.  

Comparison of e-commerce checkout flows before and after optimization, showing fewer steps, clearer shipping options, and reduced friction at checkout. 

(Image Source) 

When you meet users where they are, you lower the barrier to action. No load times. No messy redirects. Just a quick tap or swipe to buy, book, or sign up. 

Video Becomes a Primary Search and AI Input 

Video is now more than just a distribution format. It’s a primary way people search—and a growing input for AI systems. 

Search engines and AI platforms now index video much like they do written content, pulling from structural signals to generate results. If those signals aren’t there, the video might as well not exist. 

ChatGPT interface responding to the prompt ‘Hit me with some funny cat videos’ by embedding a YouTube video thumbnail of a cat sitting in a plastic container in water. 

What do those signals look like in practice? 

Well, because search engines and AI platforms can’t watch your videos, they instead rely on clean transcripts, keyword-rich titles and descriptions, and clear segmentation. Think chapters, not rambles. Structure is what makes video searchable. 
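As one concrete example of that segmentation: YouTube builds chapters from timestamped lines in a video’s description, starting at 00:00 (the chapter titles below are hypothetical).

```
00:00 Intro
00:45 What is technical SEO?
03:10 Crawling and indexing basics
07:30 Common fixes and tools
```

Each line becomes a labeled, linkable section, which is exactly the kind of structure machines can reference.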

This video from Neil Patel uses chapters, summaries, and clear topic segmentation, making it easier for search engines and AI systems to interpret and reference specific sections. 

The more structured and searchable your video content, the more likely it is to be cited by AI assistants. 

Text still matters. But if video isn’t part of your SEO and discovery strategy, you’re leaving serious visibility on the table. 

Paid Media Shifts to AI-Led Campaigns

We’ve seen AI-driven paid media campaigns for some time now, but platforms like Google’s Performance Max and Meta’s Advantage+ are refining and elevating how it’s done. These platforms automatically test creative and placements to hit performance goals, and they’re even experimenting with AI-powered segmentation and ad bidding. 

The result is less manual control and more system-led optimization, which is a benefit for many marketers. Retail marketers, for example, have seen a 10 percent to 25 percent lift in their return on ad spend (ROAS) by implementing AI-powered campaign elements.  

But “hands-off” doesn’t mean “set it and forget it.” 

In this model, your role shifts from managing campaigns to training the system. The better your inputs—creative variety, first-party data, and clear conversion signals—the better your results.  

Lazy targeting and generic ads just get ignored. 

Want to lower customer acquisition cost (CAC) or increase ROAS? Focus on refining your creative and uploading strong first-party data. AI will handle testing and optimization, but it can’t fix bad inputs. 

Savvy marketers are shifting their roles from campaign operators to strategy leads. They’re spending less time on dashboards and more time building assets that actually convert, such as a robust content library or unique, impactful insights from proprietary data. 

It all comes down to this: AI runs the ads, but you train it. If you’re not giving the algorithm something great to work with, you’re not going to like what it gives back. 

FAQs

What are the digital marketing trends for 2026?

In 2026, AI is running full campaigns, dynamic funnels are replacing traditional static ones, and users are increasingly discovering brands across platforms. Chat assistants like ChatGPT now also recommend brands, and SEO is more about structured topics than keywords. Quality content outperforms quantity, and conversion often happens off your site. 

How can businesses stay updated on marketing trends?

Follow trusted industry blogs (like NeilPatel.com), subscribe to marketing newsletters, and keep an eye on platform updates from the big players (Google, Meta, and TikTok). Tools like Ubersuggest can also help spot shifts in search behavior. But more than anything, continue testing and tracking, and stay close to what your audience responds to. 

Conclusion

Many experts say that marketing is changing, but the fact is that it’s already changed.  

AI now drives the full spectrum of content marketing. Platforms prioritize native conversion. Content shapes how machines and people see your brand. If you’re still playing by old rules—keyword-centric strategy, manual funnels, or high-volume posting—you’re going to get left behind. 

Winning in 2026 means adapting quickly to emerging digital marketing trends by thinking strategically and building trust across every touchpoint. 

If you’re not sure where to start, check out my guide on search engine trends to see how modern discovery actually works today. 

The marketers who move first always get the advantage. So, make your move. 


January 2026 Digital Marketing Roundup: What Changed and What You Should Do About It

January didn’t bring flashy product launches. It brought something more valuable: clarity.

Platforms spent the month explaining how their systems actually work. Google detailed JavaScript indexing rules that matter for modern sites. Reddit opened up automation insights most platforms keep hidden. Amazon positioned itself as a legitimate cross-screen player with first-party data advantages traditional TV can’t match.

Automation kept expanding, but with firmer guardrails. AI continued to compress discovery. Zero-click experiences grew. Brands without clear expertise signals or off-site authority started disappearing from AI-generated answers.

For digital marketers, January reinforced one reality: performance in 2026 depends less on clever tactics and more on getting fundamentals right across channels.

Key Takeaways

  • Indexing logic must live in base HTML, not JavaScript. Google may skip rendering pages with noindex directives in initial HTML, leaving valuable content invisible even if JavaScript removes the tag later.
  • Performance Max channel reporting is now essential, not optional. Budget pressure is currently your sharpest lever for managing underperforming surfaces like Display or Discover.
  • Share of search is becoming a better demand signal than traffic alone. As AI reduces click-through rates, measuring how often people search for your brand versus competitors reveals momentum better than vanishing clicks.
  • Digital PR now directly impacts AI visibility. Authoritative mentions and credible coverage determine whether AI systems recognize and recommend your brand in zero-click answers.
  • Influencer marketing reached enterprise maturity in January. Unilever’s 20x creator expansion and 50% social budget shift prove influence at scale is baseline strategy, not experimentation.
  • Review monitoring must track losses, not just gains. Google’s AI is deleting legitimate reviews without notice, affecting rankings and trust faster than new reviews can rebuild them.

Search, SEO, and Indexing Reality Checks

Search teams started 2026 with clearer rules, not more flexibility. Google spent January confirming how it treats indexing signals on JavaScript-heavy sites.

Google Clarifies Noindex and JavaScript Behavior

Google confirmed that pages with a noindex directive in their initial HTML may not get rendered at all. Any JavaScript meant to remove or modify that directive might never execute.

Indexing intent belongs in base HTML. JavaScript should enhance experiences, not define crawl behavior. For headless stacks and dynamic frameworks, search engines respond to what they see first, not what you hope they’ll see after rendering.

If your site uses React, Next.js, Angular, or Vue with client-side rendering, audit how noindex tags are implemented. Server-side rendering or static generation solves most of these issues.
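A minimal sketch of the safe pattern: the directive lives in the HTML the server sends, so crawlers see it without executing any JavaScript (the page itself is hypothetical).

```html
<!-- Server-rendered response: the robots directive is in the initial HTML. -->
<head>
  <title>Internal search results</title>
  <meta name="robots" content="noindex">
</head>
<!-- Don't rely on client-side JS to add or remove this tag. If Google sees
     noindex in the initial HTML, it may skip rendering entirely. -->
```

If a page should be indexable, the initial HTML should simply omit the directive rather than counting on JavaScript to strip it later.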

Google Clarifies JavaScript Canonical Rules

Google detailed how canonical tags work on JavaScript-driven pages. Canonicals can be evaluated twice: once in raw HTML and again after rendering. Conflicts between the two create real indexing problems.

Server-rendered HTML pointing to one canonical while client-side JavaScript points to another forces Google to pick. That choice often hurts rankings quietly, without throwing obvious errors in Search Console.

Teams need to decide where canonicals live and enforce consistency. One canonical after rendering. No ambiguity between server and client.
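In practice, that means one canonical, emitted server-side and left untouched by client code (the URL below is a placeholder):

```html
<!-- Emit the canonical in server-rendered HTML and never rewrite it client-side. -->
<link rel="canonical" href="https://www.example.com/credit-cards/travel/">
```

If the rendered DOM shows the same tag with the same href as the raw HTML, there’s nothing for Google to arbitrate.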

December Core Algorithm Update Wraps

Google’s December 2025 core update finished after roughly 18 days of volatility. Sites with stale content, weak expertise signals, or unclear intent lost ground. Others gained visibility by being more useful and better aligned with user needs.

Core updates no longer feel disruptive because they’re frequent. Three broad core updates rolled out in 2025 alone. The advantage now comes from consistent execution, not post-update recovery tactics.

Paid Search, Automation, and Audience Control

Paid media keeps moving toward automation. January showed where control still exists and where it doesn’t.

Using Google’s PMax Channel Report More Strategically

The Performance Max Channel Performance Report keeps evolving. You can now see performance broken down across Search, YouTube, Display, Discover, Gmail, and Maps.

The PMax Channel Performance Report.

You still can’t control bids or exclusions at a granular level. What you can control is budget pressure. One surface consistently underperforming? Budget becomes your corrective lever. Pull back overall spend and PMax reallocates to better-performing channels automatically.

Teams that review this report monthly make better creative and investment decisions. Track this data over time. Patterns emerge. You start understanding which channels deliver at which funnel stages, even inside automation.

Google Drops Audience Size Minimums

Google lowered minimum audience size thresholds to 100 users across Search, Display, and YouTube. Previous minimums ranged from 1,000 users down to a few hundred depending on network and list type.

This opens doors for smaller advertisers and niche segments. Remarketing lists, CRM uploads, and custom audiences that previously failed minimums now become usable.

Smart teams will use this to test tighter segmentation strategies. But don’t chase volume that isn’t there. A 100-user audience won’t scale into a growth channel overnight.

Bing Tests Google-Style Ad Grouping

A Bing Ad Example.

Bing briefly tested a sponsored results format similar to Google’s recent changes. Multiple ads grouped under a single label, with only the first result carrying an ad marker.

The test ended quickly, but the signal matters. Search platforms are converging on similar layouts. How ads appear now affects click quality and intent, not just click-through rate.

Social Platforms and Performance Content

Social platforms spent January rewarding clarity while punishing shortcuts.

Reddit Launches Max Campaigns

Reddit introduced Max Campaigns, an automated ad product handling targeting, placements, creative, and budget allocation in real time.

What stands out is visibility. Reddit surfaces audience personas and engagement insights that most automated systems hide. Early testers report 27% more conversions and 17% lower CPA on average.

Testing works best when anchored to existing campaigns. Replicate your best-performing Reddit campaign as a Max Campaign. Let automation prove efficiency gains with known benchmarks.

Instagram Caps Hashtags

Instagram rolled out a five-hashtag limit across posts and reels. This confirms discovery on Instagram is driven by AI-based content understanding, not hashtag volume.

Hashtags now function like keywords. They clarify intent and help Instagram’s systems categorize content. They don’t manufacture reach.

Captions, on-screen text, subtitles, and visuals do the heavy lifting. Choose five hashtags that directly describe your content. Mix specificity levels: one broad category tag, two niche topic tags, one community hashtag, one branded hashtag.

LinkedIn Shares Performance Guidance for 2026

LinkedIn reiterated that human perspective drives performance. Video continues outperforming other formats. Hashtags do not impact distribution. Automated engagement and content pods face increased scrutiny.

Posting two to five times per week remains effective. AI can support thinking, but content still needs lived experience and clear points of view.

Brand Visibility, Authority, and Demand Measurement in an AI Era

AI-driven discovery is reshaping how brands get surfaced and evaluated.

What AI Search Means for Your Business

AI-generated summaries and zero-click experiences shape early discovery now. Users often form opinions before visiting a site. Google’s AI Overviews, ChatGPT’s SearchGPT, and Perplexity answer questions directly, compressing or eliminating the need to click through.

AI favors brands with clear expertise, structured content, and external validation. Generic explanations get compressed into summaries that strip away brand identity. Thin content disappears entirely.

Optimization now includes being understandable and credible to machines, not just persuasive to human readers. That means structured data markup, clear content hierarchy, author credentials, and topical authority signals.
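One way to make author credentials machine-readable is Article schema with explicit author information. A minimal Python sketch follows; every value is a placeholder for illustration, not a real page or person.

```python
import json

# Minimal schema.org Article JSON-LD with author credentials.
# All values below are placeholders for illustration.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How AI Search Changes Brand Discovery",
    "datePublished": "2026-01-15",
    "author": {
        "@type": "Person",
        "name": "Jane Doe",
        "jobTitle": "Head of SEO",
        "url": "https://www.example.com/about/jane-doe",
    },
}

# Embed in a <script type="application/ld+json"> tag on the article page.
print(json.dumps(article_schema, indent=2))
```

Paired with a real author bio page, markup like this gives AI systems an unambiguous answer to “who wrote this, and why should I trust them?”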

Share of Search Becomes a Core KPI

As AI reduces click-through rates, traffic becomes a weaker signal of demand. Share of search fills that gap.

It measures how often people look for your brand compared to competitors. That correlates strongly with market share and future growth. Brands with rising share of search typically see revenue growth follow within quarters, even if organic traffic stays flat.

Calculate share of search by tracking branded search volume for your brand and key competitors over time. Tools like Google Trends, Semrush, or Ahrefs make this accessible.
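The calculation itself is simple: your branded search volume divided by the category total. A back-of-the-envelope sketch in Python, with invented volumes:

```python
# Share of search: branded search volume as a fraction of the category total.
# Monthly branded search volumes (illustrative numbers, not real data).
branded_volume = {
    "your_brand": 40_000,
    "competitor_a": 90_000,
    "competitor_b": 70_000,
}

total = sum(branded_volume.values())
share_of_search = {
    brand: round(volume / total * 100, 1)
    for brand, volume in branded_volume.items()
}

print(share_of_search)  # your_brand holds 20.0 percent of category demand
```

Track the same calculation month over month; the trend line matters more than any single reading.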

Digital PR Matters More Than Ever

AI systems recommend brands they recognize and trust. That trust is built off-site, not through on-page optimization.

Authoritative mentions, expert commentary, and credible coverage now influence visibility across AI-driven experiences. Links still matter, but reputation matters more.

PR, SEO, and content strategy can no longer operate independently. Authority compounds when they align. If you’re not investing in Digital PR alongside traditional SEO, you’re optimizing for a search ecosystem that’s rapidly shrinking.

Video, CTV, and Cross-Screen Media Strategy

Video buying is consolidating across screens.

Amazon Emerges as a Cross-Screen Advertising Player

Amazon is positioning itself as a unified advertising ecosystem across Prime Video, live sports, audio, and programmatic inventory. Layered with first-party shopper data, this creates a powerful performance and measurement advantage traditional TV buyers can’t match.

Amazon now competes higher in the funnel through premium video and live sports while retaining lower-funnel accountability through its commerce data. Interactive features let you add “add to cart” overlays directly in OTT video ads.

CTV Breaks the 30-Second Format

Streaming dominates TV consumption. Ad formats are finally catching up. Interactive and nontraditional CTV units are gaining traction, supported by early standardization efforts from IAB Tech Lab.

Traditional :15 and :30 second spots still work, but they blend into an increasingly crowded environment. Emerging formats offer differentiation in lower-clutter streaming contexts.

Brands that test early build creative and performance advantages before these formats normalize and competition increases.

Pinterest Acquires tvScientific

Pinterest’s acquisition of tvScientific connects intent-driven discovery with CTV buying. This closes a long-standing measurement gap between inspiration and awareness channels.

For brands rooted in discovery—home decor, fashion, food, travel, DIY, beauty—this creates a clearer path from interest to action.

Brand-Led Attention and Influence at Scale

Attention increasingly flows through people, communities, and culture-driven media.

Unilever’s Influencer Expansion

Unilever announced plans to work with 20 times more influencers and shift half its ad budget to social. This isn’t a test. It’s a structural reallocation signaling influencer marketing has reached enterprise maturity.

Unilever’s SASSY framework now activates nearly 300,000 creators. The company reported category-wide outperformance, attributing significant gains to influencer-driven campaigns.

Brands still treating creators as side projects will struggle to compete against organizations running influencer programs with the same rigor and budget as paid search or programmatic display.

Google’s AI Is Deleting Reviews

Google’s AI moderation is removing reviews at scale, including legitimate ones, often without notice. Business owners report hundreds of reviews disappearing overnight.

That affects rankings, conversion rates, and consumer trust. Reputation strategy now includes monitoring review loss, not just tracking new reviews.

Check your Google Business Profile weekly. Document total review count and average rating. When drops occur, investigate patterns. Better yet, diversify review platforms beyond Google.

Experimentation and Growth Discipline

Sustainable growth depends on knowing why a test exists before judging its outcome.

Growth vs Optimization: Drawing the Line

Growth experiments explore new opportunities. Optimization improves what already works. Blurring the two creates misaligned expectations and poor decision-making.

Clear intent leads to clearer measurement and stronger buy-in. Teams that label tests correctly scale with more confidence.

What Digital Marketers Should Take Forward

Platforms are clarifying rules. AI rewards authority and consistency. Measurement is shifting away from clicks alone.

The advantage in 2026 comes from alignment across teams and channels. Durable signals outperform clever workarounds.

Indexing logic must live in base HTML. Performance Max channel reporting is essential. Share of search reveals momentum. Digital PR impacts AI visibility. Influencer marketing reached enterprise maturity. Review monitoring must track losses.

This is the work we focus on every day at NP Digital.

If you want help aligning fundamentals across SEO, paid media, content, and PR in a way that compounds over time, let’s talk.


AI Hallucinations, Errors, and Accuracy: What the Data Shows

AI hallucinations became a headline story when Google’s AI Overviews told people that cats can teleport and suggested eating rocks for health.

Those bizarre moments spread fast because they’re easy to point at and laugh about.

But that’s not the kind of AI hallucination most marketers deal with. The tools you probably use, like ChatGPT or Claude, likely won’t produce anything that bizarre. Their misses are sneakier, like outdated numbers or confident explanations that fall apart once you start looking under the hood.

In a fast-moving industry like digital marketing, it’s easy to miss those subtle errors. 

This made us curious: How often is AI actually getting it wrong? What types of questions trip it up? And how are marketers handling the fallout?

To find out, we tested 600 prompts across major large language model (LLM) platforms and surveyed 565 marketers to understand how often AI gets things wrong. You’ll see how these mistakes show up in real workflows and what you can do to catch hallucinations before they hurt your work.

Key Takeaways

  • Nearly half of marketers (47.1 percent) encounter AI inaccuracies several times a week, and over 70 percent spend hours fact-checking each week.
  • More than a third (36.5 percent) say hallucinated or incorrect AI content has gone live publicly, most often due to false facts, broken source links, or inappropriate language.
  • In our LLM test, ChatGPT had the highest accuracy (59.7 percent), but even the best models made errors, especially on multi-part reasoning, niche topics, or real-time questions.
  • The most common hallucination types were fabrication, omission, outdated info, and misclassification—often delivered with confident language.
  • Despite knowledge of hallucinations, 23 percent of marketers feel confident using AI outputs without review. Most teams add extra approval layers or assign dedicated fact-checkers to their processes.

What Do We Know About AI Hallucinations and Errors?

An AI hallucination happens when a model gives you an answer that sounds correct but isn’t. We’re talking about made-up facts or claims that don’t stand up to fact-checking or a quick Google search.

And they’re not rare.

In our research, over 43 percent of marketers say hallucinated or false information has slipped past review and gone public. These errors come in a few common forms:

  • Fabrication: The AI simply makes something up.
  • Omission: It skips critical context or details.
  • Outdated info: It shares data that’s no longer accurate.
  • Misclassification: It answers the wrong question, or only part of it.
A graphic showing common AI Hallucination Types

Hallucinations tend to happen when prompts are too vague or require multi-step reasoning. Sometimes the AI model tries to fill the gaps with whatever seems plausible.

AI hallucinations aren’t new, but our dependence on these tools is. As they become part of everyday workflows, the cost of a single incorrect answer increases.

Once you recognize the patterns behind these mistakes, you can catch them early and keep them out of your content.

AI Hallucination Examples

AI hallucinations can be ridiculous or dangerously subtle. These real AI hallucination examples give you a sense of the range:

  • Fabricated legal citations: Recent reporting shows a growing number of lawyers relying on AI-generated filings, only to learn that the cases or citations don’t exist. Courts are now flagging these hallucinations at an alarming rate.
  • Health misinformation: Revisiting our example from earlier, Google’s AI Overviews once claimed eating rocks had health benefits in an error that briefly went viral.
  • Fake academic references: Some LLMs will list fake studies or broken source links if asked for citations. A peer-reviewed Nature study found that ChatGPT frequently produced academic citations that look legitimate but reference papers that don’t exist.
  • Factual contradictions: Some tools have answered simple yes/no questions with completely contradictory statements in the same paragraph.
  • Outdated or misattributed data: Models can pull statistics from the wrong year or tie them to the wrong sources. And that creates problems once those numbers sneak into presentations or content.

Our Surveys/Methodology

To get a clear picture of how AI hallucinations show up in real-world marketing work, we pulled data from two original sources:

  1. Marketers survey: We surveyed 565 U.S.-based digital marketers using AI in their workflows. The questions covered how often they spot errors, what kinds of mistakes they see, and how their teams are adjusting to AI-assisted content. We also asked about public slip-ups, trust in AI, and whether they want clearer industry standards.
  2. LLM accuracy test: We built a set of 600 prompts across five categories: SEO/marketing, general business, industry-specific verticals, consumer queries, and control questions with a known correct answer. We then tested them across six major AI platforms: ChatGPT, Gemini, Claude, Perplexity, Grok, and Copilot. Humans graded each output, classifying them as fully correct, partially correct, or incorrect. For partially correct or incorrect outputs, we also logged the error type (omission, outdated info, fabrication, or misclassification).

For this report, we focused only on text-based hallucinations and content errors, not visual or video generation. The insights that follow combine both data sets to show how hallucinations happen and what marketers should watch for across tools and task types.
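To make the grading step concrete, here’s a minimal Python sketch of how per-model accuracy rates can be tallied from human-graded responses. The records and numbers below are made-up placeholders, not our actual data:

```python
from collections import Counter

# Hypothetical graded records: (model, grade), where grade is one of
# "fully correct", "partially correct", or "incorrect".
graded = [
    ("ChatGPT", "fully correct"),
    ("ChatGPT", "partially correct"),
    ("Grok", "incorrect"),
    ("Grok", "fully correct"),
]

def fully_correct_rate(records):
    """Return {model: share of responses graded fully correct}."""
    totals, correct = Counter(), Counter()
    for model, grade in records:
        totals[model] += 1
        if grade == "fully correct":
            correct[model] += 1
    return {model: correct[model] / totals[model] for model in totals}

print(fully_correct_rate(graded))  # {'ChatGPT': 0.5, 'Grok': 0.5}
```

The same tally, broken out by error type instead of model, produces the per-category error rates reported later in this article.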

How AI Hallucinations and Errors Impact Digital Marketers

A graphic that shows how often Marketers Encounter AI Errors.

We asked marketers how AI errors show up in their work, and the results were clear: Hallucinations are far from a rarity.

Nearly half of marketers (47.1 percent) encounter AI inaccuracies multiple times a week. And more than 70 percent say they spend one to five hours each week just fact-checking AI-generated output. That’s a lot of time spent fixing “helpful” content.

Those misses don’t always stay hidden. 

More than a third (36.5 percent) say hallucinated content has made it all the way to the public. Another 39.8 percent have had close calls where bad AI info almost went live. 

And it’s not just teams spotting the problems. More than half of marketers (57.7 percent) say clients or stakeholders have questioned the quality of AI-assisted outputs.

These aren’t minor formatting issues, either. When mistakes make it through, the most common offenders are:

  • Inappropriate or brand-unsafe content (53.9 percent)
  • Completely false or hallucinated information (43.5 percent)
  • Formatting glitches that break the user experience (42.5 percent)

So where does it break down?

AI errors are most common in tasks that require structure or precision. Here are the daily error rates by task:

  • HTML or schema creation: 46.2 percent
  • Full content writing: 42.7 percent
  • Reporting and analytics: 34.2 percent

Brainstorming and idea generation had far fewer issues, each landing at around 25 percent.

A graphic showing where marketers encounter AI errors most often.

When we looked at confidence levels, only 23 percent of marketers felt fully comfortable using AI output without review. The rest? They were either cautious or not confident at all.

Teams hit hardest by public-facing AI mistakes include:

  • Digital PR (33.3 percent)
  • Content marketing (20.8 percent)
  • Paid media (17.8 percent)

A graphic showing teams most affected by public AI mistakes.

These are the same departments most likely to face direct brand damage when AI gets it wrong.

AI can save you time, but it also creates a lot of cleanup without checks in place. And most marketers feel the pressure to catch hallucinations before clients or customers do.

AI Hallucinations and Errors: How Do the Top LLMs Stack Up?

To figure out how often leading AI platforms hallucinate, we tested 600 prompts across six major models: ChatGPT, Claude, Gemini, Perplexity, Grok, and Copilot.

Each model received the same set of queries across five categories: marketing/SEO, general business, industry-specific use cases, consumer questions, and fact-checkable control prompts. Human reviewers graded each response for accuracy and completeness.

Here’s how they performed:

  • ChatGPT delivered the highest percentage of fully correct answers at 59.7 percent, with the lowest rate of serious hallucinations. Most of its mistakes were subtle, like misinterpreting the question rather than fabricating facts.
  • Claude was the most consistent. While it scored slightly lower on fully correct responses (55.1 percent), it had the lowest overall error rate at just 6.2 percent. When it missed, it usually left something out rather than getting it wrong.
  • Gemini performed well on simple prompts (51.3 percent fully correct) but tended to skip over complex or multi-step answers. Its most common error was omission.
  • Perplexity showed strength in fast-moving fields like crypto and AI, thanks to its strong real-time retrieval features. But that speed came with risk: 12.2 percent of responses were incorrect, often due to misclassifications or minor fabrications.
  • Copilot sat in the middle of the pack. It gave safe, brief answers. While that’s good for overviews, it often misses the deeper context.
  • Grok struggled across the board. It had the highest error rate at 21.8 percent and the lowest percentage of fully correct answers (39.6 percent). Hallucinations, contradictions, and vague outputs were common.

A graphic showing how major LLMs performed in our 600-prompt accuracy test.
A graphic showing the most common error types across models.

So, what does this mean for marketers?

Well, most teams aren’t expecting perfection. According to our survey, 77.7 percent of marketers will accept some level of AI inaccuracy, likely because the speed and efficiency gains still outweigh the cleanup.

The takeaway isn’t that one model is flawless. It’s that every tool has its strengths and weaknesses. Knowing each platform’s tendencies helps you know when (and how) to pull a human into the loop and what to be on guard against.

What Question Types Gave LLMs the Most Trouble?

Some questions are harder for AI to handle than others. In our testing, three prompt types consistently tripped up all the models, regardless of how accurate they were overall:

  • Multi-part prompts: When asked to explain a concept and give an example, many tools did only half the job. They either defined the term or gave an example, but not both. This was a common source of partial answers and context gaps.
  • Recently updated or real-time topics: If the ask was about something that changed in the last few months (like a Google algorithm update or an AI model release), responses were often inaccurate or completely fabricated. Some tools made confident claims using outdated info that sounded fresh.
  • Niche or domain-specific questions: Verticals like crypto, legal, SaaS, or even SEO created problems for most LLMs. In these cases, tools either made up terminology or gave vague responses that missed key industry context.

Even models like Claude and ChatGPT, which scored relatively high for accuracy, showed cracks when asked to handle layered prompts that required nuance or specialized knowledge.

Knowing which types of prompts increase the risk of hallucination is the first step in writing better ones and catching issues before they cost you.

AI Hallucination Tells to Look Out For

AI hallucinations don’t always scream “wrong.” In fact, the most dangerous ones sound reasonable (at least until you check the details). Still, there are patterns worth watching for. Here are the red flags that showed up most often across the models we tested:

  • No source, or a broken one: If an AI gives you a link, check it. A lot of hallucinated answers include made-up or outdated citations that don’t exist when you click.
  • Answers to the wrong questions: Some models misinterpret the prompt and go off in a related (but incorrect) direction. If the response feels slightly off topic, dig deeper.
  • Big claims with no specifics: Watch for sweeping statements without specific stats or dates. That’s often a sign it’s filling in blanks with plausible-sounding fluff.
  • Stats with no attribution: Hallucinated numbers are a common issue. If the stat sounds surprising or overly convenient, verify it with a trusted source.
  • Contradictions inside the same answer: We experienced cases where an AI said one thing in the first paragraph and contradicted itself by the end. That’s a major warning sign.
  • “Real” examples that don’t exist: Some hallucinations involve fake product names, companies, case studies, or legal precedents. These details feel legit, but a quick search turns up nothing to verify them.

The more complex your prompt, the more important it is to sanity-check the output. If something feels even slightly off, assume it’s worth a second look. After all, subtle hallucinations are the ones most likely to slip through the cracks.

Best Practices for Avoiding AI Hallucinations and Errors

You can’t eliminate AI hallucinations completely, but you can make it a lot less likely they slip through. Here’s how to stay ahead of the risk:

  • Always request and verify sources: Some models will confidently provide links that look legit but don’t exist. Others reference real studies or stats, but take them out of context. Before you copy/paste, click through. This matters even more for AI SEO work, where accuracy and citation quality directly affect rankings and trust.
  • Fine-tune your prompts: Vague prompts are hallucination magnets, so be clear about what you want the model to reference or avoid. That might mean building prompt template libraries or using follow-up prompts to guide models more effectively. That’s exactly what LLM optimization (LLMO) focuses on.
  • Assign a dedicated fact-checker: Our survey results showed this to be one of the most effective internal safeguards. Human review might take more time, but it’s how you keep hallucinated claims from damaging trust or a brand’s credibility.
  • Set clear internal guidelines: Many teams now treat AI like a junior content assistant: It can draft, synthesize, and suggest, but humans own the final version. That means reviewing and fact-checking outputs and correcting anything that doesn’t hold up. This approach lines up with the data. Nearly half (48.3 percent) of marketers support industry-wide standards for responsible AI use.
  • Add a final review layer every time: Even fast-moving brands are building in one more layer of review for AI-assisted work. In fact, the most common adjustment marketers reported making was adding a new round of content review to catch AI errors. That said, 23 percent of respondents reported skipping human review if they trust the tool enough. That’s a risky move.
  • Don’t blindly trust brand-safe output: AI can sound polished even when it’s wrong. In our LLM testing, some of the most confidently written outputs were factually incorrect or missing key context. If it feels too clean, double-check it.
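As a starting point for the source-verification step above, a reviewer can at least enumerate every link an AI draft cites before clicking through. Here’s a minimal Python sketch; the regex is deliberately simple, and a fuller checker would also issue HTTP requests to confirm each URL actually resolves:

```python
import re

def extract_urls(text: str) -> list[str]:
    """Pull http(s) URLs out of a draft so each one can be checked by hand."""
    return re.findall(r"https?://[^\s)\]>\"']+", text)

# Made-up draft text for illustration
draft = "Per a 2023 study (https://example.com/study), usage doubled."
for url in extract_urls(draft):
    print("Verify:", url)  # Verify: https://example.com/study
```

Even a checklist this crude beats copy/pasting a draft that cites sources nobody on the team has opened.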

FAQs

What are AI hallucinations?

AI hallucinations occur when an AI tool gives you an answer that sounds accurate, but it’s not. These mistakes can include made-up facts, fake citations, or outdated info packaged in confident language.

Why does AI hallucinate?

AI models don’t “know” facts. They generate responses based on patterns in the data they were trained on. When there’s a gap or ambiguity, the model fills it in with what sounds most likely (even if it’s completely wrong).

What causes AI hallucinations?

Hallucinations usually happen when prompts are vague, complex, or involve topics the model hasn’t seen enough data on. They’re also more common in fast-changing fields like SEO and crypto.

Can you stop AI from hallucinating?

Not entirely. Even the best models make things up sometimes. That’s because LLMs are built to generate language, not verify facts. Occasional hallucinations are baked into how they work.

How can you reduce AI hallucinations?

Use more specific prompts, request citation sources, and always double-check the output for accuracy. Add a human review step before anything goes live. The more structure and context you give the AI, the fewer hallucinations you’ll run into.

Conclusion

AI is powerful, but it’s not perfect. 

Our research shows that hallucinations happen regularly, even with the best tools. From made-up stats to misinterpreted prompts, the risks are real. That’s especially the case for fast-moving marketers.

If you’re using AI to create content or guide strategy, knowing where these tools fall short is like a cheat code. 

The best defense? Smarter prompts, tighter reviews, and clear internal guidelines that treat AI as a co-pilot (not the driver).

Want help building a more reliable AI workflow? Talk to our team at NP Digital if you’re ready to scale content without compromising accuracy. Also, you can check out the full report here on the NP Digital website.


Inspiring examples of responsible and realistic vibe coding for SEO

Vibe coding is a new way to create software using AI tools such as ChatGPT, Cursor, Replit, and Gemini. You describe what you want to the tool in plain language and receive written code in return. You can then paste the code into an environment (such as Google Colab), run it, and test the results, all without ever writing a single line of code yourself.

Collins Dictionary named “vibe coding” word of the year in 2025, defining it as “the use of artificial intelligence prompted by natural language to write computer code.”

In this guide, you’ll understand how to start vibe coding, learn its limitations and risks, and see examples of great tools created by SEOs to inspire you to vibe code your own projects.

Vibe coding variations

While “vibe coding” is used as an umbrella term, there are several subsets of AI-supported coding, including the following:

  • AI-assisted coding: AI helps write, refactor, explain, or debug code. Used by actual developers or engineers to support their complex work. Tools: GitHub Copilot, Cursor, Claude, Google AI Studio
  • Vibe coding: Platforms that handle everything except the prompt/idea. AI does most of the work. Tools: ChatGPT, Replit, Gemini, Google AI Studio
  • No-code platforms: Platforms that handle everything you ask (“drag and drop” visual updates while the code happens in the background). They tend to use AI but existed long before AI became mainstream. Tools: Notion, Zapier, Wix

We’ll focus exclusively on vibe coding in this guide. 

With vibe coding, while there’s a bit of manual work to be done, the barrier is still low — you basically need a ChatGPT account (free or paid) and access to a Google account (free). Depending on your use case, you might also need access to APIs or SEO tool subscriptions such as Semrush or Screaming Frog.


To set expectations: by the end of this guide, you’ll know how to run a small program in the cloud. If you expect to build a SaaS product or software to sell, AI-assisted coding is a more reasonable option, though it involves costs and deeper coding knowledge.

Vibe coding use cases

Vibe coding is great when you’re trying to find outcomes for specific buckets of data, such as finding related links, adding pre-selected tags to articles, or doing something fun where the outcome doesn’t need to be exact.

For example, I’ve built an app to create a daily drawing for my daughter. I type a phrase about something that she told me about her day (e.g., “I had carrot cake at daycare”). The app has some examples of drawing styles I like and some pictures of her. The outputs (drawings) are the final work as they come from AI.

When I ask for specific changes, however, the output tends to get worse, redrawing things I didn’t ask for. I once asked it to remove a mustache and it recolored the image instead.

If my daughter were a client who’d scrutinize the output and require very specific changes, I’d need someone who knows Photoshop or similar tools to make specific improvements. In this case, though, the results are good enough. 

Building commercial applications solely on vibe coding may require a company to hire vibe coding cleaners. However, for a demo, MVP (minimum viable product), or internal applications, vibe coding can be a useful, effective shortcut. 

How to create your SEO tools with vibe coding

Using vibe coding to create your own SEO tools requires three steps:

  1. Write a prompt describing your code
  2. Paste the code into a tool such as Google Colab
  3. Run the code and analyze the results

Here’s a prompt example for a tool I built to map related links at scale. After crawling a website using Screaming Frog and extracting vector embeddings (using the crawler’s integration with OpenAI), I vibe coded a tool that would compare the topical distance between the vectors in each URL.

This is exactly what I wrote on ChatGPT:

I need a Google Colab code that will use OpenAI to:

Check the vector embeddings existing in column C. Use cosine similarity to match with two suggestions from each locale (locale identified in Column A). 

The goal is to find which pages from each locale are the most similar to each other, so we can add hreflang between these pages.

I’ll upload a CSV with these columns and expect a CSV in return with the answers.

Then I pasted the code ChatGPT created into Google Colab, a free Jupyter Notebook environment that lets you write and execute Python code in a web browser. It’s important to run your program by clicking “Run all” in Google Colab to test whether the output does what you expected.
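For a sense of what the generated code is doing under the hood, the core operation is cosine similarity between embedding vectors. Here’s a minimal NumPy sketch, with toy three-dimensional vectors standing in for the real OpenAI embeddings exported per URL:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings standing in for the vectors extracted during the crawl
page_a = np.array([0.2, 0.8, 0.1])
page_b = np.array([0.25, 0.75, 0.05])  # topically close to page_a
page_c = np.array([0.9, 0.1, 0.4])     # topically distant

print(cosine_similarity(page_a, page_b) > cosine_similarity(page_a, page_c))  # True
```

The real tool simply runs this comparison across every pair of URLs in the CSV and keeps the top matches per locale.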

This is how the process works on paper. Like everything in AI, it may look perfect but not always function exactly how you want.

You’ll likely encounter issues along the way — luckily, they’re simple to troubleshoot.

First, be explicit about the platform you’re using in your prompt. If it’s Google Colab, say the code is for Google Colab. 

You might still end up with code that requires packages that aren’t installed. In this case, just paste the error into ChatGPT and it’ll likely regenerate the code or find an alternative. You don’t even need to know what the package is; just show the error and use the new code. Alternatively, you can ask Gemini directly in Google Colab to fix the issue and update your code.

AI tends to be very confident about anything and could return completely made-up outputs. One time I forgot to say the source data would come from a CSV file, so it simply created fake URLs, traffic, and graphs. Always check and recheck the output because “it looks good” can sometimes be wrong.

If you’re connecting to an API, especially a paid API (e.g., from Semrush, OpenAI, Google Cloud, or other tools), you’ll need to request your own API key and keep in mind usage costs. 

Should you want an even lower execution barrier than Google Colab, you can try using Replit. 

Simply prompt your request, and the software will create the code and design and let you test, all on the same screen. This means a lower chance of coding errors, no copy-pasting, and a URL you can share right away so anyone can see your project, built with a nice design. (You should still check for poor outputs and iterate with prompts until your final app is built.)

Keep in mind that while Google Colab is free (you’ll only spend if you use API keys), Replit charges a monthly subscription and per-usage fee on APIs. So the more you use an app, the more expensive it gets.

Inspiring examples of SEO vibe-coded tools

While Google Colab is the most basic (and easy) way to vibe code a small program, some SEOs are taking vibe coding even further by creating programs that are turned into Chrome extensions, Google Sheets automation, and even browser games.

The goal behind highlighting these tools is not only to showcase great work by the community, but also to inspire, build, and adapt to your specific needs. Do you wish any of these tools had different features? Perhaps you can build them for yourself — or for the world.

GBP Reviews Sentiment Analyzer (Celeste Gonzalez)

After vibe coding some SEO tools on Google Colab, Celeste Gonzalez, Director of SEO Testing at RicketyRoo Inc, took her vibing skills a step further and created a Chrome extension. “I realized that I don’t need to build something big, just something useful,” she explained.

Her browser extension, the GBP Reviews Sentiment Analyzer, summarizes sentiment analysis for reviews over the last 30 days and review velocity. It also allows the information to be exported into a CSV. The extension works on Google Maps and Google Business Profile pages.

Instead of ChatGPT, Celeste used a combination of Claude (to create high-quality prompts) and Cursor (to paste the created prompts and generate the code).

AI tools used: Claude (Sonnet 4.5 model) and Cursor

APIs used: Google Business Profile API (free)

Platform hosting: Chrome Extension

Knowledge Panel Tracker (Gus Pelogia)

I became obsessed with the Knowledge Graph in 2022, when I learned how to create and manage my own knowledge panel. Since then, I found out that Google has a Knowledge Graph Search API that allows you to check the confidence score for any entity.

This vibe-coded tool checks the score for your entities daily (or at any frequency you want) and returns it in a sheet. You can track multiple entities at once and just add new ones to the list at any time.

The Knowledge Panel Tracker runs completely on Google Sheets, and the Knowledge Graph Search API is free to use. This guide shows how to create and run it in your own Google account, or you can see the spreadsheet here and just update the API key under Extensions > Apps Script.
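The tracker itself runs as a Google Sheets script, but the underlying API call is easy to sketch in Python. This builds the request URL for Google’s Knowledge Graph Search API and pulls the confidence score out of the response; the sample response below is made up for illustration (shaped like the API’s JSON), and you’d substitute your own API key:

```python
from urllib.parse import urlencode

KG_ENDPOINT = "https://kgsearch.googleapis.com/v1/entities:search"

def build_request_url(entity: str, api_key: str, limit: int = 1) -> str:
    """Build the Knowledge Graph Search API URL for one entity."""
    return f"{KG_ENDPOINT}?{urlencode({'query': entity, 'key': api_key, 'limit': limit})}"

def top_score(response: dict) -> float:
    """Extract the confidence score of the first matched entity (0.0 if none)."""
    items = response.get("itemListElement", [])
    return items[0]["resultScore"] if items else 0.0

# Made-up response shaped like the API's JSON, for illustration only
sample = {"itemListElement": [{"result": {"name": "Example Entity"}, "resultScore": 512.3}]}
print(top_score(sample))  # 512.3
```

Logging that score to a sheet on a schedule is all the tracker really does, which is why it fits comfortably inside a free Google account.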

AI models used: ChatGPT 5.1

APIs used: Google Knowledge Graph API (free)

Platform hosting: Google Sheets

Inbox Hero Game (Vince Nero)

How about vibe coding a link building asset? That’s what Vince Nero from BuzzStream did when creating the Inbox Hero Game. It requires you to use your keyboard to accept or reject a pitch within seconds. The game is over if you accept too many bad pitches.

Inbox Hero Game is certainly more complex than running a piece of code on Google Colab, and it took Vince about 20 hours to build it all from scratch. “I learned you have to build things in pieces. Design the guy first, then the backgrounds, then one aspect of the game mechanics, etc.,” he said.

The game was coded in HTML, CSS, and JavaScript. “I uploaded the files to GitHub to make it work. ChatGPT walked me through everything,” Vince explained.

According to him, the longer the prompt continued, the less effective ChatGPT became, “to the point where [he’d] have to restart in a new chat.” 

This issue was one of the hardest and most frustrating parts of creating the game. Vince would add a new feature (e.g., a score), and ChatGPT would “guarantee” it had found the error and update the file, only to return the same error again.


In the end, Inbox Hero Game is a fun game that shows it’s possible to create a simple game without coding knowledge, though perfecting it would be more feasible with a developer’s help.

AI models used: ChatGPT

APIs used: None

Platform hosting: Webpage

Vibe coding with intent

Vibe coding won’t replace developers, and it shouldn’t. But as these examples show, it can responsibly unlock new ways for SEOs to prototype ideas, automate repetitive tasks, and explore creative experiments without heavy technical lift. 

The key is realism: Use vibe coding where precision isn’t mission-critical, validate outputs carefully, and understand when a project has outgrown “good enough” and needs additional resources and human intervention.

When approached thoughtfully, vibe coding becomes less about shipping perfect software and more about expanding what’s possible — faster testing, sharper insights, and more room for experimentation. Whether you’re building an internal tool, a proof of concept, or a fun SEO side project, the best results come from pairing curiosity with restraint.
