7 real-world AI failures that show why adoption keeps going wrong

AI has quickly risen to the top of the corporate agenda. Yet 95% of businesses struggle with adoption, according to MIT research.

Those failures are no longer hypothetical. They are already playing out in real time, across industries, and often in public. 

For companies exploring AI adoption, these examples highlight what not to do and why AI initiatives fail when systems are deployed without sufficient oversight.

1. Chatbot participates in insider trading, then lies about it

In an experiment driven by the UK government’s Frontier AI Taskforce, ChatGPT placed illegal trades and then lied about it.

Researchers prompted the AI bot to act as a trader for a fake financial investment company. 

They told the bot that the company was struggling, and they needed results. 

They also fed the bot insider information about an upcoming merger, and the bot affirmed that it should not use this in its trades. 

The bot still made the trade anyway, citing that “the risk associated with not acting seems to outweigh the insider trading risk,” then denied using the insider information.  

Marius Hobbhahn, CEO of Apollo Research (the company that conducted the experiment), said that helpfulness “is much easier to train into the model than honesty,” because “honesty is a really complicated concept.”

He says that current models are not powerful enough to be deceptive in a “meaningful way” (a claim some researchers would dispute).

However, he warns that it’s “not that big of a step from the current models to the ones that I am worried about, where suddenly a model being deceptive would mean something.”

AI has been operating in the financial sector for some time, and this experiment highlights not only legal risk but also the potential for risky autonomous action by AI.

Dig deeper: AI-generated content: The dangers of overreliance

2. Chevy dealership chatbot sells SUV for $1 in ‘legally binding’ offer

An AI-powered chatbot for a local Chevrolet dealership in California sold a vehicle for $1 and said it was a legally binding agreement. 

In an experiment that went viral across forums on the web, several people toyed with the local dealership’s chatbot to respond to a variety of non-car-related prompts.  

One user convinced the chatbot to sell him a vehicle for just $1, and the chatbot confirmed it was a “legally binding offer – no takesies backsies.”

Fullpath, the company that provides AI chatbots to car dealerships, took the system offline once it became aware of the issue.

The company’s CEO told Business Insider that despite viral screenshots, the chatbot resisted many attempts to provoke misbehavior.

Still, while the car dealership didn’t face any legal liability from the mishap, some argue that the chatbot agreement in this case may be legally enforceable. 

3. Supermarket’s AI meal planner suggests poison recipes and toxic cocktails

A New Zealand supermarket chain’s AI meal planner suggested unsafe recipes after certain users prompted the app to use non-edible ingredients. 

Recipes like bleach-infused rice surprise, poison bread sandwiches, and even a chlorine gas mocktail were created before the supermarket caught on.

A spokesperson for the supermarket said they were disappointed to see that “a small minority have tried to use the tool inappropriately and not for its intended purpose,” according to The Guardian.

The supermarket said it would continue to fine-tune the technology for safety and added a warning for users. 

That warning stated that recipes are not reviewed by humans and do not guarantee that “any recipe will be a complete or balanced meal, or suitable for consumption.”

Critics of AI technology argue that chatbots like ChatGPT are nothing more than improvisational partners, building on whatever you throw at them. 

Because of the way these chatbots are wired, they could pose a real safety risk for certain companies that adopt them.  

4. Air Canada held liable after chatbot gives false policy advice

An Air Canada customer was awarded damages in court after the airline’s AI chatbot assistant made false claims about its policies.

The customer inquired about the airline’s bereavement rates via its AI assistant after the death of a family member. 

The chatbot responded that the airline offered discounted bereavement rates both for upcoming travel and for travel that had already occurred, and linked to the company’s policy page.

Unfortunately, the actual policy was the opposite, and the airline did not offer reduced rates for bereavement travel that had already happened. 

In court, the airline argued that the chatbot had linked to the policy page containing the correct information.

However, the tribunal (a small claims-type court in Canada) did not side with the defendant. As reported by Forbes, the tribunal called the scenario “negligent misrepresentation.”

Christopher C. Rivers, Civil Resolution Tribunal Member, said this in the decision:

  • “Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives – including a chatbot. It does not explain why it believes that is the case. In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.”

This is just one of many examples where people have been dissatisfied with chatbots due to their technical limitations and propensity for misinformation – a trend that is sparking more and more litigation. 

Dig deeper: 5 SEO content pitfalls that could be hurting your traffic

5. Australia’s largest bank replaces call center with AI, then apologizes and rehires staff

The largest bank in Australia replaced its call center team with AI voicebots with the promise of boosted efficiency, but admitted it made a big mistake. 

The Commonwealth Bank of Australia (CBA) believed the AI voicebots could reduce call volume by 2,000 calls per week. But it didn’t.

Instead, left without its 45-person call center, the bank scrambled to offer overtime to remaining workers and pulled managers in to answer calls as well.

Meanwhile, the Finance Sector Union, which represented the displaced workers, escalated the dispute.

It was only one month after CBA replaced workers that it issued an apology and offered to hire them back.

CBA said in a statement that it did not “adequately consider all relevant business considerations and this error meant the roles were not redundant.”

Other U.S. companies have faced PR nightmares as well when attempting to replace human roles with AI.

Perhaps that’s why certain brands have deliberately gone in the opposite direction, making sure people remain central to every AI deployment.

Nevertheless, the CBA debacle shows that replacing people with AI without fully weighing the risks can backfire quickly and publicly.

6. New York City’s chatbot advises employers to break labor and housing laws

New York City launched an AI chatbot to provide information on starting and running a business, and it advised people to carry out illegal activities.

Just months after its launch, people started noticing the inaccuracies provided by the Microsoft-powered chatbot.

The chatbot offered unlawful guidance across the board, from telling bosses they could pocket employees’ tips and skip notifying staff about schedule changes, to suggesting that landlords could discriminate against tenants and that stores could refuse cash.

“NYC’s AI Chatbot Tells Businesses to Break the Law,” The Markup

This is despite the city’s initial announcement promising that the chatbot would provide trusted information on topics such as “compliance with codes and regulations, available business incentives, and best practices to avoid violations and fines.” 

Still, then-mayor Eric Adams defended the technology, saying: 

  • “Anyone that knows technology knows this is how it’s done,” and that “only those who are fearful sit down and say, ‘Oh, it is not working the way we want, now we have to run away from it all together.’ I don’t live that way.” 

Critics called his approach reckless and irresponsible. 

This is yet another cautionary tale in AI misinformation and how organizations can better handle the integration and transparency around AI technology. 

Dig deeper: SEO shortcuts gone wrong: How one site tanked – and what you can learn

7. Chicago Sun-Times publishes fake book list generated by AI

The Chicago Sun-Times ran a syndicated “summer reading” feature that included false, made-up details about books after the writer relied on AI without fact-checking the output. 

King Features Syndicate, a unit of Hearst, created the special section for the Chicago Sun-Times.  

Not only were the book summaries inaccurate, but some of the books were entirely fabricated by AI. 

“Syndicated content in Sun-Times special section included AI-generated misinformation,” Chicago Sun-Times

The author, hired by King Features Syndicate to create the book list, admitted to using AI to put the list together, as well as for other stories, without fact-checking. 

And the publisher was left trying to determine the extent of the damage. 

The Chicago Sun-Times said print subscribers would not be charged for the edition, and it put out a statement reiterating that the content was produced outside the newspaper’s newsroom. 

Meanwhile, the Sun-Times said it is reviewing its relationship with King Features, which fired the writer.

Oversight matters

The examples outlined here show what happens when AI systems are deployed without sufficient oversight. 

When left unchecked, the risks can quickly outweigh the rewards, especially as AI-generated content and automated responses are published at scale.

Organizations that rush into AI adoption without fully understanding those risks often stumble in predictable ways. 

In practice, AI succeeds only when tools, processes, and content outputs keep humans firmly in the driver’s seat.


Why LLM-only pages aren’t the answer to AI search

With new updates in the search world stacking up in 2026, content teams are trying a new strategy to rank: LLM pages.

They’re building pages that no human will ever see: markdown files, stripped-down JSON feeds, and entire /ai/ versions of their articles.

The logic seems sound: if you make content easier for AI to parse, you’ll get more citations in ChatGPT, Perplexity, and Google’s AI Overviews.

Strip out the ads. Remove the navigation. Serve bots pure, clean text.

Industry experts such as Malte Landwehr have documented sites creating .md copies of every article or adding llms.txt files to guide AI crawlers.

Teams are even building entire shadow versions of their content libraries.

Google’s John Mueller isn’t buying it.

  • “LLMs have trained on – read and parsed – normal web pages since the beginning,” he said in a recent discussion on Bluesky. “Why would they want to see a page that no user sees?”
JohnMu and Lily Ray on Bluesky

His comparison was blunt: LLM-only pages are like the old keywords meta tag. Available for anyone to use, but ignored by the systems they’re meant to influence.

So is this trend actually working, or is it just the latest SEO myth?

The rise of ‘LLM-only’ web pages

The trend is real. Sites across tech, SaaS, and documentation are implementing LLM-specific content formats.

The question isn’t whether adoption is happening; it’s whether these implementations are driving the AI citations teams hoped for.

Here’s what content and SEO teams are actually building.

llms.txt files

A markdown file at your domain root listing key pages for AI systems.

The format was proposed in 2024 by AI researcher Jeremy Howard to help AI systems discover and prioritize important content.

Plain text lives at yourdomain.com/llms.txt with an H1 project name, brief description, and organized sections linking to important pages.

Stripe’s implementation at docs.stripe.com/llms.txt shows the approach in action:

# Stripe Documentation

> Build payment integrations with Stripe APIs

## Testing

- [Test mode](https://docs.stripe.com/testing): Simulate payments

## API Reference

- [API docs](https://docs.stripe.com/api): Complete API reference

The payment processor’s bet is simple: if ChatGPT can parse their documentation cleanly, developers will get better answers when they ask, “how do I implement Stripe.”

They’re not alone. Current adopters include Cloudflare, Anthropic, Zapier, Perplexity, Coinbase, Supabase, and Vercel.

Markdown (.md) page copies

Sites are creating stripped-down markdown versions of their regular pages.

The implementation is straightforward: just add .md to any URL. Stripe’s docs.stripe.com/testing becomes docs.stripe.com/testing.md.

Everything gets stripped out except the actual content. No styling. No menus. No footers. No interactive elements. Just pure text and basic formatting.

The thinking: if AI systems don’t have to wade through CSS and JavaScript to find the information they need, they’re more likely to cite your page accurately.
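The stripping described above can be sketched with nothing but Python’s standard library. This is purely illustrative, not any vendor’s actual converter; the list of tags to skip is an assumption.

```python
# Illustrative sketch: reduce an HTML page to the plain text an AI crawler
# would presumably want. The SKIP tag list is an assumption, not a standard.
from html.parser import HTMLParser


class ContentExtractor(HTMLParser):
    SKIP = {"script", "style", "nav", "footer", "aside"}  # page chrome to drop

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0  # >0 while inside a skipped element

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if self._skip_depth == 0 and data.strip():
            self.parts.append(data.strip())


def to_plain_text(html: str) -> str:
    parser = ContentExtractor()
    parser.feed(html)
    return "\n".join(parser.parts)


html = "<html><nav>Menu</nav><h1>Testing</h1><p>Simulate payments.</p><footer>Legal</footer></html>"
print(to_plain_text(html))  # -> "Testing\nSimulate payments."
```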

/ai and similar paths

Some sites are building entirely separate versions of their content under /ai/, /llm/, or similar directories.

You might find /ai/about living alongside the regular /about page, or /llm/products as a bot-friendly alternative to the main product catalog. 

Sometimes these pages have more detail than the originals. Sometimes they’re just reformatted.

The idea: give AI systems their own dedicated content that’s built for machine consumption, not human eyes. 

If a person accidentally lands on one of these pages, they’ll find something that looks like a website from 2005.

JSON metadata files

Dell took this approach with their product specs.

Instead of creating separate pages, they built structured data feeds that live alongside their regular ecommerce site.

The files contain clean JSON – specs, pricing, and availability.

Everything an AI needs to answer “what’s the best Dell laptop under $1000” without having to parse through product descriptions written for humans.

You’ll typically find these files as /llm-metadata.json or /ai-feed.json in the site’s directory.

# Dell Technologies

> Dell Technologies is a leading technology provider, specializing in PCs, servers, and IT solutions for businesses and consumers.

## Product and Catalog Data

- [Product Feed - US Store](https://www.dell.com/data/us/catalog/products.json): Key product attributes and availability.

- [Dell Return Policy](https://www.dell.com/return-policy.md): Standard return and warranty information.

## Support and Documentation

- [Knowledge Base](https://www.dell.com/support/knowledge-base.md): Troubleshooting guides and FAQs.

This approach makes the most sense for ecommerce and SaaS companies that already keep their product data in databases. 

They’re just exposing what they already have in a format AI systems can easily digest.
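The markdown index above points to JSON feeds. A minimal feed of the kind described might look like the following; the field names and values are hypothetical, not Dell’s actual schema:

```json
{
  "products": [
    {
      "name": "Example Laptop 14",
      "price_usd": 949,
      "availability": "in_stock",
      "specs": { "cpu": "8-core", "ram_gb": 16, "storage_gb": 512 }
    }
  ]
}
```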

Dig deeper: LLM optimization in 2026: Tracking, visibility, and what’s next for AI discovery

Real-world citation data: What actually gets referenced

The theory sounds good. The adoption numbers look impressive. 

But do these LLM-optimized pages actually get cited?

The individual analysis

Landwehr, CPO and CMO at Peec AI, ran targeted tests on five websites using these tactics. He crafted prompts specifically designed to surface their LLM-friendly content.

Some queries even contained explicit 20+ word quotes designed to trigger specific sources.

Landwehr - LLM experiment 1

Across nearly 18,000 citations, here’s what he found.

llms.txt: 0.03% of citations

Out of 18,000 citations, only six pointed to llms.txt files. 

The six that did work had something in common: they contained genuinely useful information about how to use an API and where to find additional documentation. 

The kind of content that actually helps AI systems answer technical questions. The “search-optimized” llms.txt files, the ones stuffed with content and keywords, received zero citations.

Markdown (.md) pages: 0% of citations

Sites using .md copies of their content got cited 3,500+ times. None of those citations pointed to the markdown versions. 

The one exception: GitHub, where .md files are the standard URLs. 

They’re linked internally, and there’s no HTML alternative. But these are just regular pages that happen to be in markdown format.

/ai pages: 0.5% to 16% of citations

Results varied wildly depending on implementation. 

One site saw 0.5% of its citations point to its /ai pages. Another hit 16%. 

The difference? 

The higher-performing site put significantly more information in its /ai pages than existed anywhere else on its site. 

Keep in mind, these prompts were specifically asking for information contained in these files. 

Even with prompts designed to surface this content, most queries ignored the /ai versions.

JSON metadata: 5% of citations

One brand saw 85 out of 1,800 citations (5%) come from their metadata JSON file. 

The critical detail here is that the file contained information that didn’t exist anywhere else on the website. 

Once again, the query specifically asked for those pieces of information.

Landwehr - LLM experiment 1

The large-scale analysis

SE Ranking took a different approach.

Instead of testing individual sites, they analyzed 300,000 domains to see if llms.txt adoption correlated with citation frequency at scale.

Only 10.13% of domains, or 1 in 10, had implemented llms.txt. 

For context, that’s nowhere near the universal adoption of standards like robots.txt or XML sitemaps.

During the study, an interesting relationship between adoption rates and traffic levels emerged.

Sites with 0-100 monthly visits adopted llms.txt at 9.88%. 

Sites with 100,001+ visits? Just 8.27%. 

The biggest, most established sites were actually slightly less likely to use the file than mid-tier ones.

But the real test was whether llms.txt impacted citations. 

SE Ranking built a machine learning model using XGBoost to predict citation frequency based on various factors, including the presence of llms.txt.

The result: removing llms.txt from the model actually improved its accuracy. 

The file wasn’t helping predict citation behavior; it was adding noise.
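The ablation logic behind that finding is easy to sketch: train a model to predict citations with and without the llms.txt feature, and compare error. The data below is synthetic, and scikit-learn’s gradient boosting stands in for the XGBoost model the study used.

```python
# Sketch of a feature-ablation test: does a pure-noise "has llms.txt" flag
# help predict citations? All data here is synthetic; scikit-learn's
# GradientBoostingRegressor stands in for XGBoost.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
traffic = rng.lognormal(8, 2, n)       # proxy for monthly visits
backlinks = rng.lognormal(4, 1.5, n)   # proxy for authority
has_llms = rng.integers(0, 2, n)       # pure noise: no real effect on target
citations = 0.01 * traffic + 0.5 * backlinks + rng.normal(0, 5, n)


def mae_for(features):
    """Train on the given feature columns; return held-out mean absolute error."""
    X = np.column_stack(features)
    X_tr, X_te, y_tr, y_te = train_test_split(X, citations, random_state=0)
    model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
    return mean_absolute_error(y_te, model.predict(X_te))


mae_with = mae_for([traffic, backlinks, has_llms])
mae_without = mae_for([traffic, backlinks])
print(f"MAE with llms.txt feature:    {mae_with:.2f}")
print(f"MAE without llms.txt feature: {mae_without:.2f}")
```

If removing the feature leaves error unchanged (or lowers it), the feature was contributing noise rather than signal, which is the pattern SE Ranking reported.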

The pattern

Both analyses point to the same conclusion: LLM-optimized pages get cited when they contain unique, useful information that doesn’t exist elsewhere on your site.

The format doesn’t matter. 

Landwehr’s conclusion was blunt: “You could create a 12345.txt file and it would be cited if it contains useful and unique information.”

A well-structured about page achieves the same result as an /ai/about page. API documentation gets cited whether it’s in llms.txt or buried in your regular docs.

The files themselves get no special treatment from AI systems. 

The content inside them might, but only if it’s actually better than what already exists on your regular pages.

SE Ranking’s data backs this up at scale. There’s no correlation between having llms.txt and getting more citations. 

The presence of the file made no measurable difference in how AI systems referenced domains.

Dig deeper: 7 hard truths about measuring AI visibility and GEO performance

What Google and AI platforms actually say

No major AI company has confirmed using llms.txt files in their crawling or citation processes.

Google’s Mueller made the sharpest critique in April 2025, comparing llms.txt to the obsolete keywords meta tag: 

  • “[As far as I know], none of the AI services have said they’re using LLMs.TXT (and you can tell when you look at your server logs that they don’t even check for it).”

Google’s Gary Illyes reinforced this at the July 2025 Search Central Deep Dive in Bangkok, explicitly stating Google “doesn’t support LLMs.txt and isn’t planning to.”

Google Search Central’s documentation is equally clear: 

  • “The best practices for SEO remain relevant for AI features in Google Search. There are no additional requirements to appear in AI Overviews or AI Mode, nor other special optimizations necessary.”

OpenAI, Anthropic, and Perplexity all maintain their own llms.txt files for their API documentation to make it easy for developers to load into AI assistants. 

But none have announced their crawlers actually read these files from other websites.

The consistent message from every major platform: standard web publishing practices drive visibility in AI search. 

No special files, no new markup, and no separate versions needed.

What this means for SEO teams

The evidence points to a single conclusion: stop building content that only machines will see.

Mueller’s question cuts to the core issue: 

  • “Why would they want to see a page that no user sees?” 

If AI companies needed special formats to generate better responses, they would tell you. As he noted:

  • “AI companies aren’t really known for being shy.” 

The data proves him right. 

Across Landwehr’s nearly 18,000 citations, LLM-optimized formats showed no advantage unless they contained unique information that didn’t exist anywhere else on the site. 

SE Ranking’s analysis of 300,000 domains found that llms.txt actually added confusion to their citation prediction model rather than improving it.

Instead of creating shadow versions of your content, focus on what actually works.

Build clean HTML that both humans and AI can parse easily. 

Reduce JavaScript dependencies for critical content, which Mueller identified as the real technical barrier: 

  • “Excluding JS, which still seems hard for many of these systems.” 

Heavy client-side rendering creates actual problems for AI parsing.

Use structured data when platforms have published official specifications, such as OpenAI’s ecommerce product feeds.

Improve your information architecture so key content is discoverable and well-organized.

The best page for AI citation is the same page that works for users: well-structured, clearly written, and technically sound. 

Until AI companies publish formal requirements stating otherwise, that’s where your optimization energy belongs.

Dig deeper: GEO myths: This article may contain lies


SEO in 2026: What will stay the same

Around the turn of the year, search industry media fills up with reviews and predictions. Bold, disruptive ideas steal the spotlight and trigger a sense of FOMO (fear of missing out).

However, sustainable online sales growth doesn’t come from chasing the next big trend. In SEO, what truly matters stays the same.

FOMO is bad for you 

We regularly get excited about the next big thing. Each new idea is framed as a disruptive force that will level the playing field.

Real shifts do happen, but they are rare. More often, the promised upheaval fades into a storm in a teacup.

Over the years, search has introduced many innovations that now barely raise an eyebrow. Just a few examples:

  • Voice search.
  • Universal Search.
  • Google Instant.
  • The Knowledge Graph.
  • HTTPS as a ranking signal.
  • RankBrain.
  • Mobile-first indexing.
  • AMP.
  • Featured snippets and zero-click searches.
  • E-A-T and E-E-A-T.
  • Core Web Vitals.
  • Passage indexing.
  • AI Overviews.

Some claimed these developments would revolutionize SEO or wipe it out entirely. That never happened.

The latest addition to the SEO hype cycle, LLMs and AI, fits neatly into this list. After the initial upheaval, the excitement has already started to fade.

The benefits of LLMs are clear in some areas, especially coding and software development. AI tools boost efficiency and significantly shorten production cycles.

In organic search, however, their impact remains limited, despite warnings from attention-seeking doomsayers. No AI-driven challenger has captured meaningful search market share.

Beyond ethical concerns about carbon footprint and extreme energy use, accuracy remains the biggest hurdle. Because they rely on unverified inputs, LLM-generated answers often leave users more confused than informed.

AI-driven platforms still depend on crawling the web and using core SEO signals to train models and answer queries. Like any bot, they need servers and content to be accessible and crawlable.

Without strong quality controls, low-quality inputs produce inconsistent and unreliable outputs. This is just one reason why Google’s organic search market share remains close to 90%.

It also explains why Google is likely to remain the dominant force in ecommerce search for the foreseeable future. For now, a critical mass of users will continue to rely on Google as their search engine of choice.

It’s all about data 

Fundamentally, it makes little difference whether a business focuses on Google, LLM-based alternatives, or both. All search systems depend on crawled data, and that won’t change.

Fast, reliable, and trustworthy indexing signals sit at the core of every ranking system. Instead of chasing hype, brands and businesses are better served by focusing on two core areas: their customers’ needs and the crawlability of their web platforms.

Customer needs always come first.

Most users do not care whether a provider uses the latest innovation. They care about whether expectations are met and promises are kept. That will not change.

Meeting user expectations will remain a core objective of SEO.

Crawlability is just as critical. A platform that cannot be properly crawled or indexed has no chance in competitive sectors such as retail, travel, marketplaces, news, or affiliate marketing.

Making sure bots can crawl a site, and algorithms can clearly understand the unique value of its content, will remain a key success factor in both SEO and GEO for the foreseeable future.

Won’t change: Uncrawled content won’t rank

Other factors are unlikely to change as well, including brand recognition, user trust, ease of use, and fast site performance.

These factors have always mattered and will continue to do so. They only support SEO and GEO if a platform can be properly crawled and understood. That is why regular reviews of technical signals are a critical part of a successful online operation.

Won’t change: Server errors prevent indexing by any bot

At the start of a new year, you should resist the fear of missing out on the latest novelty. Following the herd rarely helps anyone stand out.

A better approach is to focus on what is certain to remain consistent in 2026 and beyond.

What to do next

Publishers can breathe a sigh of relief. There is no need to rush into a new tool just because everyone else is. Adopt it if it makes sense, but no tool alone will make a business thrive.

Focus on what you do best and make it even better. Your customers will notice and appreciate it.

At the same time, make sure your web platform is fast and reliable, that your most relevant content is regularly re-crawled, and that bots clearly understand its purpose. These are the SEO and GEO factors that will endure.

Holistic SEO is both an art and a science. While it is far more complex in 2026, it is the unchanging foundational signals that matter most.


Yext’s Visibility Brief: Your guide to brand visibility in AI search by Yext

Search visibility isn’t what it used to be. Rankings still matter, but they’re no longer the whole story. 

Today, discovery happens across traditional search results, local listings, brand knowledge panels, and increasingly, AI-driven experiences that surface answers without a click. For marketers, that makes visibility harder to measure — and easier to lose.

SEO teams now operate in a landscape where accuracy, consistency, and trust signals matter as much as keywords. Business information, reviews, and brand authority determine whether a brand shows up at all, especially as AI-powered search reshapes how results are generated and displayed. As a result, many brands think they’re visible — until they look closer.

The Visibility Brief was created to show you what’s really happening. Built on real data from thousands of brands, it provides a practical view of how visibility plays out across today’s search and discovery ecosystem.

Instead of focusing on a single channel or metric, it takes a broader view. The content highlights where brands are gaining ground, where gaps appear, and which trends are shaping performance.

You’ll see how traditional search and AI-driven discovery now overlap, why data accuracy has become a baseline requirement, and where brands are losing exposure without realizing it. 

The goal is simple: help you understand how visibility is changing and what to focus on now.

Watch or listen to the Visibility Brief to get a clearer view of today’s search landscape — and what it means for your brand’s visibility.

Subscribe to the Visibility Brief on Spotify or Apple Podcasts.


Inside SearchGuard: How Google detects bots and what the SerpAPI lawsuit reveals

Google SearchGuard

We fully decrypted Google’s SearchGuard anti-bot system, the technology at the center of its recent lawsuit against SerpAPI.

After fully deobfuscating the JavaScript code, we now have an unprecedented look at how Google distinguishes human visitors from automated scrapers in real time.

What happened. Google filed a lawsuit on Dec. 19 against Texas-based SerpAPI LLC, alleging the company circumvented SearchGuard to scrape copyrighted content from Google Search results at a scale of “hundreds of millions” of queries daily. Rather than targeting terms-of-service violations, Google built its case on DMCA Section 1201 – the anti-circumvention provision of copyright law.


The complaint describes SearchGuard as “the product of tens of thousands of person hours and millions of dollars of investment.”

Why we care. The lawsuit reveals exactly what Google considers worth protecting – and how far it will go to defend it. For SEOs and marketers, understanding SearchGuard matters because any large-scale automated interaction with Google Search now triggers this system. If you’re using tools that scrape SERPs, this is the wall they’re hitting.

The OpenAI connection

Here’s where it gets interesting: SerpAPI isn’t just any scraping company.

OpenAI has been partially using Google search results scraped by SerpAPI to power ChatGPT’s real-time answers. SerpAPI listed OpenAI as a customer on its website as recently as May 2024, before the reference was quietly removed.

Google declined OpenAI’s direct request to access its search index in 2024. Yet ChatGPT still needed fresh search data to compete.

The solution? A third-party scraper that pillages Google’s SERPs and resells the data.

Google isn’t attacking OpenAI directly. It’s targeting a key link in the supply chain that feeds its main AI competitor.

The timing is telling. Google is striking at the infrastructure that powers rival search products — without naming them in the complaint.

What we found inside SearchGuard

We fully decrypted version 41 of the BotGuard script – the technology underlying SearchGuard. The script opens with an unexpectedly friendly message:

/* Anti-spam. Want to say hello? Contact botguard-contact@google.com */

Behind that greeting sits one of the most sophisticated bot detection systems ever deployed.

BotGuard vs. SearchGuard. BotGuard is Google’s proprietary anti-bot system, internally called “Web Application Attestation” (WAA). Introduced around 2013, it now protects virtually all Google services: YouTube, reCAPTCHA v3, Google Maps, and more.

In its complaint against SerpAPI, Google revealed that the system protecting Search specifically is called “SearchGuard” – presumably the internal name for BotGuard when applied to Google Search. This is the component that was deployed in January 2025, breaking nearly every SERP scraper overnight.

Unlike traditional CAPTCHAs that require clicking images of traffic lights, BotGuard operates completely invisibly. It continuously collects behavioral signals and analyzes them using statistical algorithms to distinguish humans from bots – all without the user knowing.

The code runs inside a bytecode virtual machine with 512 registers, specifically designed to resist reverse engineering.

How Google knows you’re human

The system tracks four categories of behavior in real time. Here’s what it measures:

Mouse movements

Humans don’t move cursors in straight lines. We follow natural curves with acceleration and deceleration – tiny imperfections that reveal our humanity.

Google tracks:

  • Trajectory (path shape)
  • Velocity (speed)
  • Acceleration (speed changes)
  • Jitter (micro-tremors)

A “perfect” mouse movement – linear, constant speed – is immediately suspicious. Bots typically move in precise vectors or teleport between points. Humans are messier.

Detection threshold: Mouse velocity variance below 10 flags as bot behavior. Normal human variance falls between 50 and 500.

Keyboard rhythm

Everyone has a unique typing signature. Google measures:

  • Inter-key intervals (time between keystrokes)
  • Key press duration (how long each key is held)
  • Error patterns
  • Pauses after punctuation

A human typically shows 80-150ms variance between keystrokes. A bot? Often less than 10ms with robotic consistency.

Detection threshold: Key press duration variance under 5ms indicates automation. Normal human typing shows 20-50ms variance.

Scroll behavior

Natural scrolling has variable velocity, direction changes, and momentum-based deceleration. Programmatic scrolling is often too smooth, too fast, or perfectly uniform.

Google measures:

  • Amplitude (how far)
  • Direction changes
  • Timing between scrolls
  • Smoothness patterns

Scrolling in fixed increments – 100px, 100px, 100px – is a red flag.

Detection threshold: Scroll delta variance under 5px suggests bot activity. Humans typically show 20-100px variance.

Timing jitter

This is the killer signal. Humans are inconsistent, and that’s exactly what makes us human.

Google uses Welford’s algorithm to calculate variance in real-time with constant memory usage – meaning it can analyze patterns without storing massive amounts of data, regardless of how many events occur. As each event arrives, the algorithm updates its running statistics.

If your action intervals have near-zero variance, you’re flagged.

The math: If timing follows a Gaussian distribution with natural variance, you’re human. If it’s uniform or deterministic, you’re a bot.

Detection threshold: Event counts exceeding 200 per second indicate automation. Normal human interaction generates 10-50 events per second.
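
The variance checks described in the sections above can be sketched with Welford's update rule. A minimal illustration, assuming the thresholds quoted in this article; the flagging wrapper is hypothetical, not the decompiled BotGuard code:

```python
# Illustrative sketch of the "timing jitter" check: Welford's online
# variance with the thresholds quoted in the article. The flagging
# wrapper is hypothetical, not the decompiled BotGuard code.

class RunningVariance:
    """Welford's algorithm: streaming mean/variance in O(1) memory."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self) -> float:
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0


def looks_automated(intervals: list[float], threshold: float = 10.0) -> bool:
    """Flag near-zero variance in inter-event timing as bot-like."""
    rv = RunningVariance()
    for t in intervals:
        rv.update(t)
    return rv.variance < threshold


# A script firing events every 100 ms exactly has zero jitter; a human
# never does:
#   looks_automated([100, 100, 100, 100])   -> True  (bot-like)
#   looks_automated([80, 140, 95, 210, 60]) -> False (human-like)
```

The same pattern applies to the mouse, keyboard, and scroll thresholds above: feed the metric's event stream in, compare the running variance against a floor that no real human falls below.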

The 100+ DOM elements Google monitors

Beyond behavior, SearchGuard fingerprints your browser environment by monitoring over 100 HTML elements. The complete list extracted from the source code includes:

  • High-priority elements (forms): BUTTON, INPUT – these receive special attention because bots often target interactive elements.
  • Structure: ARTICLE, SECTION, NAV, ASIDE, HEADER, FOOTER, MAIN, DIV
  • Text: P, PRE, BLOCKQUOTE, EM, STRONG, CODE, SPAN, and 25 others
  • Tables: TABLE, CAPTION, TBODY, THEAD, TR, TD, TH
  • Media: FIGURE, CANVAS, PICTURE
  • Interactive: DETAILS, SUMMARY, MENU, DIALOG

Environmental fingerprinting

SearchGuard also collects extensive browser and device data:

Navigator properties:

  • userAgent
  • language / languages
  • platform
  • hardwareConcurrency (CPU cores)
  • deviceMemory
  • maxTouchPoints

Screen properties:

  • width / height
  • colorDepth / pixelDepth
  • devicePixelRatio

Performance:

  • performance.now() precision
  • performance.timeOrigin
  • Timer jitter (fluctuations in timing APIs)

Visibility:

  • document.hidden
  • visibilityState
  • hasFocus()

WebDriver detection: The script specifically checks for signatures that betray automation tools:

  • navigator.webdriver (true if automated)
  • window.chrome.runtime (absent in headless mode)
  • ChromeDriver signatures ($cdc_ prefixes)
  • Puppeteer markers ($chrome_asyncScriptInfo)
  • Selenium indicators (__selenium_unwrapped)
  • PhantomJS artifacts (_phantom)

Why bypasses become obsolete in minutes

Here’s the critical discovery: SearchGuard uses a cryptographic system that can invalidate any bypass within minutes.

The script generates encrypted tokens using an ARX cipher (Addition-Rotation-XOR) – similar to Speck, a family of lightweight block ciphers released by the NSA in 2013 and optimized for software implementations on devices with limited processing power.
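
For intuition, one round of a Speck-style ARX cipher looks like this. A toy sketch on 32-bit words using the published Speck rotation amounts (8 and 3) for that word size; the article does not disclose Google's actual cipher parameters, so treat every value here as illustrative:

```python
# Toy Speck-style ARX round on 32-bit words, using the published Speck
# rotation amounts (8 and 3) for this word size. Illustrative only --
# not Google's actual token cipher.

MASK = 0xFFFFFFFF  # constrain arithmetic to 32-bit words

def rotr(x: int, r: int) -> int:
    return ((x >> r) | (x << (32 - r))) & MASK

def rotl(x: int, r: int) -> int:
    return ((x << r) | (x >> (32 - r))) & MASK

def arx_round(x: int, y: int, k: int) -> tuple[int, int]:
    """One round: rotate, Add, XOR in the round key, rotate, XOR."""
    x = (rotr(x, 8) + y) & MASK  # Addition
    x ^= k                       # XOR with the (rotating) constant
    y = rotl(y, 3) ^ x           # Rotation + XOR
    return x, y
```

Because the round constant is XORed straight into the state, changing it (as Google does on each script rotation) changes every token the cipher emits, so a reverse-engineered bypass stops validating the moment a new script hash ships.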

But there’s a twist.

The magic constant rotates. The cryptographic constant embedded in the cipher isn’t fixed. It changes with every script rotation.

Observed values from our analysis:

  • Timestamp 16:04:21: Constant = 1426
  • Timestamp 16:24:06: Constant = 3328

The script itself is served from URLs with integrity hashes: //www.google.com/js/bg/{HASH}.js. When the hash changes, the cache invalidates, and every client downloads a fresh version with new cryptographic parameters.

Even if you fully reverse-engineer the system, your implementation becomes invalid with the next update.

It’s cat and mouse by design.

The statistical algorithms

Two algorithms power SearchGuard’s behavioral analysis:

  • Welford’s algorithm calculates variance in real time with constant memory usage – meaning it processes each event as it arrives and updates a running statistical summary, without storing every past interaction. Whether the system has seen 100 or 100 million events, memory consumption stays the same.
  • Reservoir sampling maintains a random sample of 50 events per metric to estimate median behavior. This provides a representative sample without storing every interaction.

Combined, these algorithms build a statistical profile of your behavior and compare it against what humans actually do.
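
Reservoir sampling (Algorithm R) keeps a fixed-size uniform sample of an unbounded event stream. A sketch using the 50-slot reservoirs the article describes; the median helper is an assumption about how the sample might be used, not decompiled code:

```python
# Illustrative Algorithm R reservoir with the 50-slot size the article
# describes; the median helper is an assumption, not decompiled code.
import random

class Reservoir:
    """Keep a uniform random sample of k items from an unbounded stream,
    using O(k) memory no matter how many events arrive."""

    def __init__(self, k: int = 50, seed=None):
        self.k = k
        self.n = 0  # events seen so far
        self.sample: list[float] = []
        self._rng = random.Random(seed)

    def add(self, x: float) -> None:
        self.n += 1
        if len(self.sample) < self.k:
            self.sample.append(x)
        else:
            # Item n replaces a random slot with probability k/n,
            # keeping the sample uniform over the whole stream.
            j = self._rng.randrange(self.n)
            if j < self.k:
                self.sample[j] = x


def approx_median(r: Reservoir) -> float:
    """Estimate the stream's median from the fixed-size sample."""
    s = sorted(r.sample)
    return s[len(s) // 2]
```
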

SerpAPI’s response

SerpAPI’s founder and CEO, Julien Khaleghy, shared this statement with Search Engine Land:

“SerpApi has not been served with Google’s complaint, and prior to filing, Google did not contact us to raise any concerns or explore a constructive resolution. For more than eight years, SerpApi has provided developers, researchers, and businesses with access to public search data. The information we provide is the same information any person can see in their browser without signing in. We believe this lawsuit is an effort to stifle competition from the innovators who rely on our services to build next-generation AI, security, browsers, productivity, and many other applications.”

The defense may face challenges. The DMCA doesn’t require content to be non-public – it prohibits circumventing technical protection measures, period. If Google proves SerpAPI deliberately bypassed SearchGuard protections, the “public data” argument may not hold.

What this means for SEO – and the bigger picture

If you’re building SEO tools that programmatically access Google Search, 2025 was brutal.

In January, Google deployed SearchGuard. Nearly every SERP scraper suddenly stopped returning results. SerpAPI had to scramble to develop workarounds – which Google now calls illegal circumvention.

Then in September, Google removed the num=100 parameter – a long-standing URL trick that allowed tools to retrieve 100 results in a single request instead of 10. Officially, Google said it was “not a formally supported feature.” But the timing was telling: forcing scrapers to make 10x more requests dramatically increased their operational costs. Some analysts suggested the move specifically targeted AI platforms like ChatGPT and Perplexity that relied on mass scraping for real-time data.


The combined effect: traditional scraping approaches are increasingly difficult and expensive to maintain.

For the industry: This lawsuit could reshape how courts view anti-scraping measures. If SearchGuard qualifies as a valid “technological protection measure” under DMCA, every platform could deploy similar systems with legal teeth.

Under DMCA Section 1201, statutory damages range from $200 to $2,500 per circumvention act. With hundreds of millions of alleged violations daily, the theoretical liability is astronomical – though Google’s complaint acknowledges that “SerpApi will be unable to pay.”

The message isn’t about money. It’s about setting precedent.

Meanwhile, the antitrust case rolls on. Judge Mehta ordered Google to share its index and user data with “Qualified Competitors” at marginal cost. One hand is being forced open while the other throws punches.

Google’s position: “You want our data? Go through the antitrust process and the technical committee. Not through scraping.”

Here’s the uncomfortable truth: Google technically offers publishers controls, but they’re limited. Google-Extended allows publishers to opt out of AI training for Gemini models and Vertex AI – but it doesn’t apply to Search AI features including AI Overviews.

Google’s documentation states:

“AI is built into Search and integral to how Search functions, which is why robots.txt directives for Googlebot is the control for site owners to manage access to how their sites are crawled for Search.”

Court testimony from DeepMind VP Eli Collins during the antitrust trial confirmed this separation: content opted out via Google-Extended could still be used by the Search organization for AI Overviews, because Google-Extended isn’t the control mechanism for Search.

The only way to fully opt out of AI Overviews? Block Googlebot entirely – and lose all search traffic.
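
In robots.txt terms, the two controls described above look like this. Google-Extended is Google's documented token for Gemini and Vertex AI training; blocking Googlebot is the all-or-nothing option:

```
# Opts out of Gemini model / Vertex AI training only.
# Does NOT affect AI Overviews or other Search AI features.
User-agent: Google-Extended
Disallow: /

# The only full opt-out from AI Overviews: block Search crawling
# entirely - and lose all Google search traffic with it.
User-agent: Googlebot
Disallow: /
```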

Publishers face an impossible choice: accept that your content feeds Google’s AI search products, or disappear from search results altogether.

Your move, courts.

Dig deeper

This analysis is based on version 41 of the BotGuard script, extracted and deobfuscated from challenge data in January 2026. The information is provided for informational purposes only.


Choosing the right WordPress SEO plugin for your business – Yoast vs Rank Math 

Selecting an SEO plugin for your WordPress site is one of the most important decisions you’ll make for your online presence. It’s not just about installing software; it’s about choosing a long-term partner that will grow with your business, adapt to changing search algorithms, and support you in the age of AI. While the market offers several options, two of the most popular plugins today are Yoast and Rank Math. Weighing factors such as reliability, innovation, ecosystem, and trust will help you make a choice that serves your business for years to come. 

This guide provides an in-depth comparison of the key differentiating factors between Yoast and Rank Math, and looks at why millions of websites worldwide have made Yoast their trusted partner in search. 

Key takeaways

  • Choosing an SEO plugin like Yoast SEO impacts your online presence and future growth.
  • Yoast offers reliability with over 15 years of experience and millions of active installations, unlike newer competitors.
  • Innovations such as AI integration and a unified schema graph set Yoast apart from other plugins.
  • Yoast provides comprehensive support, education, and a multi-platform ecosystem tailored for long-term success.
  • Trust industry leaders like Microsoft and Spotify who use Yoast SEO to enhance their online visibility.

What really matters when choosing an SEO plugin

When evaluating WordPress SEO plugins, it’s easy to get distracted by feature lists and flashy interfaces. But experienced marketers, agencies, and business owners know that the best tools are defined by much more than what they promise on paper. 

The questions that matter most: 

  • Can you trust this plugin to work reliably as your business scales? 
  • Will the company behind it still be innovating five years from now? 
  • What happens when you need help before a critical deadline? 
  • Does the plugin anticipate future SEO trends, or just react to them? 
  • Is this a tool you install, or an ecosystem that supports your growth and development? 

These aren’t trivial questions. Your SEO plugin touches essential pages on your site, influences the content you publish, and directly impacts your ability to be found by potential customers.  
Choosing poorly can lead to migration headaches, compatibility issues, and lost rankings. Choosing wisely means peace of mind, ongoing innovation, and a solid foundation to build upon. 

Why legacy and proven trust matter in SEO plugins

Trust isn’t given. It’s earned. Yoast has defined the WordPress SEO landscape for over 15 years, with more than 13 million active installations and over 850 million downloads. This extensive legacy reflects a consistent track record of innovation, stability, and trust. Brands such as The Guardian, Microsoft, Spotify, and others rely on Yoast SEO as a foundation for their SEO strategies. This depth of experience is invaluable as SEO requires ongoing adaptation to algorithm changes and new technologies. 

While Rank Math is an ambitious and feature-rich plugin with a growing user base, its presence in the market is relatively recent. For businesses seeking a proven solution with a long-standing heritage, Yoast’s established positioning offers confidence that the plugin will continue to evolve and provide reliable support for years to come. 

Innovation that shapes the industry

Yoast has always been at the forefront of defining what modern SEO looks like. This isn’t a reactive development; it’s proactive innovation that anticipates where search is heading. Both plugins invest in innovation, but Yoast’s leadership in integrating AI and collaboration with Google sets it apart. 

AI and Automation 

We have introduced an industry-first AI-powered optimization toolset, including: 

  • AI Generate: Creates multiple optimized title and meta description variations instantly, giving you professionally crafted options in seconds instead of struggling for the perfect phrasing.
  • AI Optimize: Scans your content and provides precise, actionable suggestions to improve keyphrase placement, sentence structure, and readability, teaching you SEO best practices while you write. 
  • AI Summarize: Instantly generates bullet-point summaries of your content, making it more scannable and engaging for readers who skim before diving deep. 
  • AI Brand Insights: This is where Yoast truly separates from the pack. As AI platforms like ChatGPT reshape how people find information, AI Brand Insights, included in the Yoast SEO AI+ package, tracks how your brand appears in AI-generated responses. You can monitor your AI visibility, compare it against competitors, and ensure AI platforms accurately represent your business. 

While Rank Math includes helpful automation features such as AI keyword suggestions, Yoast’s AI integration is more comprehensive and positioned as a core pillar of modern SEO strategy. 

Schema markup that search engines can understand

While many plugins output disconnected structured data, Yoast SEO automatically generates a unified semantic graph on every page, linking your organization, content, authors, and products through a single JSON-LD structure that search engines and AI platforms can interpret consistently. 

What makes this different 

Automatic and invisible: 
Yoast outputs rich structured data representing your content, business, and relationships without requiring technical configuration. You focus on creating content; Yoast handles the complexity of structured data behind the scenes. 

Single unified graph format: 
Instead of fragmented schema markup, Yoast creates one cohesive graph structure per page, connecting all entities with unique IDs. When plugins output conflicting schema, search engines can’t reliably interpret your site. Yoast’s unified graph ensures consistent interpretation at scale, whether Google, ChatGPT, or any other AI system is reading your content. 
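
For illustration, a trimmed JSON-LD graph of the kind this approach produces, with `@id` references linking the nodes. The URLs and names here are hypothetical placeholders, not actual Yoast output for any site:

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "@id": "https://example.com/#organization",
      "name": "Example Co"
    },
    {
      "@type": "WebSite",
      "@id": "https://example.com/#website",
      "publisher": { "@id": "https://example.com/#organization" }
    },
    {
      "@type": "Article",
      "@id": "https://example.com/post/#article",
      "isPartOf": { "@id": "https://example.com/#website" },
      "author": { "@id": "https://example.com/#/schema/person/abc123" }
    }
  ]
}
```

Because every node carries a stable `@id` and references other nodes by that ID, a consumer can resolve the whole page into one connected entity graph instead of reconciling disconnected schema blocks.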

Minimal configuration: 
Choose whether your site represents a person or organization; Yoast handles the rest automatically. Specialized blocks like FAQ and How-To map directly to correct schema types and link into the graph without additional setup. 

Why this matters for AI-driven search 

As AI platforms increasingly rely on structured data to understand websites, Yoast’s approach of creating a full semantic model of your site positions you for how search and discovery are evolving. The framework scales reliably from 100 to 100,000 pages while maintaining valid entity relationships. For developers, Yoast’s Schema API provides clean filters to extend or customize the graph without breaking its integrity. 

Rank Math and other plugins support Schema markup, but Yoast’s unified graph framework represents a fundamentally different approach: automatic generation, consistent entity relationships, and architecture built for scale. 

Continuous algorithm adaptation

Search engines make thousands of updates every year. Google alone rolls out over 5,000 algorithm changes annually. Now, as search engines evolve to incorporate AI tooling and platforms like ChatGPT reshape the way people discover information, the SEO landscape is changing faster than ever.  

Most website owners can’t possibly track these shifts across traditional search AND emerging AI platforms, let alone understand their implications. Yoast’s dedicated SEO team monitors every significant update, from Google algorithm changes to how AI platforms index and reference content, and proactively adjusts the plugin to ensure your site stays optimized for both traditional and AI-driven discovery.  

When you use Yoast, you’re not just getting software. You’re getting a team of experts working behind the scenes to keep your SEO strategy current across the entire discovery ecosystem. 

An ecosystem built to support your SEO workflow

Yoast offers an ecosystem beyond the plugin. While Yoast SEO itself is a plugin, Yoast provides a comprehensive ecosystem to support your growth: 

  • 24/7 real human expert support available for Yoast SEO Premium users. It ensures that you get fast, knowledgeable help when you need it. 
  • Yoast SEO Academy offers comprehensive SEO education, covering a range of topics from basics to advanced, with accompanying certifications. 
  • A massive knowledge base and community for continuous learning and troubleshooting. 
     

Multi-Platform Support 

Your business doesn’t exist on WordPress alone. That’s why Yoast extends beyond a single platform: 

  • Yoast SEO for Shopify: Brings Yoast’s trusted optimization to Shopify stores, helping ecommerce businesses improve product visibility and drive more sales. 
  • Yoast WooCommerce SEO: Specifically designed for WooCommerce stores with automated product schema, smart breadcrumbs, and ecommerce-focused content analysis. 

This ecosystem approach means Yoast grows with your business, supporting you across platforms as your needs evolve. Rank Math primarily focuses on the WordPress environment with a strong feature set, but lacks the same breadth of educational resources and multi-platform reach. 

Stability and reliability at enterprise-grade scale

Flashy features attract attention. Rock-solid reliability keeps businesses running. Yoast rigorously tests every update for compatibility and performance across different WordPress versions and server configurations. This commitment ensures: 

  • Backward compatibility: Updates maintain existing functionality without requiring extensive reconfiguration 
  • WordPress core integration: Seamless compatibility with new WordPress releases 
  • Performance at any scale: Optimized for sites ranging from personal blogs to high-traffic enterprise installations 

With over 15 years in the market and more than 13 million active installations, Yoast has proven its reliability across millions of sites, hosting environments, and various use cases. 

Rigorous testing and quality assurance 

Yoast maintains strict development standards that prioritize stability above rapid feature deployment. Every update undergoes extensive testing across the latest WordPress versions, most PHP configurations, and common plugin combinations before release. 

This disciplined approach means Yoast users rarely experience plugin conflicts, broken updates, or compatibility issues that plague WordPress sites using less mature plugins. 

Backward compatibility 

Major updates usually shake the functionality of plugins and software. However, Yoast maintains backward compatibility, ensuring that updating your plugin doesn’t suddenly break critical SEO features or require extensive reconfiguration. 

WordPress core compatibility 

As a plugin deeply integrated with WordPress development, Yoast maintains close relationships with the WordPress core team. This ensures seamless compatibility with new WordPress releases, often supporting new versions on launch day while other plugins scramble to catch up. 

Performance optimized for scale 

Whether you run a small blog or an enterprise site with millions of pages, Yoast performs efficiently without slowing down your site. The plugin is engineered for performance, using best practices for database queries, resource loading, and caching integration. 

Enterprises trust Yoast precisely because it scales reliably. Small teams appreciate that the same plugin powering major corporations works flawlessly on their modest sites, too. 

Ready to make a difference with Yoast SEO Premium?

Explore Yoast SEO Premium and the Yoast SEO AI+ package to discover advanced tools built for serious marketers.

Get Yoast SEO Premium Only $118.80 / year (ex VAT)

Where Yoast takes the lead

While comprehensive feature-by-feature comparisons can be overwhelming, certain capabilities distinguish truly professional SEO plugins from the rest. Here’s where Yoast’s innovation and depth shine through. 

AI-powered optimization 

Yoast leads the industry in AI integration for SEO optimization: 

  • AI-generated titles and meta descriptions 
  • Real-time content optimization suggestions 
  • Instant content summarization 
  • AI Brand Insights for tracking your presence in AI search platforms 

No competing plugin offers this comprehensive AI integration designed specifically for modern SEO workflows. 

Schema Graph 

Yoast’s Schema implementation creates a complete structured data graph connecting your organization, content, authors, and brand identity. This goes far beyond basic Schema markup, providing search engines with rich context that improves your chances of appearing in knowledge panels, rich results, and AI-generated answers. 

Smart internal linking 

Yoast SEO Premium includes intelligent internal linking suggestions that analyze your content and recommend relevant pages to link to. This isn’t just a list of posts; it’s context-aware suggestions that strengthen your site architecture and improve crawlability. 

Advanced redirect manager 

Managing redirects is critical when restructuring sites, changing URLs, or handling broken links. Yoast’s redirect manager offers: 

  • Automatic redirects when you change a post URL 
  • Bulk CSV import/export for large-scale migrations 
  • REGEX support for complex redirect patterns 
  • Full redirect history and management 

WooCommerce-specific optimization 

If you run an online store, Yoast WooCommerce SEO provides: 

  • Automated product schema markup (price, availability, reviews) 
  • Smart breadcrumbs for product categories 
  • Ecommerce-focused content analysis 
  • Duplicate content prevention for product variations 

Comprehensive crawl settings 

Advanced users appreciate Yoast’s granular control over crawl optimization, robots.txt management, and indexation settings, giving technical SEO professionals the precision they need without overwhelming casual users. 

Bot blocker for LLM training control 

As AI companies scrape the web to train large language models, Yoast gives you control over whether your content is used for AI training via Bot Blocker. This cutting-edge feature addresses a concern most plugins haven’t even acknowledged yet. 

Recognized and trusted by industry leaders 

The company you keep says a lot about who you are. When the world’s most recognized brands trust Yoast to power their WordPress SEO, it’s a powerful testament to the quality, reliability, and effectiveness of our solutions. 

Global brands* using Yoast include: 

  • The Guardian 
  • Microsoft 
  • Spotify 
  • Rolling Stones 
  • Taylor Swift 
  • Facebook 
  • eBay 

These organizations have teams of developers, SEO experts, and decision-makers who have evaluated every available option. They chose Yoast, not because it was the newest, but because it was the best. 

*Disclaimer: Based on third party data sources.

Industry Recognition: 

  • Global Search Awards Finalist: Recognized among the world’s leading SEO solutions 
  • Women’s Choice Awards Winner: Acknowledged for excellence and customer satisfaction 

Yoast isn’t just popular; it’s the default choice for WordPress SEO professionals worldwide. 

Understanding what you really need

Before making your final decision, consider what matters most for your specific situation: 

If you value reliability and stability: Choose a plugin with a proven track record of consistent updates, compatibility, and performance. Longevity matters because it signals the company will be around to support you for years to come. 

If innovation matters to your strategy: Look for a plugin that anticipates SEO trends rather than reacting to them. AI integration, Schema excellence, and algorithm adaptation separate forward-thinking tools from those playing catch-up. 

If support is critical: Consider whether you need community forums or access to real SEO experts who can troubleshoot complex issues quickly. When your business relies on organic traffic, response time is crucial. 

If education is important: Some plugins provide features; others teach you how to use them effectively. Comprehensive training resources and certifications demonstrate a commitment to your success. 

If you’re building for the long term: Think about whether this plugin will grow with your business. Multi-platform support, scalability, and an ecosystem approach ensure that your investment pays dividends for years to come. 

Make the choice that drives real growth

Choosing an SEO plugin isn’t about finding the tool with the longest feature list; it’s about finding the one that best suits your needs. It’s about partnering with a company that shares your commitment to long-term growth, innovation, and excellence. 

Over 13 million websites trust Yoast SEO because it delivers on these promises: 

  • Reliability: 15+ years of consistent innovation and stability 
  • Trust: Used by global brands and industry leaders 
  • Innovation: Leading the industry in AI integration and Schema excellence 
  • Support: 24/7 access to real SEO professionals 
  • Education: Comprehensive training through Yoast Academy 
  • Ecosystem: Multi-platform support and continuous learning resources 
  • Stability: Enterprise-grade performance at any scale 

When you choose Yoast, you’re not just installing a plugin; you’re joining millions of websites that have made the strategic decision to partner with the most trusted name in WordPress SEO. 

A smarter analysis in Yoast SEO Premium

Yoast SEO Premium has a smart content analysis that helps you take your content to the next level!

Get Yoast SEO Premium Only $118.80 / year (ex VAT)

The post Choosing the right WordPress SEO plugin for your business – Yoast vs Rank Math  appeared first on Yoast.


How To Adapt Your Entire Marketing Funnel With AI

Marketing is moving faster than most teams can keep up with. Users expect answers immediately. They jump across channels before they ever land on your website. Search results summarize key points before they show links. AI Overviews and other LLMs give people clean, structured answers that used to require a full research session.

This change affects every part of the funnel, not because the fundamentals changed, but because AI reshaped how information flows and how decisions get made.

If you want your marketing system to keep up, you need to adapt your funnel to fit the way people learn, compare, and act. That requires new workflows, smarter content systems, and teams who know how to direct AI instead of wrestling with it.

Here is how to rebuild your entire marketing funnel for the AI era.

Key Takeaways

  • AI changes how users research, compare, and choose products, which means your funnel needs to adapt to shifting intent and new behavior patterns.
  • Teams that rely on structured systems can apply AI consistently across planning, content, outreach, and optimization.
  • Content needs to be created for humans and models at the same time, with clarity, structure, and trustworthy signals built in.
  • AI increases speed, insight, and variation, but human judgment still guides strategy and protects brand quality.
  • Funnel performance improves when your systems evolve continuously, using real-time data and predictive insights to guide action.

The New AI Reality in Marketing

With the advent of AI, users expect fast answers everywhere. They expect straightforward explanations and content that gets to the point. They expect the next step to feel obvious.

A graphic showing how AI has impacted marketing.

Search engines now summarize information before they send traffic. AI tools analyze questions and give people simple paths to follow. Teams that rely on slow planning cycles or rigid workflows fall behind because the landscape shifts too quickly.

AI also gives marketers more information. You can spot friction faster. You can discover demand signals earlier. You can build variations of a single idea in seconds instead of hours. The speed and clarity AI provides changes how you think, plan, and publish.

This is why systems matter. AI works best when your inputs are strong, your workflows are structured, and your team knows how to guide models with purpose.

The Funnel Rebuilt for AI

Funnels used to follow a predictable path. People saw a message, explored options, compared details, and made a decision. AI changed that pattern. Users often skip steps. They expect answers before they even start researching, mix channels and search surfaces, and compare brands in less time using more tools.

You need a funnel that adapts to intent in real time. Let’s talk about how things have changed over time.

A graphic showing how the funnel has been rebuilt for AI.

Awareness: Earn Visibility in a Summarized World

Brand awareness used to mean ranking in search or showing up in social feeds. Now it means being visible wherever models and search engines pull information. Your content needs to be clear and structured, so AI systems can understand it instantly. That includes using strong definitions, concise explanations, and content that answers emerging questions.

AI can also help you plan faster. It can reveal topic clusters, related interests, language patterns, and questions users ask before they search. That insight helps you create content that works for both humans and models.

Consideration: Personalize and Adapt as Users Explore

Users take unpredictable paths. One person might read a comparison page, then watch a video, then search for alternatives. Another might start with a chatbot, skim reviews, and jump straight into pricing.

AI helps you adapt to these differences. You can tailor the next piece of information based on behavior, not assumptions. You can understand objections earlier and give people specific proof that supports their decision-making. You can create educational paths that feel natural, not forced.

Conversion: Speed Up Decisions With Smarter Insight

AI improves how you analyze signals across campaigns. You can see which touchpoints matter most. You can understand where people drop off and what gets them to return. You can time outreach based on behavior instead of sending messages on a fixed schedule.

Models also help you support decisions. You can create guided tools, calculators, and tailored content that answers the final questions users have before they convert. These experiences help users feel confident about their choice.

Upgrade Your Team’s Skills for an AI-Driven Funnel

AI changes workflows, but the impact depends on how your team uses it. You need people who can orchestrate systems, think strategically, and refine outputs with intention.

From Doers to Directors of Intelligence

AI accelerates execution. That means your team shifts from doing every step manually to guiding the process. They need to know how to set the direction, review outputs, and make judgment calls that models cannot.

A graphic comparing AI-only copy vs. AI-Assisted copy.

This is where strategy and quality control become more important. Your team’s experience becomes the intelligence that powers the system.

Build Systems, Not Isolated Tasks

AI performs best when it has structure. You need workflows with clear inputs, expected outputs, and consistent guardrails. That includes:

  • Prompt libraries
  • Structured briefs
  • Standardized content formats
  • Quality assurance criteria
  • Automation playbooks

When these systems exist, you can scale execution without losing quality. The last thing you want to do is invest time in AI materials with little value.
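
As an illustration, a prompt library can be as simple as a versioned set of reusable templates with named fields, so every brief feeds the model the same structured inputs. The template names and placeholders below are hypothetical, a minimal sketch rather than a prescribed format:

```python
# A minimal, hypothetical prompt library: reusable templates with named
# placeholders. Standardized inputs are one of the "guardrails" described above.
PROMPT_LIBRARY = {
    "blog_outline": (
        "You are writing for {brand}. Audience: {audience}. "
        "Draft an outline for a post titled '{title}' with {sections} sections."
    ),
    "social_variant": (
        "Rewrite this message for {channel} in {brand}'s voice, "
        "under {max_words} words: {message}"
    ),
}

def build_prompt(name: str, **fields: str) -> str:
    """Fill a template, failing loudly if a required field is missing."""
    return PROMPT_LIBRARY[name].format(**fields)

prompt = build_prompt(
    "social_variant",
    channel="LinkedIn",
    brand="Acme",
    max_words="60",
    message="Our new report is live.",
)
print(prompt)
```

Because every brief passes through the same templates, output quality stops depending on who happened to write the prompt that day.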

A graphic saying how much time marketers are spending on "AI slop."

Run an AI Literacy Sprint

A simple two-week sprint helps teams adopt AI confidently. The idea is to identify a few repetitive tasks, replace them with AI workflows, refine the prompts, and share results across the team.

This builds trust in the system and helps everyone learn from real examples.

Five Core Capabilities Modern Marketers Need

Teams need the ability to:

  • Guide models with strong prompts
  • Interpret data and validate insights
  • Design basic automations
  • Blend creativity with AI acceleration
  • Apply ethical judgment to protect quality

A graphic showing 5 capabilities modern marketers need.

These skills support every stage of an AI-driven funnel.

AI at the Top of the Funnel: Attract

Top-of-funnel work moves faster with AI. You can build content calendars, briefs, and outlines in minutes. You can analyze emerging trends and understand what people are searching for before those topics peak.

AI also helps you identify gaps. When you study how search experiences present information, you can see which answers, examples, or evidence are missing. That insight becomes your content roadmap.

A graphic showing what gets cited from Google AI Overviews.

You need content that models can interpret easily. Pages should include clear summaries, simple explanations, structured sections, and credible sources. Models scan for signals of clarity and authority. When your content is well structured, it has a better chance of being displayed and referenced.

Repurposing becomes easier too. Long-form content can become social posts, email snippets, video scripts, and answers for community threads. With AI, you can extract angles and variations quickly without losing the core message.

Creating Content That AI Can Interpret

Models look for patterns. They favor content with consistent formatting, headings that reflect questions, concise explanations, and supporting details like data or examples. When your pages follow these patterns, your visibility improves.

A graphic of NeilPatel.com referral sessions from ChatGPT.

Turning Content Into Multi-Format Assets

AI can help you transform one asset into many. A blog post becomes video ideas, social carousels, email sequences, and outline drafts for deeper content. This helps you move faster and create consistent messaging across channels.

A graphic covering if AI has increased daily content production.

AI in the Middle of the Funnel: Nurture and Convert

Middle-of-funnel work thrives when you combine expertise with AI-driven insight. You can turn educational content into interactive tools. You can enrich lead profiles with data about company size, tools used, or behavior patterns. You can score leads based on signals instead of guessing who is most interested.
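
Signal-based lead scoring can be sketched as a simple weighted sum over observed behaviors. The signal names and weights here are hypothetical placeholders you would replace with values learned from your own funnel data:

```python
# Hypothetical signal weights: replace with values derived from your own
# funnel data (e.g., which behaviors historically preceded a purchase).
WEIGHTS = {
    "visited_pricing_page": 30,
    "opened_last_3_emails": 15,
    "used_free_tool": 25,
    "company_size_match": 20,
    "requested_demo": 40,
}

def score_lead(signals: dict) -> int:
    """Sum the weights of every signal the lead has triggered."""
    return sum(w for name, w in WEIGHTS.items() if signals.get(name))

lead = {"visited_pricing_page": True, "used_free_tool": True}
print(score_lead(lead))  # 55 with the weights above
```

Even a crude model like this replaces guessing with a consistent, auditable rule your team can refine as conversion data accumulates.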

A graphic showing the effectiveness of free tools in lead generation.

Personalization becomes more natural. You can adapt messaging to match how each user learns. You can offer the right format for each segment, whether that is a video, a comparison chart, or a detailed guide.

AI also strengthens outbound efforts. You can build smarter lists, generate personalized outreach, and adjust timing based on reply patterns. This helps your team focus on conversations that matter.

A graphic showing outbound efforts.
A graphic showing audience modeling.

Audience modeling becomes more precise. Instead of relying on broad personas, you can identify micro-segments based on motivations, predicted actions, and friction points. This leads to journeys that respond to real behavior.

Building Guided Tools That Turn Expertise Into Self-Serve Experiences

AI makes it easier to convert long-form content into calculators, quizzes, assessments, and guided flows. These tools educate users, gather signals, and qualify leads at the same time.

AI at the Bottom of the Funnel: Retain and Expand

AI changes how you manage customer relationships. It helps you capture insights from conversations, identify churn early, and create proactive outreach. It also helps you spot expansion opportunities by analyzing usage patterns and engagement.

Teams can turn sales calls and support conversations into repeatable playbooks. You can extract objections, winning responses, and communication patterns that help new reps ramp faster.

Retention becomes more proactive. You can monitor behavior for early signals, trigger personalized save sequences, and direct account outreach based on needs.

Upsell and expansion become more personalized too. You can focus on value moments and highlight features or products that match each customer’s journey.

Build an AI-Ready Growth Engine

Adapting your entire funnel to AI does not happen in one step. The most effective approach is to start with workflows that produce quick wins. Research, content briefs, reporting, and follow-ups are the easiest places to start.

You also need to train your team to review AI outputs like editors. They should think critically, refine prompts, and guide models toward better results. When teams treat AI as a collaborator, quality stays high.

Document every win. When a workflow works, turn it into a repeatable playbook. Build a culture where experimentation is normal. Share wins and failures openly. This helps your team learn faster and improve together.

A graphic showing the AI stages of adoption.

Your growth engine becomes stronger every time you refine these systems.

FAQs

Where should I start if my funnel is not AI-ready?

Begin with workflows that affect every channel. Research, briefs, reporting, and follow-ups are easy to replace with AI-assisted versions and offer immediate gains.

Will AI replace my marketing team?

No. AI accelerates execution, but your team guides strategy, applies judgment, and protects quality. The work shifts from doing everything manually to directing intelligent systems.

How do I keep brand quality high when using AI?

Set clear guardrails. Use structured briefs, standardized formats, and defined editorial criteria. Review outputs carefully and refine prompts until they consistently match your voice and standards.

How do I introduce automation without breaking workflows?

Start small. Automate simple, repetitive tasks and build confidence. Add complexity only when your team has mastered the basics.

How do I measure improvements across the funnel?

Track speed, quality, and impact. Look at how quickly your team produces content, whether that content still meets your quality standards, and how it moves metrics like leads and conversions.

Conclusion

AI is not replacing marketing funnels. It is reshaping how they work. Every stage of the journey changes when users rely on faster information, clearer answers, and smarter systems.

Teams that build structures around AI will move faster, make better decisions, and adapt to real-time behavior. Small changes add up. When you refine workflows, train your team, and document wins, you create a system that improves with every cycle.

The future belongs to marketers who learn how to direct AI with clarity and purpose. Let’s build a funnel that matches the way people make decisions today.

Read more at Read More

How brands can respond to misleading Google AI Overviews

A misleading Google AI Overview.

Google’s AI Overviews feature has become the face of search engine results.

Type almost any question into your Google search bar, and the first answer you receive will be AI generated.

Many are thrilled about this. Others are wary.

Marketers and those in the online reputation management (ORM) field are among those urging caution.

Why? Because Google AI Overviews are often littered with information stemming from online forums like Reddit and Quora. 

And oftentimes, this user-generated content can be inaccurate — or entirely false. 

Why Google AI Overviews heavily rely on content from Reddit and Quora

But how and why have Google AI Overviews come to rely on user-generated content forums?

The answer is quite simple. Google AI Overviews sources much of its information from “high-authority” domains. These happen to be platforms like Reddit and Quora.

Google also prioritizes “conversational content” and “real user experiences.” They want searchers to receive answers firsthand from other online humans.

Furthermore, Google places the same amount of weight on these firsthand anecdotes as it does on factual reporting. 

How negative threads end up on AI summaries

Obviously, the emphasis placed on Reddit and Quora threads can lead to issues, especially for professionals and those leading product- or service-driven organizations.

Many of the Reddit threads that rise to the surface are those that are complaint-driven. Think of threads where users are asking, “Does Brand X actually suck?” or “Is Brand Z actually a scam?”

The main problem is that these threads become extremely popular. AI Overviews gather the consensus of many comments and combine them into a single resounding answer. 

In essence, minority opinions end up being represented as fact.

Additionally, Google AI Overviews often resurface old threads that lack timestamps. This can lead to the resurfacing of outdated, often inaccurate information. 

Patterns that SEO, ORM, and brands are noticing

Those in the ORM field have been noticing troubling patterns in Google AI Overviews for a while now. For instance, we’ve identified the following trends:

  • Overwhelming Reddit criticism: Criticism on Reddit rises to the top at alarming rates. Google AI Overviews even seem to ignore official responses from brands at times, instead opting for the opinions of users on forum platforms.
  • Pros vs. cons summaries: These sorts of lists are supposed to provide balance. (Isn’t that the entire point of identifying both the pros and cons of a brand?) However, sites like Reddit and Quora tend to accentuate the negative aspects of brands, at times ignoring the pros altogether. 
  • Outdated content resurfacing: As mentioned in the previous section, outdated content can carry far too much weight. A troubling amount of “resolved issues” gain prominence in the Google AI Overviews feature.

The amplification effect: AI can turn opinion into fact

We live in an era defined by instantaneous knowledge.

Gen Z takes in information at startling rates. What’s seen on TikTok is absorbed as immediate fact. Instagram is where many turn for both breaking news and updates on the latest brands.

This has led to an amplification effect, where algorithms quickly turn opinion into fact. We’re seeing it widely across social media, and now on Google AI Overviews, too.

On top of what we listed in the previous section, those in the ORM realm are noticing the following take effect:

  • Nuance-less summarization: Because AI Overviews pull in such overwhelmingly negative criticism from Reddit, we’re getting less nuanced responses. The focus in AI Overviews is often one-sided and seemingly biased, featuring emotional, extreme language. 
  • Feedback loops: As others in the ORM field have pointed out, many citations in AI Overviews come from deep pages. It’s also common to see feedback loops wherein one negative Reddit thread can hold multiple citations, leading to quick AI validation.
  • Enhanced trust in AI Overviews: Perhaps most troubling of all has been society’s immediate readiness to accept AI Overviews and all the answers they offer. Many users now turn to Google’s feature as their ultimate encyclopedia, without even bothering to view the citations AI Overviews has listed. 

Misinformation and bias create risk

All in all, the rise of information from Reddit and Quora on AI Overviews has led to enhanced risk for businesses and entrepreneurs alike.

False statements and defamatory claims posted online can be accepted as fact. And incomplete narratives or opinion-based criticism floating around on forums are filtered through the lens of AI Overviews.

Making matters worse is that Google does not automatically remove or filter AI summaries that are linked to harmful content. 

This can be damaging to a company’s reputation, as users absorb what they see on AI Overviews at face value. They take it as fact, even though it might be fiction.

Building a reputation strategy for false AI-driven searches

As a business owner, it’s critical to have response strategies in place for Google AI Overviews. 

Working with an ORM team is a critical first step. They might suggest the following measures:

  • Monitoring online forums: Yes, our modern world dictates that you stay on top of online forums like Reddit and Quora. Monitor the name of your business and the top players on your team. If you’re aware of the dialogue, you’re already one step ahead.
  • Creating “AI-readable” content: It’s also important to always be creating content designed to land on AI Overviews. This content should boost your platform on search engines, be citation-worthy, and push down less favorable results.
  • Addressing known criticism: Ever notice criticism directed at your brand? Seek to address it with proper business practices. Respond to online reviews kindly, suppress or remove negative content with your ORM team, and establish your business as a caring practice online.
  • Coordinating various teams: It’s imperative to establish the right teams around your business. We already mentioned ORM, but what about your legal, SEO, and PR teams? Have the right experts in place to deal with any controversies before they arise.

Also, remember to keep an eye on the future. Online reputation management is constantly evolving, and if your intention is to manage and elevate your brand, you must evolve with the times.

That means staying up-to-date with AI literacy and adapting to new KPIs, including sentiment framing, source attribution, and AI visibility. 

Staying on top of Google AI Overviews

We live in a new age. One where AI Overviews dictate much of what searchers think and react to.

And the honest truth is that much of the knowledge AI Overviews gleans comes from user-dominated forums like Reddit and Quora.

As a brand manager, you can no longer be idle. You have to act. You have to manage the sources that Google AI Overviews summarizes, constantly staying one step ahead.

If you don’t, then you’re not properly managing your search reputation. 

Read more at Read More

7 Marketing AI Adoption Challenges (And How to Fix Them)

You’ve likely invested in AI tools for your marketing team, or at least encouraged people to experiment.

Some use the tools daily. Others avoid them. A few test them quietly on the side.

This inconsistency creates a problem.

An MIT study found that 95% of AI pilots fail to show measurable ROI.

Scattered marketing AI adoption doesn’t translate to proven time savings, higher output, or revenue growth.

AI usage ≠ AI adoption ≠ effective AI adoption.

To get real results, your whole team needs to use AI systematically with clear guidelines and documented outcomes.

But getting there requires removing common roadblocks.

In this guide, I’ll explain seven marketing AI adoption challenges and how to overcome them. By the end, you’ll know how to successfully roll out AI across your team.

Free roadmap: I created a companion AI adoption roadmap with step-by-step tasks and timeframes to help you execute your pilot. Download it now.


First up: One of the biggest barriers to AI adoption — lack of clarity on when and how to use it.

1. No Clear AI Use Cases to Guide Your Team

Companies often mandate AI usage but provide limited guidance on which tasks it should handle.

In my experience, this is one of the most common AI adoption challenges teams face, regardless of industry or company size.

Reddit – r/antiwork – AI usage

Vague directives like “use AI more” leave people guessing.

The solution is to connect tasks to tools so everyone knows exactly how AI fits into their workflow.

The Fix: Map Team Member Tasks to Your Tech Stack

Start by gathering your marketing team for a working session.

Ask everyone to write down the tasks they perform daily or weekly. (Not job descriptions, but actual tasks they repeat regularly.)

Then look for patterns.

Which tasks are repetitive and time-consuming?

Common AI Use Cases for Marketing Teams

Maybe your content team realizes they spend four hours each week manually tracking competitor content to identify gaps and opportunities. That’s a clear AI use case.

Or your analytics lead notices they are wasting half a day consolidating campaign performance data from multiple regions into a single report.

AI tools can automatically pull and format that data.

Once your team has identified use cases, match each task to the appropriate tool.

Task-to-Tool Decision

After your workshop, create assignments for each person based on what they identified in the session.

For example: “Automate competitor tracking with [specific tool].”

When your team knows exactly what to do, adoption becomes easier.

2. No Structured Plan to Roll Out AI Across the Organization

If you give AI tools to everyone at once, don’t be surprised if you get low adoption in return.

The issue isn’t your team or the technology. It’s launching without testing first.

The Fix: Start with a Pilot Program

A pilot program is a small-scale test where one team uses AI tools. You learn what works, fix problems, and prove value — before rolling it out to everyone else.

A company-wide launch doesn’t give you this learning period.

Everyone struggles with the same issues at once. And nobody knows if the problem is the tool, their approach, or both.

Which means you end up wasting months (and money) before realizing what went wrong.

Two Approaches to Marketing AI Adoption

Plan to run your pilot for 8-12 weeks.

Note: Your pilot timeline will vary by team.

Small teams can move fast and test in 4-8 weeks. Larger teams might need 3-4 months to gather enough feedback.

Start with three months as your baseline. Then adjust based on how quickly your team adapts.


Choose one department to pilot first. Content, email, or social teams work best because they produce repetitive outputs that show AI’s immediate value.

Select 3-30 participants from this department, depending on your team size.

(Smaller teams might pilot with 3-5 people. Larger organizations can test with 20-30.)

Then, set measurable goals with clear targets you can track. Like:

  • Cut blog production time from 8 hours to 5 hours
  • Reduce email draft revisions from 3 rounds to 1
  • Create 50 social media posts weekly instead of 20

Schedule weekly meetings to gather feedback throughout the pilot.

The pilot will produce department-specific workflows. But you’ll also discover what transfers: which training methods work, where people struggle, and what governance rules you need.

When you expand to other departments, they’ll adapt these frameworks to their own AI tasks.

After three months, you’ll have proven results and trained users who can teach the next group.

3-Month Pilot

At that point, expand the pilot to your second department (or next batch of the same team).

They’ll learn from the first group’s mistakes and scale faster because you’ve already solved common problems.

Pro tip: Keep refining throughout the pilot.

  • Update prompts when they produce poor results
  • Add new tools when you find workflow gaps
  • Remove friction points the moment they appear


Your third batch will move even quicker.

Within a year, you’ll have organization-wide marketing AI adoption with measurable results.

3. Your Team Lacks the Training to Use AI Confidently

Most marketing teams roll out AI tools without training team members how to use them.

In fact, only 39% of people who use AI at work have received any training from their company.

61% of workers who use AI at work received no training from their company

And when training does exist, it might focus on generic AI concepts rather than specific job applications.

The answer is better training that connects to the work your team does.

The Fix: Role-Specific Training

Generic training explains how AI works. Role-specific training shows people how to use AI in their actual jobs.

Here’s the difference:

  • Social Media Manager. Lower priority: AI concepts and how large language models work. Start here: how to automate content calendars and schedule posts faster
  • SEO Specialist. Lower priority: understanding neural networks and machine learning. Start here: AI-powered keyword research and competitor analysis
  • Email Marketer. Lower priority: machine learning algorithms and data processing. Start here: using AI for personalization and subject line testing
  • Content Writer. Lower priority: how AI models generate text and natural language processing. Start here: using AI to research topics, create outlines, and edit drafts
  • Paid Ads Manager. Lower priority: deep learning fundamentals and algorithmic optimization. Start here: AI tools for ad copy testing, audience targeting, and bid management

When training connects directly to someone’s daily tasks, they actually use what they learn.

For example, Mastercard applies this approach with three types of training:

  • Foundational knowledge for everyone
  • Job-specific applications for different roles
  • Reskilling programs where needed

Mastercard – Putting the "I" in AI

Companies like KPMG, Accenture, and IKEA have also developed dedicated AI training programs for their teams.

This is likely because they learned that generic training creates enterprise AI adoption challenges at scale.

Employees complete courses but never apply what they learned to their actual work.

Ikea – AI training programs for their teams

But you don’t need enterprise-scale resources to make this work.

Start by mapping what each role actually does with AI.

For example:

  • Your content team uses AI for research, strategy, outlines, and drafts
  • Your ABM team uses it for account research and personalized outreach
  • Your social team uses it for video creation and caption variations
  • Your marketing ops team uses it for workflow automation and data integration

Once you know what each role needs, pick your training approach.

Platforms like Coursera and LinkedIn Learning offer specific AI training programs that work well for flexible, self-paced learning.

Coursera – GenAI for PR Specialists

Training may also be available from your existing tools.

Check whether your current marketing platforms offer AI training resources, such as courses or documentation.

For example, Semrush Academy offers various training programs that also cover its AI capabilities.

Semrush Academy – AI Courses

For teams with highly specific workflows, external trainers can be useful.

This costs more. But it delivers the most relevant results because the trainer focuses only on what your team actually needs to learn.

For example, companies like Section offer AI adoption programs for enterprises, including coaching and custom workshops.

Sectionai – Homepage

But keep in mind that training alone won’t sustain marketing AI adoption.

AI tools evolve constantly, and your team needs continuous support to adapt.

Create these support systems:

  • Set up a dedicated Slack channel for AI questions where your team can share wins and troubleshoot problems
  • Run weekly Q&A sessions where people discuss specific challenges
  • Update training materials as new features and use cases emerge

4. Team Members Fear AI Will Replace Their Roles

Employees may resist AI marketing adoption because they fear losing their jobs to automation.

Headlines about AI replacing workers don’t help.

Forbes – AI Is Killing Marketing

Your goal is to address these fears directly rather than dismissing them.

The Fix: Have Honest Conversations About Job Security

Meet with each team member and walk through how AI affects their workflow.

Point out which repetitive tasks AI will automate. Then explain what they’ll work on with that freed-up time.

Be careful about the language you use. Be empathetic and reassuring.

For example, don’t say “AI makes you more strategic.”

Say: “AI will pull performance reports automatically. You’ll analyze the insights, identify opportunities, and make strategic decisions on budget allocation.”

One is vague. The other shows them exactly how their role evolves.

How to Address AI Fears With Your Team

Don’t just spring changes on your team. Give them a clear timeline.

Explain when AI tools will roll out, when training starts, and when you expect them to start using the new workflows.

For example: “We’re implementing AI for competitor tracking in Q2. Training happens in March. By April, this becomes part of your weekly process.”

When people know what’s coming and when, they have time to prepare instead of panicking.

Sample Timeline

Pro tip: Let people choose which AI features align with their interests and work style.

Some team members might gravitate toward AI for content creation. Others prefer using it for data analysis or reporting.

When people have autonomy over which features they adopt first, resistance decreases. They’re exploring tools that genuinely interest them rather than following mandates.


5. Your Team Resists AI-Driven Workflow Changes

People resist AI when it disrupts their established workflows.

Your team has spent years perfecting their processes. AI represents change, even when the benefits are obvious.

Resistance gets stronger when organizations mandate AI usage without considering how people actually work.

Reddit – Why AI

New platforms can be especially intimidating.

It means new logins, new interfaces, and completely new workflows to learn.

Rather than forcing everyone to change their workflows at once, let a few team members test the new approach first using familiar tools.

The Fix: Start with AI Features in Existing Tools

Your team likely already uses HubSpot, Google Ads, Adobe, or similar platforms daily.

When you use AI within existing tools, your team learns new capabilities without learning an entirely new system.

If you’re running a pilot program, designate 2-3 participants as AI champions.

Their role goes beyond testing — they actively share what they’re learning with the broader team.

What Do AI Champions Do

The AI champions should be naturally curious about new tools and respected by their colleagues (not just the most senior people).

Have them share what they discover in a team Slack channel or during standups:

  • Specific tasks that are now faster or easier
  • What surprised them (good or bad)
  • Tips or advice on how others can use the tool effectively

When others see real examples, such as “I used Social Content AI to create 10 LinkedIn posts in 20 minutes instead of 2 hours,” it carries more weight than reassurance from leadership.

Slack – Message

For example, if your team already uses a tool like Semrush, your champions can demonstrate how its AI features improve their workflows.

Keyword Magic Tool’s AI-powered Personal Keyword Difficulty (PKD%) score shows which keywords your site can realistically rank for — without requiring any manual research or analysis.

Keyword Magic Tool – Newsletter platform – PKD

AI Article Generator creates SEO-friendly drafts from keywords.

Your content writers can input a topic, set their brand voice, and get a structured first draft in minutes. This reduces the time spent staring at a blank page.

Semrush – AI Article Generator

Social Content AI handles the repetitive parts of social media planning. It generates post ideas, copy variations, and images.

Your social team can quickly build out a week’s content calendar instead of creating each post from scratch.

Semrush – Social Content AI Kit – Ideas by topic

Don’t have a Semrush subscription? Sign up now for a 14-day free trial, plus a special 17% discount on annual plans.

6. No Governance or Guardrails to Keep AI Usage Safe

Without clear guidelines, your team may either avoid AI entirely or use it in ways that create risk.

In fact, 57% of enterprise employees input confidential data into AI tools.

Types of Sensitive Data Employees Input Into AI Tools

They paste customer data into ChatGPT without realizing it violates data policies.

Or publish AI-generated content without approval because the review process was never explained.

Your team needs clear guidelines on what’s allowed, what’s not, and who approves what.

Free AI policy template: Need help creating your company’s AI policy? Download our free AI Marketing Usage Policy template. Customize it with your team’s tools and workflows, and you’re ready to go.


The Fix: Create a One-Page AI Usage Policy

When creating your policy, keep it simple and accessible. Don’t create a 20-page document nobody will read.

Aim for 1-2 pages that are straightforward and easy to follow.

Include four key areas to keep AI usage both safe and productive.

  • Approved Tools: List which AI tools your team can use, both standalone tools and AI features in platforms you already use. Example: “Approved: ChatGPT, Claude, Semrush’s AI Article Generator, Adobe Firefly”
  • Data Sharing Rules: Define specifically what data can and can’t be shared with AI tools. Example: “Safe to share: product descriptions, blog topics, competitor URLs. Never share: customer names, email addresses, revenue data, internal campaign plans, pricing strategies, unannounced product details”
  • Review Requirements: Document who reviews each type of content before publication. Example: “Social posts: peer review. Blog posts: content lead approval. Legal/compliance content: legal team review”
  • Approval Workflows (optional): Clarify who approves AI content at each stage. Example: “Internal drafts: content team. Customer-facing materials: marketing director. Compliance-related content: legal sign-off”

Beyond documenting the rules, establish who team members should contact when they encounter situations the policy doesn’t address.

Designate a department lead, governance contact, or weekly office hours as the escalation point for:

  • Scenarios not covered in your guidelines
  • Technical issues with approved AI tools
  • Concerns about whether AI-generated content is accurate or appropriate
  • Questions about data sharing

Marketing AI Escalation Process

The goal is to give your team a clear path to get help, rather than leaving them to guess or avoid AI altogether.

Then, post the policy where your team will see it.

This might be your Slack workspace, project management tool, or a pinned document in your shared drive.

AI Policy document

And treat it as a living document.

When the same question comes up multiple times, add the answer to your policy.

For example, if three people ask, “Can I use AI to write email subject lines?” update your policy to explicitly say yes (and clarify who reviews them before sending).

AI Governance Checklist

7. No Reliable Way to Measure AI’s Impact or ROI

Without clear proof that AI improves their results, team members may assume it’s just extra work and return to old methods.

And if leadership can’t see a measurable impact, they might question the investment.

This puts your entire AI program at risk.

Avoid this by establishing the right metrics before implementing AI.

The Fix: Track Business Metrics (Not Just Efficiency)

Here’s how to measure AI’s business impact properly.

Pick 2-3 metrics your leadership already reviews in reports or meetings.

These are typically:

  • Leads generated
  • Conversion rate
  • Revenue growth
  • Customer acquisition
  • Customer retention

Measure Marketing AI's Business Impact

These numbers demonstrate to your team and leadership that AI is helping your business.

Then, establish your baseline by recording your current numbers. (Do this before implementing AI tools.)

For example, if you’re tracking leads and conversion rate, write down:

  • Current monthly leads: 200
  • Current conversion rate: 3%

This baseline lets you show your team (and leadership) exactly what changed after implementing AI.

Pro tip: Avoid making multiple changes simultaneously during your pilot or initial rollout.

If you implement AI while also switching platforms or restructuring your team, you won’t know which change drove results.

Keep other variables stable so you can clearly attribute improvements to AI.


Once AI is in use, check your metrics monthly to see if they’re improving. Use the same tools you used to record your baseline.

Write down your current numbers next to your baseline numbers.

For example:

  • Baseline leads (before AI): 200 per month
  • Current leads (3 months into AI): 280 per month
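
The before/after comparison above is simple arithmetic; as a small sketch, using the illustrative numbers from this section, the calculation looks like this:

```python
def percent_change(baseline: float, current: float) -> float:
    """Percent change from a pre-AI baseline to a current value."""
    return (current - baseline) / baseline * 100

# Illustrative numbers from the example above
baseline_leads = 200  # monthly leads before AI
current_leads = 280   # monthly leads, 3 months into AI

lift = percent_change(baseline_leads, current_leads)
print(f"Lead lift since adopting AI: {lift:.0f}%")
```

Recording the comparison this way (or in a spreadsheet) keeps baseline and current figures side by side, which is exactly what you need for monthly reviews.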

But don’t just check if numbers went up or down.

Look for patterns:

Did one specific campaign or content type perform better after using AI?

Are certain team members getting better results than others?

Track individual output alongside team metrics.

For example, compare how many blog posts each writer completes per week, or email open rates by the person who drafted them.

Email report overview page

If someone’s consistently performing better, ask them to share their AI workflow with the team.

This shows you what’s working, and helps the rest of your team improve.

Share results with both your team and leadership regularly.

When reporting, connect AI’s impact to the metrics you’ve been tracking.

For example:

Say: “AI cut email creation time from 4 hours to 2.5 hours. We used that time to run 30% more campaigns, which increased quarterly revenue from email by $5,000.”

Not: “We saved 90 hours with AI email tools.”

The first shows business impact — what you accomplished with the time saved. The second only shows time saved.

Other examples of how to frame your reporting include:

How to Report AI Results to Leadership

Build Your Marketing AI Adoption Strategy

When AI usage is optional, undefined, or unsupported, it stays fragmented.

Effective marketing AI adoption looks different.

It’s built on:

  • Role-specific training people actually use
  • Guardrails that reduce uncertainty and risk
  • Metrics that drive business outcomes

When those pieces are in place, AI becomes part of how work gets done.

If you want a step-by-step implementation plan, download our Marketing AI Adoption Roadmap.

Need help choosing which AI tools to pilot? Our AI Marketing Tools guide breaks down the best options by use case.

The post 7 Marketing AI Adoption Challenges (And How to Fix Them) appeared first on Backlinko.


SEO in 2026: Key predictions from Yoast experts

If there’s one takeaway as we look toward SEO in 2026, it’s that visibility is no longer just about ranking pages, but about being understood by increasingly selective AI-driven systems. In 2025, SEO proved it was not disappearing, but evolving, as search engines leaned more heavily on structure, authority, and trust to interpret content beyond the click. In this article, we share SEO predictions for 2026 from Yoast SEO experts, Alex Moss and Carolyn Shelby, highlighting the shifts that will shape how brands earn visibility across search and AI-powered discovery experiences.

Key takeaways

  • In 2026, SEO focuses on visibility defined by clarity, authority, and trust rather than just page rankings
  • Structured data becomes essential for eligibility in AI-driven search and shopping experiences
  • Editorial quality must meet machine readability standards, as AI evaluates content based on structure and clarity
  • Rankings remain important as indicators of authority, but visibility now also includes citations and brand sentiment
  • Brands should align their SEO strategies with social presence and aim for consistency across all platforms to enhance visibility

A brief recap of SEO in 2025: what actually changed?

2025 marked a clear shift in how SEO works. Visibility stopped being defined purely by pages and rankings and began to be shaped by how well search engines and AI systems could interpret content, brands, and intent across multiple surfaces. AI-generated summaries, richer SERP features, and alternative discovery experiences made it harder to rely solely on traditional metrics, while signals such as authority, trust, and structure played a larger role in determining what was surfaced and reused.

As we outlined in our SEO in 2025 wrap-up, the brands that performed best were those with strong foundations: clear content, credible signals, and structured information that search systems could confidently understand. That shift set the direction for what was to come next.

By the end of 2025, it was clear that SEO had entered a new phase, one shaped by interpretation rather than isolated optimizations. The SEO predictions for 2026 from Yoast experts build directly on this evolution.

2026 SEO predictions by Yoast experts

The SEO predictions for 2026 shared here come from our very own Principal SEOs at Yoast, Alex Moss and Carolyn Shelby. Built on the lessons SEO revealed in 2025, these predictions focus less on reacting to individual updates and more on how search and AI systems are evolving at a foundational level, and what that means for sustainable visibility going forward.

TL;DR

SEO in 2026 is about understanding how signals such as structure, authority, clarity, and trust are now interpreted across search engines, AI-powered experiences, and discovery platforms. Each prediction below explains what is changing, why it matters, and how brands can practically adapt in the coming year.

Prediction 1: Structured data shifts from ranking enhancer to retrieval qualifier

In 2026, structured data will no longer be a competitive advantage; it will become a baseline requirement. Search engines and AI systems increasingly rely on structured data as a layer of eligibility to determine whether content, products, and entities can be confidently retrieved, compared, or surfaced in AI-powered experiences.

For ecommerce brands, this shift is especially significant. Product information such as pricing, availability, shipping details, and merchant data is now critical for visibility in AI-driven shopping agents and comparison interfaces. At the enterprise level, the move toward canonical identifiers reflects a growing need to avoid misattribution and data decay across systems that reuse information at scale.

What this means in practice:

Brands without clean, comprehensive entity and product data will not rank lower. They will simply not appear in AI-driven shopping and comparison flows at all.

Also read: Optimizing ecommerce product variations for SEO and conversions

How to act on this:

Treat structured data as part of your SEO foundation, not an enhancement. Tools like Yoast SEO help standardize the implementation of structured data. The plugin’s structured data features make it easier to generate rich, meaningful schema markup, helping search engines better understand your site and take control of how your content is described.
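
Structured data is typically expressed as JSON-LD embedded in the page. A plugin like Yoast SEO generates this markup automatically; purely as an illustration of what it contains, here is a minimal hand-rolled Product example (all names and values are hypothetical):

```python
import json

# Hypothetical product data; a plugin would normally generate this markup.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    "description": "A sample product used to illustrate schema markup.",
    "offers": {
        "@type": "Offer",
        "price": "19.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# Embedded in a page inside <script type="application/ld+json"> ... </script>
print(json.dumps(product_jsonld, indent=2))
```

The point is not the markup itself but its completeness: price, currency, and availability are the kinds of fields AI-driven shopping and comparison flows need in order to retrieve your product confidently.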

A smarter analysis in Yoast SEO Premium

Yoast SEO Premium has a smart content analysis that helps you take your content to the next level!

Get Yoast SEO Premium for only $118.80/year (ex VAT)

Prediction 2: Agentic commerce becomes a visibility battleground, not a checkout feature

Agentic commerce marks a shift in how users discover and choose brands. Instead of browsing, comparing, and transacting manually, users increasingly rely on AI-driven agents to recommend, reorder, or select products and services on their behalf. In this environment, visibility is established before a checkout ever happens, often without a traditional search query.

This shift is becoming more concrete as search and commerce platforms move toward standardized ways for agents to understand and transact with merchants. Recent developments around agentic commerce protocols and Universal Commerce Protocol (UCP) highlight how AI systems are being designed to access product, pricing, availability, and merchant information more directly. As a result, platforms such as Shopify, Stripe, and WooCommerce are no longer just infrastructure. They increasingly act as distribution layers, where agent compatibility influences which brands are surfaced, recommended, or selected.

What this means in practice:

In 2026, SEO teams will be accountable for agent readiness in much the same way they were once accountable for mobile-first readiness. If agents cannot consistently interpret your brand, product data, or availability, they are more likely to default to competitors that they can understand with greater confidence.

How to act on this:

Focus on making your brand legible to automated decision systems. Ensure product information, pricing, availability, and supporting metadata are clear, structured, and consistent across your site and feeds. This is not about optimizing for a single platform or protocol, but about reducing ambiguity so AI agents can accurately interpret and act on your information across emerging agent-driven discovery and commerce experiences.
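
One way to operationalize “agent readiness” is a simple completeness check over your product feed. This is not an implementation of UCP or any real agent protocol (those define their own schemas); it is only a sketch of the idea, with hypothetical field names:

```python
# Hypothetical "agent readiness" check: flag product records that lack
# the fields an AI shopping agent would need to interpret them.
REQUIRED_FIELDS = {"name", "price", "currency", "availability", "url"}

def missing_fields(record: dict) -> set:
    """Return the required fields that are absent or empty in a record."""
    return {f for f in REQUIRED_FIELDS if not record.get(f)}

catalog = [
    {"name": "Widget A", "price": 19.99, "currency": "USD",
     "availability": "in_stock", "url": "https://example.com/widget-a"},
    {"name": "Widget B", "price": 24.99, "currency": "USD"},  # incomplete
]

for item in catalog:
    gaps = missing_fields(item)
    status = "agent-ready" if not gaps else f"missing: {sorted(gaps)}"
    print(f"{item['name']}: {status}")
```

Running a check like this over your feed surfaces ambiguity before an agent encounters it and defaults to a competitor it can parse with more confidence.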

Prediction 3: Editorial quality becomes a machine readability requirement

In 2026, editorial quality is no longer judged only by human readers. AI systems increasingly evaluate content based on how efficiently it can be parsed, summarized, cited, and reused. Verbosity, fluff, and circular explanations do not just fail editorially; they fail functionally.

Content that is concise, clearly structured, and well-attributed has higher chances of performing well. Headings, lists, definitions, and tables directly influence how information is chunked and reused across AI-generated summaries and search experiences.

Must read: Why is summarizing essential for modern content?

What this means in practice:

“Helpful content” is being held to higher editorial standards. Content that cannot be summarized cleanly without losing meaning becomes less useful to AI systems, even if it remains readable to human audiences.

How to act on this:

Make editorial quality measurable and machine actionable. Use tools that help you align content with modern discoverability requirements. Yoast SEO Premium’s AI features (AI Generate, AI Optimize, and AI Summarize) help you assess and improve how content is structured and optimized, supporting both search engines and AI systems in understanding your intent.

Prediction 4: Rankings still matter, but as training signals, not endpoints

Despite ongoing speculation, rankings do not disappear in 2026. Instead, their role changes. AI agents and search systems continue to rely on top-ranked, trusted pages to understand authority, relevance, and consensus within a topic.

While rankings are no longer the final KPI, abandoning them entirely creates blind spots in understanding why certain brands are included or ignored in AI-driven experiences.

What this means in practice:

Teams that stop tracking rankings altogether risk losing insight into how authority is established and reinforced across search and AI systems.

How to act on this:

Continue to use rankings as diagnostic signals, but don’t treat them as the sole indicator of success in 2026. Alongside traditional performance metrics for SEO in 2026, look at how often your brand is mentioned, cited, or summarized in AI-generated answers and recommendations.

Tools like Yoast AI Brand Insights, available as part of Yoast SEO AI+, help surface these broader visibility signals by showing how your brand appears across AI platforms, including sentiment, citation patterns, and competitive context.

See how visible your brand is in AI search

Track mentions, sentiment, and AI visibility. With AI Brand Insights and Yoast SEO AI+, you can start monitoring and improving your performance.

Prediction 5: Brand sentiment becomes a core visibility signal

Brand sentiment increasingly influences how search engines and AI systems assess credibility and trust. Mentions, whether linked or unlinked, contribute to a broader understanding of how a brand is perceived across the web. AI systems synthesize signals from reviews, forums, social platforms, media coverage, and knowledge bases to form a composite view of legitimacy and expertise.

What makes this shift more impactful is amplification. Inconsistent messaging or negative sentiment is not smoothed out over time. Instead, it becomes more apparent when systems attempt to summarize, compare, or recommend brands across search and AI-driven experiences.

What this means in practice:

SEO, brand, PR, and social teams increasingly influence the same visibility signals. When these efforts are misaligned, credibility weakens. When they reinforce one another, trust becomes easier for systems to establish and maintain.

How to act on this:

Focus on consistency across owned, earned, and shared channels. Pay attention not only to where your brand ranks, but also to how it is discussed, described, and contextualized across various platforms. As discovery expands beyond traditional search results, reputation and narrative coherence become essential inputs into how brands are surfaced and understood.

Prediction 6: Multimodal optimization becomes baseline, not optional

Search behavior is no longer text-first. Images, video, audio, and transcripts now function as retrievable knowledge objects that feed both traditional search and AI-powered experiences. In particular, video platforms continue to influence how expertise and authority are understood at scale.

Platforms like YouTube function not only as discovery engines, but also as training corpora for AI systems learning how to interpret topics, brands, and creators.

What this means in practice:

Brands with strong written content but weak visual or video assets may appear incomplete or “thin” to AI systems, even if their articles are well-optimized.

How to act on this:

Treat multimodal content as part of your SEO foundation. Support written content with relevant visuals, video, and transcripts. Clear structure and readability remain essential, and tools like Yoast SEO help ensure your core content remains accessible and well-organized as it is reused across formats.

Prediction 7: Social platforms become secondary search indexes

Discovery will increasingly happen outside traditional search engines. Platforms such as TikTok, LinkedIn, Reddit, and niche communities now act as secondary search indexes where users validate expertise and intent.

AI systems reference these platforms to verify whether a brand’s claims, expertise, and messaging are substantiated in public discourse.

What this means in practice:

Presence alone is not enough. Inconsistent or unclear messaging across platforms weakens trust signals, while focused, repeatable narratives reinforce authority.

How to act on this:

Align your SEO strategy with social and community visibility to enhance your online presence. Ensure that your expertise, terminology, and positioning remain consistent across all discussions about your brand.

Must read: When AI gets your brand wrong: Real examples and how to fix it

Prediction 8: Email reasserts itself as the most controllable growth channel

As discovery fragments and platforms increasingly gate access to audiences, email regains importance as a high-signal, low-distortion channel. Unlike search or social platforms, email offers direct access to users without algorithmic mediation.

In 2026, email plays a supporting role in reinforcing authority, engagement, and intent signals, especially as AI systems evaluate how audiences interact with trusted sources over time.

What this means in practice:

Brands that underinvest in email become overly dependent on platforms they do not control, which increases volatility and reduces long-term resilience.

How to act on this:

Focus on relevance over volume. Segment audiences, align content with intent, and use email to reinforce expertise and trust, not just drive clicks.

Prediction 9: Authority outweighs freshness for most non-news queries

For non-news content, AI systems increasingly prioritize credible, historically consistent sources over frequent updates or constant publishing. Freshness still matters, but only when it meaningfully improves accuracy or relevance.

Long-standing domains with coherent narratives and well-maintained content benefit, provided their foundations remain clean and trustworthy.

What this means in practice:

Scaled/programmatic content strategies lose effectiveness. Publishing frequently without maintaining quality or consistency introduces noise rather than value.

How to act on this:

Invest in maintaining and improving existing content. Update thoughtfully, reinforce expertise, and ensure that your most important pages remain accurate, structured, and authoritative.

Prediction 10: SEO teams evolve into visibility and narrative stewards

In 2026, SEO will extend far beyond search engines. SEO teams are increasingly influencing how brands are perceived by both humans and machines across search, AI-generated answers, and discovery platforms.

Success is measured not by traffic alone, but by inclusion, citation, and trust. SEO becomes a strategic function that shapes how a brand is represented and understood.

What this means in practice:

SEO teams that focus solely on production or technical fixes risk losing influence as visibility becomes a cross-channel concern.

How to act on this:

Shift focus toward clarity, consistency, and long-term trust. The most effective teams help define how a brand is understood, not just how it ranks.

What SEO is no longer about in 2026 (misconceptions to discard)

As SEO evolves in 2026, many long-standing assumptions no longer reflect how search engines and AI-driven systems actually determine visibility. The table below contrasts common SEO myths with the realities shaped by recent changes and expert insights from Yoast.

  • Myth: SEO is mainly about ranking pages. Reality: Rankings still matter, but they serve as signals of authority and relevance rather than the final measure of visibility.
  • Myth: Structured data is optional or just a ranking boost. Reality: Structured data is now a baseline requirement for eligibility in AI-driven search, shopping, and comparison experiences.
  • Myth: Publishing more content leads to better performance. Reality: Authority, clarity, and maintenance of fewer strong assets outperform high-volume publishing.
  • Myth: Editorial quality is subjective. Reality: Content quality is increasingly evaluated by machines based on structure, clarity, and reusability.
  • Myth: Brand reputation is a PR concern, not an SEO one. Reality: Brand sentiment directly influences how AI systems interpret, trust, and recommend brands.
  • Myth: Search is still primarily text-based. Reality: Images, video, audio, and transcripts are now core retrievable knowledge objects.
  • Myth: SEO can be measured only through traffic. Reality: Visibility spans AI answers, social platforms, agents, and citations, requiring broader performance signals.

Looking ahead: what will shape SEO in 2026

The focus is no longer on isolated tactics or short-term wins, but on building visibility systems that search engines and AI platforms can reliably understand, trust, and reuse.

Clarity and interpretability matter more than clever optimization. Content, products, and brand narratives need to be easy for machines to interpret without ambiguity. Structured data has become foundational, not optional, determining whether brands are eligible to appear in AI-powered shopping, comparison, and answer-driven experiences.

Authority is built over time, not manufactured at scale. Search and AI systems increasingly favor sources with consistent, well-maintained narratives over those chasing volume. Visibility also extends beyond the SERP, spanning AI-generated answers, citations, recommendations, and cross-platform mentions, making it essential to look beyond traffic as the sole measure of success.

Finally, SEO in 2026 demands alignment. Brand, content, product, and platform signals all contribute to how systems interpret trust and relevance.

The post SEO in 2026: Key predictions from Yoast experts appeared first on Yoast.
