EU puts Google’s AI and search data under DMA spotlight

The European Commission has formally opened new proceedings to spell out how Google must share key Android features and Google Search data with rivals under the Digital Markets Act.

The Commission on Tuesday opened two formal “specification proceedings” to guide how Google must comply with key DMA obligations, effectively turning regulatory dialogue into a structured process with defined outcomes.

Why we care. The European Commission is escalating its oversight of Google under the Digital Markets Act, with moves that could reshape competition in mobile AI and search — and limit how much advantage Google can extract from its own platforms. If Google is required to share search data and Android AI capabilities more broadly, it could accelerate competition from alternative search engines and AI assistants, potentially fragmenting reach and measurement.

Over time, that may affect where advertisers spend, how much inventory is available, and how dependent campaigns are on Google-owned platforms.

First focus — Android and AI interoperability. Regulators are examining how Google must give third-party developers free and effective access to Android hardware and software features used by Google’s own AI services, including Gemini.

  • The goal is to ensure rival AI providers can integrate just as deeply into Android devices as Google’s first-party tools.

Second focus — search data sharing. The Commission is also moving to define how Google should share anonymised search ranking, query, click and view data with competing search engines on fair, reasonable and non-discriminatory terms.

  • That includes clarifying what data is shared, how it’s anonymised, who qualifies for access, and whether AI chatbot providers can tap into the dataset.

Between the lines. This isn’t just about compliance checklists. The Commission is signaling that AI services are now squarely in scope of DMA enforcement, especially where platform control over data and device features could tilt fast-growing markets before competitors have a chance to scale.

What’s next: Within three months, the Commission will send Google its preliminary findings and proposed measures. The full proceedings are set to conclude within six months, with non-confidential summaries published so third parties can weigh in.

The backdrop. Google has been required to comply with DMA obligations since March 2024, after being designated a gatekeeper across services including Search, Android, Chrome, YouTube, Maps, Shopping and online ads.

Bottom line. The EU is moving from theory to execution on the DMA — and Google’s handling of AI features and search data is becoming an early test of how aggressively regulators will shape competition in the next phase of the digital economy.

ChatGPT ads come with premium prices — and limited data

OpenAI is pitching premium-priced ads in ChatGPT — with far less data than advertisers are used to getting.

What’s happening. According to a report, OpenAI is pricing ChatGPT ads at roughly $60 per 1,000 impressions — about three times higher than typical Meta ads. Despite the cost, advertisers will receive only high-level reporting, such as total impressions or clicks, with no insight into downstream actions like purchases.

Why we care. ChatGPT is emerging as a brand-new, high-attention ad environment — but one that comes with trade-offs. The high CPMs and limited reporting mean early tests will be more about brand exposure and learning than performance efficiency.

For marketers willing to experiment, this offers a first-mover chance to understand how ads perform inside AI conversations before the format scales or measurement improves.

The tradeoff. OpenAI has left the door open to expanding measurement in the future, but it has publicly committed to never selling user data to advertisers and keeping conversations private. That stance limits the kind of targeting and attribution advertisers expect from platforms like Google or Meta.

Who will see ads. The first ads will roll out in the coming weeks to users on ChatGPT’s free and lower-cost Go tiers, excluding users under 18 and conversations involving sensitive topics such as mental health or politics.

Between the lines. OpenAI is positioning ChatGPT ads as a premium, trust-first product — betting that context, attention, and brand safety can justify higher prices even without granular performance data.

Bottom line. ChatGPT ads may appeal to brands willing to pay more for visibility in a new AI-driven environment, but the lack of measurement will make performance-focused advertisers think twice.

Dig deeper. OpenAI Seeks Premium Prices in Early Ads Push (Subscription needed)

Google research points to a post-query future for search intent

Google is working toward a future where it understands what you want before you ever type a search.

Now Google is pushing that thinking onto the device itself, using small AI models that perform nearly as well as much larger ones.

What’s happening. In a research paper presented at EMNLP 2025, Google researchers show that a simple shift makes this possible: break “intent understanding” into smaller steps. When they do, small multimodal LLMs (MLLMs) become powerful enough to match systems like Gemini 1.5 Pro — while running faster, costing less, and keeping data on the device.

The future is intent extraction. Large AI models can already infer intent from user behavior, but they usually run in the cloud. That creates three problems. They’re slower. They’re more expensive. And they raise privacy concerns, because user actions can be sensitive.

Google’s solution is to split the task into two simple steps that small, on-device models can handle well.

  • Step one: Each screen interaction is summarized separately. The system records what was on the screen, what the user did, and a tentative guess about why they did it.
  • Step two: Another small model reviews only the factual parts of those summaries. It ignores the guesses and produces one short statement that explains the user’s overall goal for the session.
    • By keeping each step focused, the system avoids a common failure mode of small models: breaking down when asked to reason over long, messy histories all at once.
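The two-step decomposition described above can be sketched in a few lines of Python. This is an illustrative stub, not Google's implementation: the function names are invented, and a real system would call a small on-device MLLM at each step rather than format strings.

```python
# Hypothetical sketch of the two-step intent pipeline. The "model calls"
# are stubs that only mimic the structure of each step's output.

def summarize_interaction(screen, action):
    """Step one: describe a single screen interaction, with a tentative guess."""
    return {
        "facts": f"Screen showed {screen}; user did {action}.",
        "guess": f"Maybe the user wanted something related to {screen}.",
    }

def extract_session_intent(summaries):
    """Step two: reason only over the factual parts, ignoring the guesses."""
    facts = [s["facts"] for s in summaries]  # speculative guesses are dropped here
    # A second small model would turn these facts into one short intent statement.
    return "Session goal inferred from: " + " ".join(facts)

interactions = [("flight search results", "tapped a Lisbon fare"),
                ("hotel app", "filtered by late June")]
summaries = [summarize_interaction(s, a) for s, a in interactions]
print(extract_session_intent(summaries))
```

The point of the structure, per the paper, is that no single model call ever has to reason over the full messy history at once.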

How the researchers measure success. Instead of asking whether an intent summary “looks similar” to the right answer, they use a method called Bi-Fact. Using its main quality metric, an F1 score, small models with the step-by-step approach consistently outperform other small-model methods:

  • Gemini 1.5 Flash, an 8B model, matches the performance of Gemini 1.5 Pro on mobile behavior data.
  • Hallucinations drop because speculative guesses are stripped out before the final intent is written.
  • Even with extra steps, the system runs faster and cheaper than cloud-based large models.

How Bi-Fact works. The method breaks each intent statement into small pieces of information, or facts, then measures which facts are missing and which were invented. This:

  • Shows how intent understanding fails, not just that it fails.
  • Reveals where systems tend to hallucinate meaning versus where they drop important details.
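A simplified, hypothetical version of this fact-level scoring can be written as a precision/recall/F1 calculation over sets of atomic facts. The paper's actual Bi-Fact metric is more involved, and the fact strings below are invented examples:

```python
# Illustrative fact-level scoring: missing facts lower recall,
# invented (hallucinated) facts lower precision.

def fact_f1(predicted_facts, gold_facts):
    pred, gold = set(predicted_facts), set(gold_facts)
    tp = len(pred & gold)                          # facts that match
    precision = tp / len(pred) if pred else 0.0    # penalizes inventions
    recall = tp / len(gold) if gold else 0.0       # penalizes omissions
    if precision + recall == 0:
        return 0.0, precision, recall
    f1 = 2 * precision * recall / (precision + recall)
    return f1, precision, recall

gold = {"user is booking travel", "destination is Lisbon", "dates are late June"}
pred = {"user is booking travel", "destination is Lisbon", "user owns a cat"}
f1, p, r = fact_f1(pred, gold)  # one hallucinated fact, one missed fact
print(round(f1, 2), round(p, 2), round(r, 2))
```

Scoring this way is what lets the researchers say *where* a system failed: a low precision points at hallucinated meaning, a low recall at dropped details.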

The paper also shows that messy training data hurts large, end-to-end models more than it hurts this step-by-step approach. When labels are noisy — which is common with real user behavior — the decomposed system holds up better.

Why we care. If Google wants agents that suggest actions or answers before people search, it needs to understand intent from user behavior (how people move through apps, browsers, and screens). This research moves this idea closer to reality. Keywords will still matter, but the query will be just one signal. In this future, you’ll have to optimize for clear, logical user journeys — not just the words typed at the end.

The Google Research blog post. Small models, big results: Achieving superior intent extraction through decomposition

From searching to delegating: Adapting to AI-first search behavior

AI Overviews, which place generated answers directly at the top of search results, are improving the search experience for users. 

For businesses that rely on content to drive traffic from search engines, the impact is far less positive.

Google has been moving toward more “helpful” results for years, and zero-click searches are nothing new. 

AI Overviews accelerate that shift, absorbing much of the traffic opportunity that search has historically provided.

How AI changes the work of search

For years, search followed a familiar pattern:

  • A user entered a short query, such as “team building companies.”
  • Google returned a page of paid and organic results.
  • The user did the work of reviewing and refining.

Most of the effort happened at the end of the process. 

Google organized results based on intent and behavioral signals, but users still had to click through listings, conduct follow-up searches, and piece together an answer.

AI reverses that flow:

  • The user asks a more detailed question.
  • AI runs multiple searches and processes the results.
  • AI delivers a summarized response.

Traditional search allows for refinement, but each new query effectively resets the experience. 

AI, by contrast, is conversational. Each interaction builds on the last, narrowing in on what the user actually wants.

The result is a faster, cleaner path to an answer – with far less effort required from the user.

The path of least resistance

This shift matters because it aligns with a basic human tendency.

People generally choose the easiest available option. If something is easier and produces a better result, adoption follows quickly.

This is how search replaced older marketing channels such as the Yellow Pages.

Seeking the path of least resistance is an evolutionary trait that likely served humans well in earlier eras. 

Today, however, it often shapes behavior in less intentional ways, including how people interact with ads and information.

AI is not perfect, but it is typically faster, easier, and more effective than digging through traditional search results. 

That advantage makes widespread adoption inevitable, especially as AI continues to be integrated into the websites, apps, and devices people already use.

What does this mean for search marketing?

Recent studies have shown that more users are beginning their research with AI tools rather than search engines. 

These studies always have their critics, but the broader point is hard to dispute: AI is everywhere.

AI is now so integrated into the tools people already use that it is becoming the default. 

Search engines, messaging platforms like WhatsApp, and mobile devices are all moving in this direction, and this is just the beginning. 

With Google having signed a multiyear deal with Apple, Google AI will power a significant share of mobile devices, accelerating the shift toward AI-first experiences.

It’s easy to envision an AI-first future, much like the shift from desktop to mobile and then mobile-first.


What this change actually looks like

Generative answers are shifting where users enter the funnel, with engagement increasingly starting mid-funnel around content that demonstrates experience and expertise.

This is the type of content users historically would only engage with on a company’s website, or through other owned channels such as YouTube.

This does not mean top-of-the-funnel content is no longer important. Blogs, guides, and videos still matter, videos in particular. However, it may be worth reconsidering how that content is distributed rather than relying solely on traditional organic search.

With the rise of AI tools such as Gemini and ChatGPT, users can now handle much of this comparison work through AI, saving significant time.

For example, the shift looks like this:

  • From: “Mid-market ERP platforms,” where the user must sift through results, compare options, build spreadsheets, and conduct extensive manual review.
  • To: “Which mid-market ERP platforms work best for manufacturing firms, integrate with our existing stack of X, Y, and Z, and won’t collapse during implementation?”

This changes where the user must exert effort.

A more detailed question or input produces a far stronger response or output.

You could argue that traditional search had degraded into a form of garbage in, garbage out (GIGO), where short, generic queries produced ad-heavy, blended results that were time-consuming to mine for real answers.

The result is user fatigue. Endless clicking, avoiding ads, and sorting through widely varying content has become a chore.

And the experience often does not improve once users reach the destination. Traffic-starved, ad-heavy websites can be just as difficult to navigate and extract useful information from.

AI offers a cleaner, faster, and less cluttered experience, delivering summarized pros, cons, and supporting evidence at each stage of the decision-making process.

All of this can happen inside an AI tool, without the user ever needing to visit the site where the content originated.

AI is increasingly becoming the default interface for information. These are still early days, and the experience will continue to improve, becoming faster, smoother, and more effective over time.

The crux of the SEO vs. GEO/AEO/AIO conversation is often that, despite a changing landscape, SEO and GEO are largely the same.

This is broadly true and, if anything, feels similar to the early days of SEO, when long-tail opportunities were real. 

You can now go much deeper with mid-funnel content because it no longer requires humans to read it all. 

Instead, AI can consume it and summarize the relevant parts.

The tactics are largely the same. Much of AI still sits on top of traditional search, but SEO strategies and execution may need adjustment to ensure all bases are covered.

It’s also important not to throw the baby out with the bathwater. 

SEO, PPC, and related channels all retain value in the age of AI.

Dig deeper: SEO, GEO, or ASO? What to call the new era of brand visibility in AI [Research]

How to adapt in an AI-first search environment

The game has changed. Planning for 2026 and beyond requires accepting that change and making practical adjustments to thrive in the age of AI search.

Website

In traditional SEO and PPC models, users often land on the most relevant page for their query. 

That may be upper-funnel marketing content that leads deeper into the journey or directly to product or service pages.

This still happens, but there is now a noticeable increase in homepage visits driven by brand searches after AI-based research.

As a result, website navigation and messaging must be exceptionally clear. 

You need to understand user needs and make the path to relevant content as simple as possible.

The ALCHEMY website planning framework can help restructure sites around the expectations of an AI-savvy user.

Content 

In the age of AI, the devil is in the details.

If you want AI to recommend your brand or include it in increasingly nuanced research, your most important content must be visible and accessible so it can be retrieved and used to generate AI answers through retrieval-augmented generation, or RAG.

Frameworks such as “They Ask, You Answer” (TAYA) by Marcus Sheridan are particularly effective here. 

The premise is simple: If customers ask the question, you should answer it.

The framework focuses on five core areas, identified through extensive research, that address customer needs, drive engagement, and provide AI with the detailed information it needs to map to real user questions.

This approach works because it makes sense. It benefits users, improves visibility, drives leads, and supports sales. It is not an abstract AI strategy. It is good marketing.

These are the five key areas that TAYA focuses on:

  • Pricing and cost: If users search for pricing and cannot find it, they do not assume they should call for details. They often assume the product is too expensive or that information is being withheld, and they move on, or ask AI for a competitor’s pricing. Even when pricing is custom, you should explain the factors that influence cost.
  • Problems: Address the obvious issues. This includes problems with your product, your industry, and the drawbacks of specific solutions. Being transparent about limitations builds trust more effectively than excessive positivity.
  • Versus and comparisons: Buyers are choosing between alternatives. If you do not create comparison content, someone else will. Be objective. If a competitor is better for a specific use case, say so and focus on your ideal customer profile.
  • Reviews and ratings: People look for the best options and trust peer opinions more than brand claims. Create honest reviews of products and services in your space, including competitors. This process is informative for both users and brands.
  • Best in class: Users frequently search for “best” solutions. Lists such as “Top AI marketing agencies in [city]” are effective, even when they include competitors. Including alternatives demonstrates that customer fit matters more than self-promotion.

From an AI and SEO perspective in 2026, these five topics represent some of the highest-value data points for RAG systems.

Tools such as the Value Proposition Canvas and SCAMPER can support ideation and content variation, helping AI better understand your offerings.

Checklist: RAG-friendly formatting tips

Do not break content into meaningless fragments. Instead, use formatting that helps RAG systems navigate comprehensive resources:

  • Use question-based headers: Mirror real user questions in H2s and H3s, such as “How much does X cost?”
  • Lead with the answer: Apply the inverted pyramid. Start with the direct response, then add context.
  • Use bulleted lists for attributes: Bullets help RAG systems extract structured information.
  • Define key terms: Provide clear, one-sentence definitions for industry jargon.
  • Link to evidence: Cite sources for statistics and results to support credibility.

Treat blog posts as a knowledge base for AI. The clearer and more specific the information, the more retrievable your brand becomes.

Write for humans, not for bots

It bears repeating: Content should not be simplified solely for AI. 

Google Search Liaison Danny Sullivan has clarified that Google does not want content rewritten into bite-sized chunks for AI consumption.

Modern search systems and RAG pipelines can extract relevant information from well-structured, long-form content. 

There is no need to dilute expertise or create multiple versions of the same page.

A familiar example is being deep-linked to a specific section of a page from search results. This is established behavior, not new technology.

Some formats, such as FAQs, naturally benefit from concise structure. Use judgment based on the question being answered.

SEO v2026.0

These are positive changes. SEO is becoming more closely aligned with marketing and less of a fringe discipline.

The environment is shifting, and new tools are changing how people find information and make decisions. Yet many fundamentals remain.

SEO tactics still apply, but AI now acts as a superconsumer and summarizer of the information that influences choice.

The task is to identify, create, and structure that information so that when users ask a question, you have already answered it and are part of the conversation.

Search Central Live is coming back to South America

We’re excited to announce that Search Central Live is returning to São Paulo and to Buenos Aires in 2026.
Following many years of successful events in the region, we’re continuing our mission to help local businesses
enhance their site’s performance in Google Search.

How to Automate Marketing With 8 Simple Workflows

Everybody wants smoother workflows and fewer manual tasks. And thanks to AI models, automation is at the center of conversations in marketing departments across all industries.

But most marketers rarely get the results they’re looking for.

According to Ascend2’s State of Marketing Automation Report, only 28% of marketers say their automation “very successfully” supports their objectives.

Meanwhile, 69% felt it was only somewhat successful.

While this specific stat is from 2024, the broad idea likely still holds, especially since there are now so many more automation options and tools. Deciding on a go-forward plan and implementing it effectively can get overwhelming.

So if you feel stuck in the camp of “not bad, but not great” marketing automation, you’re not alone.

The good news?

Once you understand the core building blocks, you can turn messy, half-automated systems into workflows that actually move the needle.

A good marketing automation usually involves four basic steps:

  • A trigger: A catalyst event that starts the automation
  • An action: One or more steps that happen in sequence after the trigger
  • An output: The end result
  • A loop or exit point: A new trigger, or an event that stops the automation
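As a rough illustration, those four building blocks can be modeled as a simple loop in Python. This is a generic sketch, not tied to any particular automation tool:

```python
# Trigger -> actions in sequence -> output, with an exit event that stops the run.

def run_automation(events, actions, is_exit_event):
    outputs = []
    for event in events:            # each incoming event is a potential trigger
        if is_exit_event(event):    # the loop/exit point
            break
        result = event
        for action in actions:      # actions run in sequence on the trigger data
            result = action(result)
        outputs.append(result)      # the end result of each run
    return outputs

events = ["new signup", "new signup", "unsubscribe", "new signup"]
actions = [str.title, lambda e: f"Processed: {e}"]
print(run_automation(events, actions, lambda e: e == "unsubscribe"))
# Processing stops at "unsubscribe", so only the first two events are handled.
```

Every workflow in this article, whether built in Make, Asana, or an email platform, is some variation of this loop.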

In this article, we’re going to discuss how to use these steps to automate:

  • The mechanics of content creation (and no, we won’t just be telling you to “write it with AI”)
  • Beyond the basics of email nurtures
  • Your PR strategy
  • Social media engagement

Automate the Mechanics of Content Creation

Content marketers are creative people. We don’t want to automate away the creative work that drives results.

That said, we can automate marketing workflows that come before and after creating. (So we can spend more time on high-impact work.)

Here are some simple ways to get started.

1. Basic Brief Builder

Tools required:

  • Make (free for 1,000 credits per month, paid plans start at $9/month)
  • Your favorite keyword research tool (plans vary)
  • Project management platform (tools like Asana offer a free plan)
  • Google Sheets, Google Docs (free plan available)

Every week, content marketers around the world spend hours researching keywords, pulling search data, creating new briefs, and adding tasks to their project management systems.

What if you could do most of that with one automation?

Here are the basics of how this works:

  • Trigger: A new row is added to a Google Sheet (your new keyword)
  • Action: That keyword is run through your SEO tool, which pulls keyword difficulty, search volume, related terms, and top organic results
  • Output: A new Google Doc with the data inside, and a new task in your project management tool
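If it helps to see that flow as code, here is a hypothetical Python sketch of the same trigger-action-output sequence. None of these helper functions are real APIs; they are stubs standing in for Make's Google Sheets, Semrush, Google Docs, and Asana modules:

```python
# Stub helpers that mimic the Make modules in this scenario.

def fetch_keyword_data(keyword):            # stands in for the Semrush modules
    return {"volume": 1200, "difficulty": 45,
            "related": ["keyword research tools"],
            "top_results": ["https://example.com/guide"]}

def create_brief_doc(keyword, data):        # stands in for Google Docs
    return f"https://docs.example.com/brief-{keyword.replace(' ', '-')}"

def create_task(keyword, brief_url):        # stands in for Asana
    return f"https://tasks.example.com/{keyword.replace(' ', '-')}"

def on_new_row(keyword):
    """Trigger: a new keyword row appears in the sheet."""
    data = fetch_keyword_data(keyword)            # action 1: pull SEO data
    brief_url = create_brief_doc(keyword, data)   # action 2: build the brief
    task_url = create_task(keyword, brief_url)    # action 3: add the task
    # Output: the sheet row is updated with links and a "Done" status.
    return {"keyword": keyword, "brief": brief_url,
            "task": task_url, "status": "Done"}

print(on_new_row("content brief template"))
```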

In the end, the automation will look like this:

And if this seems scary, don’t worry: I’m going to walk you through each step to create this with Make. (Or, you can go ahead and copy this Scenario into your own Make account here.)

First, you’ll need a Google Sheet for your source.

Start with columns for your new keyword, status, brief URL, and task URL. To get started faster, copy this template here.

Next, add Google Sheets as the trigger step, and select “Watch New Rows.”

After that, select the Google Sheet you want to watch.

This runs the automation every time you add a new keyword to that sheet.

Now, it’s time to gather information from your SEO tool. For this example, we’re going to use Semrush. (You could also use an API like DataForSEO.)

Our first Semrush module will be “Get Keyword Overview.” (You might see different options depending on the specific tool you use.)

You can choose whether to see the keyword data in all regional databases, or just one region.

In this task, you’ll map the “Phrase” to the “Keyword” column from your Google Sheet. Then, choose what you want to get as an output. (In this case, I only want to see the search volume.)

Now, let’s create another Semrush module, “Get Related Keywords,” to pull related terms from Semrush.

Again, you’ll map the “Phrase” to the keyword column from our Google Sheet, and choose what data you want to export. (I chose the keyword and search volume.)

Maping "Phrase" to the keyword column

You can also decide:

  • How the results are sorted
  • Whether to add filters
  • How many results to retrieve

Now, you’ll need to add a text aggregator into your workflow. This tool compiles the results from Semrush so we can use them in a Google Doc later on.

Here, simply map the source (our Semrush module).

Then, in the “Text” field, map the data as you want it to appear.

Next, we’ll create a Semrush module that runs “Get Keyword Difficulty.”

Again, we’ll map the “Phrase” to our keyword from the Google Sheet, and choose to export the “Keyword Difficulty Index.”

Next, run the “Get Organic Results” module from Semrush to export the sites that are ranking for your new target keyword.

Select the “Export Columns,” or the data that you want to see, and limit the number of results you get (we chose 10).

Since we’re getting multiple results, this module will also need a text aggregator to transform those results into plain text for our Google Doc.

We’ll set it up exactly the same way, but this time map the “Get Organic Results” module.

In the “Text” field, I’ve added “Bundle order position” (where that result is ranking in the SERP), and the URL of the ranking page.

Now, for the fun part.

It’s time to build your basic content brief in a Google Doc.

Before you add this into Make, you’ll need to create a Google Doc as a template. This template should have variables that can be mapped to the results you get in your automation.

To show up as variables, you’ll need to wrap them in curly brackets. So, your template will look something like this:

  • Primary Keyword = {{keyword}}
  • Keyword Difficulty = {{difficulty}}
  • Related Keywords = {{related_keyword}}
  • Competing URLs = {{organic_result}}

(Want to save some time? Copy this template here.)
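Under the hood, this template step boils down to placeholder substitution. Here is a minimal Python sketch of what {{variable}} replacement looks like, assuming Make behaves roughly this way (this is not Make's actual engine):

```python
import re

def fill_template(template, values):
    # Replace every {{name}} placeholder with its mapped value;
    # unknown names are left intact rather than erased.
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: str(values.get(m.group(1), m.group(0))),
                  template)

template = ("Primary Keyword = {{keyword}}\n"
            "Keyword Difficulty = {{difficulty}}")
print(fill_template(template, {"keyword": "content brief", "difficulty": 45}))
```

The curly-bracket convention matters because it gives the automation an unambiguous marker to find and swap, without touching the rest of your brief's text.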

Now, you’ll create a new module in your Make scenario to “Create a Document from a Template.”

Once you connect the Google Doc template you created, you’ll see all of the variables you added in curly brackets as fields in the configuration page.

Now, all you have to do is map those variables to the results you’ve gotten from Semrush and your text aggregators.

Now it’s time to add this new brief into your project management tool. Make lets you connect several tools, including Asana, Trello, Monday, and Notion.

In this scenario, I already have an Asana project for content production.

So I choose the “Create a Task or a Subtask” module for Asana, and map that existing project.

I can also add project custom fields (like a link to the brief in Google Docs), choose the task name (like the keyword), and automatically assign it to someone on my team.

Lastly, I want to go back and update my original Google Sheet so that I can see which keywords have already been run, and where their briefs and tasks live.

So, I add Google Sheets again as the final step in the automation and connect the same spreadsheet that we had at the beginning. Under “Values,” I can map the brief URL from Google Docs and the new task URL from Asana to columns in my spreadsheet.

I also set this so the “Status” column is updated to “Done.”

Now, let’s run this scenario and see what happens.

First, I add a new keyword to my Google Sheet.

This triggers the automation to run.

The first thing that’s produced is a brand new Google Doc with all of the SEO data from Semrush. You’ll see this new doc appear in your Drive, and you’ll find the link in Asana.

Next, I’ll see a new task appear in my Asana project (with the brief link included).

And finally, the Google Sheet will be updated to show us that the task has been completed.

Plus, it adds in the links to the new brief in Google Docs and the new task in Asana.

And there you go: you now have a basic content brief builder automation.

Are these complete briefs? No. But the information provides a great start, gives the writer SERP context, and frees up more time to fill out other important content brief elements.

 

2. Content Workflow

Tools required: Your favorite project management tool (paid or free options available)

Project management tools are great for organizing your content workflow.

But the more tasks you create over time, the harder it is to keep track of and manage those systems.

Many project management platforms give you built-in automation tools to help things run more smoothly. Let’s talk about automations that can help your content workflow specifically.

Triggers might include:

  • A new task is added to a project
  • A custom field changes
  • A new assignee is added
  • A subtask is completed
  • Due date is changed (or coming up soon)
  • A task is overdue

And actions could be:

  • Add to a new project
  • Auto-assign to a team member
  • Update a status
  • Move task to a new section
  • Create a subtask
  • Add a comment

For this example, we’re going to use the Rules system in Asana, but the same basic principles apply to almost any major project management tool.

To start, click the “Customize” button in the upper-right corner of your content management project, and create some custom fields.

Especially important here is the “Status” field. The options here should follow the steps in your content process, and will probably mirror the sections in your Project.

Once your “Sections” and “Fields” are set up, you can create some rules.

These can help dictate what happens when a new brief enters your content workflow and assign it to whoever is in charge of moving it forward in the process.

Use a Rule to auto-assign someone on your team (for example, your content manager or editor) to the task.

Now, let’s say a new article is in progress with a writer.

Create a rule that moves the task to the corresponding section of your project when the status is set to “Writing.”

If your content tasks have subtasks (like “create outline,” “write article,” “edit,” or “design”), you can track completion and use that to move pieces forward.

In this case, you can set a rule that once all subtasks are complete, the task moves to the “Ready to Publish” section.

Once the task moves to that section, set a rule to auto-assign it to the team member who publishes posts.

Then, when the status is set to “Published,” the task could be moved into a separate project where completed tasks of published content are stored.

This allows you to clear the tasks from your main production workflow, but still keep them on hand in case the piece needs to be updated in the future.

What if a piece of content isn’t completed by its deadline?

Set up an automation that checks in with the team to see what the status is.

Automation that checks

There are plenty of other automations you can run in Asana or other tools.

But these basic workflow automations will help your content production process have better handoffs and less friction.
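All of these rules share the same shape: a trigger condition plus an action. As a rough illustration of that pattern (Asana’s Rules are configured in the UI, not in code, so everything here is just a model):

```python
def apply_rules(task, rules):
    """Evaluate simple 'when X, then Y' rules against a task dict.

    A toy model of the rules described above; treat the field names
    as illustrative, not Asana's actual data model.
    """
    for condition, action in rules:
        if condition(task):
            action(task)
    return task

rules = [
    # When the status is set to "Writing", move the task to that section
    (lambda t: t["status"] == "Writing",
     lambda t: t.update(section="Writing")),
    # When all subtasks are complete, move it to "Ready to Publish"
    (lambda t: bool(t["subtasks"]) and all(t["subtasks"].values()),
     lambda t: t.update(section="Ready to Publish")),
]

task = {"status": "Writing", "section": "Briefs",
        "subtasks": {"create outline": True, "write article": True}}
task = apply_rules(task, rules)   # lands in "Ready to Publish"
```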

We do this at Backlinko using Monday.com as our project management tool.

Semrush – Monday board

Read more about how we scale content creation here.

Go Beyond Basic Email Nurtures

Email nurtures are relatively easy to put together in any basic email tool: for example, sending a welcome email to a new newsletter subscriber, or a transactional email to a new customer.

But let’s talk about some ways to take those automations even further.

Email marketing automation involves:

  • A trigger: Such as someone signing up for an email list
  • An action: The new contact is added to a list or segment
  • An output: They receive a series of pre-made emails
  • An exit condition: The sequence finishes once all the emails are sent, or once the contact takes a specific action, like buying a product

Essential Elements of an Email Nurture

Exit conditions are especially important, because you don’t want people to receive another email from you after they’ve already completed an action. (Hello, promo email that arrives after I already made a purchase.)
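Those four elements, with the exit condition short-circuiting the sequence, can be sketched as a tiny state machine. This is an illustrative model, not any particular email platform’s API:

```python
from dataclasses import dataclass, field

@dataclass
class Nurture:
    """Minimal model of an email nurture: trigger -> action -> output -> exit."""
    emails: list                      # pre-made emails, in send order
    sent: list = field(default_factory=list)
    exited: bool = False

    def on_trigger(self, contact):
        # Trigger + action: a signup adds the contact to a list/segment
        contact["segment"] = "newsletter"

    def send_next(self, contact_action=None):
        # Exit condition: a completing action, or no emails left to send
        if self.exited:
            return None
        if contact_action == "purchase" or not self.emails:
            self.exited = True
            return None
        email = self.emails.pop(0)    # output: the next pre-made email
        self.sent.append(email)
        return email

nurture = Nurture(emails=["welcome", "tips", "offer"])
contact = {"email": "new@example.com"}
nurture.on_trigger(contact)
first = nurture.send_next()             # "welcome" goes out
second = nurture.send_next("purchase")  # exit hit: no promo after the purchase
```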

Let’s walk through how to use marketing automation tools for email.

3. Behavior-Based Nurtures and Follow-Ups

Tools required: ActiveCampaign (paid plans start at $15/month, although other email platforms offer automation capabilities too)

When you trigger an email sequence based on real behavior, you’re catching people in the moment when they’re more likely to engage.

For example, if you want to help a new user get to know your platform, you can trigger onboarding emails based on the actions they’ve taken so far.

Or, if you want to reduce cart abandonment, you can send a special promotion for customers who have items in their cart.

This improved targeting can lead to better engagement from your email list.

All you have to do is match the right trigger to the right action. For example:

| Trigger | Action |
| --- | --- |
| Someone downloads a resource | They receive a series of emails on that topic |
| A customer purchased a product a few months ago | They get a reminder to replenish their stock |
| A contact browses a product category, but doesn’t make a purchase | They get an email reminding them of what they looked at |
| A new user subscribes to your platform | They get a series of emails walking them through specific actions |

Your exit condition could be when the person:

  • Completes their purchase
  • Books a call
  • Starts a free trial
  • Replies to your email

For example, let’s say you want to send a series of emails reminding someone that their subscription is reaching its end date. It could look something like this:

  • Trigger: End date is within 20 days from now
  • Action: Send series of three emails up to the last day of their subscription (we don’t want to send too many)
  • Exit condition: Customer responds to the email, or renews their subscription

Here’s a great example for home insurance renewal:

Home Insurance Renewal
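The renewal sequence above is mostly date arithmetic: three sends spread between the trigger (20 days out) and the end date. A minimal sketch using those numbers:

```python
from datetime import date, timedelta

def renewal_send_dates(end_date, window_days=20, emails=3):
    """Spread `emails` sends evenly from the trigger day to the end date."""
    trigger = end_date - timedelta(days=window_days)
    step = window_days // (emails - 1)   # days between sends
    return [trigger + timedelta(days=i * step) for i in range(emails)]

dates = renewal_send_dates(date(2025, 6, 30))
# Three sends: 20 days out, 10 days out, and on the last day
```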

Or, let’s say a new lead just signed up for a free trial or freemium account.

You could create a workflow that pulls information from the onboarding survey in your tool, and builds a personalized, 1:1 email sequence.

Check out this example from HubSpot:

Example from HubSpot

When I signed up for the account, I identified myself as a self-employed marketer. HubSpot pulled that information into this new trial campaign to make the email even more personalized.

So the question is: how do you get started?

Here’s a quick overview of how you could build a behavior-based email nurture automation in ActiveCampaign.

Let’s say you want to send an email sequence to a known contact who visited a certain page on your website. For example, imagine someone who subscribes to your email newsletter, but isn’t a customer, just visited your pricing page. (In other words, they may be close to signing up — they just aren’t quite convinced yet.)

Before you start this automation, you’ll need to enable Site Tracking on your account in ActiveCampaign. To do this, install the tracking code on your website so ActiveCampaign can see page views.

Enable Site Tracking in ActiveCampaign

To start the automation, you’ll add new contacts who enter through any pipeline.

Now, when a known contact (someone who’s already in your database) visits a tracked page, ActiveCampaign associates that page view with the contact’s record, and can start an automation.

Contact enters a pipeline

The real trigger is the next step: “Wait until conditions are met.”

Wait until conditions are met

In this case, the condition is that the contact has visited an exact URL on your website.

Pro tip: You can also adjust this so the email series only runs when the person visits a page multiple times, showing a higher level of interest.


Next, set a waiting period from the time the person sees the page to when the email is sent.

And finally, write your email and add it to the workflow.

Add your email to the workflow

After that, you could:

  • Wait a certain amount of time, then send another email
  • Set an exit condition if the contact replies or makes a purchase
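Pulled together, the whole flow is one decision per contact. Here’s a hedged sketch of that logic; the field names and the visit threshold are assumptions for illustration, not ActiveCampaign’s actual schema:

```python
def next_step(contact, tracked_page="/pricing", min_visits=1):
    """Decide the automation's next step for a known contact.

    Mirrors the flow above: enter via any pipeline, wait until the
    contact has visited the tracked page (raise `min_visits` per the
    pro tip), then queue the email after a waiting period.
    """
    if contact.get("replied") or contact.get("purchased"):
        return "exit"                    # exit condition
    visits = contact.get("page_views", {}).get(tracked_page, 0)
    if visits < min_visits:
        return "wait_for_condition"      # "Wait until conditions are met"
    return "wait_then_send_email"        # waiting period, then the email

lead = {"email": "subscriber@example.com", "page_views": {"/pricing": 2}}
step = next_step(lead)
```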

All of this effort turns into an email like this one that I received from Brooks after visiting one of their product pages:

Brooks email after visiting their product page

This makes me far more likely to revisit the shoes I was looking at than a generic reminder email (or no email at all) would.

4. Webinar Lifecycle Automation

Tools required:

  • Demio (plans start at $45/month)
  • HubSpot (limited free plan available)

Webinars are an entire customer journey, including promotion, confirmation, reminders, and post-event follow-ups.

The trigger is normally a single event: someone signs up for your webinar.

The actions include:

  • Confirmation email
  • Day-before and day-of reminders
  • “Happening now” email
  • Post-event replay email

For example, here’s a great reminder email from Kiwi Wealth:

Great email reminder

Immediately after the webinar is finished, you might send an email like this one from Beefree:

Email from Beefree

And you’ll also want to follow up later with a replay and some action items for people who attended, like this:

Action items for people who attended

Note: We got these examples from Really Good Emails, which is a great resource for getting inspiration for your own campaigns.


So, how do you create this automation?

Most good webinar tools support this. Demio, for example, lets you automate marketing emails when you create a new event:

Demio – Automate marketing emails

If you want to get really fancy, you can segment your post-webinar follow-up emails by whether or not the contact attended the webinar:

Segment your follow-up emails

Demio’s built-in email functionality is somewhat limited beyond the event itself.

So, you can connect it to HubSpot to add a new layer of segmentation to your lists.

Connecting Demio to HubSpot

Once this connection is live, Demio will import webinar attendance data into HubSpot.

Demio import data into HubSpot

For example, you can import data like:

  • Contacts who registered for the webinar
  • People who registered, but missed the event
  • People who attended the event
  • How long a contact stayed in the webinar
  • People who watched the replay

HubSpot – Field Mapping

You can even add new contacts to lists directly in HubSpot if they don’t exist there already.

Add new contacts to lists in HubSpot

This automation will help your pre- and post-webinar flows run more smoothly. And hopefully get you more valuable engagement with those webinars.

Grow Your PR Strategy

For small marketing teams, PR outreach can use up a lot of valuable time.

Here are some easy automations to keep inbound and outbound PR requests moving without spending your entire week on them.

Resource: Get your free PR Plan Template to help you pick the right goals, discover journalists, and make pitches that get press coverage.


5. PR Radar

Tools required:

  • BrandMentions (paid plans start at $79/month)
  • Zapier (free for 100 tasks/month, paid plans start at $19.99/month)
  • Google Sheets (free option available)

Want to keep an eye on new articles related to your brand, ones where you could potentially get featured or earn a backlink? Let’s build an automatic PR radar.

Note: Most monitoring tools send alerts, but those notifications disappear into your inbox. This workflow creates a shared, searchable log your whole team can access without extra logins—plus you’ll have a historical record for spotting PR trends over time.


This workflow looks like:

  • Trigger: A new article mentions your brand or related topics
  • Action: Pull all new mentions into one place to scan through them easily
  • Output: A simple, regularly-updated list of PR mentions

There are several tools that do this, but for this example, we’re going to use BrandMentions.

Once you set up your account and your project, head into settings to adjust which sources you’ll collect data from.

Remove social media, and just leave the web option. That way, you’ll get a clean list of articles and webpages that mention your brand or the keywords you added.

BrandMentions – Settings

Once this is set up, you can connect your BrandMentions project to Zapier.

This will trigger the automation to start when any new mentions are added.

You can choose whatever output works best for you: whether that’s a Slack message, a new row in Airtable, or an addition to an ongoing Google Sheet.

For this example, I chose Google Sheets as my output. All I had to do was tie the data pulled from BrandMentions to the right columns in my spreadsheet.

From BrandMentions to Google Sheets
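The column mapping in that Zap boils down to turning each mention payload into one spreadsheet row. A sketch of the idea; the payload keys are assumptions, not BrandMentions’ documented schema, so match them to whatever fields Zapier shows you:

```python
def mention_to_row(mention: dict) -> list:
    """Map one mention payload to the spreadsheet's columns.

    The keys here are illustrative; adjust them to the fields your
    monitoring tool actually sends through Zapier.
    """
    return [
        mention.get("date", ""),
        mention.get("title", ""),
        mention.get("url", ""),
        mention.get("source", ""),
    ]

row = mention_to_row({
    "date": "2025-01-15",
    "title": "10 tools for small marketing teams",
    "url": "https://example.com/article",
    "source": "example.com",
})
# With gspread, for instance, you'd pass this to worksheet.append_row(row)
```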

Once that’s done, the automation adds new articles like this automatically into my spreadsheet:

Automation adds new articles into spreadsheet

Pro tip: Want to add a reminder? You can add another step that sends a daily Slack message summarizing all the newly added rows.


6. Media Request Matchmaker

Tools required:

  • RSS.app (free plan available)
  • Zapier (free for 100 tasks/month, paid plans start at $19.99/month)
  • Airtable (free plan available)

PR would be nothing without the relationships we build with journalists and writers.

But it’s hard to know who’s writing about a topic that’s related to your brand. Or where your company’s internal subject matter experts can add their thoughts to promote your brand.

So, let’s build an automation to match new requests to your internal experts.

This involves:

  • Trigger: A new media request that matches relevant topics
  • Action: Classify new requests and match them to the internal expert with the most relevant expertise
  • Output: New requests are automatically routed to the right person

One of the most frequently updated places to find PR requests is on X/Twitter.

Search the hashtag #journorequest, and you’ll see hundreds of writers asking for expert contributions.

X – Hashtag – #journorequest

To prepare this for your automation, start by setting up an RSS feed with the hashtag #journorequest or #prrequest along with a relevant keyword.

You can do this for free with RSS.app.

Setting up an RSS Feed

Then, you’ll get results like this:

RSS app – Results

For the simplest version of this, you can connect RSS.app directly to Slack and send a new message every time a new request is added to the feed.

But let’s be real: that could get overwhelming pretty quickly.

So, we’ll use Zapier for a more in-depth automation.

Start by adding “RSS by Zapier” as the trigger, and paste your RSS feed link into the configuration.

RSS by Zapier

Pro tip: If you want to track journo requests for multiple topics, change the trigger event to “New Items in Multiple Feeds.” Then, simply paste in all of the RSS feed links. That way, they’ll all run through the same automation.


Next, use “Formatter by Zapier” to extract the necessary information from the tweets.

First, in Formatter, choose the Action event “Text.”

Then, in the Configure menu, select “Extract Email Address,” and map the input to the description from your RSS feed.

Extract email address

Next, with another Formatter step, select “Text,” and “Extract Pattern.”

The input is still the same description (the original tweet).

In the Pattern box, in parentheses, add the keywords you want to track separated by a vertical bar, like this:

(cybersecurity|fintech|pets|saas)

Make sure that IGNORECASE is set to “Yes” so that the search isn’t case sensitive.

Ignorecase is set to yes
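Both Formatter steps are ordinary regular expressions under the hood. Here’s the same extraction in Python, reusing the keyword pattern above (the sample tweet and the email regex are illustrative):

```python
import re

tweet = ("#journorequest Looking for SaaS founders to comment on AI "
         "pricing trends. Email pitches to jane.doe@example.com")

# Step 1: "Extract Email Address"
email = re.search(r"[\w.+-]+@[\w-]+\.[\w.-]+", tweet).group()

# Step 2: "Extract Pattern" with IGNORECASE, same keyword list as above
match = re.search(r"(cybersecurity|fintech|pets|saas)", tweet, re.IGNORECASE)
topic = match.group(1) if match else None
```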

Now, it’s time to add that to a system you can use to keep track of new requests and route them to SMEs.

For this example, I’ve chosen to use Airtable. If you want to use this exact database, you can copy it here and we’ll use it as we move forward.

Airtable – PR Requests – SMEs

This database has tabs to keep track of your SMEs, the topics they can respond to, and the new requests that come in.

So, let’s connect that Airtable base to Zapier.

Our first step will be to find the right SME for the topic of our journo request.

To start, set the Action as “Find Record,” and link your Airtable base. We’ll pull from the SMEs table, and for “Search by Field” we’ll choose “Topics,” where we’ve previously added our SME’s favorite topics into the Airtable base.

Lastly for this step, map the “Search Value” to the previous step’s result (the topic from the PR query on X/Twitter).

Map the Search Value

Now, we’re going to create a new row in our “Requests” table in Airtable.

Add Airtable as the next step in this Zap, and select “Create Record” as the action. Link the same Airtable base, but this time select “Requests” as the Table.

Then, map the columns in that base to the information you’ve gathered. In this case, that would include:

  • Source = X/Twitter
  • Raw Text = The “Description” from RSS feed
  • Contact name = The “Raw Creator” from RSS feed
  • Contact Email = The output from our first Formatter step, which pulled the email from the original post
  • URL = Link from RSS feed
  • Topics = The output from our second Formatter step, which pulled the topic from the original post
  • SMEs = The “Fields Name” from our Airtable search step
  • Status = New

In the end, it should look like this:

Information fields

And a new record is added into Airtable, like this:

Record is added into Airtable
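Under the hood, the “Create Record” step assembles a payload like the one below for Airtable’s REST API. A hedged sketch: the column names must match your base exactly, and the sample values are made up:

```python
def build_request_record(rss_item, email, topic, sme_id):
    """Assemble the Airtable record from the earlier Zap steps."""
    return {
        "fields": {
            "Source": "X/Twitter",
            "Raw Text": rss_item["description"],
            "Contact name": rss_item["raw_creator"],
            "Contact Email": email,
            "URL": rss_item["link"],
            "Topics": topic,
            "SMEs": [sme_id],  # linked-record fields take a list of record IDs
            "Status": "New",
        }
    }

payload = build_request_record(
    {"description": "Looking for a fintech expert to comment on open banking",
     "raw_creator": "@reporter", "link": "https://x.com/status/1"},
    email="reporter@example.com", topic="fintech", sme_id="recABC123",
)
# POSTed as JSON to https://api.airtable.com/v0/<base_id>/Requests
```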

If you want to get fancy with this, you can go further:

  • Rank the publications requesting expertise by their credibility
  • Automate messages to your SMEs to let them know there’s a new request for them

Get the Most Out of Social Media

For busy marketers, social media can be an incredible time-suck.

Keeping track of trends. Trying to post consistently.

All without getting stuck in an infinite doomscroll.

But a few simple automations can help you get back some of the time you spend on manually managing your socials.

7. Video Clip Automator

Tools required:

  • Zoom (free plan available)
  • Dropbox (free plan available)
  • OpusClip (plans start at $15/month)
  • Zapier (free for 100 tasks/month, paid plans start at $19.99/month)

Short-form video has been steadily gaining ground in marketing.

In fact, 39% of marketers said that videos under 60 seconds are the most effective.

The problem: they take time to make.

If you’re already creating long-form video (or even just doing recorded interviews with in-house experts), we have a handy automation to help you create video clips faster.

Here’s how it works:

  • Trigger: New Zoom cloud recording is ready
  • Action: Auto-create clips, burn captions, and create a new task in Asana
  • Output: You get social-ready video clips, and a new task to publish them

First, adjust your Zoom settings so your recordings upload automatically into a folder in Dropbox.

Adjusting Zoom Settings

Next, head over to Zapier.

Your trigger step will be a new video uploaded to that folder in Dropbox.

Clip your video by OpusClip

Your next step will use OpusClip, an AI video editing tool. Select “Clip Your Video,” and map that new video file to the one uploaded in Dropbox.

Dropbox step – Video file

OpusClip will then take your long-form video from Dropbox and use AI to clip key pieces. It also crops the video for vertical sharing and embeds captions.

You can also add your own brand template so that videos are edited with your brand’s colors and font.

Now that you have new video clips to share, it’s time to add a task to review and publish them.

So the final step in your Zap is “Create Task” in Asana (or your preferred project management tool).

Create task in Asana

You’ll tie this to a project you’ve already created in Asana, and link the project ID from OpusClip.

In the end, you’ll have a few video clips prepared and ready — all you have to do is download, review, and publish them to your social channels.

8. Comment & Community Nudge

Tools required:

  • Social media monitoring tool (like BrandMentions, paid plans start at $79/month)
  • Automation tool (like Zapier, free for 100 tasks/month, paid plans start at $19.99/month)

Are people talking about your brand online?

To keep positive sentiment high, you need to engage in those conversations. But finding the right conversations, and knowing how to reply, can take a lot of time.

Using a tool like BrandMentions, you can create a similar automation to what we built for the PR Radar earlier:

  • Trigger: A new mention of your brand appears on Reddit, Facebook, or LinkedIn
  • Action: Those new mentions are added to a Google Sheet, and you get a daily Slack message summarizing new mentions

To build this, all you’d need to do is swap out the Sources in your BrandMentions settings. Instead of Web, you’d include all of the social media channels you want to track.

BrandMentions – Social Media

After that, you can build an automation with Zapier, the same way we did in the PR Strategy automation above.

If you want to get notifications for every new mention, you could connect the workflow to Slack. Then, a new message will be sent in the channel every time your brand is mentioned.

Notification for every new mention

This basic automation could work for smaller brands.

But when you start getting hundreds of mentions per day, this will quickly become chaotic.

Here’s how one company facing this issue automated the process in a deeper way:

Webflow was getting over 500 mentions per day. Their two-person team couldn’t keep up with monitoring and responding (alongside their regular workload).

So, they built an automation.

With Gumloop, they monitor, analyze, and flag only the posts that require a response.

They started with a Reddit scraper to pull relevant threads.

Starting with Reddit

Then, they added an AI analyzer to gauge sentiment, rank priority, and assign a category.

AI Categorizer node

After that, they added a step that would send all high-priority mentions to Slack for a team member to handle directly.

High priority post

The result?

After testing and scaling this process, they were able to build an automation that processes 500+ mentions per day and escalates only the 10-15 that need immediate attention.
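You can approximate the shape of that pipeline without Gumloop. In the sketch below, a crude keyword heuristic stands in for the AI analyzer; the real system scores sentiment, priority, and category with an LLM:

```python
def triage(mentions, escalate_threshold=0.8):
    """Score each mention and return only the ones worth escalating.

    The scoring is a deliberately simple keyword heuristic standing in
    for an AI analyzer; thresholds and terms are illustrative.
    """
    urgent_terms = ("broken", "refund", "outage", "cancel", "angry")
    escalated = []
    for m in mentions:
        text = m["text"].lower()
        score = sum(term in text for term in urgent_terms) / len(urgent_terms)
        if m.get("is_question"):
            score += 0.2                  # questions deserve a reply sooner
        if score >= escalate_threshold:
            escalated.append({**m, "priority": round(score, 2)})
    return escalated

inbox = [
    {"text": "Love this tool!", "is_question": False},
    {"text": "Site broken, outage again, want a refund, so angry",
     "is_question": True},
]
flagged = triage(inbox)    # only the second mention is escalated
```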

If you’ve ever thought, “How can I use AI to automate my marketing tasks?” this is a great example of an AI automation that works for you without taking over your job.

Is Automation the Right Move? Ask Yourself These Questions First

Automation is one of the hottest trends in marketing right now.

But it’s hard to know what’s going to save you time and money, and what’s just another fad.

If you’ve ever spent more time trying to automate a task than it would’ve taken you to do the task manually, you’ll know what I mean.

To weigh up whether an automation is worth building, ask yourself these questions:

  • How much time does it take me to do this task manually every week?
  • Is the automation available with a tool I currently use, or would I have to pay for a new tool?
  • Is there a documented automation/integration I can follow?
  • Would this task still require human intervention (even with automation)?
  • Does this fit easily into our current workflow or process?

If the task:

  • Doesn’t take much time to do manually
  • Would still require human intervention even when automated
  • Isn’t easy to build an automation for

…it may not be worth your time.

On the other hand, if the task:

  • Is repetitive
  • Uses up hours of your workweek
  • Can be automated in tools you already have in your stack

…it’s probably time to give automation a try.
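One way to make that checklist concrete is a quick break-even estimate: how many weeks until the build time pays for itself? The numbers below are purely illustrative:

```python
def weeks_to_break_even(build_hours, manual_hours_per_week,
                        residual_hours_per_week=0.0):
    """Weeks until time saved exceeds the time spent building.

    `residual_hours_per_week` covers work that still needs human
    intervention after automating. Returns None if there's no saving.
    """
    saved = manual_hours_per_week - residual_hours_per_week
    if saved <= 0:
        return None                 # automation never pays for itself
    return build_hours / saved

# e.g., 6 hours to build, saves 3 hours/week with 1 hour of review left
payback = weeks_to_break_even(6, 3, 1)   # 3.0 weeks
```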

Build Your Automation Foundations, Then Keep Growing

The hype cycle of automation and AI can be overwhelming.

But don’t feel like you’re behind just because you haven’t automated away your entire marketing team yet.

Instead, focus on the automations that save you time and are sustainable.

We’ve just discussed eight different automations. Why not choose one or two that are most relevant to your business and team?

Start with the foundational automations that help smooth out your existing processes.

Then, you’ll have a better basis for building more complex automations.

To automate even more areas of your marketing workflows, check out our curated list of our favorite AI marketing tools right now.

The post How to Automate Marketing With 8 Simple Workflows appeared first on Backlinko.

Read more at Read More

Google Shopping API cutoff looms, putting ad delivery at risk

Inside Google Ads’ AI-powered Shopping ecosystem: Performance Max, AI Max and more

Google Shopping API migration deadlines are approaching, and advertisers who don’t act risk disrupted Shopping and Performance Max campaigns.

What’s happening. Google is sunsetting older API versions and pushing all merchants toward the Merchant API as the single source of truth for Shopping Ads. Advertisers can confirm which API they’re using in Merchant Center Next by checking the “Source” column under Settings > Data sources, where any listing marked “Content API” requires action.

Why we care. Google is actively reminding advertisers to migrate to the new Merchant API, with beta users required to complete the switch by Feb. 28th, and Content API users by Aug. 18th. If feeds aren’t properly reconnected, campaigns that rely on product data — especially those using feed labels — may stop serving altogether.

The risk. Feed labels don’t automatically carry over during migration. If advertisers don’t update their campaign and feed configurations in Google Ads, Shopping and Performance Max setups that depend on those labels for structure or bidding logic can quietly break.

What to do now. Google recommends completing the migration well ahead of the deadline, reviewing feed labels, and validating campaign delivery after reconnecting feeds. The transition was first outlined in mid-2024, but enforcement is now imminent as Google moves closer to fully retiring legacy APIs.

Bottom line. This isn’t a cosmetic backend change — it’s a technical cutoff that can directly impact revenue if ignored.

First seen. This update was spotted by Google Shopping Specialist Emmanuel Flossie, who shared the warnings he received on LinkedIn.

Does llms.txt matter? We tracked 10 sites to find out

Does llms.txt matter

The debate around llms.txt has become one of the most polarized topics in web optimization.

Some treat llms.txt as foundational infrastructure, while many SEO veterans dismiss it as speculative theater. Platform tools flag missing llms.txt files as site issues, yet server logs show that AI crawlers rarely request them.

Google even adopted it. Sort of. In December, the company added llms.txt files across many developer and documentation sites.

The signal seemed clear: if the company behind the sitemap standard is implementing llms.txt, it likely matters.

Except Google pulled it from its Search developer docs within 24 hours.

Google’s John Mueller said the change came from a sitewide CMS update that many content teams didn’t realize was happening. When asked why the files still exist on other Google properties, Mueller said they aren’t “findable by default because they’re not at the top-level” and “it’s safe to assume they’re there for other purposes,” not discovery.

The llms.txt research

We wanted data, not debates.

So we tracked llms.txt adoption across 10 sites in finance, B2B SaaS, ecommerce, insurance, and pet care — 90 days before implementation and 90 days after.

We measured AI crawl frequency, traffic from ChatGPT, Claude, Perplexity, and Gemini, and what else these sites changed during the same window.

The results:

  • Two of the 10 sites saw AI traffic increases of 12.5% and 25%, but llms.txt wasn’t the cause.
  • Seven sites saw no measurable change.
  • One site declined by 19.7%.

The 2 ‘success’ stories weren’t about the file

The Neobank: 25% growth

This digital banking platform implemented llms.txt early in Q3 2025. Ninety days later, AI traffic was up 25%.

Here’s what else happened in that window:

  • A PR campaign around its banking license, with coverage in major national publications.
  • Product pages restructured with extractable comparison tables for interest rates, fees, and minimums.
  • Twelve new FAQ pages optimized for extraction.
  • A rebuilt resource center with new banking information and concepts.
  • Technical SEO issues, like header structures, fixed. 

When a company gets Bloomberg coverage the same month it launches optimized content and fixes crawl errors, you can’t isolate the llms.txt as the growth driver.

The B2B SaaS platform: 12.5% growth

This workflow automation company saw traffic jump 12.5% two weeks after implementing llms.txt.

Perfect timing. Case closed. Except…

Three weeks earlier, the company published 27 downloadable AI templates covering project management frameworks, financial models, and workflow planners. Functional tools, not content marketing, drove the engagement behind the spike.

Google organic traffic to the templates rose 18% during the same period and continued climbing throughout the 90 days we measured.

Search engines and AI models surfaced the templates because they solved real problems and launched an entirely new site section — not because they were listed in an llms.txt file.

The 8 sites where nothing happened after uploading llms.txt

Seven sites saw no measurable change, and one declined by 19.7%.

The decline came from an insurance site that implemented llms.txt in early September. The drop likely had nothing to do with the file.

The same pattern showed up across all traffic channels. Llms.txt neither prevented the decline nor created any advantage.

The other seven sites — ecommerce (pet supplies, home goods, fashion), B2B SaaS (HR tech, marketing analytics), finance, and pet care — all documented their best existing content in llms.txt. That included product pages, case studies, API docs, and buying guides.

Ninety days later, nothing changed. Traffic stayed flat. Crawl frequency was identical. The content was already indexed and discoverable, and the file didn’t alter that.

Sites that launched new, functional content saw gains. Sites that documented existing content saw no gains.

Why the disconnect?

No major LLM provider has officially committed to parsing llms.txt. Not OpenAI. Not Anthropic. Not Google. Not Meta.

Google’s Mueller put it plainly:

  • “None of the AI services have said they’re using llms.txt, and you can tell when you look at your server logs that they don’t even check for it.”

That’s the reality. The file exists. The advocacy exists. Platform adoption doesn’t, at least not yet.

The token efficiency argument (and its limits)

The strongest case for llms.txt is about efficiency. Markdown saves time and tokens when AI agents parse documentation. Clean structure instead of complex HTML with navigation, ads, and JavaScript.
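For context, an llms.txt file is itself just markdown: per the llmstxt.org proposal, an H1 with the site name, a blockquote summary, and H2 sections of annotated links. A minimal, made-up example:

```markdown
# ExampleCorp

> ExampleCorp provides workflow automation for small marketing teams.

## Docs

- [API reference](https://example.com/docs/api.md): REST endpoints and authentication
- [Quickstart](https://example.com/docs/quickstart.md): build a first automation in minutes

## Optional

- [Blog](https://example.com/blog): product announcements and changelogs
```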

Vercel says 10% of their signups come from ChatGPT. Its llms.txt includes contextual API descriptions that help agents decide what to fetch.

This matters — but almost exclusively for developer tools and API documentation. If your audience uses AI coding assistants like Cursor or GitHub Copilot to interact with your product, token efficiency improves integration.

For ecommerce selling pet supplies, insurance explaining coverage, or B2B SaaS targeting nontechnical buyers, token efficiency doesn’t translate into traffic.

llms.txt is a sitemap, not a strategy

The most accurate comparison is a sitemap.

Sitemaps are valuable infrastructure. They help search engines discover and index content more efficiently. But no one credits traffic growth to adding a sitemap. The sitemap documents what exists; the content drives discovery.

Llms.txt works the same way. It may help AI models parse your site more efficiently if they choose to use it, but it doesn’t make your content more useful, authoritative, or likely to answer user queries.

In our analysis, the sites that grew did so because they:

  • Created functional assets like downloadable templates, comparison tables, and structured data.
  • Earned external visibility through press and backlinks.
  • Fixed technical barriers such as crawl and indexing issues.
  • Published content optimized for extraction, including FAQs and structured comparisons.

Llms.txt documented those efforts. It didn’t drive them.

What actually works

The two successful sites show what matters:

  • Create functional, extractable assets. The SaaS platform built 27 downloadable templates that users could deploy immediately. AI models surfaced these because they solved real problems, not because they were listed in a markdown file.
  • Structure content for extraction. The neobank rebuilt product pages with comparison tables with interest rates, fees, and account minimums. This is data AI models can pull directly into answers without interpretation.
  • Fix technical barriers first. The neobank fixed crawl errors that had blocked content for months. If AI models can’t access your content, no amount of documentation helps.
  • Earn external validation. Coverage from Bloomberg and other major publications drove referral traffic, branded searches, and likely influenced how AI models assess authority.
  • Optimize for user intent. Both sites answered specific queries: “best project management templates” and “how do [brand] interest rates compare?” Models surface content that maps to what users are asking, not content that’s merely well documented.

None of this requires llms.txt. All of it drives results.

Should you implement an llms.txt file?

If you’re a developer tool where AI coding assistants are a primary distribution channel, then yes — token efficiency matters. Your audience is already using agents to interact with documentation.

For everyone else, treat llms.txt like a sitemap: useful infrastructure, not a growth lever.

It’s good practice to have. It won’t hurt. But the hour spent implementing llms.txt is often better spent restructuring product pages with extractable data, publishing functional assets, fixing technical SEO issues, creating FAQ content, or earning press coverage.

Those tactics have shown real ROI in AI discovery. Llms.txt hasn’t — at least not yet.

The lesson isn’t that llms.txt is bad. It’s that we’re reaching for control in a system where the rules aren’t written yet. Llms.txt offers that comfort: something concrete, actionable, and familiar, shaped like the web standards we already know.

But looking like infrastructure isn’t the same as functioning like infrastructure.

Focus on what actually works:

  • Create useful content.
  • Structure it for extraction.
  • Make it technically accessible.
  • Earn external validation.

Platforms and formats will change. The fundamentals won’t.

7 real-world AI failures that show why adoption keeps going wrong


AI has quickly risen to the top of the corporate agenda. Despite this, 95% of businesses struggle with adoption, MIT research found.

Those failures are no longer hypothetical. They are already playing out in real time, across industries, and often in public. 

For companies exploring AI adoption, these examples highlight what not to do and why AI initiatives fail when systems are deployed without sufficient oversight.

1. Chatbot participates in insider trading, then lies about it

In an experiment driven by the UK government’s Frontier AI Taskforce, ChatGPT placed illegal trades and then lied about it.

Researchers prompted the AI bot to act as a trader for a fake financial investment company. 

They told the bot that the company was struggling, and they needed results. 

They also fed the bot insider information about an upcoming merger, and the bot affirmed that it should not use this in its trades. 

The bot made the trade anyway, reasoning that “the risk associated with not acting seems to outweigh the insider trading risk,” then denied using the insider information.

Marius Hobbhahn, CEO of Apollo Research (the company that conducted the experiment), said that helpfulness “is much easier to train into the model than honesty,” because “honesty is a really complicated concept.”

He says that current models are not powerful enough to be deceptive in a “meaningful way” – a claim some researchers dispute.

However, he warns that it’s “not that big of a step from the current models to the ones that I am worried about, where suddenly a model being deceptive would mean something.”

AI has been operating in the financial sector for some time, and this experiment highlights the potential for not only legal risks but also risky autonomous actions on the part of AI.  

Dig deeper: AI-generated content: The dangers of overreliance

2. Chevy dealership chatbot sells SUV for $1 in ‘legally binding’ offer

An AI-powered chatbot for a local Chevrolet dealership in California sold a vehicle for $1 and said it was a legally binding agreement. 

In an experiment that went viral across web forums, several people prodded the local dealership’s chatbot with a variety of non-car-related prompts.

One user convinced the chatbot to sell him a vehicle for just $1, and the chatbot confirmed it was a “legally binding offer – no takesies backsies.”

Fullpath, the company that provides AI chatbots to car dealerships, took the system offline once it became aware of the issue.

The company’s CEO told Business Insider that despite viral screenshots, the chatbot resisted many attempts to provoke misbehavior.

Still, while the car dealership didn’t face any legal liability from the mishap, some argue that the chatbot agreement in this case may be legally enforceable. 

3. Supermarket’s AI meal planner suggests poison recipes and toxic cocktails

A New Zealand supermarket chain’s AI meal planner suggested unsafe recipes after certain users prompted the app to use non-edible ingredients. 

Recipes like bleach-infused rice surprise, poison bread sandwiches, and even a chlorine gas mocktail were created before the supermarket caught on.

A spokesperson for the supermarket said they were disappointed to see that “a small minority have tried to use the tool inappropriately and not for its intended purpose,” according to The Guardian.

The supermarket said it would continue to fine-tune the technology for safety and added a warning for users. 

That warning stated that recipes are not reviewed by humans and do not guarantee that “any recipe will be a complete or balanced meal, or suitable for consumption.”

Critics of AI technology argue that chatbots like ChatGPT are nothing more than improvisational partners, building on whatever you throw at them. 

Because of the way these chatbots are wired, they could pose a real safety risk for certain companies that adopt them.  

4. Air Canada held liable after chatbot gives false policy advice

An Air Canada customer was awarded damages in court after the airline’s AI chatbot assistant made false claims about its policies.

The customer inquired about the airline’s bereavement rates via its AI assistant after the death of a family member. 

The chatbot responded that the airline offered discounted bereavement rates for upcoming travel or travel that had already occurred, and linked to the company’s policy page.

Unfortunately, the actual policy was the opposite, and the airline did not offer reduced rates for bereavement travel that had already happened. 

In court, the airline argued that the chatbot had linked to the policy page containing the correct information.

However, the tribunal (a small claims-type court in Canada) did not side with the defendant. As reported by Forbes, the tribunal called the scenario “negligent misrepresentation.”

Christopher C. Rivers, Civil Resolution Tribunal Member, said this in the decision:

  • “Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives – including a chatbot. It does not explain why it believes that is the case. In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.”

This is just one of many examples where people have been dissatisfied with chatbots due to their technical limitations and propensity for misinformation – a trend that is sparking more and more litigation. 

Dig deeper: 5 SEO content pitfalls that could be hurting your traffic

5. Australia’s largest bank replaces call center with AI, then apologizes and rehires staff

The largest bank in Australia replaced its call center team with AI voicebots with the promise of boosted efficiency, but admitted it made a big mistake. 

The Commonwealth Bank of Australia (CBA) believed the AI voicebots could cut call volume by 2,000 calls per week. They didn’t.

Instead, left without its 45-person call center, the bank scrambled to keep up – offering overtime to remaining workers and pulling managers onto the phones.

Meanwhile, the Finance Sector Union, which represented the displaced workers, escalated the dispute to Australia’s workplace tribunal.

Just one month after replacing the workers, CBA issued an apology and offered to hire them back.

CBA said in a statement that they did not “adequately consider all relevant business considerations and this error meant the roles were not redundant.”

Other U.S. companies have faced PR nightmares as well when attempting to replace human roles with AI.

Perhaps that’s why certain brands have deliberately gone in the opposite direction, making sure people remain central to every AI deployment.

Nevertheless, the CBA debacle shows that replacing people with AI without fully weighing the risks can backfire quickly and publicly.

6. New York City’s chatbot advises employers to break labor and housing laws

New York City launched an AI chatbot to provide information on starting and running a business, and it advised people to carry out illegal activities.

Just months after its launch, people started noticing the inaccuracies provided by the Microsoft-powered chatbot.

The chatbot offered unlawful guidance across the board – telling bosses they could pocket employees’ tips and skip notifying staff about schedule changes, and suggesting landlords could discriminate against tenants and stores could refuse cash.

“NYC’s AI Chatbot Tells Businesses to Break the Law,” The Markup

This is despite the city’s initial announcement promising that the chatbot would provide trusted information on topics such as “compliance with codes and regulations, available business incentives, and best practices to avoid violations and fines.” 

Still, then-mayor Eric Adams defended the technology, saying: 

  • “Anyone that knows technology knows this is how it’s done,” and that “only those who are fearful sit down and say, ‘Oh, it is not working the way we want, now we have to run away from it all together.’ I don’t live that way.” 

Critics called his approach reckless and irresponsible. 

This is yet another cautionary tale about AI misinformation – and a reminder that organizations need to handle AI integration and transparency with far more care.

Dig deeper: SEO shortcuts gone wrong: How one site tanked – and what you can learn

7. Chicago Sun-Times publishes fake book list generated by AI

The Chicago Sun-Times ran a syndicated “summer reading” feature that included false, made-up details about books after the writer relied on AI without fact-checking the output. 

King Features Syndicate, a unit of Hearst, created the special section for the Chicago Sun-Times.  

Not only were the book summaries inaccurate, but some of the books were entirely fabricated by AI. 

“Syndicated content in Sun-Times special section included AI-generated misinformation,” Chicago Sun-Times

The author, hired by King Features Syndicate to create the book list, admitted to using AI to put the list together, as well as for other stories, without fact-checking. 

And the publisher was left trying to determine the extent of the damage. 

The Chicago Sun-Times said print subscribers would not be charged for the edition, and it put out a statement reiterating that the content was produced outside the newspaper’s newsroom. 

Meanwhile, the Sun-Times said it is reviewing its relationship with King Features, and King Features fired the writer.

Oversight matters

The examples outlined here show what happens when AI systems are deployed without sufficient oversight. 

When left unchecked, the risks can quickly outweigh the rewards, especially as AI-generated content and automated responses are published at scale.

Organizations that rush into AI adoption without fully understanding those risks often stumble in predictable ways. 

In practice, AI succeeds only when tools, processes, and content outputs keep humans firmly in the driver’s seat.


Why LLM-only pages aren’t the answer to AI search


With AI search updates stacking up in 2026, content teams are trying a new strategy to get cited: LLM-only pages.

They’re building pages that no human will ever see: markdown files, stripped-down JSON feeds, and entire /ai/ versions of their articles.

The logic seems sound: if you make content easier for AI to parse, you’ll get more citations in ChatGPT, Perplexity, and Google’s AI Overviews.

Strip out the ads. Remove the navigation. Serve bots pure, clean text.

Industry experts such as Malte Landwehr have documented sites creating .md copies of every article or adding llms.txt files to guide AI crawlers.

Teams are even building entire shadow versions of their content libraries.

Google’s John Mueller isn’t buying it.

  • “LLMs have trained on – read and parsed – normal web pages since the beginning,” he said in a recent discussion on Bluesky. “Why would they want to see a page that no user sees?”

His comparison was blunt: LLM-only pages are like the old keywords meta tag. Available for anyone to use, but ignored by the systems they’re meant to influence.

So is this trend actually working, or is it just the latest SEO myth?

The rise of ‘LLM-only’ web pages

The trend is real. Sites across tech, SaaS, and documentation are implementing LLM-specific content formats.

The question isn’t whether adoption is happening – it’s whether these implementations are driving the AI citations teams hoped for.

Here’s what content and SEO teams are actually building.

llms.txt files

A markdown file at your domain root listing key pages for AI systems.

The format was proposed in September 2024 by Jeremy Howard, co-founder of Answer.AI, to help AI systems discover and prioritize important content.

The plain-text file lives at yourdomain.com/llms.txt, with an H1 project name, a brief description, and organized sections linking to important pages.

Stripe’s implementation at docs.stripe.com/llms.txt shows the approach in action:

```markdown
# Stripe Documentation

> Build payment integrations with Stripe APIs

## Testing

- [Test mode](https://docs.stripe.com/testing): Simulate payments

## API Reference

- [API docs](https://docs.stripe.com/api): Complete API reference
```

The payment processor’s bet is simple: if ChatGPT can parse their documentation cleanly, developers will get better answers when they ask, “How do I implement Stripe?”

They’re not alone. Current adopters include Cloudflare, Anthropic, Zapier, Perplexity, Coinbase, Supabase, and Vercel.
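The llms.txt structure described above (an H1 project name, a blockquote summary, then H2 sections of annotated links) is simple enough to generate programmatically. A minimal sketch, using hypothetical site names and URLs rather than any real adopter’s file:

```python
# Sketch: build a minimal llms.txt following the structure described above.
# The site name, description, and URLs are hypothetical placeholders.

def build_llms_txt(name, description, sections):
    """sections: dict mapping section title -> list of (label, url, note)."""
    lines = [f"# {name}", "", f"> {description}", ""]
    for title, links in sections.items():
        lines.append(f"## {title}")
        lines.append("")
        for label, url, note in links:
            lines.append(f"- [{label}]({url}): {note}")
        lines.append("")
    return "\n".join(lines).rstrip() + "\n"

content = build_llms_txt(
    "Example Docs",
    "Developer documentation for the Example API",
    {"Guides": [("Quickstart", "https://example.com/quickstart", "Get started")]},
)
print(content)
```

The output would be served as plain text at the domain root, e.g. example.com/llms.txt.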

Markdown (.md) page copies

Sites are creating stripped-down markdown versions of their regular pages.

The implementation is straightforward: just add .md to any URL. Stripe’s docs.stripe.com/testing becomes docs.stripe.com/testing.md.

Everything gets stripped out except the actual content. No styling. No menus. No footers. No interactive elements. Just pure text and basic formatting.

The thinking: if AI systems don’t have to wade through CSS and JavaScript to find the information they need, they’re more likely to cite your page accurately.
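The .md convention above amounts to a simple URL transform. A sketch of that transform, assuming the site follows the append-.md pattern; whether any given site actually serves the twin page is site-specific:

```python
# Sketch: derive the hypothetical .md twin of a regular page URL by
# appending .md to the path. This only builds the candidate URL; it
# does not check that the site actually serves such a copy.
from urllib.parse import urlsplit, urlunsplit

def md_variant(url: str) -> str:
    parts = urlsplit(url)
    path = parts.path.rstrip("/") or "/index"  # handle bare domains
    return urlunsplit((parts.scheme, parts.netloc, path + ".md",
                       parts.query, parts.fragment))

print(md_variant("https://docs.stripe.com/testing"))
# -> https://docs.stripe.com/testing.md
```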

/ai and similar paths

Some sites are building entirely separate versions of their content under /ai/, /llm/, or similar directories.

You might find /ai/about living alongside the regular /about page, or /llm/products as a bot-friendly alternative to the main product catalog. 

Sometimes these pages have more detail than the originals. Sometimes they’re just reformatted.

The idea: give AI systems their own dedicated content that’s built for machine consumption, not human eyes. 

If a person accidentally lands on one of these pages, they’ll find something that looks like a website from 2005.

JSON metadata files

Dell took this approach with their product specs.

Instead of creating separate pages, they built structured data feeds that live alongside their regular ecommerce site.

The files contain clean JSON – specs, pricing, and availability.

Everything an AI needs to answer “what’s the best Dell laptop under $1000” without having to parse through product descriptions written for humans.

You’ll typically find these files as /llm-metadata.json or /ai-feed.json in the site’s directory.

```markdown
# Dell Technologies

> Dell Technologies is a leading technology provider, specializing in PCs, servers, and IT solutions for businesses and consumers.

## Product and Catalog Data

- [Product Feed - US Store](https://www.dell.com/data/us/catalog/products.json): Key product attributes and availability.
- [Dell Return Policy](https://www.dell.com/return-policy.md): Standard return and warranty information.

## Support and Documentation

- [Knowledge Base](https://www.dell.com/support/knowledge-base.md): Troubleshooting guides and FAQs.
```

This approach makes the most sense for ecommerce and SaaS companies that already keep their product data in databases. 

They’re just exposing what they already have in a format AI systems can easily digest.
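For a company that already stores product data in a database, a feed like the one described above is a serialization step. A sketch with illustrative field names and values (not Dell’s actual schema):

```python
# Sketch: expose existing product data as a machine-readable JSON feed,
# in the spirit of the approach described above. Field names, values,
# and the /llm-metadata.json path are illustrative assumptions.
import json

products = [
    {"sku": "LT-100", "name": "Example Laptop 14", "price_usd": 899, "in_stock": True},
    {"sku": "LT-200", "name": "Example Laptop 16", "price_usd": 1299, "in_stock": False},
]

feed = json.dumps({"store": "us", "products": products}, indent=2)
print(feed)  # would be served at a path such as /llm-metadata.json
```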

Dig deeper: LLM optimization in 2026: Tracking, visibility, and what’s next for AI discovery

Real-world citation data: What actually gets referenced

The theory sounds good. The adoption numbers look impressive. 

But do these LLM-optimized pages actually get cited?

The individual analysis

Landwehr, CPO and CMO at Peec AI, ran targeted tests on five websites using these tactics. He crafted prompts specifically designed to surface their LLM-friendly content.

Some queries even contained explicit 20+ word quotes designed to trigger specific sources.


Across nearly 18,000 citations, here’s what he found.

llms.txt: 0.03% of citations

Out of 18,000 citations, only six pointed to llms.txt files. 

The six that did work had something in common: they contained genuinely useful information about how to use an API and where to find additional documentation. 

The kind of content that actually helps AI systems answer technical questions. The “search-optimized” llms.txt files, the ones stuffed with content and keywords, received zero citations.

Markdown (.md) pages: 0% of citations

Sites using .md copies of their content got cited 3,500+ times. None of those citations pointed to the markdown versions. 

The one exception: GitHub, where .md files are the standard URLs. 

They’re linked internally, and there’s no HTML alternative. But these are just regular pages that happen to be in markdown format.

/ai pages: 0.5% to 16% of citations

Results varied wildly depending on implementation. 

One site saw 0.5% of its citations point to its /ai pages. Another hit 16%. 

The difference? 

The higher-performing site put significantly more information in its /ai pages than existed anywhere else on the site. 

Keep in mind, these prompts were specifically asking for information contained in these files. 

Even with prompts designed to surface this content, most queries ignored the /ai versions.

JSON metadata: 5% of citations

One brand saw 85 out of 1,800 citations (5%) come from their metadata JSON file. 

The critical detail here is that the file contained information that didn’t exist anywhere else on the website. 

Once again, the query specifically asked for those pieces of information.


The large-scale analysis

SE Ranking took a different approach.

Instead of testing individual sites, they analyzed 300,000 domains to see if llms.txt adoption correlated with citation frequency at scale.

Only 10.13% of domains, or 1 in 10, had implemented llms.txt. 

For context, that’s nowhere near the universal adoption of standards like robots.txt or XML sitemaps.

During the study, an interesting relationship between adoption rates and traffic levels emerged.

Sites with 0-100 monthly visits adopted llms.txt at 9.88%. 

Sites with 100,001+ visits? Just 8.27%. 

The biggest, most established sites were actually slightly less likely to use the file than mid-tier ones.

But the real test was whether llms.txt impacted citations. 

SE Ranking built a machine learning model using XGBoost to predict citation frequency based on various factors, including the presence of llms.txt.

The result: removing llms.txt from the model actually improved its accuracy. 

The file wasn’t helping predict citation behavior – it was adding noise.

The pattern

Both analyses point to the same conclusion: LLM-optimized pages get cited when they contain unique, useful information that doesn’t exist elsewhere on your site.

The format doesn’t matter. 

Landwehr’s conclusion was blunt: “You could create a 12345.txt file and it would be cited if it contains useful and unique information.”

A well-structured about page achieves the same result as an /ai/about page. API documentation gets cited whether it’s in llms.txt or buried in your regular docs.

The files themselves get no special treatment from AI systems. 

The content inside them might, but only if it’s actually better than what already exists on your regular pages.

SE Ranking’s data backs this up at scale. There’s no correlation between having llms.txt and getting more citations. 

The presence of the file made no measurable difference in how AI systems referenced domains.

Dig deeper: 7 hard truths about measuring AI visibility and GEO performance

What Google and AI platforms actually say

No major AI company has confirmed using llms.txt files in their crawling or citation processes.

Google’s Mueller made the sharpest critique in April 2025, comparing llms.txt to the obsolete keywords meta tag: 

  • “[As far as I know], none of the AI services have said they’re using LLMs.TXT (and you can tell when you look at your server logs that they don’t even check for it).”
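Mueller’s server-log test is easy to run yourself. A sketch that counts requests for /llms.txt in a combined-format access log; the sample lines are made up:

```python
# Sketch: count requests for /llms.txt in an access log, per Mueller's
# "look at your server logs" observation. The log lines are fabricated
# examples in the common combined format.
import re

log_lines = [
    '203.0.113.5 - - [10/Jan/2026:12:00:01 +0000] "GET /llms.txt HTTP/1.1" 200 512',
    '203.0.113.9 - - [10/Jan/2026:12:00:02 +0000] "GET /index.html HTTP/1.1" 200 9100',
]

pattern = re.compile(r'"GET /llms\.txt HTTP')
hits = sum(1 for line in log_lines if pattern.search(line))
print(f"/llms.txt requests: {hits}")
```

If that count stays near zero over weeks of real traffic, the crawlers aren’t even asking for the file.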

Google’s Gary Illyes reinforced this at the July 2025 Search Central Deep Dive in Bangkok, explicitly stating Google “doesn’t support LLMs.txt and isn’t planning to.”

Google Search Central’s documentation is equally clear: 

  • “The best practices for SEO remain relevant for AI features in Google Search. There are no additional requirements to appear in AI Overviews or AI Mode, nor other special optimizations necessary.”

OpenAI, Anthropic, and Perplexity all maintain their own llms.txt files for their API documentation to make it easy for developers to load into AI assistants. 

But none have announced their crawlers actually read these files from other websites.

The consistent message from every major platform: standard web publishing practices drive visibility in AI search. 

No special files, no new markup, and no separate versions needed.

What this means for SEO teams

The evidence points to a single conclusion: stop building content that only machines will see.

Mueller’s question cuts to the core issue: 

  • “Why would they want to see a page that no user sees?” 

If AI companies needed special formats to generate better responses, they would tell you. As he noted:

  • “AI companies aren’t really known for being shy.” 

The data proves him right. 

Across Landwehr’s nearly 18,000 citations, LLM-optimized formats showed no advantage unless they contained unique information that didn’t exist anywhere else on the site. 

SE Ranking’s analysis of 300,000 domains found that llms.txt actually added confusion to their citation prediction model rather than improving it.

Instead of creating shadow versions of your content, focus on what actually works.

Build clean HTML that both humans and AI can parse easily. 

Reduce JavaScript dependencies for critical content, which Mueller identified as the real technical barrier: 

  • “Excluding JS, which still seems hard for many of these systems.” 

Heavy client-side rendering creates actual problems for AI parsing.

Use structured data when platforms have published official specifications, such as OpenAI’s ecommerce product feeds.

Improve your information architecture so key content is discoverable and well-organized.

The best page for AI citation is the same page that works for users: well-structured, clearly written, and technically sound. 

Until AI companies publish formal requirements stating otherwise, that’s where your optimization energy belongs.

Dig deeper: GEO myths: This article may contain lies
