Search Central Live is coming back to South America! After many successful events in the region,
we’re continuing our mission to help you enhance your site’s performance in Google Search.
Everybody wants smoother workflows and fewer manual tasks. And thanks to AI models, automation is at the center of conversations in marketing departments across all industries.
But most teams rarely get the results they’re looking for.
According to Ascend2’s State of Marketing Automation Report, only 28% of marketers say their automation “very successfully” supports their objectives, while 69% felt it was only somewhat successful.
While this specific stat is from 2024, I imagine the broad idea is still true, especially since there are now so many more automation options and tools. It can get overwhelming to decide on a go-forward plan and implement it effectively.
So if you feel stuck in the camp of “not bad, but not great” marketing automation, you’re not alone.
The good news?
Once you understand the core building blocks, you can turn messy, half-automated systems into workflows that actually move the needle.
A good marketing automation workflow usually involves four basic steps (sketched in code after this list):
A trigger: A catalyst event that starts the automation
An action: One or more steps that happen in sequence after the trigger
An output: The end result
A loop or exit point: A new trigger, or an event that stops the automation
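If it helps to see the pattern in plain code, here’s a minimal Python sketch of that loop, with hypothetical stand-ins for the trigger and actions (in practice, tools like Make and Zapier handle this wiring for you):

```python
import time

def check_for_trigger():
    """Hypothetical trigger check, e.g., 'a new row was added to a sheet'."""
    return {"keyword": "marketing automation"}  # or None if nothing new

def run_actions(event):
    """One or more steps that happen in sequence after the trigger."""
    return {"volume": 1200, "difficulty": 45}  # e.g., pull SEO data

def produce_output(event, data):
    """The end result, e.g., a brief doc and a project task."""
    print(f"Brief created for '{event['keyword']}' with {data}")

def should_exit(event, data):
    """Loop or exit point: stop when a condition is met."""
    return True  # e.g., the task was created successfully

while True:
    event = check_for_trigger()       # 1. Trigger
    if event:
        data = run_actions(event)     # 2. Action(s)
        produce_output(event, data)   # 3. Output
        if should_exit(event, data):  # 4. Loop or exit point
            break
    time.sleep(60)                    # otherwise, poll again in a minute
```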
In this article, we’re going to discuss how to use these steps to automate:
The mechanics of content creation (and no, we won’t just be telling you to “write it with AI”)
Beyond the basics of email nurtures
Your PR strategy
Social media engagement
Automate the Mechanics of Content Creation
Content marketers are creative people. We don’t want to automate away the creative work that drives results.
That said, we can automate marketing workflows that come before and after creating. (So we can spend more time on high-impact work.)
Here are some simple ways to get started.
1. Basic Brief Builder
Tools required:
Make (free for 1,000 credits per month, paid plans start at $9/month)
Your favorite keyword research tool (plans vary)
Project management platform (tools like Asana offer a free plan)
Google Sheets, Google Docs (free plan available)
Every week, content marketers around the world spend hours researching keywords, pulling search data, creating new briefs, and adding tasks to their project management systems.
What if you could do most of that with one automation?
Here are the basics of how this works:
Trigger: A new row is added to a Google Sheet (your new keyword)
Action: That keyword is run through your SEO tool, which pulls keyword difficulty, search volume, related terms, and top organic results
Output: A new Google Doc with the data inside, and a new task in your project management tool
In the end, the automation will look like this:
And if this seems scary, don’t worry: I’m going to walk you through each step to create this with Make. (Or, you can go ahead and copy this Scenario into your own Make account here.)
First, you’ll need a Google Sheet for your source.
Start with columns for your new keyword, status, brief URL, and task URL. To get started faster, copy this template here.
Next, add Google Sheets as the trigger step, and select “Watch New Rows.”
After that, select the Google Sheet you want to watch.
This runs the automation every time you add a new keyword to that sheet.
Now, it’s time to gather information from your SEO tool. For this example, we’re going to use Semrush. (You could also use an API like DataForSEO.)
Our first Semrush module will be “Get Keyword Overview.” (You might see different options depending on the specific tool you use.)
You can choose whether to see the keyword data in all regional databases, or just one region.
In this task, you’ll map the “Phrase” to the “Keyword” column from your Google Sheet. Then, choose what you want to get as an output. (In this case, I only want to see the search volume.)
Now, let’s create another Semrush module, “Get Related Keywords,” to pull in relevant related terms.
Again, you’ll map the “Phrase” to the keyword column from our Google Sheet, and choose what data you want to export. (I chose the keyword and search volume.)
You can also decide:
How the results are sorted
Whether to add filters
How many results to retrieve
Now, you’ll need to add a text aggregator into your workflow. This tool compiles the results from Semrush so we can use them in a Google Doc later on.
Here, simply map the source (our Semrush module).
Then, in the “Text” field, map the data as you want it to appear.
Next, we’ll create a Semrush module that runs “Get Keyword Difficulty.”
Again, we’ll map the “Phrase” to our keyword from the Google Sheet, and choose to export the “Keyword Difficulty Index.”
Next, run the “Get Organic Results” module from Semrush to export the sites that are ranking for your new target keyword.
Select the “Export Columns,” or the data that you want to see, and limit the number of results you get (we chose 10).
Since we’re getting multiple results, this module will also need a text aggregator to transform those results into plain text for our Google Doc.
We’ll set it up exactly the same way, but this time map the “Get Organic Results” module.
In the “Text” field, I’ve added “Bundle order position” (where that result is ranking in the SERP), and the URL of the ranking page.
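If you’d rather script these lookups than configure modules, the same reports are available over Semrush’s Analytics API. Here’s a rough Python sketch; the report names (phrase_this, phrase_related, phrase_kdi, phrase_organic) and parameters mirror Semrush’s public API documentation as I understand it, but treat the exact details as assumptions to verify against the current docs and your own plan:

```python
import requests

API_KEY = "YOUR_SEMRUSH_API_KEY"  # assumption: a plan with Analytics API access
BASE = "https://api.semrush.com/"

def semrush_report(report_type, phrase, columns, extra=None):
    """Fetch one CSV-style report from the Semrush Analytics API."""
    params = {
        "type": report_type,
        "key": API_KEY,
        "phrase": phrase,
        "database": "us",
        "export_columns": columns,
    }
    params.update(extra or {})
    resp = requests.get(BASE, params=params, timeout=30)
    resp.raise_for_status()
    return resp.text  # semicolon-separated rows

keyword = "content brief template"
overview = semrush_report("phrase_this", keyword, "Ph,Nq")        # search volume
related = semrush_report("phrase_related", keyword, "Ph,Nq")      # related terms
difficulty = semrush_report("phrase_kdi", keyword, "Ph,Kd")       # difficulty index
organic = semrush_report("phrase_organic", keyword, "Dn,Ur",
                         extra={"display_limit": 10})             # top 10 results
print(overview, related, difficulty, organic, sep="\n")
```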
Now, for the fun part.
It’s time to build your basic content brief in a Google Doc.
Before you add this into Make, you’ll need to create a Google Doc as a template. This template should have variables that can be mapped to the results you get in your automation.
To show up as variables, you’ll need to wrap them in curly brackets. So, your template will look something like this:
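For example (the variable names here are just placeholders you’d choose yourself):

```
Content Brief: {{keyword}}

Search Volume: {{search_volume}}
Keyword Difficulty: {{keyword_difficulty}}

Related Keywords:
{{related_keywords}}

Top Organic Results:
{{organic_results}}
```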
Now, you’ll create a new module in your Make scenario to “Create a Document from a Template.”
Once you connect the Google Doc template you created, you’ll see all of the variables you added in curly brackets as fields in the configuration page.
Now, all you have to do is map those variables to the results you’ve gotten from Semrush and your text aggregators.
Now it’s time to add this new brief into your project management tool. Make lets you connect several tools, including Asana, Trello, Monday, and Notion.
In this scenario, I already have an Asana project for content production.
So I choose the “Create a Task or a Subtask” module for Asana, and map that existing project.
I can also add project custom fields (like a link to the brief in Google Docs), choose the task name (like the keyword), and automatically assign it to someone on my team.
Lastly, I want to go back and update my original Google Sheet so that I can see which keywords have already been run, and where their briefs and tasks live.
So, I add Google Sheets again as the final step in the automation and connect the same spreadsheet that we had at the beginning. Under “Values,” I can map the brief URL from Google Docs and the new task URL from Asana to columns in my spreadsheet.
I also set this so the “Status” column is updated to “Done.”
Now, let’s run this scenario and see what happens.
First, I add a new keyword to my Google Sheet.
This triggers the automation to run.
The first thing that’s produced is a brand new Google Doc with all of the SEO data from Semrush. You’ll see this new doc appear in your Drive, and you’ll find the link in Asana.
Next, I’ll see a new task appear in my Asana project (with the brief link included).
And finally, the Google Sheet will be updated to show that the task has been completed.
Plus, it adds in the links to the new brief in Google Docs and the new task in Asana.
And there you go: you now have a basic content brief builder automation.
Are these complete briefs? No. But the information provides a great start, gives the writer SERP context, and frees up more time to fill out other important content brief elements.
Resources for this automation: To get started faster, use these templates:
2. Project Management Workflow Automations
Tools required: Your favorite project management tool (paid or free options available)
Project management tools are great for organizing your content workflow.
But the more tasks you create over time, the harder it is to keep track of and manage those systems.
Many project management platforms give you built-in automation tools to help things run more smoothly. Let’s talk about automations that can help your content workflow specifically.
Triggers might include:
A new task is added to a project
A custom field changes
A new assignee is added
A subtask is completed
Due date is changed (or coming up soon)
A task is overdue
And actions could be:
Add to a new project
Auto-assign to a team member
Update a status
Move task to a new section
Create a subtask
Add a comment
For this example, we’re going to use the Rules system in Asana, but the same basic principles apply to almost any major project management tool.
To start, click the “Customize” button in the upper-right corner of your content management project, and create some custom fields.
Especially important here is the “Status” field. The options here should follow the steps in your content process, and will probably mirror the sections in your Project.
Once your “Sections” and “Fields” are set up, you can create some rules.
These can help dictate what happens when a new brief enters your content workflow and assign it to whoever is in charge of moving it forward in the process.
Use a Rule to auto-assign someone on your team (for example, your content manager or editor) to the task.
Now, let’s say a new article is now in progress with a writer.
Create a rule that moves the task to the corresponding section of your project when the status is set to “Writing.”
If your content tasks have subtasks (like “create outline,” “write article,” “edit,” or “design”), you can track completion and use that to move pieces forward.
In this case, you can set a rule that once all subtasks are complete, the task moves to the “Ready to Publish” section.
Once the task moves to that section, set a rule to auto-assign it to the team member who publishes posts.
Then, when the status is set to “Published,” the task could be moved into a separate project where completed tasks of published content are stored.
This allows you to clear the tasks from your main production workflow, but still keep them on hand in case the piece needs to be updated in the future.
What if a piece of content isn’t completed by its deadline?
Set up an automation that checks in with the team to see what the status is.
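Conceptually, each rule is just a trigger/action pair. Here’s the workflow above expressed as a simple Python list, purely as an illustration (in Asana you’d build these in the Rules UI, not in code):

```python
# Each rule: when the trigger fires, run the action.
content_rules = [
    {"trigger": "task added to project",            "action": "assign to content manager"},
    {"trigger": "status set to 'Writing'",          "action": "move task to 'Writing' section"},
    {"trigger": "all subtasks complete",            "action": "move task to 'Ready to Publish'"},
    {"trigger": "task moved to 'Ready to Publish'", "action": "assign to publisher"},
    {"trigger": "status set to 'Published'",        "action": "move task to archive project"},
    {"trigger": "task is overdue",                  "action": "comment asking for a status update"},
]

for rule in content_rules:
    print(f"WHEN {rule['trigger']} THEN {rule['action']}")
```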
There are plenty of other automations you can run in Asana or other tools.
But these basic workflow automations will help your content production process have better handoffs and less friction.
We do this at Backlinko using Monday.com as our project management tool.
Go Beyond the Basics of Email Nurtures
Email nurtures are relatively easy to put together in any basic email tool: for example, sending a welcome email to a new newsletter subscriber, or a transactional email to a new customer.
But let’s talk about some ways to take those automations even further. Like the other workflows we’ve covered, an email nurture is built from the same four blocks:
A trigger: Such as someone signing up for an email list
An action: The new contact is added to a list or segment
An output: The new contact receives a series of pre-made emails
An exit condition: The sequence finishes once all the emails are sent, or once the contact takes a specific action, like buying a product
Exit conditions are especially important, because you don’t want people to receive another email from you after they’ve already completed an action. (Hello, promo email that arrives after I already made a purchase.)
Let’s walk through how to use marketing automation tools for email.
3. Behavior-Based Nurtures and Follow-Ups
Tools required: ActiveCampaign (paid plans start at $15/month, although other email platforms offer automation capabilities too)
When you trigger an email sequence based on real behavior, you’re catching people in the moment when they’re more likely to engage.
For example, if you want to help a new user get to know your platform, you can trigger onboarding emails based on the actions they’ve taken so far.
Or, if you want to reduce cart abandonment, you can send a special promotion for customers who have items in their cart.
This improved targeting can lead to better engagement from your email list.
All you have to do is match the right trigger to the right action. For example:
Trigger: Someone downloads a resource → Action: They receive a series of emails on that topic
Trigger: A customer purchased a product a few months ago → Action: They get a reminder to replenish their stock
Trigger: A contact browses a product category, but doesn’t make a purchase → Action: They get an email reminding them of what they looked at
Trigger: A new user subscribes to your platform → Action: They get a series of emails walking them through specific actions
Your exit condition could be when the person:
Completes their purchase
Books a call
Starts a free trial
Replies to your email
For example, let’s say you want to send a series of emails reminding someone that their subscription is reaching its end date. It could look something like this (sketched in code after the list):
Trigger: End date is within 20 days from now
Action: Send a series of three emails up to the last day of their subscription (we don’t want to send too many)
Exit condition: Customer responds to the email, or renews their subscription
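Here’s a minimal Python sketch of that logic, with a hypothetical send_email helper standing in for your email platform:

```python
from datetime import date, timedelta

def send_email(contact, template):
    """Hypothetical stand-in for your email platform's send step."""
    print(f"Sending '{template}' to {contact['email']}")

def run_renewal_sequence(contact, today=None):
    today = today or date.today()
    days_left = (contact["end_date"] - today).days

    # Exit condition: stop as soon as the customer renews or replies
    if contact.get("renewed") or contact.get("replied"):
        return

    # Trigger: end date is within 20 days; send at most three touches
    if days_left == 20:
        send_email(contact, "renewal_reminder_1")
    elif days_left == 7:
        send_email(contact, "renewal_reminder_2")
    elif days_left == 1:
        send_email(contact, "renewal_last_chance")

contact = {"email": "jane@example.com",
           "end_date": date.today() + timedelta(days=20),
           "renewed": False, "replied": False}
run_renewal_sequence(contact)
```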
Here’s a great example for home insurance renewal:
Or, let’s say a new lead just signed up for a free trial or freemium account.
You could create a workflow that pulls information from the onboarding survey in your tool, and builds a personalized, 1:1 email sequence.
Check out this example from HubSpot:
When I signed up for the account, I identified myself as a self-employed marketer. HubSpot pulled that information into this new trial campaign to make the email even more personalized.
So the question is: how do you get started?
Here’s a quick overview of how you could build a behavior-based email nurture automation in ActiveCampaign.
Let’s say you want to send an email sequence to a known contact who visited a certain page on your website. For example, imagine someone who subscribes to your email newsletter, but isn’t a customer, just visited your pricing page. (In other words, they may be close to signing up — they just aren’t quite convinced yet.)
Before you start this automation, you’ll need to enable Site Tracking on your account in ActiveCampaign. To do this, install the tracking code on your website so ActiveCampaign can see page views.
To start the automation, you’ll add new contacts who enter through any pipeline.
Now, when a known contact (someone who’s already in your database) visits a tracked page, ActiveCampaign associates that page view with the contact’s record, and can start an automation.
The real trigger is the next step: “Wait until conditions are met.”
In this case, the condition is that the contact has visited an exact URL on your website.
Pro tip: You can also adjust this so the email series only runs when the person visits a page multiple times, showing a higher level of interest.
Next, set a waiting period from the time the person sees the page to when the email is sent.
And finally, write your email and add it to the workflow.
After that, you could:
Wait a certain amount of time, then send another email
Set an exit condition if the contact replies or makes a purchase
All of this effort turns into an email like this one that I received from Brooks after visiting one of their product pages:
This makes me way more likely to revisit the shoes I was looking at than a generic reminder email (or no email at all).
4. Webinar Lifecycle Automation
Tools required:
Demio (plans start at $45/month)
HubSpot (limited free plan available)
Webinars span an entire customer journey, including promotion, confirmation, reminders, and post-event follow-ups.
The trigger is normally one event: Someone signed up for your webinar.
The actions include:
Confirmation email
Day before and day-of reminders
“Happening now” email
Post-event replay email
For example, here’s a great reminder email from Kiwi Wealth:
Immediately after the webinar is finished, you might send an email like this one from Beefree:
And you’ll also want to follow up later with a replay and some action items for people who attended, like this:
Note: We got these examples from Really Good Emails, which is a great resource for getting inspiration for your own campaigns.
So, how do you create this automation?
Most great webinar tools allow you to do this. Demio, for example, allows you to automate marketing emails when you create a new event:
If you want to get really fancy, you can segment your post-webinar follow-up emails by whether or not the contact attended the webinar:
Demio’s built-in email functionality is somewhat limited beyond the event itself.
So, you can connect it to HubSpot to add a new layer of segmentation to your lists.
Once this connection is live, Demio will import webinar attendance data into HubSpot.
For example, you can import data like:
Contacts who registered for the webinar
People who registered, but missed the event
People who attended the event
How long a contact stayed in the webinar
People who watched the replay
You can even add new contacts to lists directly in HubSpot if they don’t exist there already.
This automation will help your pre- and post-webinar flows run more smoothly. And hopefully get you more valuable engagement with those webinars.
Grow Your PR Strategy
For small marketing teams, PR outreach can use up a lot of valuable time.
Here are some easy automations to keep up with inbound and outbound PR requests, without spending your entire week on them.
Resource: Get your free PR Plan Template to help you pick the right goals, discover journalists, and make pitches that get press coverage.
5. PR Radar
Tools required:
BrandMentions (paid plans start at $79/month)
Zapier (free for 100 tasks/month, paid plans start at $19.99/month)
Google Sheets (free option available)
Want to keep an eye on new articles related to your brand, ones you could potentially get featured in or earn a backlink from? Let’s build an automatic PR radar.
Note: Most monitoring tools send alerts, but those notifications disappear into your inbox. This workflow creates a shared, searchable log your whole team can access without extra logins—plus you’ll have a historical record for spotting PR trends over time.
This workflow looks like:
Trigger: A new article mentions your brand or related topics
Action: Pull all new mentions into one place to scan through them easily
Output: A simple, regularly-updated list of PR mentions
There are several tools that do this, but for this example, we’re going to use BrandMentions.
Once you set up your account and your project, head into settings to adjust which sources you’ll collect data from.
Remove social media, and just leave the web option. That way, you’ll get a clean list of articles and webpages that mention your brand or the keywords you added.
Once this is set up, you can connect your BrandMentions project to Zapier.
This will trigger the automation to start when any new mentions are added.
You can choose whatever output works best for you: whether that’s a Slack message, a new row in Airtable, or an addition to an ongoing Google Sheet.
For this example, I chose Google Sheets as my output. All I had to do was tie the data pulled from BrandMentions to the right columns in my spreadsheet.
Once that’s done, the automation adds new articles like this automatically into my spreadsheet:
Pro tip: Want to add a reminder? You can add another step that sends a daily Slack message summarizing all the newly added rows.
6. Media Request Matchmaker
Tools required:
RSS.app (free plan available)
Zapier (free for 100 tasks/month, paid plans start at $19.99/month)
Airtable (free plan available)
PR would be nothing without the relationships we build with journalists and writers.
But it’s hard to know who’s writing about a topic that’s related to your brand. Or where your company’s internal subject matter experts can add their thoughts to promote your brand.
So, let’s build an automation to match new requests to your internal experts.
This involves:
Trigger: A new media request that matches relevant topics
Action: Classify new requests and match them to the internal expert with the most relevant expertise
Output: New requests are automatically routed to the right person
One of the most frequently updated places to find PR requests is on X/Twitter.
Search the hashtag #journorequest, and you’ll see hundreds of writers asking for expert contributions.
To prepare this for your automation, start by setting up an RSS feed with the hashtag #journorequest or #prrequest along with a relevant keyword.
For the simplest version of this, you can connect RSS.app directly to Slack and send a new message every time a new request is added to the feed.
But let’s be real: that could get overwhelming pretty quickly.
So, we’ll use Zapier for a more in-depth automation.
Start by adding “RSS by Zapier” as the trigger, and paste your RSS feed link into the configuration.
Pro tip: If you want to track journo requests for multiple topics, change the trigger event to “New Items in Multiple Feeds.” Then, simply paste in all of the RSS feed links. That way, they’ll all run through the same automation.
Next, use “Formatter by Zapier” to extract the necessary information from the tweets.
First, in Formatter, choose the Action event “Text.”
Then, in the Configure menu, select “Extract Email Address,” and map the input to the description from your RSS feed.
Next, with another Formatter step, select “Text,” and “Extract Pattern.”
The input is still the same description (the original tweet).
In the Pattern box, in parentheses, add the keywords you want to track separated by a vertical bar, like this:
(cybersecurity|fintech|pets|saas)
Make sure that IGNORECASE is set to “Yes” so that the search isn’t case sensitive.
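Under the hood, these two Formatter steps do roughly what a few lines of Python’s re module do. Here’s the equivalent; note the email pattern below is a simplified assumption (Zapier’s built-in extractor is more robust):

```python
import re

tweet = ("#journorequest Looking for a cybersecurity expert to comment "
         "on passkeys. Email me: jo.reporter@example.com")

# Step 1: extract an email address (simplified pattern)
email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", tweet)

# Step 2: extract the first matching topic, ignoring case
topic = re.search(r"(cybersecurity|fintech|pets|saas)", tweet, re.IGNORECASE)

print(email.group(0) if email else None)  # jo.reporter@example.com
print(topic.group(1) if topic else None)  # cybersecurity
```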
Now, it’s time to add that to a system you can use to keep track of new requests and route them to SMEs.
For this example, I’ve chosen to use Airtable. If you want to use this exact database, you can copy it here and we’ll use it as we move forward.
This database has tabs to keep track of your SMEs, the topics they can respond to, and the new requests that come in.
So, let’s connect that Airtable base to Zapier.
Our first step will be to find the right SME for the topic of our journo request.
To start, set the Action as “Find Record,” and link your Airtable base. We’ll pull from the SMEs table, and for “Search by Field” we’ll choose “Topics,” where we’ve previously added our SME’s favorite topics into the Airtable base.
Lastly for this step, map the “Search Value” to the previous step’s result (the topic from the PR query on X/Twitter).
Now, we’re going to create a new row in our “Requests” table in Airtable.
Add Airtable as the next step in this Zap, and select “Create Record” as the action. Link the same Airtable base, but this time select “Requests” as the Table.
Then, map the columns in that base to the information you’ve gathered. In this case, that would include:
Source = X/Twitter
Raw Text = The “Description” from RSS feed
Contact name = The “Raw Creator” from RSS feed
Contact Email = The output from our first Formatter step, which pulled the email from the original post
URL = Link from RSS feed
Topics = The output from our second Formatter step, which pulled the topic from the original post
SMEs = The “Fields Name” from our Airtable search step
Status = New
In the end, it should look like this:
And a new record is added into Airtable, like this:
If you want to get fancy with this, you can go further:
Rank the publications requesting expertise by their credibility
Automate messages to your SMEs to let them know there’s a new request for them
Get the Most Out of Social Media
For busy marketers, social media can be an incredible time-suck.
Keeping track of trends. Trying to post consistently.
All without getting stuck in an infinite doomscroll.
But a few simple automations can help you get back some of the time you spend on manually managing your socials.
7. Video Clip Automator
Tools required:
Zoom (free plan available)
Dropbox (free plan available)
OpusClip (plans start at $15/month)
Zapier (free for 100 tasks/month, paid plans start at $19.99/month)
Short-form video has been steadily gaining ground in marketing.
If you’re already creating long-form video (or even just doing recorded interviews with in-house experts), we have a handy automation to help you create video clips faster.
Here’s how it works:
Trigger: New Zoom cloud recording is ready
Action: Auto-create clips, burn captions, and create a new task in Asana
Output: You get social-ready video clips, and a new task to publish them
First, adjust your Zoom settings so your recordings upload automatically into a folder in Dropbox.
Next, head over to Zapier.
Your trigger step will be a new video uploaded to that folder in Dropbox.
Your next step will use OpusClip, an AI video editing tool. Select “Clip Your Video,” and map that new video file to the one uploaded in Dropbox.
OpusClip will then take your long-form video from Dropbox and use AI to clip key pieces. It also crops the video for vertical sharing and embeds captions.
You can also add your own brand template so that videos are edited with your brand’s colors and font.
Now that you have new video clips to share, it’s time to add a task to review and publish them.
So the final step in your Zap is “Create Task” in Asana (or your preferred project management tool).
You’ll tie this to a project you’ve already created in Asana, and link the project ID from OpusClip.
In the end, you’ll have a few video clips prepared and ready — all you have to do is download, review, and publish them to your social channels.
8. Comment & Community Nudge
Tools required:
Social media monitoring tool (like BrandMentions, paid plans start at $79/month)
Automation tool (like Zapier, free for 100 tasks/month, paid plans start at $19.99/month)
Are people talking about your brand online?
To keep positive sentiment high, you need to engage in those conversations. But finding the right conversations, and knowing how to reply, can take a lot of time.
Using a tool like BrandMentions, you can create a similar automation to what we built for the PR Radar earlier:
Trigger: A new mention of your brand appears on Reddit, Facebook, or LinkedIn
Action: Those new mentions are added to a Google Sheet, and you get a daily Slack message summarizing new mentions
To build this, all you’d need to do is swap out the Sources in your BrandMentions settings. Instead of Web, you’d include all of the social media channels you want to track.
If you want to get notifications for every new mention, you could connect the workflow to Slack. Then, a new message will be sent in the channel every time your brand is mentioned.
This basic automation could work for smaller brands.
But when you start getting hundreds of mentions per day, this will quickly become chaotic.
Here’s how one company facing this issue automated the process in a deeper way:
Webflow was getting over 500 mentions per day. Their two-person team couldn’t keep up with monitoring and responding (alongside their regular workload).
So, they built an automation.
With Gumloop, they monitor, analyze, and flag only the posts that require a response.
They started with a Reddit scraper to pull relevant threads.
Then, they added an AI analyzer to gauge sentiment, rank priority, and assign a category.
After that, they added a step that would send all high-priority mentions to Slack for a team member to handle directly.
The result?
After testing and scaling this process, they were able to build an automation that processes 500+ mentions per day and escalates only the 10-15 that need immediate attention.
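As a drastically simplified illustration, that triage step might look something like this in Python (the keyword check is a placeholder for the AI sentiment and priority analysis Webflow runs in Gumloop):

```python
def analyze(mention):
    """Placeholder for AI analysis: sentiment, priority, category."""
    urgent = any(w in mention["text"].lower() for w in ("broken", "refund", "angry"))
    return {"priority": "high" if urgent else "low", **mention}

def triage(mentions):
    analyzed = [analyze(m) for m in mentions]
    # Escalate only high-priority mentions; log the rest for later review
    return [m for m in analyzed if m["priority"] == "high"]

mentions = [
    {"text": "Webflow's editor is broken for me today", "url": "https://reddit.com/example1"},
    {"text": "Loving the new Webflow templates!", "url": "https://reddit.com/example2"},
]
for m in triage(mentions):
    print("Escalate to Slack:", m["url"])
```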
If you’ve ever wondered, “How can I use AI to automate my marketing tasks?”, this is a great example of an AI automation that works for you without taking over your job.
Is Automation the Right Move? Ask Yourself These Questions First
Automation is the hottest trend.
But it’s hard to know what’s going to save you time and money, and what’s just another fad.
If you’ve ever spent more time trying to automate a task than it would’ve taken you to do the task manually, you’ll know what I mean.
To weigh up whether an automation is worth building, ask yourself these questions:
How much time does it take me to do this task manually every week?
Is the automation available with a tool I currently use, or would I have to pay for a new tool?
Is there a documented automation/integration I can follow?
Would this task still require human intervention (even with automation)?
Does this fit easily into our current workflow or process?
If the task:
Doesn’t take much time to do manually
Would still require human intervention even when automated
Isn’t easy to build an automation for
…it may not be worth your time.
On the other hand, if the task:
Is repetitive
Uses up hours of your workweek
Can be automated in tools you already have in your stack
…it’s probably time to give automation a try.
Build Your Automation Foundations, Then Keep Growing
The hype cycle of automation and AI can be overwhelming.
But don’t feel like you’re behind just because you haven’t automated away your entire marketing team yet.
Instead, focus on the automations that save you time and are sustainable.
We’ve just discussed eight different automations. Why not choose one or two that are most relevant to your business and team?
Start with the foundational automations that help smooth out your existing processes.
Then, you’ll have a better basis for building more complex automations.
To automate even more areas of your marketing workflows, check out our curated list of our favorite AI marketing tools right now.
Does Google’s AI Mode mark a real shift in how search works? There’s a strong case that it does. And all businesses with an online presence need to pay attention, not just SEO folks.
Given how big the change is, you likely have a lot of questions.
What does AI Mode mean for your site traffic? How do you get featured? Do you need to change your content strategy? What happens to organic visibility as AI-generated answers become more common?
If you’re feeling uncertain, don’t worry. This guide breaks down what Google AI Mode actually is, how it works, and what it means for your site.
Key Takeaways
Google AI Mode is a search experience that builds on AI Overviews, offering deeper answers, reasoning, and more personalized responses.
AI Mode is currently available in English, with rollout expanding beyond early U.S. testing.
Users can access AI Mode directly from the Google homepage, where it functions through a conversational, ChatGPT-style interface.
Appearing in AI Mode is largely driven by strong SEO fundamentals, but brand mentions, structured data, and off-site signals play a growing role.
While AI Mode changes how results are presented, early data suggests users still click through to source content, especially for complex or high-consideration topics.
What Is Google’s AI Mode?
AI Mode is a search feature from Google designed to give direct, well-reasoned answers to complex queries. It builds on AI Overviews, combining AI-generated responses with content from traditional search results and the Knowledge Graph (Google’s database of factual information).
It runs on a modified version of Gemini, Google’s core AI model, and analyzes information from multiple sources. It then synthesizes this information into a clear, concise answer that prioritizes reasoning and context, rather than just summarizing pages.
The interface feels a lot like an AI Overview—same layout and a similar answer—but with a box to ask follow-up questions at the bottom.
Here’s what Robby Stein, Google’s VP of Search, said about AI Mode in a post on The Keyword:
“Using a custom version of Gemini 2.0, AI Mode is particularly helpful for questions that need further exploration, comparisons and reasoning. You can ask nuanced questions that might have previously taken multiple searches — like exploring a new concept or comparing detailed options — and get a helpful AI-powered response with links to learn more.”
AI Mode integrates several elements from traditional search engine results pages (SERPs), such as Shopping listings and Maps.
Finally, Google has said that it will continue to add new features. These include agentic workflows in conjunction with Project Mariner, increasing levels of personalization, and even custom charts and graphs.
AI Mode Is Becoming an Interactive Application Layer
Google is actively turning AI Mode into a more interactive part of search, not just a place to read AI-generated answers.
Recent updates already point to deeper personalization, richer inline links, and more interactive result formats, including charts, comparisons, and visual outputs. With Gemini 3 now integrated directly into AI Mode, those interfaces are becoming more dynamic and tool-driven instead of purely informational.
“We spend a ton of time focused on this question of when and how to show links, and how we can really make the web shine. It will continue to be an ongoing effort as AI Mode and the Search Results Page evolves,” says Stein.
This shift matters. Rather than sending users to external calculators, templates, or apps, Google is starting to surface that functionality directly inside search. For certain queries, AI Mode can simulate outcomes, compare options, or guide users through multi-step decisions without requiring a click to another site.
Over time, this opens the door to agent-driven experiences. In those scenarios, AI Mode does not just explain an answer. It helps users complete tasks, from planning and analysis to evaluation and execution, inside the search interface itself.
As Gemini becomes more tightly integrated across Search, AI Mode is moving closer to a default experience. For brands, this raises the bar. Content that wins in AI-first search needs defensible value, interactive depth, or proprietary insight, not just basic information.
How to Access Google’s AI Mode and Availability
Google AI Mode is now available beyond early U.S.-only testing, with a broader global rollout underway. Users accessing Google in supported regions can enter AI Mode directly from the Google homepage, where it appears alongside the main search experience rather than as an experimental feature.
When users tap “show more” on certain AI-generated results, the AI Overview expands. From the expanded AI Overview, users can click “Dive Deeper in AI Mode” to enter AI Mode. This signals a shift toward AI Mode acting as a default exploration layer, not a separate destination.
Once inside AI Mode, users can interact with responses conversationally, asking follow-up questions that carry context forward. Links to supporting pages remain available, and users can access their AI Mode history, so they can continue conversations they previously started.
Google has moved away from positioning AI Mode as a Labs experiment, and there is no longer a separate opt-in process. Access is tied to Google’s standard search interface, and availability is expanding as Google refines performance, localization, and personalization features.
Timeline of Google AI Mode
While most people think of AI as starting with ChatGPT, Google’s been building AI tools for decades.
AI Mode is part of Google’s broader family of AI tools, which include Veo, a video maker; Imagen, a text-to-image model; Project Mariner, an agent that can automate tasks; and others.
Here’s a short timeline that puts AI Mode in context:
May 2017: CEO Sundar Pichai announces the launch of a dedicated AI division called Google AI at I/O, the company’s annual developer conference.
March 2023: Google opens up early access to Bard, its first gen AI chatbot. Global availability follows later that year.
December 2023: Google announces Gemini, a multimodal LLM that can work with different content inputs (images, voice, and text).
February 2024: Bard is coupled with Duet AI, Google’s Workplace AI assistant, and rebranded to Gemini.
May 2024: AI Overviews, initially called Search Generative Experience, are first released. The feature reaches broad availability later in the year, combining generative AI with Google’s traditional information retrieval systems.
May 2025: Google releases AI Mode, a ChatGPT-style interface available on its homepage. It builds on the core functionality of AI Overviews and is initially available only in the U.S. Early access is limited, but usage expands rapidly.
August 2025: Google begins a more comprehensive global rollout of AI Mode, signaling its transition from a test experience to a core part of Search. Google also announces that it is increasing the number of links in AI Mode. Searchers begin to see inline link carousels and contextual introductions explaining why a link might be useful to visit.
November 2025: Google integrates Gemini 3.0 and Nano Banana in AI Mode.
Using AI Mode: AI Overviews vs. AI Mode
Time for the unboxing. To illustrate how AI Mode differs from AI Overviews, consider a simple comparison scenario.
First, a general query is entered into standard Google Search: “What will be the most popular spring break destinations this year.” This triggers an AI Overview.
The AI Overview analyzes the query, considers general context such as location, and pulls information from multiple sources, stitching it together into a quick summary.
Next, the query becomes a bit more specific: “what will be the most popular spring break destinations this year with a 6-month-old baby.”
AI Overview adjusts the response based on the added constraint, returning suggestions that better match the scenario while still relying on summarization.
The same queries are then entered into Google’s AI Mode using the dedicated prompt box.
The initial response looks similar, but with a subtle shift: instead of simply summarizing existing information, AI Mode applies additional reasoning to evaluate suitability and trade-offs.
A follow-up question is then added without restating the full context.
AI Mode retains the earlier details, understands the added nuance, and returns a more detailed, logically structured set of recommendations. This ability to carry context forward highlights one of the key differences between AI Mode and AI Overviews.
How Is AI Mode Different from AI Overviews and Gemini?
Simply put, AI Mode is an expanded version of AI Overview. It incorporates and builds on features of AI Overviews, and both of these run on Gemini, which is Google’s core model.
Here’s how AI Mode compares to AI Overviews:
More advanced reasoning: While AI Overview summarizes information from across sources, AI Mode interprets that information, connects related concepts, and surfaces conclusions based on reasoning rather than aggregation alone.
Multimodal understanding: In the Google app (on Android and iOS), AI Mode can also answer questions based on photos and images.
Better handling of complex questions: AI Overview works well for simple, fact-based queries, but AI Mode is designed for nuanced, multi-layered, or exploratory questions that benefit from context and comparison.
Follow-ups: You can ask follow-up questions, and the AI will respond based on the ongoing context in a conversational style.
AI Mode is also evolving in how it presents sources. Searchers increasingly see inline links, carousels, and contextual explanations that clarify why a particular source may be useful, rather than a static list of citations.
Research conducted by NP Digital shows that these features match emerging user demand. We found, for example, that 72% of people are inputting very precise, “exactly what I want” queries. And 76% are opting for more human-like and conversational interactions.
What Is the Technology Behind AI Mode?
LLMs are vastly complex systems, and Gemini, the model that powers AI Mode, is no different. However, three main technologies separate AI Mode from standard gen AI bots and AI Overviews.
Here are the three core processes that power AI Mode (with a toy code sketch of the first after the list):
AI Mode uses a query fan-out technique. This involves breaking a query into subtopics and researching them in parallel. It then combines dozens of information points into a single answer.
Structured logic is a key part of how AI Mode works. It takes a query, creates a reasoning chain (e.g., “user is looking for a water bottle for hiking, therefore features should include durability and size, therefore a minimum capacity of 3 liters is needed,” etc.), and then validates answers against these steps to determine suitable outcomes.
Personal context plays a significant role. This means that AI Mode records conversations over time and builds a picture of individual user preferences, adjusting responses based on past inputs. It does this by creating a sort of digital ID—called a vector embedding—that is included in the answer generation process. This is a form of background memory that works in much the same way as ChatGPT.
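To make the fan-out idea concrete, here’s a toy Python sketch (the decomposition and search functions are hypothetical placeholders, not Google’s actual implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def decompose(query):
    """Hypothetical: break one query into researchable subtopics."""
    return [f"{query} durability", f"{query} capacity", f"{query} reviews"]

def search(subquery):
    """Hypothetical: fetch evidence for one subtopic."""
    return f"findings for '{subquery}'"

def fan_out(query):
    subqueries = decompose(query)
    # Research the subtopics in parallel...
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(search, subqueries))
    # ...then synthesize the pieces into a single answer
    return " | ".join(results)

print(fan_out("water bottle for hiking"))
```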
How to Optimize Your Site for AI Mode
So-called GEO—generative engine optimization—is big business at the moment. However, there’s still a lot of uncertainty about what directly influences visibility in AI Mode, and many claims go beyond what Google has actually confirmed.
Rather than chasing shortcuts, the clearer pattern is that AI Mode rewards the same fundamentals Google has emphasized for years — with a few emerging signals becoming more important as AI-generated results mature.
Let’s look at what we actually know about “ranking” in AI Mode.
1. Traditional SEO principles still apply
Google has been pretty unequivocal about this. Traditional SEO is still the most important activity for appearing in AI Overviews and AI Mode.
As long as you follow SEO basics—create useful content, generate natural backlinks, and optimize technical health—you’re ahead of 90% of the competition.
Research also backs this up. Ziptie, for example, found that sites with a number one ranking in traditional search results are 25% more likely to be featured in AI Overviews.
2. Indexed web pages are eligible to appear in AI Mode
On the technical front, there’s good news. As long as a page is indexed, it’s eligible to appear in AI Mode. There are no other requirements. You can check whether your pages are indexed using the URL Inspection tool in Search Console.
If you’re having issues, check that you’re adhering to Google Search’s technical requirements. Make sure Googlebot isn’t blocked, pages return 200 status codes, and content doesn’t violate spam policies.
3. Forum and discussion board citations matter
Recent analysis across multiple large language models shows that discussion forums and Q&A platforms are frequently referenced when generating explanatory or opinion-based answers, particularly for queries that benefit from lived experience or peer discussion.
Reddit, in particular, continues to surface prominently across AI-generated responses, in part due to its scale, freshness, and breadth of first-hand commentary. However, the weighting of any single forum is dynamic and continues to evolve as Google refines how AI Mode sources and cites content.
Given Reddit and Google’s partnership, it’s likely that well-moderated, high-signal community content remains an important input for Gemini-powered experiences.
If you haven’t already, build up a presence on Reddit and other similar forums and discussion boards. This can help reinforce topical authority and increase the likelihood of being referenced in AI-generated answers.
4. Schema markup (structured data) gives you a boost
Schema markup, also called structured data, is a type of code that you add to your content. It gives search engines and AI systems additional information to help them understand what the content is about. One simple example of schema markup is identifying a recipe as “@type”: “Recipe.”
Research by Aiso has shown that LLMs extract more accurate data from pages with schema markup, with a 30% improvement in quality.
Using schema markup helps reduce ambiguity for AI-generated answers and increases the likelihood that your content is interpreted correctly. Fortunately, adding schema to your web page is relatively straightforward.
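For example, here’s a minimal JSON-LD Recipe block, generated with Python’s json module; the property names come from schema.org’s Recipe type, and the values are placeholders:

```python
import json

recipe_schema = {
    "@context": "https://schema.org",
    "@type": "Recipe",
    "name": "Simple Banana Bread",
    "author": {"@type": "Person", "name": "Jane Baker"},
    "recipeIngredient": ["3 ripe bananas", "2 cups flour", "1 egg"],
    "totalTime": "PT1H",  # ISO 8601 duration: one hour
}

# Embed the output in your page inside a script tag:
# <script type="application/ld+json"> ... </script>
print(json.dumps(recipe_schema, indent=2))
```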
5. Digital PR is important
LLMs access information in two ways. They are initially trained on a large amount of information—called training data—and they can also access new online content, such as news articles.
Digital PR is all about acquiring mentions and backlinks from reputable third-party sources, especially media websites.
Brand mentions boost visibility in LLM training materials and strengthen topical associations (a measure of the number of times you’re cited in relation to a specific subject), meaning you’re more likely to appear in responses.
Digital PR involves creating share-worthy content and contacting journalists and site admins to ask them to feature you. Our research shows that original research and tools are especially good at encouraging people to talk about your brand.
6. Be ready to test and track AI visibility
As AI Mode becomes more integrated into the search experience, visibility is no longer limited to rankings alone. Brands need ways to measure whether — and how often — their content appears in AI-generated answers.
New AI visibility platforms, such as Writesonic and Profound, are emerging to help track citations, brand mentions, and source inclusion across large language models. These tools provide early signals about which content formats, topics, and entities are being surfaced by AI systems.
Monitoring this data allows teams to validate whether SEO, digital PR, and structured data efforts are translating into real AI exposure. It also makes it easier to spot gaps, test changes, and adapt as Google continues to evolve AI Mode.
Treat AI visibility tracking as a complement to traditional performance metrics, not a replacement. Both matter.
What Does AI Mode Mean for the Future of Search?
There are a lot of unknowns about how increased use of AI tools will affect the way people look for information. That said, emerging usage patterns are already pointing to meaningful shifts in how AI SEO is evolving.
With that in mind, here are five implications for the future of search as AI Mode becomes more prominent:
Searchers will still click through to websites: Early performance data from AI-generated results shows that clicks are reduced for some informational queries, but not eliminated. Users continue to seek out original content, particularly for complex decisions, comparisons, and high-consideration topics.
Long-play brand building will become more common: LLMs use third-party brand mentions to measure the authority of publishers. Popular brands are cited more by gen AI search tools and, as such, long-term brand building with an outlook of five years and above will become much more common.
Marketing strategies will become more omnichannel: As AI Mode absorbs more discovery queries, brands will need visibility across multiple platforms, not just Google’s traditional results. This reinforces a broader “search everywhere” approach, where discovery happens across AI tools, social platforms, and communities.
People will favor AI for more specific searches: Analysis of large query sets shows that AI-generated results appear more frequently for longer, more specific searches. Short, navigational queries may still rely on traditional results, while nuanced questions increasingly trigger AI Mode.
Trust in AI will continue to grow: Hallucinations are a big problem with AI Overviews, and AI Mode also makes mistakes, according to user reports. With that said, user adoption and satisfaction with AI-powered search tools are trending upward. As Google refines AI Mode, usage is likely to grow alongside improvements in reliability and transparency.
FAQs
What is Google AI Mode?
Google AI Mode is a conversational search experience powered by Gemini, Google’s core AI model. It provides more detailed, context-aware answers to search queries, similar in format to tools like ChatGPT, but integrated directly into Google Search.
Instead of returning a list of links first, AI Mode synthesizes information from multiple sources and presents a reasoned response, with links available for deeper exploration. Users can ask follow-up questions, and the system carries context forward, making the interaction feel more like an ongoing conversation.
AI Mode builds on AI Overviews but goes further by handling complex, multi-step, or exploratory queries more effectively.
How do you use Google AI Mode?
In supported regions, users can access AI Mode directly from the Google homepage. On some AI-generated results, selecting “show more” will also open AI Mode automatically, allowing users to continue their search without returning to traditional results.
Once inside AI Mode, questions can be entered conversationally, and follow-ups don’t require repeating the original context. Users can still click through to source pages or switch back to standard search results at any point.
AI Mode is no longer accessed through Google Labs, and there is no separate opt-in process.
How do you optimize your website for Google AI Mode?
Start with strong SEO fundamentals, which Google has confirmed remain the primary eligibility signals. Beyond that, sites that appear most often in AI-generated answers tend to share a few traits:
Create useful, high-quality content that fully addresses search intent.
Make sure pages are indexed and technically accessible
Use schema markup to clarify meaning and structure
Earn third-party brand mentions from trusted publishers and communities
Build topical authority through consistent, focused publishing
Visibility in AI Mode is not guaranteed, but sites that are trusted, well-structured, and frequently cited are more likely to be referenced in AI-generated responses.
Search Is Changing but the Fundamentals Still Apply
The way people search is changing, and Google AI Mode is accelerating that shift.
People are finding information across a host of different platforms, not just Google. AI-generated answers are reducing clicks. And traditional content publishers are under pressure as gen AI eats up demand.
At the same time, AI Mode doesn’t discard the fundamentals that have always mattered. Google is still prioritizing relevance, authority, and usefulness — it’s just surfacing them in new ways. Sites that understand search intent, build credibility beyond their own domains, and structure content clearly are better positioned to stay visible as AI Mode expands.
From the very start, Google had one aim: to solve users’ needs. That’s also what AI tools seek to do, and their models will continuously be designed to that end.
Understanding your customers—and providing what they want through high-quality, useful content—is the best way of futureproofing your business and ensuring long-term visibility in LLMs.
Marketing budgets aren’t collapsing in 2026, but they are making a shift. That’s the part many teams miss.
That distinction matters. Rising media costs, weaker attribution, privacy changes, and AI-driven search shifts have created real pressure, but the data shows budgets are still moving into marketing. They’re just moving with more intent.
Our latest NP Digital research on how marketers are spending their money in 2026 shows a clear pattern: teams are reallocating toward channels that defend ROI, compound value, and hold up under volatility. This article breaks down what’s changing, why it’s happening, and how to think about your own marketing budget for 2026 without relying on outdated assumptions.
Key Takeaways
Marketing budgets in 2026 are not shrinking. They’re being consolidated around confidence, efficiency, and defensibility.
Channels tied directly to conversion, retention, and owned data are absorbing spend, while those with declining signal quality or unclear ROI are losing ground.
SEO and content are not disappearing, but expectations have shifted toward extractability, authority, and measurable downstream impact.
Paid media still plays a critical role, but marginal efficiency now determines where dollars stay or move.
Teams that can reallocate budget quickly, based on real performance signals, are gaining a structural advantage.
The State of the Marketing Budget in 2026
Let’s start with the context that’s shaping every budget decision this year.
Media costs continue rising across search and social. CPCs aren’t coming down, and competition for attention keeps intensifying. At the same time, privacy changes have reduced signal quality, making it harder to target precisely and measure accurately.
Economic uncertainty is pushing marketers to defend ROI more aggressively than ever. Every dollar needs a clear path to revenue, and channels that can’t prove their value are getting cut.
AI adoption has accelerated faster than most teams can operationalize. Nearly everyone is experimenting, but few have figured out how to turn that experimentation into systematic advantage. The gap between “using AI” and “getting results from AI” is wider than you’d think.
Here’s the good news: budgets are not disappearing. They are being reallocated with intent. The marketers who understand where efficiency lives and where it’s eroding are the ones capturing share.
What’s Driving Budget Decisions
The shift in spending comes down to a few core factors:
Purchase journeys are more complex. 94% of purchase journeys now involve multiple touchpoints. Search and social are the most influential, appearing in 79% and 73% of journeys respectively. But they rarely operate in isolation. Budgets are being distributed to support visibility across the full path to purchase, not just the final click.
Attribution is noisier. Third-party signals keep degrading, so budgets are following channels that stay measurable. Paid search, email, and CRO all offer clearer attribution than many emerging channels. In uncertain conditions, that clarity matters.
Organic reach is declining. Zero-click searches now account for roughly 58-60% of Google searches. Organic listings are being pushed below the fold by AI Overviews, ads, and SERP features. This is reducing organic click opportunities and increasing reliance on paid coverage.
Efficiency matters more than volume. When media costs rise and margins compress, growth comes from doing more with what you have. That’s why CRO, lifecycle marketing, and retention are getting more investment even as some acquisition channels face cuts.
The marketers who are winning in 2026 understand that budget decisions aren’t about chasing trends. They’re about matching investment to where performance can be proven and defended.
Where Budgets Are Growing, Holding, and Declining
Let’s look at the actual spending patterns across channels. We’ll start with the big picture, then break down what’s happening in each major category.
Overall Marketing Budget Direction
61% of B2B marketers are increasing overall spend this year, with 20% holding flat and 19% decreasing. B2C is slightly more cautious: 57% are increasing, 32% holding flat, and 11% decreasing.
The takeaway? Growth budgets still exist, but they’re being deployed more carefully than in previous years.
The Biggest Budget Shifts Since 2025
Here’s where the reallocation is happening:
SEO spend has rebounded sharply. After a softer 2025, 61% of marketers are now increasing SEO budgets (up from 44% last year). The return of confidence in organic search reflects a few things: better AI tools for content production, clearer ROI measurement, and recognition that organic visibility still matters even in a zero-click environment.
AI SEO investment is accelerating dramatically. 98% of marketers plan to increase AI SEO spend in 2026. This isn’t just hype. Teams have figured out that AI can accelerate research, content production, and optimization cycles without sacrificing quality.
CRO and UX remain a priority. 52% are increasing spend, and only 25% are planning decreases. When traffic is harder to earn, you optimize what you have. CRO delivers measurable improvements regardless of where visitors come from.
Content creation growth has slowed. Only 32% plan increases, while 31% plan to reduce spend. This reflects a shift away from volume-based content strategies toward fewer, higher-quality assets that can be repurposed across channels.
Organic social media is facing the steepest pullback. 64% of marketers are planning budget decreases. Organic reach has declined to the point where most brands treat social as a support channel, not a growth engine.
Email and lifecycle budgets have stabilized. 60% are keeping spend flat and 23% are increasing. Email remains one of the most reliable channels for retention and conversion, especially as first-party data becomes more valuable.
The pattern across all of this? Increased focus on channels tied to conversion and retention. Reduced investment in traditional advertising channels with declining efficiency signals. And a shift away from broad content volume toward targeted execution.
Channel-by-Channel Breakdown
Now let’s get specific. Here’s what’s happening in each major channel category.
SEO and Organic Search
SEO budgets are rebounding, but the strategy is changing. Digital channels now represent 61.1% of total marketing spend, and organic search remains a major piece. But zero-click searches and AI Overviews are changing how value gets captured.
Search is becoming answer-first. Google increasingly resolves intent directly in the SERP through AI Overviews, featured snippets, and knowledge panels. This means fewer clicks but doesn’t make SEO irrelevant, just less predictable on its own. SEO needs to optimize for visibility and citation, not just click-through.
Treat rankings as one output among several that matter. Visibility in AI Overviews and featured snippets matters as much as position one. Prioritize topics tied to revenue intent and customer lifecycle stages. Build content that can win both ways: clicks and citations. Measure organic success across visibility, assisted conversion, and brand lift. More brands are pairing search with other channels, like community, that capture attention off the SERP.
AI systems increasingly resolve intent directly in the SERP, which concentrates click opportunities into fewer, higher-intent moments. Brands that show up consistently in AI-generated answers are building trust and authority even when users don’t click.
Content and Thought Leadership
Content budgets are being reallocated toward assets that influence discovery, trust, and conversion across channels. Thought leadership is increasingly used to earn inclusion in search results and AI-generated answers.
Content still fuels discovery, even when the click doesn’t happen immediately. Strong content is what AI systems summarize, cite, and pull into answers. In a noisy market, a differentiated perspective is one of the few advantages you can own.
Design content for multiple outputs: search, AI summaries, social, sales. Prioritize fewer topics with deeper authority and a clearer point of view. Shift from publishing volume to publishing leverage. Use AI for research acceleration and synthesis, but keep humans in charge of insight, brand voice, and editorial judgment.
Creators matter here as a result. They help brands move beyond renting attention and toward building long-term loyalty that holds up even as platforms and algorithms change. This is important because original insight, point of view, brand voice, and credibility are not things AI can manufacture on its own. Editorial judgment and prioritization are still very human decisions.
AI can help scale content, but the trust, experience, and perspective that influencers, creators, and SMEs bring give content weight and relevance with an audience.
Paid Search
Paid search remains a core demand capture channel, but expectations have reset. CPC inflation and competition continue to compress efficiency. Reduced organic click availability increases reliance on paid coverage.
Shift from keyword expansion to coverage efficiency. Prioritize high-intent, defensible queries over volume. Use fewer keywords with tighter control. Coordinate more closely with SEO and CRO. Put higher emphasis on marginal ROI rather than raw spend growth.
AI and automation now control bidding, targeting, and pacing by default. Competitive advantage shifts to inputs: structure, data quality, conversion signals.
Paid Social
Paid social remains the most flexible scaled reach channel. Platform-level shifts show TikTok leading growth at 57%, YouTube at 53%, and Instagram at 46%. Facebook is under pressure, with 36% decreasing spend and only 18% increasing.
Creative velocity matters more than audience hacks. Message clarity beats novelty. Platform-native formats outperform repurposed ads. Measurement focuses on incremental lift, not just ROAS. Close alignment with lifecycle and email capture turns paid social prospects into owned relationships.
Organic Social
Some cuts are dramatic, and predictable.
Organic social: 64% decreasing investment.
Content creation volume: only 32% increasing; 31% decreasing.
Traditional display: banner ads are essentially frozen (63% flat).
Facebook paid: 36% decreasing.
The pattern is clear: teams are cutting channels with declining reach, opaque ROI, or inflated costs.
But that doesn't mean content or social isn't important. It simply means these channels are no longer funded as volume engines. The strategy is changing, not disappearing.
Influencer Marketing
Community building is one of the strongest growth areas in 2026 budgets, with 69% of marketers increasing spend. Influencer marketing is seeing even stronger growth at 78%. These channels support retention, referrals, and brand defensibility.
Traffic from friend referrals and direct visits drives more conversions than any paid channel. So don't focus only on the channels that drive last-click conversions; also invest in the channels that create brand awareness and influence purchase decisions earlier in the journey.
Email + Lifecycle
Email and lifecycle budgets remain resilient because performance is driven by trust, relevance, and timing. 60% are keeping spend flat and 23% are increasing. First-party data enables consistent message delivery when paid reach and signal quality decline.
Customer acquisition isn’t the only scalable lever anymore. Retention is the controllable one. Retention programs stabilize margins as media costs, auctions, and platforms stay volatile.
AI enables real-time message sequencing based on behavior, dynamic content assembly across email and SMS, and faster iteration without rebuilding entire lifecycle programs.
CRO and UX
CRO and UX are treated as defensive investments that improve performance regardless of traffic source. 52% are increasing spend. Traffic is harder to earn and easier to lose. Fewer clicks mean every visit carries more revenue weight.
AI-assisted test generation allows faster signal detection across variants and continuous optimization tied to real behavior. Here too, advantage shifts to inputs: test structure, data quality, and conversion signals.
A Simple Framework: How to Build a Smarter 2026 Marketing Budget
Here’s a practical framework for budget agility.
Anchor spend in proven demand. Protect budgets tied directly to revenue and high-intent activity. These are your foundation channels.
Build flexibility around performance signals. Shift dollars based on real outcomes. Don’t lock yourself into annual commitments for channels that aren’t delivering.
Separate experimentation from core investment. Test intentionally without destabilizing what works. Set aside 10-15% of budget for testing new channels and tactics.
Reallocate faster than your competitors. Speed of adjustment becomes a competitive advantage in volatile conditions. Review performance monthly and be willing to move budget mid-quarter.
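To make the framework concrete, here is a minimal sketch of what a monthly reallocation review could look like in code. Every channel name, figure, and threshold below is a hypothetical placeholder; the point is the mechanic of protecting anchor channels, holding a fixed test reserve, and flagging underperformers for reallocation.

```python
# Hypothetical monthly budget review. All figures and thresholds are illustrative.
channels = {
    "paid_search": {"spend": 40_000, "revenue": 140_000, "core": True},
    "paid_social": {"spend": 25_000, "revenue": 60_000, "core": False},
    "email_lifecycle": {"spend": 10_000, "revenue": 55_000, "core": True},
    "organic_social": {"spend": 8_000, "revenue": 6_000, "core": False},
}

TEST_RESERVE = 0.10  # hold 10-15% of total budget for experiments
ROAS_FLOOR = 2.0     # flag anything returning less than 2x its spend

total_budget = sum(c["spend"] for c in channels.values())
print(f"Hold ${total_budget * TEST_RESERVE:,.0f} of ${total_budget:,.0f} for testing")

for name, c in channels.items():
    roas = c["revenue"] / c["spend"]
    if c["core"]:
        status = "protect (anchor channel)"
    elif roas < ROAS_FLOOR:
        status = "flag for reallocation"
    else:
        status = "hold, review next month"
    print(f"{name}: ROAS {roas:.1f}x -> {status}")
```

In practice the inputs would come from your analytics or BI exports, but even a spreadsheet-level calculation like this makes the monthly review repeatable.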
The winners in 2026 will be faster, not just bigger. Budgets are consolidating around fewer, higher-confidence channels. Efficiency and retention now matter as much as acquisition. AI is reshaping how value is captured, not just how work gets done. Visibility, conversion, and experience must be planned together.
Conclusion
Marketing in 2026 requires a different approach to budgeting. The channels that worked three years ago still work, but they work differently. The measurement that mattered in 2023 doesn’t tell the full story anymore. The strategies that justified budget in 2024 need updating for how search, social, and AI have evolved.
The marketers who thrive this year will be the ones who allocate budget where performance is provable, build systems that compound value over time, and move faster than their competitors when signals change.
If you need help translating these budget signals into a channel-specific growth plan, aligning SEO, paid media, content, and lifecycle into one system, or building measurement models that reflect zero-click and AI-driven behavior, we can help. Reach out to discuss your 2026 strategy.
Google Shopping API migration deadlines are approaching, and advertisers who don’t act risk disrupted Shopping and Performance Max campaigns.
What’s happening. Google is sunsetting older API versions and pushing all merchants toward the Merchant API as the single source of truth for Shopping Ads. Advertisers can confirm which API they’re using in Merchant Center Next by checking the “Source” column under Settings > Data sources, where any listing marked “Content API” requires action.
Why we care. Google is actively reminding advertisers to migrate to the new Merchant API, with beta users required to complete the switch by Feb. 28th, and Content API users by Aug. 18th. If feeds aren’t properly reconnected, campaigns that rely on product data — especially those using feed labels — may stop serving altogether.
The risk. Feed labels don’t automatically carry over during migration. If advertisers don’t update their campaign and feed configurations in Google Ads, Shopping and Performance Max setups that depend on those labels for structure or bidding logic can quietly break.
What to do now. Google recommends completing the migration well ahead of the deadline, reviewing feed labels, and validating campaign delivery after reconnecting feeds. The transition was first outlined in mid-2024, but enforcement is now imminent as Google moves closer to fully retiring legacy APIs.
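For teams that want to audit this programmatically rather than click through Merchant Center, a sketch along these lines could list the data sources attached to an account. The endpoint path reflects the Merchant API beta naming at the time of writing and should be verified against Google's current documentation; the account ID, token, and response field names are placeholders to adapt.

```python
import requests

# Placeholders: substitute your Merchant Center account ID and a valid OAuth 2.0 token.
MERCHANT_ID = "1234567890"
ACCESS_TOKEN = "ya29.your-oauth-token"

# Assumed Merchant API (beta) endpoint for listing data sources;
# confirm the exact path and version against Google's current docs.
url = f"https://merchantapi.googleapis.com/datasources/v1beta/accounts/{MERCHANT_ID}/dataSources"

resp = requests.get(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
resp.raise_for_status()

for source in resp.json().get("dataSources", []):
    primary = source.get("primaryProductDataSource", {})
    # Review each feed's name, input type, and feed label so nothing
    # is left pointing at the legacy Content API after migration.
    print(source.get("displayName"), source.get("input"), primary.get("feedLabel"))
```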
Bottom line. This isn’t a cosmetic backend change — it’s a technical cutoff that can directly impact revenue if ignored.
First seen. This update was spotted by Google Shopping Specialist Emmanuel Flossie, who shared the warnings he received on LinkedIn.
The debate around llms.txt has become one of the most polarized topics in web optimization.
Some treat llms.txt as foundational infrastructure, while many SEO veterans dismiss it as speculative theater. Platform tools flag missing llms.txt files as site issues, yet server logs show that AI crawlers rarely request them.
Google even adopted it. Sort of. In December, the company added llms.txt files across many developer and documentation sites.
The signal seemed clear: if the company behind the sitemap standard is implementing llms.txt, it likely matters.
Except Google pulled it from its Search developer docs within 24 hours.
Google’s John Mueller said the change came from a sitewide CMS update that many content teams didn’t realize was happening. When asked why the files still exist on other Google properties, Mueller said they aren’t “findable by default because they’re not at the top-level” and “it’s safe to assume they’re there for other purposes,” not discovery.
The llms.txt research
We wanted data, not debates.
So we tracked llms.txt adoption across 10 sites in finance, B2B SaaS, ecommerce, insurance, and pet care — 90 days before implementation and 90 days after.
We measured AI crawl frequency, traffic from ChatGPT, Claude, Perplexity, and Gemini, and what else these sites changed during the same window.
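As a point of reference, the crawl-frequency side of a study like this can be approximated from standard server logs by counting requests from known AI crawler user agents. Here is a minimal sketch, assuming a combined-format access log at a placeholder path and an illustrative (not exhaustive) bot list:

```python
from collections import Counter

# Common AI crawler user-agent substrings; extend this list as new bots appear.
AI_BOTS = ["GPTBot", "OAI-SearchBot", "ChatGPT-User", "ClaudeBot",
           "PerplexityBot", "Google-Extended", "anthropic-ai"]

counts = Counter()
with open("access.log") as f:  # assumed path to a combined-format access log
    for line in f:
        for bot in AI_BOTS:
            if bot in line:
                counts[bot] += 1
                # Also note whether the bot ever requests /llms.txt at all.
                if "/llms.txt" in line:
                    counts[f"{bot} -> llms.txt"] += 1

for label, n in counts.most_common():
    print(f"{label}: {n}")
```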
The results:
Two of the 10 sites saw AI traffic increases of 12.5% and 25%, but llms.txt wasn’t the cause.
Eight sites saw no measurable change.
One site declined by 19.7%.
The 2 ‘success’ stories weren’t about the file
The Neobank: 25% growth
This digital banking platform implemented llms.txt early in Q3 2025. Ninety days later, AI traffic was up 25%.
Here’s what else happened in that window:
A PR campaign around its banking license, with coverage in major national publications.
Product pages restructured with extractable comparison tables for interest rates, fees, and minimums.
Twelve new FAQ pages optimized for extraction.
A rebuilt resource center with new banking information and concepts.
Technical SEO issues, like header structures, fixed.
When a company gets Bloomberg coverage the same month it launches optimized content and fixes crawl errors, you can’t isolate the llms.txt as the growth driver.
The B2B SaaS platform: 12.5% growth
This workflow automation company saw traffic jump 12.5% two weeks after implementing llms.txt.
Perfect timing. Case closed. Except…
Three weeks earlier, the company published 27 downloadable AI templates covering project management frameworks, financial models, and workflow planners. Functional tools, not content marketing, drove the engagement behind the spike.
Google organic traffic to the templates rose 18% during the same period and continued climbing throughout the 90 days we measured.
Search engines and AI models surfaced the templates because they solved real problems and anchored an entirely new site section, not because they were listed in an llms.txt file.
The 8 sites where nothing happened after uploading llms.txt
Eight sites saw no measurable change. One declined by 19.7%.
The decline came from an insurance site that implemented llms.txt in early September. The drop likely had nothing to do with the file.
The same pattern showed up across all traffic channels. llms.txt neither prevented the decline nor created any advantage.
The other seven sites — ecommerce (pet supplies, home goods, fashion), B2B SaaS (HR tech, marketing analytics), finance, and pet care — all documented their best existing content in llms.txt. That included product pages, case studies, API docs, and buying guides.
Ninety days later, nothing changed. Traffic stayed flat. Crawl frequency was identical. The content was already indexed and discoverable, and the file didn’t alter that.
Sites that launched new, functional content saw gains. Sites that documented existing content saw no gains.
Why the disconnect?
No major LLM provider has officially committed to parsing llms.txt. Not OpenAI. Not Anthropic. Not Google. Not Meta.
As Google's John Mueller put it: “None of the AI services have said they’re using llms.txt, and you can tell when you look at your server logs that they don’t even check for it.”
That’s the reality. The file exists. The advocacy exists. Adoption by the AI platforms themselves doesn’t (yet).
The token efficiency argument (and its limits)
The strongest case for llms.txt is about efficiency. Markdown saves time and tokens when AI agents parse documentation. Clean structure instead of complex HTML with navigation, ads, and JavaScript.
This matters — but almost exclusively for developer tools and API documentation. If your audience uses AI coding assistants like Cursor or GitHub Copilot to interact with your product, token efficiency improves integration.
For ecommerce selling pet supplies, insurance explaining coverage, or B2B SaaS targeting nontechnical buyers, token efficiency doesn’t translate into traffic.
llms.txt is a sitemap, not a strategy
The most accurate comparison is a sitemap.
Sitemaps are valuable infrastructure. They help search engines discover and index content more efficiently. But no one credits traffic growth to adding a sitemap. The sitemap documents what exists; the content drives discovery.
llms.txt works the same way. It may help AI models parse your site more efficiently if they choose to use it, but it doesn’t make your content more useful, authoritative, or likely to answer user queries.
In our analysis, the sites that grew did so because they:
Created functional assets like downloadable templates, comparison tables, and structured data.
Earned external visibility through press and backlinks.
Fixed technical barriers such as crawl and indexing issues.
Published content optimized for extraction, including FAQs and structured comparisons.
llms.txt documented those efforts. It didn’t drive them.
What actually works
The two successful sites show what matters:
Create functional, extractable assets. The SaaS platform built 27 downloadable templates that users could deploy immediately. AI models surfaced these because they solved real problems, not because they were listed in a markdown file.
Structure content for extraction. The neobank rebuilt product pages with comparison tables with interest rates, fees, and account minimums. This is data AI models can pull directly into answers without interpretation.
Fix technical barriers first. The neobank fixed crawl errors that had blocked content for months. If AI models can’t access your content, no amount of documentation helps.
Earn external validation. Coverage from Bloomberg and other major publications drove referral traffic, branded searches, and likely influenced how AI models assess authority.
Optimize for user intent. Both sites answered specific queries: “best project management templates” and “how do [brand] interest rates compare?” Models surface content that maps to what users are asking, not content that’s merely well documented.
None of this requires llms.txt. All of it drives results.
Should you implement an llms.txt file?
If you’re a developer tool where AI coding assistants are a primary distribution channel, then yes — token efficiency matters. Your audience is already using agents to interact with documentation.
For everyone else, treat llms.txt like a sitemap: useful infrastructure, not a growth lever.
It’s good practice to have. It won’t hurt. But the hour spent implementing llms.txt is often better spent restructuring product pages with extractable data, publishing functional assets, fixing technical SEO issues, creating FAQ content, or earning press coverage.
Those tactics have shown real ROI in AI discovery. llms.txt hasn’t, at least not yet.
The lesson isn’t that llms.txt is bad. It’s that we’re reaching for control in a system where the rules aren’t written yet. llms.txt offers that comfort: something concrete, actionable, and familiar, shaped like the web standards we already know.
But looking like infrastructure isn’t the same as functioning like infrastructure.
Focus on what actually works:
Create useful content.
Structure it for extraction.
Make it technically accessible.
Earn external validation.
Platforms and formats will change. The fundamentals won’t.
AI has quickly risen to the top of the corporate agenda. Despite this, 95% of businesses struggle with adoption, MIT research found.
Those failures are no longer hypothetical. They are already playing out in real time, across industries, and often in public.
For companies exploring AI adoption, these examples highlight what not to do and why AI initiatives fail when systems are deployed without sufficient oversight.
1. Chatbot participates in insider trading, then lies about it
Researchers at Apollo Research prompted an AI bot to act as a trader for a fictional financial investment company.
They told the bot that the company was struggling and needed results.
They also fed the bot insider information about an upcoming merger, and the bot affirmed that it should not use this in its trades.
The bot still made the trade anyway, citing that “the risk associated with not acting seems to outweigh the insider trading risk,” then denied using the insider information.
Marius Hobbhahn, CEO of Apollo Research (the company that conducted the experiment), said that helpfulness “is much easier to train into the model than honesty,” because “honesty is a really complicated concept.”
He says that current models are not powerful enough to be deceptive in a “meaningful way” (arguably, this is already a false statement).
However, he warns that it’s “not that big of a step from the current models to the ones that I am worried about, where suddenly a model being deceptive would mean something.”
AI has been operating in the financial sector for some time, and this experiment highlights the potential for not only legal risks but also risky autonomous actions on the part of AI.
2. Chevy dealership chatbot sells SUV for $1 in ‘legally binding’ offer
An AI-powered chatbot for a local Chevrolet dealership in California sold a vehicle for $1 and said it was a legally binding agreement.
In an experiment that went viral across forums on the web, several people toyed with the local dealership’s chatbot to respond to a variety of non-car-related prompts.
One user convinced the chatbot to sell him a vehicle for just $1, and the chatbot confirmed it was a “legally binding offer – no takesies backsies.”
Fullpath, the company that provides AI chatbots to car dealerships, took the system offline once it became aware of the issue.
The company’s CEO told Business Insider that despite viral screenshots, the chatbot resisted many attempts to provoke misbehavior.
Still, while the car dealership didn’t face any legal liability from the mishap, some argue that the chatbot agreement in this case may be legally enforceable.
3. Supermarket’s AI meal planner suggests poison recipes and toxic cocktails
A New Zealand supermarket chain’s AI meal planner suggested unsafe recipes after certain users prompted the app to use non-edible ingredients.
Recipes like bleach-infused rice surprise, poison bread sandwiches, and even a chlorine gas mocktail were created before the supermarket caught on.
A spokesperson for the supermarket said they were disappointed to see that “a small minority have tried to use the tool inappropriately and not for its intended purpose,” according to The Guardian.
The supermarket said it would continue to fine-tune the technology for safety and added a warning for users.
That warning stated that recipes are not reviewed by humans and do not guarantee that “any recipe will be a complete or balanced meal, or suitable for consumption.”
4. Airline chatbot invents a bereavement discount, and the airline is held liable
A customer inquired about Air Canada’s bereavement rates via the airline’s AI assistant after the death of a family member.
The chatbot responded that the airline offered discounted bereavement rates for upcoming travel or for travel that has already occurred, and linked to the company’s policy page.
Unfortunately, the actual policy was the opposite, and the airline did not offer reduced rates for bereavement travel that had already happened.
In court, the airline argued that the chatbot had linked to the policy page containing the correct information.
However, the tribunal (a small claims-type court in Canada) did not side with the defendant. As reported by Forbes, the tribunal called the scenario “negligent misrepresentation.”
Christopher C. Rivers, Civil Resolution Tribunal Member, said this in the decision:
“Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives – including a chatbot. It does not explain why it believes that is the case. In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.”
This is just one of many examples where people have been dissatisfied with chatbots due to their technical limitations and propensity for misinformation – a trend that is sparking more and more litigation.
5. Bank replaces call center workers with AI voicebots, then hires them back
The Commonwealth Bank of Australia (CBA) believed AI voicebots could reduce call volume by 2,000 calls per week. They didn’t.
Instead, left without its 45-person call center team, the bank scrambled to offer overtime to the remaining workers to keep up with the calls, and pulled in managers to answer calls, too.
Meanwhile, the Finance Sector Union, which represented the displaced workers, escalated the situation to Australia’s Fair Work Commission, the national workplace tribunal.
It was only one month after CBA replaced workers that it issued an apology and offered to hire them back.
CBA said in a statement that they did not “adequately consider all relevant business considerations and this error meant the roles were not redundant.”
Other U.S. companies have faced PR nightmares as well when attempting to replace human roles with AI.
Perhaps that’s why certain brands have deliberately gone in the opposite direction, making sure people remain central to every AI deployment.
Nevertheless, the CBA debacle shows that replacing people with AI without fully weighing the risks can backfire quickly and publicly.
6. New York City’s chatbot advises employers to break labor and housing laws
Just months after its launch, people started noticing inaccurate answers from the Microsoft-powered chatbot.
The chatbot offered unlawful guidance across the board, from telling bosses they could pocket employees’ tips and skip notifying staff about schedule changes to tenant discrimination and cashless stores.
“NYC’s AI Chatbot Tells Businesses to Break the Law,” The Markup
This is despite the city’s initial announcement promising that the chatbot would provide trusted information on topics such as “compliance with codes and regulations, available business incentives, and best practices to avoid violations and fines.”
New York City Mayor Eric Adams defended the rollout, saying that “anyone that knows technology knows this is how it’s done,” and that “only those who are fearful sit down and say, ‘Oh, it is not working the way we want, now we have to run away from it all together.’ I don’t live that way.”
Critics called his approach reckless and irresponsible.
This is yet another cautionary tale in AI misinformation and how organizations can better handle the integration and transparency around AI technology.
7. Chicago Sun-Times publishes fake book list generated by AI
The Chicago Sun-Times ran a syndicated “summer reading” feature that included false, made-up details about books after the writer relied on AI without fact-checking the output.
King Features Syndicate, a unit of Hearst, created the special section for the Chicago Sun-Times.
Not only were the book summaries inaccurate, but some of the books were entirely fabricated by AI.
“Syndicated content in Sun-Times special section included AI-generated misinformation,” Chicago Sun-Times
The author, hired by King Features Syndicate to create the book list, admitted to using AI to put the list together, as well as for other stories, without fact-checking.
And the publisher was left trying to determine the extent of the damage.
The Chicago Sun-Times said print subscribers would not be charged for the edition, and it put out a statement reiterating that the content was produced outside the newspaper’s newsroom.
Meanwhile, the Sun-Times said it is reviewing its relationship with King Features. As for the writer, King Features fired him.
Oversight matters
The examples outlined here show what happens when AI systems are deployed without sufficient oversight.
When left unchecked, the risks can quickly outweigh the rewards, especially as AI-generated content and automated responses are published at scale.
Organizations that rush into AI adoption without fully understanding those risks often stumble in predictable ways.
In practice, AI succeeds only when tools, processes, and content outputs keep humans firmly in the driver’s seat.
“LLMs have trained on – read and parsed – normal web pages since the beginning,” Google’s John Mueller said in a recent discussion on Bluesky. “Why would they want to see a page that no user sees?”
His comparison was blunt: LLM-only pages are like the old keywords meta tag. Available for anyone to use, but ignored by the systems they’re meant to influence.
So is this trend actually working, or is it just the latest SEO myth?
The rise of ‘LLM-only’ web pages
The trend is real. Sites across tech, SaaS, and documentation are implementing LLM-specific content formats.
The question isn’t whether adoption is happening, it’s whether these implementations are driving the AI citations teams hoped for.
Here’s what content and SEO teams are actually building.
llms.txt files
A markdown file at your domain root listing key pages for AI systems.
The format was introduced in 2024 by AI researcher Simon Willison to help AI systems discover and prioritize important content.
Plain text lives at yourdomain.com/llms.txt with an H1 project name, brief description, and organized sections linking to important pages.
Stripe’s implementation at docs.stripe.com/llms.txt shows the approach in action:
```markdown
# Stripe Documentation
> Build payment integrations with Stripe APIs

## Testing
- [Test mode](https://docs.stripe.com/testing): Simulate payments

## API Reference
- [API docs](https://docs.stripe.com/api): Complete API reference
```
The payment processor’s bet is simple: if ChatGPT can parse its documentation cleanly, developers will get better answers when they ask, “How do I implement Stripe?”
They’re not alone. Current adopters include Cloudflare, Anthropic, Zapier, Perplexity, Coinbase, Supabase, and Vercel.
Markdown (.md) page copies
Sites are creating stripped-down markdown versions of their regular pages.
The implementation is straightforward: just add .md to any URL. Stripe’s docs.stripe.com/testing becomes docs.stripe.com/testing.md.
Everything gets stripped out except the actual content. No styling. No menus. No footers. No interactive elements. Just pure text and basic formatting.
The thinking: if AI systems don’t have to wade through CSS and JavaScript to find the information they need, they’re more likely to cite your page accurately.
/ai and similar paths
Some sites are building entirely separate versions of their content under /ai/, /llm/, or similar directories.
You might find /ai/about living alongside the regular /about page, or /llm/products as a bot-friendly alternative to the main product catalog.
Sometimes these pages have more detail than the originals. Sometimes they’re just reformatted.
The idea: give AI systems their own dedicated content that’s built for machine consumption, not human eyes.
If a person accidentally lands on one of these pages, they’ll find something that looks like a website from 2005.
JSON metadata feeds
Dell offers a different example. Instead of creating separate pages, the company built structured data feeds that live alongside its regular ecommerce site.
The files contain clean JSON – specs, pricing, and availability.
Everything an AI needs to answer “what’s the best Dell laptop under $1000” without having to parse through product descriptions written for humans.
You’ll typically find these files as /llm-metadata.json or /ai-feed.json in the site’s directory.
```markdown
# Dell Technologies
> Dell Technologies is a leading technology provider, specializing in PCs, servers, and IT solutions for businesses and consumers.

## Product and Catalog Data
- [Product Feed - US Store](https://www.dell.com/data/us/catalog/products.json): Key product attributes and availability.
- [Dell Return Policy](https://www.dell.com/return-policy.md): Standard return and warranty information.

## Support and Documentation
- [Knowledge Base](https://www.dell.com/support/knowledge-base.md): Troubleshooting guides and FAQs.
```
This approach makes the most sense for ecommerce and SaaS companies that already keep their product data in databases.
They’re just exposing what they already have in a format AI systems can easily digest.
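For illustration, a minimal feed of this kind might look like the JSON below. The structure, field names, and values are hypothetical, not Dell's actual schema:

```json
{
  "updated": "2026-01-15",
  "products": [
    {
      "name": "Example 14-inch Laptop",
      "price_usd": 899,
      "availability": "in_stock",
      "specs": { "cpu": "8-core", "ram_gb": 16, "storage_gb": 512 }
    }
  ]
}
```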
Real-world citation data: What actually gets referenced
The theory sounds good. The adoption numbers look impressive.
But do these LLM-optimized pages actually get cited?
The individual analysis
Landwehr, CPO and CMO at Peec AI, ran targeted tests on five websites using these tactics. He crafted prompts specifically designed to surface their LLM-friendly content.
Some queries even contained explicit 20+ word quotes designed to trigger specific sources.
Across nearly 18,000 citations, here’s what he found.
llms.txt: 0.03% of citations
Out of 18,000 citations, only six pointed to llms.txt files.
The six that did work had something in common: they contained genuinely useful information about how to use an API and where to find additional documentation.
The kind of content that actually helps AI systems answer technical questions. The “search-optimized” llms.txt files, the ones stuffed with content and keywords, received zero citations.
Markdown (.md) pages: 0% of citations
Sites using .md copies of their content got cited 3,500+ times. None of those citations pointed to the markdown versions.
The one exception: GitHub, where .md files are the standard URLs.
They’re linked internally, and there’s no HTML alternative. But these are just regular pages that happen to be in markdown format.
/ai pages: 0.5% to 16% of citations
Results varied wildly depending on implementation.
One site saw 0.5% of its citations point to its /ai pages. Another hit 16%.
The difference?
The higher-performing site put significantly more information in its /ai pages than existed anywhere else on the site.
Keep in mind, these prompts were specifically asking for information contained in these files.
Even with prompts designed to surface this content, most queries ignored the /ai versions.
JSON metadata: 5% of citations
One brand saw 85 out of 1,800 citations (5%) come from their metadata JSON file.
The critical detail here is that the file contained information that didn’t exist anywhere else on the website.
Once again, the query specifically asked for those pieces of information.
The large-scale analysis
Instead of testing individual sites, SE Ranking analyzed 300,000 domains to see if llms.txt adoption correlated with citation frequency at scale.
Only 10.13% of domains, or 1 in 10, had implemented llms.txt.
For context, that’s nowhere near the universal adoption of standards like robots.txt or XML sitemaps.
During the study, an interesting relationship between adoption rates and traffic levels emerged.
Sites with 0-100 monthly visits adopted llms.txt at 9.88%.
Sites with 100,001+ visits? Just 8.27%.
The biggest, most established sites were actually slightly less likely to use the file than mid-tier ones.
But the real test was whether llms.txt impacted citations.
SE Ranking built a machine learning model using XGBoost to predict citation frequency based on various factors, including the presence of llms.txt.
The result: removing llms.txt from the model actually improved its accuracy.
The file wasn’t helping predict citation behavior; it was adding noise.
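For readers curious about the method, a feature ablation like the one SE Ranking describes can be sketched in a few lines: train the model with and without the llms.txt flag and compare cross-validated error. The dataset, feature names, and hyperparameters below are placeholders, not SE Ranking's actual pipeline.

```python
import pandas as pd
from xgboost import XGBRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical dataset: one row per domain, with citation counts and site features.
df = pd.read_csv("domains.csv")
features = ["traffic", "backlinks", "domain_age", "has_llms_txt"]

def cv_rmse(cols):
    # Cross-validated root-mean-squared error for a given feature set.
    model = XGBRegressor(n_estimators=200, max_depth=4)
    scores = cross_val_score(model, df[cols], df["citations"],
                             scoring="neg_root_mean_squared_error", cv=5)
    return -scores.mean()

with_file = cv_rmse(features)
without_file = cv_rmse([c for c in features if c != "has_llms_txt"])
print(f"RMSE with llms.txt feature: {with_file:.2f}")
print(f"RMSE without it: {without_file:.2f}")  # lower is better
```

If the error drops when the feature is removed, the feature is contributing noise rather than signal, which is exactly the result described above.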
The pattern
Both analyses point to the same conclusion: LLM-optimized pages get cited when they contain unique, useful information that doesn’t exist elsewhere on your site.
The format doesn’t matter.
Landwehr’s conclusion was blunt: “You could create a 12345.txt file and it would be cited if it contains useful and unique information.”
A well-structured about page achieves the same result as an /ai/about page. API documentation gets cited whether it’s in llms.txt or buried in your regular docs.
The files themselves get no special treatment from AI systems.
The content inside them might, but only if it’s actually better than what already exists on your regular pages.
SE Ranking’s data backs this up at scale. There’s no correlation between having llms.txt and getting more citations.
The presence of the file made no measurable difference in how AI systems referenced domains.
No major AI company has confirmed using llms.txt files in their crawling or citation processes.
Google’s Mueller made the sharpest critique in April 2025, comparing llms.txt to the obsolete keywords meta tag:
“[As far as I know], none of the AI services have said they’re using LLMs.TXT (and you can tell when you look at your server logs that they don’t even check for it).”
Google’s Gary Illyes reinforced this at the July 2025 Search Central Deep Dive in Bangkok, explicitly stating Google “doesn’t support LLMs.txt and isn’t planning to.”
Google Search Central’s documentation is equally clear:
“The best practices for SEO remain relevant for AI features in Google Search. There are no additional requirements to appear in AI Overviews or AI Mode, nor other special optimizations necessary.”
OpenAI, Anthropic, and Perplexity all maintain their own llms.txt files for their API documentation to make it easy for developers to load into AI assistants.
But none have announced their crawlers actually read these files from other websites.
The consistent message from every major platform: standard web publishing practices drive visibility in AI search.
No special files, no new markup, and no separate versions needed.
What this means for SEO teams
The evidence points to a single conclusion: stop building content that only machines will see.
As Mueller put it: “Why would they want to see a page that no user sees?”
If AI companies needed special formats to generate better responses, they would tell you. As he noted:
“AI companies aren’t really known for being shy.”
The data proves him right.
Across Landwehr’s nearly 18,000 citations, LLM-optimized formats showed no advantage unless they contained unique information that didn’t exist anywhere else on the site.
SE Ranking’s analysis of 300,000 domains found that llms.txt actually added confusion to their citation prediction model rather than improving it.
Instead of creating shadow versions of your content, focus on what actually works.
Build clean HTML that both humans and AI can parse easily.
Reduce JavaScript dependencies for critical content, which Mueller identified as the real technical barrier:
“Excluding JS, which still seems hard for many of these systems.”
Heavy client-side rendering creates actual problems for AI parsing.
Around the turn of the year, search industry media fills up with reviews and predictions. Bold, disruptive ideas steal the spotlight and trigger a sense of FOMO (fear of missing out).
However, sustainable online sales growth doesn’t come from chasing the next big trend. In SEO, what truly matters stays the same.
FOMO is bad for you
We regularly get excited about the next big thing. Each new idea is framed as a disruptive force that will level the playing field.
Real shifts do happen, but they are rare. More often, the promised upheaval fades into a storm in a teacup.
Over the years, search has introduced many innovations that now barely raise an eyebrow. Just a few examples:
Voice search.
Universal Search.
Google Instant.
The Knowledge Graph.
HTTPS as a ranking signal.
RankBrain.
Mobile-first indexing.
AMP.
Featured snippets and zero-click searches.
E-A-T and E-E-A-T.
Core Web Vitals.
Passage indexing.
AI Overviews.
Some claimed these developments would revolutionize SEO or wipe it out entirely. That never happened.
The latest addition to the SEO hype cycle, LLMs and AI, fits neatly into this list. After the initial upheaval, the excitement has already started to fade.
The benefits of LLMs are clear in some areas, especially coding and software development. AI tools boost efficiency and significantly shorten production cycles.
In organic search, however, their impact remains limited, despite warnings from attention-seeking doomsayers. No AI-driven challenger has captured meaningful search market share.
Beyond ethical concerns about carbon footprint and extreme energy use, accuracy remains the biggest hurdle. Because they rely on unverified inputs, LLM-generated answers often leave users more confused than informed.
AI-driven platforms still depend on crawling the web and using core SEO signals to train models and answer queries. Like any bot, they need servers and content to be accessible and crawlable.
It also explains why Google is likely to remain the dominant force in ecommerce search for the foreseeable future. For now, a critical mass of users will continue to rely on Google as their search engine of choice.
It’s all about data
Fundamentally, it makes little difference whether a business focuses on Google, LLM-based alternatives, or both. All search systems depend on crawled data, and that won’t change.
Fast, reliable, and trustworthy indexing signals sit at the core of every ranking system. Instead of chasing hype, brands and businesses are better served by focusing on two core areas: their customers’ needs and the crawlability of their web platforms.
Customer needs always come first.
Most users do not care whether a provider uses the latest innovation. They care about whether expectations are met and promises are kept. That will not change.
Meeting user expectations will remain a core objective of SEO.
Crawlability is just as critical. A platform that cannot be properly crawled or indexed has no chance in competitive sectors such as retail, travel, marketplaces, news, or affiliate marketing.
Making sure bots can crawl a site, and algorithms can clearly understand the unique value of its content, will remain a key success factor in both SEO and GEO for the foreseeable future.
Won’t change: Uncrawled content won’t rank
Other factors are unlikely to change as well, including brand recognition, user trust, ease of use, and fast site performance.
These factors have always mattered and will continue to do so. They only support SEO and GEO if a platform can be properly crawled and understood. That is why regular reviews of technical signals are a critical part of a successful online operation.
Won’t change: Server errors prevent indexing by any bot
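A recurring technical review doesn't need heavy tooling. A minimal sketch like the one below, with your own domain and key URLs substituted in, checks the two basics this section stresses: that robots.txt isn't blocking important paths and that those pages return clean 200 responses.

```python
import requests
from urllib.robotparser import RobotFileParser

SITE = "https://www.example.com"  # placeholder domain
PAGES = ["/", "/category/bestsellers/", "/guides/buying-guide/"]  # your key URLs

# First, confirm bots are even allowed to fetch these paths.
robots = RobotFileParser(f"{SITE}/robots.txt")
robots.read()

for path in PAGES:
    url = SITE + path
    if not robots.can_fetch("*", url):
        print(f"{url}: blocked by robots.txt")
        continue
    resp = requests.get(url, timeout=10)
    # Anything other than 200 here is a potential indexing blocker for any bot.
    print(f"{url}: HTTP {resp.status_code}")
```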
At the start of a new year, you should resist the fear of missing out on the latest novelty. Following the herd rarely helps anyone stand out.
A better approach is to focus on what is certain to remain consistent in 2026 and beyond.
What to do next
Publishers can breathe a sigh of relief. There is no need to rush into a new tool just because everyone else is. Adopt it if it makes sense, but no tool alone will make a business thrive.
Focus on what you do best and make it even better. Your customers will notice and appreciate it.
At the same time, make sure your web platform is fast and reliable, that your most relevant content is regularly re-crawled, and that bots clearly understand its purpose. These are the SEO and GEO factors that will endure.
Holistic SEO is both an art and a science. While it is far more complex in 2026, it is the unchanging foundational signals that matter most.
Search visibility isn’t what it used to be. Rankings still matter, but they’re no longer the whole story.
Today, discovery happens across traditional search results, local listings, brand knowledge panels, and increasingly, AI-driven experiences that surface answers without a click. For marketers, that makes visibility harder to measure — and easier to lose.
SEO teams now operate in a landscape where accuracy, consistency, and trust signals matter as much as keywords. Business information, reviews, and brand authority determine whether a brand shows up at all, especially as AI-powered search reshapes how results are generated and displayed. As a result, many brands think they’re visible — until they look closer.
The Visibility Brief was created to show you what’s really happening. Built on real data from thousands of brands, it provides a practical view of how visibility plays out across today’s search and discovery ecosystem.
Instead of focusing on a single channel or metric, it takes a broader view. The content highlights where brands are gaining ground, where gaps appear, and which trends are shaping performance.
You’ll see how traditional search and AI-driven discovery now overlap, why data accuracy has become a baseline requirement, and where brands are losing exposure without realizing it.
The goal is simple: help you understand how visibility is changing and what to focus on now.