Google now uses Gemini 3 Pro to generate some AI Overviews in Google Search. Google said Gemini 3 Pro handles AI Overviews for more complex queries.
Gemini 3 Pro is used to generate AI Overviews for complex queries in English, globally for Google AI Pro & Ultra subscribers.
What Google said. Robby Stein, VP of Product at Google Search, wrote:
“Update: AI Overviews now tap into Gemini 3 Pro for complex topics.”
“Behind the scenes, Search will intelligently route your toughest Qs to our frontier model (just like we do in AI Mode) while continuing to use faster models for simpler tasks.”
“Live in English globally for Google AI Pro & Ultra subs.”
Why we care. AI Overviews may be very different today than they were a week or so ago. Google will continue to improve its Gemini models and work those upgraded models into Google Search, including AI Overviews and AI Mode.
Some Google AI Overviews now use Gemini 3 Pro
Founded in 2022, Perplexity offers an AI-powered search engine.
AI tools offer a new way to search for factual information, where Perplexity stands out as an AI-native search engine that combines large language models with real-time web search.
With a valuation of $20 billion and a growing user base of 30 million monthly active users, Perplexity is one of the fastest-growing tech startups challenging Google’s dominance with its AI-native search engine.
From the number of Perplexity active users to company revenue, we’ll cover the latest stats about the popular AI search engine on this page.
Key Perplexity Stats
Perplexity has 30 million monthly active users.
Perplexity processes around 600 million search queries a month.
Lifetime downloads of Perplexity mobile apps have reached 80.5 million.
According to Perplexity AI's CEO, the search engine processes around 600 million queries per month as of April 2025. That's an increase from the 400 million reported in October 2024.
Here’s an overview of Perplexity AI monthly search volume over time since X:
According to the latest estimates, the Perplexity AI website received 239.7 million visits worldwide in November 2025, showing a 13.21% decrease compared to October 2025.
Here’s a website traffic breakdown of the Perplexity AI website since September 2025:
According to recent estimates, Perplexity AI app downloads across Google Play and the App Store have reached an estimated 80.5 million lifetime installs, including 5.1 million in November 2025 alone.
Perplexity AI had the highest number of app downloads in October 2025, with 15.5 million monthly installs worldwide.
Here’s a table with Perplexity AI app downloads over time since January 2024:
Perplexity AI User and Revenue Statistics
We fully decrypted Google’s SearchGuard anti-bot system, the technology at the center of its recent lawsuit against SerpAPI.
After fully deobfuscating the JavaScript code, we now have an unprecedented look at how Google distinguishes human visitors from automated scrapers in real time.
What happened. Google filed a lawsuit on Dec. 19 against Texas-based SerpAPI LLC, alleging the company circumvented SearchGuard to scrape copyrighted content from Google Search results at a scale of “hundreds of millions” of queries daily. Rather than targeting terms-of-service violations, Google built its case on DMCA Section 1201 – the anti-circumvention provision of copyright law.
The complaint describes SearchGuard as “the product of tens of thousands of person hours and millions of dollars of investment.”
Why we care. The lawsuit reveals exactly what Google considers worth protecting – and how far it will go to defend it. For SEOs and marketers, understanding SearchGuard matters because any large-scale automated interaction with Google Search now triggers this system. If you’re using tools that scrape SERPs, this is the wall they’re hitting.
The OpenAI connection
Here’s where it gets interesting: SerpAPI isn’t just any scraping company.
OpenAI has been partially using Google search results scraped by SerpAPI to power ChatGPT’s real-time answers. SerpAPI listed OpenAI as a customer on its website as recently as May 2024, before the reference was quietly removed.
Google declined OpenAI’s direct request to access its search index in 2024. Yet ChatGPT still needed fresh search data to compete.
The solution? A third-party scraper that pillages Google’s SERPs and resells the data.
Google isn’t attacking OpenAI directly. It’s targeting a key link in the supply chain that feeds its main AI competitor.
The timing is telling. Google is striking at the infrastructure that powers rival search products — without naming them in the complaint.
What we found inside SearchGuard
We fully decrypted version 41 of the BotGuard script – the technology underlying SearchGuard. The script opens with an unexpectedly friendly message:
/* Anti-spam. Want to say hello? Contact botguard-contact@google.com */
Behind that greeting sits one of the most sophisticated bot detection systems ever deployed.
BotGuard vs. SearchGuard. BotGuard is Google’s proprietary anti-bot system, internally called “Web Application Attestation” (WAA). Introduced around 2013, it now protects virtually all Google services: YouTube, reCAPTCHA v3, Google Maps, and more.
In its complaint against SerpAPI, Google revealed that the system protecting Search specifically is called “SearchGuard” – presumably the internal name for BotGuard when applied to Google Search. This is the component that was deployed in January 2025, breaking nearly every SERP scraper overnight.
Unlike traditional CAPTCHAs that require clicking images of traffic lights, BotGuard operates completely invisibly. It continuously collects behavioral signals and analyzes them using statistical algorithms to distinguish humans from bots – all without the user knowing.
The code runs inside a bytecode virtual machine with 512 registers, specifically designed to resist reverse engineering.
How Google knows you’re human
The system tracks four categories of behavior in real time. Here’s what it measures:
Mouse movements
Humans don’t move cursors in straight lines. We follow natural curves with acceleration and deceleration – tiny imperfections that reveal our humanity.
Google tracks:
Trajectory (path shape)
Velocity (speed)
Acceleration (speed changes)
Jitter (micro-tremors)
A “perfect” mouse movement – linear, constant speed – is immediately suspicious. Bots typically move in precise vectors or teleport between points. Humans are messier.
Detection threshold: Mouse velocity variance below 10 flags as bot behavior. Normal human variance falls between 50 and 500.
Keyboard rhythm
Everyone has a unique typing signature. Google measures:
Inter-key intervals (time between keystrokes)
Key press duration (how long each key is held)
Error patterns
Pauses after punctuation
A human typically shows 80-150ms variance between keystrokes. A bot? Often less than 10ms with robotic consistency.
Detection threshold: Key press duration variance under 5ms indicates automation. Normal human typing shows 20-50ms variance.
Scroll behavior
Natural scrolling has variable velocity, direction changes, and momentum-based deceleration. Programmatic scrolling is often too smooth, too fast, or perfectly uniform.
Google measures:
Amplitude (how far)
Direction changes
Timing between scrolls
Smoothness patterns
Scrolling in fixed increments – 100px, 100px, 100px – is a red flag.
Detection threshold: Scroll delta variance under 5px suggests bot activity. Humans typically show 20-100px variance.
Timing jitter
This is the killer signal. Humans are inconsistent, and that’s exactly what makes us human.
Google uses Welford’s algorithm to calculate variance in real-time with constant memory usage – meaning it can analyze patterns without storing massive amounts of data, regardless of how many events occur. As each event arrives, the algorithm updates its running statistics.
If your action intervals have near-zero variance, you’re flagged.
The math: If timing follows a Gaussian distribution with natural variance, you’re human. If it’s uniform or deterministic, you’re a bot.
Detection threshold: Event counts exceeding 200 per second indicate automation. Normal human interaction generates 10-50 events per second.
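The thresholds above can be turned into a short sketch. This is an illustrative model built only from the numbers quoted in this article, not Google's actual code: compute the variance of a stream of measurements and flag it when it falls below the floor for that signal, or when the event rate is faster than a human could sustain.

```python
from statistics import pvariance

# Variance floors per signal, taken from the thresholds quoted above
# (illustrative model only, not the deobfuscated script itself).
VARIANCE_FLOORS = {
    "mouse_velocity": 10,   # human variance is typically 50-500
    "key_press_ms": 5,      # human variance is typically 20-50 ms
    "scroll_delta_px": 5,   # human variance is typically 20-100 px
}

MAX_EVENTS_PER_SECOND = 200  # sustained rates above this suggest automation

def looks_automated(signal, samples, events_per_second=0):
    """Flag a sample stream whose variance is suspiciously low,
    or an event rate no human could sustain."""
    if events_per_second > MAX_EVENTS_PER_SECOND:
        return True
    return pvariance(samples) < VARIANCE_FLOORS[signal]
```

For example, robotically consistent keystroke durations like `[100, 100, 101, 100]` have a variance of about 0.19 ms and get flagged, while jittery human timings like `[120, 180, 95, 160]` pass.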
The 100+ DOM elements Google monitors
Beyond behavior, SearchGuard fingerprints your browser environment by monitoring over 100 HTML elements. The complete list extracted from the source code includes:
High-priority elements (forms): BUTTON, INPUT – these receive special attention because bots often target interactive elements.
Structure: ARTICLE, SECTION, NAV, ASIDE, HEADER, FOOTER, MAIN, DIV
SearchGuard also collects extensive browser and device data:
Navigator properties:
userAgent
language / languages
platform
hardwareConcurrency (CPU cores)
deviceMemory
maxTouchPoints
Screen properties:
width / height
colorDepth / pixelDepth
devicePixelRatio
Performance:
performance.now() precision
performance.timeOrigin
Timer jitter (fluctuations in timing APIs)
Visibility:
document.hidden
visibilityState
hasFocus()
WebDriver detection: The script specifically checks for signatures that betray automation tools:
navigator.webdriver (true if automated)
window.chrome.runtime (absent in headless mode)
ChromeDriver signatures ($cdc_ prefixes)
Puppeteer markers ($chrome_asyncScriptInfo)
Selenium indicators (__selenium_unwrapped)
PhantomJS artifacts (_phantom)
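As a toy illustration of this kind of check, here is a Python sketch that scans a snapshot of global properties for the markers listed above. The marker names come from the article; the snapshot dict is a mock standing in for the browser's `window`/`navigator` objects, since the real checks run in JavaScript inside the page.

```python
# Property-name prefixes that betray automation tools
# (names as listed in the deobfuscated script).
PROPERTY_MARKERS = (
    "$cdc_",                    # ChromeDriver-injected properties
    "$chrome_asyncScriptInfo",  # Puppeteer
    "__selenium_unwrapped",     # Selenium
    "_phantom",                 # PhantomJS
)

def automation_markers(snapshot):
    """Return the automation signatures present in a property snapshot (a dict
    mocking the browser's global objects)."""
    hits = []
    if snapshot.get("navigator.webdriver") is True:
        hits.append("navigator.webdriver")
    for prop in snapshot:
        for marker in PROPERTY_MARKERS:
            if prop.startswith(marker):
                hits.append(marker)
    return hits
```

A headless ChromeDriver session, for instance, exposes `navigator.webdriver = true` plus randomly suffixed `$cdc_` properties, and both would show up in the returned list.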
Why bypasses become obsolete in minutes
Here’s the critical discovery: SearchGuard uses a cryptographic system that can invalidate any bypass within minutes.
The script generates encrypted tokens using an ARX cipher (Addition-Rotation-XOR) – similar to Speck, a family of lightweight block ciphers released by the NSA in 2013 and optimized for software implementations on devices with limited processing power.
But there’s a twist.
The magic constant rotates. The cryptographic constant embedded in the cipher isn’t fixed. It changes with every script rotation.
Observed values from our analysis:
Timestamp 16:04:21: Constant = 1426
Timestamp 16:24:06: Constant = 3328
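To make “Addition-Rotation-XOR” concrete, here is a toy Speck-like round function. This illustrates only the ARX pattern and the effect of a rotating constant; it is not Google's actual cipher, and the round structure and parameters are assumptions for illustration.

```python
MASK32 = 0xFFFFFFFF  # work in 32-bit words

def _ror(v, r):  # rotate right within 32 bits
    return ((v >> r) | (v << (32 - r))) & MASK32

def _rol(v, r):  # rotate left within 32 bits
    return ((v << r) | (v >> (32 - r))) & MASK32

def arx_round(x, y, k):
    """One Speck-like round: modular Addition, Rotation, and XOR."""
    x = ((_ror(x, 8) + y) & MASK32) ^ k  # add, then mix in the round constant
    y = _rol(y, 3) ^ x                   # rotate and diffuse
    return x, y

def toy_token(payload, constant, rounds=8):
    """Scramble a 64-bit payload; a different constant yields a different token."""
    x, y = payload >> 32, payload & MASK32
    for _ in range(rounds):
        x, y = arx_round(x, y, constant)
    return (x << 32) | y
```

Feeding the same payload through with the two constants observed during the analysis (1426 and 3328) produces completely different tokens, which is why a bypass hardcoded against one script version stops validating as soon as the constant rotates.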
The script itself is served from URLs with integrity hashes: //www.google.com/js/bg/{HASH}.js. When the hash changes, the cache invalidates, and every client downloads a fresh version with new cryptographic parameters.
Even if you fully reverse-engineer the system, your implementation becomes invalid with the next update.
It’s cat and mouse by design.
The statistical algorithms
Two algorithms power SearchGuard’s behavioral analysis:
Welford’s algorithm calculates variance in real time with constant memory usage – meaning it processes each event as it arrives and updates a running statistical summary, without storing every past interaction. Whether the system has seen 100 or 100 million events, memory consumption stays the same.
Reservoir sampling maintains a random sample of 50 events per metric to estimate median behavior. This provides a representative sample without storing every interaction.
Combined, these algorithms build a statistical profile of your behavior and compare it against what humans actually do.
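A minimal sketch of the two techniques, as textbook implementations rather than the deobfuscated code itself:

```python
import random

class Welford:
    """Streaming mean/variance in O(1) memory: each event updates the
    running statistics, and no past events are stored."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def add(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):  # population variance of everything seen so far
        return self.m2 / self.n if self.n else 0.0

class Reservoir:
    """Uniform random sample of at most k events from an unbounded stream."""
    def __init__(self, k=50):  # 50 events per metric, as described above
        self.k, self.seen, self.sample = k, 0, []

    def add(self, x):
        self.seen += 1
        if len(self.sample) < self.k:
            self.sample.append(x)
        else:
            j = random.randrange(self.seen)  # replace with decreasing probability
            if j < self.k:
                self.sample[j] = x
```

Feed timing deltas into a `Welford` instance per metric: if the variance stays near zero after many events, the stream is machine-like, while the `Reservoir` keeps a fixed-size sample for estimating median behavior.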
SerpAPI’s response
SerpAPI’s founder and CEO, Julien Khaleghy, shared this statement with Search Engine Land:
“SerpApi has not been served with Google’s complaint, and prior to filing, Google did not contact us to raise any concerns or explore a constructive resolution. For more than eight years, SerpApi has provided developers, researchers, and businesses with access to public search data. The information we provide is the same information any person can see in their browser without signing in. We believe this lawsuit is an effort to stifle competition from the innovators who rely on our services to build next-generation AI, security, browsers, productivity, and many other applications.”
The defense may face challenges. The DMCA doesn’t require content to be non-public – it prohibits circumventing technical protection measures, period. If Google proves SerpAPI deliberately bypassed SearchGuard protections, the “public data” argument may not hold.
What this means for SEO – and the bigger picture
If you’re building SEO tools that programmatically access Google Search, 2025 was brutal.
In January, Google deployed SearchGuard. Nearly every SERP scraper suddenly stopped returning results. SerpAPI had to scramble to develop workarounds – which Google now calls illegal circumvention.
Then in September, Google removed the num=100 parameter – a long-standing URL trick that allowed tools to retrieve 100 results in a single request instead of 10. Officially, Google said it was “not a formally supported feature.” But the timing was telling: forcing scrapers to make 10x more requests dramatically increased their operational costs. Some analysts suggested the move specifically targeted AI platforms like ChatGPT and Perplexity that relied on mass scraping for real-time data.
The combined effect: traditional scraping approaches are increasingly difficult and expensive to maintain.
For the industry: This lawsuit could reshape how courts view anti-scraping measures. If SearchGuard qualifies as a valid “technological protection measure” under DMCA, every platform could deploy similar systems with legal teeth.
Under DMCA Section 1201, statutory damages range from $200 to $2,500 per circumvention act. With hundreds of millions of alleged violations daily, the theoretical liability is astronomical – though Google’s complaint acknowledges that “SerpApi will be unable to pay.”
The message isn’t about money. It’s about setting precedent.
Meanwhile, the antitrust case rolls on. Judge Mehta ordered Google to share its index and user data with “Qualified Competitors” at marginal cost. One hand is being forced open while the other throws punches.
Google’s position: “You want our data? Go through the antitrust process and the technical committee. Not through scraping.”
Here’s the uncomfortable truth: Google technically offers publishers controls, but they’re limited. Google-Extended allows publishers to opt out of AI training for Gemini models and Vertex AI – but it doesn’t apply to Search AI features including AI Overviews.
“AI is built into Search and integral to how Search functions, which is why robots.txt directives for Googlebot is the control for site owners to manage access to how their sites are crawled for Search.”
Court testimony from DeepMind VP Eli Collins during the antitrust trial confirmed this separation: content opted out via Google-Extended could still be used by the Search organization for AI Overviews, because Google-Extended isn’t the control mechanism for Search.
The only way to fully opt out of AI Overviews? Block Googlebot entirely – and lose all search traffic.
Publishers face an impossible choice: accept that your content feeds Google’s AI search products, or disappear from search results altogether.
This analysis is based on version 41 of the BotGuard script, extracted and deobfuscated from challenge data in January 2026. The information is provided for informational purposes only.
Inside SearchGuard: How Google detects bots and what the SerpAPI lawsuit reveals
Less than 200 years ago, scientists were ridiculed for suggesting that hand washing might save lives.
In the 1840s, it was shown that hygiene reduced death rates, but the underlying explanation was missing.
Without a clear mechanism, adoption stalled for decades, leading to countless preventable deaths.
The joke of the past becomes the truth of today. The inverse is also true when you follow misleading guidance.
Bad GEO advice (I don’t like this acronym, but will use it because it seems to be the most popular) will not literally kill you.
That said, it can definitely cost money, cause unemployment, and lead to economic death.
Not long ago, I wrote about a similar topic and explained why unscientific SEO research is dangerous and acts as a marketing instrument rather than real scientific discovery.
This article is a continuation of that work and provides a framework to make sense of the myths surrounding AI search optimization.
I will highlight three concrete GEO myths, examine whether they are true, and explain what I would do if I were you.
If you’re pressed for time, here’s a TL;DR:
We fall for bad GEO and SEO advice because of ignorance, stupidity, cognitive biases, and black-and-white thinking.
To evaluate any advice, you can use the ladder of misinference – statement vs. fact vs. data vs. evidence vs. proof.
You become more knowledgeable if you seek dissenting viewpoints, consume with the intent to understand, pause before you believe, and rely less on AI.
You currently:
Don’t need an llms.txt.
Should leverage schema markup even if AI chatbots don’t use it today.
Have to keep your content fresh, especially if it matters for your queries.
Before we dive in, I will recap why we fall for bad advice.
Recap: Why we fall for bad GEO and SEO advice
The reasons are:
Ignorance, stupidity, and amathia (voluntary stupidity).
Cognitive biases, such as confirmation bias.
Black-and-white thinking.
We are ignorant because we don’t know better yet. We are stupid if we can’t know better. Both are neutral.
We suffer from amathia when we refuse to know better, which is why it’s the worst of the three.
We all suffer from biases. When it comes to articles and research, confirmation bias is probably the most prevalent.
We refuse to see flaws in how we see things and instead seek out flaws, often with great effort, in rival theories or remain blind to them.
Lastly, we struggle with black-and-white thinking. Everything is this or that, never something in between. A few examples:
Backlinks are always good.
Reddit is always important for AI search.
Blocking AI bots is always stupid.
The truth is, the world consists of many shades of gray. This idea is captured well in the book “May Contain Lies” by Alex Edmans.
He says something can be moderate, granular, or marbled:
Backlinks are not always good or important, as they lose their impact after a certain point (moderate).
Reddit isn’t always important for AI search if it’s not cited at all for the relevant prompt set (granular).
Blocking some AI bots isn’t always stupid because, for some business models and companies, it makes perfect sense (marbled).
The first step to getting better is always awareness. And we are all sometimes ignorant, (voluntarily or involuntarily) stupid, biased, or prone to black-and-white thinking.
Let’s get more practical now that we know why we fall for bad advice.
How I evaluate GEO (and SEO) advice and protect myself from being stupid
One way to save yourself is the ladder of misinference, once again borrowing from Edmans’ book. It looks like this:
To accept something as proof, it needs to climb the rungs of the ladder.
On closer inspection, many claims fail at the last rung when it comes to evidence versus proof.
To give you an example:
Statement: “User signals are an important factor for better organic performance.”
Fact: Better CTR performance can lead to better rankings.
Data: You can directly measure this on your own site, and several experiments showed the impact of user signals long before it became common knowledge.
Evidence: There are experiments demonstrating causal effects, and a well-known portion of the 2024 Google leak focuses on evaluating user signals.
Proof: Court documents in Google’s DOJ monopoly trial confirmed the data and evidence, making this universally true.
Fun fact: Rand Fishkin and Marcus Tandler both said that user signals matter many years ago and were laughed at, much like scientists in the 1800s.
At the time, the evidence wasn’t strong enough. Today, their “joke” is now the truth.
If I were you, here’s what I would do:
Seek dissenting viewpoints: You only truly understand something when you can argue in its favor. The best defense is steelmanning your argument. To do that, you need to fully understand the other side.
Consume with the intent to understand: Too often, we listen to reply, which means we don’t listen at all and instead converse with ourselves in our own heads. We focus on our own arguments and what we will say next. To understand, you need to listen actively.
Pause before you share and believe: False information is highly contagious, so sharing half-truths or lies is dangerous. You also shouldn’t believe something simply because a well-known person said it or because it’s repeated over and over again.
We will see why the last point is a big problem in a second.
The prime example: Blinding AI workslop
I decided against finger-pointing, so there is no link or mention of who this is about. With a bit of research, you might find the example yourself.
This “research” was promoted in the following way:
“How AI search really works.”
Requiring a time investment of weeks.
19 studies and six case studies analyzed.
Validated, reviewed, and stress-tested.
To quote Edmans:
“It’s not for the authors to call their findings groundbreaking. That’s for the reader to judge. You need to shout about the conclusiveness of your proof or the novelty of your results. Maybe they’re not strong enough to speak for themselves. … It doesn’t matter what fancy name you give your techniques or how much data you gather. Quantity is no substitute for quality.”
Just because something took a long time does not mean the results are good.
Just because the author or authors say so does not mean the findings are groundbreaking.
“AI-generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.”
I don’t have proof this work was AI-generated. It’s simply how it felt when I read it myself, with no skimming or AI summaries.
Here are a few things that caught my attention:
It doesn’t deliver what it claims. It purports to explain how AI search works, but instead lists false correlations between studies that analyzed something different from what the analysis claims.
Reported sample sizes are inaccurate.
Studies and articles are mishmashed.
One source is a “someone said something that someone said something that someone said.”
Cited research didn’t analyze or conclude what is claimed in the meta-analysis.
The “correlation coefficient” isn’t a correlation coefficient, but a weighted score.
To be specific, it misdates the GEO study as 2024 instead of 2023 and claims the research “confirms” that schema markup, lists, and FAQ blocks significantly improve inclusion in AI responses. A review of the study shows that it makes no such claims.
This analysis looks convincing on the surface and masquerades as good work, but on closer inspection, it crumbles under scrutiny.
Disclaimer: I specifically wanted to highlight one example because it reflects everything I wrote about in my last article and serves as a perfect continuation.
This “research” was shared in newsletters, news sites, and roundups. It got a lot of eyeballs.
Let’s now take a look at the three, in my opinion, most pervasive recommendations for influencing the rate of your AI citations.
‘Create an llms.txt’
The claims for why this should help:
AI chatbots have a centralized source of important information to use for citations.
It’s a lightweight file that makes it easier for AI crawlers to evaluate your domain.
When viewed through the ladder of misinference, the llms.txt claim is a statement.
Some parts are factual – for example, Google and others crawl these files, and Google even indexes and ranks them for keywords – and there is data to support that.
However, there is no data or evidence showing that llms.txt files boost AI inclusion. There is certainly no proof.
It was repeated often enough to become one of the more tiring talking points in black-and-white debates.
One side dismisses it entirely, while the other promotes it as a secret holy grail that will solve all AI visibility problems.
The original proposal also stated:
“We furthermore propose that pages on websites that have information that might be useful for LLMs to read provide a clean markdown version of those pages at the same URL as the original page, but with .md appended.”
This approach would lead to internal competition, duplicate content, and an unnecessary increase in total crawl volume.
The only scenario where llms.txt makes sense is if you operate a complex API that AI agents can meaningfully benefit from.
‘Use schema markup’
The last point is egregious. No one has a direct quote from Fabrice Canel or the exact context in which he supposedly said this.
For this recommendation, there is no solid data or evidence.
The reality is this:
For training
Text is extracted and HTML elements are stripped.
Tokenization before pretraining destroys coherent code if markup makes it through to this step.
The existence of LLMs is based on structuring unstructured content.
They can handle schema and write it because they are trained to do so.
That doesn’t mean your individual markup plays a role in the knowledge of the foundation model.
For grounding
There is no evidence that AI chatbots access schema markup.
Correlation studies show that websites with schema markup have better AI visibility, but there are many rival theories that could explain this.
Recent experiments (including this and this) showed the opposite. The tools AI chatbots can access don’t use the HTML.
I recently tested this in Perplexity Comet. Even with an open DOM, it hallucinated schema markup on the page that didn’t match what was actually there.
Also, when someone says they use structured data, that can – but does not have to – mean schema.
All schema is structured data, but not all structured data is schema. In most cases, they mean proper HTML elements such as tables and lists.
So, if I were you, here’s what I would do:
Use schema markup for supported rich results.
Use all relevant properties in your schema markup.
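As a concrete example of “all relevant properties,” here is a minimal schema.org Article object and the JSON-LD tag it would be embedded in. This is a generic schema.org sketch; every value is a placeholder, and which properties matter depends on the rich result you are targeting.

```python
import json

# A minimal schema.org Article; all values here are placeholders.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example headline",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2026-01-10",
    "dateModified": "2026-01-19",  # keep consistent with on-page and sitemap dates
    "image": "https://example.com/cover.jpg",
}

# The markup as it would appear in the page <head>.
json_ld = '<script type="application/ld+json">{}</script>'.format(
    json.dumps(article_schema, indent=2)
)
```

Filling in the optional properties (author, dates, image) rather than just the required ones is what makes the markup useful as a hygiene factor.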
You might ask why I recommend this. To me, solid schema markup is a hygiene factor of good SEO.
Just because AI chatbots and agents don’t use schema today doesn’t mean they won’t in the future.
“One could say the same for llms.txt.” That’s true. However, llms.txt has no SEO benefits.
Schema markup doesn’t help us improve how AI systems process our content directly.
Instead, it helps improve signals they frequently look at, such as search rankings, both in the top 10 and beyond for fan-out queries.
‘Provide fresh content’
The claims for why this should help:
AI chatbots prefer fresh content.
Fresh content is important for some queries and prompts.
Newer or recently updated content should be more accurate.
Compared with llms.txt and schema markup, this recommendation stands on a much more solid foundation in terms of evidence and data.
The reality is that foundation models contain content up to the end of 2022.
After digesting that information, they need fresh content, which means cited sources, on average, have to be more recent.
If freshness is relevant to a query – OpenAI, Anthropic, and Perplexity use freshness as a signal to determine whether to use web search – then finding fresh sources matters.
The researchers used API results, not the user interface. Results differ because of chatbot system prompts and API settings. Surfer recently published a study showing how large those differences can be.
Asking a model to rerank is not how the model or chatbot actually reranks results in the background.
The way dates were injected was highly artificial, with a perfect inverse correlation that may exaggerate the results.
That said, this recommendation appears to have the strongest case for meaningfully influencing AI visibility and increasing citations.
So, if I were you, here’s what I would do:
Add a relevant date indicating when your content was last updated.
Keep update dates consistent:
On-page.
Schema markup.
Sitemap lastmod.
Update content regularly, especially for queries where freshness matters. Fan-out queries from AI chatbots often signal freshness when a date is included.
Never artificially update content by changing only the date. Google stores up to 20 past versions of a web page and can detect manipulation.
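Keeping those three dates in sync is easy to automate. Here is a sketch with a hypothetical helper and a naive regex extraction; a real pipeline would use proper HTML and XML parsers, and the visible “Updated:” label pattern is a site-specific assumption.

```python
import json
import re

def extract_update_dates(html, sitemap_xml, url):
    """Collect the last-updated date from the three places it should appear."""
    dates = {}
    # On-page: a visible "Updated: YYYY-MM-DD" label (site-specific assumption).
    m = re.search(r"Updated:\s*(\d{4}-\d{2}-\d{2})", html)
    if m:
        dates["on_page"] = m.group(1)
    # Schema markup: dateModified inside the JSON-LD block.
    m = re.search(r'<script type="application/ld\+json">(.*?)</script>', html, re.S)
    if m:
        dates["schema"] = json.loads(m.group(1)).get("dateModified", "")[:10]
    # Sitemap: the <lastmod> of this URL's entry.
    pattern = r"<loc>" + re.escape(url) + r"</loc>\s*<lastmod>(\d{4}-\d{2}-\d{2})"
    m = re.search(pattern, sitemap_xml)
    if m:
        dates["sitemap"] = m.group(1)
    return dates

def dates_consistent(dates):
    """True only if all three dates were found and they agree."""
    return len(dates) == 3 and len(set(dates.values())) == 1
```

Run a check like this after each content update and any drift between the on-page date, `dateModified`, and `lastmod` surfaces immediately.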
In other words, this one appears to be legitimate.
We have to avoid shoveling AI search misinformation into the walls of our industry.
Otherwise, it will become the asbestos we eventually have to dig out.
An attention-grabbing headline should always raise red flags.
I understand the allure of believing what appears to be the consensus or using AI to summarize. It’s easier. We’re all busy.
The issue is that there was already too much content to consume before AI. Now there’s even more because of it.
We can’t consume and analyze everything, so we rely on the same tools not only to generate content, but also to consume it.
It’s a snake-biting-its-own-tail problem.
Our compression culture risks creating a vortex of AI search misinformation that feeds back into the training data of the AI chatbots we both love and hate.
We’re already there. AI chatbots sometimes answer GEO questions from model knowledge.
Take the time to think for yourself and get your hands dirty.
Try to understand why something should or shouldn’t work.
And never take anything at face value, no matter who said it. Authority isn’t accuracy.
GEO myths: This article may contain lies
Before you apply for a new role, it's important to prepare for marketing salary negotiations so you can pursue fair pay with practical, realistic expectations.
Whether you work in SEO, PPC, or somewhere in between, salaries remain a contentious topic.
They are often hard to discuss, difficult to quantify, and challenging to change.
While many resources cover salary negotiations in general, this article focuses specifically on negotiating pay for marketing roles.
Difficulties with marketing salaries
Several factors make marketing roles harder to benchmark than many other professions, complicating salary expectations and negotiations.
No industry standard
Unlike fields with national governing bodies and defined career grades, marketing lacks standardization.
This makes it difficult to align salary bands across companies or compare roles on an equal footing.
Inconsistent job titles
Job titles in marketing vary widely.
A VP of marketing at one company may perform duties similar to a junior account manager elsewhere, while in another organization, the title represents the most senior marketing leader.
Because titles are used inconsistently, it can be challenging to assess role seniority and determine which salary ranges are appropriate.
Major market shifts in recent years
Marketers who last negotiated pay during the COVID-driven digital boom of 2020-2021 may find today’s job market markedly different.
Just five years ago, businesses rapidly shifted to online-first marketing, driving strong demand for digital talent.
Performance and organic marketers benefited from a candidate-favorable market, with new roles being created, frequent poaching, and rising salaries.
Today, conditions have changed. The rise of AI, global economic uncertainty, and company downsizing have reduced salary pressure for many marketing roles.
There is also more uncertainty around job stability, leading fewer marketers to change roles unless necessary.
As a result, the salary levels seen in 2020-2021 are largely a thing of the past.
Well-paid marketing roles still exist, but they are harder to find. That reality should inform your salary negotiations, not discourage them.
Some marketing channels can be misunderstood
Less marketing-savvy companies often advertise a single role intended to cover three or more distinct specializations, typically at bottom-of-the-market pay.
Even organizations that better understand marketing skill sets may struggle to grasp the full complexity and breadth of knowledge required to perform a role effectively.
This can lead to significant undervaluation of marketers.
Given that marketing salaries can be difficult for employers to navigate, how can you ensure you are fairly compensated for your experience and expertise?
The following 10 tips can be broadly grouped into four areas:
Know what you bring to the table.
Know what is realistic.
Identify and demonstrate what is valuable to the company.
Stick to your boundaries.
Know what you bring to the table
We’ll start with the side of salary negotiations that, for some, can be very difficult: accurately valuing your own skill set.
If you are in a position to negotiate a salary, you have either already been offered the job or you work for the company and are hoping to secure a raise.
In either case, the company must already believe you are suitable for the role.
That does not stop it from trying to secure your services at the most economical price.
Knowing what you bring to the table is key to having the bargaining power and confidence to negotiate a fair salary.
This does not just mean how much direct experience you have in the role you have landed.
Tip 1: Demonstrate your experience in the industry
Don’t underestimate how much employers value candidates who have knowledge and experience within their sector.
You may also find that some industries struggle to hire marketing professionals, and your willingness to join that industry can command a higher salary.
If you’ve worked in notoriously difficult industries, such as gambling, adult entertainment, or pharmaceuticals, you may be able to negotiate higher pay because of it.
This can be due to the perception of difficulty in marketing within these industries.
Tip 2: Promote your prior experience in and out of similar roles
Your years of experience in a role may feel like an obvious bargaining chip when negotiating salary.
However, don’t forget that an employer may also benefit from the knowledge and experience you gained outside the role you are applying for.
Just because your previous job titles may not sound similar to the role you are negotiating now doesn’t mean the skills you developed there aren’t directly relevant.
Review your CV and compare it with the role you are applying for. Identify the parts of your work history that align with the job description.
Look beyond the obvious and consider transferable skills such as communication, problem-solving, and stakeholder management.
Tip 3: Highlight extra skills outside your job specification
Think about the skills you’ve developed over the years that may not be listed in the job description but are likely to be important for success in the role.
This can be particularly helpful if you are earlier in your career and lack directly relevant experience in similar positions.
Consider what you learned through volunteer roles, a first summer job, or even hobbies.
They may seem far removed from marketing, but you have likely gained lessons through those experiences that can support your current career.
Tip 4: Show your financial impact in previous roles
As with any ROI calculation, employers want to know whether the salary they may pay a candidate will deliver a return on that investment.
If you are negotiating a higher salary than originally offered, you need to demonstrate why it is financially worthwhile for the employer.
Be strategic in the examples you share. Rather than focusing only on increases in traffic or rankings, emphasize the revenue or cost savings you delivered.
You may be limited by NDAs and unable to share specific figures, but you can still reference outcomes, such as increasing organic search revenue by 5x or reducing a PPC budget by 20% while maintaining performance.
It’s one thing to understand your value based on the skills and experience you bring to a company. It’s another to assess that value accurately in the job market.
Ultimately, salaries are limited by what employers are willing to pay.
Tip 5: Be familiar with industry benchmarks
Do some research when considering your salary.
You may have been paid above or below the market average in your current or previous role, which can skew your expectations.
Review job ads in your geographic area that require similar skills and experience, and note the lower and upper ends of the stated salary ranges.
Be careful not to compare roles based on job titles alone.
As noted earlier, marketing titles are often inconsistent, and you may be comparing your role with one that is more senior or more junior.
Also consider the industry. Salaries in charities, for example, are unlikely to match those in tech or finance.
Salary benchmarking reports can also be useful, such as:
These resources can provide a more objective view of the market.
Keep in mind that salaries vary significantly by country, so avoid comparing U.S. and UK salaries directly.
Tip 6: Find out the internal salary ranges
When applying for a role, it is always helpful to understand the salary range being offered, although this is not always possible.
Some companies wait for candidates to make the first move on salary and may avoid sharing ranges to prevent offering more than necessary.
This is why it’s important to follow Tip 5 first, so you understand what your skills and experience are worth in the broader market.
Some organizations use salary banding. For example, a senior SEO specialist may be classified as a level 4 role, while a junior SEO role may sit at level 2.
If a recruiter is unwilling to share the exact salary range, you can ask about role levels or banding instead.
This can provide insight into where the role sits within the company hierarchy and what the potential salary ceiling might be.
If you are able to identify the salary range, try to determine what qualifies a candidate for the top of the band.
Is the company looking for additional “nice to haves” to justify the highest salary? In some cases, it may be reserved for candidates with experience in a specific industry.
Once you understand which skills command higher pay, you can emphasize them in your CV and during interviews.
Identify and demonstrate what is valuable to the company
As many candidates discover during interviews, what a company truly wants is not always clearly stated in the job description or early conversations with recruiters.
Hiring managers may not fully define what they are looking for in a successful candidate until they have interviewed several people for comparison.
As a result, you may be unclear about what matters most for the role, making it harder to demonstrate your suitability and justify the salary you are requesting.
Interviews can provide an opportunity to explore these values in more detail.
Ask interviewers what “success” looks like in the role or how they would describe the traits of their top-performing colleagues.
This can help you understand the characteristics and behaviors the company values.
Tip 7: Demonstrate how you live up to those values
Once you understand what the company values, identify how you can deliver that through the role.
For example, if “initiative” is highly valued, you can use the interview process to highlight how you demonstrate initiative in your work.
Use examples from past experience to show how you embody the company’s values, citing specific projects or situations where you demonstrated them.
If “transparency” is important to the organization, you might reference a time when you acknowledged a mistake.
Demonstrating alignment with the company, in addition to job proficiency, can make you a more attractive candidate and support a stronger salary case.
When negotiating your salary, you need to know your absolute minimum. This is not just the lowest salary you can afford to accept.
It also means identifying what you need from a role to feel respected and valued, and what the overall compensation package must include to support that.
Going into negotiations with clear boundaries makes it easier to say no when an offer does not meet them.
Tip 8: Consider other benefits that may offset a lower salary
In some situations, accepting a lower salary may make sense.
You may be moving into a different role where you have less experience and are starting at a more junior level.
The opportunity to develop new skills can justify a lower salary.
Other tangible benefits, such as strong health coverage, additional paid time off, shorter working hours, or a gym membership, may also make a lower salary acceptable.
Tip 9: Identify other positives that may justify a lower salary
You may be moving into an industry you care deeply about.
For example, joining a charity may provide enough personal satisfaction to offset lower pay.
Be sure to factor these considerations into your salary expectations when defining your boundaries.
Tip 10: Decide your walk-away point
After working through the previous tips, you should have a clear understanding of the minimum compensation for which you are willing to accept, or stay in, a role.
Keep this in mind during negotiations. You may feel pressure not to lose the role by asking for more money, or worry about appearing overly focused on pay.
Joining a company and immediately feeling underpaid is not sustainable.
At the same time, asking for a raise as soon as you start is unlikely to help you establish yourself.
You may be better off declining a role if the company cannot close the gap between its offer and your minimum salary expectations.
Use these tips to define your value, account for any mitigating factors, and arrive at a salary you are willing to accept.
Once you have that number, negotiating becomes a matter of clearly demonstrating the value you bring to the company compared with other candidates.
If the gap between what a company is willing to pay and what you believe your skills and experience are worth is too large, walking away may be the better option.
Selecting an SEO plugin for your WordPress site is one of the most important decisions you’ll make for your online presence. It’s not just about installing software; it’s about choosing a long-term partner that will grow with your business, adapt to changing search algorithms, and support you in the age of AI. While the market offers several options, understanding what truly matters is key. Two of the most popular plugins on the market today are Yoast and Rank Math. Weighing factors such as reliability, innovation, ecosystem, and trust will help you make a choice that serves your business for years to come.
This guide provides an in-depth comparison of the key differentiating factors between Yoast and Rank Math, and explores why millions of websites worldwide have made Yoast their trusted partner in search.
Key takeaways
Choosing an SEO plugin like Yoast SEO impacts your online presence and future growth.
Yoast offers reliability with over 15 years of experience and millions of active installations, unlike newer competitors.
Innovations such as AI integration and a unified schema graph set Yoast apart from other plugins.
Yoast provides comprehensive support, education, and a multi-platform ecosystem tailored for long-term success.
Industry leaders such as Microsoft and Spotify use Yoast SEO to enhance their online visibility.
When evaluating WordPress SEO plugins, it’s easy to get distracted by feature lists and flashy interfaces. But experienced marketers, agencies, and business owners know that the best tools are defined by much more than what they promise on paper.
The questions that matter most:
Can you trust this plugin to work reliably as your business scales?
Will the company behind it still be innovating five years from now?
What happens when you need help before a critical deadline?
Does the plugin anticipate future SEO trends, or just react to them?
Is this a tool you install, or an ecosystem that supports your growth and development?
These aren’t trivial questions. Your SEO plugin touches essential pages on your site, influences the content you publish, and directly impacts your ability to be found by potential customers. Choosing poorly can lead to migration headaches, compatibility issues, and lost rankings. Choosing wisely means peace of mind, ongoing innovation, and a solid foundation to build upon.
Why legacy and proven trust matter in SEO plugins
Trust isn’t given. It’s earned. Yoast has defined the WordPress SEO landscape for over 15 years, with more than 13 million active installations and over 850 million downloads. This extensive legacy reflects a consistent track record of innovation, stability, and trust. Brands such as The Guardian, Microsoft, Spotify, and others rely on Yoast SEO as a foundation for their SEO strategies. This depth of experience is invaluable as SEO requires ongoing adaptation to algorithm changes and new technologies.
While Rank Math is an ambitious and feature-rich plugin with a growing user base, its presence in the market is relatively recent. For businesses seeking a proven solution with a long-standing heritage, Yoast’s established positioning offers confidence that the plugin will continue to evolve and provide reliable support for years to come.
Innovation that shapes the industry
Yoast has always been at the forefront of defining what modern SEO looks like. This isn’t a reactive development; it’s proactive innovation that anticipates where search is heading. Both plugins invest in innovation, but Yoast’s leadership in integrating AI and collaboration with Google sets it apart.
AI and Automation
We have introduced an industry-first AI-powered optimization toolset, including:
AI Generate: Creates multiple optimized title and meta description variations instantly, giving you professionally crafted options in seconds instead of struggling for the perfect phrasing.
AI Optimize: Scans your content and provides precise, actionable suggestions to improve keyphrase placement, sentence structure, and readability, teaching you SEO best practices while you write.
AI Summarize: Instantly generates bullet-point summaries of your content, making it more scannable and engaging for readers who skim before diving deep.
AI Brand Insights: This is where Yoast truly separates from the pack. As AI platforms like ChatGPT reshape how people find information, AI Brand Insights, included in the Yoast SEO AI+ package, tracks how your brand appears in AI-generated responses. You can monitor your AI visibility, compare it against competitors, and ensure AI platforms accurately represent your business.
While Rank Math includes helpful automation features such as AI keyword suggestions, Yoast’s AI integration is more comprehensive and positioned as a core pillar of modern SEO strategy.
Schema markup that search engines can understand
While many plugins output disconnected structured data, Yoast SEO automatically generates a unified semantic graph on every page, linking your organization, content, authors, and products through a single JSON-LD structure that search engines and AI platforms can interpret consistently.
What makes this different
Automatic and invisible: Yoast outputs rich structured data representing your content, business, and relationships without requiring technical configuration. You focus on creating content; Yoast handles the complexity of structured data behind the scenes.
Single unified graph format: Instead of fragmented schema markup, Yoast creates one cohesive graph structure per page, connecting all entities with unique IDs. When plugins output conflicting schema, search engines can’t reliably interpret your site. Yoast’s unified graph ensures consistent interpretation at scale, whether Google, ChatGPT, or any API is reading your content.
Minimal configuration: Choose whether your site represents a person or organization; Yoast handles the rest automatically. Specialized blocks like FAQ and How-To map directly to correct schema types and link into the graph without additional setup.
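As a rough illustration of what "one cohesive graph per page" means in practice (the entity names, URLs, and node set below are invented for this sketch, not Yoast's actual output), a unified graph is a single JSON-LD object whose nodes reference one another through `@id` instead of being emitted as disconnected blocks:

```python
import json

# Hypothetical unified schema graph: every cross-reference points at
# another node's @id, so a parser can resolve the whole entity model.
graph = {
    "@context": "https://schema.org",
    "@graph": [
        {
            "@type": "Organization",
            "@id": "https://example.com/#organization",
            "name": "Example Co",
        },
        {
            "@type": "WebSite",
            "@id": "https://example.com/#website",
            "publisher": {"@id": "https://example.com/#organization"},
        },
        {
            "@type": "Article",
            "@id": "https://example.com/post/#article",
            "isPartOf": {"@id": "https://example.com/#website"},
            "author": {"@id": "https://example.com/#/schema/person/jane"},
        },
        {
            "@type": "Person",
            "@id": "https://example.com/#/schema/person/jane",
            "name": "Jane Doe",
        },
    ],
}

# Sanity check: every @id reference resolves to a node in the graph,
# which is the property a "unified" graph guarantees and fragmented
# schema blocks do not.
ids = {node["@id"] for node in graph["@graph"]}
refs = [
    value["@id"]
    for node in graph["@graph"]
    for value in node.values()
    if isinstance(value, dict) and "@id" in value
]
assert all(ref in ids for ref in refs)

jsonld = json.dumps(graph, indent=2)
```

The `jsonld` string is what would ultimately be embedded in a `<script type="application/ld+json">` tag on the page.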
Why this matters for AI-driven search
As AI platforms increasingly rely on structured data to understand websites, Yoast’s approach of creating a full semantic model of your site positions you for how search and discovery are evolving. The framework scales reliably from 100 to 100,000 pages while maintaining valid entity relationships. For developers, Yoast’s Schema API provides clean filters to extend or customize the graph without breaking its integrity.
Rank Math and other plugins support Schema markup, but Yoast’s unified graph framework represents a fundamentally different approach: automatic generation, consistent entity relationships, and architecture built for scale.
Continuous algorithm adaptation
Search engines make thousands of updates every year. Google alone rolls out over 5,000 algorithm changes annually. Now, as search engines evolve to incorporate AI tooling and platforms like ChatGPT reshape the way people discover information, the SEO landscape is changing faster than ever.
Most website owners can’t possibly track these shifts across traditional search AND emerging AI platforms, let alone understand their implications. Yoast’s dedicated SEO team monitors every significant update, from Google algorithm changes to how AI platforms index and reference content, and proactively adjusts the plugin to ensure your site stays optimized for both traditional and AI-driven discovery.
When you use Yoast, you’re not just getting software. You’re getting a team of experts working behind the scenes to keep your SEO strategy current across the entire discovery ecosystem.
An ecosystem built to support your SEO workflow
Yoast offers an ecosystem beyond the plugin. While Yoast SEO itself is a plugin, Yoast provides a comprehensive ecosystem to support your growth:
24/7 support from real human experts is available for Yoast SEO Premium users, ensuring fast, knowledgeable help when you need it.
Yoast SEO Academy offers comprehensive SEO education, covering a range of topics from basics to advanced, with accompanying certifications.
A massive knowledge base and community for continuous learning and troubleshooting.
Multi-Platform Support
Your business doesn’t exist on WordPress alone. That’s why Yoast extends beyond a single platform:
Yoast SEO for Shopify: Brings Yoast’s trusted optimization to Shopify stores, helping ecommerce businesses improve product visibility and drive more sales.
Yoast WooCommerce SEO: Specifically designed for WooCommerce stores with automated product schema, smart breadcrumbs, and ecommerce-focused content analysis.
AI Brand Insights: A comprehensive feature, a part of Yoast SEO AI+ package, that shows how your brand appears across top AI platforms. Tracks key elements of your brand visibility and suggests relevant insights.
This ecosystem approach means Yoast grows with your business, supporting you across platforms as your needs evolve. Rank Math primarily focuses on the WordPress environment with a strong feature set, but lacks the same breadth of educational resources and multi-platform reach.
Stability and reliability at enterprise-grade scale
Flashy features attract attention. Rock-solid reliability keeps businesses running. Yoast rigorously tests every update for compatibility and performance across different WordPress versions and server configurations. This commitment ensures:
Backward compatibility: Updates maintain existing functionality without requiring extensive reconfiguration
WordPress core integration: Seamless compatibility with new WordPress releases
Performance at any scale: Optimized for sites ranging from personal blogs to high-traffic enterprise installations
With over 15 years in the market and more than 13 million active installations, Yoast has proven its reliability across millions of sites, hosting environments, and various use cases.
Rigorous testing and quality assurance
Yoast maintains strict development standards that prioritize stability above rapid feature deployment. Every update undergoes extensive testing across the latest WordPress versions, most PHP configurations, and common plugin combinations before release.
This disciplined approach means Yoast users rarely experience plugin conflicts, broken updates, or compatibility issues that plague WordPress sites using less mature plugins.
Backward compatibility
Major updates usually shake the functionality of plugins and software. However, Yoast maintains backward compatibility, ensuring that updating your plugin doesn’t suddenly break critical SEO features or require extensive reconfiguration.
WordPress core compatibility
As a plugin deeply integrated with WordPress development, Yoast maintains close relationships with the WordPress core team. This ensures seamless compatibility with new WordPress releases, often supporting new versions on launch day while other plugins scramble to catch up.
Performance optimized for scale
Whether you run a small blog or an enterprise site with millions of pages, Yoast performs efficiently without slowing down your site. The plugin is engineered for performance, using best practices for database queries, resource loading, and caching integration.
Enterprises trust Yoast precisely because it scales reliably. Small teams appreciate that the same plugin powering major corporations works flawlessly on their modest sites, too.
Ready to make a difference with Yoast SEO Premium?
Explore Yoast SEO Premium and the Yoast SEO AI+ package to discover advanced tools built for serious marketers.
While comprehensive feature-by-feature comparisons can be overwhelming, certain capabilities distinguish truly professional SEO plugins from the rest. Here’s where Yoast’s innovation and depth shine through.
AI-powered optimization
Yoast leads the industry in AI integration for SEO optimization:
AI Brand Insights for tracking your presence in AI search platforms
No competing plugin offers this comprehensive AI integration designed specifically for modern SEO workflows.
Schema Graph
Yoast’s Schema implementation creates a complete structured data graph connecting your organization, content, authors, and brand identity. This goes far beyond basic Schema markup, providing search engines with rich context that improves your chances of appearing in knowledge panels, rich results, and AI-generated answers.
Smart internal linking
Yoast SEO Premium includes intelligent internal linking suggestions that analyze your content and recommend relevant pages to link to. This isn’t just a list of posts; it’s context-aware suggestions that strengthen your site architecture and improve crawlability.
Advanced redirect manager
Managing redirects is critical when restructuring sites, changing URLs, or handling broken links. Yoast’s redirect manager offers:
Duplicate content prevention for product variations
Comprehensive crawl settings
Advanced users appreciate Yoast’s granular control over crawl optimization, robots.txt management, and indexation settings, giving technical SEO professionals the precision they need without overwhelming casual users.
Bot blocker for LLM training control
As AI companies scrape the web to train large language models, Yoast gives you control over whether your content is used for AI training via Bot Blocker. This cutting-edge feature addresses a concern most plugins haven’t even acknowledged yet.
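The article doesn't describe Bot Blocker's mechanics, but opting content out of AI training is commonly done with robots.txt rules along these lines (the user-agent tokens below are published crawler names; whether Bot Blocker uses exactly this mechanism is an assumption):

```text
# Illustrative robots.txt rules for AI training crawlers
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

Note that robots.txt is advisory: well-behaved crawlers honor it, but it is not an enforcement mechanism.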
Recognized and trusted by industry leaders
The company you keep says a lot about who you are. When the world’s most recognized brands trust Yoast to power their WordPress SEO, it’s a powerful testament to the quality, reliability, and effectiveness of our solutions.
Global brands* using Yoast include:
The Guardian
Microsoft
Spotify
Rolling Stones
Taylor Swift
Facebook
eBay
These organizations have teams of developers, SEO experts, and decision-makers who have evaluated every available option. They chose Yoast, not because it was the newest, but because it was the best.
*Disclaimer: Based on third-party data sources.
Industry Recognition:
Global Search Awards Finalist: Recognized among the world’s leading SEO solutions
Yoast isn’t just popular, it’s the default choice for WordPress SEO professionals worldwide.
Understanding what you really need
Before making your final decision, consider what matters most for your specific situation:
If you value reliability and stability: Choose a plugin with a proven track record of consistent updates, compatibility, and performance. Longevity matters because it signals the company will be around to support you for years to come.
If innovation matters to your strategy: Look for a plugin that anticipates SEO trends rather than reacting to them. AI integration, Schema excellence, and algorithm adaptation separate forward-thinking tools from those playing catch-up.
If support is critical: Consider whether you need community forums or access to real SEO experts who can troubleshoot complex issues quickly. When your business relies on organic traffic, response time is crucial.
If education is important: Some plugins provide features; others teach you how to use them effectively. Comprehensive training resources and certifications demonstrate a commitment to your success.
If you’re building for the long term: Think about whether this plugin will grow with your business. Multi-platform support, scalability, and an ecosystem approach ensure that your investment pays dividends for years to come.
Make the choice that drives real growth
Choosing an SEO plugin isn’t about finding the tool with the longest feature list; it’s about finding the one that best suits your needs. It’s about partnering with a company that shares your commitment to long-term growth, innovation, and excellence.
Over 13 million websites trust Yoast SEO because it delivers on these promises:
Reliability: 15+ years of consistent innovation and stability
Trust: Used by global brands and industry leaders
Innovation: Leading the industry in AI integration and Schema excellence
Support: 24/7 access to real SEO professionals
Education: Comprehensive training through Yoast Academy
Ecosystem: Multi-platform support and continuous learning resources
Stability: Enterprise-grade performance at any scale
When you choose Yoast, you’re not just installing a plugin; you’re joining millions of websites that have made the strategic decision to partner with the most trusted name in WordPress SEO.
A smarter analysis in Yoast SEO Premium
Yoast SEO Premium has a smart content analysis that helps you take your content to the next level!
Marketing is moving faster than most teams can keep up with. Users expect answers immediately. They jump across channels before they ever land on your website. Search results summarize key points before they show links. AI Overviews and other LLMs give people clean, structured answers that used to require a full research session.
This change affects every part of the funnel, not because the fundamentals changed, but because AI reshaped how information flows and how decisions get made.
If you want your marketing system to keep up, you need to adapt your funnel to fit the way people learn, compare, and act. That requires new workflows, smarter content systems, and teams who know how to direct AI instead of wrestling with it.
Here is how to rebuild your entire marketing funnel for the AI era.
Key Takeaways
AI changes how users research, compare, and choose products, which means your funnel needs to adapt to shifting intent and new behavior patterns.
Teams that rely on structured systems can apply AI consistently across planning, content, outreach, and optimization.
Content needs to be created for humans and models at the same time, with clarity, structure, and trustworthy signals built in.
AI increases speed, insight, and variation, but human judgment still guides strategy and protects brand quality.
Funnel performance improves when your systems evolve continuously, using real-time data and predictive insights to guide action.
The New AI Reality in Marketing
With the advent of AI, users expect fast answers everywhere. They expect straightforward explanations and content that gets to the point. They expect the next step to feel obvious.
Search engines now summarize information before they send traffic. AI tools analyze questions and give people simple paths to follow. Teams that rely on slow planning cycles or rigid workflows fall behind because the landscape shifts too quickly.
AI also gives marketers more information. You can spot friction faster. You can discover demand signals earlier. You can build variations of a single idea in seconds instead of hours. The speed and clarity AI provides changes how you think, plan, and publish.
This is why systems matter. AI works best when your inputs are strong, your workflows are structured, and your team knows how to guide models with purpose.
The Funnel Rebuilt for AI
Funnels used to follow a predictable path. People saw a message, explored options, compared details, and made a decision. AI changed that pattern. Users often skip steps. They expect answers before they even start researching, they mix channels and search surfaces, and they compare brands in less time using more tools.
You need a funnel that adapts to intent in real time. Let’s talk about how things have changed.
Awareness: Earn Visibility in a Summarized World
Brand awareness used to mean ranking in search or showing up in social feeds. Now it means being visible wherever models and search engines pull information. Your content needs to be clear and structured, so AI systems can understand it instantly. That includes using strong definitions, concise explanations, and content that answers emerging questions.
AI can also help you plan faster. It can reveal topic clusters, related interests, language patterns, and questions users ask before they search. That insight helps you create content that works for both humans and models.
Consideration: Personalize and Adapt as Users Explore
Users take unpredictable paths. One person might read a comparison page, then watch a video, then search for alternatives. Another might start with a chatbot, skim reviews, and jump straight into pricing.
AI helps you adapt to these differences. You can tailor the next piece of information based on behavior, not assumptions. You can understand objections earlier and give people specific proof that supports their decision-making. You can create educational paths that feel natural, not forced.
Conversion: Speed Up Decisions With Smarter Insight
AI improves how you analyze signals across campaigns. You can see which touchpoints matter most. You can understand where people drop off and what gets them to return. You can time outreach based on behavior instead of sending messages on a fixed schedule.
Models also help you support decisions. You can create guided tools, calculators, and tailored content that answers the final questions users have before they convert. These experiences help users feel confident about their choice.
Upgrade Your Team’s Skills for an AI-Driven Funnel
AI changes workflows, but the impact depends on how your team uses it. You need people who can orchestrate systems, think strategically, and refine outputs with intention.
From Doers to Directors of Intelligence
AI accelerates execution. That means your team shifts from doing every step manually to guiding the process. They need to know how to set the direction, review outputs, and make judgment calls that models cannot.
This is where strategy and quality control become more important. Your team’s experience becomes the intelligence that powers the system.
Build Systems, Not Isolated Tasks
AI performs best when it has structure. You need workflows with clear inputs, expected outputs, and consistent guardrails. That includes:
Prompt libraries
Structured briefs
Standardized content formats
Quality assurance criteria
Automation playbooks
When these systems exist, you can scale execution without losing quality. The last thing you want to do is invest time in AI materials with little value.
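Structure like this can live in code as easily as in a document. Below is a minimal sketch of what one prompt-library entry with guardrails might look like; every field name, criterion, and the `build_prompt` helper are illustrative assumptions, not a standard schema.

```python
# Illustrative sketch only: field names and QA criteria are invented,
# not a standard prompt-library format.

PROMPT_LIBRARY = {
    "competitor-roundup": {
        "task": "Summarize competitor content published this week",
        "prompt_template": (
            "Summarize the key themes in these articles for a {audience} "
            "audience in under {max_words} words:\n{articles}"
        ),
        "expected_output": "Bulleted summary, one bullet per theme",
        "qa_criteria": [
            "Every claim traceable to a source article",
            "No competitor pricing quoted without a link",
            "Tone matches the brand voice guide",
        ],
    }
}

def build_prompt(entry_id: str, **inputs) -> str:
    """Fill a library template so every run uses the same structure."""
    entry = PROMPT_LIBRARY[entry_id]
    return entry["prompt_template"].format(**inputs)

prompt = build_prompt(
    "competitor-roundup",
    audience="B2B marketing",
    max_words=150,
    articles="- Article A\n- Article B",
)
print(prompt)
```

Because every run flows through the same template and QA criteria, outputs stay consistent no matter who on the team triggers the workflow.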
Run an AI Literacy Sprint
A simple two-week sprint helps teams adopt AI confidently. The idea is to identify a few repetitive tasks, replace them with AI workflows, refine the prompts, and share results across the team.
This builds trust in the system and helps everyone learn from real examples.
Five Core Capabilities Modern Marketers Need
Teams need the ability to:
Guide models with strong prompts
Interpret data and validate insights
Design basic automations
Blend creativity with AI acceleration
Apply ethical judgment to protect quality
These skills support every stage of an AI-driven funnel.
AI at the Top of the Funnel: Attract
Top-of-funnel work moves faster with AI. You can build content calendars, briefs, and outlines in minutes. You can analyze emerging trends and understand what people are searching for before those topics peak.
AI also helps you identify gaps. When you study how search experiences present information, you can see which answers, examples, or evidence are missing. That insight becomes your content roadmap.
You need content that models can interpret easily. Pages should include clear summaries, simple explanations, structured sections, and credible sources. Models scan for signals of clarity and authority. When your content is well structured, it has a better chance of being displayed and referenced.
Repurposing becomes easier too. Long-form content can become social posts, email snippets, video scripts, and answers for community threads. With AI, you can extract angles and variations quickly without losing the core message.
Creating Content That AI Can Interpret
Models look for patterns. They favor content with consistent formatting, headings that reflect questions, concise explanations, and supporting details like data or examples. When your pages follow these patterns, your visibility improves.
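One concrete way to make question-shaped content explicitly machine-readable is schema.org FAQPage structured data. The sketch below generates a JSON-LD snippet in Python; the question and answer text are placeholders, and this is one illustrative approach rather than a requirement for AI visibility.

```python
import json

# Sketch: emit a schema.org FAQPage JSON-LD snippet so a question-style
# heading and its concise answer are explicitly machine-readable.
# The question/answer text below is placeholder content.

def faq_jsonld(pairs):
    """Build FAQPage structured data from (question, answer) pairs."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }, indent=2)

snippet = faq_jsonld([
    ("What is an AI-driven funnel?",
     "A funnel whose stages adapt to user intent in real time."),
])
print(f'<script type="application/ld+json">\n{snippet}\n</script>')
```

The same pattern of pairing a question-shaped heading with a concise answer applies to the visible page copy, whether or not you add the markup.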
Turning Content Into Multi-Format Assets
AI can help you transform one asset into many. A blog post becomes video ideas, social carousels, email sequences, and outline drafts for deeper content. This helps you move faster and create consistent messaging across channels.
AI in the Middle of the Funnel: Nurture and Convert
Middle-of-funnel work thrives when you combine expertise with AI-driven insight. You can turn educational content into interactive tools. You can enrich lead profiles with data about company size, tools used, or behavior patterns. You can score leads based on signals instead of guessing who is most interested.
Personalization becomes more natural. You can adapt messaging to match how each user learns. You can offer the right format for each segment, whether that is a video, a comparison chart, or a detailed guide.
AI also strengthens outbound efforts. You can build smarter lists, generate personalized outreach, and adjust timing based on reply patterns. This helps your team focus on conversations that matter.
Audience modeling becomes more precise. Instead of relying on broad personas, you can identify micro-segments based on motivations, predicted actions, and friction points. This leads to journeys that respond to real behavior.
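To make “scoring leads based on signals” concrete, here is a minimal sketch. The signal names, weights, and tier thresholds are all invented for illustration; real weights should come from your own conversion data.

```python
# Illustrative only: these signals and weights are invented, not a
# recommended model. Calibrate against your own historical conversions.

WEIGHTS = {
    "visited_pricing": 30,
    "opened_last_3_emails": 15,
    "used_roi_calculator": 25,
    "company_size_in_icp": 20,
    "requested_demo": 40,
}

def score_lead(signals: dict) -> int:
    """Sum the weights of every signal the lead has triggered."""
    return sum(w for s, w in WEIGHTS.items() if signals.get(s))

def tier(score: int) -> str:
    """Bucket a score into a follow-up tier (thresholds are assumptions)."""
    if score >= 70:
        return "sales-ready"
    if score >= 40:
        return "nurture"
    return "monitor"

lead = {"visited_pricing": True, "used_roi_calculator": True,
        "opened_last_3_emails": True}
s = score_lead(lead)
print(s, tier(s))  # 30 + 25 + 15 = 70, so "sales-ready"
```

Even a simple weighted checklist like this beats guessing, because it forces the team to agree on which behaviors actually predict intent.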
Building Guided Tools That Turn Expertise Into Self-Serve Experiences
AI makes it easier to convert long-form content into calculators, quizzes, assessments, and guided flows. These tools educate users, gather signals, and qualify leads at the same time.
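For instance, a guided ROI calculator can answer the visitor’s question and capture a qualification signal in one step. This is a toy sketch: the inputs, the `roi` and `qualify` helpers, and the 40-hour qualification threshold are all assumptions for illustration.

```python
# Toy guided-tool sketch: answers the user's ROI question and emits a
# qualification signal. Figures and the 40-hour threshold are invented.

def roi(monthly_cost, hours_saved, hourly_rate):
    """Return monthly savings, net gain, and ROI as a percentage."""
    savings = hours_saved * hourly_rate
    return {
        "monthly_savings": savings,
        "net": savings - monthly_cost,
        "roi_pct": round(100 * (savings - monthly_cost) / monthly_cost, 1),
    }

def qualify(answers):
    """Assumed rule: 40+ hours saved per month merits sales follow-up."""
    return "qualified" if answers["hours_saved"] >= 40 else "self-serve"

answers = {"monthly_cost": 500, "hours_saved": 50, "hourly_rate": 60}
result = roi(**answers)
print(result, qualify(answers))
```

The user gets an immediate, personalized answer, while the answers they typed in become the qualification signals your team would otherwise have to guess at.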
AI at the Bottom of the Funnel: Retain and Expand
AI changes how you manage customer relationships. It helps you capture insights from conversations, identify churn early, and create proactive outreach. It also helps you spot expansion opportunities by analyzing usage patterns and engagement.
Teams can turn sales calls and support conversations into repeatable playbooks. You can extract objections, winning responses, and communication patterns that help new reps ramp faster.
Retention becomes more proactive. You can monitor behavior for early signals, trigger personalized save sequences, and direct account outreach based on needs.
Upsell and expansion become more personalized too. You can focus on value moments and highlight features or products that match each customer’s journey.
Build an AI-Ready Growth Engine
Adapting your entire funnel to AI does not happen in one step. The most effective approach is to start with workflows that produce quick wins. Research, content briefs, reporting, and follow-ups are the easiest places to start.
You also need to train your team to review AI outputs like editors. They should think critically, refine prompts, and guide models toward better results. When teams treat AI as a collaborator, quality stays high.
Document every win. When a workflow works, turn it into a repeatable playbook. Build a culture where experimentation is normal. Share wins and failures openly. This helps your team learn faster and improve together.
Your growth engine becomes stronger every time you refine these systems.
FAQs
Where should I start if my funnel is not AI-ready?
Begin with workflows that affect every channel. Research, briefs, reporting, and follow-ups are easy to replace with AI-assisted versions and offer immediate gains.
Will AI replace my marketing team?
No. AI accelerates execution, but your team guides strategy, applies judgment, and protects quality. The work shifts from doing everything manually to directing intelligent systems.
How do I keep brand quality high when using AI?
Set clear guardrails. Use structured briefs, standardized formats, and defined editorial criteria. Review outputs carefully and refine prompts until they consistently match your voice and standards.
How do I introduce automation without breaking workflows?
Start small. Automate simple, repetitive tasks and build confidence. Add complexity only when your team has mastered the basics.
How do I measure improvements across the funnel?
Track speed, quality, and impact. Look at how quickly your team produces content.
Conclusion
AI is not replacing marketing funnels. It is reshaping how they work. Every stage of the journey changes when users rely on faster information, clearer answers, and smarter systems.
Teams that build structures around AI will move faster, make better decisions, and adapt to real-time behavior. Small changes add up. When you refine workflows, train your team, and document wins, you create a system that improves with every cycle.
The future belongs to marketers who learn how to direct AI with clarity and purpose. Let’s build a funnel that matches the way people make decisions today.
[Source: “How To Adapt Your Entire Marketing Funnel With AI,” dubadosolutions.com, January 13, 2026]
Why? Because Google AI Overviews are often littered with information stemming from online forums like Reddit and Quora.
And oftentimes, this user-generated content can be inaccurate — or entirely false.
Why Google AI Overviews heavily rely on content from Reddit and Quora
But how and why have Google AI Overviews come to rely on user-generated content forums?
The answer is quite simple. Google AI Overviews sources much of its information from “high-authority” domains. These happen to be platforms like Reddit and Quora.
Google also prioritizes “conversational content” and “real user experiences.” They want searchers to receive answers firsthand from other online humans.
Furthermore, Google places the same amount of weight on these firsthand anecdotes as it does on factual reporting.
How negative threads end up on AI summaries
Obviously, the emphasis placed on Reddit and Quora threads can lead to issues, especially for professionals and those leading product- or service-driven organizations.
Many of the Reddit threads that rise to the surface are those that are complaint-driven. Think of threads where users are asking, “Does Brand X actually suck?” or “Is Brand Z actually a scam?”
The main problem is that these threads become extremely popular. AI Overviews gather the consensus of many comments and combine them into a single resounding answer.
In essence, minority opinions end up being represented as fact.
Additionally, Google AI Overviews often resurface old threads that lack timestamps. This can lead to the resurfacing of outdated, often inaccurate information.
Patterns that SEO, ORM, and brands are noticing
Those in the ORM field have been noticing troubling patterns in Google AI Overviews for a while now. For instance, we’ve identified the following trends:
Overwhelming Reddit criticism: Criticism on Reddit rises to the top at alarming rates. Google AI Overviews even seem to ignore official responses from brands at times, instead opting for the opinions of users on forum platforms.
Pros vs. cons summaries: These sorts of lists are supposed to ensure balance. (Isn’t that the entire point of identifying both the pros and cons of a brand?) However, sites like Reddit and Quora tend to accentuate the negative aspects of brands, at times ignoring the pros altogether.
Outdated content resurfacing: As mentioned in the previous section, outdated content can hold far too much value. A troubling number of “resolved issues” gain prominence in the Google AI Overviews feature.
The amplification effect: AI can turn opinion into fact
We live in an era defined by instantaneous knowledge.
Gen Z takes in information at startling rates. What’s seen on TikTok is absorbed as immediate fact. Instagram is where many turn to get both breaking news and updates on the latest brands.
This has led to an amplification effect, where algorithms quickly turn opinion into fact. We’re seeing it widely across social media, and now on Google AI Overviews, too.
On top of what we listed in the previous section, those in the ORM realm are noticing the following effects:
Nuance-less summarization: Because AI Overviews draw so heavily on negative criticism from Reddit, we’re getting less nuanced responses. The focus in AI Overviews is often one-sided and seemingly biased, featuring emotional, extreme language.
Feedback loops: As others in the ORM field have pointed out, many citations in AI Overviews come from deep pages. It’s also common to see feedback loops wherein one negative Reddit thread can hold multiple citations, leading to quick AI validation.
Enhanced trust in AI Overviews: Perhaps most troubling of all has been society’s immediate jump to accept AI Overviews and all the answers it has to offer. Many users now turn to Google’s feature as their ultimate encyclopedia — without even caring to view the citations AI Overviews has listed.
Misinformation and bias create risk
All in all, the rise of information from Reddit and Quora on AI Overviews has led to enhanced risk for businesses and entrepreneurs alike.
False statements and defamatory claims posted online can be accepted as fact. And incomplete narratives or opinion-based criticism floating around on forums are filtered through the lens of AI Overviews.
Making matters worse is that Google does not automatically remove or filter AI summaries that are linked to harmful content.
This can be damaging to a company’s reputation, as users absorb what they see on AI Overviews at face value. They take it as fact, even though it might be fiction.
Building a reputation strategy for false AI-driven searches
Working with an ORM team is a critical first step. They might suggest the following measures:
Monitoring online forums: Yes, our modern world dictates that you stay on top of online forums like Reddit and Quora. Monitor the name of your business and the top players on your team. If you’re aware of the dialogue, you’re already one step ahead.
Creating “AI-readable” content: It’s also important to always be creating content designed to land on AI Overviews. This content should boost your platform on search engines, be citation-worthy, and push down less favorable results.
Addressing known criticism: Ever notice criticism directed at your brand? Seek to address it with proper business practices. Respond to online reviews kindly, suppress or remove negative content with your ORM team, and establish your business as a caring practice online.
Coordinating various teams: It’s imperative to establish the right teams around your business. We already mentioned ORM, but what about your legal, SEO, and PR teams? Have the right experts in place to deal with any controversies before they arise.
Also, remember to keep an eye on the future. Online reputation management is constantly evolving, and if your intention is to manage and elevate your brand, you must evolve with the times.
That means staying up-to-date with AI literacy and adapting to new KPIs, including sentiment framing, source attribution, and AI visibility.
Staying on top of Google AI Overviews
We live in a new age. One where AI Overviews dictates much of what searchers think and react to.
And the honest truth is that much of the knowledge AI Overviews gleans comes from user-dominated forums like Reddit and Quora.
As a brand manager, you can no longer be idle. You have to act. You have to manage the sources that Google AI Overviews summarizes, constantly staying one step ahead.
If you don’t, then you’re not properly managing your search reputation.
[Source: “How brands can respond to misleading Google AI Overviews,” dubadosolutions.com, January 13, 2026]
As AI-led search becomes a real driver of discovery, an old assumption is back with new urgency. If AI systems infer quality from user experience, and Core Web Vitals (CWV) are Google’s most visible proxy for experience, then strong CWV performance should correlate with strong AI visibility.
The logic makes sense.
Faster page load times result in smoother experiences, increased user engagement, and improved signals, which AI systems (supposedly) reward.
But logic is not evidence.
To test this properly, I analysed 107,352 webpages that appear prominently in Google AI Overviews and AI Mode, examining the distribution of Core Web Vitals at the page level and comparing them against patterns of performance in AI-driven search and answer systems.
The aim was not to confirm whether performance “matters”, but to understand how it matters, where it matters, and whether it meaningfully differentiates in an AI context.
What emerged was not a simple yes or no, but a more nuanced conclusion that challenges prevailing assumptions about how many teams currently prioritise technical optimisation in the AI era.
Why distributions matter more than scores
Most Core Web Vitals reporting is built around thresholds and averages. Pages pass or fail. Sites are summarized with mean scores. Dashboards reduce thousands of URLs into a single number.
The first step in this analysis was to step away from that framing entirely.
When Largest Contentful Paint was visualized as a distribution, the pattern was immediately clear. The dataset exhibited a heavy right skew.
Median LCP values clustered in a broadly acceptable range, while a long tail of extreme outliers extended far beyond it. A relatively small proportion of pages were horrendously slow, but they exerted a disproportionate influence on the average.
Cumulative Layout Shift showed a similar issue. The majority of pages recorded near-zero CLS, while a small minority exhibited severe instability.
Again, the mean suggested a site-wide problem that did not reflect the lived reality of most pages.
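A quick worked example shows why the mean misleads on skewed data like this. The LCP values below are synthetic, shaped like the distribution described: two extreme outliers inflate the mean while the median stays representative.

```python
import statistics

# Synthetic LCP values (seconds) with the heavy right skew described:
# most pages are fine, two are horrendously slow.
lcp = [2.1, 2.3, 2.0, 2.4, 2.2, 2.5, 2.1, 2.3, 18.0, 25.0]

print(round(statistics.mean(lcp), 2))  # 6.09, dragged up by the two outliers
print(statistics.median(lcp))          # 2.3, reflecting the typical page
```

A dashboard reporting the 6.09-second mean would suggest a site-wide crisis, when in fact eight of the ten pages load comfortably within recommended thresholds.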
This matters because AI systems do not reason over averages, if they reason over user engagement metrics at all.
They evaluate individual documents, templates, and passages of content. A site-wide CWV score is an abstraction created for reporting convenience, not a signal consumed by an AI model.
Before correlation can even be discussed, one thing becomes clear: Core Web Vitals are not a single signal; they are a distribution of behaviors across a mixed population of pages.
Correlations
Because the data was uneven and not normally distributed, a standard Pearson correlation was not suitable. Instead, I used a Spearman rank correlation, which assesses whether higher-ranking pages on one measure also tend to rank higher or lower on another, without assuming a linear relationship.
This matters because, if Core Web Vitals were closely linked to AI performance, pages that perform better on CWV would also tend to perform better in AI visibility, even if the link was weak.
I found a small negative relationship. It was present, but limited. For Largest Contentful Paint, the correlation ranged from -0.12 to -0.18, depending on how AI visibility was measured. For Cumulative Layout Shift, it was weaker again, typically between -0.05 and -0.09.
These relationships are visible when you look at large volumes of data, but they are not strong in practical terms. Crucially, they do not suggest that faster or more stable pages are consistently more visible in AI systems. Instead, they point to a more subtle pattern.
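For readers who want to reproduce this kind of check on their own data, Spearman’s coefficient is just Pearson correlation applied to ranks. Here is a small pure-Python sketch with toy numbers; the tidy -0.9 result is an artifact of the tiny sample, not a claim about real CWV data.

```python
# Pure-Python Spearman rank correlation (no SciPy needed).
# The lcp/visibility numbers are toy data for demonstration only.

def ranks(xs):
    """1-based ranks, averaging ranks across tie groups."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Pearson correlation computed on the ranks of x and y."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

lcp = [1.8, 2.2, 2.0, 9.5, 2.4]   # seconds; one extreme outlier page
visibility = [40, 38, 35, 5, 33]  # hypothetical AI-visibility scores
rho = spearman(lcp, visibility)
print(round(rho, 2))
```

Because the calculation uses ranks rather than raw values, the 9.5-second outlier cannot distort the result the way it would in a Pearson correlation, which is exactly why rank methods suit skewed data.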
The absence of upside, and the presence of downside
The data do not support the claim that improving Core Web Vitals beyond basic thresholds improves AI performance. Pages with good CWV scores did not reliably outperform their peers in AI inclusion, citation, or retrieval.
However, the negative correlation is instructive.
Pages sitting in the extreme tail of CWV performance, particularly for LCP, were far less likely to perform well in AI contexts.
These pages tended to exhibit lower engagement, higher abandonment, and weaker behavioral reinforcement signals. Those second-order effects are precisely the kinds of signals AI systems rely on, directly or indirectly, when learning what to trust.
This reveals the true shape of the relationship.
Core Web Vitals do not act as a growth lever for AI visibility. They act as a constraint.
Good performance does not create an advantage. Severe failure creates disadvantage.
This distinction is easy to miss if you examine only pass rates or averages. It becomes apparent when examining distributions and rank-based relationships.
Why ‘passing CWV’ is not a differentiator
One reason the positive correlation many expect does not appear is simple. Passing Core Web Vitals is no longer rare.
In this dataset, the majority of pages already met recommended thresholds, especially for CLS. When most of the population clears a bar, clearing it does not distinguish you. It merely keeps you in contention.
AI systems are not selecting between pages because one loads in 1.8 seconds and another in 2.3 seconds. They are selecting between pages because one explains a concept clearly, aligns with established sources, and satisfies the user’s intent, whereas the other does not.
Core Web Vitals ensure that the experience does not actively undermine those qualities. They do not substitute for them.
Reframing the role of Core Web Vitals in AI strategy
The implication is not that Core Web Vitals are unimportant. It is that their role has been misunderstood.
In an AI-led search environment, Core Web Vitals function as a risk-management tool, not a competitive strategy. They prevent pages from falling out of contention due to poor experience signals.
This reframing has practical consequences for developing an AI visibility strategy.
Chasing incremental CWV gains across already acceptable pages is unlikely to deliver returns in AI visibility. It consumes engineering effort without changing the underlying selection logic AI systems apply.
Targeting the extreme tail, however, does matter. Pages with severely poor performance generate negative behavioral signals that can suppress trust, reduce reuse, and weaken downstream learning signals.
The objective is not to make everything perfect. It is to ensure that the content you want AI systems to rely on is not compromised by avoidable technical failure.
Why this matters
As AI systems increasingly mediate discovery, brands are seeking controllable levers. Core Web Vitals feel attractive because they are measurable, familiar, and actionable.
The risk is mistaking measurability for impact.
This analysis suggests a more disciplined approach. Treat Core Web Vitals as table stakes. Eliminate extreme failures.
Protect your most important content from technical debt. Then shift focus back to the factors AI systems actually use to infer value, such as clarity, consistency, intent alignment, and behavioral validation.
Core Web Vitals: A gatekeeper, not a differentiator
Based on an analysis of 107,352 AI visible webpages, the relationship between Core Web Vitals and AI performance is real, but limited.
There is no strong positive correlation. Improving CWV beyond baseline thresholds does not reliably improve AI visibility.
However, a measurable negative relationship exists at the extremes. Severe performance failures are associated with poorer AI outcomes, mediated through user behavior and engagement.
Core Web Vitals are therefore best understood as a gate, not a signal of excellence.
In an AI-led search landscape, this clarity matters.
[Source: “What 107,000 pages reveal about Core Web Vitals and AI search,” dubadosolutions.com, January 13, 2026]
You’ve likely invested in AI tools for your marketing team, or at least encouraged people to experiment.
Some use the tools daily. Others avoid them. A few test them quietly on the side.
This inconsistency creates a problem.
An MIT study found that 95% of AI pilots fail to show measurable ROI.
Scattered marketing AI adoption doesn’t translate to proven time savings, higher output, or revenue growth.
AI usage ≠ AI adoption ≠ effective AI adoption.
To get real results, your whole team needs to use AI systematically with clear guidelines and documented outcomes.
But getting there requires removing common roadblocks.
In this guide, I’ll explain seven marketing AI adoption challenges and how to overcome them. By the end, you’ll know how to successfully roll out AI across your team.
Free roadmap: I created a companion AI adoption roadmap with step-by-step tasks and timeframes to help you execute your pilot. Download it now.
First up: One of the biggest barriers to AI adoption — lack of clarity on when and how to use it.
1. No Clear AI Use Cases to Guide Your Team
Companies often mandate AI usage but provide limited guidance on which tasks it should handle.
In my experience, this is one of the most common AI adoption challenges teams face, regardless of industry or company size.
Vague directives like “use AI more” leave people guessing.
The solution is to connect tasks to tools so everyone knows exactly how AI fits into their workflow.
The Fix: Map Team Member Tasks to Your Tech Stack
Start by gathering your marketing team for a working session.
Ask everyone to write down the tasks they perform daily or weekly. (Not job descriptions, but actual tasks they repeat regularly.)
Then look for patterns.
Which tasks are repetitive and time-consuming?
Maybe your content team realizes they spend four hours each week manually tracking competitor content to identify gaps and opportunities. That’s a clear AI use case.
Or your analytics lead notices they are wasting half a day consolidating campaign performance data from multiple regions into a single report.
AI tools can automatically pull and format that data.
Once your team has identified use cases, match each task to the appropriate tool.
After your workshop, create assignments for each person based on what they identified in the session.
For example: “Automate competitor tracking with [specific tool].”
When your team knows exactly what to do, adoption becomes easier.
2. No Structured Plan to Roll Out AI Across the Organization
If you give AI tools to everyone at once, don’t be surprised if you get low adoption in return.
The issue isn’t your team or the technology. It’s launching without testing first.
The Fix: Start with a Pilot Program
A pilot program is a small-scale test where one team uses AI tools. You learn what works, fix problems, and prove value — before rolling it out to everyone else.
A company-wide launch doesn’t give you this learning period.
Everyone struggles with the same issues at once. And nobody knows if the problem is the tool, their approach, or both.
Which means you end up wasting months (and money) before realizing what went wrong.
Plan to run your pilot for 8-12 weeks.
Note: Your pilot timeline will vary by team.
Small teams can move fast and test in 4-8 weeks. Larger teams might need 3-4 months to gather enough feedback.
Start with three months as your baseline. Then adjust based on how quickly your team adapts.
Content, email, or social teams work best because they produce repetitive outputs that show AI’s immediate value.
Select 3-30 participants from this department, depending on your team size.
(Smaller teams might pilot with 3-5 people. Larger organizations can test with 20-30.)
Then, set measurable goals with clear targets you can track.
Schedule weekly meetings to gather feedback throughout the pilot.
The pilot will produce department-specific workflows. But you’ll also discover what transfers: which training methods work, where people struggle, and what governance rules you need.
When you expand to other departments, they’ll adapt these frameworks to their own AI tasks.
After three months, you’ll have proven results and trained users who can teach the next group.
At that point, expand the pilot to your second department (or next batch of the same team).
They’ll learn from the first group’s mistakes and scale faster because you’ve already solved common problems.
Pro tip: Keep refining throughout the pilot.
Update prompts when they produce poor results
Add new tools when you find workflow gaps
Remove friction points the moment they appear
Your third batch will move even quicker.
Within a year, you’ll have organization-wide marketing AI adoption with measurable results.
Employees may resist AI marketing adoption because they fear losing their jobs to automation.
Headlines about AI replacing workers don’t help.
Your goal is to address these fears directly rather than dismissing them.
The Fix: Have Honest Conversations About Job Security
Meet with each team member and walk through how AI affects their workflow.
Point out which repetitive tasks AI will automate. Then explain what they’ll work on with that freed-up time.
Be careful about the language you use. Be empathetic and reassuring.
For example, don’t say “AI makes you more strategic.”
Say: “AI will pull performance reports automatically. You’ll analyze the insights, identify opportunities, and make strategic decisions on budget allocation.”
One is vague. The other shows them exactly how their role evolves.
Don’t just spring changes on your team. Give them a clear timeline.
Explain when AI tools will roll out, when training starts, and when you expect them to start using the new workflows.
For example: “We’re implementing AI for competitor tracking in Q2. Training happens in March. By April, this becomes part of your weekly process.”
When people know what’s coming and when, they have time to prepare instead of panicking.
Pro tip: Let people choose which AI features align with their interests and work style.
Some team members might gravitate toward AI for content creation. Others prefer using it for data analysis or reporting.
When people have autonomy over which features they adopt first, resistance decreases. They’re exploring tools that genuinely interest them rather than following mandates.
5. Your Team Resists AI-Driven Workflow Changes
People resist AI when it disrupts their established workflows.
Your team has spent years perfecting their processes. AI represents change, even when the benefits are obvious.
Resistance gets stronger when organizations mandate AI usage without considering how people actually work.
New platforms can be especially intimidating.
It means new logins, new interfaces, and completely new workflows to learn.
Rather than forcing everyone to change their workflows at once, let a few team members test the new approach first using familiar tools.
The Fix: Start with AI Features in Existing Tools
Your team likely already uses HubSpot, Google Ads, Adobe, or similar platforms daily.
When you use AI within existing tools, your team learns new capabilities without learning an entirely new system.
If you’re running a pilot program, designate 2-3 participants as AI champions.
Their role goes beyond testing — they actively share what they’re learning with the broader team.
The AI champions should be naturally curious about new tools and respected by their colleagues (not just the most senior people).
Have them share what they discover in a team Slack channel or during standups:
Specific tasks that are now faster or easier
What surprised them (good or bad)
Tips or advice on how others can use the tool effectively
When others see real examples, such as “I used Social Content AI to create 10 LinkedIn posts in 20 minutes instead of 2 hours,” it carries more weight than reassurance from leadership.
For example, if your team already uses a tool like Semrush, your champions can demonstrate how its AI features improve their workflows.
Keyword Magic Tool’s AI-powered Personal Keyword Difficulty (PKD%) score shows which keywords your site can realistically rank for — without requiring any manual research or analysis.
Your content writers can input a topic, set their brand voice, and get a structured first draft in minutes. This reduces the time spent staring at a blank page.
Social Content AI handles the repetitive parts of social media planning. It generates post ideas, copy variations, and images.
Your social team can quickly build out a week’s content calendar instead of creating each post from scratch.
Don’t have a Semrush subscription? Sign up now to get a 14-day free trial plus a special 17% discount on annual plans.
6. No Governance or Guardrails to Keep AI Usage Safe
Without clear guidelines, your team may either avoid AI entirely or use it in ways that create risk.
They paste customer data into ChatGPT without realizing it violates data policies.
Or publish AI-generated content without approval because the review process was never explained.
Your team needs clear guidelines on what’s allowed, what’s not, and who approves what.
Free AI policy template: Need help creating your company’s AI policy? Download our free AI Marketing Usage Policy template. Customize it with your team’s tools and workflows, and you’re ready to go.
The Fix: Create a One-Page AI Usage Policy
When creating your policy, keep it simple and accessible. Don’t create a 20-page document nobody will read.
Aim for 1-2 pages that are straightforward and easy to follow.
Include four key areas to keep AI usage both safe and productive.
| Policy Area | What to Include | Example |
|---|---|---|
| Approved Tools | List which AI tools your team can use, both standalone tools and AI features in platforms you already use | “Approved: ChatGPT, Claude, Semrush’s AI Article Generator, Adobe Firefly” |
| Data Sharing Rules | Define specifically what data can and can’t be shared with AI tools | “Safe to share: Product descriptions, blog topics, competitor URLs” |
Your policy should also tell people who to contact when they have:
Concerns about whether AI-generated content is accurate or appropriate
Questions about data sharing
The goal is to give them a clear path to get help, rather than guessing or avoiding AI altogether.
Then, post the policy where your team will see it.
This might be your Slack workspace, project management tool, or a pinned document in your shared drive.
And treat it as a living document.
When the same question comes up multiple times, add the answer to your policy.
For example, if three people ask, “Can I use AI to write email subject lines?” update your policy to explicitly say yes (and clarify who reviews them before sending).
7. No Reliable Way to Measure AI’s Impact or ROI
Without clear proof that AI improves their results, team members may assume it’s just extra work and return to old methods.
And if leadership can’t see a measurable impact, they might question the investment.
This puts your entire AI program at risk.
Avoid this by establishing the right metrics before implementing AI.
The Fix: Track Business Metrics (Not Just Efficiency)
Here’s how to measure AI’s business impact properly.
Pick 2-3 metrics your leadership already reviews in reports or meetings.
These are typically:
Leads generated
Conversion rate
Revenue growth
Customer acquisition
Customer retention
These numbers demonstrate to your team and leadership that AI is helping your business.
Then, establish your baseline by recording your current numbers. (Do this before implementing AI tools.)
For example, if you’re tracking leads and conversion rate, write down:
Current monthly leads: 200
Current conversion rate: 3%
This baseline lets you show your team (and leadership) exactly what changed after implementing AI.
Pro tip: Avoid making multiple changes simultaneously during your pilot or initial rollout.
If you implement AI while also switching platforms or restructuring your team, you won’t know which change drove results.
Keep other variables stable so you can clearly attribute improvements to AI.
Once AI is in use, check your metrics monthly to see if they’re improving. Use the same tools you used to record your baseline.
Write down your current numbers next to your baseline numbers.
For example:
Baseline leads (before AI): 200 per month
Current leads (3 months into AI): 280 per month
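The lift over your baseline is simple arithmetic. A minimal sketch using the illustrative numbers above:

```python
# Quick lift check against the baseline recorded before rollout
# (illustrative numbers from the example above)
baseline_leads = 200   # monthly leads before AI
current_leads = 280    # monthly leads, 3 months into AI

lift_pct = (current_leads - baseline_leads) / baseline_leads * 100
print(f"Lead volume lift vs. baseline: {lift_pct:.0f}%")  # 40%
```

A lift figure like this is easy to defend in a report precisely because the baseline was recorded before the rollout.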
But don’t just check if numbers went up or down.
Look for patterns:
Did one specific campaign or content type perform better after using AI?
Are certain team members getting better results than others?
Track individual output alongside team metrics.
For example, compare how many blog posts each writer completes per week, or email open rates by the person who drafted them.
If someone’s consistently performing better, ask them to share their AI workflow with the team.
This shows you what’s working, and helps the rest of your team improve.
Share results with both your team and leadership regularly.
When reporting, connect AI’s impact to the metrics you’ve been tracking.
For example:
Say: “AI cut email creation time from 4 hours to 2.5 hours. We used that time to run 30% more campaigns, which increased quarterly revenue from email by $5,000.”
Not: “We saved 90 hours with AI email tools.”
The first shows business impact — what you accomplished with the time saved. The second only shows time saved.
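The arithmetic behind this framing is easy to reproduce with your own data. In this sketch, the hours per campaign come from the example above, while the campaign volume and revenue per campaign are hypothetical placeholders:

```python
# Efficiency metric alone (necessary, but not what leadership cares about)
hours_before, hours_after = 4.0, 2.5  # hours per email campaign, before / with AI
campaigns_per_month = 20              # hypothetical team volume

hours_saved = (hours_before - hours_after) * campaigns_per_month
print(f"Hours saved per month: {hours_saved:.0f}")

# Business framing: tie the reinvested time to revenue (hypothetical figures)
extra_campaigns = campaigns_per_month * 0.30  # 30% more campaigns, as in the example
revenue_per_campaign = 280                    # hypothetical average revenue
print(f"Added monthly email revenue: ${extra_campaigns * revenue_per_campaign:,.0f}")
```

The first number is the efficiency metric; the second is the business outcome that makes the report land.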
You can frame other reports the same way: lead with the business outcome, then note the efficiency gain that made it possible.
Build Your Marketing AI Adoption Strategy
When AI usage is optional, undefined, or unsupported, it stays fragmented.
Effective marketing AI adoption looks different.
It’s built on:
Role-specific training people actually use
Guardrails that reduce uncertainty and risk
Metrics that drive business outcomes
When those pieces are in place, AI becomes part of how work gets done.
7 Marketing AI Adoption Challenges (And How to Fix Them)