How brands can respond to misleading Google AI Overviews

Google’s AI Overviews feature has become the face of search engine results.

Type almost any question into your Google search bar, and the first answer you receive will be AI-generated.

Many are thrilled about this. Others are wary.

Marketers and those in the online reputation management (ORM) field are among those urging caution.

Why? Because Google AI Overviews are often littered with information stemming from online forums like Reddit and Quora. 

And oftentimes, this user-generated content can be inaccurate — or entirely false. 

Why Google AI Overviews heavily rely on content from Reddit and Quora

But how and why have Google AI Overviews come to rely on user-generated content forums?

The answer is quite simple. The AI Overviews feature sources much of its information from “high-authority” domains. These happen to be platforms like Reddit and Quora.

Google also prioritizes “conversational content” and “real user experiences.” They want searchers to receive answers firsthand from real people online.

Furthermore, Google places the same amount of weight on these firsthand anecdotes as it does on factual reporting. 

How negative threads end up on AI summaries

Obviously, the emphasis placed on Reddit and Quora threads can lead to issues. Especially for professionals and those leading product- or service-driven organizations.

Many of the Reddit threads that rise to the surface are those that are complaint-driven. Think of threads where users are asking, “Does Brand X actually suck?” or “Is Brand Z actually a scam?”

The main problem is that these threads become extremely popular. AI Overviews gather the consensus of many comments and combine them into a single resounding answer. 

In essence, minority opinions end up being represented as fact.

Additionally, Google AI Overviews often resurface old threads that lack timestamps. This can lead to the resurfacing of outdated, often inaccurate information. 

Patterns that SEO, ORM, and brands are noticing

Those in the ORM field have been noticing troubling patterns in Google AI Overviews for a while now. For instance, we’ve identified the following trends:

  • Overwhelming Reddit criticism: Criticism on Reddit rises to the top at alarming rates. Google AI Overviews even seem to ignore official responses from brands at times, instead opting for the opinions of users on forum platforms.
  • Pros vs. cons summaries: These sorts of lists are supposed to ensure balance. (Isn’t that the entire point of identifying both the pros and cons of a brand?) However, sites like Reddit and Quora tend to accentuate the negative aspects of brands, at times ignoring the pros altogether. 
  • Outdated content resurfacing: As mentioned in the previous section, outdated content can carry far too much weight. A troubling number of “resolved issues” gain prominence in the Google AI Overviews feature.

The amplification effect: AI can turn opinion into fact

We live in an era defined by instantaneous knowledge.

Gen Z takes in information at startling rates. What’s seen on TikTok is absorbed as immediate fact. Instagram is where many turn for both breaking news and updates on the latest brands.

This has led to an amplification effect, where algorithms quickly turn opinion into fact. We’re seeing it widely across social media, and now on Google AI Overviews, too.

On top of what we listed in the previous section, those in the ORM realm are noticing the following effects:

  • Nuance-less summarization: Because AI Overviews draw so heavily on negative Reddit criticism, we’re getting less nuanced responses. The focus in AI Overviews is often one-sided and seemingly biased, featuring emotional, extreme language. 
  • Feedback loops: As others in the ORM field have pointed out, many citations in AI Overviews come from deep pages. It’s also common to see feedback loops wherein one negative Reddit thread can hold multiple citations, leading to quick AI validation.
  • Enhanced trust in AI Overviews: Perhaps most troubling of all has been society’s immediate readiness to accept AI Overviews and all the answers they offer. Many users now treat Google’s feature as their ultimate encyclopedia — without even bothering to view the citations AI Overviews lists.

Misinformation and bias create risk

All in all, the rise of information from Reddit and Quora on AI Overviews has led to heightened risk for businesses and entrepreneurs alike.

False statements and defamatory claims posted online can be accepted as fact. And incomplete narratives or opinion-based criticism floating around on forums are filtered through the lens of AI Overviews.

Making matters worse is that Google does not automatically remove or filter AI summaries that are linked to harmful content. 

This can be damaging to a company’s reputation, as users absorb what they see on AI Overviews at face value. They take it as fact, even though it might be fiction.

Building a reputation strategy for false AI-driven searches

As a business owner, it’s critical to have response strategies in place for Google AI Overviews. 

Working with an ORM team is a critical first step. They might suggest the following measures:

  • Monitoring online forums: Yes, our modern world dictates that you stay on top of online forums like Reddit and Quora. Monitor the name of your business and the top players on your team. If you’re aware of the dialogue, you’re already one step ahead. (A minimal monitoring sketch follows this list.)
  • Creating “AI-readable” content: It’s also important to always be creating content designed to land on AI Overviews. This content should boost your platform on search engines, be citation-worthy, and push down less favorable results.
  • Addressing known criticism: Ever notice criticism directed at your brand? Seek to address it with proper business practices. Respond to online reviews kindly, suppress or remove negative content with your ORM team, and establish your business as a caring practice online.
  • Coordinating various teams: It’s imperative to establish the right teams around your business. We already mentioned ORM, but what about your legal, SEO, and PR teams? Have the right experts in place to deal with any controversies before they arise.
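
To make the forum monitoring above concrete, here is a minimal sketch in Python using Reddit’s public search endpoint. The brand terms and contact address are placeholders, and a production setup would add scheduling, deduplication, and coverage for Quora and review sites.

    import requests

    # Minimal sketch: poll Reddit's public search endpoint for fresh brand mentions.
    # "YourBrand" and the contact address are placeholders; swap in your own.
    BRAND_TERMS = ["YourBrand", "YourBrand CEO"]
    HEADERS = {"User-Agent": "brand-monitor/0.1 (contact: you@example.com)"}

    def fetch_mentions(term, limit=25):
        resp = requests.get(
            "https://www.reddit.com/search.json",
            params={"q": term, "sort": "new", "limit": limit},
            headers=HEADERS,
            timeout=10,
        )
        resp.raise_for_status()
        return [child["data"] for child in resp.json()["data"]["children"]]

    for term in BRAND_TERMS:
        for post in fetch_mentions(term):
            print(f"[{post['subreddit']}] {post['title']}")
            print(f"  https://www.reddit.com{post['permalink']}")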

Also, remember to keep an eye on the future. Online reputation management is constantly evolving, and if your intention is to manage and elevate your brand, you must evolve with the times.

That means staying up-to-date with AI literacy and adapting to new KPIs, including sentiment framing, source attribution, and AI visibility. 

Staying on top of Google AI Overviews

We live in a new age. One where AI Overviews dictates much of what searchers think and react to.

And the honest truth is that much of the knowledge AI Overviews gleans comes from user-dominated forums like Reddit and Quora.

As a brand manager, you can no longer be idle. You have to act. You have to manage the sources that Google AI Overviews summarizes, constantly staying one step ahead.

If you don’t, then you’re not properly managing your search reputation. 

What 107,000 pages reveal about Core Web Vitals and AI search

As AI-led search becomes a real driver of discovery, an old assumption is back with new urgency. If AI systems infer quality from user experience, and Core Web Vitals (CWV) are Google’s most visible proxy for experience, then strong CWV performance should correlate with strong AI visibility.

The logic makes sense.

Faster page load times result in smoother experiences, increased user engagement, improved signals, and AI systems that reward the outcome (supposedly).

But logic is not evidence.

To test this properly, I analysed 107,352 webpages that appear prominently in Google AI Overviews and AI Mode, examining the distribution of Core Web Vitals at the page level and comparing them against patterns of performance in AI-driven search and answer systems. 

The aim was not to confirm whether performance “matters”, but to understand how it matters, where it matters, and whether it meaningfully differentiates in an AI context.

What emerged was not a simple yes or no, but a more nuanced conclusion that challenges the way many teams currently prioritise technical optimisation in the AI era.

Why distributions matter more than scores

Most Core Web Vitals reporting is built around thresholds and averages. Pages pass or fail. Sites are summarized with mean scores. Dashboards reduce thousands of URLs into a single number.

The first step in this analysis was to step away from that framing entirely.

When Largest Contentful Paint was visualized as a distribution, the pattern was immediately clear. The dataset exhibited a heavy right skew. 

Median LCP values clustered in a broadly acceptable range, while a long tail of extreme outliers extended far beyond it. A relatively small proportion of pages were horrendously slow, but they exerted a disproportionate influence on the average.

Cumulative Layout Shift showed a similar issue. The majority of pages recorded near-zero CLS, while a small minority exhibited severe instability. 

Again, the mean suggested a site-wide problem that did not reflect the lived reality of most pages.

This matters because AI systems do not reason over averages, if they reason on user engagement metrics at all. 

They evaluate individual documents, templates, and passages of content. A site-wide CWV score is an abstraction created for reporting convenience, not a signal consumed by an AI model.

Before correlation can even be discussed, one thing becomes clear. Core Web Vitals are not a single signal; they are a distribution of behaviors across a mixed population of pages.
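
To see the skew problem in miniature, the sketch below (Python, with invented LCP values) shows how a handful of extreme outliers drags the mean far above the median, the same pattern observed in the dataset.

    import statistics

    # Invented page-level LCP values in seconds: most pages are acceptable,
    # a small tail is extremely slow.
    lcp = [1.9, 2.0, 2.1, 2.2, 2.3, 2.3, 2.4, 2.5, 14.8, 21.5]

    print(f"mean:   {statistics.mean(lcp):.1f}s")    # 5.4s, dragged up by two outliers
    print(f"median: {statistics.median(lcp):.1f}s")  # 2.3s, what the typical page looks like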

Correlations

Because the data was uneven and not normally distributed, a standard Pearson correlation was not suitable. Instead, I used a Spearman rank correlation, which assesses whether higher-ranking pages on one measure also tend to rank higher or lower on another, without assuming a linear relationship.

This matters because, if Core Web Vitals were closely linked to AI performance, pages that perform better on CWV would also tend to perform better in AI visibility, even if the link was weak.

I found a small negative relationship. It was present, but limited. For Largest Contentful Paint, the correlation ranged from -0.12 to -0.18, depending on how AI visibility was measured. For Cumulative Layout Shift, it was weaker again, typically between -0.05 and -0.09.

These relationships are visible when you look at large volumes of data, but they are not strong in practical terms. Crucially, they do not suggest that faster or more stable pages are consistently more visible in AI systems. Instead, they point to a more subtle pattern.
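
For anyone replicating the method, the rank correlation itself is a one-liner with scipy. The arrays below are placeholders rather than the study’s data, so the resulting coefficient will not match the figures above.

    from scipy.stats import spearmanr

    # Placeholder inputs: one LCP value and one AI-visibility score per page.
    lcp_seconds   = [1.8, 2.2, 2.9, 3.4, 4.1, 6.0, 8.5, 12.0]
    ai_visibility = [0.42, 0.55, 0.38, 0.40, 0.31, 0.22, 0.18, 0.05]

    rho, p_value = spearmanr(lcp_seconds, ai_visibility)
    print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")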

The absence of upside, and the presence of downside

The data do not support the claim that improving Core Web Vitals beyond basic thresholds improves AI performance. Pages with good CWV scores did not reliably outperform their peers in AI inclusion, citation, or retrieval.

However, the negative correlation is instructive.

Pages sitting in the extreme tail of CWV performance, particularly for LCP, were far less likely to perform well in AI contexts. 

These pages tended to exhibit lower engagement, higher abandonment, and weaker behavioral reinforcement signals. Those second-order effects are precisely the kinds of signals AI systems rely on, directly or indirectly, when learning what to trust.

This reveals the true shape of the relationship.

Core Web Vitals do not act as a growth lever for AI visibility. They act as a constraint.

Good performance does not create an advantage. Severe failure creates disadvantage.

This distinction is easy to miss if you examine only pass rates or averages. It becomes apparent when examining distributions and rank-based relationships.

Why ‘passing CWV’ is not a differentiator

One reason the positive correlation many expect does not appear is simple. Passing Core Web Vitals is no longer rare.

In this dataset, the majority of pages already met recommended thresholds, especially for CLS. When most of the population clears a bar, clearing it does not distinguish you. It merely keeps you in contention.

AI systems are not selecting between pages because one loads in 1.8 seconds and another in 2.3 seconds. They are selecting between pages because one explains a concept clearly, aligns with established sources, and satisfies the user’s intent, whereas the other does not.

Core Web Vitals ensure that the experience does not actively undermine those qualities. They do not substitute for them.

Reframing the role of Core Web Vitals in AI strategy

The implication is not that Core Web Vitals are unimportant. It is that their role has been misunderstood.

In an AI-led search environment, Core Web Vitals function as a risk-management tool, not a competitive strategy. They prevent pages from falling out of contention due to poor experience signals.

This reframing has practical consequences for developing an AI visibility strategy.

Chasing incremental CWV gains across already acceptable pages is unlikely to deliver returns in AI visibility. It consumes engineering effort without changing the underlying selection logic AI systems apply.

Targeting the extreme tail, however, does matter. Pages with severely poor performance generate negative behavioral signals that can suppress trust, reduce reuse, and weaken downstream learning signals.

The objective is not to make everything perfect. It is to ensure that the content you want AI systems to rely on is not compromised by avoidable technical failure.
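
A minimal sketch of that triage, in Python, assuming you already have per-URL field data and applying Google’s published “poor” thresholds (LCP above 4.0 seconds, CLS above 0.25):

    # Google's published "poor" thresholds for LCP and CLS.
    LCP_POOR_S = 4.0
    CLS_POOR = 0.25

    # Hypothetical per-URL field data: (LCP in seconds, CLS).
    pages = {
        "/pricing": (2.1, 0.02),
        "/blog/guide": (3.6, 0.08),
        "/legacy/landing": (11.4, 0.41),
    }

    # Only the extreme tail is worth the engineering effort.
    tail = {url: vitals for url, vitals in pages.items()
            if vitals[0] > LCP_POOR_S or vitals[1] > CLS_POOR}

    for url, (lcp, cls) in sorted(tail.items(), key=lambda kv: -kv[1][0]):
        print(f"fix first: {url} (LCP {lcp}s, CLS {cls})")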

Why this matters

As AI systems increasingly mediate discovery, brands are seeking controllable levers. Core Web Vitals feel attractive because they are measurable, familiar, and actionable.

The risk is mistaking measurability for impact.

This analysis suggests a more disciplined approach. Treat Core Web Vitals as table stakes. Eliminate extreme failures. 

Protect your most important content from technical debt. Then shift focus back to the factors AI systems actually use to infer value, such as clarity, consistency, intent alignment, and behavioral validation.

Core Web Vitals: A gatekeeper, not a differentiator

Based on an analysis of 107,352 AI visible webpages, the relationship between Core Web Vitals and AI performance is real, but limited.

There is no strong positive correlation. Improving CWV beyond baseline thresholds does not reliably improve AI visibility.

However, a measurable negative relationship exists at the extremes. Severe performance failures are associated with poorer AI outcomes, mediated through user behavior and engagement.

Core Web Vitals are therefore best understood as a gate, not a signal of excellence.

In an AI-led search landscape, this clarity matters.

7 Marketing AI Adoption Challenges (And How to Fix Them)

You’ve likely invested in AI tools for your marketing team, or at least encouraged people to experiment.

Some use the tools daily. Others avoid them. A few test them quietly on the side.

This inconsistency creates a problem.

An MIT study found that 95% of AI pilots fail to show measurable ROI.

Scattered marketing AI adoption doesn’t translate to proven time savings, higher output, or revenue growth.

AI usage ≠ AI adoption ≠ effective AI adoption.

To get real results, your whole team needs to use AI systematically with clear guidelines and documented outcomes.

But getting there requires removing common roadblocks.

In this guide, I’ll explain seven marketing AI adoption challenges and how to overcome them. By the end, you’ll know how to successfully roll out AI across your team.

Free roadmap: I created a companion AI adoption roadmap with step-by-step tasks and timeframes to help you execute your pilot. Download it now.


First up: One of the biggest barriers to AI adoption — lack of clarity on when and how to use it.

1. No Clear AI Use Cases to Guide Your Team

Companies often mandate AI usage but provide limited guidance on which tasks it should handle.

In my experience, this is one of the most common AI adoption challenges teams face, regardless of industry or company size.

Reddit – r/antiwork – AI usage

Vague directives like “use AI more” leave people guessing.

The solution is to connect tasks to tools so everyone knows exactly how AI fits into their workflow.

The Fix: Map Team Member Tasks to Your Tech Stack

Start by gathering your marketing team for a working session.

Ask everyone to write down the tasks they perform daily or weekly. (Not job descriptions, but actual tasks they repeat regularly.)

Then look for patterns.

Which tasks are repetitive and time-consuming?

Common AI Use Cases for Marketing Teams

Maybe your content team realizes they spend four hours each week manually tracking competitor content to identify gaps and opportunities. That’s a clear AI use case.

Or your analytics lead notices they are wasting half a day consolidating campaign performance data from multiple regions into a single report.

AI tools can automatically pull and format that data.

Once your team has identified use cases, match each task to the appropriate tool.

Task-to-Tool Decision

After your workshop, create assignments for each person based on what they identified in the session.

For example: “Automate competitor tracking with [specific tool].”

When your team knows exactly what to do, adoption becomes easier.

2. No Structured Plan to Roll Out AI Across the Organization

If you give AI tools to everyone at once, don’t be surprised if you get low adoption in return.

The issue isn’t your team or the technology. It’s launching without testing first.

The Fix: Start with a Pilot Program

A pilot program is a small-scale test where one team uses AI tools. You learn what works, fix problems, and prove value — before rolling it out to everyone else.

A company-wide launch doesn’t give you this learning period.

Everyone struggles with the same issues at once. And nobody knows if the problem is the tool, their approach, or both.

Which means you end up wasting months (and money) before realizing what went wrong.

Two Approaches to Marketing AI Adoption

Plan to run your pilot for 8-12 weeks.

Note: Your pilot timeline will vary by team.

Small teams can move fast and test in 4-8 weeks. Larger teams might need 3-4 months to gather enough feedback.

Start with three months as your baseline. Then adjust based on how quickly your team adapts.


Content, email, or social teams work best because they produce repetitive outputs that show AI’s immediate value.

Select 3-30 participants from this department, depending on your team size.

(Smaller teams might pilot with 3-5 people. Larger organizations can test with 20-30.)

Then, set measurable goals with clear targets you can track. Like:

  • Cut blog production time from 8 hours to 5 hours
  • Reduce email draft revisions from 3 rounds to 1
  • Create 50 social media posts weekly instead of 20

Schedule weekly meetings to gather feedback throughout the pilot.

The pilot will produce department-specific workflows. But you’ll also discover what transfers: which training methods work, where people struggle, and what governance rules you need.

When you expand to other departments, they’ll adapt these frameworks to their own AI tasks.

After three months, you’ll have proven results and trained users who can teach the next group.

3-Month Pilot

At that point, expand the pilot to your second department (or next batch of the same team).

They’ll learn from the first group’s mistakes and scale faster because you’ve already solved common problems.

Pro tip: Keep refining throughout the pilot.

  • Update prompts when they produce poor results
  • Add new tools when you find workflow gaps
  • Remove friction points the moment they appear


Your third batch will move even quicker.

Within a year, you’ll have organization-wide marketing AI adoption with measurable results.

3. Your Team Lacks the Training to Use AI Confidently

Most marketing teams roll out AI tools without training team members how to use them.

In fact, only 39% of people who use AI at work have received any training from their company.

61% of workers who use AI at work received no training from their company

And when training does exist, it might focus on generic AI concepts rather than specific job applications.

The answer is better training that connects to the work your team does.

The Fix: Role-Specific Training

Generic training explains how AI works. Role-specific training shows people how to use AI in their actual jobs.

Here’s the difference:

Role | Generic Training (Lower Priority) | Role-Specific Training (Start Here)
Social Media Manager | AI concepts and how large language models work | How to automate content calendars and schedule posts faster
SEO Specialist | Understanding neural networks and machine learning | AI-powered keyword research and competitor analysis
Email Marketer | Machine learning algorithms and data processing | Using AI for personalization and subject line testing
Content Writer | How AI models generate text and natural language processing | Using AI to research topics, create outlines, and edit drafts
Paid Ads Manager | Deep learning fundamentals and algorithmic optimization | AI tools for ad copy testing, audience targeting, and bid management

When training connects directly to someone’s daily tasks, they actually use what they learn.

For example, Mastercard applies this approach with three types of training:

  • Foundational knowledge for everyone
  • Job-specific applications for different roles
  • Reskilling programs where needed

Mastercard – Putting the “I” in AI

Companies like KPMG, Accenture, and IKEA have also developed dedicated AI training programs for their teams.

This is likely because they learned that generic training creates enterprise AI adoption challenges at scale.

Employees complete courses but never apply what they learned to their actual work.

Ikea – AI training programs for their teams

But you don’t need enterprise-scale resources to make this work.

Start by mapping what each role actually does with AI.

For example:

  • Your content team uses AI for research, strategy, outlines, and drafts
  • Your ABM team uses it for account research and personalized outreach
  • Your social team uses it for video creation and caption variations
  • Your marketing ops team uses it for workflow automation and data integration

Once you know what each role needs, pick your training approach.

Platforms like Coursera and LinkedIn Learning offer specific AI training programs that work well for flexible, self-paced learning.

Coursera – GenAI for PR Specialists

Training may also be available from your existing tools.

Check whether your current marketing platforms offer AI training resources, such as courses or documentation.

For example, Semrush Academy offers various training programs that also cover its AI capabilities.

Semrush Academy – AI Courses

For teams with highly specific workflows, external trainers can be useful.

This costs more. But it delivers the most relevant results because the trainer focuses only on what your team actually needs to learn.

For example, companies like Section offer AI adoption programs for enterprises, including coaching and custom workshops.

Sectionai – Homepage

But keep in mind that training alone won’t sustain marketing AI adoption.

AI tools evolve constantly, and your team needs continuous support to adapt.

Create these support systems:

  • Set up a dedicated Slack channel for AI questions where your team can share wins and troubleshoot problems
  • Run weekly Q&A sessions where people discuss specific challenges
  • Update training materials as new features and use cases emerge

4. Team Members Fear AI Will Replace Their Roles

Employees may resist AI marketing adoption because they fear losing their jobs to automation.

Headlines about AI replacing workers don’t help.

Forbes – AI Is Killing Marketing

Your goal is to address these fears directly rather than dismissing them.

The Fix: Have Honest Conversations About Job Security

Meet with each team member and walk through how AI affects their workflow.

Point out which repetitive tasks AI will automate. Then explain what they’ll work on with that freed-up time.

Be careful about the language you use. Be empathetic and reassuring.

For example, don’t say “AI makes you more strategic.”

Say: “AI will pull performance reports automatically. You’ll analyze the insights, identify opportunities, and make strategic decisions on budget allocation.”

One is vague. The other shows them exactly how their role evolves.

How to Address AI Fears With Your Team

Don’t just spring changes on your team. Give them a clear timeline.

Explain when AI tools will roll out, when training starts, and when you expect them to start using the new workflows.

For example: “We’re implementing AI for competitor tracking in Q2. Training happens in March. By April, this becomes part of your weekly process.”

When people know what’s coming and when, they have time to prepare instead of panicking.

Sample Timeline

Pro tip: Let people choose which AI features align with their interests and work style.

Some team members might gravitate toward AI for content creation. Others prefer using it for data analysis or reporting.

When people have autonomy over which features they adopt first, resistance decreases. They’re exploring tools that genuinely interest them rather than following mandates.


5. Your Team Resists AI-Driven Workflow Changes

People resist AI when it disrupts their established workflows.

Your team has spent years perfecting their processes. AI represents change, even when the benefits are obvious.

Resistance gets stronger when organizations mandate AI usage without considering how people actually work.

Reddit – Why AI

New platforms can be especially intimidating.

It means new logins, new interfaces, and completely new workflows to learn.

Rather than forcing everyone to change their workflows at once, let a few team members test the new approach first using familiar tools.

The Fix: Start with AI Features in Existing Tools

Your team likely already uses HubSpot, Google Ads, Adobe, or similar platforms daily.

When you use AI within existing tools, your team learns new capabilities without learning an entirely new system.

If you’re running a pilot program, designate 2-3 participants as AI champions.

Their role goes beyond testing — they actively share what they’re learning with the broader team.

What Do AI Champions Do

The AI champions should be naturally curious about new tools and respected by their colleagues (not just the most senior people).

Have them share what they discover in a team Slack channel or during standups:

  • Specific tasks that are now faster or easier
  • What surprised them (good or bad)
  • Tips or advice on how others can use the tool effectively

When others see real examples, such as “I used Social Content AI to create 10 LinkedIn posts in 20 minutes instead of 2 hours,” it carries more weight than reassurance from leadership.

Slack – Message

For example, if your team already uses a tool like Semrush, your champions can demonstrate how its AI features improve their workflows.

Keyword Magic Tool’s AI-powered Personal Keyword Difficulty (PKD%) score shows which keywords your site can realistically rank for — without requiring any manual research or analysis.

Keyword Magic Tool – Newsletter platform – PKD

AI Article Generator creates SEO-friendly drafts from keywords.

Your content writers can input a topic, set their brand voice, and get a structured first draft in minutes. This reduces the time spent staring at a blank page.

Semrush – AI Article Generator

Social Content AI handles the repetitive parts of social media planning. It generates post ideas, copy variations, and images.

Your social team can quickly build out a week’s content calendar instead of creating each post from scratch.

Semrush – Social Content AI Kit – Ideas by topic

Don’t have a Semrush subscription? Sign up now and get a 14-day free trial, plus a special 17% discount on annual plans.

6. No Governance or Guardrails to Keep AI Usage Safe

Without clear guidelines, your team may either avoid AI entirely or use it in ways that create risk.

In fact, 57% of enterprise employees input confidential data into AI tools.

Types of Sensitive Data Employees Input Into AI Tools

They paste customer data into ChatGPT without realizing it violates data policies.

Or publish AI-generated content without approval because the review process was never explained.

Your team needs clear guidelines on what’s allowed, what’s not, and who approves what.

Free AI policy template: Need help creating your company’s AI policy? Download our free AI Marketing Usage Policy template. Customize it with your team’s tools and workflows, and you’re ready to go.


The Fix: Create a One-Page AI Usage Policy

When creating your policy, keep it simple and accessible. Don’t create a 20-page document nobody will read.

Aim for 1-2 pages that are straightforward and easy to follow.

Include four key areas to keep AI usage both safe and productive.

  • Approved Tools: List which AI tools your team can use — both standalone tools and AI features in platforms you already use. Example: “Approved: ChatGPT, Claude, Semrush’s AI Article Generator, Adobe Firefly”
  • Data Sharing Rules: Define specifically what data can and can’t be shared with AI tools. Example: “Safe to share: Product descriptions, blog topics, competitor URLs. Never share: Customer names, email addresses, revenue data, internal campaign plans, pricing strategies, unannounced product details”
  • Review Requirements: Document who reviews what type of content before publication. Example: “Social posts: Peer review. Blog posts: Content lead approval. Legal/compliance content: Legal team review”
  • Approval Workflows (optional): Clarify who approves AI content at each stage. Example: “Internal drafts: Content team. Customer-facing materials: Marketing director. Compliance-related content: Legal sign-off”

Beyond documenting the rules, establish who team members should contact when they encounter situations the policy doesn’t address.

Designate a department lead, governance contact, or weekly office hours as the escalation point for:

  • Scenarios not covered in your guidelines
  • Technical site issues with approved AI tools
  • Concerns about whether AI-generated content is accurate or appropriate
  • Questions about data sharing

Marketing AI Escalation Process

The goal is to give them a clear path to get help, rather than guessing or avoiding AI altogether.

Then, post the policy where your team will see it.

This might be your Slack workspace, project management tool, or a pinned document in your shared drive.

AI Policy document

And treat it as a living document.

When the same question comes up multiple times, add the answer to your policy.

For example, if three people ask, “Can I use AI to write email subject lines?” update your policy to explicitly say yes (and clarify who reviews them before sending).

AI Governance Checklist

7. No Reliable Way to Measure AI’s Impact or ROI

Without clear proof that AI improves their results, team members may assume it’s just extra work and return to old methods.

And if leadership can’t see a measurable impact, they might question the investment.

This puts your entire AI program at risk.

Avoid this by establishing the right metrics before implementing AI.

The Fix: Track Business Metrics (Not Just Efficiency)

Here’s how to measure AI’s business impact properly.

Pick 2-3 metrics your leadership already reviews in reports or meetings.

These are typically:

  • Leads generated
  • Conversion rate
  • Revenue growth
  • Customer acquisition
  • Customer retention

Measure Marketing AI's Business Impact

These numbers demonstrate to your team and leadership that AI is helping your business.

Then, establish your baseline by recording your current numbers. (Do this before implementing AI tools.)

For example, if you’re tracking leads and conversion rate, write down:

  • Current monthly leads: 200
  • Current conversion rate: 3%

This baseline lets you show your team (and leadership) exactly what changed after implementing AI.

Pro tip: Avoid making multiple changes simultaneously during your pilot or initial rollout.

If you implement AI while also switching platforms or restructuring your team, you won’t know which change drove results.

Keep other variables stable so you can clearly attribute improvements to AI.


Once AI is in use, check your metrics monthly to see if they’re improving. Use the same tools you used to record your baseline.

Write down your current numbers next to your baseline numbers.

For example:

  • Baseline leads (before AI): 200 per month
  • Current leads (3 months into AI): 280 per month

But don’t just check if numbers went up or down.

Look for patterns:

Did one specific campaign or content type perform better after using AI?

Are certain team members getting better results than others?

Track individual output alongside team metrics.

For example, compare how many blog posts each writer completes per week, or email open rates by the person who drafted them.

Email report overview page

If someone’s consistently performing better, ask them to share their AI workflow with the team.

This shows you what’s working, and helps the rest of your team improve.

Share results with both your team and leadership regularly.

When reporting, connect AI’s impact to the metrics you’ve been tracking.

For example:

Say: “AI cut email creation time from 4 hours to 2.5 hours. We used that time to run 30% more campaigns, which increased quarterly revenue from email by $5,000.”

Not: “We saved 90 hours with AI email tools.”

The first shows business impact — what you accomplished with the time saved. The second only shows time saved.

Other examples of how to frame your reporting include:

How to Report AI Results to Leadership

Build Your Marketing AI Adoption Strategy

When AI usage is optional, undefined, or unsupported, it stays fragmented.

Effective marketing AI adoption looks different.

It’s built on:

  • Role-specific training people actually use
  • Guardrails that reduce uncertainty and risk
  • Metrics that drive business outcomes

When those pieces are in place, AI becomes part of how work gets done.

If you want a step-by-step implementation plan, download our Marketing AI Adoption Roadmap.

Need help choosing which AI tools to pilot? Our AI Marketing Tools guide breaks down the best options by use case.

The post 7 Marketing AI Adoption Challenges (And How to Fix Them) appeared first on Backlinko.

How to choose a link building agency in the AI SEO era by uSERP

Remember when a handful of links from sites in your niche could drive steady organic traffic? That era is over.

Today, Google’s AI Overviews and the rise of answer engines like ChatGPT raise the bar. You have to do more to stay visible. Hiring an experienced link building agency is one efficient way to meet that challenge.

It’s also one of the most important investments you’ll make. The right partner doesn’t just build links. They position your brand as a trusted, cited source in the AI era.

So how do you choose the right agency for your company?

While the interface has changed, the core ranking signals remain largely the same. What’s changed is their priority.

LLMs need credible sources to ground their answers. That makes authoritative link building more important than ever.

This article shows you how to vet and choose a link building agency that understands these new priorities and can help your brand win trust in the AI-driven SEO landscape.

How link building and SEO are changing

Gartner predicted search engine volume to drop by 25% as AI takes over more answers. That makes working with an agency that understands AI SEO essential.

But how do you know which agencies actually do?

The real indicators are holistic authority and AI visibility. Only one in five links cited in Google’s AI Overviews matched a top-10 organic result, according to an Authoritas study. Even more telling, 62.1% of cited links or domains didn’t rank in the top 10 at all.

The takeaway is simple. AI systems and search engines don’t evaluate websites the same way. We’re no longer building links just for Google’s crawler.

Link equity alone isn’t enough. Sites need topical authority, brand mentions, and real market presence. The goal is to build a footprint that AI models recognize and can’t ignore.

The new criteria: Evaluating a link building agency for AI SEO

Choosing the right link building agency comes down to how well they prioritize the factors that matter now.

This section shows you what to look for.

Prioritizing quality, relevance, and traffic

I see this mistake all the time. A marketing director evaluates link quality based only on Domain Rating (DR).

High DR matters, but at uSERP, we know it’s not the finish line. You should also look for:

  • Relevance: A link from a DR 60, niche-specific site in your industry often beats a DR 80 general news site that covers everything from crypto to keto.
  • Minimum traffic standards: If a site doesn’t rank for keywords or attract real traffic, its links won’t help you rank. That’s why strict traffic minimums matter.

When vetting an agency, ask for contractual site-traffic guarantees.

A confident agency won’t hesitate to sign a Statement of Work that guarantees every link comes from a site with a minimum traffic threshold, such as 5,000+ monthly organic visitors.

If they won’t put traffic minimums in writing, they’re likely planning to place links on “ghost town” sites. These domains appear strong, but they lack a real audience; placements there protect the agency’s margins rather than supporting your growth.

Look for a content-driven approach and digital PR

Links don’t exist in a vacuum. The strongest ones come from being part of a real conversation.

The best agencies no longer operate like traditional link builders. They act more like content marketing and digital PR teams. 

Instead of asking for links, the best agencies create linkable assets — data studies, expert commentary, and in-depth guides that journalists and publishers want to cite — because they understand:

  • Google’s algorithms and AI models are continually getting better at identifying paid placements. A content-led approach keeps links natural, editorial, and valuable to readers.
  • Guest posting in the AI SEO era isn’t about a disposable 500-word article. It’s about thought leadership that positions your CEO as a credible expert.

At uSERP, for example, we created — and continuously update — our State of Backlinks for SEO report.

Red flags: Recognizing outdated or dangerous tactics

Choosing the wrong partner doesn’t just waste your budget. It puts your brand reputation — and potentially your company’s future — at risk.

Here are the biggest red flags to avoid when hiring an agency:

Guaranteed rankings

No one can guarantee a number-one ranking on Google. Any agency that promises specific keyword positions on a fixed timeline is likely doing one of two things:

  • Using risky, short-term tactics to force a temporary spike.
  • Selling you snake oil.

These agencies often rely on private blog networks (PBNs) or aggressive anchor text manipulation to manufacture fast results.

You might see an early jump, but the crash that follows—and the risk of a penalty when Google’s spam systems catch up—is never worth it.

Lack of transparency

If an agency won’t explain how they earn links or where placements will come from before you pay, walk away.

Reputable agencies are transparent. They’ll show real examples of past placements and share relevant case studies from your industry.

Agencies that hide their inventory usually do it for a reason. Those sites are often part of a low-quality network or link farm.

Self-serve link portfolios

If you’re a marketer or SEO on LinkedIn, chances are you’ve received a message like this:

This is a common tactic among low-quality link builders: reselling backlinks from a shared inventory. I understand the appeal.

Strategic link acquisition is hard. Buying and flipping links is easy.

The problem — for you — is the footprint. If an agency can secure a link by filling out a form, anyone can. That includes casino affiliates, gambling sites, adult content, and outright scammers.

That’s not a natural link profile. Google has almost certainly already identified and burned those domains.

In the best case, you pay for a link that passes zero authority. In the worst case, Google flags your site as part of a link scheme.

Dirt-cheap packages

SEO and link building deliver incredible ROI, but they aren’t cheap.

You can’t buy a high-quality article with a real, earned link from an authoritative site for $50. Speaking as someone who runs an AI SEO agency, the true cost of quality content, editing, outreach, and relationship building is at least an order of magnitude higher.

That’s why cheap packages that promise multiple high-authority links are a major red flag. They almost always rely on:

  • Fully AI-generated, barely edited content.
  • Low-value link farms or resold inventory.
  • Toxic backlinks.

None of those will help you show up on AI search engines or Google.

Partnering with a link building agency for a sustainable market presence

Link building in the AI era is a long-term investment. It’s about building a durable market presence, not chasing quick wins.

The right partner sees themselves as an extension of your team. They care about:

  • Your backlink gap compared to competitors.
  • Your brand mentions across LLMs.
  • Your overall search and AI visibility.

They help you navigate content syndication, backlink audits, content marketing, and modern link building strategies with a unified approach.

If you’re ready to move past vanity metrics and start building authority that drives revenue and AI citations, it’s time to be selective about who you trust with your domain.

The right link building agency is out there. You just need to know how to spot them.

News publishers expect search traffic to drop 43% by 2029: Report

News executives expect search referrals to drop by more than 40% over the next three years, as search engines continue evolving into AI-driven answer engines, according to a new Reuters Institute report. That shift is squeezing publisher traffic and accelerating a move away from classic SEO toward AEO and GEO.

Why we care. Google’s AI Overviews and chatbot-style search are changing how people get information, often without clicking through. SEO visibility, attribution, and ROI models built on old playbooks are breaking fast.

What’s happening. Publishers expect search traffic to nearly halve. Survey respondents forecast search engine traffic down 43% within three years, with a fifth of respondents expecting losses above 75%.

  • Google referrals are already falling. Chartbeat data cited in the report show organic Google search traffic down 33% globally from November 2024 to November 2025, and down 38% in the U.S. over the same period.
  • AI Overviews are a major factor. Google’s AI Overviews appear at the top of roughly 10% of U.S. search results, with studies showing higher zero-click behavior when they appear, according to the report.
  • The impact is uneven. Lifestyle and utility content (e.g., weather, TV guides, horoscopes) appear to be the most exposed, while hard news queries have been more insulated so far.

SEO to AEO and GEO. The Reuters Institute expects rapid growth in answer engine optimization (AEO) and generative engine optimization (GEO) as publishers and agencies adapt to AI-led interfaces.

  • AEO and GEO services are set to surge. Agencies are repurposing SEO playbooks for chatbots and overview boxes, with new demands on how content is written, structured, and surfaced.
  • Publishers are dialing back traditional SEO. Many survey respondents plan to reduce investment in classic Google SEO and focus more on distribution through AI platforms like ChatGPT, Gemini, and Perplexity.

Between the lines. This is about more than rankings. It’s about distribution inside platforms that publishers do not control.

  • Chat referrals are growing, but remain small. Traffic from ChatGPT is rising quickly, but the report calls it a rounding error compared with Google.
  • Attribution is getting murkier. If AI agents summarize content and complete tasks for users, it becomes unclear what counts as a visit and how monetization works.
  • Licensing is becoming a parallel strategy. As referral risk grows, publishers are turning to AI licensing, revenue-sharing deals, and negotiated citation or prominence as another path to value.

What to watch. A new KPI stack is emerging. Metrics like share of answer, citation visibility, and brand recall may matter as much as clicks.

  • Utility content faces the biggest squeeze. Categories built for fast answers are easiest for AI systems to commoditize.
  • A measurement arms race is coming. Expect new tools to separate human visits from agent consumption and to measure value beyond raw traffic.

Bottom line. Publishers are bracing for a world where search still matters, but clicks matter less. The report’s message is clear: when AI answers become the interface, AEO, GEO, and attribution strategy are no longer optional. They are a core modern search strategy.

The report. Journalism, media, and technology trends and predictions 2026

Google opens Olympic live sports inventory to biddable CTV buys

Live sports advertising is getting more programmatic — and more measurable.

Driving the news. Google is expanding biddable live sports in Display & Video 360, giving advertisers programmatic access to NBCUniversal’s Olympic Winter Games inventory ahead of a crowded 2026 global sports calendar.

Why we care. Live sports remain one of the few media environments that consistently deliver massive, attentive audiences. By moving premium sports inventory into biddable CTV, Google gives advertisers more control, stronger measurement, and simpler activation — without sacrificing reach.

What’s new. Advertisers can now pair Google audience signals with NBCUniversal’s live sports CTV inventory to reach fans on the big screen and re-engage them across YouTube and other Google surfaces.

  • New household-level frequency management reduces overexposure, while Google’s AI-powered cross-device conversion tracking connects CTV impressions to downstream purchases at no additional cost.
  • Google is also streamlining access to live sports with a redesigned Marketplace.
  • You can activate curated sports packages in just a few clicks instead of managing fragmented media buys.

The big picture. As fans move fluidly between connected TV, YouTube, Search and social feeds, advertisers are under pressure to follow attention across screens. Google is positioning Display & Video 360 as the hub that connects those moments, from the living room to mobile.

Bottom line: By unlocking Olympic and live sports inventory inside Display & Video 360, Google is making premium sports advertising easier to buy, easier to measure, and far more accountable.

Google expands Shopping promotion rules ahead of 2026

Google is broadening what counts as an eligible promotion in Shopping, giving merchants more flexibility heading into next year.

Driving the news. Google is updating its Shopping promotion policies to support additional promotion types, including subscription discounts, common promo abbreviations, and — in Brazil — payment-method-based offers.

Why we care. Promotions are a key lever for visibility and conversion in Shopping results. These changes unlock more promotion formats that reflect how consumers actually buy today, especially subscriptions and cashback offers. Greater flexibility in promotion types and language reduces disapprovals and makes Shopping ads more competitive at key decision moments.

For retailers relying on subscriptions or local payment incentives, this update creates new ways to drive visibility and conversion on Google Shopping.

What’s changing. Google will now allow promotions tied to subscription fees, including free trials and percent- or amount-off discounts. Merchants can set these up by selecting “Subscribe and save” in Merchant Center or by using the subscribe_and_save redemption restriction in promotion feeds. Examples include a free first month on a premium subscription or a steep discount for the first few billing cycles.

Google is also loosening restrictions on language. Common promotional abbreviations like BOGO, B1G1, MRP and MSRP are now supported, making it easier for retailers to mirror real-world retail messaging without risking disapproval.

In Brazil only, Google will now support promotions that require a specific payment method, including cashback offers tied to digital wallets. Merchants must select “Forms of payment” in Merchant Center or use the forms_of_payment redemption restriction. Google says there are no immediate plans to expand this change to other markets.
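
For merchants who manage promotions through feeds rather than the Merchant Center UI, the change amounts to one extra attribute on the promotion row. The sketch below is hypothetical: only the subscribe_and_save value is named in Google’s announcement, and the surrounding attribute names should be verified against Google’s current promotions feed specification before use.

    promotion_id:           sub-save-q1
    product_applicability:  ALL_PRODUCTS
    offer_type:             NO_CODE
    long_title:             First month free on premium subscription
    redemption_restriction: subscribe_and_save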

Between the lines. These updates signal Google’s intent to better align Shopping promotions with modern retail models — especially subscriptions and localized payment behaviors — while reducing friction for merchants.

The bottom line. By expanding eligible promotion types, Google is giving advertisers more room to compete on value, not just price, when Shopping policies update in January 2026.

Apple is finally upgrading Siri, and Google Gemini will power it

Apple is teaming up with Google to power its next generation of AI features, including a long-awaited Siri upgrade.

What’s happening: Apple will use Google’s Gemini AI models and cloud infrastructure to support future Apple Foundation Models. The multi-year partnership is expected to roll out later this year.

Why we care. With Gemini powering Siri, Apple’s assistant should become a true AI answer engine. That will likely change how millions of iOS users find information, ask questions, and interact with search.

Driving the news. Apple said it chose Google after a “careful evaluation,” calling Gemini the “most capable foundation” for its AI ambitions.

  • We learned in September that Apple was in talks to use a custom Gemini model to power a revamped Siri.
  • Apple delayed its Siri AI upgrade last year, despite marketing the feature. The delay intensified scrutiny of Apple’s AI strategy.

What they’re saying. Here’s a statement Google shared via X:

Apple and Google have entered into a multi-year collaboration under which the next generation of Apple Foundation Models will be based on Google’s Gemini models and cloud technology. These models will help power future Apple Intelligence features, including a more personalized Siri coming this year. After careful evaluation, Apple determined that Google’s AI technology provides the most capable foundation for Apple Foundation Models and is excited about the innovative new experiences it will unlock for Apple users. Apple Intelligence will continue to run on Apple devices and Private Cloud Compute, while maintaining Apple’s industry-leading privacy standards.

The bigger picture. Google briefly crossed a $4 trillion market cap last week, surpassing Apple for the first time since 2019.

  • Google’s Gemini 3 model launched late last year as part of its broader AI push.
  • Apple largely stayed out of the AI arms race that followed ChatGPT’s launch in late 2022 while rivals poured billions into models, chips, and cloud infrastructure.

3 PPC myths you can’t afford to carry into 2026

PPC advice in 2025 leaned hard on AI and shiny new tools. 

Much of it sounded credible. Much of it cost advertisers money. 

Teams followed platform narratives instead of business constraints. Budgets grew. Efficiency did not.

As 2026 begins, carrying those beliefs forward guarantees more of the same. 

This article breaks down three PPC myths that looked smart in theory, spread quickly in 2025, and often drove poor decisions in practice. 

The goal is simple: reset priorities before repeating expensive mistakes.

Myth 1: Forget about manual targeting, AI does it better

We have seen this claim everywhere: 

AI outperforms humans at targeting, and manual structures belong to the past. 

Consolidate campaigns as much as possible. 

Let AI run the show.

There is truth in that – but only under specific conditions. 

AI performance depends entirely on inputs. No volume means no learning. No learning means no results. 

A more dangerous version of the same problem is poor signal quality. No business-level conversion signal means no meaningful optimization.

For ecommerce brands that feed purchase data back into Google Ads and consistently generate at least 50 conversions per bid strategy each month, trusting AI with targeting can make sense. 

In those cases, volume and signal quality are usually sufficient. Put simply, AI favors scale and clear outcomes.

That logic breaks down quickly for low-volume campaigns, especially those optimizing to leads as the primary conversion. 

Without enough high-quality conversions, AI cannot learn effectively. The result is not better performance, but automation without improvement.

How to fix this

Before handing targeting decisions entirely to AI, you should be able to answer “yes” to all three of the questions below:

  • Are campaigns optimized against a business-level KPI, such as CAC or a ROAS threshold?
  • Are enough of those conversions being sent back to the ad platforms?
  • Are those conversions reported quickly, with minimal latency?

If the answer to any of these is no, 2026 should be about reassessing PPC fundamentals.
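
If it helps, the three questions collapse into a simple gate. A minimal sketch in Python; the thresholds are illustrative, and the 50-conversion figure comes from the ecommerce example above.

    def ready_for_ai_targeting(optimizes_business_kpi: bool,
                               monthly_conversions: int,
                               reporting_latency_hours: float) -> bool:
        """Rough gate for handing targeting over to automated bidding.
        Thresholds are illustrative, not platform rules."""
        return (optimizes_business_kpi
                and monthly_conversions >= 50
                and reporting_latency_hours <= 24)

    print(ready_for_ai_targeting(True, 32, 6.0))  # False: not enough signal volume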

Do not be afraid to go old school when the situation calls for it. 

In 2025, I doubled a client’s margin by implementing a match-type mirroring structure and pausing broad match keywords.

It ran counter to prevailing best practices, but it worked. 

The decision was grounded in historical performance data, shown below:

Match type | Cost per lead | Customer acquisition cost | Search impression share
Exact | €35 | €450 | 24%
Phrase | €34 | €1,485 | 17%
Broad | €33 | €2,116 | 18%

This is a classic case of Google Ads optimizing to leads and delivering exactly what it was asked to do: drive the lowest possible cost per lead across all audiences. 

The algorithm is literal. It does not account for downstream outcomes, such as business-level KPIs.
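
The gap becomes obvious once you back out the implied lead-to-customer rate (cost per lead divided by customer acquisition cost), as in this short sketch:

    # Implied lead-to-customer rate = cost per lead / customer acquisition cost.
    for match, cpl, cac in [("Exact", 35, 450), ("Phrase", 34, 1485), ("Broad", 33, 2116)]:
        print(f"{match}: {cpl / cac:.1%} of leads become customers")

    # Exact ~7.8%, Phrase ~2.3%, Broad ~1.6%: near-identical CPLs hide a 5x lead-quality gap.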

By taking back control, you can direct spend toward top-performing audiences that are not yet saturated. In this case, that meant exact match keywords.

If you are not comfortable with older structures like match-type mirroring – or even SKAGs – learning advanced semantic techniques is a viable alternative. 

Those approaches can provide a more controlled starting point without relying entirely on automation.

Myth 2: Meta’s Andromeda means more ads, better results

This myth is particularly frustrating because it sounds logical and spreads quickly. 

The claim is simple: more creative means more learning, which leads to better auction performance. 

In practice, it far more reliably increases creative production costs than it improves results – and often benefits agencies more than advertisers.

Creative volume only helps when ad platforms receive enough high-quality conversion signals. 

Without those signals, more ads simply mean more assets to rotate. The AI has nothing meaningful to learn from.

Andromeda generated significant attention in 2025, and it gave marketers a new term to rally around. 

In reality, Andromeda is one component of Meta’s ad retrieval system:

  • “This stage [Andromeda] is tasked with selecting ads from tens of millions of ad candidates into a few thousand relevant ad candidates.”

That positioning coincided with Meta’s broader pivot from the metaverse narrative to AI. It worked. 

But it also led some teams to conclude that aggressive creative diversification was now required – more hooks, more formats, more variations, increasingly produced with generative AI.

Similar to Google Ads’ push around automated bidding, broad match, and responsive search ads, Andromeda has become a convenient justification for adopting Advantage+ targeting and Advantage+ creative. 

Those approaches can perform well in the right conditions. They are not universally reliable.



How to fix this

Creative diversification helps platforms match messages to people and contexts. That value is real. It is also not new. The same fundamentals still apply:

  • Creative testing requires a strategy. Testing without intent wastes resources.
  • Measurement must be planned in advance. Otherwise, you cannot tell which variants actually worked.
  • Business-level KPIs need to exist in sufficient volume to matter.

This myth breaks down most clearly when resources are limited – budget, skills, or time. In those cases, platforms often rotate ads with little signal-driven direction.

In those cases, CRO is a better use of budget:

  • Review tracking. The more conversions you capture, the better the platforms can optimize.
  • Improve the customer journey to increase conversion rates and signal volume.
  • Map higher-margin products to support more efficient spend (see the sketch after this list).
  • Test new channels or networks using budget saved from excessive creative production.
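
On the margin point, one concrete option is value-based conversion tracking: report margin, not revenue, as the conversion value, so spend drifts toward profitable products. A minimal sketch, assuming a hypothetical product catalog:

```python
# Hypothetical catalog: send margin, not revenue, back to the platform.
PRODUCT_MARGINS = {
    "sku-basic":   {"price": 40.0, "cost": 32.0},
    "sku-premium": {"price": 90.0, "cost": 45.0},
}

def conversion_value(sku: str, quantity: int = 1) -> float:
    """Margin-based value to report as the conversion value."""
    p = PRODUCT_MARGINS[sku]
    return (p["price"] - p["cost"]) * quantity

# A premium sale is worth 5.6x a basic sale to the bidder,
# even though its revenue is only 2.25x higher.
print(conversion_value("sku-basic"))    # 8.0
print(conversion_value("sku-premium"))  # 45.0
```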

The pattern is consistent. Creative scale follows signal scale, not the other way around.

Myth 3: GA4 and attribution are flawed, but marketing mix modeling will provide clarity

Can you think of 10 marketers who believe GA4 is a good tool? Probably not. 

That alone speaks to how poorly Google handled the rollout. 

As a result, more clients are saying the same thing: GA4 does not align with ad platform data, neither source feels trustworthy, and a more “serious” solution must exist. 

More often than not, that path leads to higher costs and average results. 

Most brands simply do not have the spend, scale, or complexity required for MMM to produce meaningful insight. 

Instead of adding another layer of abstraction, they would be better served by learning to use the tools they already have.

For most brands, the setup looks familiar:

  • Media spend is concentrated across two or three channels at most – typically Google and Meta, with YouTube, LinkedIn, or TikTok as secondary options.
  • The business depends on a recurring but narrow customer base, which creates long-term fragility.
  • Outside that core audience, marketing is barely incremental, if incremental at all.

In those conditions, MMM does not add clarity. It adds abstraction. 

With such a limited channel mix, the focus should remain on fundamentals. 

The challenge is not modeling complexity, but identifying what is actually impactful. 

How to fix this

The priorities below deliver more value than MMM in these scenarios:

  • Differentiate clearly from competitors.
  • Increase margins – even basic budget planning can move the needle.
  • Build a solid data foundation, including tracking, CRO, and conversion pipelines.
  • Diversify channels or ad networks.
  • Lock creative execution to real customer pain points.
  • Fix marketing execution wherever it breaks.

MMM – like any advanced tool – becomes useful once complexity demands it. Not before. 

Used too early, it replaces accountability with abstraction, not insight.

The reality behind the myths

The common thread across these three myths is not AI, creative, or analytics. It is misuse. 

Platforms do exactly what they are asked to do. They optimize against the signals provided, within the constraints of budget and structure.

When business fundamentals break, AI cannot fix the problem. 

2026 is not about chasing the next abstraction. It is about refocusing on business and operational fundamentals, paired with disciplined execution, to scale profitably.


Why copywriting is the new superpower in 2026


For the last few years, copywriting has been quietly written off.

Not with outrage. Not with ceremony.

Just sidelined. Replaced. Automated.

Words – the core material of SEO, landing pages, ads, and persuasion – were demoted during the traffic rush and later the AI gold rush.

Blog posts were generated. Product descriptions were bulked out. Landing pages were templated.

Content teams shrank. Freelancers disappeared. And a convenient narrative emerged to justify it all:

“AI can write now, so writing doesn’t matter anymore.”

Then Google made it worse.

The helpful content update, followed by AI Overviews and conversational search, didn’t just hurt SEO. It hurt the broader web.

It gutted an entire economy built on informational arbitrage – niche blogs, affiliate sites, ad-funded publishers, and content-led SEO businesses that had learned how to monetize curiosity at scale.

Now, large language models are finishing the job. Informational queries are answered directly in search. The click is increasingly optional. Traffic is evaporating.

So yes, on the surface, it sounds mad to say this:

Copywriting is once again becoming the most important skill in digital marketing.

But only if you confuse copywriting with the thing that just died.

AI didn’t kill copywriting

What AI destroyed was not persuasion. 

It destroyed low-grade informational publishing – content that existed to intercept search demand, not to change decisions.

  • “How to” posts.
  • “Best tools for” roundups.
  • Explainers written for algorithms, not people.

LLMs are exceptionally good at this kind of work because it never required judgment. It required:

  • Synthesis. 
  • Summarization. 
  • Pattern matching. 
  • Compression.

That’s exactly what LLMs do best.

This content was designed to intercept purchase decisions by giving users something else to click before buying, often in the hope that a cookie would record the stop and credit the page with “influencing” the buyer journey.

That influence was rewarded either through analytics for the SEO team or through an affiliate’s bank account.

But persuasion – real persuasion – has never worked like that.

Persuasion requires:

  • A defined audience.
  • A clearly articulated problem.
  • A credible solution.
  • A deliberate attempt to influence choice.

Most SEO copy never attempted any of this. It aimed to rank, not to convert.

So when people say “AI killed copywriting,” what they really mean is this: AI exposed how little real copywriting was being done in the first place.

And that matters, because the environment we’re moving into makes persuasion more important, not less.

Dig deeper: SEO copywriting: 5 pillars for ranking and relevance

GEO isn’t about rankings

Traditional search engines forced users to translate their problems into keywords.

Someone didn’t search for “I’m an 18-year-old who’s just passed my test and needs insurance without being ripped off.” They typed [cheap car insurance] and hoped Google would serve the best results.

This created a monopoly in SEO. Those who could spend the most on links usually won once a semi-decent landing page was written.

It also created a sea of sameness, with most ranking websites saying exactly the same thing.

LLMs reverse this process. They:

  • Start with the problem.
  • Understand context, constraints, and intent. 
  • Decide which suppliers are most relevant.

That distinction is everything.

LLMs are not ranking pages. Instead, they seek and select the best solutions to solve users’ problems.

And selection depends on one thing above all else – positioning.

Not “position on Google,” but strategic positioning.

  • Who are you for?
  • What problem do you solve?
  • Why are you a better or different choice than the alternatives?

If an LLM cannot clearly answer those questions from your website and third-party information, you will not be recommended, no matter how many backlinks you have or how “authoritative” your content once looked.

This is why copywriting suddenly sits at the center of SEO’s future.

Dig deeper: The new SEO imperative: Building your brand

From SEO to GEO: Availability beats visibility

Search engine optimization was about visibility.

Generative engine optimization is about AI availability.

Availability means increasing the likelihood that your business will be surfaced in a buying situation.

That depends on whether your relevance is legible.

Most businesses still describe themselves in static, categorical terms:

  • “We’re an SEO agency in Manchester.”
  • “We’re solicitors in London.”
  • “We’re an insurance provider.”

These descriptions tell you what the business is. 

They do not tell you what problem it solves or for whom it solves that problem. They are catchall descriptors for a world where humans use search engines.

This is where most companies miss the opportunity in front of them.

The vast majority of “it’s just SEO” advice centers on entities and semantics. 

The tactics suggested for AI SEO are largely the same as traditional SEO: 

  • Create a topical map.
  • Publish topical content at scale.
  • Build links.

This is why many SEOs have defaulted to the “it’s just SEO” position.

If your lens is meaning, topics, context, and relationships, everything looks like SEO.

In contrast, the world in which copywriters and PRs operate looks very different.

Copywriters and PRs think in terms of problems, solutions, and sales.

All of this stems from brand positioning.

Positioning is not a fixed asset

A strategic position is a viable combination of:

  • Who you target.
  • What you offer.
  • How your product or service delivers it.

Change any one of those, and you have a new position.

Most firms treat their current position as fixed. 

They accept the rules of the category and pour their effort into incremental improvement, competing with the same rivals, for the same customers, in the same way.

LLMs quietly remove that constraint.

If you genuinely solve problems – and most established businesses do – there is no reason to limit yourself to a single inherited position simply because that’s how the category has historically been defined.

No position remains unique forever. Competitors copy attractive positions relentlessly. 

The only sustainable advantage is the ability to continually identify and colonize new ones.

This doesn’t mean becoming everything to everyone. Overextension dilutes brands.

It means being honest and explicit about the problems you already solve well.

This is something copywriters understand well. 

A good business or marketing strategist can help uncover new positions in the market, and a good copywriter can help articulate them on landing pages.

This is a key shift from semantic SEO to GEO.

You want LLMs to recommend your business to solve those problems.



From SEO’s ‘what we are’ to GEO’s ‘what problem we solve’

Take insurance as a simple example.

A large insurer may technically offer “car insurance.” But the problems faced by:

  • An 18-year-old new driver.
  • A parent insuring a second family car.
  • A courier using a vehicle for work.

…are completely different.

Historically, these distinctions were collapsed into broad keywords because that’s how search worked. 

LLMs don’t behave like that. They start with the user problem to be solved.

If you are well placed to solve a specific use case, it makes strategic sense to articulate that explicitly, even if no one ever typed that exact phrase into Google.

A helpful way to think about this is as a padlock.

Your business can be unlocked by many different combinations. 

Each combination represents a different problem, for a different person, solved in a particular way.

If you advertise only one combination, you artificially restrict your AI availability.

Have you ever had a customer say, “We didn’t know you offered that?”

Now you have the chance to serve more people as individuals.

Essentially, this makes one business suitable for more problems.

You aren’t just a solicitor in Manchester.

You’re a solicitor who solves X by Y.

You’re a solicitor for X with a Y problem.

The list could be endless.
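
To make the padlock concrete: treat each position as an (audience, problem, mechanism) combination and enumerate them. The names below are invented for illustration, not a framework:

```python
from itertools import product

audiences  = ["startup founders", "landlords", "contractors"]
problems   = ["disputed invoices", "unfair contract terms"]
mechanisms = ["fixed-fee contract review", "no-win-no-fee litigation"]

# Each (audience, problem, mechanism) tuple is a distinct position -
# a candidate page an LLM could match to a specific user problem.
for audience, problem, mechanism in product(audiences, problems, mechanisms):
    print(f"A solicitor for {audience} with {problem}, solved via {mechanism}.")

# 3 x 2 x 2 = 12 articulable positions from one business.
```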

Why copywriting becomes infrastructure again

This is where copywriting returns to its original job.

Good copywriting has always been about creating a direct relationship with a prospect, framing the problem correctly, intensifying it, and making the case that you are the best place to solve it.

That logic hasn’t changed.

What has changed is that the audience has expanded.

You now have to persuade:

  • A human decision-maker.
  • An LLM acting as a recommender.

Both require the same thing: clarity.

You must be explicit about:

  • The problem you solve.
  • Who you solve it for.
  • How you solve it.
  • Why your solution works.

You must also support those claims with evidence.

This is not new thinking. It comes straight out of classic direct marketing.

Drayton Bird defined direct marketing as the creation and exploitation of a direct relationship between you and an individual prospect. 

Eugene Schwartz spent his career explaining that persuasion is not accidental – benefits must be clear, claims must be demonstrated, and relevance must be immediate.

The web environment made it possible to forget these fundamentals for a while.

AI brings them back.

Dig deeper: Why ‘it’s just SEO’ misses the mark in the era of AI SEO

Less traffic doesn’t mean less performance

Traffic is going to fall.

Informational traffic is being stripped out of the system.

Traffic only became a problem when it stopped being a measure and became a target. 

Once that happened, it ceased to be useful. Volume replaced outcomes. Movement replaced progress.

In an AI-mediated world, fewer clicks do not mean less opportunity.

It means less irrelevant traffic.

When GEO and positioning-led copy work, you see:

  • Traffic landing on revenue-generating pages.
  • Brand-page visits from pre-qualified prospects.
  • Fewer exploratory visits and more decisive ones.

No one can buy from you if they never reach your site. Traffic still matters, but only traffic with intent.

In this environment, traffic stops being a vanity metric and becomes meaningful again.

Every click has a purpose.

What measurement looks like now

The North Star is no longer sessions. It is commercial interaction.

The questions that matter are:

  • How many clicks did we get to revenue-driving pages this month versus last?
  • How many of those visits turned into real conversations?
  • Is branded demand increasing as our positioning becomes clearer?
  • Are lead quality and close rates improving, even as traffic falls?
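
As a hypothetical illustration, the first of those questions reduces to a month-over-month comparison once revenue-driving pages are tagged. The page set and numbers below are invented:

```python
REVENUE_PAGES = {"/pricing", "/demo", "/contact"}

# Illustrative analytics export: (month, page, clicks).
sessions = [
    {"month": "2025-12", "page": "/pricing",    "clicks": 420},
    {"month": "2025-12", "page": "/blog/guide", "clicks": 3100},
    {"month": "2026-01", "page": "/pricing",    "clicks": 510},
    {"month": "2026-01", "page": "/blog/guide", "clicks": 1900},
]

def revenue_clicks(month: str) -> int:
    return sum(s["clicks"] for s in sessions
               if s["month"] == month and s["page"] in REVENUE_PAGES)

last, this = revenue_clicks("2025-12"), revenue_clicks("2026-01")
print(f"Revenue-page clicks: {last} -> {this} ({(this - last) / last:+.0%})")
# Total traffic fell, but clicks to pages that make money rose 21%.
```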

Share of search still has relevance – particularly brand share – but it must be interpreted differently when the interface doesn’t always click through.

AI attribution is messy and imperfect. Anyone claiming otherwise is lying. But signals already exist:

  • Prospects saying, “ChatGPT recommended you.”
  • Sales calls referencing AI tools.
  • Brand searches rising without content expansion.
  • Direct traffic increasing alongside reduced informational content.

These are directional indicators. And they are enough.

The real shift SEO needs to make

For a decade, SEO rewarded people who were good at publishing.

The next decade will reward people who are good at positioning.

That means:

  • Fewer pages, but sharper ones.
  • Less information, more persuasion.
  • Fewer visitors, higher intent.

It means treating your website not as a library, but as a set of sales letters, each one earning its place by clearly solving a problem for a defined audience.

This is not the death of SEO.

SEO is growing up.

The reality nobody wants, but everyone needs

Copywriting didn’t die.

Those spending a fortune on Facebook ads embraced copywriting. Those selling SEO went down the route of traffic chasing.

The two worlds had different values.

  • The ad crowd embraced copy.
  • The SEO crowd disowned it.

One valued conversion. The other valued traffic.

We are entering a world with less traffic, fewer clicks, and an intelligent intermediary between you and the buyer.

That makes clarity a weapon. That makes good copy a weapon.

In 2026, the brands that win will not be the ones with the most content.

They will be the brands that return to the basics of good copy and PR.

The information era of SEO is over.

It’s time to get back to marketing.
