How brands can respond to misleading Google AI Overviews


Google’s AI Overviews feature has become the face of search engine results.

Type almost any question into your Google search bar, and the first answer you receive will be AI generated.

Many are thrilled about this. Others are wary.

Marketers and those in the online reputation management (ORM) field are among those urging caution.

Why? Because Google AI Overviews are often littered with information stemming from online forums like Reddit and Quora. 

And oftentimes, this user-generated content can be inaccurate — or entirely false. 

Why Google AI Overviews heavily rely on content from Reddit and Quora

But how and why have Google AI Overviews come to rely on user-generated content forums?

The answer is quite simple. Google AI Overviews sources much of its information from “high-authority” domains. These happen to be platforms like Reddit and Quora.

Google also prioritizes “conversational content” and “real user experiences.” They want searchers to receive answers firsthand from other online humans.

Furthermore, Google places the same amount of weight on these firsthand anecdotes as it does on factual reporting. 

How negative threads end up on AI summaries

Obviously, the emphasis placed on Reddit and Quora threads can lead to issues, especially for professionals and those leading product- or service-driven organizations.

Many of the Reddit threads that rise to the surface are those that are complaint-driven. Think of threads where users are asking, “Does Brand X actually suck?” or “Is Brand Z actually a scam?”

The main problem is that these threads become extremely popular. AI Overviews gather the consensus of many comments and combine them into a single resounding answer. 

In essence, minority opinions end up being represented as fact.

Additionally, Google AI Overviews often resurface old threads that lack timestamps. This can lead to the resurfacing of outdated, often inaccurate information. 

Patterns that SEO, ORM, and brands are noticing

Those in the ORM field have been noticing troubling patterns in Google AI Overviews for a while now. For instance, we’ve identified the following trends:

  • Overwhelming Reddit criticism: Criticism on Reddit rises to the top at alarming rates. Google AI Overviews even seem to ignore official responses from brands at times, instead opting for the opinions of users on forum platforms.
  • Pros vs. cons summaries: These sorts of lists are supposed to convey balance. (Isn’t that the entire point of identifying both the pros and cons of a brand?) However, sites like Reddit and Quora tend to accentuate the negative aspects of brands, at times ignoring the pros altogether. 
  • Outdated content resurfacing: As mentioned in the previous section, outdated content can carry far too much weight. A troubling number of “resolved issues” gain prominence in the Google AI Overviews feature.

The amplification effect: AI can turn opinion into fact

We live in an era defined by instantaneous knowledge.

Gen Z takes in information at startling rates. What’s seen on TikTok is absorbed as immediate fact. Instagram is where many turn to get both breaking news and updates on the latest brands.

This has led to an amplification effect, where algorithms quickly turn opinion into fact. We’re seeing it widely across social media, and now on Google AI Overviews, too.

On top of what we listed in the previous section, those in the ORM realm are noticing the following take effect:

  • Nuance-less summarization: Because AI Overviews absorb such overwhelmingly negative criticism from Reddit, we’re getting less nuanced responses. The focus in AI Overviews is often one-sided and seemingly biased, featuring emotional, extreme language. 
  • Feedback loops: As others in the ORM field have pointed out, many citations in AI Overviews come from deep pages. It’s also common to see feedback loops wherein one negative Reddit thread can hold multiple citations, leading to quick AI validation.
  • Enhanced trust in AI Overviews: Perhaps most troubling of all has been society’s immediate readiness to accept AI Overviews and all the answers it has to offer. Many users now turn to Google’s feature as their ultimate encyclopedia — without even caring to view the citations AI Overviews has listed. 

Misinformation and bias create risk

All in all, the rise of information from Reddit and Quora on AI Overviews has led to enhanced risk for businesses and entrepreneurs alike.

False statements and defamatory claims posted online can be accepted as fact. And incomplete narratives or opinion-based criticism floating around on forums are filtered through the lens of AI Overviews.

Making matters worse is that Google does not automatically remove or filter AI summaries that are linked to harmful content. 

This can be damaging to a company’s reputation, as users absorb what they see on AI Overviews at face value. They take it as fact, even though it might be fiction.

Building a reputation strategy for false AI-driven searches

As a business owner, it’s critical to have response strategies in place for Google AI Overviews. 

Working with an ORM team is a critical first step. They might suggest the following measures:

  • Monitoring online forums: Yes, our modern world dictates that you stay on top of online forums like Reddit and Quora. Monitor the name of your business and the top players on your team. If you’re aware of the dialogue, you’re already one step ahead.
  • Creating “AI-readable” content: It’s also important to always be creating content designed to land on AI Overviews. This content should boost your platform on search engines, be citation-worthy, and push down less favorable results.
  • Addressing known criticism: Ever notice criticism directed at your brand? Seek to address it with proper business practices. Respond to online reviews kindly, suppress or remove negative content with your ORM team, and establish your business as a caring practice online.
  • Coordinating various teams: It’s imperative to establish the right teams around your business. We already mentioned ORM, but what about your legal, SEO, and PR teams? Have the right experts in place to deal with any controversies before they arise.

Also, remember to keep an eye on the future. Online reputation management is constantly evolving, and if your intention is to manage and elevate your brand, you must evolve with the times.

That means staying up-to-date with AI literacy and adapting to new KPIs, including sentiment framing, source attribution, and AI visibility. 

Staying on top of Google AI Overviews

We live in a new age. One where AI Overviews dictates much of what searchers think and react to.

And the honest truth is that much of the knowledge AI Overviews gleans comes from user-dominated forums like Reddit and Quora.

As a brand manager, you can no longer be idle. You have to act. You have to manage the sources that Google AI Overviews summarizes, constantly staying one step ahead.

If you don’t, then you’re not properly managing your search reputation. 


What 107,000 pages reveal about Core Web Vitals and AI search


As AI-led search becomes a real driver of discovery, an old assumption is back with new urgency. If AI systems infer quality from user experience, and Core Web Vitals (CWV) are Google’s most visible proxy for experience, then strong CWV performance should correlate with strong AI visibility.

The logic makes sense.

Faster page load times result in smoother experiences, increased user engagement, improved signals, and AI systems that reward the outcome (supposedly).

But logic is not evidence.

To test this properly, I analysed 107,352 webpages that appear prominently in Google AI Overviews and AI Mode, examining the distribution of Core Web Vitals at the page level and comparing them against patterns of performance in AI-driven search and answer systems. 

The aim was not to confirm whether performance “matters”, but to understand how it matters, where it matters, and whether it meaningfully differentiates in an AI context.

What emerged was not a simple yes or no, but a more nuanced conclusion that challenges prevailing assumptions about how teams should prioritise technical optimisation in the AI era.

Why distributions matter more than scores

Most Core Web Vitals reporting is built around thresholds and averages. Pages pass or fail. Sites are summarized with mean scores. Dashboards reduce thousands of URLs into a single number.

The first step in this analysis was to step away from that framing entirely.

When Largest Contentful Paint was visualized as a distribution, the pattern was immediately clear. The dataset exhibited a heavy right skew. 

Median LCP values clustered in a broadly acceptable range, while a long tail of extreme outliers extended far beyond it. A relatively small proportion of pages were horrendously slow, but they exerted a disproportionate influence on the average.

Cumulative Layout Shift showed a similar issue. The majority of pages recorded near-zero CLS, while a small minority exhibited severe instability. 

Again, the mean suggested a site-wide problem that did not reflect the lived reality of most pages.
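
The skew effect described above is easy to demonstrate. This is a minimal sketch with hypothetical LCP values, not the study’s dataset: most pages sit around two seconds, and a small tail of very slow outliers inflates the mean while the median stays representative.

```python
from statistics import mean, median

# Hypothetical LCP samples in seconds: eight typical pages plus
# two extreme outliers that drag the mean upward.
lcp = [2.1, 2.3, 1.9, 2.4, 2.0, 2.2, 1.8, 2.5, 14.0, 21.0]

print(f"mean LCP:   {mean(lcp):.2f}s")   # 5.22s — inflated by the tail
print(f"median LCP: {median(lcp):.2f}s") # 2.25s — reflects the typical page
```

Reporting the 5.22s mean would suggest a site-wide crisis; the 2.25s median shows most pages are fine, which is why the analysis works with distributions rather than averages.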

This matters because AI systems do not reason over averages, if they reason on user engagement metrics at all. 

They evaluate individual documents, templates, and passages of content. A site-wide CWV score is an abstraction created for reporting convenience, not a signal consumed by an AI model.

Before correlation can even be discussed, one thing becomes clear. Core Web Vitals are not a single signal; they are a distribution of behaviors across a mixed population of pages.

Correlations

Because the data was uneven and not normally distributed, a standard Pearson correlation was not suitable. Instead, I used a Spearman rank correlation, which assesses whether higher-ranking pages on one measure also tend to rank higher or lower on another, without assuming a linear relationship.
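
The rank-based approach can be sketched in pure Python (in practice, `scipy.stats.spearmanr` computes the same statistic). The data below is illustrative only, assumed for the example, and shows how a negative coefficient emerges when slower pages tend to rank lower on a visibility measure.

```python
def ranks(xs):
    # Assign 1-based ranks, averaging the rank across ties
    # (the tie handling Spearman's coefficient requires).
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of tied positions, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the rank vectors.
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: page LCP in seconds vs. an AI-visibility score.
lcp_s      = [1.8, 2.1, 2.4, 3.0, 9.5]
visibility = [0.8, 0.9, 0.7, 0.6, 0.2]
rho = spearman(lcp_s, visibility)
print(f"Spearman rho: {rho:.2f}")  # negative: slower pages tend to be less visible
```

Because only ranks matter, the 9.5s outlier does not dominate the result the way it would under a linear (Pearson) correlation, which is exactly why the method suits this heavily skewed dataset.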

This matters because, if Core Web Vitals were closely linked to AI performance, pages that perform better on CWV would also tend to perform better in AI visibility, even if the link was weak.

I found a small negative relationship. It was present, but limited. For Largest Contentful Paint, the correlation ranged from -0.12 to -0.18, depending on how AI visibility was measured. For Cumulative Layout Shift, it was weaker again, typically between -0.05 and -0.09.

These relationships are visible when you look at large volumes of data, but they are not strong in practical terms. Crucially, they do not suggest that faster or more stable pages are consistently more visible in AI systems. Instead, they point to a more subtle pattern.

The absence of upside, and the presence of downside

The data do not support the claim that improving Core Web Vitals beyond basic thresholds improves AI performance. Pages with good CWV scores did not reliably outperform their peers in AI inclusion, citation, or retrieval.

However, the negative correlation is instructive.

Pages sitting in the extreme tail of CWV performance, particularly for LCP, were far less likely to perform well in AI contexts. 

These pages tended to exhibit lower engagement, higher abandonment, and weaker behavioral reinforcement signals. Those second-order effects are precisely the kinds of signals AI systems rely on, directly or indirectly, when learning what to trust.

This reveals the true shape of the relationship.

Core Web Vitals do not act as a growth lever for AI visibility. They act as a constraint.

Good performance does not create an advantage. Severe failure creates disadvantage.

This distinction is easy to miss if you examine only pass rates or averages. It becomes apparent when examining distributions and rank-based relationships.

Why ‘passing CWV’ is not a differentiator

One reason the positive correlation many expect does not appear is simple. Passing Core Web Vitals is no longer rare.

In this dataset, the majority of pages already met recommended thresholds, especially for CLS. When most of the population clears a bar, clearing it does not distinguish you. It merely keeps you in contention.

AI systems are not selecting between pages because one loads in 1.8 seconds and another in 2.3 seconds. They are selecting between pages because one explains a concept clearly, aligns with established sources, and satisfies the user’s intent, whereas the other does not.

Core Web Vitals ensure that the experience does not actively undermine those qualities. They do not substitute for them.

Reframing the role of Core Web Vitals in AI strategy

The implication is not that Core Web Vitals are unimportant. It is that their role has been misunderstood.

In an AI-led search environment, Core Web Vitals function as a risk-management tool, not a competitive strategy. They prevent pages from falling out of contention due to poor experience signals.

This reframing has practical consequences for developing an AI visibility strategy.

Chasing incremental CWV gains across already acceptable pages is unlikely to deliver returns in AI visibility. It consumes engineering effort without changing the underlying selection logic AI systems apply.

Targeting the extreme tail, however, does matter. Pages with severely poor performance generate negative behavioral signals that can suppress trust, reduce reuse, and weaken downstream learning signals.

The objective is not to make everything perfect. It is to ensure that the content you want AI systems to rely on is not compromised by avoidable technical failure.

Why this matters

As AI systems increasingly mediate discovery, brands are seeking controllable levers. Core Web Vitals feel attractive because they are measurable, familiar, and actionable.

The risk is mistaking measurability for impact.

This analysis suggests a more disciplined approach. Treat Core Web Vitals as table stakes. Eliminate extreme failures. 

Protect your most important content from technical debt. Then shift focus back to the factors AI systems actually use to infer value, such as clarity, consistency, intent alignment, and behavioral validation.

Core Web Vitals: A gatekeeper, not a differentiator

Based on an analysis of 107,352 AI visible webpages, the relationship between Core Web Vitals and AI performance is real, but limited.

There is no strong positive correlation. Improving CWV beyond baseline thresholds does not reliably improve AI visibility.

However, a measurable negative relationship exists at the extremes. Severe performance failures are associated with poorer AI outcomes, mediated through user behavior and engagement.

Core Web Vitals are therefore best understood as a gate, not a signal of excellence.

In an AI-led search landscape, this clarity matters.


7 Marketing AI Adoption Challenges (And How to Fix Them)

You’ve likely invested in AI tools for your marketing team, or at least encouraged people to experiment.

Some use the tools daily. Others avoid them. A few test them quietly on the side.

This inconsistency creates a problem.

An MIT study found that 95% of AI pilots fail to show measurable ROI.

Scattered marketing AI adoption doesn’t translate to proven time savings, higher output, or revenue growth.

AI usage ≠ AI adoption ≠ effective AI adoption.

To get real results, your whole team needs to use AI systematically with clear guidelines and documented outcomes.

But getting there requires removing common roadblocks.

In this guide, I’ll explain seven marketing AI adoption challenges and how to overcome them. By the end, you’ll know how to successfully roll out AI across your team.

Free roadmap: I created a companion AI adoption roadmap with step-by-step tasks and timeframes to help you execute your pilot. Download it now.


First up: One of the biggest barriers to AI adoption — lack of clarity on when and how to use it.

1. No Clear AI Use Cases to Guide Your Team

Companies often mandate AI usage but provide limited guidance on which tasks it should handle.

In my experience, this is one of the most common AI adoption challenges teams face, regardless of industry or company size.

Reddit – r/antiwork – AI usage

Vague directives like “use AI more” leave people guessing.

The solution is to connect tasks to tools so everyone knows exactly how AI fits into their workflow.

The Fix: Map Team Member Tasks to Your Tech Stack

Start by gathering your marketing team for a working session.

Ask everyone to write down the tasks they perform daily or weekly. (Not job descriptions, but actual tasks they repeat regularly.)

Then look for patterns.

Which tasks are repetitive and time-consuming?

Common AI Use Cases for Marketing Teams

Maybe your content team realizes they spend four hours each week manually tracking competitor content to identify gaps and opportunities. That’s a clear AI use case.

Or your analytics lead notices they are wasting half a day consolidating campaign performance data from multiple regions into a single report.

AI tools can automatically pull and format that data.

Once your team has identified use cases, match each task to the appropriate tool.

Task-to-Tool Decision

After your workshop, create assignments for each person based on what they identified in the session.

For example: “Automate competitor tracking with [specific tool].”

When your team knows exactly what to do, adoption becomes easier.

2. No Structured Plan to Roll Out AI Across the Organization

If you give AI tools to everyone at once, don’t be surprised if you get low adoption in return.

The issue isn’t your team or the technology. It’s launching without testing first.

The Fix: Start with a Pilot Program

A pilot program is a small-scale test where one team uses AI tools. You learn what works, fix problems, and prove value — before rolling it out to everyone else.

A company-wide launch doesn’t give you this learning period.

Everyone struggles with the same issues at once. And nobody knows if the problem is the tool, their approach, or both.

Which means you end up wasting months (and money) before realizing what went wrong.

Two Approaches to Marketing AI Adoption

Plan to run your pilot for 8-12 weeks.

Note: Your pilot timeline will vary by team.

Small teams can move fast and test in 4-8 weeks. Larger teams might need 3-4 months to gather enough feedback.

Start with three months as your baseline. Then adjust based on how quickly your team adapts.


Content, email, or social teams work best because they produce repetitive outputs that show AI’s immediate value.

Select 3-30 participants from this department, depending on your team size.

(Smaller teams might pilot with 3-5 people. Larger organizations can test with 20-30.)

Then, set measurable goals with clear targets you can track. Like:

  • Cut blog production time from 8 hours to 5 hours
  • Reduce email draft revisions from 3 rounds to 1
  • Create 50 social media posts weekly instead of 20

Schedule weekly meetings to gather feedback throughout the pilot.

The pilot will produce department-specific workflows. But you’ll also discover what transfers: which training methods work, where people struggle, and what governance rules you need.

When you expand to other departments, they’ll adapt these frameworks to their own AI tasks.

After three months, you’ll have proven results and trained users who can teach the next group.

3-Month Pilot

At that point, expand the pilot to your second department (or next batch of the same team).

They’ll learn from the first group’s mistakes and scale faster because you’ve already solved common problems.

Pro tip: Keep refining throughout the pilot.

  • Update prompts when they produce poor results
  • Add new tools when you find workflow gaps
  • Remove friction points the moment they appear


Your third batch will move even quicker.

Within a year, you’ll have organization-wide marketing AI adoption with measurable results.

3. Your Team Lacks the Training to Use AI Confidently

Most marketing teams roll out AI tools without training team members how to use them.

In fact, only 39% of people who use AI at work have received any training from their company.

61% of workers who use AI at work received no training from their company

And when training does exist, it might focus on generic AI concepts rather than specific job applications.

The answer is better training that connects to the work your team does.

The Fix: Role-Specific Training

Generic training explains how AI works. Role-specific training shows people how to use AI in their actual jobs.

Here’s the difference:

  • Social Media Manager. Lower priority: AI concepts and how large language models work. Start here: how to automate content calendars and schedule posts faster.
  • SEO Specialist. Lower priority: understanding neural networks and machine learning. Start here: AI-powered keyword research and competitor analysis.
  • Email Marketer. Lower priority: machine learning algorithms and data processing. Start here: using AI for personalization and subject line testing.
  • Content Writer. Lower priority: how AI models generate text and natural language processing. Start here: using AI to research topics, create outlines, and edit drafts.
  • Paid Ads Manager. Lower priority: deep learning fundamentals and algorithmic optimization. Start here: AI tools for ad copy testing, audience targeting, and bid management.

When training connects directly to someone’s daily tasks, they actually use what they learn.

For example, Mastercard applies this approach with three types of training:

  • Foundational knowledge for everyone
  • Job-specific applications for different roles
  • Reskilling programs where needed.

Mastercard – Putting the "I" in AI

Companies like KPMG, Accenture, and IKEA have also developed dedicated AI training programs for their teams.

This is likely because they learned that generic training creates enterprise AI adoption challenges at scale.

Employees complete courses but never apply what they learned to their actual work.

Ikea – AI training programs for their teams

But you don’t need enterprise-scale resources to make this work.

Start by mapping what each role actually does with AI.

For example:

  • Your content team uses AI for research, strategy, outlines, and drafts
  • Your ABM team uses it for account research and personalized outreach
  • Your social team uses it for video creation and caption variations
  • Your marketing ops team uses it for workflow automation and data integration

Once you know what each role needs, pick your training approach.

Platforms like Coursera and LinkedIn Learning offer specific AI training programs that work well for flexible, self-paced learning.

Coursera – GenAI for PR Specialists

Training may also be available from your existing tools.

Check whether your current marketing platforms offer AI training resources, such as courses or documentation.

For example, Semrush Academy offers various training programs that also cover its AI capabilities.

Semrush Academy – AI Courses

For teams with highly specific workflows, external trainers can be useful.

This costs more. But it delivers the most relevant results because the trainer focuses only on what your team actually needs to learn.

For example, companies like Section offer AI adoption programs for enterprises, including coaching and custom workshops.

Sectionai – Homepage

But keep in mind that training alone won’t sustain marketing AI adoption.

AI tools evolve constantly, and your team needs continuous support to adapt.

Create these support systems:

  • Set up a dedicated Slack channel for AI questions where your team can share wins and troubleshoot problems
  • Run weekly Q&A sessions where people discuss specific challenges
  • Update training materials as new features and use cases emerge

4. Team Members Fear AI Will Replace Their Roles

Employees may resist AI marketing adoption because they fear losing their jobs to automation.

Headlines about AI replacing workers don’t help.

Forbes – AI Is Killing Marketing

Your goal is to address these fears directly rather than dismissing them.

The Fix: Have Honest Conversations About Job Security

Meet with each team member and walk through how AI affects their workflow.

Point out which repetitive tasks AI will automate. Then explain what they’ll work on with that freed-up time.

Be careful about the language you use. Be empathetic and reassuring.

For example, don’t say “AI makes you more strategic.”

Say: “AI will pull performance reports automatically. You’ll analyze the insights, identify opportunities, and make strategic decisions on budget allocation.”

One is vague. The other shows them exactly how their role evolves.

How to Address AI Fears With Your Team

Don’t just spring changes on your team. Give them a clear timeline.

Explain when AI tools will roll out, when training starts, and when you expect them to start using the new workflows.

For example: “We’re implementing AI for competitor tracking in Q2. Training happens in March. By April, this becomes part of your weekly process.”

When people know what’s coming and when, they have time to prepare instead of panicking.

Sample Timeline

Pro tip: Let people choose which AI features align with their interests and work style.

Some team members might gravitate toward AI for content creation. Others prefer using it for data analysis or reporting.

When people have autonomy over which features they adopt first, resistance decreases. They’re exploring tools that genuinely interest them rather than following mandates.


5. Your Team Resists AI-Driven Workflow Changes

People resist AI when it disrupts their established workflows.

Your team has spent years perfecting their processes. AI represents change, even when the benefits are obvious.

Resistance gets stronger when organizations mandate AI usage without considering how people actually work.

Reddit – Why AI

New platforms can be especially intimidating.

It means new logins, new interfaces, and completely new workflows to learn.

Rather than forcing everyone to change their workflows at once, let a few team members test the new approach first using familiar tools.

The Fix: Start with AI Features in Existing Tools

Your team likely already uses HubSpot, Google Ads, Adobe, or similar platforms daily.

When you use AI within existing tools, your team learns new capabilities without learning an entirely new system.

If you’re running a pilot program, designate 2-3 participants as AI champions.

Their role goes beyond testing — they actively share what they’re learning with the broader team.

What Do AI Champions Do

The AI champions should be naturally curious about new tools and respected by their colleagues (not just the most senior people).

Have them share what they discover in a team Slack channel or during standups:

  • Specific tasks that are now faster or easier
  • What surprised them (good or bad)
  • Tips or advice on how others can use the tool effectively

When others see real examples, such as “I used Social Content AI to create 10 LinkedIn posts in 20 minutes instead of 2 hours,” it carries more weight than reassurance from leadership.

Slack – Message

For example, if your team already uses a tool like Semrush, your champions can demonstrate how its AI features improve their workflows.

Keyword Magic Tool’s AI-powered Personal Keyword Difficulty (PKD%) score shows which keywords your site can realistically rank for — without requiring any manual research or analysis.

Keyword Magic Tool – Newsletter platform – PKD

AI Article Generator creates SEO-friendly drafts from keywords.

Your content writers can input a topic, set their brand voice, and get a structured first draft in minutes. This reduces the time spent staring at a blank page.

Semrush – AI Article Generator

Social Content AI handles the repetitive parts of social media planning. It generates post ideas, copy variations, and images.

Your social team can quickly build out a week’s content calendar instead of creating each post from scratch.

Semrush – Social Content AI Kit – Ideas by topic

Don’t have a Semrush subscription? Sign up now for a 14-day free trial plus a special 17% discount on annual plans.

6. No Governance or Guardrails to Keep AI Usage Safe

Without clear guidelines, your team may either avoid AI entirely or use it in ways that create risk.

In fact, 57% of enterprise employees input confidential data into AI tools.

Types of Sensitive Data Employees Input Into AI Tools

They paste customer data into ChatGPT without realizing it violates data policies.

Or publish AI-generated content without approval because the review process was never explained.

Your team needs clear guidelines on what’s allowed, what’s not, and who approves what.

Free AI policy template: Need help creating your company’s AI policy? Download our free AI Marketing Usage Policy template. Customize it with your team’s tools and workflows, and you’re ready to go.


The Fix: Create a One-Page AI Usage Policy

When creating your policy, keep it simple and accessible. Don’t create a 20-page document nobody will read.

Aim for 1-2 pages that are straightforward and easy to follow.

Include four key areas to keep AI usage both safe and productive.

  • Approved Tools: List which AI tools your team can use, both standalone tools and AI features in platforms you already use. Example: “Approved: ChatGPT, Claude, Semrush’s AI Article Generator, Adobe Firefly.”
  • Data Sharing Rules: Define specifically what data can and can’t be shared with AI tools. Example: “Safe to share: product descriptions, blog topics, competitor URLs. Never share: customer names, email addresses, revenue data, internal campaign plans, pricing strategies, unannounced product details.”
  • Review Requirements: Document who reviews what type of content before publication. Example: “Social posts: peer review. Blog posts: content lead approval. Legal/compliance content: legal team review.”
  • Approval Workflows (optional): Clarify who approves AI content at each stage. Example: “Internal drafts: content team. Customer-facing materials: marketing director. Compliance-related content: legal sign-off.”

Beyond documenting the rules, establish who team members should contact when they encounter situations the policy doesn’t address.

Designate a department lead, governance contact, or weekly office hours as the escalation point for:

  • Scenarios not covered in your guidelines
  • Technical site issues with approved AI tools
  • Concerns about whether AI-generated content is accurate or appropriate
  • Questions about data sharing

Marketing AI Escalation Process

The goal is to give them a clear path to get help, rather than guessing or avoiding AI altogether.

Then, post the policy where your team will see it.

This might be your Slack workspace, project management tool, or a pinned document in your shared drive.

AI Policy document

And treat it as a living document.

When the same question comes up multiple times, add the answer to your policy.

For example, if three people ask, “Can I use AI to write email subject lines?” update your policy to explicitly say yes (and clarify who reviews them before sending).

AI Governance Checklist

7. No Reliable Way to Measure AI’s Impact or ROI

Without clear proof that AI improves their results, team members may assume it’s just extra work and return to old methods.

And if leadership can’t see a measurable impact, they might question the investment.

This puts your entire AI program at risk.

Avoid this by establishing the right metrics before implementing AI.

The Fix: Track Business Metrics (Not Just Efficiency)

Here’s how to measure AI’s business impact properly.

Pick 2-3 metrics your leadership already reviews in reports or meetings.

These are typically:

  • Leads generated
  • Conversion rate
  • Revenue growth
  • Customer acquisition
  • Customer retention

Measure Marketing AI's Business Impact

These numbers demonstrate to your team and leadership that AI is helping your business.

Then, establish your baseline by recording your current numbers. (Do this before implementing AI tools.)

For example, if you’re tracking leads and conversion rate, write down:

  • Current monthly leads: 200
  • Current conversion rate: 3%

This baseline lets you show your team (and leadership) exactly what changed after implementing AI.

Pro tip: Avoid making multiple changes simultaneously during your pilot or initial rollout.

If you implement AI while also switching platforms or restructuring your team, you won’t know which change drove results.

Keep other variables stable so you can clearly attribute improvements to AI.


Once AI is in use, check your metrics monthly to see if they’re improving. Use the same tools you used to record your baseline.

Write down your current numbers next to your baseline numbers.

For example:

  • Baseline leads (before AI): 200 per month
  • Current leads (3 months into AI): 280 per month
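The before/after comparison is simple arithmetic. A minimal sketch using the example numbers above:

```python
# Baseline comparison from the example: 200 leads/month before AI,
# 280 leads/month three months into the rollout.
baseline_leads = 200
current_leads = 280

lift = current_leads - baseline_leads    # absolute monthly change
lift_pct = 100 * lift / baseline_leads   # percentage change vs. baseline

print(f"Leads up {lift} per month ({lift_pct:.0f}% vs. baseline)")
# Leads up 80 per month (40% vs. baseline)
```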

But don’t just check if numbers went up or down.

Look for patterns:

Did one specific campaign or content type perform better after using AI?

Are certain team members getting better results than others?

Track individual output alongside team metrics.

For example, compare how many blog posts each writer completes per week, or email open rates by the person who drafted them.

Email report overview page
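As a sketch (the names and open rates are made up), per-person averages can be computed from a simple export of (drafter, open rate) pairs:

```python
# Illustrative only: averaging email open rates per drafter so you can
# spot who to ask about their AI workflow. Data is hypothetical.
from collections import defaultdict

# (drafter, open_rate) pairs, e.g. pulled from your email platform's export
sends = [
    ("alice", 0.32), ("alice", 0.38),
    ("bob", 0.21), ("bob", 0.25),
]

totals = defaultdict(lambda: [0.0, 0])
for drafter, rate in sends:
    totals[drafter][0] += rate
    totals[drafter][1] += 1

avg_open_rate = {d: round(s / n, 2) for d, (s, n) in totals.items()}
print(avg_open_rate)  # {'alice': 0.35, 'bob': 0.23}
```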

If someone’s consistently performing better, ask them to share their AI workflow with the team.

This shows you what’s working, and helps the rest of your team improve.

Share results with both your team and leadership regularly.

When reporting, connect AI’s impact to the metrics you’ve been tracking.

For example:

Say: “AI cut email creation time from 4 hours to 2.5 hours. We used that time to run 30% more campaigns, which increased quarterly revenue from email by $5,000.”

Not: “We saved 90 hours with AI email tools.”

The first shows business impact — what you accomplished with the time saved. The second only shows time saved.

Other examples of how to frame your reporting include:

How to Report AI Results to Leadership

Build Your Marketing AI Adoption Strategy

When AI usage is optional, undefined, or unsupported, it stays fragmented.

Effective marketing AI adoption looks different.

It’s built on:

  • Role-specific training people actually use
  • Guardrails that reduce uncertainty and risk
  • Metrics that drive business outcomes

When those pieces are in place, AI becomes part of how work gets done.

If you want a step-by-step implementation plan, download our Marketing AI Adoption Roadmap.

Need help choosing which AI tools to pilot? Our AI Marketing Tools guide breaks down the best options by use case.

The post 7 Marketing AI Adoption Challenges (And How to Fix Them) appeared first on Backlinko.


SEO in 2026: Key predictions from Yoast experts

If there’s one takeaway as we look toward SEO in 2026, it’s that visibility is no longer just about ranking pages, but about being understood by increasingly selective AI-driven systems. In 2025, SEO proved it was not disappearing, but evolving, as search engines leaned more heavily on structure, authority, and trust to interpret content beyond the click. In this article, we share SEO predictions for 2026 from Yoast SEO experts, Alex Moss and Carolyn Shelby, highlighting the shifts that will shape how brands earn visibility across search and AI-powered discovery experiences.

Key takeaways

  • In 2026, SEO focuses on visibility defined by clarity, authority, and trust rather than just page rankings
  • Structured data becomes essential for eligibility in AI-driven search and shopping experiences
  • Editorial quality must meet machine readability standards, as AI evaluates content based on structure and clarity
  • Rankings remain important as indicators of authority, but visibility now also includes citations and brand sentiment
  • Brands should align their SEO strategies with social presence and aim for consistency across all platforms to enhance visibility

A brief recap of SEO in 2025: what actually changed?

2025 marked a clear shift in how SEO works. Visibility stopped being defined purely by pages and rankings and began to be shaped by how well search engines and AI systems could interpret content, brands, and intent across multiple surfaces. AI-generated summaries, richer SERP features, and alternative discovery experiences made it harder to rely solely on traditional metrics, while signals such as authority, trust, and structure played a larger role in determining what was surfaced and reused.

As we outlined in our SEO in 2025 wrap-up, the brands that performed best were those with strong foundations: clear content, credible signals, and structured information that search systems could confidently understand. That shift set the direction for what was to come next.

By the end of 2025, it was clear that SEO had entered a new phase, one shaped by interpretation rather than isolated optimizations. The SEO predictions for 2026 from Yoast experts build directly on this evolution.

2026 SEO predictions by Yoast experts

The SEO predictions for 2026 shared here come from our very own Principal SEOs at Yoast, Alex Moss and Carolyn Shelby. Built on the lessons SEO revealed in 2025, these predictions focus less on reacting to individual updates and more on how search and AI systems are evolving at a foundational level, and what that means for sustainable visibility going forward.

TL;DR

SEO in 2026 is about understanding how signals such as structure, authority, clarity, and trust are now interpreted across search engines, AI-powered experiences, and discovery platforms. Each prediction below explains what is changing, why it matters, and how brands can practically adapt in the coming year.

Prediction 1: Structured data shifts from ranking enhancer to retrieval qualifier

In 2026, structured data will no longer be a competitive advantage; it will become a baseline requirement. Search engines and AI systems increasingly rely on structured data as a layer of eligibility to determine whether content, products, and entities can be confidently retrieved, compared, or surfaced in AI-powered experiences.

For ecommerce brands, this shift is especially significant. Product information such as pricing, availability, shipping details, and merchant data is now critical for visibility in AI-driven shopping agents and comparison interfaces. At the enterprise level, the move toward canonical identifiers reflects a growing need to avoid misattribution and data decay across systems that reuse information at scale.

What this means in practice:

Brands without clean, comprehensive entity and product data will not rank lower. They will simply not appear in AI-driven shopping and comparison flows at all.

Also read: Optimizing ecommerce product variations for SEO and conversions

How to act on this:

Treat structured data as part of your SEO foundation, not an enhancement. Tools like Yoast SEO help standardize the implementation of structured data. The plugin’s structured data features make it easier to generate rich, meaningful schema markup, helping search engines better understand your site and take control of how your content is described.
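For illustration only (the product details are placeholders, and this is hand-rolled rather than anything a plugin generates), Product structured data is typically JSON-LD along these lines, following schema.org’s Product and Offer vocabulary:

```python
# Sketch of Product structured data (JSON-LD) of the kind AI shopping and
# comparison flows depend on. All field values below are placeholders.
import json

product_schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Trail Shoe",
    "sku": "EX-TRAIL-42",
    "brand": {"@type": "Brand", "name": "ExampleBrand"},
    "offers": {
        "@type": "Offer",
        "price": "89.95",
        "priceCurrency": "EUR",
        "availability": "https://schema.org/InStock",
    },
}

# Embed the output inside <script type="application/ld+json"> on the page.
print(json.dumps(product_schema, indent=2))
```

Keeping pricing, availability, and identifiers like SKU accurate in this markup is exactly the “clean, comprehensive entity and product data” the prediction describes.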

A smarter analysis in Yoast SEO Premium

Yoast SEO Premium has a smart content analysis that helps you take your content to the next level!

Get Yoast SEO Premium: only $118.80 / year (ex VAT)

Prediction 2: Agentic commerce becomes a visibility battleground, not a checkout feature

Agentic commerce marks a shift in how users discover and choose brands. Instead of browsing, comparing, and transacting manually, users increasingly rely on AI-driven agents to recommend, reorder, or select products and services on their behalf. In this environment, visibility is established before a checkout ever happens, often without a traditional search query.

This shift is becoming more concrete as search and commerce platforms move toward standardised ways for agents to understand and transact with merchants. Recent developments around agentic commerce protocols and Universal Commerce Protocol (UCP) highlight how AI systems are being designed to access product, pricing, availability, and merchant information more directly. As a result, platforms such as Shopify, Stripe, and WooCommerce are no longer just infrastructure. They increasingly act as distribution layers, where agent compatibility influences which brands are surfaced, recommended, or selected.

What this means in practice:

In 2026, SEO teams will be accountable for agent readiness in much the same way they were once accountable for mobile-first readiness. If agents cannot consistently interpret your brand, product data, or availability, they are more likely to default to competitors that they can understand with greater confidence.

How to act on this:

Focus on making your brand legible to automated decision systems. Ensure product information, pricing, availability, and supporting metadata are clear, structured, and consistent across your site and feeds. This is not about optimising for a single platform or protocol, but about reducing ambiguity so AI agents can accurately interpret and act on your information across emerging agent-driven discovery and commerce experiences.

Prediction 3: Editorial quality becomes a machine readability requirement

In 2026, editorial quality is no longer judged only by human readers. AI systems increasingly evaluate content based on how efficiently it can be parsed, summarized, cited, and reused. Verbosity, fluff, and circular explanations do not fail editorially. They fail functionally.

Content that is concise, clearly structured, and well-attributed has higher chances of performing well. Headings, lists, definitions, and tables directly influence how information is chunked and reused across AI-generated summaries and search experiences.

Must read: Why is summarizing essential for modern content?

What this means in practice:

“Helpful content” is being held to higher editorial standards. Content that cannot be summarized cleanly without losing meaning becomes less useful to AI systems, even if it remains readable to human audiences.

How to act on this:

Make editorial quality measurable and machine actionable. Utilize tools that assist you in aligning content with modern discoverability requirements. Yoast SEO Premium’s AI features, AI Generate, AI Optimize, and AI Summarize, help you assess and improve how content is structured and optimized, supporting both search engines and AI systems in understanding your intent.

Prediction 4: Rankings still matter, but as training signals, not endpoints

Despite ongoing speculation, rankings do not disappear in 2026. Instead, their role changes. AI agents and search systems continue to rely on top-ranked, trusted pages to understand authority, relevance, and consensus within a topic.

While rankings are no longer the final KPI, abandoning them entirely creates blind spots in understanding why certain brands are included or ignored in AI-driven experiences.

What this means in practice:

Teams that stop tracking rankings altogether risk losing insight into how authority is established and reinforced across search and AI systems.

How to act on this:

Continue to use rankings as diagnostic signals, but don’t treat them as the sole indicator of success in 2026. Alongside traditional performance metrics for SEO in 2026, look at how often your brand is mentioned, cited, or summarized in AI-generated answers and recommendations.

Tools like Yoast AI Brand Insights, available as part of Yoast SEO AI+, help surface these broader visibility signals by showing how your brand appears across AI platforms, including sentiment, citation patterns, and competitive context.

See how visible your brand is in AI search

Track mentions, sentiment, and AI visibility. With AI Brand Insights and Yoast SEO AI+, you can start monitoring and improving your performance.

Prediction 5: Brand sentiment becomes a core visibility signal

Brand sentiment increasingly influences how search engines and AI systems assess credibility and trust. Mentions, whether linked or unlinked, contribute to a broader understanding of how a brand is perceived across the web. AI systems synthesize signals from reviews, forums, social platforms, media coverage, and knowledge bases to form a composite view of legitimacy and expertise.

What makes this shift more impactful is amplification. Inconsistent messaging or negative sentiment is not smoothed out over time. Instead, it becomes more apparent when systems attempt to summarize, compare, or recommend brands across search and AI-driven experiences.

What this means in practice:

SEO, brand, PR, and social teams increasingly influence the same visibility signals. When these efforts are misaligned, credibility weakens. When they reinforce one another, trust becomes easier for systems to establish and maintain.

How to act on this:

Focus on consistency across owned, earned, and shared channels. Pay attention not only to where your brand ranks, but also to how it is discussed, described, and contextualized across various platforms. As discovery expands beyond traditional search results, reputation and narrative coherence become essential inputs into how brands are surfaced and understood.

Prediction 6: Multimodal optimization becomes baseline, not optional

Search behavior is no longer text-first. Images, video, audio, and transcripts now function as retrievable knowledge objects that feed both traditional search and AI-powered experiences. In particular, video platforms continue to influence how expertise and authority are understood at scale.

Platforms like YouTube function not only as discovery engines, but also as training corpora for AI systems learning how to interpret topics, brands, and creators.

What this means in practice:

Brands with strong written content but weak visual or video assets may appear incomplete or “thin” to AI systems, even if their articles are well-optimized.

How to act on this:

Treat multimodal content as part of your SEO foundation. Support written content with relevant visuals, video, and transcripts. Clear structure and readability remain essential, and tools like Yoast SEO help ensure your core content remains accessible and well-organized as it is reused across formats.

Prediction 7: Social platforms become secondary search indexes

Discovery will increasingly happen outside traditional search engines. Platforms such as TikTok, LinkedIn, Reddit, and niche communities now act as secondary search indexes where users validate expertise and intent.

AI systems reference these platforms to verify whether a brand’s claims, expertise, and messaging are substantiated in public discourse.

What this means in practice:

Presence alone is not enough. Inconsistent or unclear messaging across platforms weakens trust signals, while focused, repeatable narratives reinforce authority.

How to act on this:

Align your SEO strategy with social and community visibility to enhance your online presence. Ensure that your expertise, terminology, and positioning remain consistent across all discussions about your brand.

Must read: When AI gets your brand wrong: Real examples and how to fix it

Prediction 8: Email reasserts itself as the most controllable growth channel

As discovery fragments and platforms increasingly gate access to audiences, email regains importance as a high-signal, low-distortion channel. Unlike search or social platforms, email offers direct access to users without algorithmic mediation.

In 2026, email plays a supporting role in reinforcing authority, engagement, and intent signals, especially as AI systems evaluate how audiences interact with trusted sources over time.

What this means in practice:

Brands that underinvest in email become overly dependent on platforms they do not control, which increases volatility and reduces long-term resilience.

How to act on this:

Focus on relevance over volume. Segment audiences, align content with intent, and use email to reinforce expertise and trust, not just drive clicks.

Prediction 9: Authority outweighs freshness for most non-news queries

For non-news content, AI systems increasingly prioritize credible, historically consistent sources over frequent updates or constant publishing. Freshness still matters, but only when it meaningfully improves accuracy or relevance.

Long-standing domains with coherent narratives and well-maintained content benefit, provided their foundations remain clean and trustworthy.

What this means in practice:

Scaled/programmatic content strategies lose effectiveness. Publishing frequently without maintaining quality or consistency introduces noise rather than value.

How to act on this:

Invest in maintaining and improving existing content. Update thoughtfully, reinforce expertise, and ensure that your most important pages remain accurate, structured, and authoritative.

Prediction 10: SEO teams evolve into visibility and narrative stewards

In 2026, SEO will extend far beyond search engines. SEO teams are increasingly influencing how brands are perceived by both humans and machines across search, AI-generated answers, and discovery platforms.

Success is measured not only by traffic alone, but also by inclusion, citation, and trust. SEO becomes a strategic function that shapes how a brand is represented and understood.

What this means in practice:

SEO teams that focus solely on production or technical fixes risk losing influence as visibility becomes a cross-channel concern.

How to act on this:

Shift focus toward clarity, consistency, and long-term trust. The most effective teams help define how a brand is understood, not just how it ranks.

What SEO is no longer about in 2026 (misconceptions to discard)

As SEO evolves in 2026, many long-standing assumptions no longer reflect how search engines and AI-driven systems actually determine visibility. The table below contrasts common SEO myths with the realities shaped by recent changes and expert insights from Yoast.

Diminishing relevance | What actually matters in 2026
SEO is mainly about ranking pages | Rankings still matter, but they serve as signals for authority and relevance, rather than the final measure of visibility
Structured data is optional or a ranking boost | Structured data is now a baseline requirement for eligibility in AI-driven search, shopping, and comparison experiences
Publishing more content leads to better performance | Authority, clarity, and maintenance of fewer strong assets outperform high-volume publishing
Editorial quality is subjective | Content quality is increasingly evaluated by machines based on structure, clarity, and reusability
Brand reputation is a PR concern, not an SEO one | Brand sentiment directly influences how AI systems interpret, trust, and recommend brands
Search is still primarily text-based | Images, video, audio, and transcripts are now core retrievable knowledge objects
SEO can be measured only through traffic | Visibility spans AI answers, social platforms, agents, and citations, requiring broader performance signals

Looking ahead: what will shape SEO in 2026

The focus is no longer on isolated tactics or short-term wins, but on building visibility systems that search engines and AI platforms can reliably understand, trust, and reuse.

Clarity and interpretability matter more than clever optimization. Content, products, and brand narratives need to be easy for machines to interpret without ambiguity. Structured data has become foundational, not optional, determining whether brands are eligible to appear in AI-powered shopping, comparison, and answer-driven experiences.

Authority is built over time, not manufactured at scale. Search and AI systems increasingly favor sources with consistent, well-maintained narratives over those chasing volume. Visibility also extends beyond the SERP, spanning AI-generated answers, citations, recommendations, and cross-platform mentions, making it essential to look beyond traffic as the sole measure of success.

Finally, SEO in 2026 demands alignment. Brand, content, product, and platform signals all contribute to how systems interpret trust and relevance.

The post SEO in 2026: Key predictions from Yoast experts appeared first on Yoast.


How to choose a link building agency in the AI SEO era by uSERP

Remember when a handful of links from sites in your niche could drive steady organic traffic? That era is over.

Today, Google’s AI Overviews and the rise of answer engines like ChatGPT raise the bar. You have to do more to stay visible. Hiring an experienced link building agency is one efficient way to meet that challenge.

It’s also one of the most important investments you’ll make. The right partner doesn’t just build links. They position your brand as a trusted, cited source in the AI era.

So how do you choose the right agency for your company?

While the interface has changed, the core ranking signals remain largely the same. What’s changed is their priority.

LLMs need credible sources to ground their answers. That makes authoritative link building more important than ever.

This article shows you how to vet and choose a link building agency that understands these new priorities and can help your brand win trust in the AI-driven SEO landscape.

How link building and SEO are changing

Gartner has predicted that search engine volume will drop by 25% as AI takes over more answers. That makes working with an agency that understands AI SEO essential.

But how do you know which agencies actually do?

The real indicators are holistic authority and AI visibility. Only one in five links cited in Google’s AI Overviews matched a top-10 organic result, according to an Authoritas study. Even more telling, 62.1% of cited links or domains didn’t rank in the top 10 at all.

The takeaway is simple. AI systems and search engines don’t evaluate websites the same way. We’re no longer building links just for Google’s crawler.

Link equity alone isn’t enough. Sites need topical authority, brand mentions, and real market presence. The goal is to build a footprint that AI models recognize and can’t ignore.

The new criteria: Evaluating a link building agency for AI SEO

Choosing the right link building agency comes down to how well they prioritize the factors that matter now.

This section shows you what to look for.

Prioritizing quality, relevance, and traffic

I see this mistake all the time. A marketing director evaluates link quality based only on Domain Rating (DR).

High DR matters, but at uSERP, we know it’s not the finish line. You should also look for:

  • Relevance: A link from a DR 60, niche-specific site in your industry often beats a DR 80 general news site that covers everything from crypto to keto.
  • Minimum traffic standards: If a site doesn’t rank for keywords or attract real traffic, its links won’t help you rank. That’s why strict traffic minimums matter.

When vetting an agency, ask for contractual site-traffic guarantees.

A confident agency won’t hesitate to sign a Statement of Work that guarantees every link comes from a site with a minimum traffic threshold, such as 5,000+ monthly organic visitors.

If they won’t put traffic minimums in writing, they’re likely planning to place links on “ghost town” sites: domains that look strong on paper but lack a real audience. Placing links there protects the agency’s margins rather than supporting your growth.

Look for a content-driven approach and digital PR

Links don’t exist in a vacuum. The strongest ones come from being part of a real conversation.

The best agencies no longer operate like traditional link builders. They act more like content marketing and digital PR teams. 

Instead of asking for links, the best agencies create linkable assets — data studies, expert commentary, and in-depth guides that journalists and publishers want to cite — because they understand:

  • Google’s algorithms and AI models are continually getting better at identifying paid placements. A content-led approach keeps links natural, editorial, and valuable to readers.
  • Guest posting in the AI SEO era isn’t about a disposable 500-word article. It’s about thought leadership that positions your CEO as a credible expert.

At uSERP, for example, we created — and continuously update — our State of Backlinks for SEO report.

Red flags: Recognizing outdated or dangerous tactics

Choosing the wrong partner doesn’t just waste your budget. It puts your brand reputation — and potentially your company’s future — at risk.

Here are the biggest red flags to avoid when hiring an agency:

Guaranteed rankings

No one can guarantee a number-one ranking on Google. Any agency that promises specific keyword positions on a fixed timeline is likely doing one of two things:

  • Using risky, short-term tactics to force a temporary spike.
  • Selling you snake oil.

These agencies often rely on private blog networks (PBNs) or aggressive anchor text manipulation to manufacture fast results.

You might see an early jump, but the crash that follows—and the risk of a penalty when Google’s spam systems catch up—is never worth it.

Lack of transparency

If an agency won’t explain how they earn links or where placements will come from before you pay, walk away.

Reputable agencies are transparent. They’ll show real examples of past placements and share relevant case studies from your industry.

Agencies that hide their inventory usually do it for a reason. Those sites are often part of a low-quality network or link farm.

Self-serve link portfolios

If you’re a marketer or SEO on LinkedIn, chances are you’ve received a cold message selling links from a pre-built portfolio.

This is a common tactic among low-quality link builders: reselling backlinks from a shared inventory. I understand the appeal.

Strategic link acquisition is hard. Buying and flipping links is easy.

The problem — for you — is the footprint. If an agency can secure a link by filling out a form, anyone can. That includes casino affiliates, gambling sites, adult content, and outright scammers.

That’s not a natural link profile. Google has almost certainly already identified and burned those domains.

In the best case, you pay for a link that passes zero authority. In the worst case, Google flags your site as part of a link scheme.

Dirt-cheap packages

SEO and link building deliver incredible ROI, but they aren’t cheap.

You can’t buy a high-quality article with a real, earned link from an authoritative site for $50. Speaking as someone who runs an AI SEO agency, the true cost of quality content, editing, outreach, and relationship building is at least an order of magnitude higher.

That’s why cheap packages that promise multiple high-authority links are a major red flag. They almost always rely on:

  • Fully AI-generated, barely edited content.
  • Low-value link farms or resold inventory.
  • Toxic backlinks.

None of those will help you show up on AI search engines or Google.

Partnering with a link building agency for a sustainable market presence

Link building in the AI era is a long-term investment. It’s about building a durable market presence, not chasing quick wins.

The right partner sees themselves as an extension of your team. They care about:

  • Your backlink gap compared to competitors.
  • Your brand mentions across LLMs.
  • Your overall search and AI visibility.

They help you navigate content syndication, backlink audits, content marketing, and modern link building strategies with a unified approach.

If you’re ready to move past vanity metrics and start building authority that drives revenue and AI citations, it’s time to be selective about who you trust with your domain.

The right link building agency is out there. You just need to know how to spot them.


New: Track brand visibility in Gemini with Yoast AI Brand Insights

Yoast AI Brand Insights now lets you track how your brand appears in Google’s Gemini. You can see your Gemini data alongside ChatGPT and Perplexity, all in one dashboard. 

With the Yoast SEO AI+ plan, a single analysis shows you how different AI platforms describe your brand. You’ll see which sources they use and how sentiment compares across the tools your customers use most. 

Why this matters 

AI platforms use different methods to answer questions about your brand, often leading to different results. Seeing these results side-by-side helps you spot gaps or missed opportunities in your brand’s AI presence. 

  • ChatGPT is designed as a conversational assistant, focusing on natural dialogue and using multi-step reasoning to explain complex topics. 
  • Perplexity positions itself as an “answer engine”, emphasizing transparency by grounding every response in cited web sources. 
  • Gemini presents itself as a search-driven LLM, leveraging Google’s vast index to show how your brand appears in real-time search contexts.

As these tools frame your brand differently, from conversational reasoning to source-heavy citations, you need a single dashboard that covers them all, so you can see which sources they rely on and how their sentiment compares. 

What’s new 

You can now: 

  • Run brand visibility analyses in Gemini, in addition to ChatGPT and Perplexity. 
  • Compare results across all three platforms with the added benefit of a built-in historical view. 
  • Track brand mentions, sentiment, and citations in one place. 
  • Monitor changes over time in your AI Visibility Index. 

How to get started 

If you’re already using Yoast SEO AI+, nothing changes in how you work. Log in, and Gemini data will be included automatically in your next analysis at no extra cost. You can select the AI platform from the dropdown, and your dashboard will show a broader view of how your brand appears across AI search and chat. 

To upgrade

If you don’t yet have Yoast SEO AI+, you’ll need to upgrade to access the Yoast AI Brand Insights tool. The AI+ plan brings brand visibility tracking together with on-page SEO tools, content optimization, and AI-powered insights in one package, so you can analyze how your brand is mentioned and act from the same workflow. 

Upgrade to Yoast SEO AI+ to start scanning your brand across Gemini, ChatGPT, and Perplexity. 

The post New: Track brand visibility in Gemini with Yoast AI Brand Insights appeared first on Yoast.


News publishers expect search traffic to drop 43% by 2029: Report

News executives expect search referrals to drop by more than 40% over the next three years, as search engines continue evolving into AI-driven answer engines, according to a new Reuters Institute report. That shift is squeezing publisher traffic and accelerating a move away from classic SEO toward AEO and GEO.

Why we care. Google’s AI Overviews and chatbot-style search are changing how people get information, often without clicking through. SEO visibility, attribution, and ROI models built on old playbooks are breaking fast.

What’s happening. Publishers expect search traffic to nearly halve. Survey respondents forecast search engine traffic down 43% within three years, with a fifth of respondents expecting losses above 75%.

  • Google referrals are already falling. Chartbeat data cited in the report show organic Google search traffic down 33% globally from November 2024 to November 2025, and down 38% in the U.S. over the same period.
  • AI Overviews are a major factor. Google’s AI Overviews appear at the top of roughly 10% of U.S. search results, with studies showing higher zero-click behavior when they appear, according to the report.
  • The impact is uneven. Lifestyle and utility content (e.g., weather, TV guides, horoscopes) appears to be the most exposed, while hard news queries have been more insulated so far.

SEO to AEO and GEO. The Reuters Institute expects rapid growth in answer engine optimization (AEO) and generative engine optimization (GEO) as publishers and agencies adapt to AI-led interfaces.

  • AEO and GEO services are set to surge. Agencies are repurposing SEO playbooks for chatbots and overview boxes, with new demands on how content is written, structured, and surfaced.
  • Publishers are dialing back traditional SEO. Many survey respondents plan to reduce investment in classic Google SEO and focus more on distribution through AI platforms like ChatGPT, Gemini, and Perplexity.

Between the lines. This is about more than rankings. It’s about distribution inside platforms that publishers do not control.

  • Chat referrals are growing, but remain small. Traffic from ChatGPT is rising quickly, but the report calls it a rounding error compared with Google.
  • Attribution is getting murkier. If AI agents summarize content and complete tasks for users, it becomes unclear what counts as a visit and how monetization works.
  • Licensing is becoming a parallel strategy. As referral risk grows, publishers are turning to AI licensing, revenue-sharing deals, and negotiated citation or prominence as another path to value.

What to watch. A new KPI stack is emerging. Metrics like share of answer, citation visibility, and brand recall may matter as much as clicks.

  • Utility content faces the biggest squeeze. Categories built for fast answers are easiest for AI systems to commoditize.
  • A measurement arms race is coming. Expect new tools to separate human visits from agent consumption and to measure value beyond raw traffic.

Bottom line. Publishers are bracing for a world where search still matters, but clicks matter less. The report’s message is clear: when AI answers become the interface, AEO, GEO, and attribution strategy are no longer optional. They are core to a modern search strategy.

The report. Journalism, media, and technology trends and predictions 2026


Google opens Olympic live sports inventory to biddable CTV buys

Live sports advertising is getting more programmatic — and more measurable.

Driving the news. Google is expanding biddable live sports in Display & Video 360, giving advertisers programmatic access to NBCUniversal’s Olympic Winter Games inventory ahead of a crowded 2026 global sports calendar.

Why we care. Live sports remain one of the few media environments that consistently deliver massive, attentive audiences. By moving premium sports inventory into biddable CTV, Google gives advertisers more control, stronger measurement, and simpler activation — without sacrificing reach.

What’s new. Advertisers can now pair Google audience signals with NBCUniversal’s live sports CTV inventory to reach fans on the big screen and re-engage them across YouTube and other Google surfaces.

  • New household-level frequency management reduces overexposure, while Google’s AI-powered cross-device conversion tracking connects CTV impressions to downstream purchases at no additional cost.
  • Google is also streamlining access to live sports with a redesigned Marketplace.
  • You can activate curated sports packages in just a few clicks instead of managing fragmented media buys.

The big picture. As fans move fluidly between connected TV, YouTube, Search and social feeds, advertisers are under pressure to follow attention across screens. Google is positioning Display & Video 360 as the hub that connects those moments, from the living room to mobile.

Bottom line: By unlocking Olympic and live sports inventory inside Display & Video 360, Google is making premium sports advertising easier to buy, easier to measure, and far more accountable.


Google expands Shopping promotion rules ahead of 2026


Google is broadening what counts as an eligible promotion in Shopping, giving merchants more flexibility heading into next year.

Driving the news. Google is updating its Shopping promotion policies to support additional promotion types, including subscription discounts, common promo abbreviations, and — in Brazil — payment-method-based offers.

Why we care. Promotions are a key lever for visibility and conversion in Shopping results. These changes unlock more promotion formats that reflect how consumers actually buy today, especially subscriptions and cashback offers. Greater flexibility in promotion types and language reduces disapprovals and makes Shopping ads more competitive at key decision moments.

For retailers relying on subscriptions or local payment incentives, this update creates new ways to drive visibility and conversion on Google Shopping.

What’s changing. Google will now allow promotions tied to subscription fees, including free trials and percent- or amount-off discounts. Merchants can set these up by selecting “Subscribe and save” in Merchant Center or by using the subscribe_and_save redemption restriction in promotion feeds. Examples include a free first month on a premium subscription or a steep discount for the first few billing cycles.

Google is also loosening restrictions on language. Common promotional abbreviations like BOGO, B1G1, MRP and MSRP are now supported, making it easier for retailers to mirror real-world retail messaging without risking disapproval.

In Brazil only, Google will now support promotions that require a specific payment method, including cashback offers tied to digital wallets. Merchants must select “Forms of payment” in Merchant Center or use the forms_of_payment redemption restriction. Google says there are no immediate plans to expand this change to other markets.

Between the lines. These updates signal Google’s intent to better align Shopping promotions with modern retail models — especially subscriptions and localized payment behaviors — while reducing friction for merchants.

The bottom line. By expanding eligible promotion types, Google is giving advertisers more room to compete on value, not just price, when Shopping policies update in January 2026.


Apple is finally upgrading Siri, and Google Gemini will power it


Apple is teaming up with Google to power its next generation of AI features, including a long-awaited Siri upgrade.

What’s happening: Apple will use Google’s Gemini AI models and cloud infrastructure to support future Apple Foundation Models. The multi-year partnership is expected to roll out later this year.

Why we care. With Gemini powering Siri, Apple’s assistant should become a true AI answer engine. That will likely change how millions of iOS users find information, ask questions, and interact with search.

Driving the news. Apple said it chose Google after a “careful evaluation,” calling Gemini the “most capable foundation” for its AI ambitions.

  • We learned in September that Apple was in talks to use a custom Gemini model to power a revamped Siri.
  • Apple delayed its Siri AI upgrade last year, despite marketing the feature. The delay intensified scrutiny of Apple’s AI strategy.

What they’re saying. Here’s a statement Google shared via X:

Apple and Google have entered into a multi-year collaboration under which the next generation of Apple Foundation Models will be based on Google’s Gemini models and cloud technology. These models will help power future Apple Intelligence features, including a more personalized Siri coming this year. After careful evaluation, Apple determined that Google’s AI technology provides the most capable foundation for Apple Foundation Models and is excited about the innovative new experiences it will unlock for Apple users. Apple Intelligence will continue to run on Apple devices and Private Cloud Compute, while maintaining Apple’s industry-leading privacy standards.

The bigger picture. Google briefly crossed a $4 trillion market cap last week, surpassing Apple for the first time since 2019.

  • Google’s Gemini 3 model launched late last year as part of its broader AI push.
  • Apple largely stayed out of the AI arms race that followed ChatGPT’s launch in late 2022 while rivals poured billions into models, chips, and cloud infrastructure.
