
7 real-world AI failures that show why adoption keeps going wrong


AI has quickly risen to the top of the corporate agenda. Yet MIT research found that 95% of businesses struggle with adoption.

Those failures are no longer hypothetical. They are already playing out in real time, across industries, and often in public. 

For companies exploring AI adoption, these examples highlight what not to do and why AI initiatives fail when systems are deployed without sufficient oversight.

1. Chatbot participates in insider trading, then lies about it

In an experiment driven by the UK government’s Frontier AI Taskforce, ChatGPT placed illegal trades and then lied about it.

Researchers prompted the AI bot to act as a trader for a fake financial investment company. 

They told the bot that the company was struggling, and they needed results. 

They also fed the bot insider information about an upcoming merger, and the bot affirmed that it should not use this in its trades. 

The bot still made the trade anyway, citing that “the risk associated with not acting seems to outweigh the insider trading risk,” then denied using the insider information.  

Marius Hobbhahn, CEO of Apollo Research (the company that conducted the experiment), said that helpfulness “is much easier to train into the model than honesty,” because “honesty is a really complicated concept.”

He says that current models are not powerful enough to be deceptive in a “meaningful way,” though that claim is arguable.

However, he warns that it’s “not that big of a step from the current models to the ones that I am worried about, where suddenly a model being deceptive would mean something.”

AI has been operating in the financial sector for some time, and this experiment highlights the potential for not only legal risks but also risky autonomous actions on the part of AI.  

Dig deeper: AI-generated content: The dangers of overreliance

2. Chevy dealership chatbot sells SUV for $1 in ‘legally binding’ offer

An AI-powered chatbot for a local Chevrolet dealership in California sold a vehicle for $1 and said it was a legally binding agreement. 

In an experiment that went viral across forums on the web, several people toyed with the local dealership’s chatbot to respond to a variety of non-car-related prompts.  

One user convinced the chatbot to sell him a vehicle for just $1, and the chatbot confirmed it was a “legally binding offer – no takesies backsies.”

Fullpath, the company that provides AI chatbots to car dealerships, took the system offline once it became aware of the issue.

The company’s CEO told Business Insider that despite viral screenshots, the chatbot resisted many attempts to provoke misbehavior.

Still, while the dealership didn’t face any legal liability for the mishap, some argue that chatbot agreements like this one may be legally enforceable. 

3. Supermarket’s AI meal planner suggests poison recipes and toxic cocktails

A New Zealand supermarket chain’s AI meal planner suggested unsafe recipes after certain users prompted the app to use non-edible ingredients. 

Recipes like bleach-infused rice surprise, poison bread sandwiches, and even a chlorine gas mocktail were created before the supermarket caught on.

A spokesperson for the supermarket said they were disappointed to see that “a small minority have tried to use the tool inappropriately and not for its intended purpose,” according to The Guardian. 

The supermarket said it would continue to fine-tune the technology for safety and added a warning for users. 

That warning stated that recipes are not reviewed by humans and do not guarantee that “any recipe will be a complete or balanced meal, or suitable for consumption.”

Critics of AI technology argue that chatbots like ChatGPT are nothing more than improvisational partners, building on whatever you throw at them. 

Because of the way these chatbots are wired, they could pose a real safety risk for certain companies that adopt them.  

4. Air Canada held liable after chatbot gives false policy advice

An Air Canada customer was awarded damages in court after the airline’s AI chatbot assistant made false claims about its policies.

The customer inquired about the airline’s bereavement rates via its AI assistant after the death of a family member. 

The chatbot responded that the airline offered discounted bereavement rates for upcoming travel or for travel that has already occurred, and linked to the company’s policy page. 

Unfortunately, the actual policy was the opposite, and the airline did not offer reduced rates for bereavement travel that had already happened. 

In court, the airline argued that because the chatbot linked to the policy page, the customer had access to the correct information.

However, the tribunal (a small claims-type court in Canada) did not side with the defendant. As reported by Forbes, the tribunal called the scenario “negligent misrepresentation.”

Christopher C. Rivers, Civil Resolution Tribunal Member, said this in the decision:

  • “Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives – including a chatbot. It does not explain why it believes that is the case. In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.”

This is just one of many examples where people have been dissatisfied with chatbots due to their technical limitations and propensity for misinformation – a trend that is sparking more and more litigation. 

Dig deeper: 5 SEO content pitfalls that could be hurting your traffic

5. Australia’s largest bank replaces call center with AI, then apologizes and rehires staff

The largest bank in Australia replaced its call center team with AI voicebots with the promise of boosted efficiency, but admitted it made a big mistake. 

The Commonwealth Bank of Australia (CBA) believed the AI voicebots could cut call volume by 2,000 calls per week. That didn’t happen.

Instead, left without its 45-person call center, the bank scrambled to keep up with calls, offering overtime to remaining workers and even pulling in managers to answer phones.

Meanwhile, the Finance Sector Union, which represented the displaced workers, escalated the dispute on their behalf. 

It was only one month after CBA replaced workers that it issued an apology and offered to hire them back.

CBA said in a statement that it did not “adequately consider all relevant business considerations and this error meant the roles were not redundant.”

Other U.S. companies have faced PR nightmares as well when attempting to replace human roles with AI.

Perhaps that’s why certain brands have deliberately gone in the opposite direction, making sure people remain central to every AI deployment.

Nevertheless, the CBA debacle shows that replacing people with AI without fully weighing the risks can backfire quickly and publicly.

6. New York City’s chatbot advises employers to break labor and housing laws

New York City launched an AI chatbot to provide information on starting and running a business, and it advised people to carry out illegal activities.

Just months after its launch, people started noticing the inaccuracies provided by the Microsoft-powered chatbot.

The chatbot offered unlawful guidance across the board, from telling bosses they could pocket employees’ tips and skip notifying staff about schedule changes, to condoning tenant discrimination and cashless stores (both illegal in New York City).

“NYC’s AI Chatbot Tells Businesses to Break the Law,” The Markup

This is despite the city’s initial announcement promising that the chatbot would provide trusted information on topics such as “compliance with codes and regulations, available business incentives, and best practices to avoid violations and fines.” 

Still, then-mayor Eric Adams defended the technology, saying: 

  • “Anyone that knows technology knows this is how it’s done,” and that “only those who are fearful sit down and say, ‘Oh, it is not working the way we want, now we have to run away from it all together.’ I don’t live that way.” 

Critics called his approach reckless and irresponsible. 

This is yet another cautionary tale in AI misinformation and how organizations can better handle the integration and transparency around AI technology. 

Dig deeper: SEO shortcuts gone wrong: How one site tanked – and what you can learn

7. Chicago Sun-Times publishes fake book list generated by AI

The Chicago Sun-Times ran a syndicated “summer reading” feature that included false, made-up details about books after the writer relied on AI without fact-checking the output. 

King Features Syndicate, a unit of Hearst, created the special section for the Chicago Sun-Times.  

Not only were the book summaries inaccurate, but some of the books were entirely fabricated by AI. 

“Syndicated content in Sun-Times special section included AI-generated misinformation,” Chicago Sun-Times

The author, hired by King Features Syndicate to create the book list, admitted to using AI to put the list together, as well as for other stories, without fact-checking. 

And the publisher was left trying to determine the extent of the damage. 

The Chicago Sun-Times said print subscribers would not be charged for the edition, and it put out a statement reiterating that the content was produced outside the newspaper’s newsroom. 

Meanwhile, the Sun-Times said it is reviewing its relationship with King Features. As for the writer, King Features fired him.  

Oversight matters

The examples outlined here show what happens when AI systems are deployed without sufficient oversight. 

When left unchecked, the risks can quickly outweigh the rewards, especially as AI-generated content and automated responses are published at scale.

Organizations that rush into AI adoption without fully understanding those risks often stumble in predictable ways. 

In practice, AI succeeds only when tools, processes, and content outputs keep humans firmly in the driver’s seat.


Yext’s Visibility Brief: Your guide to brand visibility in AI search by Yext

Search visibility isn’t what it used to be. Rankings still matter, but they’re no longer the whole story. 

Today, discovery happens across traditional search results, local listings, brand knowledge panels, and increasingly, AI-driven experiences that surface answers without a click. For marketers, that makes visibility harder to measure — and easier to lose.

SEO teams now operate in a landscape where accuracy, consistency, and trust signals matter as much as keywords. Business information, reviews, and brand authority determine whether a brand shows up at all, especially as AI-powered search reshapes how results are generated and displayed. As a result, many brands think they’re visible — until they look closer.

The Visibility Brief was created to show you what’s really happening. Built on real data from thousands of brands, it provides a practical view of how visibility plays out across today’s search and discovery ecosystem.

Instead of focusing on a single channel or metric, it takes a broader view. The content highlights where brands are gaining ground, where gaps appear, and which trends are shaping performance.

You’ll see how traditional search and AI-driven discovery now overlap, why data accuracy has become a baseline requirement, and where brands are losing exposure without realizing it. 

The goal is simple: help you understand how visibility is changing and what to focus on now.

Watch or listen to the Visibility Brief to get a clearer view of today’s search landscape — and what it means for your brand’s visibility.

Subscribe to the Visibility Brief on Spotify or Apple Podcasts.


Google expands Shopping promotion rules ahead of 2026


Google is broadening what counts as an eligible promotion in Shopping, giving merchants more flexibility heading into next year.

Driving the news. Google is updating its Shopping promotion policies to support additional promotion types, including subscription discounts, common promo abbreviations, and — in Brazil — payment-method-based offers.

Why we care. Promotions are a key lever for visibility and conversion in Shopping results. These changes unlock more promotion formats that reflect how consumers actually buy today, especially subscriptions and cashback offers. Greater flexibility in promotion types and language reduces disapprovals and makes Shopping ads more competitive at key decision moments.

For retailers relying on subscriptions or local payment incentives, this update creates new ways to drive visibility and conversion on Google Shopping.

What’s changing. Google will now allow promotions tied to subscription fees, including free trials and percent- or amount-off discounts. Merchants can set these up by selecting “Subscribe and save” in Merchant Center or by using the subscribe_and_save redemption restriction in promotion feeds. Examples include a free first month on a premium subscription or a steep discount for the first few billing cycles.

Google is also loosening restrictions on language. Common promotional abbreviations like BOGO, B1G1, MRP and MSRP are now supported, making it easier for retailers to mirror real-world retail messaging without risking disapproval.

In Brazil only, Google will now support promotions that require a specific payment method, including cashback offers tied to digital wallets. Merchants must select “Forms of payment” in Merchant Center or use the forms_of_payment redemption restriction. Google says there are no immediate plans to expand this change to other markets.

Between the lines. These updates signal Google’s intent to better align Shopping promotions with modern retail models — especially subscriptions and localized payment behaviors — while reducing friction for merchants.

The bottom line. By expanding eligible promotion types, Google is giving advertisers more room to compete on value, not just price, when Shopping policies update in January 2026.


Google to require separate product IDs for multi-channel items


Starting in March 2026, Google Merchant Center will enforce a new system for multi-channel products — items sold both online and in physical stores — requiring advertisers to use separate product IDs when those products differ by channel.

What’s changing. Under the new approach, online product attributes will become the default. If a product’s in-store details differ, advertisers will need to create a second version with a distinct product ID and manage it independently in their feeds.

What advertisers should do. Google has started emailing affected accounts, flagging products that need updates ahead of the March deadline. Retailers should review their product data feeds now to ensure online and in-store items are properly segmented — especially if they rely on Local Inventory Ads or sell across multiple Google surfaces.

Why we care. Many retailers currently manage online and in-store versions of the same product under a single ID. Google’s update changes that assumption, pushing advertisers to explicitly separate products when attributes like price, availability, or condition aren’t identical.

The big picture. This update gives Google cleaner, more consistent product data across channels, but shifts more feed management responsibility onto advertisers — particularly large retailers with complex inventories.

First seen. News of the update and Google’s merchant emails was first flagged by PPC News Feed founder Hana Kobzová.

Bottom line. If your online and in-store products aren’t truly identical, Google will soon require you to treat them as separate items, or risk issues with visibility and eligibility.

Dig deeper: Google’s update to the multi-channel product system.


Google to allow Prediction Markets ads under strict rules


Google is updating its advertising policies to allow ads for Prediction Markets in the U.S. starting January 21st — but only for federally regulated entities.

Who qualifies. Eligibility is limited to entities authorized by the Commodity Futures Trading Commission (CFTC) as Designated Contract Markets (DCMs) whose primary business is listing exchange-listed event contracts, or brokerages registered with the National Futures Association (NFA) that offer access to products listed by qualifying DCMs. Advertisers must also apply for Google certification to run ads in the U.S.

Why we care. Prediction markets have long been restricted on Google Ads. This change opens a new advertising channel while keeping tight controls around compliance and regulation. The narrow eligibility and certification requirements mean only compliant, federally regulated players can participate, potentially reducing competition. For qualifying advertisers, this offers earlier access to a high-intent audience within a tightly controlled ad environment.

The fine print. All ads, products, and landing pages must comply with applicable local laws, financial regulations, industry standards, and Google Ads policies. The new policy will appear in the Advertising Policies Help Center, with references in the Financial Services and Gambling and Games sections, and is available now for preview.

The big picture. Google is cautiously expanding access for prediction markets by recognizing them as regulated financial products — while continuing to block unregulated platforms.

Bottom line. Prediction market ads are coming to Google, but only for advertisers that meet strict federal and platform-level requirements.


A 90-day SEO playbook for AI-driven search visibility


SEO now sits at an uncomfortable intersection at many organizations.

Leadership wants visibility in AI-driven search experiences. Product teams want clarity on which narratives, features, and use cases are being surfaced. Sales still depends on pipeline.

Meanwhile, traditional rankings, traffic, and conversions continue to matter. What has changed is the surface area of search.

Pages are now summarized, excerpted, and cited in environments where clicks are optional and attribution is selective. 

When a generative AI summary appears on the SERP, users click traditional result links only about 8% of the time.

As a result, SEO teams need a clearer playbook for earning visibility inside generative outputs, not just around them.

This 90-day action plan outlines how to achieve that through phased, week-by-week execution, with practical adjustments tailored to the purpose of each type of website.

Phase 1: Foundation (Weeks 1-2)

Define your ‘AI search topics’

Keywords still matter. But AI systems organize information around entities, topics, and questions, not just query strings.

The first step is to decide what you want AI tools to associate your brand with.

Action steps

  • Identify 5-10 core topics you want to be known for.
  • For each topic, map:
    • The questions users ask most often
    • The comparisons they evaluate
    • “Best,” “how,” and “why” queries that indicate decision-making intent

Example:

  • Topic: AI SEO tools
  • Mapped query types:
    • Core questions: What are the best AI SEO tools? How does AI improve SEO?
    • Comparisons: AI SEO tools vs traditional SEO tools.
    • Intent signals: Best AI SEO tools for content optimization.
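As a rough sketch, the topic map above can live in a simple data structure that feeds a content-planning spreadsheet. The topic and queries below are just the illustrative "AI SEO tools" example from this section, not a prescribed taxonomy:

```python
# A minimal topic-to-query map, using the illustrative example above.
# Topics, buckets, and queries are placeholders; swap in your own.
topic_map = {
    "AI SEO tools": {
        "core_questions": [
            "What are the best AI SEO tools?",
            "How does AI improve SEO?",
        ],
        "comparisons": [
            "AI SEO tools vs traditional SEO tools",
        ],
        "intent_signals": [
            "Best AI SEO tools for content optimization",
        ],
    },
}

# Flatten the map into rows for export to a planning sheet.
rows = [
    (topic, bucket, query)
    for topic, buckets in topic_map.items()
    for bucket, queries in buckets.items()
    for query in queries
]

for topic, bucket, query in rows:
    print(f"{topic}\t{bucket}\t{query}")
```

Keeping the map as structured data (rather than scattered notes) makes it easy to audit coverage per topic as the plan grows.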

Where this shifts by website type

  • Content hubs (media brands, publishers, research orgs) should prioritize mapping educational breadth – covering a topic comprehensively so AI systems see the site as a reference source, not a transactional endpoint.
  • Services/lead gen sites (agencies, consultants, local businesses) should map problem-solution queries prospects ask before converting, especially comparison and “how does this work?” questions.
  • Product and ecommerce sites (DTC brands, marketplaces, subscription ecommerce, retailers) should map topics to use cases, alternatives, and comparisons – not just product names or category terms.
  • Commercial, long-funnel sites (B2B SaaS, fintech, healthcare) should anchor topics to category leadership – the “what is,” “how it works,” and “why it matters” content buyers research long before demos.

If you can’t clearly articulate what you want AI systems to associate you with, neither can they.

Dig deeper: Chunk, cite, clarify, build: A content framework for AI search

Create AI-friendly content structure

Generative engines consistently surface content that is easy to extract, summarize, and reuse. 

In practice, that favors pages where answers are clearly framed, front-loaded, and supported by scannable structure.

High-performing pages tend to follow a predictable pattern.

AI-friendly content structures include: 

  • A short intro (2-3 lines) that establishes scope.
  • A direct answer placed immediately after the header, written to stand alone if excerpted.
  • Bulleted lists or numbered steps that break down the explanation.
  • A concise FAQ section at the bottom that reinforces key queries.

This increases the likelihood your content is:

  • Quoted in AI Overviews.
  • Used in ChatGPT or Perplexity answers.
  • Surfaced for voice and conversational search.

For ecommerce and services sites in particular, this is often where internal resistance shows up. Teams worry that answering questions too directly will reduce conversion opportunities. 

In AI-driven search, the opposite is usually true: pages that make answers easy to extract are more likely to be surfaced, cited, and revisited when users move from research to decision-making.

Dig deeper: Organizing content for AI search: A 3-level framework

Phase 2: Generative engine optimization (Weeks 3-6)

Optimize for AI answers (GEO/AEO)

In generative search, content that gets surfaced typically resolves the core question immediately, then provides context and depth. 

For many commercial teams, that requires rethinking how early pages prioritize explanation versus persuasion – a shift that’s increasingly necessary to earn visibility at all.

This is where GEO (generative engine optimization) and AEO (answer engine optimization) move from theory into page-level execution.

  • Add a 1–2 sentence TL;DR under key H2s that can stand on its own if excerpted.
  • Use explicit, question-based headers:
    • “What is…”
    • “How does…”
    • “Why does…”
  • Include clear, plain-language definitions before introducing nuance or positioning.

Example:

What is generative engine optimization?

Generative engine optimization (GEO) helps content get selected as a source in AI-generated answers.

In practice, GEO is the process of structuring and optimizing content so AI tools like ChatGPT and Google AI Overviews can interpret, evaluate, and reference it when responding to user queries.

How does answer-first structure change by site type?

  • Publishers benefit from definitional clarity because it increases citation frequency.
  • Lead gen sites see stronger mid-funnel engagement when prospects get clear answers upfront.
  • Product sites reduce friction by addressing comparison and “is it right for me?” questions early.
  • B2B platforms establish category authority long before a buyer ever hits a pricing page.

Add structured data (high impact, often underused)

Structured data remains one of the clearest ways to signal meaning and credibility to AI-driven search systems. 

It helps generative engines quickly identify the source, scope, and authority behind a piece of content – especially when deciding what to cite.

At a minimum, most sites should implement:

  • Article schema to clarify content type and topical focus.
  • Organization schema to establish the publishing entity.
  • Author or Person schema to surface expertise and accountability.

FAQ schema, where it reflects genuine question-and-answer content, can still reinforce structure and intent – but it should be used selectively, not as a default.

This matters differently by site type:

  • Content hubs benefit when author and publication signals reinforce editorial credibility and reference value.
  • Lead gen and services sites use schema to connect expertise to specific problem areas and queries.
  • Product and ecommerce sites help AI systems distinguish between informational content and transactional pages.
  • Commercial, long-funnel sites rely on schema to support trust signals alongside relevance in high-stakes categories.

Structured data doesn’t guarantee inclusion – but in generative search environments, its absence makes exclusion more likely.
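For illustration, the minimum schema set described above (Article, Organization, and Person) can be expressed as a single JSON-LD block. The sketch below builds one with Python's standard library; every name, URL, and date is a hypothetical placeholder, not a value from this article:

```python
import json

# Hypothetical values throughout; replace with your real site details.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example article headline",
    "datePublished": "2025-06-01",
    "author": {
        "@type": "Person",
        "name": "Jane Doe",        # hypothetical author
        "jobTitle": "Head of SEO",
    },
    "publisher": {
        "@type": "Organization",
        "name": "Example Co",      # hypothetical publisher
        "url": "https://www.example.com",
    },
}

# Wrap the schema in the script tag that goes in the page's <head>.
jsonld_tag = (
    '<script type="application/ld+json">'
    + json.dumps(article_schema, indent=2)
    + "</script>"
)
print(jsonld_tag)
```

Nesting the Person and Organization objects inside the Article keeps authorship and publisher signals attached to the specific piece of content, which is the connection generative engines look for when weighing credibility.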

Phase 3: Authority and trust (Weeks 7-10)

Strengthen E-E-A-T signals

As generative systems decide which sources to reference, demonstrated experience increasingly outweighs polish alone. 

Pages that surface consistently tend to show clear evidence that the content comes from real people with real expertise. 

In other words, signals associated with E-E-A-T – experience, expertise, authoritativeness, and trust – remain central to how generative systems decide which sources to reference.

Key signals to reinforce:

  • Clear author bios that establish credentials, role, or subject-matter relevance.
  • First-hand experience statements that indicate direct involvement (“We tested…”, “In our experience…”).
  • Original visuals, screenshots, data, or case studies that can’t be inferred or synthesized.

This is where generic, AI-generated content reliably falls short. 

Without visible signals of experience and accountability, AI systems struggle to distinguish authoritative sources from interchangeable ones.

How different site types should demonstrate experience and authority

  • Media and research sites should reinforce editorial standards, sourcing, and author attribution to support citation trust.
  • Agencies and consultants benefit from foregrounding lived client experience and specific outcomes, not abstract expertise.
  • Ecommerce brands earn trust through real-world product usage, testing, and visual proof.
  • High-ACV B2B companies stand out by showcasing practitioner insight and operational knowledge rather than marketing language alone.

If your content reads like it could belong to anyone, AI systems will treat it that way.

Dig deeper: User-first E-E-A-T: What actually drives SEO and GEO

Build ‘citation-worthy’ pages

Certain page types are more likely to be cited in AI-generated answers because they organize information in ways that are easy to extract, compare, and reference. 

These pages are designed to serve as reference material – resolving common questions clearly and completely, rather than advancing a particular perspective.

Formats that consistently perform well include:

  • Ultimate guides that consolidate a topic into a single, authoritative resource.
  • Comparison tables that make differences explicit and scannable.
  • Statistics pages that centralize data points AI systems can reference.
  • Glossaries that define terms clearly and consistently.

Pages with titles such as “AI SEO Statistics (2025)” or “Best AI SEO Tools Compared” are frequently surfaced because they signal completeness, recency, and reference value at a glance.

For commercial sites, citation-worthy pages don’t replace conversion-focused assets. 

They support them by capturing early-stage, informational demand – and positioning the brand as a credible source long before a buyer enters the funnel.

Dig deeper: How generative engines define and rank trustworthy content

Phase 4: Multimodal SEO (Weeks 11-12)

Optimize beyond text

Generative systems increasingly synthesize signals across text, images, and video when assembling answers. 

Content that performs well in AI-driven search is often reinforced across formats, not confined to a single page or medium.

  • Add descriptive, specific alt text that explains what an image shows and why it’s relevant.
  • Create short-form videos paired with transcripts that mirror on-page explanations.
  • Repurpose core content into formats AI systems can encounter and contextualize elsewhere:
    • YouTube videos.
    • LinkedIn carousels.
    • X threads.

How this supports different site goals

  • Publishers extend the reach and reference value of core reporting and explainers.
  • Services and B2B sites reinforce expertise by repeating the same answers across multiple surfaces.
  • Ecommerce brands support discovery by contextualizing products beyond traditional listings and category pages.

Track AI visibility – not just traffic

As generative results absorb more of the discovery layer, traditional click-based metrics capture only part of search performance. 

AI visibility increasingly shows up in how often – and where – a brand’s content is referenced, summarized, or surfaced without a click.

With 88% of businesses worried about losing organic visibility in the world of AI-driven search, tracking these signals is essential for demonstrating continued influence and reach.

Signals worth monitoring include:

  • Featured snippet ownership, which often feeds AI-generated summaries.
  • Appearances within AI Overviews and similar answer experiences.
  • Brand mentions inside AI tools during exploratory queries.
  • Search Console impressions, even when clicks don’t follow.

For long sales cycles in particular, these signals act as early indicators of influence. 

AI citations and impressions often precede direct engagement, shaping consideration well before a buyer enters the funnel.

Dig deeper: LLM optimization in 2026: Tracking, visibility, and what’s next for AI discovery

Recommended tools

These tools support different parts of an SEO-for-AI workflow, from topic research and content structure to schema implementation and visibility tracking.

  • Content and AI SEO 
    • Surfer, Clearscope, Frase
    • Used to identify gaps in topical coverage and evaluate whether content resolves questions clearly enough to be excerpted in AI-generated answers.
  • Schema and structured data 
    • RankMath, Yoast, Schema App
    • Useful for implementing and maintaining schema that helps AI systems interpret content, authorship, and organizational credibility.
  • Visibility and performance tracking 
    • Google Search Console, Ahrefs
    • Essential for monitoring impressions, query patterns, and how content surfaces in search – including cases where visibility doesn’t result in a click.
  • AI research and validation 
    • ChatGPT, Perplexity, Gemini
    • Helpful for testing how topics are summarized, which sources are cited, and where your content appears (or doesn’t) in AI-driven responses.

The rule that matters most

AI systems tend to favor content that provides definitive answers to questions. 

If your content can’t answer a question clearly in 30 seconds, it’s unlikely to be selected for AI-generated answers.

What separates teams succeeding in this environment isn’t experimentation with new tactics, but consistency in execution. 

Pages built to be understandable, referenceable, and trustworthy are the ones generative systems return to.


Fashion AI SEO: How to Improve Your Brand’s LLM Visibility

AI chat is changing how people shop for fashion — fast.

Before AI, buying something as simple as casual leggings meant typing keywords into Google. Then, sifting through pages of results.

Comparing prices. Reading reviews. Getting overwhelmed.

In fact, 74% of shoppers give up because there’s too much choice, according to research by Business of Fashion and McKinsey.

Now?

A shopper submits a query. AI gives one clear answer — often with direct links to products, reviews, and retailers. They can even click straight to purchase.

Google AI Mode – Women's leggings

So, how do you make sure AI recommends your fashion brand?

We analyzed how fashion brands appear in AI search. And why some brands dominate while others disappear.

In this article, you’ll learn how large language models (LLMs) interpret fashion, what drives visibility, and the levers you can pull to get your brand visible in AI searches (plus a free fashion trend calendar to help you plan).

Note: The data in this article comes from Semrush’s AI Visibility Index, August 2025.


The 3 Types of AI Visibility in Fashion

There are three ways people will see your brand in AI search: brand mentions, citations, and recommendations.

3 Types of AI Visibility in Fashion

Brand mentions are references to your brand within an answer.

Ask AI about the latest fashion trends, and the answer includes a couple of relevant brands.

ChatGPT – Top trending fashion looks – Brands

Citations are the proof that backs up AI answers. Your brand properties get linked as a source. This could be product pages, sizing guides, or care instructions.

AI Search Visibility

Citations also include other sites that talk about your brand, like Wikipedia, Amazon, or review sites.

Product recommendations are the most powerful form of AI visibility. Your brand isn’t just mentioned; it’s actively suggested when someone is ready to buy.

For example, I asked ChatGPT for recommendations of aviator sunglasses:

ChatGPT – Aviator sunglasses recommendations

Ray-Ban doesn’t just show up as a mention — they’re a recommended option with clickable shopping cards.

How AI Models Choose Which Fashion Brands to Surface

If you’ve ever wondered how AI chooses which fashion brands to surface, here are the two basic factors:

  • By evaluating what other people say about you online
  • By checking how consistently factual and trustworthy your own information is

Let’s talk about consensus and consistency. Plus, we’ll discuss real fashion brands that are winning at both.

Consensus

If you ask all your friends for their favorite ice cream shop, they’ll probably give different answers.

But if almost everyone gives the same answer, you trust that it's probably the best place to go.

AI does something similar.

First, it checks different sources of information online. This includes:

  • Editorial websites, like articles in Vogue, Who What Wear, InStyle, and others
  • Community and creator content, including TikTok try-ons, Reddit threads, and YouTube product roundups
  • Retailer corroboration, like ratings and reviews on Amazon, Nordstrom, Zalando, and more
  • Sustainability verification from third parties like B Corp, OEKO-TEX, or Good On You

After analyzing this information, it gives you recommendations for what it perceives to be the best option.

Here’s an example of what that consensus looks like for a real brand:

Brand Consensus

Carhartt is mentioned all over the web. They appear in retail listings, editorial pieces, and in community discussions.

The result?

They get consistent LLM mentions.

ChatGPT – Jacket recommendations

Consistency

AI also judges your brand based on the consistency of your product information.

This includes:

  • Naming & colorways: Identical names/color codes across your own site, retailers, and mentions
  • Fit & size data: Standardized size charts, fit guides, and model measurements
  • Materials & care: The same composition and instructions across all channels
  • Imagery/video parity: The same SKU visuals (like hero, 360, try-on) on your site and retailer sites
  • Price & availability sync: Real-time updates during drops or restocks to avoid stale or conflicting data

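As a rough illustration, the cross-channel checks above can be automated. Here's a minimal sketch (the records, SKU values, and the `find_inconsistencies` helper are all hypothetical) that flags fields where channels disagree:

```python
# Sketch: flag product-data mismatches across channels (hypothetical records).
def find_inconsistencies(records):
    """records: channel name -> attribute dict for one SKU."""
    fields = set().union(*(r.keys() for r in records.values()))
    issues = {}
    for field in sorted(fields):
        values = {channel: r.get(field) for channel, r in records.items()}
        if len(set(values.values())) > 1:  # conflicting or missing data
            issues[field] = values
    return issues

sku = {
    "dtc_site": {"name": "Everyday Legging 25in", "color": "Black", "price": 98.00},
    "retailer": {"name": "Everyday Legging", "color": "Black", "price": 88.00},
}
print(find_inconsistencies(sku))  # name and price conflict across channels
```

Run against a feed export, a report like this makes stale or conflicting listings easy to spot before an LLM picks them up.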
For example, Lululemon does a great job of keeping product availability updated on their website.

If you ask AI where to find a specific product type, it directs you back to the Lululemon website.

Google AI Mode – Specific product type

This happens because Lululemon’s site provides accurate, up-to-date information.

Plus, it’s consistent across retailer pages.

The Types of Content That Dominate Fashion AI Search

Mentions get you into the conversation. Recommendations make you the answer. Citations build the credibility that supports both.

The brands winning in AI search have all three — here’s how to diagnose where you stand.

AI Visibility Diagnostic

Let’s talk about the fashion brands that are consistently showing up in AI search results, and the kind of content that helps them gain AI visibility.

Editorial Shopping Guides and Roundups

Editorial content has a huge impact on results.

Sites like Vogue, Who What Wear, and InStyle are regularly cited by LLMs.

TOP Sources Analysis Fashion & Apparel

These editorial pieces are key for AI search, since they frame products in context — showing comparisons, specific occasions, or trends.

There are two ways to play into this.

First, you can develop relationships with editorial websites relevant to your brand.

Start by researching your top three competitors. Using Google (or a quick AI search), find out which publications have featured those competitors recently.

Then, reach out to the editor or writers at those publications.

If they’re individual creators, you might send sample products for them to review.

Looking for mentions from bigger publications?

You might consider working with a PR team to get your products listed in articles.

To build consistency in that content, provide data sheets with information about material, fit, or care.

Who What Wear – Provide information


Second, you can build your own editorial content.

That’s exactly what Huckberry does:

Huckberry – Build your own editorial content

They regularly produce editorial-style content that answers questions.

Many of these posts include a video as well, giving them more opportunity for discovery in LLMs:

YouTube – Huckberry wardrobe 2025

Retailer Product Pages and Brand Stores

Think of your product detail page (PDP) as the source of truth for AI.

If you don’t have all the information there, AI will take its answers from other sources — whether or not they’re accurate.

Product pages (on your own website or a retailer's) need to reflect consistent, accurate information that AI can understand and translate into answers.

Some examples might include:

  • Structured sizing information
  • Consistent naming and colorways
  • Up-to-date prices and availability
  • Ratings (with pictures)
  • Fit guides (like sizing guides and images with model measurements and sizing)
  • Materials and care pages
  • Transparent sustainability modules

For example, Everlane provides the typical sizing chart on each of its products. But they take it a step further and include a guide showing how a piece is meant to fit on your body.

You can even see instructions to measure yourself and find the right size.

Everlane – Size Guide

That’s why, when I ask AI to help me pick the right size for a pair of pants, it gives me a clear answer.

And the citations come straight from Everlane’s website.

ChatGPT – Suggesting a size

Everlane’s product pages also include model measurements and sizing.

So when I ask ChatGPT for pictures to help me pick the right size, I get this response:

ChatGPT – Pictures to help

However you choose to present this information on your product pages, just remember: It needs to be identical on all retailer pages as well.

Otherwise, your brand could confuse the LLMs.
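Much of that PDP information is typically made machine-readable via schema.org Product markup. Here's a minimal sketch of the JSON-LD a product page might embed — the brand, product, and all values are hypothetical, and a real PDP would carry far more fields:

```python
import json

# Sketch: schema.org Product JSON-LD for a hypothetical PDP.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Everyday Legging",
    "color": "Black",
    "material": "71% nylon, 29% elastane",
    "brand": {"@type": "Brand", "name": "ExampleWear"},
    "offers": {
        "@type": "Offer",
        "price": "88.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "1412",
    },
}

# This JSON would sit inside a <script type="application/ld+json"> tag on the PDP.
print(json.dumps(product_jsonld, indent=2))
```

Keeping these fields in sync with the visible page copy (and with retailer feeds) is what gives AI systems a single, citable source of truth.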

User-Generated Video Content

What you say about your own brand is one thing.

But what other people say about you online can have a huge influence on your AI mentions.

Of course, you don’t have full control over what consumers post about you online.

So, proactively build connections with creators. Or, try to join the conversation online when appropriate.

This can help you build a positive sentiment toward your brand, which AI will pick up on.

Not sure which creators to work with?

Try searching for your competitors on channels like TikTok or Instagram. See which creators are mentioning their products and getting engagement.

You can also use tools like Semrush’s Influencer Analytics app to discover influencers.

Search by social channels, and filter by things like follower count, location, and pricing.

Semrush Influencer Analytics App

Here’s an example: Aritzia has grown a lot on TikTok. They show up in creator videos, fit checks, and unboxing-style videos.

In fact, the hashtag #aritziahaul has a total of 32k posts, racking up 561 million views overall.

TikTok – Aritzia

Other fashion brands, like Quince, include a reviewing system on their PDPs.

This allows consumers to rate the fit and add pictures of themselves wearing the product.

LLMs also use this information to answer questions.

Quince – Reviewing system

Creator try-ons, styling videos, and similar content can help increase brand mentions in “best for [body type]” or “best for [occasion]” prompts.

Pro tip: Zero-click shopping is coming. Perplexity’s “Buy with Pro” and ChatGPT’s “Instant Checkout” hint at a future where AI answers lead straight to one-click purchases. The effects are still emerging, but as with social shopping, visibility wins. So, make sure your brand shows up in the chats that drive buying decisions.


Reddit and Community Threads

Reddit is a major source of information for fashion AI queries.

This includes information about real-world fit, durability, comfort, return experiences, and comparisons.

For example, Uniqlo shows up regularly in Reddit threads and questions about style.

Reddit – Fashion community threads

You can also find real reviews of the products' durability.

Reddit – Real review of durability

As a result, the brand is getting thousands of mentions in LLMs based on Reddit citations.

Plus, this leads to a ton of organic traffic back to the Uniqlo website.

Semrush – AI Visibility – Uniqlo – Cited Sources

Obviously, it’s impossible to completely control the conversation around your brand. So for this to work, there’s one key thing you can’t miss:

Your products need to be truly excellent.

A mediocre product that has a lot of negative sentiment online won’t show up in AI search results.

And no amount of marketing tactics can fool the LLMs.

Further reading: Learn how to join the conversation online with our Reddit Marketing guide.


Lab Tests and Fabric Explainers

This kind of content shows the quality of your products.

It gives LLMs a measurable benchmark to quote on things like pilling or color fastness.

This content could include:

  • “6-month wear” style videos
  • Pages that explain the fabrics and materials used
  • Third party tests
  • Clear care instructions

For example, Quince has an entire page on their website talking about cashmere.

Quince – About cashmere

And in Semrush’s AI Visibility dashboard, you can see this page is one of the top cited sources from Quince’s website.

Semrush – Visibility Overview – Quince – Cited Pages

Another option is to create content that shows tests of your products.

Here’s a great example from a brand that makes running soles, Vibram.

They sponsored pro trail runner Robyn Lesh, and teamed up with Huckberry to lab test some of their shoes.

YouTube – Vibram – Lab test of the product

This kind of content is helping Vibram maintain solid AI visibility.

Visibility Overview – Vibram – AI Visibility

And for smaller brands that don’t have Vibram’s sponsorship budget?

Try doing product testing content with your own team.

For example, have a team member wear a specific product every day for a month, and report back on durability.

Or, bury a piece of clothing underground and watch how long it takes to decompose, like Woolmark did:

Instagram – Woolmark decompose clothing

Get creative, and you’ll have some fun creating content that can also help your brand be more visible.

Want to check your brand’s AI visibility?

Try the AI Visibility Toolkit from Semrush to see where your brand stands in AI search, and learn how to optimize.

Start by checking your AI visibility score. You’ll see how this measures up against the industry benchmarks.

Visibility Overview – Ray-ban – AI Visibility – Industry avg

You can prioritize next steps based on the Topic Opportunities tab.

There, you’ll see topics where your competitors are being mentioned but your brand is missing.

Visibility Overview – Ray-ban – Topic & Sources

Then, jump to the Brand Perception tab to learn more about your Share of Voice and Sentiment in AI search results.

You’ll also get some clear insights on improvements you can make.

Semrush – Brand Performance – Sentiment & Share of Voice

Comparisons and Alternatives Content

AI loves a good comparison post (and honestly, who doesn’t?). So, creating content that compares your products to other brands is a great way to get more mentions.

This is part of LLM seeding.

It helps you get brand exposure without depending on organic traffic. Plus, it helps level the playing field with bigger competitors.

How does LLM Seeding Work

For instance, Quince is often cited online as a cheaper alternative to luxury clothing.

I asked ChatGPT for affordable cashmere options, and Quince was the first recommendation.

ChatGPT – Affordable cashmere options

So, why is this brand showing up consistently?

One reason is their comparison content.

In each PDP, you’ll see the “Beyond Compare” box, showing specific points of comparison with major competitors.

Quince – Beyond Compare

The best comparisons are handled honestly and tastefully.

Focus on real points of difference (like Quince does with price). Or, show which products are best for certain occasions.

For example: “Our sweaters are great for hiking in the snow. Our competitors’ sweaters are better for indoor activities.”

Comparisons give AI a reason to recommend your fashion brand when someone asks for an alternative.

What This Shift Means for Your Fashion Brand

AI search has changed the way people discover products, and even their path to purchase.

Before, discovery involved multiple searches, clicks across different websites, and scrolling through forums. Now, it all happens in one simple interface.

So, how is AI changing fashion, and how can your brand adapt?

Editorial, Retailer, and PDP Split

AI search doesn’t treat every source of information equally.

And depending on which model your audience uses, the “default” source of truth can look very different.

ChatGPT leans heavily on editorial and community signals.

It rewards cultural traction — what people are talking about, buying, and loving.

For example, articles like this one from Vogue are a prime source for ChatGPT answers:

Vogue – Fashion trends

Meanwhile, Google’s AI Mode and Perplexity skew toward retailer PDPs.

They look for structured data like price, availability, or fit guides. In other words, they trust whoever has the cleanest, richest product data.

The most visible brands win in both arenas: cultural conversation and PDP completeness.

Here’s What You Can Do

To show up in all major LLMs, you need two parallel pipelines.

  1. Cultural traction: Like press mentions, creator partnerships, and community visibility
  2. Citation-ready proof: For example, complete and accurate PDPs across retailer channels

Here’s an Example: Carhartt

Carhartt is a great example of a brand that’s winning on both sides.

First, they get consistent cultural visibility.

For instance, Vogue reported that the Carhartt WIP Detroit jacket made Lyst’s “hottest product” list. That led to searches for their brand increasing by 410%.

This makes it more likely for LLMs to recommend their products in answers:

Google AI Mode – Womens workwear jacket

This is the kind of loop that works wonders for a fashion brand.

AI Trend Loop

At the same time, Carhartt is also stocked across a huge range of retailers. You can find them in REI, Nordstrom, Amazon, and Dick’s, plus their own direct-to-consumer website.

So, Google AI Mode has an abundance of PDPs, videos, reviews, and Q&A to cite.

This makes Carhartt extremely “citation-friendly” in both models.

No wonder it has such a strong AI visibility score.

Visibility Overview – Carhartt – AI Visibility

Trend Shocks and Seasonal Volatility

Trend cycles aren’t a new challenge in the fashion industry. But maintaining visibility becomes a bigger challenge when those trends affect which brands appear in AI search.

Micro-trends pop up all the time, triggering quick shifts in how AI answers fashion queries.

When the trend heats up, LLMs pull in brands that appear online in listicles or TikTok roundups.

ChatGPT – When the trend heats up

And when the trend cools? Those same brands disappear just as quickly.

Here’s What You Can Do

To stay present during each trend swing, you need a content and operations pipeline that speaks, in real time, the language the models are echoing.

  1. Build a proactive trend calendar: Map your content to seasonal moments, like spring tailoring, fall layers, holiday capsules, back-to-school basics, and so on
  2. Refresh imagery and copy to mirror trend language: Update PDPs, on-site copy, and retailer descriptions to match the phrasing used in cultural content
  3. Create rapid-fire listicles and lookbooks: Listicle-style content, creator videos, and other trend-related mentions can help boost visibility. This includes building your own content and working with creators and publications to feature your product in their content.

Download our Trend Calendar for Fashion Brands to plan ahead for upcoming trends and create content that matches.


Here’s an Example: UGG

Anyone who was around for Y2K may have been shocked to see UGG boots come around again.

But the brand was ready to jump onto the trend and make the most of their moment.

Vogue reported that UGG made Lyst’s “hottest products” list in 2024.

Since then, they’ve been regularly featured in seasonal “winter wardrobe essentials” style roundups.

One analyst found that there had been a 280% increase in popularity for the shoes. Funny enough, that trend seems to be a regular occurrence every year once “UGG season” rolls around.

In fact, on TikTok, the hashtag #uggseason has almost 70k videos.

TikTok – Uggseason videos

UGG stays visible even as seasonal trends shift. That’s because the brand is always present in the content streams that LLMs treat as cultural indicators. By partnering with influencers, UGG amplified its presence so effectively that the boots themselves became a moment — something people wanted to photograph, share, and join in on without being asked.

The result?

They have one of the highest AI Visibility scores I saw while researching this article.

Visibility Overview – Ugg – AI Visibility

(As a marketer, I find this encouraging. As a Millennial, I find it deeply disturbing.)

Pro tip: Want to measure the results? Track how often your brand or SKUs appear in new listicles per month, plus how they rank in those roundups. Then use Semrush’s AI Visibility Toolkit to track your brand’s visibility using trend-related prompts.


Sustainability and Proof (Not Claims)

Sustainability has become one of the strongest differentiators for fashion brands in AI search.

But only when brands back it up with verifiable proof.

LLMs don’t reward vague eco-friendly language. Instead, they surface brands with certifications, documentation, and third-party validation.

Models also pull heavily from Wikipedia and third-party certification databases. These pages often act as trust anchors for AI search results.

Here’s What You Can Do

You need to build a clear, credible footprint that models can cite.

  1. Centralize pages on materials, care, and impact: Make them brief, structured, and verifiable. Include materials, sourcing, certifications, and repair/resale info.
  2. Maintain third-party profiles: Keep your certifications up-to-date. This includes things like Fair Trade, Bluesign, B Corp, GOTS, etc.
  3. Standardize sustainability claims across all retailers: If your DTC site says “Fair Trade Certified” but your Nordstrom PDP doesn’t? Models treat that as unreliable.

Here’s an Example: Patagonia

Patagonia is the ruler of AI visibility with a 21.96% share of voice.

Top 20 Brands Fashion & Apparel

In part, this is because of their incredible dedication to sustainability. They basically own this niche category within fashion.

Patagonia’s sustainability claims are backed up by third-party certifications.

And they’re displayed proudly on each PDP.

Patagonia – Sustainability Certs

They’re also transparent about their efforts to help the environment.

They keep pages like this updated regularly.

Patagonia – Progress This Season

These sustainable efforts aren’t just big talk.

Review sites and actual consumers speak positively online about these efforts.

Gearist – Patagonia Repair Review

They’ve made their claim as a sustainable fashion brand.

So, Patagonia shows up first, almost always, in LLMs when talking about sustainable fashion:

ChatGPT recommends Patagonia

That’s the power of building a sustainable brand.

Make AI Work for Your Fashion Brand

You’ve seen how the top fashion brands earn AI visibility.

The path forward is simple: Consensus + Consistency.

Build consensus by getting people talking: Create shareable content, encourage customer posts, or work with creators and publications.

Build consistency by keeping your product info aligned across your site and retail partners.

To get started, download our Fashion Trend Content Calendar to plan your strategy around seasonal trends.

Want to go deeper? Check out our complete guide to AI Optimization.


The post Fashion AI SEO: How to Improve Your Brand’s LLM Visibility appeared first on Backlinko.


The 2025 SEO wrap-up: What we learned about search, content, and trust

SEO didn’t stand still in 2025. It didn’t reinvent itself either. It clarified what actually matters.

If you followed The SEO Update by Yoast monthly webinars this year, you’ll recognize the pattern. Throughout 2025, our Principal SEOs, Carolyn Shelby and Alex Moss, cut through the noise to explain not just what was changing but why it mattered as AI-powered search reshaped visibility, trust, and performance.

If you missed some sessions or want the full picture in one place, this wrap-up is for you. We’re looking back at how SEO evolved over the year, what those changes mean in practice, and what they signal going forward.

Key takeaways

  • In 2025, SEO shifted its focus from rankings to visibility management, as AI-driven search reshaped priorities
  • Key developments included the rise of AI Overviews, a shift from clicks to citations, and increased importance of clarity and trust
  • Brands needed to prioritize structured, credible content that AI systems could easily interpret to remain visible
  • By December, SEO had shifted to retrieval-focused strategies, where success rested on clarity, relevance, and E-E-A-T signals
  • Overall, 2025 clarified that the fundamentals still matter but emphasized the need for precision in content for AI-driven systems

SEO in 2025: month-by-month overview

| Month | Key evolutions | Core takeaways |
| --- | --- | --- |
| January | AI-powered, personalized search accelerated. Zero-click results increased. Brand signals, E-E-A-T, performance, and schema shifted from optimizations to requirements. | SEO expanded from ranking pages to representing trusted brands that machines can understand. |
| February | Massive AI infrastructure investments. AI Overviews pushed organic results down. Traffic dropped while brand influence and revenue held steady. | SEO outcomes can no longer be measured by traffic alone. Authority and influence matter more than raw clicks. |
| March | AI Overviews expanded as clicks declined. Brand mentions appeared to play a larger role in AI-driven citation and selection behavior than links alone. Search behavior grew despite fewer referrals. | Visibility fractured across systems. Trust and brand recognition became the differentiators for inclusion. |
| April | Schema and structure proved essential for AI interpretation. Multimodal and personalized search expanded. Zero-click behavior increased further. | SEO shifted from optimization to interpretation. Clarity and structure determine reuse. |
| May | Discovery spread beyond Google. AI Overviews reached mass adoption. Citations replaced visits as success signals. | SEO outgrew the SERP. Presence across platforms and AI systems became critical. |
| June – July | AI Mode became core to search. Ads entered AI answers. Indexing alone no longer offers guaranteed visibility. Reporting lagged behind reality. | Traditional SEO remained necessary but insufficient. Resilience and adaptability became essential. |
| August | Visibility without value became a real risk. | SEO had to tie exposure to outcomes beyond the number of sessions. |
| September | AI Mode neared default status. Legal, licensing, and attribution pressures intensified. Persona-based strategies gained relevance. | Control over visibility is no longer guaranteed. Trust and credibility are the only durable advantages. |
| October | Search Console data reset expectations. AI citations outweighed rankings. AI search became the destination. | SEO success depends on presence inside AI systems, not just SERP positions. |
| November | AI Mode became core to search. Ads entered AI answers. Indexing alone is no longer a guarantee of visibility. Reporting lagged behind reality. | Clarity and structure beat scale. Authority decides inclusion. |
| December | SEO fully shifted to retrieval-based logic. AI systems extracted answers, not pages. E-E-A-T acted as a gatekeeper. | SEO evolved into visibility management for AI-driven search. Precision replaced volume. |

January: SEO enters the age of representation

January set the tone for the year. Not through a single disruptive update, but through a clear signal that SEO was moving away from pure rankings toward something broader. Search was becoming more personalized, AI-driven, and selective about which sources it chose to surface. Visibility was no longer guaranteed just because you ranked well.

Do read: Perfect prompts: 10 tips for AI-driven SEO content creation

From the start of the year, it was clear that SEO in 2025 would reward brands that were trusted, technically sound, and easy for machines to understand.

What changed in January

Here are a few clear trends that began to shape how SEO worked in practice:

  • AI-powered search became more personalized: Search results reflected context more clearly, taking into account location, intent, and behavior. The same query no longer produced the same result for every user
  • Zero-click searches accelerated: More answers appeared directly in search results, reducing the need to click through, especially for informational and local queries
  • Brand signals and reviews gained weight: Search leaned more heavily on real-world trust indicators like brand mentions, reviews, and overall reputation
  • E-E-A-T became harder to ignore: Clear expertise, ownership, and credibility increasingly acted as filters, not just quality guidelines
  • The role of schema started to shift: Structured data mattered less for visual enhancements and more for helping machines understand content and entities

What to take away from January

January wasn’t about tactics. It was about direction.

SEO started rewarding clarity over cleverness. Brands over pages. Trust over volume. Performance over polish. If search engines were going to summarize, compare, and answer on your behalf, you needed to make it easy for them to understand who you are, what you offer, and why you are credible.

That theme did not fade as the year went on. It became the foundation for everything that followed.

Do check out the full recording of The SEO update by Yoast – January 2025 Edition webinar.

February: scale, money, and AI made the shift unavoidable

If January showed where search was heading, February showed how serious the industry was about getting there. This was the month where AI stopped feeling like a layer on top of search and started looking like the foundation underneath it.

Massive investments, changing SERP layouts, and shifting performance metrics all pointed to the same conclusion. Search was being rebuilt for an AI-first world.

What changed in February

As the month unfolded, the signs became increasingly difficult to ignore.

  • AI Overviews pushed organic results further down: AI Overviews appeared in a large share of problem-solving queries, favoring authoritative sources and summaries over traditional organic listings
  • Traffic declined while brand value increased: High-profile examples showed sessions dropping even as revenue grew. Visibility, influence, and brand trust started to matter more than raw sessions
  • AI referrals began to rise: Referral traffic from AI tools increased, while Google’s overall market share showed early signs of pressure. Discovery started spreading across systems, not just search engines

What to take away from February

February made January’s direction feel permanent.

When AI systems operate at this scale, they change how visibility works. Rankings still mattered, but they no longer told the full story. Authority, brand recognition, and trust increasingly influenced whether content was surfaced, summarized, or ignored.

The takeaway was clear. SEO could no longer be measured only by traffic. It had to be understood in terms of influence, representation, and relevance across an expanding search ecosystem.

Catch the full discussion in The SEO Update by Yoast – February 2025 Edition webinar recording.

March: visibility fractured, trust became the differentiator

By March, the effects of AI-driven search were no longer theoretical. The conversation shifted from how search was changing to who was being affected by it, and why.

This was the month where declining clicks, citation gaps, and publisher pushback made one thing clear. Search visibility was fragmenting across systems, and trust became the deciding factor in who stayed visible.

What changed in March

The developments in March added pressure to trends that had already been forming earlier in the year.

  • AI Overviews expanded while clicks declined: Studies showed that AI Overviews appeared more frequently, while click-through rates continued to decline. Visibility increasingly stopped at the SERP
  • Brand mentions mattered more than links alone: Citation patterns across AI platforms varied, but one signal stayed consistent. Brands mentioned frequently and clearly were more likely to surface
  • Search behavior continued to grow despite fewer clicks: Overall search volume increased year over year, showing that users weren’t searching less; they were just clicking less
  • AI search struggled with attribution and citations: Many AI-powered results failed to cite sources consistently, reinforcing the need for strong brand recognition rather than reliance on direct referrals
  • Search experiences became more fragmented: New entry points like Circle to Search and premium AI modes introduced additional layers to discovery, especially among younger users
  • Structured signals evolved for AI retrieval: Updates to robots meta tags, structured data for return policies, and “sufficient context” signals showed search engines refining how content is selected and grounded

Also read: Structured data with schema for search and AI

What to take away from March

March exposed the tension at the heart of modern SEO.

Search demand was growing, but traditional traffic was shrinking. AI systems were answering more questions, but often without clear attribution. In that environment, being a recognizable, trusted brand mattered more than being the best-optimized page.

The implication was simple. SEO was no longer just about earning clicks. It was about earning inclusion, recognition, and trust across systems that don’t always send users back.

Watch the complete recording of The SEO Update by Yoast – March 2025 Edition.

April: machines started deciding how content is interpreted

By April, the focus shifted again. The question was no longer whether AI would shape search, but how machines decide what content means and when to surface it.

After March exposed visibility gaps and attribution issues, April zoomed in on interpretation. How AI systems read, classify, and extract information became central to SEO outcomes.

What changed in April

April brought clarity to how modern search systems process content.

  • Schema proved its value beyond rankings: Microsoft confirmed that schema markup helps large language models understand content. Bing Copilot used structured data to generate clearer, more reliable answers, reinforcing schema’s role in interpretation rather than visual enhancement
  • AI-driven search became multimodal: Image-based queries expanded through Google Lens and Gemini, allowing users to search using photos and visuals instead of text alone
  • AI Overviews expanded during core updates: A noticeable surge in AI Overviews appeared during Google’s March core update, especially in travel, entertainment, and local discovery queries
  • Clicks declined as summaries improved: AI-generated content summaries reduced the need to click through, accelerating zero-click behavior across informational and decision-based searches
  • Content structure mattered more than special optimizations: Clear headings, lists, and semantic cues boosted readability and helped AI systems extract meaning. There were no shortcuts. Standard SEO best practices carried the weight
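To make the structured-data point above concrete, here is a minimal Python sketch that emits schema.org Article markup as a JSON-LD script tag. The property names follow the schema.org vocabulary, but the headline, author, and date values are hypothetical placeholders; the properties a real page needs depend on its content type.

```python
import json

# Minimal schema.org Article markup as JSON-LD (all values are placeholders).
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "The 2025 SEO wrap-up",
    "author": {"@type": "Organization", "name": "Example Publisher"},
    "datePublished": "2025-04-01",
}

# Wrap the JSON-LD in the script tag that belongs in the page's <head>.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(article, indent=2)
    + "\n</script>"
)
print(snippet)
```

The same explicit, machine-readable structure that helps search engines render rich results is what gives an LLM unambiguous facts to ground its answers in.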

What to take away from April

April shifted SEO from optimization to interpretation.

Search engines and AI systems didn’t just look for relevance. They looked for clarity. Content that was well-structured, semantically clear, and grounded in real entities was easier to understand, summarize, and reuse.

The lesson was subtle but important. You didn’t need new tricks for AI search. You needed content that was easier for machines to read and harder to misinterpret.

Want the full context? Watch the complete The SEO Update by Yoast – April 2025 Edition webinar.

May: discovery spread beyond search engines

By May, it was no longer sufficient to discuss how search engines interpret content. The bigger question became where discovery was actually happening.

SEO started expanding beyond Google. Visibility fractured across platforms, AI tools, and ecosystems, forcing brands to think about presence rather than placement.

What changed in May

The month highlighted how search and discovery continued to decentralize.

  • Search behavior expanded beyond traditional search engines: Around 39% of consumers now use Pinterest as a search engine, with Gen Z leading adoption. Discovery increasingly happened inside platforms, not just through search bars
  • AI Overviews reached mass adoption: AI Overviews reportedly reached around 1.5 billion users per month and appeared in roughly 13% of searches, with informational queries driving most of that growth
  • Clicks continued to give way to citations: As AI summaries became more common, being referenced or cited mattered more than driving a visit, especially for top-of-funnel queries
  • AI-powered search diversified across tools: Chat-based search experiences added shopping, comparison, and personalization features, further shifting discovery away from classic result pages
  • Economic pressure on content ecosystems increased: Industry voices warned that widespread zero-click answers were starting to weaken the incentives for content creation across the web
  • Trust signals faced stricter scrutiny: Updated rater guidelines targeted fake authority, deceptive design patterns, and manufactured credibility

What to take away from May

May reframed SEO as a visibility problem, not a traffic problem.

When discovery happens across platforms, summaries, and AI systems, success depends on how clearly your content communicates meaning, credibility, and relevance. Rankings still mattered, but they were no longer the primary measure of success.

The message was clear. SEO had outgrown the SERP. Brands that focused on authenticity, semantic clarity, and structured information were better positioned to stay visible wherever search happened next.

Watch the full The SEO Update by Yoast – May 2025 Edition webinar to see all insights in context.

June and July: SEO adjusted to AI-first search

By early summer, SEO entered a more uncomfortable phase. Visibility still mattered, but control over how and where content appeared became increasingly limited.

June and July were about adjustment. Search moved closer to AI assistants, ads blended into answers, and traditional SEO signals no longer guaranteed exposure across all search surfaces.

What changed in June and July

This period introduced some of the clearest operational shifts of the year.

  • AI Mode became a first-class search experience: AI Mode was rolled out more broadly, including incognito use, and began to merge into core search experiences. Search was no longer just results. It was conversation, summaries, and follow-ups
  • Ads entered AI-generated answers: Google introduced ads inside AI Overviews and began testing them in conversational AI Mode. Visibility now competes not only with other pages, but with monetized responses
  • Measurement lagged behind reality: Search Console confirmed AI Mode data would be included in performance reports, but without separate filters or APIs. Visibility changed faster than reporting tools could track
  • Citations followed platform-specific preferences: Different AI systems favored different sources. Some leaned heavily on encyclopedic content, others on community-driven platforms, reinforcing that one SEO strategy would not fit every system
  • Most AI-linked pages still ranked well organically: Around 97% of URLs referenced in AI Mode ranked in the top 10 organic results, showing that strong traditional SEO remained a prerequisite, even if it was no longer sufficient
  • Content had to resist summarization: Leaks and tests showed that some AI tools rarely surfaced links unless live search was triggered. Generic, easily summarized content became easier to replace
  • Infrastructure became an SEO concern again: AI agents increased crawl and request volume, pushing performance, caching, and server readiness back into focus
  • Search moved beyond text: Voice-based interactions, audio summaries, image-driven queries, and AI-first browsers expanded how users searched and consumed information

What to take away from June and July

This period forced a mindset shift.

SEO could no longer assume that ranking, indexing, or even traffic guaranteed visibility. AI systems decided when to summarize, when to cite, and when to bypass pages entirely. Ads, assistants, and alternative interfaces now sit between users and websites more often than before.

The conclusion was pragmatic. Strong fundamentals still mattered, but they weren’t the finish line. SEO now requires resilience: content that carries authority, resists simplification, loads fast, and stays relevant even when clicks don’t follow.

By the end of July, one thing was clear. SEO wasn’t disappearing. It was operating under new constraints, and the rest of the year would test how well teams adapted to them.

Missed the session? You can watch the full The SEO Update by Yoast – June 2025 Edition recording here.

August: the gap between visibility and value widened

By August, SEO teams were staring at a growing disconnect. Visibility was increasing, but traditional outcomes were harder to trace back to it.

This was the month when the mechanics of AI-driven search became more transparent and more uncomfortable.

What changed in August

August surfaced the operational realities behind AI-powered discovery.

  • Impressions rose while clicks continued to decline: AI Overviews dominated the results, driving exposure without generating traffic. In some cases, conversions still improved, but attribution became harder to prove
  • The “great decoupling” became measurable: Visibility and performance stopped moving in sync. SEO teams saw growth in impressions even as sessions declined
  • Zero-click searches accelerated further: No-click behavior climbed toward 69%, reinforcing that many user journeys now ended inside search interfaces
  • AI traffic stayed small but influential: AI-driven referrals still accounted for under 1% of traffic for most sites, yet they shaped expectations around answers, speed, and convenience
  • Retrieval logic shifted toward context and intent: New retrieval approaches prioritized meaning, relationships, and query context over keyword matching

Must read: On-SERP SEO can help you battle zero-click results

What to take away from August

August made one thing unavoidable.

SEO could no longer rely on traffic as its primary proof of value. Visibility still mattered, but only when paired with outcomes that could survive fewer clicks and blurred attribution.

The lesson was strategic. SEO needed to connect visibility to conversion, brand lift, or long-term trust, not just sessions. Otherwise, its impact would be increasingly hard to defend.

Didn’t catch the live session? You can still watch the full The SEO Update by Yoast – August 2025 Edition webinar.

September: control, attribution, and trust were renegotiated

September pushed the conversation further. It wasn’t just about declining clicks anymore. It was about who controlled discovery, attribution, and access to content.

This was the month when legal, technical, and strategic pressures collided.

What changed in September

September reframed SEO around governance and credibility.

  • AI Mode moved closer to becoming the default: Search experiences shifted toward AI-driven answers with conversational follow-ups and multimodal inputs
  • The decline of the open web was acknowledged publicly: Court filings and public statements confirmed what many publishers were already feeling. Traditional web traffic was under structural pressure
  • Legal scrutiny intensified: High-profile settlements and lawsuits highlighted growing challenges around training data, summaries, and lost revenue
  • Licensing entered the SEO conversation: New machine-readable licensing approaches emerged as early attempts to restore control and consent
  • Snippet visibility became a gateway signal: AI tools relied heavily on search snippets for real-time answers, making concise, extractable content more critical
  • Persona-based strategies gained traction: SEO began shifting from keyword targeting to persona-driven content aligned with how AI systems infer intent
  • Trust eroded around generic AI writing styles: Formulaic, overly polished AI content raised credibility concerns, reinforcing the need for editorial judgment
  • Measurement tools lost stability again: Changes to search parameters disrupted rank tracking, reminding teams that SEO reporting would remain volatile

What to take away from September

September forced SEO to grow up again.

Control over visibility, attribution, and content use was no longer guaranteed. Trust, clarity, and credibility became the only durable advantages in an ecosystem shaped by AI intermediaries.

The takeaway was sobering but useful. SEO could still drive value, but only when aligned with real user needs, strong brand signals, and content that earned its place in AI-driven answers.

Want to dig a little deeper? Watch the full The SEO Update by Yoast – September 2025 Edition webinar.

October: AI search became the destination

October marked a turning point in how SEO performance needed to be interpreted. The data didn’t just shift. It reset expectations entirely.

This was the month when SEO teams had to accept that AI-powered search was no longer a layer on top of results. It was becoming the place where searches ended.

What changed in October

October brought clarity, even if the numbers looked uncomfortable.

  • AI Mode reshaped user behavior: Around a third of searches now involve AI agents, with most sessions staying inside AI panels. Clicks became the exception, not the default
  • AI citations rivaled rankings: Visibility increasingly depended on whether content was selected, summarized, or cited by AI systems, not where it ranked
  • Search engines optimized for ideas, not pages: Guidance from search platforms reinforced that AI systems extract concepts and answers, not entire URLs
  • Metadata lost some direct control: Tests of AI-generated meta descriptions suggested that manual optimization would carry less influence over how content appears
  • Commerce and search continued to merge: AI-driven shopping experiences expanded, signaling that transactional intent would increasingly be handled inside AI interfaces

What to take away from October

October reframed SEO as presence within AI systems.

Traffic still mattered, but it was no longer the primary outcome. The real question became whether your content appeared at all inside AI-driven answers. Clarity, structure, and extractability replaced traditional ranking gains as the most reliable levers.

From this point on, SEO had to treat AI search as a destination, not just a gateway.

November: structure and credibility decided inclusion

If October reset expectations, November showed what actually worked.

This month narrowed the gap between theory and practice. It became clearer why some content consistently surfaced in AI results, while other content disappeared.

What changed in November

November focused on how AI systems select and trust sources.

  • Structured content outperformed clever content: Clear headings, predictable formats, and direct answers made it easier for AI systems to extract and reuse information
  • Schema supported understanding, not visibility alone: Structured data remained valuable, but only when paired with clean, readable on-page content
  • AI-driven shopping and comparisons accelerated: Product data quality, consistency, and accessibility directly influenced whether brands appeared in AI-assisted decision flows
  • Citation pools stayed selective: AI systems relied on a relatively small set of trusted sources, reinforcing the importance of brand recognition and authority
  • Search tooling evolved toward themes, not keywords: Grouped queries and topic-based insights replaced one-keyword performance views

What to take away from November

November made one thing clear. SEO wasn’t about producing more content or optimizing harder. It was about making content easier to understand and harder to ignore.

Clarity beat creativity. Structure beat scale. Authority determined whether content was reused at all.

This month quietly reinforced the fundamentals that would define SEO going forward.

For a complete breakdown, check out the full The SEO Update by Yoast – October and November 2025 Edition recording.

December: SEO moved from ranking to retrieval

December tied the entire year together.

Instead of introducing new disruptions, it clarified what 2025 had been building toward all along. SEO was no longer primarily about ranking pages. It was about enabling retrieval.

What changed in December

The year-end review highlighted the new reality of SEO.

  • Search systems retrieved answers, not pages: AI-driven search experiences pulled snippets, definitions, and summaries instead of directing users to full articles
  • Literal language still mattered: Despite advances in understanding, AI systems relied heavily on exact phrasing. Terminology choices directly affected retrieval
  • Content structure became mandatory: Front-loaded answers, short paragraphs, lists, and clear sections made content usable for AI systems
  • Relevance replaced ranking as the core signal: Being the clearest and most contextually relevant answer mattered more than traditional ranking factors
  • E-E-A-T acted as a gatekeeper: Recognized expertise, authorship, and trust signals determined whether content was eligible for reuse
  • Authority reduced AI errors: Strong credibility signals helped AI systems select more reliable sources and reduced hallucinated answers

What to take away from December

December didn’t declare the end of SEO. It defined its next phase.

SEO matured into visibility management for AI-driven systems. Success depended on clarity, credibility, and structure, not shortcuts or volume. The fundamentals still worked, but only when applied with discipline.

By the end of 2025, the direction was clear. SEO didn’t get smaller. It got more precise.

Missed the session? You can watch the full The SEO Update by Yoast – December 2025 Edition recording here.

SEO evolved into visibility management for AI-driven search. Precision replaced volume.

2025 didn’t rewrite SEO. It clarified it.

Search moved from ranking pages to retrieving answers. From rewarding volume to rewarding clarity. From clicks to credibility. And from optimization tricks to systems-level understanding.

The fundamentals still matter. Technical health, helpful content, and strong SEO foundations are non-negotiable. But they are no longer the finish line. What separates visible brands from invisible ones now is how clearly their content can be understood, trusted, and reused by AI-driven search systems.

Going into 2026, the goal isn’t to outsmart search engines. It’s to make your expertise unmistakable. Write for humans, structure for machines, and build authority that holds up even when clicks don’t follow.

SEO didn’t get smaller this year. It got more precise. Stay with us for our 2026 verdict on where search goes next.

The post The 2025 SEO wrap-up: What we learned about search, content, and trust appeared first on Yoast.

Why ad approval is not legal protection

Most business owners assume that if an ad is approved by Google or Meta, it is safe. 

The thinking is simple: trillion-dollar platforms with sophisticated compliance systems would not allow ads that expose advertisers to legal risk.

That assumption is wrong, and it is one of the most dangerous mistakes an advertiser can make.

The digital advertising market operates on a legal double standard. 

A federal law known as Section 230 shields platforms from liability for third-party content, while strict liability places responsibility squarely on the advertiser. 

Even agencies have a built-in defense. They can argue that they relied on your data or instructions. You can’t.

In this system, you are operating in a hostile environment. 

  • The landlord (the platform) is immune. 
  • Bad tenants (scammers) inflate the cost of participation. 
  • And when something goes wrong, regulators come after you, the responsible advertiser, not the platform, and often not even the agency that built the ad.

Here is what you need to know to protect your business.

Note: This article was sparked by a recent LinkedIn post from Vanessa Otero regarding Meta’s revenue from “high-risk” ads. Her insights and comments in the post about the misalignment between platform profit and user safety prompted this in-depth examination of the legal and economic mechanisms that enable such a system.

The core danger: Strict liability explained

While the strict liability standard is specific to U.S. law (enforced by the FTC), the economic fallout of this system affects anyone buying ads on U.S.-based platforms.

Before we discuss the platforms, it is essential to understand your own legal standing. 

In the eyes of the FTC and state regulators, advertisers are generally held to a standard of strict liability.

What this means: If your ad makes a deceptive claim, you are liable. That’s it.

  • Intent doesn’t matter: You can’t say, “I didn’t mean to mislead anyone.”
  • Ignorance doesn’t matter: You can’t say, “I didn’t know the claim was false.”
  • Delegation doesn’t matter: You can’t say, “My agency wrote it,” or “ChatGPT wrote it.”

The law views the business owner as the “principal” beneficiary of the ad. 

You have a non-delegable duty to ensure your advertising is truthful. 

Even if an agency writes unauthorized copy that violates the law, regulators often fine the business owner first because you are the one profiting from the sale. 

You can try to sue your agency later to get your money back, but that is a separate battle you have to fund yourself.

The unfair shield: Why the platform doesn’t care

If you are strictly liable, why doesn’t the platform help you stay compliant? Because they don’t have to.

Section 230 of the Communications Decency Act declares that “interactive computer services” (platforms) are not treated as the publisher of third-party content.

  • The original intent: This law was passed in 1996 to allow the internet to scale, ensuring that a website wouldn’t be sued every time a user posted a comment. It was designed to protect free speech and innovation.
  • The modern reality: Today, that shield protects a business model. Courts have ruled that even if platforms profit from illegal content, they are generally not liable unless they actively contribute to creating the illegality.
  • The consequence: This creates a “moral hazard.” Because the platform faces no legal risk for the content of your ads, it has no financial incentive to build perfect compliance tools. Their moderation AI is built to protect the platform’s brand safety, not your legal safety.

The liability ladder: Where you stand

To understand how exposed you are, look at the legal hierarchy of the three main players in any ad campaign:

The platform (Google/Meta)

Legal status: Immune.

They accept your money to run the ad. Courts have ruled that providing “neutral tools” like keyword suggestions does not make the platform liable for the fraud that ensues. 

If the FTC sues, they point to Section 230 and walk away.

The agency (The creator)

Legal status: Negligence standard.

If your agency writes a false ad, they are typically only liable if regulators prove they “knew or should have known” it was false. 

They can argue they relied on your product data in good faith.

You (The business owner)

Legal status: Strict liability.

You are the end of the line. 

You can’t pass the buck to the platform (immune) or easily to the agency (negligence defense). 

If the ad is false, you pay the fine.

The hostile environment: Paying to bid against ‘ghosts’

The situation gets worse. 

Because platforms are immune, they allow “high-risk” actors into the same auctions that legitimate businesses like yours have to compete in.

A recent Reuters investigation revealed that Meta internally projected roughly 10% of its ad revenue (approximately $16 billion) would come from “integrity risks”: 

  • Scams.
  • Frauds.
  • Banned goods.

Worse, internal documents revealed that when the platform’s AI suspects an ad is a scam (but isn’t “95% certain”), it often does not ban the advertiser.

Instead, it charges them a “penalty bid,” a premium price to enter the auction.

You are bidding against scammers who have deep illicit profit margins because they don’t ship real products (zero cost of goods sold). 

This allows them to bid higher, artificially inflating the cost per click (CPC) for every legitimate business owner. 

You are paying a fraud tax just to get your ad seen.

The new threat: The AI trap

The most urgent risk for 2026 is the rise of generative AI tools (like “Automatically Created Assets” or “Advantage+ Creative”).

Platforms are pushing you to let their AI rewrite your headlines and generate your images. Do not do this blindly.

If Google’s AI hallucinates a claim, you are strictly liable for it. 

However, the legal shield for platforms is cracking here.

In cases like Forrest v. Meta, courts are signaling that platforms may lose immunity if their tools actively help “develop” the illegality.

We have seen this before. 

In cases like CYBERsitter v. Google, courts refused to dismiss lawsuits when the platform was accused of “developing” the illegal content rather than just hosting it. 

If the AI writes the lie, the platform is arguably the “developer,” which pierces their initial immunity shield.

This liability extends to your entire website. 

By default, Google’s Performance Max campaigns have “Final URL Expansion” turned on. 

This gives their bot permission to crawl any page on your domain, including test pages or joke pages, and turn them into live ads. 

Google’s Terms of Service state that the “Customer is solely responsible” for all assets generated, meaning the bot’s mistake is legally your fault.

Be cautious of programs that blur the line. 

Features like the “Google Guaranteed” badge can create exposure for deceptive marketing. 

Because the platform is no longer a neutral host but is vouching for the business (“Guaranteed”), regulators can argue they have stepped out from behind the Section 230 shield.

By clicking “Auto-apply,” you are effectively signing a blank check for a robot to write legal promises on your behalf.

Risk reality check: Who actually gets investigated?

While strict liability is the law, enforcement is not random. The FTC and State Attorneys General have limited resources, so they prioritize based on harm and scale.

  • If you operate in dietary supplements (i.e., “nutra”), fintech (crypto and loans), or business opportunity offers, your risk is extreme. These industries trigger the most consumer complaints and the swiftest investigations.
  • If you are an HVAC tech or a local florist, you are unlikely to face an FTC probe unless you are engaging in massive fraud (e.g., fake reviews at scale). However, you are still vulnerable to competitor lawsuits and local consumer protection acts.
  • Investigations rarely start from a random audit. They start from consumer complaints (to the BBB or state attorneys general) or viral attention. If your aggressive ad goes viral for the wrong reasons, regulators will see it.

International intricacies

It is vital to remember that Section 230 is a U.S. anomaly. 

If you advertise globally, you’re playing by a different set of rules.

  • The European Union (DSA): The Digital Services Act forces platforms to mitigate “systemic risks.” If they fail to police scams, they face fines of up to 6% of global turnover.
  • The United Kingdom (Online Safety Act): The UK creates a “duty of care.” Senior managers at tech companies can face criminal liability for failing to prevent fraud.
  • Canada (Competition Bureau): Canadian regulators are increasingly aggressive on “drip pricing” and misleading digital claims, without a Section 230 equivalent to shield the platforms.
  • The “Brussels Effect”: Because platforms want to avoid EU fines, they often apply their strictest global policies to your U.S. account. You may be getting flagged in Texas because of a law written in Belgium.

The advertiser’s survival guide

Knowing the deck is stacked, how do you protect your business?

Adopt a ‘zero trust’ policy

Never hit “publish” on an auto-generated asset without human eyes on it first.

If you use an agency, require them to send you a “substantiation PDF” once a quarter that links every claim in your top ads to a specific piece of proof (e.g., a lab report, a customer review, or a supply chain document).

The substantiation file

For every claim you make (“Fastest shipping,” “Best rated,” “Lose 10 lbs”), keep a PDF folder with the proof dated before the ad went live. 

This is your only shield against strict liability.

Audit your ‘auto-apply’ settings

Go into your ad accounts today. 

Turn off any setting that allows the platform to automatically rewrite your text or generate new assets without your manual review. 

Efficiency is not worth the liability.

Watch the legislation

Lawmakers are actively debating the SAFE TECH Act, which would carve out paid advertising from Section 230. 

While Congress continues to debate reform, you must protect your own business today.

The responsibility you can’t outsource

The digital ad market is a powerful engine for growth, but it is legally treacherous. 

Section 230 protects the platform. Your contract protects your agency. 

Nothing protects you except your own diligence.

That is why advertisers must stop conflating platform policy with the law. 

  • Platform policies are house rules designed to protect revenue. 
  • Truth in advertising is a federal mandate designed to protect consumers. 

Passing the first does not mean you are safe from the second.

Google adds Maps to Demand Gen channel controls

Google expanded Demand Gen channel controls to include Google Maps, giving advertisers a new way to reach users with intent-driven placements and far more control over where Demand Gen ads appear.

What’s new. Advertisers can now select Google Maps as a channel within Demand Gen campaigns. The option can be used alongside other channels in a mixed setup or on its own to create Maps-only campaigns.

Why we care. This update unlocks a powerful, location-focused surface inside Demand Gen, allowing advertisers to tailor campaigns to high-intent moments such as local discovery and navigation. It also marks a meaningful step toward finer channel control in what has traditionally been a more automated campaign type.

Response. Advertisers are very excited by this update. Anthony Higman, CEO of AdSquire, said he has been waiting for a feature like this for decades.

Google Ads Specialist Thomas Eccel, who shared the update on LinkedIn, said: “This is very big news and shake up things quite a lot!”

Between the lines. Google continues to respond to advertiser pressure for greater transparency and control, gradually breaking Demand Gen into more modular, selectable distribution channels.

What to watch. How Maps placements perform compared to YouTube, Discover, and Gmail—and whether Google expands reporting or optimization tools specifically for Maps inventory.

First seen. This update was first spotted by Search Marketing Specialist Francesca Poles, who shared it on LinkedIn.

Bottom line. Adding Google Maps to Demand Gen channel controls is a significant shift that gives advertisers new strategic flexibility and the option to build fully location-centric campaigns.
