Google Ads quietly rolls out a new conversion metric

A new column called “Original Conversion Value” has started appearing inside Google Ads, giving advertisers a long-requested way to see the true, unadjusted value of their conversions.

How it works. Google’s new formula strips everything back:

Conversion Value
– Rule Adjustments (value rules)
– Lifecycle Goal Adjustments (e.g., NCA bonuses)
= Original Conversion Value
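
To make the math concrete, here’s a quick worked example with hypothetical numbers (a minimal sketch in Python, though the arithmetic is the point):

```python
# Hypothetical figures illustrating Google's stated formula:
# Original Conversion Value = Conversion Value - rule adjustments - lifecycle goal adjustments
conversion_value = 150.00      # value reported after all adjustments
rule_adjustments = 20.00       # boost applied by a Conversion Value Rule
lifecycle_adjustments = 30.00  # e.g., a New Customer Acquisition bonus

original_conversion_value = conversion_value - rule_adjustments - lifecycle_adjustments
print(original_conversion_value)  # 100.0 -- the unadjusted value the conversion actually drove
```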

Why we care. For years, marketers have struggled to isolate real conversion value from Google’s layers of adjustments — including Conversion Value Rules and Lifecycle Goals (like New Customer Acquisition goals). Original Conversion Value makes it easier to diagnose performance, compare data across campaigns, and spot when automated bidding is boosting value rather than actual conversions.

In short: clearer insights, cleaner ROAS, and more confident decision-making.

Between the lines:

  • Value adjustments are useful for steering Smart Bidding.
  • But they also inflate numbers, complicating reporting and performance analysis.
  • Agencies and in-house teams have long asked Google for a cleaner view.

What’s next. “Original Conversion Value” could quickly become a go-to column for:

  • Revenue reporting
  • Post-campaign analysis
  • Troubleshooting inflated ROAS
  • Auditing automated bid strategies

First seen. This update was first spotted by Google Ads Specialist Thomas Eccel, who shared the new column on LinkedIn.

The bottom line. It’s a small update with big clarity. Google Ads is giving marketers something rare: a simpler, more transparent look at the value their ads actually drive.

Google releases Gemini 3 – it already powers AI Mode

Google announced the release of its latest AI model update, Gemini 3. “And now we’re introducing Gemini 3, our most intelligent model, that combines all of Gemini’s capabilities together so you can bring any idea to life,” Google’s CEO, Sundar Pichai, wrote.

Gemini 3 is now being used in AI Mode in Search with more complex reasoning and new dynamic experiences. “This is the first time we are shipping Gemini in Search on day one,” Sundar Pichai said.

AI Mode with Gemini 3. Google shared how AI Mode in Search is now using Gemini 3 to enable new generative UI experiences like immersive visual layouts and interactive tools and simulations, all generated completely on the fly based on your query.

Here is a video showing how RNA polymerase works with generative UI in AI Mode in Search.

Robby Stein, VP of Product at Google Search, said:

“In Search, Gemini 3 with generative layouts will make it easy to get a rich understanding of anything on your mind. It has state-of-the-art reasoning, deep multimodal understanding and advanced agentic capabilities. That allows the model to shine when you ask it to explain advanced concepts or ideas – it reasons and can code interactive visuals in real-time. It can tackle your toughest questions like advanced science.”

More Gemini 3. Google added that Gemini 3 has:

  • State-of-the-art reasoning
  • Deep multimodal understanding
  • Powerful vibe coding so you can go from prompt to app in one shot
  • Improved agentic capabilities, so it can get things done on your behalf, at your direction

Availability. Gemini 3 is now rolling out in AI Mode and beyond:

  • For everyone in the Gemini app and for Google AI Pro and Ultra subscribers in AI Mode in Search
  • For developers in the Gemini API in AI Studio; in Google’s new agentic development platform, Google Antigravity; and in Gemini CLI
  • For enterprises in Vertex AI and Gemini Enterprise

Why we care. Gemini 3 is currently powering AI Mode, the future of Google Search. It will continue to power more and more search features within Google, as well as other areas within Google’s platforms.

Staying on top of these changes, and how they impact search, your site, and potentially your Google Ads campaigns, is important.

I Tested 11 AI Search Engines: Only These 4 Made the Cut

Ask the same question in 11 AI search engines, and you’ll get 11 different answers.

Sometimes wildly different.

Some engines focus on visuals and shoppable results. Others go deep into research. A few just try to get you an answer, fast.

Each platform prioritizes and presents it differently.

And those differences matter.

Not just for users, but for brands trying to get discovered in AI search.

So, I tested popular and lesser-known AI engines on accuracy, depth, user experience, and other factors.

Only four made the cut.

In this guide, you’ll learn which AI search engines came out on top, including pros, cons, and pricing. I’ll also share which engines didn’t make my list, and why.

Along the way, you’ll get a few tips on using these insights to improve your AI visibility.

Start with a quick overview of my findings below. Or jump straight to the #1 AI search engine on my list: ChatGPT.

What Are the Best AI Search Engines?

• ChatGPT
  Best for: Comprehensive research and shoppable product comparisons
  Pros: Visual layout with tables and images; remembers context across follow-ups; direct purchase links
  Cons: Overwhelming results for broad queries; accuracy issues; overly agreeable
  Price: Free or $20+/month

• Google AI Mode
  Best for: Quick product searches with real buyer reviews
  Pros: Fast product results with pricing and reviews; integrates the Google ecosystem
  Cons: Vague on informational queries; no comparison tables; unavailable in some regions
  Price: Free

• Sigma Chat (Formerly Bagoodex)
  Best for: Research deep dives that build on previous questions
  Pros: Strong conversational memory; suggests follow-up questions; content creation prompts
  Cons: Weak product presentation; no pricing or buy links; poor visuals
  Price: Free or $10+/month

• Microsoft Copilot
  Best for: Fast answers in clean, skimmable formats
  Pros: Clean categorization; fast responses; easy to skim
  Cons: Surface-level depth; no product links; weak for shopping
  Price: Free

How I Tested 11 AI Search Engines

To keep things consistent, I ran the same set of prompts across 11 AI search tools.

Note: For this article, I defined “AI search engine” as any generative AI platform that can understand queries, pull information from sources, and deliver answers in natural language.


This included big names like ChatGPT, AI Mode, and Perplexity.

And newer players like Arc, Andi, and Sigma Chat.

Andi Search – How long do running shoes last

I focused on one topic (running shoes) and tested a range of prompts across different search intents.

This showed how well each engine handled the full customer journey, from research to shopping.

This included:

  • “Best running shoes”: Assesses top-level recommendations and how each engine handles broad prompts
  • “Best running shoes for beginner marathon training”: Evaluates personalization and context handling as the prompt narrows
  • “How long do running shoes last?”: Gauges accuracy on general product knowledge and durability expectations
  • “Of the trainers you’ve recommended, which ones will last the longest?”: Tests the accuracy of product details and the engine’s ability to remember details from previous prompts
  • “Can I wear any of these running shoes recommended for hiking?”: Assesses how each AI handles reasoning, real-world nuance, and potential safety considerations

ChatGPT – Shoes for hiking

I evaluated each tool on five factors:

  • Accuracy: Did it understand the intent and get the facts right?
  • Depth: Did it add helpful context or just summarize existing content?
  • Transparency: Did it credit or link to its sources?
  • User experience: Was the output fast, skimmable, and well-organized?
  • Adaptability: Could it handle follow-up questions naturally or refine vague prompts?

After testing all 11 AI search engines, these four stood out as the best for different reasons.

1. ChatGPT

Best for comprehensive research and shoppable product comparisons

ChatGPT – Homepage

ChatGPT came out on top overall.

It delivered the best balance of accuracy, organization, and depth. Plus, it showed an “understanding” of search intent and included helpful visuals.

What ChatGPT Does Well

ChatGPT provides detailed, well-formatted answers.

This is true whether you’re comparing products, researching topics, or looking for a step-by-step tutorial.

ChatGPT – Best running shoes

It also remembers context across follow-up questions.

I started with a broad prompt and added specifics as the conversation progressed. ChatGPT remembered key details without making me repeat myself.

For shopping queries, the visual presentation stood out.

When I searched for running shoes, for example, ChatGPT returned products with images, prices, reviews, and short descriptions.

It also included links to retailers and external articles. This made verifying product details and purchasing easy.

ChatGPT – Links to external articles

The summary tables were particularly useful.

After inquiring about shoe lifespan, ChatGPT delivered a clean comparison table with products and their expected mileage.

ChatGPT – Summary Table – Running shoes

For brands: ChatGPT’s visual layout isn’t just useful for shoppers. If you’re trying to get your brand referenced by AI search engines, it also reveals what these models prioritize. Use tables, clear specs, and organized categories on your product pages to help both shoppers and AI find your information faster.


ChatGPT is also evolving quickly.

Features like Instant Checkout (currently limited to select Etsy sellers in the United States) let users complete purchases directly inside the chat.

ChatGPT – Full shopping destination

Great for shoppers — and even greater for the brands featured in ChatGPT’s recommendations.

Where ChatGPT Falls Short

When I tested ChatGPT, I got what most people want from AI search: answers that feel confident and complete.

But not every response was perfect.

Broad prompts, such as “Best running shoes,” resulted in lengthy lists of brands, product categories, and features.

The information took real effort to digest.

ChatGPT – Top picks by category – Running shoes

Specific prompts worked much better.

I also noticed minor inaccuracies in some instances, like when I asked about shoe lifespan.

When I fact-checked the replies, some details didn’t match the manufacturer’s specifications.

For example, ChatGPT said the Brooks Ghost running shoe has a lifespan of 450 to 500 miles. But the actual range is 300 to 500 miles.

ChatGPT – The longest lasting trainers

This also highlights a larger problem.

ChatGPT pulls information from multiple sources, such as blog posts and brand sites.

But it also relies on forums like Quora and Reddit, where users share personal experiences.

Reddit – Relies on forums

It then aggregates the information into its responses. This can lead to inaccurate and misleading information.

For brands: Provide clear answers to common user questions on your site. Otherwise, AI search engines may turn to other, potentially inaccurate sources for this information. Add tables with specifications, be explicit about ranges and measurements, and use structured data so AI can extract and cite your product information correctly.
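
One way to act on that structured data advice is to expose explicit specs, including ranges, as schema.org Product markup. Below is a minimal sketch generated in Python; the product name, values, and the choice of additionalProperty are illustrative assumptions, not fields the article prescribes.

```python
import json

# Illustrative product markup with an explicit lifespan range.
# Property names follow schema.org; the values and the decision to expose
# lifespan via additionalProperty are assumptions for this sketch.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Road Running Shoe",
    "description": "Neutral daily trainer rated for 300-500 miles.",
    "additionalProperty": {
        "@type": "PropertyValue",
        "name": "Expected lifespan",
        "minValue": 300,
        "maxValue": 500,
        "unitText": "miles",
    },
}

# Embed the output on the product page in a <script type="application/ld+json"> tag.
print(json.dumps(product_jsonld, indent=2))
```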


ChatGPT also tends to be overly agreeable.

Whatever you prompt, ChatGPT will lean toward flattery and agreement — even when it involves safety.

For example, when I asked, “Can I wear any of these running shoes recommended for hiking?”

ChatGPT’s response was:

“Good question 👍 — you can hike in road running shoes, but whether it’s a good idea depends on the terrain and how far you’re going.”


Not the worst.

But not as good as other AI search engines in this aspect, like AI Mode, which was more cautious.

AI Mode said:

“It is not recommended to use the road running shoes previously mentioned for hiking…they lack the key features that provide the necessary grip, protection, and stability for off-road trails. Using them for hiking could lead to injury.”


Overall, ChatGPT is fast, detailed, and helpful.

But it can be too generous with information — and too polite to push back.

Pricing

ChatGPT – Pricing

ChatGPT offers three plans based on your needs.

  • Free: Limited access to some features
  • Plus: $20/month
  • Pro: $200/month for extended features

2. Google AI Mode

Best for quick product searches with real buyer reviews

Google AI Mode – Homepage

Google’s AI Mode is built for speed.

It pulls product listings, prices, and reviews directly into the search interface. This makes it ideal for shoppers who want to quickly compare products before purchasing.

What AI Mode Does Well

AI Mode shines when you have clear buying intent.

It instantly surfaces product options with images, prices, star ratings, and quick links to retailers. And it’s all in a clean, scrollable layout.

Google AI Mode – Best running shoes

When I searched “best running shoes,” it showed a curated carousel of options with price comparisons across multiple sites.

Google AI Mode – Open drop down on the right

I especially liked how it paired Google Reviews with its recommendations — a small detail that makes decision-making faster and builds trust.

Google AI Mode – Google reviews & recommendations

For me, that worked perfectly.

Getting straight to the products moved me faster toward a decision.

But some users may prefer more background or context for researching and weighing options. ChatGPT’s research-style answers still win in this regard.

For brands: AI Mode pulls heavily from Google Reviews and structured product data. Focus on getting detailed, positive reviews and keeping your product schema markup up to date. These signals can influence whether your products appear in AI-generated results.


Where AI Mode Falls Short

AI Mode is not yet available in all countries, although it’s rolling out quickly.

And unlike ChatGPT, it didn’t provide any comparison tables for any of my prompts. Just products and bullet points.

This meant more scrolling and clicking to find and digest the information.

Google AI Mode – Bullet points

This was evident when I asked which of the recommended shoes would last the longest.

AI Mode’s response was vague and unhelpful. It said the Brooks Ghost shoe was “exceptionally long-lasting.”

It didn’t provide any of the specifics that would make me want to purchase this shoe, like mileage range or how it differed from the other options.

Google AI Mode – Listings on the left

If you’re early in the evaluation phase, AI Mode can feel limiting.

But it delivers when you want a shortlist of top contenders.

Pricing

AI Mode is available for free within Google Search, depending on your region.

3. Sigma Chat (Formerly Bagoodex)

Best for research deep dives that build on previous questions

Sigma Chat – Homepage

Sigma Chat’s iterative search and in-depth replies are excellent if you love to research.

Ask a question, get an answer, then drill deeper into related topics — and it remembers the full thread.

Note: Bagoodex launched in 2024 and has since rebranded as Sigma Chat. For this review, I tested it against the standard modes of other tools. ChatGPT’s Thinking mode and Perplexity’s Research mode are designed for deep research and may perform differently.


What Sigma Chat Does Well

Sigma Chat stood out for its ability to build on previous context.

When I asked follow-up questions, it remembered what I’d already searched and adjusted its answers accordingly.

No need to repeat myself or reframe the entire query.

For example, after I asked which of the recommended shoes would last the longest, it specifically referenced “marathons.”

(Even though I hadn’t mentioned this criterion again after the initial prompt.)

Sigma Chat – Build on previous context

Sigma Chat’s follow-up suggestions also stood out for their potential to aid deep research.

Instead of ending with one answer, it nudged me toward related questions I hadn’t considered:

  • Beginner running shoes fitting
  • Marathon training schedule
  • Foot pronation assessment

Sigma Chat – Follow ups

Sigma Chat anticipates knowledge gaps and identifies adjacent topics worth exploring.

This makes it particularly helpful for any kind of research, whether you’re comparing products, building content outlines, or researching niches.

Sigma Chat – Foot pronation

For brands: Sigma Chat rewards depth and topic clustering. To increase visibility in AI tools like this, build content hubs around your main topics — link related pages together and cover every sub-question your audience might ask. The more complete your coverage, the easier it is for AI to surface your site in deep research queries.


Another interesting feature of this AI search engine?

It suggests prompts tailored to content creation. This is especially helpful if you’re using it for marketing purposes.

After providing search results for the best running shoes for a marathon, it offered unexpected options like:

  • “Write a blog post about this topic”
  • “Create an image on this topic”

I tested the blog prompt, and it generated a quick draft titled “Marathon Training on a Budget: Choosing Durable Running Shoes.”

It wasn’t something you’d publish as-is, but it was a decent starting point.

If you’re prone to writer’s block or need to quickly draft comparison content around competitor products, it’s a particularly helpful feature.

Sigma Chat – Blog prompt

From there, it suggested additional prompts like “Add a call to action” and “Shorten for social media.”

This makes it easy for marketers to generate content for multiple platforms at once.

Sigma Chat – Suggested additional prompts

Where Sigma Chat Falls Short

Sigma Chat’s presentation still needs work.

When I searched “best running shoes,” it opened with generic photos pulled from listicles.

This is a wasted use of prime real estate — they could’ve shown real products or reviews to provide more value.

Sigma Chat – Best running shoes

There are also no pricing details, reviews, or direct purchasing links.

But Sigma Chat does cite its sources.

In fact, it cited the same comparison article multiple times. (Helpful for that site’s traffic, not so helpful for someone ready to purchase.)

Sigma Chat – Cite sources

Unless Sigma Chat improves its commercial functionality, it’s unlikely shoppers will use it.

Instead, it might carve out a niche for itself as a deep research tool.

Pricing

Sigma Chat – Pricing

Sigma Chat offers a few plans with varying access and features:

  • Free: Basic search and chat capabilities
  • SigmaChat Plus: $10/month for increased access
  • SigmaChat Pro: $75/month for unlimited access

4. Microsoft Copilot

Best for fast answers in clean, skimmable formats

Microsoft Copilot – Homepage

Microsoft Copilot has the cleanest layout of any AI search engine I tested.

It’s fast, structured, and organized. Perfect for people who want distraction-free takeaways.

What Microsoft Copilot Does Well

When you ask Copilot a question, it responds instantly with skimmable categories, bullet points, and emojis.

For example, when I searched “best running shoes,” it broke recommendations into helpful categories:

  • “Best overall”
  • “Best stability shoe”
  • “Best daily trainer”

Copilot Microsoft – Best running shoes

When I narrowed the query to “best running shoes for beginner marathon training,” Copilot further refined the results.

It added details about who each shoe was best for, making the advice more actionable — a nice touch for a tool focused on clarity.

Copilot Microsoft – More actionable advice

Even for informational queries like “can I wear these for hiking,” Copilot delivered a simple breakdown.

And added specific scenarios where running shoes would and wouldn’t be ideal for hiking.

Copilot Microsoft – Simple breakdown

When you want fast, direct answers without having to sift through a bunch of content, Copilot is a great option.

For brands: Pay close attention to how Copilot structures its answers — categories, comparisons, “best for” labels. Use similar formatting on your own pages to help AI tools extract and present your content more effectively.


Where Microsoft Copilot Falls Short

Copilot’s polished format comes at a cost: depth and shoppability.

Its responses are tidy but often too surface-level — especially for commercial searches like “best running shoes.”

When I tested this prompt, it didn’t link directly to any product pages or show pricing.

So, I couldn’t easily comparison shop, verify information, or choose a merchant and purchase immediately.

Instead, it summarized content from other “best” listicles and linked those sources.

Copilot Microsoft – Doesn't link directly & no pricing

As with Sigma Chat, unless Microsoft improves Copilot’s shoppability, it’s unlikely consumers will use it for this purpose.

Instead, Copilot works better as a light research tool — especially when you want fast information with minimal reading.

Pricing

Microsoft Copilot is free to use.

AI Search Engines That Didn’t Make the Cut (and Why)

All of these AI search engines had their pros and cons.

But overall, they fell short for different reasons.

Claude

I really liked Claude, but the output was very similar to ChatGPT.

This isn’t a problem, but I didn’t want to list tools that were similar in functionality.

I wanted to provide only the best.

Compared to ChatGPT, Claude lacked product links and visuals:

Claude – Lacks product links & visuals

The wall of text made the information challenging to process.

I did like the categorization, but ChatGPT does this too — with tables that are easier to skim.

Perplexity

Like Claude, Perplexity came somewhat close to ChatGPT in overall performance.

When asked a prompt with buying intent, it provided a short summary along with product images, pricing, and star ratings.

No tables to help me quickly compare features and options, though.

Perplexity – Best running shoes

The summary was also fairly generic.

And didn’t feel all that tailored to my prompt, even when I used the more specific “marathon” wording.

Perplexity – Running shoes – Generic summary

Brave

Brave, a privacy-focused AI search engine, felt too much like traditional search.

Brave – Best running shoes – Ask

It features long lists of articles without any clear hierarchy or comparison features.

While this might be helpful for browsing links, it doesn’t summarize much or help you make quick decisions.

Andi

Andi, a minimal AI search tool, offered few results, sometimes just one (e.g., a single Reddit thread).

Andi Search – Best running shoes

It’s a bit like the “I’m Feeling Lucky” button on Google. Simple to use but extremely limiting for in-depth research or shopping.

Arc

Arc, an AI search engine delivered through its own mobile app and desktop browser, requires a download to use.

Arc – Search

This is inconvenient compared to AI search tools that run in any web browser.

When so many other options exist, it’s hard to justify using this AI engine for this reason alone.

You

You (You.com) is a solid AI search engine that has been around for several years.

You – Best running shoes

But it was slow to respond and didn’t link to products in commercial searches.

Ultimately, I found it less useful than the other AI tools overall.

What This Means for Your AI Search Visibility

After testing 11 AI search engines, one thing became clear.

No matter how their formatting or preferences differ, the goal remains the same: to serve clear, credible, and well-structured content.

If your pages do that — with comprehensive coverage, positive reviews, and clean markup — you’ll be positioned to perform well across all AI search engines and LLMs.

Want to make that happen?

Our generative engine optimization (GEO) guide shows how to structure your site, earn more citations, and track your AI visibility.

The post I Tested 11 AI Search Engines: Only These 4 Made the Cut appeared first on Backlinko.

The three AI research modes redefining search – and why brand wins

The AI resume has become a C-suite-level asset that reflects your entire digital strategy. 

To use it effectively, we first need to understand where AI is deploying it across the user journey.

How AI has rewritten the user journey

For years, our strategies were shaped by the inbound methodology.

We built content around a user-driven path through awareness, consideration, and decision, with traditional SEO acting as the engine behind those moments.

That journey has now been fundamentally reshaped. 

AI assistive engines – conversational systems like Gemini, ChatGPT, and Perplexity – are collapsing the funnel. 

They move users from discovery to decision within walled-garden environments. 

It’s what I call the BigTech walled garden AI conversational acquisition funnel.

For marketers, that shift can feel like a loss of control. 

We no longer own the click, the landing page, or the carefully engineered funnel. 

But from the consumer perspective, the change is positive. 

People want one thing: a direct, trusted answer.

This isn’t a contradiction. It’s the new reality. 

Our job is to align with this best-service model by proving to the AI that our brand is the most credible answer.

That requires updating the ultimate goal. 

For commercial queries, the win is no longer visibility. 

It’s earning the perfect click – the moment when an AI system acts as a trusted advisor and chooses your brand as the best solution.

To get there, we have to broaden our focus from explicit branded searches to the three modes of research AI uses today: 

  • Explicit.
  • Implicit.
  • Ambient. 

Together, they define the new strategic landscape and lead to one truth.

In an AI-driven ecosystem, brand is what matters most.

3 types of research redefining what search is

These three behaviors reveal how users now discover, assess, and choose brands through AI.

Explicit research (brand): The final perfect click

Explicit research is any query that includes your brand name, such as:

  • Searches for your name.
  • “Brand name reviews.”
  • “Brand vs. competitor.”

They represent deliberate, high-stakes moments when a potential client, partner, or investor is actively researching your brand. 

It’s the decision stage of the funnel, where they look for specific information about you or your services, or conduct a final AI-driven due diligence check before committing.

What they see here is your digital business card.

A strong AI assistive engine optimization (AIEO) strategy secures these bottom-of-funnel moments first. 

You must engineer an AI resume – the AI equivalent of a brand SERP – that is positive, accurate, and convincing so the prospect who is actively looking for you converts.

Branded terms are the lowest-hanging fruit, the most critical conversion point in the new conversational funnel, and the foundation of AIEO.

Implicit research (industry/topic/comparison): Being top of algorithmic mind

Implicit research includes any topical query that does not contain a brand name. 

These are the “best of” comparisons and problem-focused questions that happen at the top and middle of the funnel.

To win this part of the journey, your brand must be top of algorithmic mind, the state where an AI instinctively selects you as the most credible, relevant, and authoritative answer to a user’s query.

  • Consideration: When a user asks, “Who are the best personal injury law firms in Los Angeles?”, the AI builds a shortlist, and you cannot afford to be missing.
  • Awareness: When a user asks, “Give me advice about personal injury legal options after a car accident,” your chance to be included depends on whether the AI already understands and trusts your brand.

Implicit research is not about keywords. It is about being understood by the algorithms, demonstrating credibility, and building topical authority.

Here’s how it works:

  • The algorithms understand who you are.
  • They can effectively apply credibility signals. (An expanded version of Google’s E-E-A-T framework, N-E-E-A-T-T, incorporates notability and transparency.)
  • You have provided the content that demonstrates topical authority.

If you meet these three prerequisites, you can become top of algorithmic mind for user-AI interactions at the top and middle of the funnel, where implicit research happens.


Ambient research (push by software): Where the algorithms advocate for you

Ambient research is the ultimate form of push discovery, where an AI proactively suggests your brand to a user who isn’t even in research mode. 

It represents the most profound shift yet. Ambient research sits beyond the funnel – it is pre-awareness.

Simple examples include:

  • Gemini suggesting your name in Google Sheets while a prospect models ROI.
  • Your profile surfacing as a suggested consultant in Gmail or Outlook.
  • A meeting summary in Google Meet or Teams recommending your brand as the expert who can solve a key challenge.

In these day-to-day situations, the user is no longer pulling information. 

The AI is pushing a solution it trusts so completely that the engine becomes your advocate.

This is the ultimate goal, signaling that a brand has reached true dominant status as top of algorithmic mind within a niche. 

This level of trust comes from building a deep and consistent digital presence that teaches the AI your brand is a helpful default in a given context. 

It’s the same dynamic Seth Godin describes as “permission marketing,” except here the permission is granted by the algorithms.

It may feel like an edge case in 2025, but ambient research will become a major opportunity for those who prepare now. 

The walls are rising in the AI walled garden 2.0 – the new, more restrictive AI ecosystems. 

The next evolution will be AI assistive agents. 

These agents will not just recommend a solution. They will execute it. 

When an agent books a flight, orders a product, or hires a consultant on a user’s behalf, there is no second place. 

This creates a true zero-sum moment in AI. 

If you are not the trusted default choice, you are not an option at all.

Rethink your funnel: Brand is the unifying strategy

The awareness, consideration, and decision funnel still exists, but the journey has been hijacked by AI.

A strategy focused only on explicit research is a losing game. 

It secures the bottom of the funnel but leaves the entire middle and top wide open for competitors to be discovered and recommended.

Expanding into implicit research is better, yet it remains a reactive posture. You are waiting to be chosen from a list. 

That approach will fail as ambient research grows, because ambient moments are where the AI makes the first introduction.

This landscape demands a brand-first strategy.

Brand is the one constant across all three research modes. AI:

  • Recommends you in explicit research because it understands your brand’s facts. 
  • Recommends you in implicit research because it trusts your credibility on a topic. 
  • Advocates for you in ambient research because it has learned your brand is the most helpful default solution.

By building understandability, credibility, and deliverability, you are not optimizing for one type of search. 

You are systematically teaching the AI to trust your brand at every possible interaction.

The brands that become the best teachers will be the ones an AI recommends across all three research modes. 

It’s time to update your strategy or risk being left out of the conversation entirely.

Your final step: The strategic roadmap 

You now understand the what – the AI resume – and the where – the three research modes. 

Finally, we’ll cover the how: the complete strategic roadmap for mastering the algorithmic trinity with a multi-speed approach that systematically builds your brand’s authority.

Google AI Overviews: How to remove or suppress negative content

By now, we’re all familiar with Google AI Overviews. Many queries you search on Google now surface responses through this quick and prominent search feature.

But AI Overview results aren’t always reliable or accurate. 

Google’s algorithms can promote negative or misleading content, making online reputation management (ORM) difficult. 

Here’s how to stay on top of AI Overviews and your ORM – by removing, mitigating, or addressing negative content.

How AI Overviews source information

AI Overviews relies on a mix of data sources across Google and the open web, including:

  • Google’s Knowledge Graph: The Knowledge Graph is Google’s structured database of facts about people, places, and things. It’s built from a range of licensed data sources and publicly available information.
  • Google’s tools and databases: Google also draws on structured data from its own systems. This includes information from:
    • Business Profiles.
    • The Merchant Center.
    • Other Google-managed datasets that commonly appear in search results.
  • Websites: AI Overviews frequently cites content from websites across the open web. The links that appear beside answers point to a wide variety of sources, ranging from authoritative publishers to lower-quality sites.
  • User-generated content (UGC): UGC can also surface in AI Overviews. This may include posts, reviews, photos, or publicly available content from community-driven platforms like Reddit.

Several other factors influence how this data is organized into answers, including topical relevance, freshness, and the authority of the source.

However, even with relevance and authority taken into consideration, harmful or false content can still appear in results.

This can happen for a variety of reasons, including:

  • Where the information is sourced.
  • How Google’s AI fills in gaps.
  • Instances where it may misunderstand the context of a user’s query.

Removing or suppressing harmful content

There are several options for removing or suppressing negative information on the web, including those related to AI Overviews. Let’s look at two.

Legal and platform-based removal

From time to time, you are left with no other option but to take legal action.

In certain instances, a Digital Millennium Copyright Act (DMCA) claim or defamation lawsuit might be applicable. 

A DMCA claim can be initiated at the request of the content owner. A defamation lawsuit, meanwhile, aims to establish libel by showing four things:

  • A false statement purporting to be fact. 
  • Publication or communication of that statement to a third person.
  • Fault amounting to at least negligence.
  • Damages, or some harm caused to the reputation of the person or entity who is the subject of the statement.

Defamation standards vary by jurisdiction, and public figures may face a higher legal standard. 

Because of this, proper documentation and professionalism are essential when filing a lawsuit, and working with a legal professional is likely in your best interest.

Dig deeper: Generative AI and defamation: What the new reputation threats look like

Working with an ORM specialist

The other (and perhaps easier) route to take is working with an online reputation management specialist. 

These teams are extremely well-versed in handling the multi-layered process of removals.

In an online crisis, they have the tools to respond and mitigate damage. They’re also trained to balance ethical considerations you might not always account for.


How to deliver positive signals to AI systems

Clearer signals make it easier for AI Overviews to present your brand correctly. Focus on the following areas.

Strengthening signals through publishing 

One effective method is strategic publishing.

This means building a strong, positive presence around your company, business, or personal brand so AI Overviews have authoritative information to draw from.

A few approaches support this:

  • Publishing on credible domains: ORM firms often publish content on platforms like Medium, LinkedIn, and reputable industry sites. This strengthens your presence in trusted environments.
  • Employing consistent branding and factual accuracy: Content must also be factual and consistently branded. This reinforces authority and signals reliability.
  • Leveraging press releases and thought leadership: Press releases, thought leadership pieces, and expert commentary help create credible backlinks and citations across the web.
  • Supporting pages that build the narrative: ORM specialists also create supporting pages that reinforce key narratives. With the right linking and content clusters, AI Overviews is more likely to surface this material.

Leveraging structured data and E-E-A-T

Another effective method to establish credibility on AI Overviews is to focus on technical enhancements and experience, expertise, authoritativeness, and trustworthiness (E-E-A-T). 

ORM specialists typically focus on two areas:

  • Structured data and schema markup: This involves adding more context about your brand online (see the sketch after this list) by:
    • Enhancing author bios.
    • Highlighting positive reviews.
    • Reinforcing signals that reflect credibility.
  • Establishing E-E-A-T signals: This includes building a trusted online presence by:
    • Referencing work published in reputable outlets.
    • Highlighting real client examples.
    • Showcasing customer relationships.
    • Outlining accolades and expertise through your bio.
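
As one concrete illustration of the schema point above (all names, URLs, and property choices below are placeholders, not a prescribed template), a Person entity can tie an enhanced author bio to trusted profiles:

```python
import json

# Placeholder Person markup for an enhanced author bio; sameAs links point
# the algorithms at authoritative profiles that corroborate the bio.
person_jsonld = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Doe",
    "jobTitle": "Chief Executive Officer",
    "worksFor": {"@type": "Organization", "name": "Example Co."},
    "sameAs": [
        "https://www.linkedin.com/in/example",
        "https://medium.com/@example",
    ],
}

print(json.dumps(person_jsonld, indent=2))
```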

Monitoring AI Overviews and detecting issues early

A final key aspect of staying on top of AI Overviews is to monitor the algorithm and detect issues early. 

Using tools to track AI Overviews is extremely efficient, and these systems can help business owners monitor keywords and detect potential damage.

For instance, you might use these tools to track your brand name, executive names, or even relevant products.

As discussed, it’s also crucial to have a plan in place in case a crisis ever hits.

This means establishing press outreach contact points and a legal department, and knowing how to suppress content via the suppression methods already mentioned.

Ethical considerations

Online reputation management isn’t just generating think pieces. It’s a layered process grounded in ethical integrity and factual accuracy.

To maintain a truthful and durable strategy, keep the following in mind:

  • Facts matter: Don’t aim to manipulate or deceive. Focus on promoting factual, positive content to AI Overviews.
  • Avoid aggression: Aggressive tactics rarely work in ORM. There’s a balance between over-optimization and under-optimization, and an ORM firm can help you find it.
  • Think long-term: You may want negative or false content removed immediately, but lasting suppression requires a long-term plan to promote positive content year after year.

Managing how AI Overviews presents your brand

AI Overviews is already a dominant part of the search experience.

But its design means negative or false content can still rise to the top.

As AI Overviews become more prominent, business owners need to monitor their online reputation and strengthen the positive signals that surface in these results.

Over time, that requires strategic publishing, long-term planning, the right technical signals, and a commitment to factual, honest content.

By following these principles, AI Overviews can become an asset for growth instead of a source of harm.

82% of marketers fail AI adoption (Positionless Marketing can fix it) by Optimove

Picture a chocolate company with an elaborate recipe, generations old. They ask an AI system to identify which ingredients they could remove to cut costs. The AI suggests one. They remove it. Sales hold steady. They ask again. The AI suggests another. This continues through four or five iterations until they’ve created the cheapest possible version of their product. Fantastic margins, terrible sales. When someone finally tastes it, the verdict is immediate: “This isn’t even chocolate anymore.”

Aly Blawat, senior director of customer strategy at Blain’s Farm & Fleet, shared this story during a recent MarTech webinar to illustrate why 82% of marketing teams are failing at AI adoption: automation without human judgment doesn’t just fail. It compounds failure faster than ever before. And that failure has nothing to do with the technology itself.

The numbers tell the story. In a Forrester study commissioned by Optimove, only 18% of marketers consider themselves at the leading edge of AI adoption, even though nearly 80% expect AI to improve targeting, personalization and optimization. Forrester’s Rusty Warner, VP and principal analyst, puts this in context: only about 25% of marketers worldwide are in production with any AI use cases. Another third are experimenting but haven’t moved to production. That leaves more than 40% still learning about what AI might do for them.

“This particular statistic didn’t really surprise me,” Warner said. “We find that a lot of people that are able to use AI tools at work might be experimenting with them at home, but at work, they’re really waiting for their software vendors to make tools available that have been deemed safe to use and responsible.”

The caution is widespread. IT teams have controls in place for third-party AI tools. Even tech-savvy marketers who experiment at home often can’t access those tools at work until vendors embed responsible AI, data protections and auditability directly into their platforms.

The problem isn’t the AI tools available today. It’s that marketing work is still structured the same way it was before AI existed.

The individual vs. the organization

Individual marketers are thirsty for AI tools. They see the potential immediately. But organizations are fundamentally built for something different: control over brand voice, short-term optimization and manual processes where work passes from insights teams to creative teams to activation teams, each handoff adding days or weeks to cycle time.

Most marketing organizations still operate like an assembly line. Insights come from one door, creative from another, activation from a third. Warner called this out plainly: “Marketing still runs like an assembly line. AI and automation break that model, letting marketers go beyond their position to do more and be more agile.”

The assembly line model is excellent at governance and terrible at speed. By the time results return, they inform the past more than the present. And in a world where customer behavior shifts weekly, that lag becomes fatal.

The solution is “Positionless Marketing,” a model where a single marketer can access data, generate brand-safe creative and launch campaigns with built-in optimization, all without filing tickets or waiting for handoffs. It doesn’t mean eliminating collaboration. It means reserving human collaboration for major launches, holiday campaigns and sensitive topics while enabling marketers to go end-to-end quickly and safely for everything else.

Starting small, building confidence

Blain’s Farm & Fleet, a 120-year-old retail chain, began its AI journey with a specific problem: launching a new brand campaign and needing to adapt tone consistently across channels. They implemented Jasper, a closed system where they could feed their brand tone and messaging without risk.

“We were teaching it a little bit more about us,” Blawat said. “We wanted to show up cohesively across the whole entire ecosystem.”

Warner recommends this approach. “Start small and pick something that you think is going to be a nice quick win to build confidence,” he said. “Audit your data, make sure it’s cleaned up. Your AI is only going to be as good as the data that you’re feeding it.”

The pattern repeats: start with a closed-loop copy tool, then add scripts to clean product data, then layer in segmentation. Each step frees time, shortens cycles, and builds confidence.

Where data meets speed

Marketers aren’t drowning in too little data. They’re drowning in too much data with too little access. The 20% of marketing organizations that move fast centralize definitions of what “active customer,” “at risk,” and “incremental lift” actually mean. And they put those signals where marketers work, not in a separate BI maze.

“There’s massive potential for AI, but success hinges on embracing the change required,” Warner said. “And change is hard because it involves people and their mindset, not just the technology.”

The adoption lag isn’t about technology readiness. It’s about organizational readiness.

Balancing automation and authenticity

Generative AI took off first in low-risk applications: creative support, meeting notes, copy cleanup. Customer-facing decisions remain slower to adopt because brands pay the price for mistakes. The answer is to deploy AI with guardrails in the highest-leverage decisions, prove lift with holdouts and expand methodically.

Blawat emphasized this balance. “We need that human touch on a lot of this stuff to make sure we’re still showing up as genuine and authentic,” she said. “We’re staying true to who our brand is.”

For Blain’s Farm & Fleet, that means maintaining the personal connection customers expect. The AI handles the mechanics of targeting and timing. But humans ensure every message reflects the values and voice customers trust.

The future of marketing work

AI is moving from analysis to execution. When predictive models, generative AI and decisioning engines converge, marketers stop drawing hypothetical journeys and start letting the system assemble unique paths per person.

What changes? Less canvas drawing, more outcome setting. Less reporting theater, more lift by cohort. Fewer meetings, faster iterations.

Warner points to a future that’s closer than most organizations realize. “Imagine a world where I don’t come to your commerce site and browse. Instead, I can just type to a bot what it is I’m looking for. And I expect your brand to be responsive to that.”

That kind of conversational commerce will require everyone in the organization to become a customer experience expert. “It doesn’t matter what channel the customer uses,” Warner explained. “They’re talking to your brand.”

The path forward

There is no AI strategy without an operating model that can use it. The fix requires three fundamental changes: restructure how marketing work flows, measure lift instead of activity and enable marketers to move from idea to execution without handoffs.

The path forward requires discipline. Pick one customer-facing use case with clear financial upside. Define the minimum signals, audiences and KPIs needed. Enforce holdouts by default. Enable direct access to data, creative generation and activation in one place. Publish weekly lift by cohort. Expand only when lift is proven.

Warner expects adoption to accelerate significantly in 2026 as more vendors embed AI capabilities with proper guardrails. For brands like Blain’s Farm & Fleet, that future is already taking shape. They started with copywriting, proved value and are now expanding. The key was finding specific problems where AI could help and measuring whether it actually did.

AI will not fix a slow system. It will amplify it. Teams that modernize the way work gets done and lift the language of decisions will see the promise translate into performance.

As Blawat’s chocolate story reminds us, automation without judgment optimizes for the wrong outcome. The goal isn’t the cheapest product or the fastest campaign. It’s the one that serves customers while building the brand. That requires humans in the loop to point AI in the right direction.

Google tests “Journey Aware Bidding” to optimize Search campaigns

Is it time to rethink your current Google Ads strategy?

Google is preparing a new Search bidding model called Journey Aware Bidding, designed to factor in the entire customer journey — not just the final biddable conversion — to improve prediction accuracy and campaign performance.

How it works:

  • Journey Aware Bidding learns from your primary conversion goal plus additional, non-biddable journey stages.
  • Advertisers who fully track and properly categorize each step of their purchase funnel stand to benefit the most.
  • Google recommends mapping the entire journey — from lead submission to final purchase — and labeling all touchpoints as conversions within standard goals.

Why we care. Performance advertisers have long struggled with fragmented signals across the funnel. Journey Aware Bidding brings more of their conversion funnel into Google’s prediction models, potentially improving efficiency for long, multi-step journeys like lead gen.

Instead of optimizing on a single end-stage signal, Google can learn from every meaningful touchpoint, leading to smarter bids and better alignment with real business outcomes. This update rewards advertisers with strong tracking and could deliver a meaningful performance lift once fully launched.

What advertisers need to do (see the sketch after this list):

  • Choose a single KPI-aligned stage (e.g., purchase, qualified lead) as the optimization target.
  • Mark other journey stages as primary conversions, but exclude them from campaign-level or account-default bidding optimization.
  • Ensure clean tracking and clear categorization of every step.
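
Here’s one way to sketch that setup as data. The stage names and config shape are hypothetical; Google hasn’t published an API for the pilot, so treat this purely as a planning aid:

```python
# Hypothetical map of a lead-gen funnel for Journey Aware Bidding.
# Stage names and the config shape are illustrative, not a Google API.
JOURNEY_STAGES = [
    {"stage": "lead_form_submit", "primary": True, "bidding_target": False},
    {"stage": "qualified_lead",   "primary": True, "bidding_target": True},  # single KPI-aligned goal
    {"stage": "closed_purchase",  "primary": True, "bidding_target": False},
]

# Sanity check: every stage is tracked as a primary conversion, but exactly
# one stage is the campaign's bidding optimization target.
targets = [s for s in JOURNEY_STAGES if s["bidding_target"]]
assert len(targets) == 1, "Choose a single KPI-aligned stage as the optimization target"
```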

Pilot status. A closed pilot is due to launch this year for a small group of advertisers, with broader availability expected afterward as Google refines the model.

The bottom line. Journey Aware Bidding could represent a major shift in Search optimization: Google wants its bidding systems to understand not just what converts — but how users get there.

First seen. The details of this new bidding model were shared by Senior Consultant Georgi Zayakov on LinkedIn, among other products featured at Think Week 2025.

Minimizing Marketing Blind Spots: The New Era of Attribution

Attribution in the modern marketing age can be confusing. But the pressure on marketing teams to “prove what’s working” never goes away. 

Traditionally, marketers had certain data to rely on, but the data pool we can pull from seems to be growing and shrinking at the same time. Between privacy constraints, zero-click searches, AI Overviews, and channel walled gardens, marketers are flying blind in more ways than they realize. Attribution has always been an imperfect science. And in 2025, it’s gone from fuzzy to fragmented.

If you’re planning marketing budgets and trying to defend where your spend is going, there’s no need to freak out. Marketing attribution is possible. It doesn’t look like it used to, though. And if you’re still only relying on touch-based models or last-click reports, you might be measuring the wrong things entirely.

Let’s break down where attribution is failing, what’s making it harder, and what forward-looking marketers are doing to close the gap.

Key Takeaways

  • Attribution challenges have multiplied due to AI, automation, and privacy shifts.
  • Walled gardens, offline sales, and dark social are major blind spots, and they often overlap.
  • Deterministic, touch-based attribution is giving way to modeled and probabilistic methods.
  • AI isn’t just the problem, it’s also part of the solution.
  • You don’t need perfect data. You need data that helps you make better decisions.

The New Face of Attribution

Attribution used to be about stitching together clicks. Now, we’re lucky if we get clicks at all, thanks to zero-click search.

Today’s buyers bounce between different platforms on multiple devices and AI-curated content. They’re influenced by ads on a connected TV or product mentions in a ChatGPT thread, and neither of those leaves a clean digital trail.

Meanwhile, ad platforms like Meta and Google have leaned hard into automation. That means fewer transparent levers to optimize and more “black box” performance metrics. According to NP Digital analysis, there are over 90% fewer optimization permutations in Google and Meta Ads today compared to 2023. So yes, marketing attribution is back. But the infrastructure around it seems more broken than ever.

A graphic explaining the collapse of optimization levers.

Finding Marketing Blind Spots

Unfortunately, the reality is that attribution blind spots don’t come with a warning light. You may be staring directly at your dashboard and not notice traffic is piling up in areas you’re not tracking. And the number of potential blind spots is growing.

Here are the big ones:

  • Walled Gardens: Platforms like Google, Meta, and Amazon are all powerful, but have become much more mysterious as search evolves. You’re renting their space, but if you don’t play by their rules, you may not get complete visibility.
  • Offline Sales: Leads turn into deals in CRMs, call centers, or retail. They may have started as a click, but the customer journey ends at a brick-and-mortar location or an entirely different platform than the original click.
  • Cross-Device Journeys: That ad someone saw on mobile might convert from their phone, but they could just as easily become a sale on their desktop or smart TV.
  • Building Awareness: Upper funnel spend (like digital out-of-home (OOH) or video) gets undervalued because it rarely leads to a direct conversion.
  • Dark Social: Private sharing (think WhatsApp, SMS, Signal) shows up in attribution models as “direct”, but it’s not.
  • LLM Traffic: People are discovering brands via large language models, and those referrals are often invisible in GA4.

To make matters worse, these blind spots can stack. Before you know it, you find yourself in a nightmare marketing scenario where you’re not just missing one data signal, you’re missing combinations of them, making optimization even harder.

A graphic that explains how multiple marketing blind spots can pile up.

New Attribution Trends and Technology

You can keep up with all of this. It just requires a switch in perspective. Marketers should evaluate their campaigns using a combination of modeled attribution and traditional touch-based metrics. You may never fully connect every dot, and that’s okay. The goal isn’t perfection, just enough clarity to defend marketing budget allocations.

Modern marketers are using these tools:

  • Incrementality testing: Geo holdouts and lift studies to isolate what’s actually moving the needle (see the sketch below).
  • MMM (Marketing Mix Modeling): Especially useful for larger budgets or mixed channel strategies.
  • Correlation analysis: Pre/post testing, contextual lift, and even proxy signals like brand search volume.
  • Unified first-party data: Clean, consistent CRM and web data feeding both your models and your platforms.

The best strategies blend these methods based on spend level, complexity, and conversion volume. Leveraging AI in your marketing efforts is one of the best ways to automate this research as much as possible and maximize the benefit of these tactics. 
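
For incrementality testing in particular, the underlying arithmetic is simple. Here’s a minimal geo-holdout sketch with made-up numbers:

```python
# Toy geo-holdout lift calculation; all numbers are made up.
exposed_conversions = 1_240  # geos where the campaign ran
holdout_conversions = 1_050  # matched geos where it was withheld
spend = 25_000.00

incremental = exposed_conversions - holdout_conversions
lift_pct = incremental / holdout_conversions * 100
cost_per_incremental = spend / incremental

print(f"Incremental conversions: {incremental}")                        # 190
print(f"Lift: {lift_pct:.1f}%")                                         # 18.1%
print(f"Cost per incremental conversion: ${cost_per_incremental:.2f}")  # $131.58
```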

AI and Blind Spots

Some marketers may feel like AI is eroding attribution. While that could be true, the technology is also helping to rebuild it.

Here’s how AI is stepping in:

  • Generative AI: LLMs like ChatGPT are now discovery platforms. They drive traffic, but don’t always identify themselves unless you tag them.
  • AI coworkers: Agentic AI simulates user behavior, tests messaging, and can even help set up GA4 tracking automatically.
  • Machine learning models: Used in MMMs and platform attribution to refine forecasts, assign contribution, and make predictions.

Still, only 55% of marketers trust AI-generated insights, according to CoSchedule. The key is to treat AI as an assistant, not the authority. Use it to speed up testing and build models, but validate with your own data.

A graphic that explains how to introduce GenAI into reporting workflows.

Analytics platforms like Adobe Analytics are also taking steps to better capture attribution from AI tools. In October, they released a new referrer type called “Conversational AI Tools” to separate traffic from ChatGPT and other LLMs from the channels marketers have historically monitored.

Closing The Gap With Attribution Strategies

So, how do you go from blind spots to better planning? You don’t need perfect clarity. You need consistent signals and a smarter strategy.

Here are some ways marketers are closing attribution gaps:

  1. Clean your first-party data: Data from internal sources like your website and CRM needs to be trustworthy. These are your most important sources of truth.
  2. Use multipliers: Adjust performance based on geo lift or experiment results. Not every click counts equally.
  3. Invite questions: Models are approximations. Encourage teams to challenge them and make improvements as time goes on.
  4. Survey your customers: Ask where they heard about you. It’s old school, but incredibly effective for context.
  5. Use offer codes and landing pages: Even if imperfect, they create trackable signals across dark social and offline channels.
  6. Track “AI Referrers”: Create custom channels in your web analytics, including in GA4, to segment out performance from LLM-driven traffic (see the sketch after this list).
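
For item 6, here’s a minimal sketch of how you might bucket referrer domains into an “AI Referrers” channel before loading traffic data into a report. The domain list is an assumption, maintained purely for illustration; keep your own list current as new tools appear.

  import re

  # Hypothetical referrer domains to group as "AI Referrers" -- this
  # list is illustrative only, not exhaustive.
  AI_REFERRERS = re.compile(
      r"chatgpt\.com|chat\.openai\.com|perplexity\.ai|"
      r"gemini\.google\.com|copilot\.microsoft\.com|claude\.ai",
      re.IGNORECASE,
  )

  def classify_channel(referrer: str) -> str:
      """Bucket a raw referrer string into a reporting channel."""
      if not referrer:
          return "Direct"
      if AI_REFERRERS.search(referrer):
          return "AI Referrers"
      return "Other"

  print(classify_channel("https://chatgpt.com/"))        # AI Referrers
  print(classify_channel("https://news.example.com/"))   # Other

A variant of the same pattern can usually be reused as the matching condition in a GA4 custom channel group, so your reports and your scripts stay in sync.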

Linking Attribution To Business Outcomes

Attribution and business outcomes go hand-in-hand. Understanding where your most profitable leads originate is essential to growing any business, regardless of its size.

A graphic explaining savings attributed to fixing attribution.

You want to connect your data to actual decisions, such as forecasts, budgets, and resource allocation. But, with the marketing landscape changing so quickly and drastically, how do you know which metrics to follow?

Here are the metrics that matter now:

  • Total conversions and incremental conversions
  • Conversion value over time
  • Cost per incremental conversion
  • Spend thresholds by tactic
  • Directional change (old model vs. new; see the sketch after this list)
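
As a quick illustration of that last metric, here’s a minimal sketch of a directional read between an old and a new attribution model. The channel shares are hypothetical.

  # Minimal sketch: a directional read of old model vs. new.
  # The channel shares are hypothetical.
  old_model = {"Paid Search": 0.45, "Social": 0.20, "Email": 0.35}
  new_model = {"Paid Search": 0.30, "Social": 0.35, "Email": 0.35}

  for channel, old_share in old_model.items():
      new_share = new_model[channel]
      direction = "up" if new_share > old_share else "down" if new_share < old_share else "flat"
      print(f"{channel}: {old_share:.0%} -> {new_share:.0%} ({direction})")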

Remember: even if your models aren’t perfect, if they get you closer to optimal spend, it’s working. Continuous improvement for your attribution strategy will get you closer and closer still.

A graphic explaining the value of continuous improvement for marketing attribution.

FAQs

What is a marketing attribution blind spot?

It’s any part of the customer journey you can’t track, like dark social shares, offline sales, or LLM referrals that may be influencing conversions without showing up in your data.

Can AI help with attribution?

Yes, but only if used smartly. AI can simulate behavior and identify patterns, but it’s not a silver bullet. Use it to complement your experiments and first-party data.

What’s the best attribution model?

There isn’t one. The most effective models mix touch-based data with testing and contextual clues. Choose based on your business size, channel mix, and data maturity.

Conclusion

When it comes to effective attribution, you just need to see enough to move forward.

Mastering this skill in the modern marketing world is less about getting the credit right and more about making smarter calls with what you can measure. The key is to stop chasing perfection and start building a system that helps you plan and adapt in real time to the data your testing generates. Attribution isn’t the whole picture, but it remains the best tool we have to illuminate the path forward, including its blind spots.

Naturally, we can still learn from tried-and-true marketing methods. We may just have to think outside the box on how to apply them to today’s search environment and customer journey. It’s worth checking out our guides on which marketing campaigns drive the best impact and how to track your marketing ROI. Combining this extra knowledge with your new attribution perspective could be the secret sauce that puts you ahead of the pack in 2026.

Read more at Read More

Google offers a “less disruptive” fix to EU ad-tech showdown

Google submitted a compliance plan to the European Commission that proposes changes to its ad-tech operations — but rejects calls to break up its business.

How it works:

  • Google is offering product-level changes — for example, giving publishers the ability to set different minimum prices for different bidders in Google Ad Manager.
  • It’s also proposing greater interoperability between Google’s tools and those of rivals, in order to give publishers and advertisers more flexibility.
  • The company says these tweaks would resolve the European Commission’s concerns without a “disruptive break-up.”

Why we care. Google’s proposed “non-disruptive” fixes could preserve platform stability and avoid the turbulence of a forced breakup — but they may also shape future auction dynamics, pricing transparency, and access to competitive tools. In short, the outcome will influence how much control, choice, and cost efficiency advertisers have in Europe’s ad ecosystem.

Between the lines. Google is leaning on technical fixes rather than major structural overhaul — but critics argue that without deeper reform, the power dynamics in ad tech may not fundamentally shift.

The bottom line. Google is trying to strike a compromise: addressing the EU’s antitrust concerns while keeping its integrated ad-tech business intact. Regulators now face a choice: accept the tweaks — or push harder for a breakup.

Dig Deeper. EU fines Google $3.5 billion over anti-competitive ad-tech business

Read more at Read More

Small tests to yield big answers on what influences LLMs

Undoubtedly, one of the hot topics in SEO over the last few months has been how to influence LLM answers. Every SEO is trying to come up with strategies. Many have created their own tools using “vibe coding,” where they test their hypotheses and engage in heated debates about what each LLM and Google use to pick their sources.

Some of these debates can get very technical, touching on topics like vector embeddings, passage ranking, retrieval-augmented generation (RAG), and chunking. These theories are great; there’s a lot to learn from them and put into practice.

However, if some of these AI concepts are going way over your head, let’s take a step back. I’ll walk you through some recent tests I’ve run to help you gain an understanding of what’s going on in AI search without feeling overwhelmed so you can start optimizing for these new platforms.

Create branded content and check for results

A while ago, I went to Austin, Texas, for a business outing. Before the trip, I wondered if I could “teach” ChatGPT about my upcoming travels. There was no public information about the trip on the web, so it was a completely clean test with no competition.

I asked ChatGPT, “Is Gus Pelogia going to Austin soon?” The initial answer was what you’d expect: He doesn’t have any trips planned to Austin.

That same day, a few hours later, I wrote a blog post on my website about my trip to Austin. Six hours after I published the post, ChatGPT’s answer changed: Yes, Gus IS going to Austin to meet his work colleagues.

ChatGPT prompts with a blog post published in between queries, which was enough to change a ChatGPT answer.

ChatGPT used an AI technique called RAG (Retrieval-Augmented Generation) to fetch the latest result. Basically, it didn’t have enough knowledge about this topic in its training data, so it searched the web for an up-to-date answer.
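
For readers newer to the concept, here’s a minimal, hypothetical sketch of that loop in Python: retrieve fresh documents first, then let the model answer with them in its context. Both helper functions are stubs, not a real search or model API.

  # Minimal sketch of the RAG loop: retrieve, augment, generate.
  # search_web() and llm_complete() are stubs standing in for a real
  # search index and a real model call.

  def search_web(query: str, max_results: int = 3) -> list[dict]:
      # Stub: a real system would query a live search index here.
      return [{"snippet": f"(result {i} for: {query})"} for i in range(max_results)]

  def llm_complete(prompt: str) -> str:
      # Stub: a real system would send the prompt to a language model here.
      return f"(completion grounded in {prompt.count('result')} retrieved snippets)"

  def answer_with_rag(question: str) -> str:
      documents = search_web(question)                      # 1. retrieve
      context = "\n".join(d["snippet"] for d in documents)  # 2. augment
      prompt = f"Context:\n{context}\n\nQuestion: {question}"
      return llm_complete(prompt)                           # 3. generate

  print(answer_with_rag("Is Gus Pelogia going to Austin soon?"))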

Interestingly enough, it took a few days until ChatGPT found the actual blog post with the detailed information. Initially, ChatGPT had found a snippet of the new blog post on my homepage, which it had reindexed within that six-hour window. It was using just the blog post’s page title to change its answer before actually “seeing” the whole content days later.

Some learnings from this experiment:

  • New information on webpages reaches ChatGPT answers in a matter of hours, even for small websites. Don’t think your website is too small or insignificant to get noticed by LLMs—they’ll notice when you add new content or refresh existing pages, so it’s important to have an ongoing brand content strategy.
  • The answers in ChatGPT are highly dependent on the content published on your website. This is especially true for new companies where there are limited sources of information. ChatGPT didn’t confirm that I had upcoming travel until it fetched the information from my blog post detailing the trip.
  • Use your webpages to optimize how your brand is portrayed beyond showing up in competitive keywords for search. This is your opportunity to promote a certain USP or brand tagline. For instance, “The Leading AI-Powered Marketing Platform” and “See everyday moments from your close friends” are used, respectively, by Semrush and Instagram on their homepages. While users probably aren’t searching for these keywords, it’s still an opportunity for brand positioning that will resonate with them.

Test to see if ChatGPT is using Bing or Google’s index

The industry has been ringing alarm bells about whether ChatGPT uses Google’s index instead of Bing’s. So I ran another small test to find out: I added a <meta name="googlebot" content="noindex"> tag to the blog post, blocking Googlebot while still allowing Bingbot, and left it in place for nine days.

If ChatGPT is using Bing’s index, it should find my new page when I prompt about it. Again, this was on a new topic and the prompt specifically asked for an article I wrote, so there wouldn’t be any doubts about what source to show.

The page got indexed by Bing after a couple of days, while Google wasn’t allowed to see it.

New article has been indexed by Bingbot

I kept asking ChatGPT, with multiple prompt variations, if it could find my new article. For nine days, nothing changed—it couldn’t find the article. It got to the point where ChatGPT hallucinated a URL (really, its best guess).

ChatGPT made-up URL: https://www.guspelogia.com/learnings-from-building-a-new-product-as-an-seo
Real URL: https://www.guspelogia.com/learnings-new-product-seo

GSC shows that it can’t index the page due to the “noindex” tag

I eventually gave up and allowed Googlebot to index the page. A few hours later, ChatGPT changed its answer and found the correct URL.

On the top, ChatGPT’s answer when Googlebot was blocked. On the bottom, ChatGPT’s answer after Googlebot was allowed to see the page.

Interestingly enough, the link to the article was present on my homepage and blog pages, yet ChatGPT couldn’t surface it. It knew the blog post existed from the text on those pages, but it didn’t follow the link.

Yet, there’s no harm in setting up your website for success on Bing. Bing is one of the search engines that has adopted IndexNow, a simple ping that informs search engines that a URL’s content has changed. This implementation allows Bing to reflect updates in its search results quickly.

While we all suspect (with evidence) that ChatGPT isn’t using Bing’s index, setting up IndexNow is a low-effort task that’s worthwhile.
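
Here’s a minimal sketch of what that ping looks like using Python’s requests library. The host, key, and URL are placeholders; per the IndexNow protocol, the key must also be served as a plain-text file at the keyLocation address on your own domain.

  import requests

  # Minimal sketch of an IndexNow ping. Host, key, and URLs are
  # placeholders to replace with your own values.
  payload = {
      "host": "www.example.com",
      "key": "your-indexnow-key",
      "keyLocation": "https://www.example.com/your-indexnow-key.txt",
      "urlList": ["https://www.example.com/updated-post"],
  }

  response = requests.post("https://api.indexnow.org/indexnow", json=payload, timeout=10)
  print(response.status_code)  # 200 or 202 means the ping was accepted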

Change the content on a source used by RAG

Clicks are becoming less important. Instead, being mentioned in sources like Google’s AI Mode is emerging as a new KPI for marketing teams. SEOs are testing multiple tactics to “convince” LLMs about a topic, from publishing on LinkedIn Pulse to controlled experiments with expired domains and hacked sites. In some ways, it feels like old-school SEO is back.

We’re all talking about being included in AI search results, but what happens when a company or product loses a mention on a page? Imagine a specific model of earbuds is removed from a “top budget earbuds” list—would the product lose its mention, or would Google find a new source to back up its AI answer? 

While the answer could always be different for each user and each situation, I ran another small test to find out.

In a listicle that mentioned multiple certification courses, I identified one course that was no longer relevant, so I removed mentions of it from multiple pages on the same domain. I did this to keep the content relevant, so measuring the changes in AI Mode was a side effect.

Initially, within the first few days of the course being removed from the cited URL, it continued to be part of the AI answer for a few pre-determined prompts. Google simply found a new URL on another domain to validate its initial view.

However, within a week, the course disappeared from AI Mode and ChatGPT completely. Even though Google had found another URL validating the course listing, once the “original source” (in this case, the listicle) was updated to remove the course, Google (and, by extension, ChatGPT) updated its results as well.

This experiment suggests that changing the content on the source cited by LLMs can impact the AI results. But take this conclusion with a pinch of salt, as it was a small test with a highly targeted query. I specifically had a prompt combining “domain + courses” so the answer would come from one domain.

Nonetheless, while in the real world it’s unlikely one citation URL would hold all the power, I’d hypothesize that losing a mention on a few high-authority pages would have the side effect of losing the mention in an AI answer.

Test small, then scale

Tests in small and controlled environments are important for learning and give you confidence that your optimization has an effect. Like everything else I do in SEO, I start with an MVP (Minimum Viable Product), learn along the way, and, once evidence is found, make changes at scale.

Do you want to change the perception of a product on ChatGPT? You won’t get dozens of cited sources talking about you straight away, so you’d have to reach out to each individual source and request a mention. You’ll quickly learn how hard it is to convince these sources to update their content, and whether AI optimization becomes a pay-to-play game or can be done organically.

Perhaps you’re a source that’s mentioned often when people search for a product, like earbuds. Run your MVPs to understand how much changing your content influences AI answers before you claim your influence at scale, as the changes you make could backfire. For example, what if you stop being a source for a topic due to removing certain claims from your pages?

There’s no set time for these tests to show results. As a general rule, SEOs say results take a few months to appear; in the first test in this article, it took just a few hours.

Running LLM tests with larger websites

Working in large teams or on large websites can be a challenge when doing LLM testing. My suggestion is to create specific initiatives and inform all stakeholders about changes to avoid confusion later, as they might question why these changes are happening.

One simple but effective test done by SEER Interactive was to update their footer tagline.

  • From: Remote-first, Philadelphia-founded
  • To: 130+ Enterprise Clients, 97% Retention Rate 

After the footer change, ChatGPT 5 started mentioning the new tagline within 36 hours for prompts like “tell me about Seer Interactive.” I’ve checked since, and while the answer differs each time, it still mentions the “97% retention rate.”

Imagine if you decide to change the content on a number of pages, but someone else has an optimization plan for those same pages. Always run just one test per page, as results will become less reliable if you have multiple variables.

Make sure to research your prompts, have a tracking methodology, and spread the learnings across the company, beyond your SEO counterparts. Everyone is interested in AI right now, all the way up to C-levels.

Another suggestion is to use a tool like Semrush’s AI SEO toolkit to see the key sentiment drivers about a brand. Start with the listed “Areas for Improvement”—this should give you plenty of ideas for tests beyond “SEO Reason,” as it reflects how the brand is perceived beyond organic results.

Checklist: Getting started with LLM optimization

Things are changing fast with AI, and it’s certainly challenging to stay up to date. There’s an overload of content right now, a multitude of claims, and, dare I say, not even the LLM platforms running these tools have things fully figured out.

My recommendation is to find the sources you trust (industry news, events, professionals) and run your own tests using the knowledge you have. The results you find for your brands and clients are always more valuable than what others are saying.

It’s a new world of SEO and everyone is trying to figure out what works for them. The best way to follow the curve (or stay ahead of it) is to keep optimizing and documenting your changes.

To wrap it up, here’s a checklist for your LLM optimization:

  • Before starting a test, make sure your selected prompts consistently return the answer you expect (such as not mentioning your brand or a feature of your product); otherwise, a new brand mention or link could be a coincidence, not a result of your work. A baseline sketch follows this checklist.
  • If the same claim is made on multiple pages of your website, update them across the board to increase your chances of success.
  • Use your own website and external sources (e.g., via digital PR) to influence your brand perception. It’s unclear if users will cross-check AI answers or just trust what they’re told.
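
For that first checklist point, a lightweight baseline check might look like the sketch below: run the same prompt several times before changing anything and record how often the brand already appears. The ask_llm function is a stub for whichever model API you actually use, and the prompt and brand name are made up.

  # Minimal sketch: baseline how often a prompt already mentions your
  # brand before you run an optimization test. ask_llm() is a stub.

  def ask_llm(prompt: str) -> str:
      # Stub: replace with a real call to your model of choice.
      return "(model answer goes here)"

  def baseline_mention_rate(prompt: str, brand: str, runs: int = 10) -> float:
      hits = sum(brand.lower() in ask_llm(prompt).lower() for _ in range(runs))
      return hits / runs

  rate = baseline_mention_rate("best budget earbuds", "AcmeBuds")
  print(f"Brand mentioned in {rate:.0%} of baseline answers")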

Read more at Read More