LLM Optimization (LLMO): How to Rank in AI-Driven Search

You’re not alone if you’ve noticed your organic traffic dipping while your content continues to rank. And you’re not imagining it. Increasingly, people skip clicking through to websites and get answers to their questions straight from AI platforms like ChatGPT, Perplexity, or Google’s AI Overviews.

Welcome to the new reality, where AI reshapes how users search and brands that fail to adapt risk fading from the conversation. 

How do we deal with this? LLM optimization (LLMO). 

LLMO isn’t a furry red puppet from a kid’s TV show. Nor is it just another SEO tactic. It’s the next evolution in search visibility, one designed to help your brand show up when large language models (LLMs) generate answers instead of serving up traditional search results.

The good news is that most companies aren’t currently doing it, and that’s an edge you can use to your advantage.

Below, we’ll explore how LLMO works, why it matters, and concrete strategies you can use to get your brand into AI-generated answers before your competitors.

Key Takeaways

  • LLM optimization (LLMO) is the evolution of SEO. It focuses on getting your brand cited and recommended inside AI answers, not just ranked on traditional search results pages.
  • Ignoring LLMO means lost visibility. Even if your rankings stay strong, AI-generated answers can push you out of the conversation.
  • Three pillars drive LLMO success: authoritative content (E-E-A-T), structured data (schema, FAQs, HowTos), and consistent tracking of AI citations.
  • Winning early matters. Most brands have yet to optimize for AI, so moving now gives you a competitive edge.
  • Think beyond Google. AI models pull from many sources, so digital PR, backlinks, and multi-format content across trusted spaces all boost your chances of being included in answers.

What is LLM Optimization?

LLMO is the practice of increasing your brand’s visibility in AI-generated answers from large language models like Gemini, Perplexity, Claude, and ChatGPT. You can think of it as the next evolution of SEO.

Traditional SEO helps you rank in search engine results. LLMO helps you get cited, mentioned, and recommended inside AI responses. Instead of blue links on a SERP, these are full-text answers where being included often means you’re the answer.

So, what makes this different from LLM SEO?

LLM SEO typically focuses on targeting AI Overviews or how LLMs pull from search engine results. LLMO goes broader. It focuses on structuring content, strengthening brand authority, and ensuring visibility across any LLM platform, not just Google’s.

Rather than focusing solely on ranking highly, LLMO is about showing up even when users don’t click.

AI output for the query "what are the best backpacks for work?"

Perplexity’s results when asked for the best backpacks for work.

ChatGPT output for "What are the best backpacks for work?"

ChatGPT’s recommendations for the same question.

How LLMs Work

LLMs don’t search the web in real time (unless they use retrieval methods). Instead, they generate responses based on patterns in their training data, which comprises billions of words from sources such as websites, books, Wikipedia, Reddit, and more.

Here’s how it works: When you type a prompt, the LLM predicts the most likely next word based on everything it’s seen before. That prediction continues word-by-word until it builds a full response.
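To make that prediction loop concrete, here’s a toy Python sketch. It is not a real LLM; simple word-frequency counts stand in for billions of learned parameters, but the generation mechanics are the same: pick the most likely next word, append it, repeat.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word tends to follow each word in the training text."""
    following = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def generate(following, start, length=5):
    """Repeatedly pick the most likely next word, one word at a time."""
    out = [start]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break  # no known continuation; a real LLM rarely hits this
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

# Tiny hypothetical "training data" for illustration.
model = train_bigrams("the cat sat on the mat and the cat slept")
print(generate(model, "the", length=3))
```

Scaled up to trillions of words of training text and far richer context than a single preceding word, this is why clear, well-structured, frequently cited content is more likely to be reproduced in an answer: it shapes the patterns the model learns.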

What makes this a big deal for marketers?

LLMs favor content that’s:

  • Clear and easy to understand
  • Well-structured and logically organized
  • Fact-based
  • Published or associated with trusted sources 

If your content meets these standards (and exists in places LLMs train on), it has a higher chance of showing up in those responses. The goal is no longer to rank in search alone, but to be seen as a reliable part of the internet’s knowledge base.

Bottom line: if your content isn’t clear, structured, and published in trusted places, LLMs won’t see you as credible.

The Impact of LLMs On How We Gather Information

LLMs have changed how people search.

Instead of relying on ten blue links or blog posts for information, users ask questions and get complete answers without leaving the AI experience or SERP. That shift creates even more “zero-click” moments, where users don’t visit your site because the AI already gave them the needed answer.

That’s a big deal if your brand relies on traffic. You could be the best at what you do, but users may never know (or forget) you exist if you’re not part of the AI-generated answer.

That means the rules have changed. Visibility now depends on whether LLMs see and trust your content; failing to actively optimize for that means you’re already falling behind.

Why LLM Optimization is Important

If you’ve relied on traditional SEO alone, you’ve seen the warning signs: traffic dropping even though rankings haven’t moved. Users aren’t clicking. They’re getting their answers straight from AI. How many? Estimates vary, but ChatGPT boasts more than 700 million weekly active users worldwide, and Perplexity had 22 million active users in May 2025.

Marketers who ignore LLMO risk losing visibility. Your brand may have great rankings, backlinks, and content, but if LLMs don’t include you in their answers, you’re no longer in the conversation. And that means fewer impressions, clicks, or opportunities to win customers.

There’s a flipside, though. Marketers who adapt today get an advantage over their competitors. LLMs reward trustworthy, structured content that speaks with authority. When you optimize for AI-driven search, you position your brand to appear where people make decisions: inside the answers they read, not just on the links they skip.

The TL;DR? LLMO is the new baseline for staying visible in an AI-first search reality.

How to Optimize for LLMs

LLMO comes down to three pillars:

  • Creating authoritative content
  • Structuring content so AI can understand it
  • Tracking your brand’s presence in AI responses

Nail these three, and you’re on your way to AI-driven visibility. But how do you do that? 

Create Content LLMs Trust

LLMs look for reliable content. That means well-cited, comprehensive content written by people (or brands) who clearly know their stuff. This concept should feel familiar. In SEO terms, we describe it as E-E-A-T: experience, expertise, authoritativeness, and trustworthiness.

For example, a medical publisher that cites peer-reviewed studies and has licensed doctors writing its content will be treated by Google and AI models as more trustworthy than a generic health and wellness blog.

AI results for "Which is better for a headache, Tylenol or ibuprofen?"

Perplexity sources information from reputable organizations like the Cleveland Clinic and Nature to answer this question.

Your goal is the same. Back up your claims with relevant, recent stats. Link to reputable sources. Build depth into your content. The more proof points you provide, the more likely LLMs will pull your information into their responses.

Use Structured Data and Schema

LLMs thrive on structure. Schema markup helps you present content in a way that AI systems can easily recognize and cite. We’ve been talking about the benefits of schema for years, but focus on practical formats that are easy to implement, such as FAQ, HowTo, and Product markup.

Implementing schema isn’t complicated, either. Tools like Rank Math or Yoast often make it as easy as filling out a form. The payoff is that your content becomes easier for AI to parse, increasing your odds of being referenced in the outputs.

Schema gives LLMs a cheat sheet to your content by telling them exactly what’s on the page and why it matters.
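If you’d rather generate the markup yourself than use a plugin, here’s a minimal Python sketch that builds FAQ schema as JSON-LD. The question and answer text are hypothetical placeholders; swap in your real FAQ content.

```python
import json

def build_faq_schema(faqs):
    """Build a JSON-LD FAQPage object from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in faqs
        ],
    }

# Hypothetical example; replace with your page's actual FAQs.
markup = build_faq_schema([
    ("What is LLMO?",
     "LLMO is the practice of optimizing content for AI-generated answers."),
])

# Embed the output in your page inside:
# <script type="application/ld+json"> ... </script>
print(json.dumps(markup, indent=2))
```

The output goes in a `<script type="application/ld+json">` tag in your page’s HTML, which is exactly the "cheat sheet" format crawlers and AI systems can parse without guessing at your layout.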

Optimize for Conversational and Long-Tail Queries

Unlike search engines, which primarily reward keywords, LLMs excel at answering natural, human-style questions. That’s why your content should target long-tail and conversational phrases.

Here’s how to adapt:

  • Pull inspiration from the “People Also Ask” results, Reddit threads, and Quora discussions. Read the titles of posts and questions on enthusiast or product-specific forums and subreddits, and create content to answer them.
  • Frame subheadings as real questions. Instead of “LLMO Strategy,” try “How do you optimize for LLMs?”
  • Expand your FAQs with the same language your audience uses.

People also ask responses in Google.

The People Also Ask box on Google’s SERP provides excellent questions to think about answering, if you haven’t already.

Let’s say someone wants to know more about this topic. The keyword AI brand optimization (boring, dry) could become “How do I make my brand visible in AI search?” That’s the kind of phrasing LLMs are built to surface.

When you align your content to how people naturally ask questions, you increase your odds of citation inside answers instead of being skipped over.

Build Topical Authority Across Clusters

One-off articles won’t be enough to establish authority. Both LLMs and search engines are better at recognizing brands that demonstrate expertise across a subject, not just a single page. Topic clusters are the way to meet this demand.

Topic clusters connect one in-depth “pillar” page to multiple related posts. For example, a pillar page might target LLM optimization, while cluster posts examine topics like schema, E-E-A-T, AI metrics, and long-tail queries (all of which we’ve mentioned—or will mention—in this post). 

Each post links back to the pillar and the others, creating a web of authority. That signals to LLMs (and Google) that your brand owns the topic, not just a slice of it. The more complete your coverage, the more likely it is your content will surface in AI-generated answers.

Earn High-Authority Backlinks and Mentions

LLMs trust what the internet trusts. That means your brand needs backlinks and mentions from credible sources. Three major ways to earn backlinks include:

  • Digital PR: Pitch stories or data insights to journalists.
  • Original research: Publish statistics or case studies that others naturally cite.
  • Guest contributions: Share your expertise on authoritative sites in your industry.

Don’t stop there, though. Regularly audit your backlink profile to clean out low-quality or spammy links. The more respected the websites referencing your brand, the more credibility you build, and the more likely your brand becomes part of those AI-driven conversations.

Implement Multi-Format Content

LLMs love clarity; the easier your content is to scan and summarize, the higher the chance it gets used. Even better, many of the same tactics that make it simpler for readers to parse are good for LLMs, too. Some practical tips for your content include:

  • Use bullet points and numbered steps for key processes.
  • Add tables to organize comparisons or data.
  • Include visuals such as screenshots, annotated images, or infographics (complete with alt text).

Why do these things work? Structured, multi-format content gives AI models more “hooks” to grab onto. Instead of parsing dense paragraphs, they can quickly identify and cite your answers. Don’t think of it as writing for AI. Think of it as making your content friendlier: clear, structured, and easy to reuse.

Monitor AI-Specific Citations

You can’t improve what you don’t track. AI visibility is now a critical KPI. You can monitor it both manually and with reporting tools. Start by asking the LLM platforms questions about your search terms and content, and see where you (or your competitors) appear. With that knowledge, you can adjust content and regularly recheck it.

Of course, manual work can take a lot of time. Tools like Semrush’s AI Tracking, Ubersuggest LLM Beta, and Ahrefs Brand Radar let you see how often AI platforms cite your answers. Look for the following elements as part of your regular reporting:

  • Branded mentions inside chat responses
  • Citations for specific queries
  • Share of voice compared to competitors
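If you’re tracking manually before investing in a tool, a simple script can handle the counting. Here’s a minimal sketch: given AI responses you’ve saved for your target queries, it tallies branded mentions and computes a rough share of voice. The brand names and sample answers are hypothetical placeholders.

```python
import re

def share_of_voice(responses, brands):
    """Count case-insensitive mentions of each brand across saved AI
    responses and return each brand's share of all mentions."""
    counts = {brand: 0 for brand in brands}
    for text in responses:
        for brand in brands:
            counts[brand] += len(re.findall(re.escape(brand), text, re.IGNORECASE))
    total = sum(counts.values()) or 1  # avoid division by zero
    return {brand: count / total for brand, count in counts.items()}

# Hypothetical AI answers collected for your target queries.
answers = [
    "For schema tools, many marketers use Rank Math or Yoast.",
    "Yoast makes FAQ schema as easy as filling out a form.",
]
print(share_of_voice(answers, ["Rank Math", "Yoast"]))
```

Run this against the same set of queries each month and the trend line becomes your feedback loop: rising share of voice means your authority-building is landing.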

These insights reveal content gaps and help guide your next moves. For example, if competitors are being cited for a topic you cover but you’re not cited, that’s your cue to strengthen authority or update your content.

Tracking AI citations is the feedback loop to keep your LLMO strategy moving forward.

Ahrefs' Brand Radar.

Ahrefs’ Brand Radar shows mentions and impressions for the most popular AI dashboards.

Search Everywhere Optimization and LLMO

Search is no longer confined to Google. Users today find their answers on social media, Reddit, YouTube, and AI platforms. Search Everywhere Optimization ties directly into LLMO.

When you optimize for visibility across all platforms, you create more entry points for LLMs to pull from. When your brand is active in multiple trusted spaces, you’re far more likely to be included in AI answers.

How To Track LLM Visibility

You can’t treat LLMO like traditional SEO unless you know where you’re showing up. Tracking AI visibility allows you to measure progress, spot gaps, and benchmark against your competitors. So, what should you measure?

  1. Branded Mentions in AI Responses: Check how often your brand name or content appears in outputs from ChatGPT, Perplexity, Gemini, and Claude, among others. Seek out both direct mentions and co-citations with your competitors.
  2. Topic-Level Inclusion: Search AI models for industry-specific queries. If competitors are cited but you aren’t, that’s a red flag.
  3. Traffic from LLMs: Tools like GA4 can help you track referral traffic, and Looker Studio templates can help you separate AI referrals from organic traffic.
  4. Share of Voice in AI: The platforms we mentioned above—Semrush, Ubersuggest, and Ahrefs Brand Radar—can provide dashboards that show your brand mentions across queries.
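For point 3, one practical approach is to classify sessions by referrer hostname in your analytics export. Here’s a minimal Python sketch; the AI-platform domain list is an assumption you should adjust to the platforms you care about.

```python
from urllib.parse import urlparse

# Assumed referrer domains for major AI platforms; extend as needed.
AI_REFERRERS = {
    "chatgpt.com", "chat.openai.com", "perplexity.ai",
    "gemini.google.com", "claude.ai",
}

def classify_referrer(referrer_url):
    """Label a session referrer as 'ai', 'none' (direct), or 'other'."""
    if not referrer_url:
        return "none"
    host = urlparse(referrer_url).hostname or ""
    host = host.removeprefix("www.")
    return "ai" if host in AI_REFERRERS else "other"

# Hypothetical referrer values from a sessions export.
sessions = ["https://chatgpt.com/", "https://www.google.com/", ""]
print([classify_referrer(s) for s in sessions])  # ['ai', 'other', 'none']
```

Grouping sessions by these labels lets you report AI referrals as their own channel instead of letting them hide inside generic referral traffic.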

Emerging tools such as Profound combine several of these functions. LLM visibility tracking won’t replace your existing analytics; it’s another layer in your reporting. Instead of asking “Where do I rank in Google?”, you’ll also ask, “Where do I appear in AI answers?”

The data you collect here matters: it shows which strategies are working and lets you double down on the ones driving the most visibility.

FAQs

What is LLMO?

LLMO stands for large language model optimization. It’s the practice of making your brand, content, and data more visible in AI-generated answers from tools like ChatGPT, Claude, Gemini, and Perplexity.

How is LLMO different from SEO?

SEO helps you rank in traditional search engines. LLMO ensures you’re included in AI responses. Both are important, but LLMO addresses the “zero-click” future of search.

How do I get my brand into LLM responses?

Focus on three pillars: authoritative content (E-E-A-T), structured data (schema, FAQs, HowTos, Product), and monitoring AI citations. Add digital PR, backlinks, and multi-format content to increase the chances your expertise is recognized and surfaced.

How long does LLM optimization take?

Like SEO, results don’t happen overnight. But unlike SEO, you can sometimes see brand mentions in LLMs faster, especially if your content is well-cited and already trusted.

What tools track AI visibility?

Early options include Semrush AI Tracking, Ubersuggest LLM Beta, and Ahrefs Brand Radar. You can also use GA4 to measure referral traffic from LLM-powered platforms like ChatGPT.

Do backlinks still matter for LLMO?

Yes. LLMs lean on credible, widely cited sources. High-authority backlinks increase your chances of being trusted and surfaced in AI answers.

Can small businesses benefit from LLMO?

Absolutely. In fact, moving early is an advantage. If competitors aren’t optimizing yet, you can claim visibility before they catch up.

Conclusion

AI-driven search isn’t the future; it’s already here.

If you want your brand to stay visible, think outside the blue link box and start optimizing for where people get their answers. That’s the promise of LLM optimization.

The playbook? Simple: Create trustworthy content and structure it so AI can understand it. Once it’s in place, track how often you show up in responses like AI Overviews and ChatGPT. As you layer in topic clusters, a strong digital PR push, and multi-format assets, you’ll give your brand every chance to surface where it counts.

Companies that adapt today will own tomorrow’s conversation. Those that don’t risk losing visibility and becoming yesterday’s news, even if their SEO fundamentals look good on paper.

If you’re ready to learn how to turn your content into AI-worthy assets, we can help. Contact us today for your consultation.


Google Ads adds loyalty features to boost shopper retention

Google Shopping Ads - Google Ads

Google is rolling out new loyalty integrations across Google Ads and Merchant Center, giving retailers tools to highlight member-only pricing and shipping benefits to their most valuable customers.

How it works:

  • Personalized annotations display member-only discounts or shipping benefits in both free and paid listings.
  • A new loyalty goal in Google Ads helps retailers optimize budgets toward high-value shoppers, adjusting bids to prioritize lifetime value.
  • Sephora US saw a 20% lift in CTR by surfacing loyalty-tier discounts in personalized ads.

Why we care. With 61% of U.S. adults saying tailored loyalty programs are the most compelling part of a personalized shopping experience (according to Google), retailers face pressure to prove value beyond discounts.

By surfacing member-only perks directly in search and shopping results, retailers can boost engagement from their most valuable customers and optimize spend toward higher lifetime value, not just single conversions. It’s a way to tie loyalty programs directly to ad performance — and win more share of wallet from existing shoppers.

The big picture. Loyalty features are Google’s latest move to keep retail advertisers invested in its ecosystem — positioning search and shopping as not just discovery channels, but retention engines. Expect more details at Google’s Think Retail event on Sept. 10.


AI tool adoption jumps to 38%, but 95% still rely on search engines

AI tools vs traditional search

More than 1 in 5 Americans now use AI tools heavily – but traditional search engines remain dominant, with usage holding steady at 95%, according to new clickstream data from Datos and SparkToro.

By the numbers:

  • AI tools: 21% of U.S. users access AI tools like ChatGPT, Claude, Gemini, Copilot, Perplexity, and Deepseek 10+ times per month. Overall adoption has jumped from 8% in 2023 to 38% in 2025.
  • Search engines: 95% of Americans still use Google, Bing, Yahoo, or DuckDuckGo monthly, with 87% considered heavy Google users – up from 84% in 2023.
  • Growth trends: AI adoption is slowing. Since September 2024, no month has shown more than 1.1x growth. By contrast, search volume per user has slightly increased year-over-year.

The big picture: Despite the hype around AI replacing Google, the data seems to show the opposite. When people adopt AI tools, their Google searches also rise, SparkToro found.

Yes, but. Are there any truly “traditional search engines” left? Things get a bit messy when talking about “traditional search engines” versus the “AI tools” examined here, because all search engines now have AI baked in:

  • Google is a traditional search engine (or, perhaps more accurately, an AI search engine with a legacy search experience that’s clearly moving in the direction of AI Overviews and AI Mode), and Gemini is Google’s AI tool. Plus, Google’s traditional search data is being used by ChatGPT.
  • Traditional search engine Bing has its own AI tool, Copilot, and is OpenAI’s partner for ChatGPT Search, and feeds DuckDuckGo’s results.

Why we care. This data once again indicates that it isn’t AI vs. search; it’s AI plus search. Heavy AI users are also heavy searchers, meaning Google traffic declines are more about zero-click answers than AI cannibalization.

The report. New Research: 20% of Americans use AI tools 10X+/month, but growth is slowing and traditional search hasn’t dipped


Google releases August 2025 spam update

Google released its August 2025 spam update today, the company announced at 12:05 p.m. This is Google’s first announced algorithm update since the June 2025 core update. It is Google’s first spam update of 2025 and the first since December.

Timing. Google called this a “normal spam update” and it will take a “few weeks” to finish rolling out.

The announcement. Google announced:

  • “Today we released the August 2025 spam update. It may take a few weeks to complete. This is a normal spam update, and it will roll out for all languages and locations. We’ll post on the Google Search Status Dashboard when the rollout is done.”

Previous spam updates. Before today, Google’s last spam update was released Dec. 19 and finished rolling out Dec. 26; it was more volatile than the June 2024 spam update, which was released June 20, 2024 and completed rolling out June 27, 2024.

Why we care. This is the first Google algorithm update since the June 2025 core update. It’s unclear what type of spam this update is targeting, but if you see any ranking or traffic changes in the next few weeks, it could be due to this update.


ChatGPT’s answers came from Google Search after all: Report

ChatGPT Google unmasking

Multiple tests have suggested ChatGPT is using Google Search. Well, a new report seems to confirm ChatGPT is indeed using Google Search data.

  • OpenAI quietly used (and may still be using) a Google Search scraping service to power ChatGPT’s answers on real-time topics like news, sports, and finance, according to The Information.

The details. OpenAI used SerpApi, an 8-year-old scraping firm, to extract Google results.

  • Google has reportedly long tried to block SerpApi’s crawler, though it’s unclear how effective those efforts have been.
  • Other SerpApi customers reportedly include Meta, Apple, and Perplexity.

Zoom out. This revelation contrasts with OpenAI’s public stance that ChatGPT search relies on its own crawler, Microsoft Bing, and licensed publisher data.

Meanwhile. OpenAI CEO Sam Altman recently dismissed Google Search, saying:

  • “I don’t use Google anymore. I legitimately cannot tell you the last time I did a Google search.” 

Well, based on this news, it seems like he probably is using Google Search all the time within his own product.

Why we care. Google’s search index remains the foundation of online discovery – so much so that even its biggest AI search rival appears to be using it to partially power ChatGPT. This is yet another reminder that SEO isn’t going anywhere just yet. If Google’s results are valuable to OpenAI, they remain essential for driving visibility, traffic, and business outcomes.

The report. OpenAI Is Challenging Google—While Using Its Search Data (subscription required)


Historic recurrence in search: Why AI feels familiar and what’s next

Historic recurrence in search- Why AI feels familiar and what’s next

Historic recurrence is the idea that patterns repeat over time, even if the details differ.

In digital marketing, change is the only constant.

Over the last 30 years, we’ve seen nonstop shifts and transformations in platforms and tactics.

Search, social, and mobile have each gone through their own waves of evolution. 

But AI represents something bigger – not just another tactic, but a fundamental shift in how people research, evaluate, and buy products and services.

Estimates vary, but Gartner projects that AI-driven search could account for 25% of search volume by the end of 2026.

I suspect the true share will be much higher as Google weaves AI deeper into its results.

For digital marketers, it can feel like we need a crystal ball to predict what’s next. 

While we don’t have magical foresight, we do have the next best thing: lessons from the past.

This article looks back at the early days of search, how user behavior evolved alongside technology, and what those patterns can teach us as we navigate the AI era.

The early days: Wild and wonderful queries

If you remember the early web – AltaVista, Lycos, Yahoo, Hotbot – search was a free-for-all. 

People typed in long, rambling queries, sometimes entire sentences, other times just a few random words that “felt” right.

There were no search suggestions, no “people also ask,” and no autocorrect. 

It was a simpler time, often summed up as “10 blue links.”

Google Search - 10 blue links

Searchers had to experiment, refine, and iterate on their own, and the variance in query wording was huge.

For marketers, that meant opportunity. 

You could capture traffic in all sorts of unexpected ways simply by having relevant pages indexed.

Back then, SEO was, in large part, about one thing: existing in the index.

Dig deeper: A guide to Google: Origins, history and key moments in search

Google’s rise: From exploration to efficiency

Anyone working in digital marketing in the early 2000s will remember. 

From Day 1, Google felt different. The quality of its results was markedly better.

Then came Google Suggest in 2008, quietly changing the game. 

Suddenly, you didn’t have to finish typing your thought. Google would complete it for you, based on the most common searches.

Research from Moz and others at the time showed that autocomplete reduced query length and variance. 

People defaulted to Google’s suggestions because it was faster and easier.

This marked a significant shift in our behavior as searchers. We moved from sprawling, exploratory queries to shorter, more standardized ones.

It’s not surprising. When something can be achieved with less effort, human nature drives us toward the path of least resistance.

Once again, technology had changed how we search and find information.

Mobile, voice, and the second compression

The shift to mobile accelerated this compression.

Tiny keyboards and on-the-go contexts meant people typed as little as possible.

Autocomplete, voice input, and “search as you type” all encouraged brevity.

At the same time, Google kept rolling out features that answered questions directly, creating a blended, multi-contextual SERP.

The cumulative effect? Search behavior became more predictable and uniform.

For marketers running Google Ads or tracking performance in Google Analytics and Search Console, this shift came with another challenge: less data. 

Long-tail keywords shrank, while most traffic and budget concentrated on a smaller set of high-volume terms.

Once again, our search behavior – and the insights we could glean from it – had evolved.

Zero-click search and the walled garden

By the late 2010s, zero-click searches were on the rise. 

Google – and even social platforms – wanted to keep users inside their ecosystems.

More and more questions were answered directly in the search results. 

Search got smarter, and shorter queries could deliver more refined results thanks to personalization and past interactions.

Google started doing everything for us.

Search for a flight? You’d see Google Flights.

A restaurant? Google Maps. 

A product? Google Shopping. 

Information? YouTube.

You get the picture.

For businesses built on organic traffic, this shift was disruptive. 

But for users, it felt seamless – arguably a better experience, even if it created new challenges for optimizers.

Quality vs. brevity

This shift worked – until it didn’t. 

One common complaint today is that search results feel worse.

It’s a complicated issue to unpack. 

  • Have search results actually gotten worse? 
  • Or are the results as good as ever, but the underlying sites have declined in quality?

It’s tricky to call. 

What is certain is that as traffic declined, many sites got more aggressive – adding more ads, more pop-ups, and sneakier lead gen CTAs to squeeze more value from fewer clicks.

The search results themselves have also become a bewildering mix of ads, organic listings, and SERP features. 

To deliver better results from shorter queries, search engines have had to guess at intent while still sending enough clicks to advertisers and publishers to keep the ecosystem running.

And as traffic-starved publishers got more desperate, user experience took a nosedive. 

Anyone who has had to scroll through a food blogger’s life story – while dodging pop-ups and auto-playing ads – just to get to a recipe knows how painful this can be.

It’s this chaotic landscape that, in part, has driven the move to answer engines like ChatGPT and other large language models (LLMs). 

People are simply tired of panning for gold in the search results.

The AI era: From compression back to conversation

Up to this point, the pattern has been clear: the average query length kept getting shorter.

But AI is changing the game again, and the query-length pendulum is now swinging sharply in the opposite direction.

Tools like ChatGPT, Claude, Perplexity, and Google’s own AI Mode are making it normal to type or speak longer, more detailed questions again.

We can now:

  • Ask questions instead of searching for keywords. 
  • Refine queries conversationally. 
  • Ask follow-ups without starting over. 

And as users, we can finally skip the over-optimized lead gen traps that have made the web a worse place overall.

Here’s the key point: we’ve gone from mid-length, varied queries in the early days, to short, refined queries over the last 12 years or so, and now to full, detailed questions in the AI era.

The way we seek information has changed once more.

We’re no longer just searching for sources of information. We’re asking detailed questions to get clear, direct answers.

And as AI becomes more tightly integrated into Google over the coming months and years, this shift will continue to reshape how we search – or, more accurately, how we question – Google.

Dig deeper: SEO in an AI-powered world: What changed in just a year

AI and search: Google playing catch-up

Google was a little behind the AI curve.

ChatGPT launched in late 2022 to massive buzz and unprecedented adoption.

Google’s AI Overviews – frankly underwhelming by comparison – didn’t roll out until mid-2024. 

After launching in the U.S. in mid-June and the U.K. in late July 2025, Google’s full AI Mode is now available in 180 countries and territories around the world.

Now, we can ask more detailed, multi-part questions and get thorough answers – without battling through the lead gen traps that clutter so many websites.

The reality is simple: this is a better system.

This is progress.

Want to know the best way to boil an egg – and whether the process changes for eggs stored in the fridge versus at room temperature? Just ask.

Google will often decide if an AI Overview is helpful and generate it on the fly, considering both parts of your question.

  • What is the best way to boil an egg?
  • Does it differ if they are from the fridge?

The AI Overview answers the question directly. 

And if you want to keep going, you can click the bold “Dive deeper in AI Mode” button to continue the conversation.

Dive deeper in AI Mode

Inside AI Mode, you get streamlined, conversational answers to questions that traditional search could answer – just without the manual trawling or the painfully over-optimized, pop-up-heavy recipe sites.

From shorter queries to shorter journeys

Stepping back, we can see how behavior is shifting – and how it ties to human nature’s tendency to seek the path of least resistance.

The “easy” option used to be entering short queries and wading through an increasingly complex mix of results to find what you needed.

Now, the path of least resistance is to put in a bit more effort upfront – asking a longer, more refined question – and let the AI do the heavy lifting.

A search for the best steak restaurant nearby once meant seven separate queries and reviewing over 100 sites. That’s a lot of donkey work you can now skip.

It’s a subtle shift: slightly more work up front, but a far smoother journey in return.

This change also aligns with a classic computing principle: GIGO – garbage in, garbage out. 

A more refined, context-rich question gives the system better input, which produces a more useful, accurate output.

Historic recurrence: The pattern revealed

Looking back, it’s clear there’s a repeating cycle in how technology shapes search behavior.

The early web (1990s)

  • Behavior: Long, experimental, often clumsy queries.
  • Why: No guidance, poor relevance, and lots of trial-and-error.
  • Marketing lesson: Simply having relevant content was often enough to capture traffic.

Google + Autocomplete (2000s)

  • Behavior: Queries got shorter and more standardized.
  • Why: Google Suggest and smarter algorithms nudged users toward the most common phrases.
  • Marketing lesson: Keyword targeting became more focused, with heavier competition around fewer, high-volume terms.

Mobile and voice era (2010s–early 2020s)

  • Behavior: Even shorter, highly predictable queries.
  • Why: Tiny keyboards, voice assistants, and SERP features that answered questions directly.
  • Marketing lesson: The long tail collapsed into clusters. Zero-click searches rose. Winning visibility meant optimizing for snippets and structured data.

AI conversation era (2023–present)

  • Behavior: Longer, natural-language queries return – now in back-and-forth conversations.
  • Why: Generative AI tools like ChatGPT, Gemini, and Perplexity encourage refinement, context, and multi-step questions.
  • Marketing lesson: It’s no longer about just showing up. It’s about being the best answer – authoritative, helpful, and easy for AI to surface.

Technology drives change

The key takeaway is that technology drives changes in how people ask questions.

And tactically, we’ve come full circle – closer to the early days of search than we’ve been in years.

Despite all the doom and gloom around SEO, there’s real opportunity in the AI era for those who adapt.

What this means for SEO, AEO, LLMO, GEO – and beyond

The environment is changing.

Technology is reshaping how we seek information – and how we expect answers to be delivered.

Traditional search engine results are still important. Don’t abandon conventional SEO.

But now, we also need to optimize for answer engines like ChatGPT, Perplexity, and Google’s AI Mode.

That means developing deeper insight into your customer segments and fully understanding the journey from awareness to interest to conversion. 

  • Talk to your customers. 
  • Run surveys. 
  • Reach out to those who didn’t convert and ask why. 

Then weave those insights into genuinely helpful content that can be found, indexed, and surfaced by the large language models powering these new platforms.

It’s a brave new world – but an incredibly exciting one to be part of.

Read more at Read More

How to tell if Google Ads automation helps or hurts your campaigns

How to tell if Google Ads automation helps or hurts your campaigns

Smart BiddingPerformance Max, and responsive search ads (RSAs) can all deliver efficiency, but only if they’re optimizing for the right signals.

The issue isn’t that automation makes mistakes. It’s that those mistakes compound over time.

Left unchecked, that drift can quietly inflate your CPAs, waste spend, or flood your pipeline with junk leads.

Automation isn’t the enemy, though. The real challenge is knowing when it’s helping and when it’s hurting your campaigns.

Here’s how to tell.

When automation is actually failing

These are cases where automation isn’t just constrained by your inputs. It’s actively pushing performance in the wrong direction.

Performance Max cannibalization

The issue

PMax often prioritizes cheap, easy traffic – especially branded queries or high-intent searches you intended to capture with Search campaigns. 

Even with brand exclusions, Google still serves impressions against brand queries, inflating reported performance and giving the illusion of efficiency. 

On top of that, when PMax and Search campaigns overlap, Google’s auction rules give PMax priority, meaning carefully built Search campaigns can lose impressions they should own.

A clear sign this is happening: if you see Search Lost IS (rank) rising in your Search campaigns while PMax spend increases, it’s likely PMax is siphoning traffic.

Recommendation

Use brand exclusions and negatives in PMax to block queries you want Search to own. 

Segment brand and non-brand campaigns so you can track each cleanly. And to monitor branded traffic specifically, tools like the PMax Brand Traffic Analyzer (by Smarter Ecommerce) can help.

Dig deeper: Performance Max vs. Search campaigns: New data reveals substantial search term overlap

Auto-applied recommendations (AAR) rewriting structure

The issue

AARs can quietly restructure your campaigns without you even noticing. This includes:

  • Adding broad match keywords. 
  • “Upgrading” existing keywords to broader match types.
  • Adding new keywords that are sometimes irrelevant to your targeting.

Google has framed these “optimizations” as efficiency improvements, but the issue is that they can destabilize performance. 

Broad keywords open the door to irrelevant queries, which then can spike CPA and waste budget.

Recommendation

First, opt out of AARs and manually review all recommendations moving forward. 

Second, audit the changes that have already been made by going to Campaigns > Recommendations > Auto Apply > History. 

From there, you can see what change happened on what date, which allows you to go back to your campaign data and see if there are any performance correlations. 

Dig deeper: Top Google Ads recommendations you should always ignore, use, or evaluate

Modeled conversions inflating numbers

The issue

Modeled conversions can climb while real sales or MQLs stay flat. 

For example, you may see a surge in reported leads or purchases in your ads account, but when you look at your CRM, the numbers don’t match up. 

This happens because Google uses modeling to estimate conversions where direct measurement isn’t possible. 

If Google doesn’t have full tracking, it fills gaps by estimating conversions it can’t directly track, based on patterns in observable data. 

When left unchecked, the automation will double down on these patterns (because it assumes they’re correct), wasting budget on traffic that looks good but won’t convert.

Recommendation

Tell the automation what matters most to your business. 

Import offline or qualified conversions (via Enhanced Conversions, manual uploads, or CRM integration). 

This will ensure that Google optimizes for real revenue and not modeled noise.

When automation is boxed in: Reading the signals

Not every warning in Google means automation is failing. 

Sometimes the system is limited by the goals, budget, or inputs you’ve set – and it’s simply flagging that.

These diagnostic signals help you understand when to adjust your setup instead of blaming the algorithm.

Limited statuses (red vs. yellow)

The issue

A Limited status doesn’t always mean your campaign is broken. 

  • If you see a red Limited label, this means your settings are too strict. That could mean that your CPA or ROAS targets are unrealistic, your budget is too low, etc. 
  • Seeing a yellow Limited label is more of a caution sign. It’s usually tied to low volume, limited data, or the campaign is still learning.

Recommendation

If the status is red, loosen constraints gradually: raise your budget and ease up CPA/ROAS targets by 10–15%. 

If the status is yellow, don’t panic. This is Google’s version of telling you that they could use more money, if possible, but it’s not vital to your campaign’s success.

Responsive search ads (RSAs) inputs

The issue

RSAs are built in real-time from the headlines and descriptions you have already provided Google. 

At a minimum, advertisers are required to write 3 headlines with a maximum of 15 (and up to 4 descriptions). The fewer the assets you give the system, the less flexibility it will have. 

On the other hand, if you’re running a small budget and give the RSAs all 15 headlines and 4 descriptions, there is no way Google will be able to collect enough data to figure out which combinations actually work.

The automation isn’t failing with either. You’ve either given it too little information or too much with too little spending. 

Recommendation

Match asset volume to the budget allocated to the campaign. 

  • If you’re unsure, aim to write between 8-10 headlines and 2-4 descriptions.
  • If each headline/description isn’t distinct, don’t use it. 

Conversion reporting lag and attribution issues

The issue

Sometimes, Google Ads reports fewer conversions than your business actually sees. 

This isn’t necessarily an automation failure. It’s often just a matter of when the conversion is counted. 

By default, Google reports conversions on the day of the click, not the day the actual conversion happened. 

That means if you check performance mid-week, you might see fewer conversions than your campaign has actually generated because Google attributes them back to the click date. 

The data usually “catches up” as lagging conversions are processed.

Recommendation

Use the Conversions (by conversion time) column alongside the standard conversion column.

Conversions (by conversion time) column

This helps you separate true performance drops from simple reporting delays. 

If discrepancies persist beyond a few days, investigate the tracking setup or import accuracy. Just don’t assume automation is broken just because of timing gaps.

Get the newsletter search marketers rely on.


Where to look in the Google Ads UI

Automation leaves a clear trail within Google Ads if you know where to look. 

Here are some reports and columns to help spot when automation is drifting.

Bid Strategy report: Top signals 

The issue

The bid strategy report shows some of the signals Smart Bidding relies on when there is enough data. 

The “top signals” can sometimes make sense, and at other times, they can be a bit misleading. 

If the algorithm relies on weak signals (e.g., broad search themes and a lack of first-party data), its optimizations will be weak, too.

Bid Strategy report: Top signals 

Recommendation

Make checking your Top Signals a regular activity. 

If they don’t align with your business, fix the inputs. 

  • Improve conversion tracking.
  • Import offline conversions.
  • Reevaluate search themes.
  • Add customer/remarketing lists.
  • Expand your negative keyword list(s). 

Impression share metrics

The issue

When a campaign underdelivers, it’s tempting to assume automation is failing, but looking at Impression Share (IS) metrics tends to reveal the real bottleneck. 

By looking at Search Lost IS (budget), Search Lost IS (rank), and Absolute Top IS together, you can separate automation problems from structural or competitive ones.

How to use IS metrics as a diagnostic tool.

  • Budget problem
    • High Lost IS (budget) + low Lost IS (rank): Your campaign isn’t struggling. It just doesn’t have enough budget to run properly.
    • Recommendation: Raise the budget or accept capped volume.
  • Targets too aggressive
    • High Lost IS (rank) + low Absolute Top IS: If your Lost IS (rank) is high and your budget is adequate, your CPA/ROAS targets are likely too aggressive, causing Smart Bidding to underbid in auctions.
    • Recommendation: Loosen targets gradually (10-15%).

Scripts to keep automation honest

Scripts give you early warnings so you can step in before wasted spend piles up.

Anomaly detection

  • The issue: Automation can suddenly overspend or underspend when conditions in the marketplace change, but you often won’t notice until reporting lags.
  • Recommendation: Use an anomaly detection script to flag unusual swings in spend, clicks, or conversions so you can investigate quickly.

Query quality (N-gram analysis)

  • The issue: Broad match and PMax can drift into irrelevant themes (“free,” “jobs,” “definition”), wasting budget on low-quality queries.
  • Recommendation: Run an N-gram script to surface recurring poor-quality terms and add them as negatives before automation optimizes toward them.

Budget pacing

  • The issue: Google won’t exceed your monthly cap, but daily spend will be uneven. Pacing scripts help you spot front-loading.
  • Recommendation: A pacing script shows you how spend is distributed so you can adjust daily budgets mid-month or hold back funds when performance is weak.

Turning automation into an asset

Automation rarely fails in dramatic ways – it drifts. 

Your job isn’t to fight it, but to supervise it: 

  • Supply the right signals.
  • Track when it goes off course.
  • Step in before wasted spend compounds.

The diagnostics we covered – impression share, attribution checks, PMax insights, and scripts – help you separate real failures from cases where automation is simply following your inputs.

The key takeaway: automation is powerful, but not self-policing. 

With the right guardrails and oversight, it becomes an asset instead of a liability.

Read more at Read More

Global expansion and hyperlocal focus redefine the next chapter of retail media networks by DoorDash

Retail media networks are projected to be worth $179.5 billion by 2025, but capturing share and achieving long-term success won’t hinge solely on growing their customer base. With over 200 retail media networks now competing for advertiser attention, the landscape has become increasingly complex and crowded. The RMNs that stand out will be those taking a differentiated approach to meeting the evolving needs of advertisers.

The industry’s concentration creates interesting dynamics. While some platforms have achieved significant scale, nearly 70% of RMN buyers cite “complexity in the buying process” as their biggest obstacle. That tension, between explosive growth and operational complexity, is forcing the industry to evolve beyond traditional approaches.

As the landscape matures, which strategies will define the next wave of growth: global expansion, hyperlocal targeting, or both?

The evolution of retail media platforms

To understand where the industry is heading, it’s worth examining how successful platforms are addressing advertisers’ core challenges. Lack of measurement standards across platforms continues to frustrate advertisers who want to compare performance across networks. Manual processes dominate smaller networks, making campaign management inefficient and time-consuming.

At the same time, most retailers lack the digital footprint necessary for standalone success. This has created opportunities for platforms that can solve multiple problems simultaneously: standardization, automation, and scale.

DoorDash represents an interesting case study in this evolution. The platform has built its advertising capabilities around reaching consumers at their moment of local need across multiple categories. With more than 42 million monthly active consumers as of December 2024, DoorDash provides scale and access to high-intent shoppers across various categories spanning restaurants, groceries and retail.

The company’s approach demonstrates how platforms can address advertiser pain points through technology. DoorDash’s recent platform announcement showcases this evolution: the company now serves advertisers with new AI-powered tools and expanded capabilities. Through its acquisition of ad tech platform Symbiosys, a next-generation retail media platform, brands can expand their reach into digital channels, such as search, social, and display, and retailers can extend the breadth of their retail media networks.

Global expansion meets local precision

International expansion presents both opportunities and challenges for retail media networks. Europe’s retail media industry is projected to surpass €31 billion by 2028,. This creates opportunities for networks that can solve the technology puzzle of operating across multiple geographies.

The challenge lies in building platforms that work seamlessly across countries while maintaining local relevance. International expansion requires handling different currencies, regulations, and cultural contexts—capabilities that many networks struggle to develop.

DoorDash’s acquisition of Wolt illustrates how platforms can achieve global scale while maintaining local connections. The integration enables brands to manage campaigns across Europe and the U.S. through a single interface—exactly the kind of operational efficiency that overwhelmed advertisers seek.

The combined entity now operates across more than 30 countries, with DoorDash and Wolt Ads crossing an annualized advertising revenue run rate of more than $1 billion in 2024. What makes this expansion compelling isn’t just the scale—it’s how the integration maintains neighborhood-level precision across diverse geographies.

Wolt has transformed from a food delivery platform into what it describes as a multi-category “shopping mall in people’s pockets.”

The hyperlocal advantage: context beats demographics

Here’s what’s really changing the game: the shift from demographic targeting to contextual precision. Privacy regulations favor contextual targeting over behavioral tracking, but that’s not the only reason smart networks are going hyperlocal.

Location-based intent signals provide dramatically higher conversion probability than traditional demographics. Real-time contextual data—weather patterns, local events, proximity to fulfillment—influences purchase decisions in immediate, actionable ways that broad demographic targeting simply can’t match.

DoorDash built its entire advertising model around this insight, reaching consumers at the exact moment of local need across multiple categories. The platform provides scale and access to high-intent shoppers with contextual precision. A recent innovation that exemplifies this approach is Dayparting for CPG brands, which enables advertisers to target users in their local time zones—a level of time-based precision that distinguishes hyperlocal platforms from broader retail media networks.

In one example, Unilever applied Dayparting to focus on late-night and weekend windows for its ice cream campaigns, aligning ad delivery with peak demand periods. Over a two-week period, 77% of attributed sales were new-to-brand, demonstrating the power of contextual timing in driving incremental reach.

Major brands, including Unilever, Coca-Cola, and Heineken, utilize both DoorDash and Wolt platforms for hyperlocal targeting, proving the model is effective for both endemic and non-endemic advertisers seeking neighborhood-level precision.

Technology evolution: measurement and automation

The technical requirements for next-generation retail media networks extend far beyond basic advertising capabilities. Self-serve functionality has become standard for international geographies—not because it’s trendy, but because manual campaign management doesn’t scale across dozens of countries with different currencies, regulations, and cultural contexts.

Cross-country campaign management requires unified dashboards that manage complexity while maintaining simplicity for advertisers. Automation isn’t optional anymore; it’s necessary to compete with established players who’ve built machine learning into their core operations.

But here’s what’s really transforming measurement: new attribution methodologies that go beyond traditional ROAS. When platforms can integrate fulfillment data with advertising exposure, they enable real-time performance tracking that connects ad spend to actual business outcomes rather than just clicks and impressions.

Progress on standardization continues through IAB guidelines addressing measurement consistency, alongside industry pushes for technical integration standards. The challenge lies in balancing standardization with differentiation—networks need to offer easy integration and consistent measurement while maintaining unique value propositions.

In a move toward addressing advertisers’ need for measurement consistency, DoorDash recognized that restaurant brands valued both click and impression-based attribution for their sponsored listing ads, and recently introduced impression-based attribution and reporting in Ads Manager. This has enabled restaurant brands to gain a deeper understanding of performance and results driven on DoorDash.

Global technology challenges add another layer of complexity: multi-currency transactions, local payment methods, regulatory compliance across countries, and cultural adaptation while maintaining platform consistency. These aren’t afterthoughts for international platforms, they’re core competencies that determine success or failure.

Industry outlook: consolidation and opportunity

Retail media is heading toward consolidation, but not in the way most people expect. Hyperlocal networks are positioned to capture share from undifferentiated RMNs that compete solely on inventory volume. Geographic specialization is becoming a viable alternative to traditional scale-focused approaches.

Simultaneously, community impact measurement is gaining importance for brand strategy. Marketers are discovering that advertising dollars spent on local commerce platforms create multiplier effects—supporting neighborhood businesses and strengthening local economies in ways that traditional e-commerce advertising doesn’t achieve.

The networks that understand this dynamic, that can offer global platform capabilities with genuine local industry expertise, are the ones positioned to define retail media’s next chapter. Success requires technology integration that enables contextual and location-based targeting, plus measurement solutions that prove incrementality beyond traditional metrics.

The path forward

As retail media networks mature, success lies not in choosing between global scale and local relevance, but in achieving both simultaneously. The DoorDash-Wolt combination provides a compelling blueprint, demonstrating how technology platforms can enable international expansion while deepening neighborhood-level connections.

For marketers navigating this evolution, the fundamental question shifts from “where should we advertise?” to “how can we reach consumers at their moment of need?” Networks that answer this effectively—through global reach, hyperlocal precision, or ideally both, will write retail media’s next chapter.Interested to learn more about DoorDash Ads? Get started today.

Read more at Read More

Google Ads expands PMax Channel Reporting to account level

Your guide to Google Ads Smart Bidding

Performance Max (PMax) advertisers just got a major visibility upgrade: Channel Reporting is now available at the account level, not just within individual campaigns.

How it works:

  • View and compare all PMax campaigns in a single reporting overview.
  • Segment by conversion metrics to understand what’s driving results.
  • Identify performance patterns across channels without jumping campaign to campaign.

Why we care. Until now, channel performance data was siloed within each PMax campaign. The new account-level reporting makes it easier to spot trends, compare results, and optimize across campaigns.

The big picture. Google notes that channel data is available for PMax campaigns “at this time” — a phrasing that suggests the feature could expand to other campaign types down the road.

Bottom line. More visibility, less friction. This change gives advertisers a faster, more complete view of PMax performance — and hints at broader reporting upgrades ahead.

First seen. This update was first picked up by Jun von Matt IMPACT’s Head of Google Ads, Thomas Eccel.

Read more at Read More

GEO vs SEO: Understanding the Differences

If you have been working in digital marketing, you already know how much hinges on showing up in search. For years, SEO has been the way to get there. Now, GEO vs SEO is the conversation that matters, because generative AI has introduced a new way for people to get answers.

The rise of generative engine optimization (GEO) does not mean SEO is dead. It means you cannot treat them as the same thing. SEO is about earning visibility in search engine results pages. GEO is about making sure your content shows up inside AI-generated answers.

Marketers who get this right capture attention in both worlds. Everyone else is left wondering why traffic is slipping, even when rankings look fine.

Key Takeaways

  • GEO vs SEO is not either-or. SEO drives visibility in search engines, while GEO ensures your content appears in AI-generated answers.
  • Both GEO and SEO aim to satisfy user intent. High-quality, structured content is the foundation for success with both.
  • The differences matter. SEO measures success in rankings and traffic, while GEO focuses on citations inside AI-driven outputs.
  • E-E-A-T is critical for both. Strong signals of experience, expertise, authority, and trust help improve rankings and AI citations alike.
  • Optimization is ongoing. Neither GEO nor SEO is “set it and forget it.” Both require consistent updates as algorithms and AI models evolve.
  • You need both strategies. Together, they maximize reach across traditional search and generative platforms.

GEO and SEO explained

SEO, or search engine optimization, is the process of improving your site so it ranks higher in search results. It relies on content quality, site structure, backlinks, and technical performance to earn visibility in Google and other engines.

A Google Search for best restaurants in providence, Rhode Island.

GEO, or generative engine optimization, works differently. Instead of chasing rankings in a results page, GEO prepares your content so AI-driven platforms like ChatGPT, Perplexity, and Google’s AI Overviews can interpret and cite you in their responses.

A ChatGPT response asking for restaurants in Providence, Rhode Island.

Both share the same end goal: connect your expertise with the people searching for it. The difference is in delivery. SEO surfaces website links. GEO delivers answers.

GEO vs SEO: The Similarities

GEO and SEO share the same mission: get useful, credible content in front of the right audience. The mechanics differ, but the fundamentals overlap in important ways.

Both are built around user intent. You win by matching the question behind the query, not by chasing vague head terms. Clear problem-solution framing and direct answers perform well in search results and inside AI summaries.

Content quality drives outcomes. Original research, step‑by‑step guidance, current stats, and real examples increase usefulness, similar to the example below. Thin copy gets ignored by ranking systems and by generative engines.

Structure increases visibility. Descriptive headings, short paragraphs, ordered lists, and clear tables help crawlers understand content and make it easier for AI models to process and reuse Clean formatting reduces ambiguity and improves the chances your content is surfaced accurately.

E‑E‑A‑T signals matter. Named authors with credentials, transparent sourcing, solid About and Contact pages, and real brand mentions build confidence for search evaluators and increase the likelihood your content is surfaced in AI outputs.

Author profiles on the Neil Patel blog.

Keywords still count. You need the keywords your audience actually uses. Target natural variations, long‑tail questions, and entity terms. Avoid stuffing. Prioritize clarity.

Strong technical foundations help both. Fast load times, mobile readiness, logical internal linking, and clean URLs make content easier to discover and parse. Fix crawl issues before you expect traction anywhere.

Schema and metadata support extraction. FAQ, HowTo, Product, and other relevant types make meaning explicit.

 Clear titles and concise meta descriptions improve interpretation.

Multimedia boosts understanding. Diagrams, short videos, and annotated screenshots clarify complex steps. 

Ensure you include transcripts and alt text so systems can interpret non‑text assets.

Neither is set‑and‑forget. Algorithms and models change. Refresh outdated stats, expand sections that underperform, and retire content that no longer fits searcher needs.

Measurement principles overlap. Track engagement, clarity of answers, and query coverage. For both approaches, the consistent signal is simple: content that helps users is more likely to be surfaced. The good news here is that on the GEO side, we are seeing more tools emerge to track AI platform visibility, such as Profound.

Things to look for in AI tracking tools.

GEO vs SEO: The Differences

Although GEO and SEO share a foundation, the way they operate, and the way you measure success, is very different.

Focus of optimization. SEO is about ranking well in search engine results pages. GEO is about being increasing visibility in AI-generated answers, whether through citations or inclusion in responses. 

Output style. SEO aims to win clicks from a list of website links. GEO focuses on being included in summaries, snippets, or conversational responses in AI-driven platforms. With SEO, visibility is measured in ranking position. With GEO, it is measured in whether your content is referenced or surfaced.

Signals of value. Traditional SEO still leans heavily on backlinks as proof of authority. GEO shifts more weight to content clarity, structured formatting, and topical alignment. Clean HTML, schema markup, and well-labeled sections give AI systems clearer context, making your content easier to interpret and surface. 

Measurement of success. In SEO, key metrics include keyword rankings, organic traffic, and click-through rate. For GEO, success is measured by brand visibility in AI outputs, including citations, mentions in AI results like AI Overviews, and sustained brand presence across AI-driven platforms.

Best practices. SEO requires long-term link building, technical health, and evergreen content. GEO adds new priorities: question-based keyword targeting, multimedia elements that AI can parse, and wider distribution across platforms AI systems draw from for answers.

Think of it this way: SEO gets you discovered. GEO gets you included in the answer. You need both.

How Does GEO Impact SEO?

GEO does not replace SEO, but it is changing how SEO delivers results. Traditional search rankings still matter, yet more searches are ending in AI-driven answers that do not send clicks or traffic to websites.

High rankings used to mean visibility. Now, visibility also depends on whether AI engines surface you in their summaries. That forces your content to be structured in ways AI can easily reuse.

It also changes the kinds of sources search engines value. AI platforms pull heavily from community-driven sites like Reddit and Quora, along with news outlets and trusted publishers.

Reddit queries in Google results.

If your brand is only visible in your own blog, you risk being left out of those AI answers. Expanding into these other ecosystems helps both GEO and SEO.

The takeaway: SEO still builds the foundation. GEO makes sure the foundation carries into AI-driven search.

How To Make GEO and SEO Work Together

The best strategy is not choosing one. It is making them work together.

Start with a solid SEO foundation. Your site still needs clean technical performance, smart keyword targeting, and high-quality content that demonstrates topical authority. 

From there, layer on GEO tactics. Structure content around real questions. It’s no small surprise that when you type in “when should I buy a house?” the Google AI Overview citations align with actual questions.

An AI overview result for "When should I buy a house?"

Add schema where it fits. Include multimedia formats like charts, transcripts, or short videos so AI systems can interpret your work more effectively. 

Do not keep your content siloed, either. Expand your presence to forums, social platforms, and multimedia channels. 

That distribution supports your search everywhere optimization efforts, ensuring you appear on the platforms your audience uses beyond Google. It ties neatly into GEO because it gives AI engines more chances to surface your brand.

The overlap is clear: SEO helps your content get discovered; GEO helps it get included in answers. When you execute both together, you maximize visibility across traditional search and the new wave of AI-driven platforms.

FAQs

What is the difference between GEO and SEO?

SEO focuses on ranking in traditional search results, while GEO focuses on being cited in AI-generated answers from platforms like ChatGPT, Perplexity, and Google’s AI Overviews.

Do I need GEO if I already do SEO?

Yes. SEO ensures visibility in search results, but with more searches now answered directly in AI summaries, GEO increases your chances of being included in those responses.

Does GEO replace SEO?

No. GEO builds on a strong SEO foundation. You still need SEO for rankings and discovery. GEO adds an extra layer to make your content usable in AI-driven outputs.

What metrics measure GEO success?

While SEO tracks rankings, organic traffic, and click-through rate, GEO success is measured by citations in AI responses, brand mentions, and visibility across AI-powered platforms.

How can businesses start with GEO?

Begin with your best-performing SEO content. Reformat it with clear headings, FAQ sections, schema markup, and question-based targeting to make it easier for AI engines to interpret and surface in their responses.

Conclusion

The GEO vs SEO debate is not about picking sides. It is about realizing they work together. SEO still drives discovery. GEO ensures your brand is part of the answer.

Ignore GEO, and your rankings may look fine while your traffic keeps sliding. Ignore SEO, and you will not have the authority or structure needed for AI engines to trust you. The opportunity is to combine both into a strategy that covers search engines and AI-driven platforms.

This shift is already showing up in user behavior. Nearly 60% of searches now end without a click, a trend accelerated by AI summaries and other zero-click results. If your content is not built to be cited, you are invisible at the exact point where people stop their journey.

It also reinforces the importance of semantic search. Both search engines and AI engines are getting better at understanding meaning, not just keywords. Content that clearly explains concepts, uses natural language, and ties ideas together stands a much better chance of being surfaced.

Start small. Update a handful of pages. Track where you appear in AI summaries and search results. Double down on what works.

The marketers who adapt early will not just keep their visibility. They will be the ones AI engines and search engines both continue to cite.
