GEO and SEO: How to invest your time and efforts wisely

Generative engine optimization (GEO) is the new kid on the block, the hot topic. 

SEO professionals and stakeholders want to know: How much should I invest in it in a world where the people writing the checks are a bit skeptical? 

In the world of large language models (LLMs) in 2025, that’s a complicated question. 

This article breaks down why by covering:

  • How LLMs like ChatGPT and Gemini currently use search engines.
  • What search marketers should assume about where AI is heading.
  • The types of executional work that align with GEO.
  • What all of this means for prioritization and investment.

How LLMs stay current: Grounding and RAG

One of the fundamental challenges for the creators of large language models (LLMs), such as OpenAI and Anthropic, is timeliness.

Their training data is static, locked to a specific cutoff date. 

For example, the GPT-5 model’s knowledge cutoff is Sept. 30, 2024.

It’s more recent than GPT-4o’s cutoff of Oct. 1, 2023, but still not up to the present day.

Updating that training data is extremely costly, and it’s increasingly under public scrutiny – both for the resource-intensive nature of the process and the potential copyright issues it raises. 

In my view, these large-scale training updates are becoming less and less likely over time.

So how do ChatGPT, Claude, and Gemini keep their answers current? 

They use retrieval-augmented generation (RAG), where the model enriches its responses by effectively “browsing” the web. ChatGPT relies on Bing, while Gemini draws from Google. 

(There are signs Gemini doesn’t always use live results, but rather cached ones – that’s a whole other article, and one Dan Petrovic has already written smartly about.)

Grounding is a similar concept here, so for this article, we’ll treat it as the same “timely” method, even though there are important nuances in implementation.

What does this mean for SEOs and digital marketers deciding how much to invest in GEO?

Quite simply: we still need to prioritize traditional SEO first. RAG is a limited resource, and the results it retrieves skew heavily toward pages that already rank well in traditional search.

It’s also important to note: when ChatGPT cites your brand, it doesn’t just pull from your website. It pulls from sources across the web.

The bottom line: you still need to master traditional SEO fundamentals to rank in LLM-driven search. 

If you don’t have the authority to break into the Top 20 results, plus a diversified outreach strategy for press mentions and brand visibility, it will be much harder to surface in generative search.

Dig deeper: SEO vs. GEO: What’s different? What’s the same?

Thinking long term

If you're a forward-looking, brand-focused SEO who wants to keep risk low, you must plan for a future where generative search dominates, driving most traffic and revenue.

At that point, we must assume our websites and digital properties function primarily as enriched data feeds for LLMs. 

It’s also critical to clearly define our brand for both Google and Bing, as strong, unambiguous entity signals will only grow in importance.

Optimizing your data infrastructure and strengthening brand signals – through consistent press mentions, directory listings, and owned media – are essential but resource-intensive tasks. 

They demand coordination across departments that rarely collaborate and often require dismantling entrenched processes.

Because many businesses hesitate to make these foundational changes, you’ll need to account for the time required to execute the work and the time required to gain stakeholder alignment.

Execution: Technical and brand

The work required to make your website as LLM-friendly as possible falls into two main buckets: technical and brand.

Technical tasks

Implementing thorough schema markup

This is a contentious topic. 

LLMs don’t directly use schema markup in their training data (it’s stripped), and in their RAG process, everything is tokenized and likely broken into n-grams. 

I’m not suggesting schema markup is a direct way to influence visibility in LLMs.

It’s a vehicle for helping Google and Bing understand:

  • Your website.
  • Its relationships.
  • Its products. 

This builds your brand and search engines’ recognition of it, which should improve your visibility in results.
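As a sketch of what "thorough schema markup" looks like in practice, here's a minimal way to generate an Organization JSON-LD block ready to drop into a page's `<head>`. The brand name, URL, and `sameAs` links below are placeholders, not a real company:

```python
import json

def organization_jsonld(name, url, same_as):
    """Build a minimal schema.org Organization object as JSON-LD.

    The values passed in are placeholders -- swap in your own brand
    details and extend sameAs with every profile (LinkedIn, Crunchbase,
    Wikipedia) that helps search engines disambiguate your entity.
    """
    return {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": name,
        "url": url,
        "sameAs": same_as,  # cross-platform links that confirm the entity
    }

# Render as a <script> block ready for the page <head>.
data = organization_jsonld(
    "Example Stationery Co.",  # hypothetical brand
    "https://www.example.com",
    ["https://www.linkedin.com/company/example"],
)
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(data, indent=2)
    + "\n</script>"
)
print(snippet)
```

The same pattern extends to Product, FAQPage, and other types; the point is consistency, so generating the markup from one source of truth beats hand-editing templates.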

Technical copywriting

On navigational pages – like product collection pages or company listing pages if you’re a marketplace – create technical copy (done via AI with smart prompting if you’re working at scale) that summarizes the available resources.

For example: 

  • “Our stationery includes 5 A5 dotted journals, 2 N1 blank journals, 25 stickers featuring animals, 4 stickers with curse words (all vinyl for weatherproofing and waterproofing), and 1 lapel pin.”

Notice how direct and technical this is. The clear formatting ties back to dependency hops in natural language processing.

XML sitemaps

A spring cleaning task. 

Your XML feeds should be 100% to spec:

  • No 404s.
  • No 301s.
  • No more than 50,000 URLs per sitemap.
  • All recommended fields in place. 

I’m calling it out specifically because it’s one of the most direct ways for search engines and other bots to see and navigate the full scope of your website.
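The structural parts of that checklist are easy to automate. Here's a minimal sketch that validates a sitemap against the sitemaps.org rules of at most 50,000 URLs per file and a `<loc>` in every entry; the sample sitemap is invented for illustration, and live checks for 404s/301s would require fetching each URL, which is omitted here:

```python
import xml.etree.ElementTree as ET

# Namespace prefix used by the sitemaps.org protocol.
NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def audit_sitemap(xml_text, max_urls=50_000):
    """Return a list of structural problems found in a sitemap document."""
    problems = []
    root = ET.fromstring(xml_text)
    urls = root.findall(f"{NS}url")
    if len(urls) > max_urls:
        problems.append(f"too many URLs: {len(urls)} > {max_urls}")
    for i, url in enumerate(urls):
        if url.find(f"{NS}loc") is None:
            problems.append(f"entry {i} missing <loc>")
    return problems

# A tiny sample sitemap used purely for illustration.
sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2025-01-01</lastmod>
  </url>
  <url>
    <lastmod>2025-01-01</lastmod>
  </url>
</urlset>"""

print(audit_sitemap(sample))  # the second entry is missing its <loc>
```

Running a check like this on a schedule catches sitemap drift before crawlers do.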

JavaScript fallbacks

This has always been important but has fallen by the wayside in recent years. 

Training data for LLMs is static HTML. For the most part, they don’t render JavaScript. 

Make sure to have functional JavaScript fallbacks.
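A quick way to sanity-check this is to look at what a non-rendering crawler would actually see: the text present in the raw HTML, with scripts stripped out. The sketch below uses only the standard library, and the two sample pages are invented to show the contrast between server-rendered HTML and an empty SPA shell:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text from raw HTML, skipping script/style --
    a rough stand-in for what a non-rendering crawler sees."""
    def __init__(self):
        super().__init__()
        self.skip = False
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip = True
    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip = False
    def handle_data(self, data):
        if not self.skip and data.strip():
            self.chunks.append(data.strip())

def static_text(html):
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

# Server-rendered page: the copy is present in the raw HTML.
ssr = "<html><body><h1>A5 Dotted Journal</h1><p>In stock.</p></body></html>"
# SPA shell: content only appears after JavaScript runs.
spa = '<html><body><div id="app"></div><script>render()</script></body></html>'

print("A5 Dotted Journal" in static_text(ssr))  # True
print("A5 Dotted Journal" in static_text(spa))  # False
```

If your key product or brand copy fails a test like this, it's invisible to static-HTML consumers, including LLM training crawlers.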

Address technical debt

This will depend on your organization. It could mean:

  • Having a clear product sunsetting process.
  • Updating the codebase.
  • Removing the ghost codebase still sitting on your site from eight years ago that everyone built on top of rather than deleting. 
  • Migrating from an SPA to a more search-friendly framework.
  • Removing deprecated scripts.
  • Auditing third-party tags to ensure they’re up to date and still in use. 

All of these impact performance.

The technical strength and response time of your website will only grow more important.

Every piece of tech debt is an opportunity to improve.

Dig deeper: A technical SEO blueprint for GEO: Optimize for AI-powered search

Brand tasks

Brand tasks would include things like: 

  • Creating consistent brand descriptions and implementing them across all platforms.
  • Updating your About page to have as much relevant information as possible, such as:
    • Founding timeline.
    • Founders and leadership profiles.
    • Corporate social responsibility initiatives.
    • Partners.
    • Supply chain.
    • Your unique selling proposition.
    • Press mentions.
    • Awards and other social proof.
    • Testimonials.
    • A contact form. 
  • Claiming your Google Knowledge Panel (or monitoring it until it becomes available, then claiming it).
  • Planning regular press mentions, doing outreach yourself, or working with a PR company to make it happen.
  • Claiming and optimizing your Google Business Profile if you’re a local business.
  • Submitting your company for a Wikipedia page once you’ve built enough notability.

Dig deeper: In GEO, brand mentions do what links alone can’t

Making smart investments in SEO and GEO

If there’s only one takeaway, it’s this: keep investing the majority of your time and budget in traditional SEO, while dedicating a smaller portion to technical and brand tasks like those outlined above.

Look closely at the 1-5% improvements you’ve been putting off – things like:

  • Correcting the HTML heading hierarchy to match the site’s visual hierarchy.
  • Fixing internal links so they point directly to final URLs instead of redirect chains.
  • Cleaning up your XML sitemap.
  • Removing deprecated libraries and unused WOFF files. 

This “spring cleaning” and tech debt cleanup should be a priority. 

Add in the brand work as well, since it strengthens traditional search today and also lays the groundwork for an LLM-led search future.

If you don’t already have regular reporting in place for stakeholders and leadership, create it now. 

There’s a perception that large language models are evolving rapidly and changing everything at once. 

That isn’t entirely true – but we do need to plan. 

Establishing a cadence of reporting and education means that when real shifts do happen, your stakeholders will already be aligned and ready to support the work.

Finally, treat GEO/AI optimization as roughly 20% of your workload.

This means building systemic schema layers across your organization and creating structured connections in the machine’s native language – code. 

Start with:

  • Conversations.
  • Proofs of concept.
  • Pilot implementations. 

Done properly, this work should have no negative impact on your business metrics, and it builds support for more holistic optimization over time.

Going all in on LLM-specific tactics isn’t the best use of your resources today. 

Instead, treat it as complementary work – something that strengthens your technical and brand foundation while preparing you for a future where generative search plays a central role.

Looking beyond AI: 9 marketing principles that will always matter

AI has fundamentally changed how people search and engage with information online. 

Features like AI Overviews may boost visibility, but they’ve also reduced clicks, leaving many websites with less traffic despite stronger rankings. 

Discovery no longer happens in just one place. 

It’s fragmented across search engines, social platforms, paid ads, and AI tools, creating a complex user journey that’s harder than ever to track.

As behavior shifts with these new technologies, search marketing is evolving in response. 

Yet while the platforms, tools, and touchpoints keep changing, the core principles of effective marketing remain the same. 

Marketers who stay grounded in these fundamentals will be best equipped to adapt and grow.

Here are nine timeless marketing principles that will hold steady – no matter how search evolves.

1. Focusing on search intent: Why people search 

Where people search and find information will continue to change over time as preferences for LLMs, social media, or video content shape where people go for answers. 

However, what remains constant is search intent.

Users are always looking to:

  • Learn about something.
  • Navigate to brands they know.
  • Compare and evaluate different options.
  • Purchase or convert. 

Focus on the why behind a search – the intent driving it. 

The key question is whether your content aligns with that intent. 

If it doesn’t, you’re overlooking a critical driver of user behavior. 

When content matches search intent, users immediately recognize its value and engage, which is why intent should remain central to your marketing strategy.

2. The lasting value of brand recognition and loyalty

Even as AI continues to drive change in how companies reach their audiences, brand recognition and loyalty remain important pillars of long-term engagement and growth.

Discovery channels are shifting as people find brands through social media, search engines, paid ads, email, and more.

That’s why it’s important to continually reassess the customer journey and understand where your audience is finding you.

After discovery, your job is to highlight your unique value – what sets you apart from competitors and how you provide real value to your audience. 

Keep asking yourself:

  • Why should someone choose my brand?
  • What makes us stand out?

The clearer and more consistently you communicate this in the spaces that matter, the more you’ll earn trust, recognition, and reliability – all of which shape how people respond to your brand.

Brand loyalty isn’t automatic. It’s something you earn by building real relationships with your customers and consistently providing value. 

Loyalty creates long-term stability and growth, even as platforms and algorithms continue to shift. 

While search intent and brand recognition can attract new visitors, loyalty turns impressions into conversions and builds lasting customer lifetime value.

3. Knowing and understanding your audience

Beyond search intent and branding, truly knowing your audience is essential for long-term marketing success. 

Without that insight, you risk falling into “spray and pray” campaigns that waste resources and fail to connect.

Building clear audience personas helps you decide not just what campaigns and content to create, but how to present them in ways that resonate. 

That means understanding who your audience is, what motivates them, their pain points, their values, and where they spend their time. 

These insights form the foundation of a strategy built to genuinely connect with your audience.

Dig deeper: SEO personas for AI search: How to go beyond static profiles

4. Trustworthiness is currency

Even if your content matches search intent and your brand is well recognized, audiences won’t engage without credibility. 

This matters even more today, as AI tools summarize information and highlight only the most trustworthy sources. 

Both search engines and AI prioritize trust signals. But for users, those signals are everything.

Expertise, consistency, transparency, and reliability are what build that trust. 

People want to feel confident that your brand will deliver on its promises and provide lasting value. 

When they believe you’re dependable, they’re more likely to engage, return, and recommend your brand to others.

5. Customer service and experience drive perception

Today, customer service is inseparable from brand experience. 

Every interaction – whether answering a support ticket or replying to a social media comment – shapes how people perceive your credibility and value.

Testimonials and reviews create a powerful feedback loop: one story sparks another, influencing how others view your brand. 

Campaigns can drive visibility, but audiences still turn to peer reviews on platforms like Reddit to validate those impressions and decide whether to trust you.

Audience sentiment has become its own form of publicity. 

With user-generated content (UGC) shaping perception and AI systems relying on reviews and sentiment signals to recommend brands, customer experience is now a direct driver of both reputation and visibility.

Dig deeper: How to use SEO and CX for better organic performance

6. Good user experience supports conversions

A core principle that hasn’t changed is the need for an optimized user experience. 

When someone lands on your site, the page should minimize friction in the buying journey. 

Whether visitors arrive through ads or organic search, they need clear conversion paths that guide them smoothly forward.

Audiences expect ease and clarity when looking for information or taking action. 

Slow load times, unnecessary clicks, or confusing layouts increase drop-offs, abandoned forms, and carts – leaving users frustrated.

A good user experience makes the journey to conversion as effortless as possible. 

Done well, it not only boosts conversions but also builds satisfaction and trust.

7. Mobile-first experiences: Meeting users where they are 

AI may be transforming how people search, but mobile devices remain the primary way users access and engage with brands. 

For many, the first interaction with your brand happens on a phone.

That’s why user experience must extend beyond conversion paths.

It also has to be fully optimized for mobile. Otherwise, you risk frustration, lost trust, and missed conversions.

Mobile users abandon sites that load slowly, require pinching and zooming, use hard-to-tap buttons, or rely on clunky forms.

Even a few seconds of delay or disruptive layout shifts can cause drop-offs.

And because search engines prioritize mobile-friendliness, optimizing for mobile isn’t just about usability. It also directly impacts rankings and visibility.

8. Accessibility is essential

Accessibility is a core part of creating inclusive experiences for your entire audience.

In the U.S., it’s also a legal responsibility. 

Making your site accessible means adding features like:

  • Screen reader compatibility.
  • Alt text for images.
  • Strong color contrast.
  • Keyboard navigation.

If accessibility is overlooked, you risk excluding parts of your audience and facing ADA lawsuits.

But when you design with accessibility in mind, you reach more people, strengthen trust, and ensure everyone can engage with your brand.

9. Quality content and authority still define success

No matter how search evolves, quality content and authority remain the foundation of visibility and trust. 

Algorithms may shift and discovery channels may change, but users will always value content that is accurate, relevant, and genuinely helpful.

Authority is earned over time by:

  • Consistently publishing original, reliable content.
  • Being cited by other trusted sources. 

The more credibility your brand builds, the more likely users (and search engines) are to consider you a worthwhile recommendation.

Dig deeper: Mastering content quality: The ultimate guide

Marketing that lasts beyond AI

AI is transforming how people search and how brands reach them, but the fundamentals of marketing haven’t changed. 

What still matters is:

  • Understanding intent.
  • Knowing your audience.
  • Meeting them where they are.
  • Building trust and loyalty.
  • Delivering real value.

Technology and platforms will keep evolving. 

But the brands that stay grounded in these timeless principles will be the ones that adapt, grow, and thrive in the future.

Google AI, ChatGPT rarely agree on brand recommendations: Data

Google’s AI Overviews and AI Mode and OpenAI’s ChatGPT often give consumers different brand recommendations – a potential warning sign for marketers chasing AI visibility – according to a new BrightEdge analysis.

The big picture. ChatGPT and Google’s AI Mode and AI Overviews disagreed on brand recommendations nearly two-thirds of the time (61.9%), according to BrightEdge’s analysis of tens of thousands of identical prompts.

By the numbers. Just 33.5% of queries included brands across all three platforms, and only 4.6% had no brands mentioned anywhere. Other key findings:

  • Google AI Overviews dominate: Google’s AI Overviews surfaced brands in 36.8% of queries, while ChatGPT led in just 3.9%.
  • Brand density: Google AI Overviews averaged 6.02 brands per query, more than 2.5x higher than ChatGPT’s 2.37 and far ahead of AI Mode’s 1.59.
  • Silence rates: ChatGPT offered no brand mentions in 43.4% of queries. Google AI Mode stayed silent 46.8% of the time, compared to just 9.1% for AI Overviews.

The citation paradox. The study also uncovered stark differences in citation behavior:

  • ChatGPT mentions more than it cites, with 3.2x more brand mentions (2.37) than citations (0.73).
  • Google AI Overviews cites far more than it mentions (14.30 citations vs. 6.02 mentions).
  • Google AI Mode shows an even bigger gap — 6x more citations than mentions (9.49 vs. 1.59).

This data may suggest that ChatGPT’s responses lean heavily on its training patterns, while Google emphasizes visible source attribution.

Where platforms align. The rare moments of brand alignment depended on query intent:

  • Compare queries: 80% same-brand agreement.
  • Buy queries: 62%.
  • Where queries: 38%.
  • Best queries: 23%.

Industry breakdown. Disagreement rates also varied by sector:

  • Healthcare: 68.5%
  • Education: 62.1%
  • B2B Tech: 61.7%
  • Finance: 57.9%
  • Ecommerce: 57.1% (lowest)

Why we care. For brands, these findings highlight a volatile AI landscape where visibility is far from guaranteed – and often inconsistent. As BrightEdge notes, the fragmentation creates “massive untapped visibility opportunities” for companies optimizing for generative search.

The report. ChatGPT vs Google AI: 62% Brand Recommendation Disagreement

Google Ads adds loyalty features to boost shopper retention

Google is rolling out new loyalty integrations across Google Ads and Merchant Center, giving retailers tools to highlight member-only pricing and shipping benefits to their most valuable customers.

How it works:

  • Personalized annotations display member-only discounts or shipping benefits in both free and paid listings.
  • A new loyalty goal in Google Ads helps retailers optimize budgets toward high-value shoppers, adjusting bids to prioritize lifetime value.
  • Sephora US saw a 20% lift in CTR by surfacing loyalty-tier discounts in personalized ads.

Why we care. With 61% of U.S. adults saying tailored loyalty programs are the most compelling part of a personalized shopping experience (according to Google), retailers face pressure to prove value beyond discounts.

By surfacing member-only perks directly in search and shopping results, retailers can boost engagement from their most valuable customers and optimize spend toward higher lifetime value, not just single conversions. It’s a way to tie loyalty programs directly to ad performance — and win more share of wallet from existing shoppers.

The big picture. Loyalty features are Google’s latest move to keep retail advertisers invested in its ecosystem — positioning search and shopping as not just discovery channels, but retention engines. Expect more details at Google’s Think Retail event on Sept. 10.

AI tool adoption jumps to 38%, but 95% still rely on search engines

More than 1 in 5 Americans now use AI tools heavily – but traditional search engines remain dominant, with usage holding steady at 95%, according to new clickstream data from Datos and SparkToro.

By the numbers:

  • AI tools: 21% of U.S. users access AI tools like ChatGPT, Claude, Gemini, Copilot, Perplexity, and Deepseek 10+ times per month. Overall adoption has jumped from 8% in 2023 to 38% in 2025.
  • Search engines: 95% of Americans still use Google, Bing, Yahoo, or DuckDuckGo monthly, with 87% considered heavy Google users – up from 84% in 2023.
  • Growth trends: AI adoption is slowing. Since September 2024, no month has shown more than 1.1x growth. By contrast, search volume per user has slightly increased year-over-year.

The big picture: Despite the hype around AI replacing Google, the data seems to show the opposite. When people adopt AI tools, their Google searches also rise, SparkToro found.

Yes, but. Are there any truly “traditional search engines” left? Things get a bit messy when talking about “traditional search engines” versus the “AI tools” examined here, because all search engines now have AI baked in:

  • Google is a traditional search engine (or, perhaps more accurately, an AI search engine with a legacy search experience that’s clearly moving in the direction of AI Overviews and AI Mode), and Gemini is Google’s AI tool. Plus, Google’s traditional search data is being used by ChatGPT.
  • Traditional search engine Bing has its own AI tool, Copilot, and is OpenAI’s partner for ChatGPT Search, and feeds DuckDuckGo’s results.

Why we care. This data once again indicates that it isn’t AI vs. search; it’s AI plus search. Heavy AI users are also heavy searchers, meaning Google traffic declines are more about zero-click answers than AI cannibalization.

The report. New Research: 20% of Americans use AI tools 10X+/month, but growth is slowing and traditional search hasn’t dipped

Google releases August 2025 spam update

Google released its August 2025 spam update today, the company announced at 12:05 p.m. This is Google’s first announced algorithm update since the June 2025 core update. It is Google’s first spam update of 2025 and the first since December.

Timing. Google called this a “normal spam update” and it will take a “few weeks” to finish rolling out.

The announcement. Google announced:

  • “Today we released the August 2025 spam update. It may take a few weeks to complete. This is a normal spam update, and it will roll out for all languages and locations. We’ll post on the Google Search Status Dashboard when the rollout is done.”

Previous spam updates. Before today, Google’s last spam update was released Dec. 19 and finished rolling out Dec. 26; it was more volatile than the June 2024 spam update, which was released June 20, 2024 and completed rolling out June 27, 2024.

Why we care. This is the first Google algorithm update since the June 2025 core update. It’s unclear what type of spam this update is targeting, but if you see any ranking or traffic changes in the next few weeks, it could be due to this update.

ChatGPT’s answers came from Google Search after all: Report

Multiple tests have suggested ChatGPT is using Google Search. Well, a new report seems to confirm ChatGPT is indeed using Google Search data.

  • OpenAI quietly used (and may still be using) a Google Search scraping service to power ChatGPT’s answers on real-time topics like news, sports, and finance, according to The Information.

The details. OpenAI used SerpApi, an 8-year-old scraping firm, to extract Google results.

  • Google has reportedly long tried to block SerpApi’s crawler, though it’s unclear how effective those efforts have been.
  • Other SerpApi customers reportedly include Meta, Apple, and Perplexity.

Zoom out. This revelation contrasts with OpenAI’s public stance that ChatGPT search relies on its own crawler, Microsoft Bing, and licensed publisher data.

Meanwhile. OpenAI CEO Sam Altman recently dismissed Google Search, saying:

  • “I don’t use Google anymore. I legitimately cannot tell you the last time I did a Google search.” 

Well, based on this news, it seems like he probably is using Google Search all the time within his own product.

Why we care. Google’s search index remains the foundation of online discovery – so much so that even its biggest AI search rival appears to be using it to partially power ChatGPT. This is yet another reminder that SEO isn’t going anywhere just yet. If Google’s results are valuable to OpenAI, they remain essential for driving visibility, traffic, and business outcomes.

The report. OpenAI Is Challenging Google—While Using Its Search Data (subscription required)

Historic recurrence in search: Why AI feels familiar and what’s next

Historic recurrence is the idea that patterns repeat over time, even if the details differ.

In digital marketing, change is the only constant.

Over the last 30 years, we’ve seen nonstop shifts and transformations in platforms and tactics.

Search, social, and mobile have each gone through their own waves of evolution. 

But AI represents something bigger – not just another tactic, but a fundamental shift in how people research, evaluate, and buy products and services.

Estimates vary, but Gartner projects that AI-driven search could account for 25% of search volume by the end of 2026.

I suspect the true share will be much higher as Google weaves AI deeper into its results.

For digital marketers, it can feel like we need a crystal ball to predict what’s next. 

While we don’t have magical foresight, we do have the next best thing: lessons from the past.

This article looks back at the early days of search, how user behavior evolved alongside technology, and what those patterns can teach us as we navigate the AI era.

The early days: Wild and wonderful queries

If you remember the early web – AltaVista, Lycos, Yahoo, Hotbot – search was a free-for-all. 

People typed in long, rambling queries, sometimes entire sentences, other times just a few random words that “felt” right.

There were no search suggestions, no “people also ask,” and no autocorrect. 

It was a simpler time, often summed up as “10 blue links.”

Searchers had to experiment, refine, and iterate on their own, and the variance in query wording was huge.

For marketers, that meant opportunity. 

You could capture traffic in all sorts of unexpected ways simply by having relevant pages indexed.

Back then, SEO was, in large part, about one thing: existing in the index.

Dig deeper: A guide to Google: Origins, history and key moments in search

Google’s rise: From exploration to efficiency

Anyone working in digital marketing in the early 2000s will remember. 

From Day 1, Google felt different. The quality of its results was markedly better.

Then came Google Suggest in 2008, quietly changing the game. 

Suddenly, you didn’t have to finish typing your thought. Google would complete it for you, based on the most common searches.

Research from Moz and others at the time showed that autocomplete reduced query length and variance. 

People defaulted to Google’s suggestions because it was faster and easier.

This marked a significant shift in our behavior as searchers. We moved from sprawling, exploratory queries to shorter, more standardized ones.

It’s not surprising. When something can be achieved with less effort, human nature drives us toward the path of least resistance.

Once again, technology had changed how we search and find information.

Mobile, voice, and the second compression

The shift to mobile accelerated this compression.

Tiny keyboards and on-the-go contexts meant people typed as little as possible.

Autocomplete, voice input, and “search as you type” all encouraged brevity.

At the same time, Google kept rolling out features that answered questions directly, creating a blended, multi-contextual SERP.

The cumulative effect? Search behavior became more predictable and uniform.

For marketers running Google Ads or tracking performance in Google Analytics and Search Console, this shift came with another challenge: less data. 

Long-tail keywords shrank, while most traffic and budget concentrated on a smaller set of high-volume terms.

Once again, our search behavior – and the insights we could glean from it – had evolved.

Zero-click search and the walled garden

By the late 2010s, zero-click searches were on the rise. 

Google – and even social platforms – wanted to keep users inside their ecosystems.

More and more questions were answered directly in the search results. 

Search got smarter, and shorter queries could deliver more refined results thanks to personalization and past interactions.

Google started doing everything for us.

Search for a flight? You’d see Google Flights.

A restaurant? Google Maps. 

A product? Google Shopping. 

Information? YouTube.

You get the picture.

For businesses built on organic traffic, this shift was disruptive. 

But for users, it felt seamless – arguably a better experience, even if it created new challenges for optimizers.

Quality vs. brevity

This shift worked – until it didn’t. 

One common complaint today is that search results feel worse.

It’s a complicated issue to unpack. 

  • Have search results actually gotten worse? 
  • Or are the results as good as ever, but the underlying sites have declined in quality?

It’s tricky to call. 

What is certain is that as traffic declined, many sites got more aggressive – adding more ads, more pop-ups, and sneakier lead gen CTAs to squeeze more value from fewer clicks.

The search results themselves have also become a bewildering mix of ads, organic listings, and SERP features. 

To deliver better results from shorter queries, search engines have had to guess at intent while still sending enough clicks to advertisers and publishers to keep the ecosystem running.

And as traffic-starved publishers got more desperate, user experience took a nosedive. 

Anyone who has had to scroll through a food blogger’s life story – while dodging pop-ups and auto-playing ads – just to get to a recipe knows how painful this can be.

It’s this chaotic landscape that, in part, has driven the move to answer engines like ChatGPT and other large language models (LLMs). 

People are simply tired of panning for gold in the search results.

The AI era: From compression back to conversation

Up to this point, the pattern has been clear: the average query length kept getting shorter.

But AI is changing the game again, and the query-length pendulum is now swinging sharply in the opposite direction.

Tools like ChatGPT, Claude, Perplexity, and Google’s own AI Mode are making it normal to type or speak longer, more detailed questions again.

We can now:

  • Ask questions instead of searching for keywords. 
  • Refine queries conversationally. 
  • Ask follow-ups without starting over. 

And as users, we can finally skip the over-optimized lead gen traps that have made the web a worse place overall.

Here’s the key point: we’ve gone from mid-length, varied queries in the early days, to short, refined queries over the last 12 years or so, and now to full, detailed questions in the AI era.

The way we seek information has changed once more.

We’re no longer just searching for sources of information. We’re asking detailed questions to get clear, direct answers.

And as AI becomes more tightly integrated into Google over the coming months and years, this shift will continue to reshape how we search – or, more accurately, how we question – Google.

Dig deeper: SEO in an AI-powered world: What changed in just a year

AI and search: Google playing catch-up

Google was a little behind the AI curve.

ChatGPT launched in late 2022 to massive buzz and unprecedented adoption.

Google’s AI Overviews – frankly underwhelming by comparison – didn’t roll out until mid-2024. 

After launching in the U.S. in mid-June and the U.K. in late July 2025, Google’s full AI Mode is now available in 180 countries and territories around the world.

Now, we can ask more detailed, multi-part questions and get thorough answers – without battling through the lead gen traps that clutter so many websites.

The reality is simple: this is a better system.

This is progress.

Want to know the best way to boil an egg – and whether the process changes for eggs stored in the fridge versus at room temperature? Just ask.

Google will often decide if an AI Overview is helpful and generate it on the fly, considering both parts of your question.

  • What is the best way to boil an egg?
  • Does it differ if they are from the fridge?

The AI Overview answers the question directly. 

And if you want to keep going, you can click the bold “Dive deeper in AI Mode” button to continue the conversation.


Inside AI Mode, you get streamlined, conversational answers to questions that traditional search could answer – just without the manual trawling or the painfully over-optimized, pop-up-heavy recipe sites.

From shorter queries to shorter journeys

Stepping back, we can see how behavior is shifting – and how it ties to human nature’s tendency to seek the path of least resistance.

The “easy” option used to be entering short queries and wading through an increasingly complex mix of results to find what you needed.

Now, the path of least resistance is to put in a bit more effort upfront – asking a longer, more refined question – and let the AI do the heavy lifting.

A search for the best steak restaurant nearby once meant seven separate queries and reviewing over 100 sites. That’s a lot of donkey work you can now skip.

It’s a subtle shift: slightly more work up front, but a far smoother journey in return.

This change also aligns with a classic computing principle: GIGO – garbage in, garbage out. 

A more refined, context-rich question gives the system better input, which produces a more useful, accurate output.

Historic recurrence: The pattern revealed

Looking back, it’s clear there’s a repeating cycle in how technology shapes search behavior.

The early web (1990s)

  • Behavior: Long, experimental, often clumsy queries.
  • Why: No guidance, poor relevance, and lots of trial-and-error.
  • Marketing lesson: Simply having relevant content was often enough to capture traffic.

Google + Autocomplete (2000s)

  • Behavior: Queries got shorter and more standardized.
  • Why: Google Suggest and smarter algorithms nudged users toward the most common phrases.
  • Marketing lesson: Keyword targeting became more focused, with heavier competition around fewer, high-volume terms.

Mobile and voice era (2010s–early 2020s)

  • Behavior: Even shorter, highly predictable queries.
  • Why: Tiny keyboards, voice assistants, and SERP features that answered questions directly.
  • Marketing lesson: The long tail collapsed into clusters. Zero-click searches rose. Winning visibility meant optimizing for snippets and structured data.

AI conversation era (2023–present)

  • Behavior: Longer, natural-language queries return – now in back-and-forth conversations.
  • Why: Generative AI tools like ChatGPT, Gemini, and Perplexity encourage refinement, context, and multi-step questions.
  • Marketing lesson: It’s no longer about just showing up. It’s about being the best answer – authoritative, helpful, and easy for AI to surface.

Technology drives change

The key takeaway is that technology drives changes in how people ask questions.

And tactically, we’ve come full circle – closer to the early days of search than we’ve been in years.

Despite all the doom and gloom around SEO, there’s real opportunity in the AI era for those who adapt.

What this means for SEO, AEO, LLMO, GEO – and beyond

The environment is changing.

Technology is reshaping how we seek information – and how we expect answers to be delivered.

Traditional search engine results are still important. Don’t abandon conventional SEO.

But now, we also need to optimize for answer engines like ChatGPT, Perplexity, and Google’s AI Mode.

That means developing deeper insight into your customer segments and fully understanding the journey from awareness to interest to conversion. 

  • Talk to your customers. 
  • Run surveys. 
  • Reach out to those who didn’t convert and ask why. 

Then weave those insights into genuinely helpful content that can be found, indexed, and surfaced by the large language models powering these new platforms.

It’s a brave new world – but an incredibly exciting one to be part of.


How to tell if Google Ads automation helps or hurts your campaigns

Smart Bidding, Performance Max, and responsive search ads (RSAs) can all deliver efficiency, but only if they’re optimizing for the right signals.

The issue isn’t that automation makes mistakes. It’s that those mistakes compound over time.

Left unchecked, that drift can quietly inflate your CPAs, waste spend, or flood your pipeline with junk leads.

Automation isn’t the enemy, though. The real challenge is knowing when it’s helping and when it’s hurting your campaigns.

Here’s how to tell.

When automation is actually failing

These are cases where automation isn’t just constrained by your inputs. It’s actively pushing performance in the wrong direction.

Performance Max cannibalization

The issue

PMax often prioritizes cheap, easy traffic – especially branded queries or high-intent searches you intended to capture with Search campaigns. 

Even with brand exclusions, Google still serves impressions against brand queries, inflating reported performance and giving the illusion of efficiency. 

On top of that, when PMax and Search campaigns overlap, Google’s auction rules give PMax priority, meaning carefully built Search campaigns can lose impressions they should own.

A clear sign this is happening: if you see Search Lost IS (rank) rising in your Search campaigns while PMax spend increases, it’s likely PMax is siphoning traffic.

Recommendation

Use brand exclusions and negatives in PMax to block queries you want Search to own. 

Segment brand and non-brand campaigns so you can track each cleanly. And to monitor branded traffic specifically, tools like the PMax Brand Traffic Analyzer (by Smarter Ecommerce) can help.

Dig deeper: Performance Max vs. Search campaigns: New data reveals substantial search term overlap
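The warning sign above lends itself to a simple check. Here’s a minimal Python sketch that flags weeks where PMax spend and Search Lost IS (rank) rose together, using a hypothetical weekly export; the column names and figures are illustrative, not from any real account:

```python
# Hypothetical weekly export rows; field names and numbers are illustrative.
def cannibalization_weeks(rows):
    """Flag weeks where PMax spend rose AND Search Lost IS (rank) rose."""
    flagged = []
    for prev, curr in zip(rows, rows[1:]):
        if (curr["pmax_spend"] > prev["pmax_spend"]
                and curr["lost_is_rank"] > prev["lost_is_rank"]):
            flagged.append(curr["week"])
    return flagged

weeks = [
    {"week": "W1", "pmax_spend": 1200, "lost_is_rank": 8.0},
    {"week": "W2", "pmax_spend": 1500, "lost_is_rank": 11.5},
    {"week": "W3", "pmax_spend": 1400, "lost_is_rank": 12.0},
    {"week": "W4", "pmax_spend": 1700, "lost_is_rank": 15.2},
]
print(cannibalization_weeks(weeks))  # → ['W2', 'W4']
```

Correlation isn’t proof of cannibalization, but flagged weeks tell you where to dig into search term overlap first.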

Auto-applied recommendations (AAR) rewriting structure

The issue

AARs can quietly restructure your campaigns without you even noticing. This includes:

  • Adding broad match keywords. 
  • “Upgrading” existing keywords to broader match types.
  • Adding new keywords that are sometimes irrelevant to your targeting.

Google has framed these “optimizations” as efficiency improvements, but the issue is that they can destabilize performance. 

Broad keywords open the door to irrelevant queries, which then can spike CPA and waste budget.

Recommendation

First, opt out of AARs and manually review all recommendations moving forward. 

Second, audit the changes that have already been made by going to Campaigns > Recommendations > Auto Apply > History. 

From there, you can see what change happened on what date, which allows you to go back to your campaign data and see if there are any performance correlations. 

Dig deeper: Top Google Ads recommendations you should always ignore, use, or evaluate

Modeled conversions inflating numbers

The issue

Modeled conversions can climb while real sales or MQLs stay flat. 

For example, you may see a surge in reported leads or purchases in your ads account, but when you look at your CRM, the numbers don’t match up. 

This happens because Google uses modeling to estimate conversions where direct measurement isn’t possible. 

If Google doesn’t have full tracking, it fills gaps by estimating conversions it can’t directly track, based on patterns in observable data. 

When left unchecked, the automation will double down on these patterns (because it assumes they’re correct), wasting budget on traffic that looks good but won’t convert.

Recommendation

Tell the automation what matters most to your business. 

Import offline or qualified conversions (via Enhanced Conversions, manual uploads, or CRM integration). 

This will ensure that Google optimizes for real revenue and not modeled noise.

When automation is boxed in: Reading the signals

Not every warning in Google means automation is failing. 

Sometimes the system is limited by the goals, budget, or inputs you’ve set – and it’s simply flagging that.

These diagnostic signals help you understand when to adjust your setup instead of blaming the algorithm.

Limited statuses (red vs. yellow)

The issue

A Limited status doesn’t always mean your campaign is broken. 

  • A red Limited label means your settings are too strict. That could be unrealistic CPA or ROAS targets, a budget that’s too low, and so on. 
  • A yellow Limited label is more of a caution sign. It’s usually tied to low volume, limited data, or a campaign that is still learning.

Recommendation

If the status is red, loosen constraints gradually: raise your budget and ease CPA/ROAS targets by 10–15%. 

If the status is yellow, don’t panic. It’s Google’s way of telling you the campaign could use more budget if possible, but it’s not vital to the campaign’s success.

Responsive search ads (RSAs) inputs

The issue

RSAs are assembled in real time from the headlines and descriptions you’ve already provided to Google. 

At a minimum, advertisers must supply 3 headlines, up to a maximum of 15 (and up to 4 descriptions). The fewer assets you give the system, the less flexibility it has. 

On the other hand, if you’re running a small budget and give an RSA all 15 headlines and 4 descriptions, Google won’t be able to collect enough data to figure out which combinations actually work.

The automation isn’t failing in either case. You’ve either given it too little information, or too much with too little spend. 

Recommendation

Match asset volume to the budget allocated to the campaign. 

  • If you’re unsure, aim for 8-10 headlines and 2-4 descriptions.
  • If a headline or description isn’t distinct from the others, leave it out. 

Conversion reporting lag and attribution issues

The issue

Sometimes, Google Ads reports fewer conversions than your business actually sees. 

This isn’t necessarily an automation failure. It’s often just a matter of when the conversion is counted. 

By default, Google reports conversions on the day of the click, not the day the actual conversion happened. 

That means if you check performance mid-week, you might see fewer conversions than your campaign has actually generated because Google attributes them back to the click date. 

The data usually “catches up” as lagging conversions are processed.

Recommendation

Use the Conversions (by conversion time) column alongside the standard conversion column.

Conversions (by conversion time) column

This helps you separate true performance drops from simple reporting delays. 

If discrepancies persist beyond a few days, investigate the tracking setup or import accuracy. Don’t assume automation is broken because of timing gaps.


Where to look in the Google Ads UI

Automation leaves a clear trail within Google Ads if you know where to look. 

Here are some reports and columns to help spot when automation is drifting.

Bid Strategy report: Top signals 

The issue

The bid strategy report shows some of the signals Smart Bidding relies on when there is enough data. 

The “top signals” sometimes make sense; at other times, they can be misleading. 

If the algorithm relies on weak signals (e.g., broad search themes and a lack of first-party data), its optimizations will be weak, too.


Recommendation

Make checking your Top Signals a regular activity. 

If they don’t align with your business, fix the inputs. 

  • Improve conversion tracking.
  • Import offline conversions.
  • Reevaluate search themes.
  • Add customer/remarketing lists.
  • Expand your negative keyword list(s). 

Impression share metrics

The issue

When a campaign underdelivers, it’s tempting to assume automation is failing, but looking at Impression Share (IS) metrics tends to reveal the real bottleneck. 

By looking at Search Lost IS (budget), Search Lost IS (rank), and Absolute Top IS together, you can separate automation problems from structural or competitive ones.

How to use IS metrics as a diagnostic tool:

  • Budget problem
    • High Lost IS (budget) + low Lost IS (rank): Your campaign isn’t struggling. It just doesn’t have enough budget to run properly.
    • Recommendation: Raise the budget or accept capped volume.
  • Targets too aggressive
    • High Lost IS (rank) + low Absolute Top IS: If your Lost IS (rank) is high and your budget is adequate, your CPA/ROAS targets are likely too aggressive, causing Smart Bidding to underbid in auctions.
    • Recommendation: Loosen targets gradually (10-15%).
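To make the triage above concrete, here’s a rough Python sketch of the decision logic. The 20% and 15% thresholds are illustrative assumptions for the example, not Google-defined cutoffs; tune them to your account’s baselines:

```python
def diagnose_is(lost_budget, lost_rank, abs_top_is):
    """Rough triage of Impression Share metrics (all percentages, 0-100).
    Thresholds are illustrative, not official Google guidance."""
    if lost_budget >= 20 and lost_rank < 10:
        return "budget problem: raise budget or accept capped volume"
    if lost_rank >= 20 and abs_top_is < 15:
        return "targets too aggressive: loosen CPA/ROAS by 10-15%"
    return "no obvious IS bottleneck: look elsewhere"

print(diagnose_is(lost_budget=35, lost_rank=5, abs_top_is=40))
print(diagnose_is(lost_budget=5, lost_rank=40, abs_top_is=8))
```

Run against each campaign’s IS columns, this separates “feed it more budget” from “your targets are choking the bidding” at a glance.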

Scripts to keep automation honest

Scripts give you early warnings so you can step in before wasted spend piles up.

Anomaly detection

  • The issue: Automation can suddenly overspend or underspend when conditions in the marketplace change, but you often won’t notice until reporting lags.
  • Recommendation: Use an anomaly detection script to flag unusual swings in spend, clicks, or conversions so you can investigate quickly.
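Google Ads scripts run as JavaScript inside the account; as a language-neutral illustration of the same idea, this Python sketch applies a simple z-score test to a series of daily spend figures. The cutoff and the numbers are made up for the example:

```python
from statistics import mean, stdev

def spend_anomalies(daily_spend, z_cutoff=2.0):
    """Return indices of days whose spend deviates more than z_cutoff
    standard deviations from the series mean. Illustrative cutoff only;
    on short series a single outlier can't reach very high z-scores."""
    mu, sigma = mean(daily_spend), stdev(daily_spend)
    if sigma == 0:
        return []  # perfectly flat spend: nothing to flag
    return [i for i, s in enumerate(daily_spend)
            if abs(s - mu) / sigma > z_cutoff]

spend = [100, 104, 98, 101, 99, 400, 102, 97]  # day index 5 is a spike
print(spend_anomalies(spend))  # → [5]
```

A production script would pull the same daily figures from reporting and email you when a day is flagged; the statistics are the easy part.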

Query quality (N-gram analysis)

  • The issue: Broad match and PMax can drift into irrelevant themes (“free,” “jobs,” “definition”), wasting budget on low-quality queries.
  • Recommendation: Run an N-gram script to surface recurring poor-quality terms and add them as negatives before automation optimizes toward them.
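An N-gram script is usually written in Google Ads’ JavaScript environment; the core aggregation, sketched here in Python over a hypothetical (search term, cost) export, is just cost totals per word or phrase. Terms and costs below are invented for the example:

```python
from collections import Counter

def ngram_spend(search_terms, n=1):
    """Aggregate cost by n-gram across search terms to surface
    recurring themes worth adding as negatives."""
    totals = Counter()
    for term, cost in search_terms:
        words = term.lower().split()
        for i in range(len(words) - n + 1):
            totals[" ".join(words[i:i + n])] += cost
    return totals.most_common()

terms = [
    ("free crm software", 4.20),
    ("crm software jobs", 3.10),
    ("best crm software", 9.80),
    ("free crm template", 2.50),
]
for gram, cost in ngram_spend(terms)[:3]:
    print(gram, round(cost, 2))
```

Sorting by cost (rather than frequency) surfaces the words actually burning budget; a recurring “free” or “jobs” near the top is a negative-keyword candidate.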

Budget pacing

  • The issue: Google won’t exceed your monthly cap, but daily spend will be uneven. Pacing scripts help you spot front-loading.
  • Recommendation: A pacing script shows you how spend is distributed so you can adjust daily budgets mid-month or hold back funds when performance is weak.
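The pacing logic itself is simple enough to sketch outside the scripts environment. This Python snippet compares month-to-date spend against even pacing; a ratio above 1 suggests front-loading (the figures are illustrative):

```python
import datetime

def pacing_ratio(spend_to_date, monthly_budget, today=None):
    """Actual spend so far divided by even-pacing expectation.
    >1 means front-loaded; <1 means under-pacing."""
    today = today or datetime.date.today()
    # Last day of the current month: jump past day 28, snap to the 1st
    # of next month, then step back one day.
    month_end = (today.replace(day=28) + datetime.timedelta(days=4)) \
        .replace(day=1) - datetime.timedelta(days=1)
    expected = monthly_budget * today.day / month_end.day
    return spend_to_date / expected

# E.g. $1,400 spent by March 10 against a $3,100 monthly budget:
print(round(pacing_ratio(1400, 3100, datetime.date(2025, 3, 10)), 2))  # → 1.4
```

A ratio of 1.4 ten days in means roughly 40% ahead of even pacing, which is your cue to trim daily budgets before the monthly cap forces an abrupt stop.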

Turning automation into an asset

Automation rarely fails in dramatic ways – it drifts. 

Your job isn’t to fight it, but to supervise it: 

  • Supply the right signals.
  • Track when it goes off course.
  • Step in before wasted spend compounds.

The diagnostics we covered – impression share, attribution checks, PMax insights, and scripts – help you separate real failures from cases where automation is simply following your inputs.

The key takeaway: automation is powerful, but not self-policing. 

With the right guardrails and oversight, it becomes an asset instead of a liability.


Global expansion and hyperlocal focus redefine the next chapter of retail media networks by DoorDash

Retail media networks are projected to be worth $179.5 billion by 2025, but capturing share and achieving long-term success won’t hinge solely on growing their customer base. With over 200 retail media networks now competing for advertiser attention, the landscape has become increasingly complex and crowded. The RMNs that stand out will be those taking a differentiated approach to meeting the evolving needs of advertisers.

The industry’s concentration creates interesting dynamics. While some platforms have achieved significant scale, nearly 70% of RMN buyers cite “complexity in the buying process” as their biggest obstacle. That tension, between explosive growth and operational complexity, is forcing the industry to evolve beyond traditional approaches.

As the landscape matures, which strategies will define the next wave of growth: global expansion, hyperlocal targeting, or both?

The evolution of retail media platforms

To understand where the industry is heading, it’s worth examining how successful platforms are addressing advertisers’ core challenges. Lack of measurement standards across platforms continues to frustrate advertisers who want to compare performance across networks. Manual processes dominate smaller networks, making campaign management inefficient and time-consuming.

At the same time, most retailers lack the digital footprint necessary for standalone success. This has created opportunities for platforms that can solve multiple problems simultaneously: standardization, automation, and scale.

DoorDash represents an interesting case study in this evolution. The platform has built its advertising capabilities around reaching consumers at their moment of local need across multiple categories. With more than 42 million monthly active consumers as of December 2024, DoorDash provides scale and access to high-intent shoppers across various categories spanning restaurants, groceries and retail.

The company’s approach demonstrates how platforms can address advertiser pain points through technology. DoorDash’s recent platform announcement showcases this evolution: the company now serves advertisers with new AI-powered tools and expanded capabilities. Through its acquisition of ad tech platform Symbiosys, a next-generation retail media platform, brands can expand their reach into digital channels, such as search, social, and display, and retailers can extend the breadth of their retail media networks.

Global expansion meets local precision

International expansion presents both opportunities and challenges for retail media networks. Europe’s retail media industry is projected to surpass €31 billion by 2028. This creates opportunities for networks that can solve the technology puzzle of operating across multiple geographies.

The challenge lies in building platforms that work seamlessly across countries while maintaining local relevance. International expansion requires handling different currencies, regulations, and cultural contexts—capabilities that many networks struggle to develop.

DoorDash’s acquisition of Wolt illustrates how platforms can achieve global scale while maintaining local connections. The integration enables brands to manage campaigns across Europe and the U.S. through a single interface—exactly the kind of operational efficiency that overwhelmed advertisers seek.

The combined entity now operates across more than 30 countries, with DoorDash and Wolt Ads crossing an annualized advertising revenue run rate of more than $1 billion in 2024. What makes this expansion compelling isn’t just the scale—it’s how the integration maintains neighborhood-level precision across diverse geographies.

Wolt has transformed from a food delivery platform into what it describes as a multi-category “shopping mall in people’s pockets.”

The hyperlocal advantage: context beats demographics

Here’s what’s really changing the game: the shift from demographic targeting to contextual precision. Privacy regulations favor contextual targeting over behavioral tracking, but that’s not the only reason smart networks are going hyperlocal.

Location-based intent signals provide dramatically higher conversion probability than traditional demographics. Real-time contextual data—weather patterns, local events, proximity to fulfillment—influences purchase decisions in immediate, actionable ways that broad demographic targeting simply can’t match.

DoorDash built its entire advertising model around this insight, reaching consumers at the exact moment of local need across multiple categories. The platform provides scale and access to high-intent shoppers with contextual precision. A recent innovation that exemplifies this approach is Dayparting for CPG brands, which enables advertisers to target users in their local time zones—a level of time-based precision that distinguishes hyperlocal platforms from broader retail media networks.

In one example, Unilever applied Dayparting to focus on late-night and weekend windows for its ice cream campaigns, aligning ad delivery with peak demand periods. Over a two-week period, 77% of attributed sales were new-to-brand, demonstrating the power of contextual timing in driving incremental reach.

Major brands, including Unilever, Coca-Cola, and Heineken, utilize both DoorDash and Wolt platforms for hyperlocal targeting, proving the model is effective for both endemic and non-endemic advertisers seeking neighborhood-level precision.

Technology evolution: measurement and automation

The technical requirements for next-generation retail media networks extend far beyond basic advertising capabilities. Self-serve functionality has become standard for international geographies—not because it’s trendy, but because manual campaign management doesn’t scale across dozens of countries with different currencies, regulations, and cultural contexts.

Cross-country campaign management requires unified dashboards that manage complexity while maintaining simplicity for advertisers. Automation isn’t optional anymore; it’s necessary to compete with established players who’ve built machine learning into their core operations.

But here’s what’s really transforming measurement: new attribution methodologies that go beyond traditional ROAS. When platforms can integrate fulfillment data with advertising exposure, they enable real-time performance tracking that connects ad spend to actual business outcomes rather than just clicks and impressions.

Progress on standardization continues through IAB guidelines addressing measurement consistency, alongside industry pushes for technical integration standards. The challenge lies in balancing standardization with differentiation—networks need to offer easy integration and consistent measurement while maintaining unique value propositions.

In a move toward addressing advertisers’ need for measurement consistency, DoorDash recognized that restaurant brands valued both click and impression-based attribution for their sponsored listing ads, and recently introduced impression-based attribution and reporting in Ads Manager. This has enabled restaurant brands to gain a deeper understanding of performance and results driven on DoorDash.

Global technology challenges add another layer of complexity: multi-currency transactions, local payment methods, regulatory compliance across countries, and cultural adaptation while maintaining platform consistency. These aren’t afterthoughts for international platforms; they’re core competencies that determine success or failure.

Industry outlook: consolidation and opportunity

Retail media is heading toward consolidation, but not in the way most people expect. Hyperlocal networks are positioned to capture share from undifferentiated RMNs that compete solely on inventory volume. Geographic specialization is becoming a viable alternative to traditional scale-focused approaches.

Simultaneously, community impact measurement is gaining importance for brand strategy. Marketers are discovering that advertising dollars spent on local commerce platforms create multiplier effects—supporting neighborhood businesses and strengthening local economies in ways that traditional e-commerce advertising doesn’t achieve.

The networks that understand this dynamic, that can offer global platform capabilities with genuine local industry expertise, are the ones positioned to define retail media’s next chapter. Success requires technology integration that enables contextual and location-based targeting, plus measurement solutions that prove incrementality beyond traditional metrics.

The path forward

As retail media networks mature, success lies not in choosing between global scale and local relevance, but in achieving both simultaneously. The DoorDash-Wolt combination provides a compelling blueprint, demonstrating how technology platforms can enable international expansion while deepening neighborhood-level connections.

For marketers navigating this evolution, the fundamental question shifts from “where should we advertise?” to “how can we reach consumers at their moment of need?” Networks that answer it effectively, whether through global reach, hyperlocal precision, or ideally both, will write retail media’s next chapter.
