Posts

Minimizing Marketing Blind Spots: The New Era of Attribution

Attribution in the modern marketing age can be confusing. But the pressure on marketing teams to “prove what’s working” never goes away. 

Traditionally, marketers had certain data we could always rely on, but the data pool we can pull from seems to be growing and shrinking at the same time. Between privacy constraints, zero-click searches, AI Overviews, and channel-walled gardens, marketers are flying blind in more ways than they realize. Attribution has always been an imperfect science. And in 2025, it’s gone from fuzzy to fragmented.

If you’re planning marketing budgets and trying to defend where your spend is going, there’s no need to freak out. Marketing attribution is possible. It doesn’t look like it used to, though. And if you’re still only relying on touch-based models or last-click reports, you might be measuring the wrong things entirely.

Let’s break down where attribution is failing, what’s making it harder, and what forward-looking marketers are doing to close the gap.

Key Takeaway

  • Attribution challenges have multiplied due to AI, automation, and privacy shifts.
  • Walled gardens, offline sales, and dark social are major blind spots, and they often overlap.
  • Deterministic, touch-based attribution is giving way to modeled and probabilistic methods.
  • AI isn’t just the problem, it’s also part of the solution.
  • You don’t need perfect data. You need data that helps you make better decisions.

The New Face of Attribution

Attribution used to be about stitching together clicks. Now, we’re lucky if we get clicks at all thanks to zero-click search.

Today’s buyers bounce between different platforms on multiple devices and AI-curated content. They’re influenced by ads on a connected TV or product mentions in a ChatGPT thread, and neither of those leaves a clean digital trail.

Meanwhile, ad platforms like Meta and Google have leaned hard into automation. That means fewer transparent levers to optimize and more “black box” performance metrics. According to NP Digital analysis, there are over 90% fewer optimization permutations in Google and Meta Ads today compared to 2023. So yes, marketing attribution is back. But the infrastructure around it seems more broken than ever.

A graphic explaining the collapse of optimization levers.

Finding Marketing Blind Spots

Unfortunately, attribution blind spots don’t come with a warning light. You can be staring directly at your dashboard and still not notice traffic piling up in areas you’re not tracking. And the number of potential blind spots is growing.

Here are the big ones:

  • Walled Gardens: Platforms like Google, Meta, and Amazon are all powerful, but have become much more mysterious as search evolves. You’re renting their space, but if you don’t play by their rules, you may not get complete visibility.
  • Offline Sales: Leads turn into deals in CRMs, call centers, or retail. They may have started as a click, but the customer journey ends at a brick-and-mortar location or an entirely different platform than the original click.
  • Cross-Device Journeys: That ad someone saw on mobile might convert from their phone, but they could just as easily become a sale on their desktop or smart TV.
  • Building Awareness: Upper funnel spend (like digital out-of-home (OOH) or video) gets undervalued because it rarely leads to a direct conversion.
  • Dark Social: Private sharing (think WhatsApp, SMS, Signal) shows up in attribution models as “direct”, but it’s not.
  • LLM Traffic: People are discovering brands via large language models, and those referrals are often invisible in GA4.

To make matters worse, these blind spots can stack. Before you know it, you find yourself in a nightmare marketing scenario where you’re not just missing one data signal, you’re missing combinations of them, making optimization even harder.

A graphic that explains how multiple marketing blind spots can pile up.

New Attribution Trends and Technology

You can keep up with all of this. It just requires a switch in perspective. Marketers should evaluate their campaigns using a combination of modeled attribution and traditional touch-based metrics. You may never fully connect every dot, and that’s okay. The goal isn’t perfection, just enough clarity to defend marketing budget allocations.

Modern marketers are using these tools:

  • Incrementality testing: Geo holdouts and lift studies to isolate what’s actually moving the needle.
  • MMM (Marketing Mix Modeling): Especially useful for larger budgets or mixed channel strategies.
  • Correlation analysis: Pre/post testing, contextual lift, and even proxy signals like brand search volume.
  • Unified first-party data: Clean, consistent CRM and web data feeding both your models and your platforms.
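Incrementality testing in particular reduces to simple arithmetic once the holdout data is in hand. Here's a minimal sketch in Python; the function name and all figures are hypothetical, for illustration only:

```python
# Minimal sketch of a geo-holdout lift calculation.
# All numbers are hypothetical, for illustration only.

def incremental_lift(test_conv, control_conv, test_pop, control_pop):
    """Relative lift of exposed geos over held-out (control) geos."""
    test_rate = test_conv / test_pop
    control_rate = control_conv / control_pop
    # Lift = relative increase over the counterfactual (control) rate
    return (test_rate - control_rate) / control_rate

# Exposed geos convert at 1.2%, holdout geos at 1.0% -> 20% lift
print(f"{incremental_lift(1200, 1000, 100_000, 100_000):.0%}")  # → 20%
```

The control geos stand in for "what would have happened anyway," which is exactly the counterfactual that last-click reporting can't give you.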

The best strategies blend these methods based on spend level, complexity, and conversion volume. Leveraging AI in your marketing efforts is one of the best ways to automate this research as much as possible and maximize the benefit of these tactics. 

AI and Blind Spots

Some marketers may feel like AI is eroding attribution. While that could be true, the technology is also helping to rebuild it.

Here’s how AI is stepping in:

  • Generative AI: LLMs like ChatGPT are now discovery platforms. They drive traffic, but don’t always identify themselves unless you tag them.
  • AI coworkers: Agentic AI simulates user behavior, tests messaging, and can even help set up GA4 tracking automatically.
  • Machine learning models: Used in MMMs and platform attribution to refine forecasts, assign contribution, and make predictions.

Still, only 55% of marketers trust AI-generated insights, according to CoSchedule. The key is to treat AI as an assistant, not the authority. Use it to speed up testing and build models, but validate with your own data.

A graphic that explains how to introduce GenAI into reporting workflows.

Analytics platforms like Adobe Analytics are also taking steps to better capture attribution from AI tools. In October, Adobe released a new referrer type called “Conversational AI Tools” to segment traffic from ChatGPT and other LLMs out from the channels marketers have historically monitored.

Closing The Gap With Attribution Strategies

So, how do you go from blind spots to better planning? You don’t need perfect clarity. You need consistent signals and a smarter strategy.

Here are some ways marketers are closing attribution gaps:

  1. Clean your first-party data: Data from internal sources like your website and CRM needs to be trustworthy. These are your most important sources of truth.
  2. Use multipliers: Adjust performance based on geo lift or experiment results. Not every click counts equally.
  3. Invite questions: Models are approximations. Encourage teams to challenge them and make improvements as time goes on.
  4. Survey your customers: Ask where they heard about you. It’s old school, but incredibly effective for context.
  5. Use offer codes and landing pages: Even if not perfect, they create new signals across dark social or offline.
  6. Track “AI Referrers”: Create custom channels in your web analytics, including in GA4, to segment out performance from LLM-driven traffic.
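To make the last point concrete, here's one way to bucket referrer hostnames into an "AI Referrers" channel before they default to "Direct" or "Referral." The hostname list and channel names below are illustrative assumptions, not an official or exhaustive registry:

```python
from urllib.parse import urlparse

# Example LLM referrer hostnames -- illustrative, not exhaustive.
AI_REFERRER_HOSTS = (
    "chatgpt.com", "chat.openai.com", "perplexity.ai",
    "gemini.google.com", "copilot.microsoft.com",
)

def classify_channel(referrer_url: str) -> str:
    """Bucket a referrer into a hypothetical 'AI Referrers' channel."""
    host = (urlparse(referrer_url).hostname or "").lower()
    if any(host == h or host.endswith("." + h) for h in AI_REFERRER_HOSTS):
        return "AI Referrers"
    return "Other"  # fall through to your existing channel rules

print(classify_channel("https://chatgpt.com/c/abc123"))  # → AI Referrers
print(classify_channel("https://www.example.com/page"))  # → Other
```

The same matching logic maps directly onto a GA4 custom channel group: a rule that matches these referrer sources gets its own channel instead of landing in Direct.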

Linking Attribution To Business Outcomes

Attribution and business outcomes go hand-in-hand. Understanding where your most profitable leads originate is essential to growing any business, regardless of its size.

A graphic explaining savings attributed to fixing attribution.

You want to connect your data to actual decisions, such as forecasts, budgets, and resource allocation. But, with the marketing landscape changing so quickly and drastically, how do you know which metrics to follow?

Here are the metrics that matter now:

  • Total conversions and incremental conversions
  • Conversion value over time
  • Cost per incremental conversion
  • Spend thresholds by tactic
  • Directional change (old model vs. new)
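Cost per incremental conversion, for instance, divides spend by only the conversions your testing says the spend actually caused. A quick sketch, with made-up figures and a hypothetical modeled baseline:

```python
# Sketch: cost per incremental conversion. All figures are made up.

def cost_per_incremental_conversion(spend, total_conv, baseline_conv):
    """Spend divided by conversions above the modeled baseline."""
    incremental = total_conv - baseline_conv
    if incremental <= 0:
        return float("inf")  # spend drove no measurable incremental volume
    return spend / incremental

# $10,000 spend; 500 conversions observed; 350 expected without the campaign
cpic = cost_per_incremental_conversion(10_000, 500, 350)
print(f"${cpic:.2f}")  # → $66.67
```

Note how different this is from blended cost per conversion ($10,000 / 500 = $20): the incremental view only credits the campaign with the lift it produced.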

Remember: even if your models aren’t perfect, if they get you closer to optimal spend, it’s working. Continuous improvement for your attribution strategy will get you closer and closer still.

A graphic explaining the value of continuous improvement for marketing attribution.

FAQs

What is a marketing attribution blind spot?

It’s any part of the customer journey you can’t track, like dark social shares, offline sales, or LLM referrals that may be influencing conversions without showing up in your data.

Can AI help with attribution?

Yes, but only if used smartly. AI can simulate behavior and identify patterns, but it’s not a silver bullet. Use it to complement your experiments and first-party data.

What’s the best attribution model?

There isn’t one. The most effective models mix touch-based data with testing and contextual clues. Choose based on your business size, channel mix, and data maturity.

Conclusion

When it comes to effective attribution, you just need to see enough to move forward.

Mastering this skill in the modern marketing world is less about getting the credit right and more about making smarter calls with what you can measure. The key is to stop chasing perfection and start building a system that helps you plan and adapt to the data you gather from your testing in real-time. Attribution isn’t the whole picture, but it remains the best tool we have to illuminate the path forward, including its blind spots.

Naturally, we can still learn from tried and true marketing methods. We may just have to think outside the box on how to apply them to today’s search environment and customer journey. It’s worth checking out our guides on which marketing campaigns drive the best impact and how to track your marketing ROI. Combining this extra knowledge with your new attribution perspective could be the secret sauce to put you ahead of the pack in 2026. 


Small tests to yield big answers on what influences LLMs


Undoubtedly, one of the hot topics in SEO over the last few months has been how to influence LLM answers. Every SEO is trying to come up with strategies. Many have created their own tools using “vibe coding,” where they test their hypotheses and engage in heated debates about what each LLM and Google use to pick their sources.

Some of these debates can get very technical, touching on topics like vector embeddings, passage ranking, retrieval-augmented generation (RAG), and chunking. These theories are great—there’s a lot to learn from them and turn into practice. 

However, if some of these AI concepts are going way over your head, let’s take a step back. I’ll walk you through some recent tests I’ve run to help you gain an understanding of what’s going on in AI search without feeling overwhelmed so you can start optimizing for these new platforms.

Create branded content and check for results

A while ago, I went to Austin, Texas, for a business outing. Before the trip, I wondered if I could “teach” ChatGPT about my upcoming travels. There was no public information about the trip on the web, so it was a completely clean test with no competition.

I asked ChatGPT, “is Gus Pelogia going to Austin soon?” The initial answer was what you’d expect: He doesn’t have any trips planned to Austin.

That same day, a few hours later, I wrote a blog post on my website about my trip to Austin. Six hours after I published the post, ChatGPT’s answer changed: Yes, Gus IS going to Austin to meet his work colleagues.

ChatGPT prompts with a blog post published in between queries, which was enough to change a ChatGPT answer.

ChatGPT used an AI framework called RAG (Retrieval Augmented Generation) to fetch the latest result. Basically, it didn’t have enough knowledge about this information in its training data, so it scanned the web to look for an up-to-date answer.

Interestingly enough, it took a few days until the actual blog post with detailed information was found by ChatGPT. Initially, ChatGPT had found a snippet of the new blog post on my homepage and reindexed the page within the six-hour range. It was using just the blog post’s page title to change its answer before actually “seeing” the whole content days later.

Some learnings from this experiment:

  • New information on webpages reaches ChatGPT answers in a matter of hours, even for small websites. Don’t think your website is too small or insignificant to get noticed by LLMs—they’ll notice when you add new content or refresh existing pages, so it’s important to have an ongoing brand content strategy.
  • The answers in ChatGPT are highly dependent on the content published on your website. This is especially true for new companies where there are limited sources of information. ChatGPT didn’t confirm that I had upcoming travel until it fetched the information from my blog post detailing the trip.
  • Use your webpages to optimize how your brand is portrayed beyond showing up in competitive keywords for search. This is your opportunity to promote a certain USP or brand tagline. For instance, “The Leading AI-Powered Marketing Platform” and “See everyday moments from your close friends” are used, respectively, by Semrush and Instagram on their homepages. While users probably aren’t searching for these keywords, it’s still an opportunity for brand positioning that will resonate with them.


Test to see if ChatGPT is using Bing or Google’s index

The industry has been ringing alarm bells about whether ChatGPT uses Google’s index instead of Bing’s. So I ran another small test to find out: I added a <meta name="googlebot" content="noindex"> tag to the blog post, allowing only Bingbot for nine days.

If ChatGPT is using Bing’s index, it should find my new page when I prompt about it. Again, this was on a new topic and the prompt specifically asked for an article I wrote, so there wouldn’t be any doubts about what source to show.

The page got indexed by Bing after a couple of days, while Google wasn’t allowed to see it.

New article has been indexed by Bingbot

I kept asking ChatGPT, with multiple prompt variations, if it could find my new article. For nine days, nothing changed—it couldn’t find the article. At one point, ChatGPT even hallucinated a URL (really, its best guess).

ChatGPT made-up URL: https://www.guspelogia.com/learnings-from-building-a-new-product-as-an-seo
Real URL: https://www.guspelogia.com/learnings-new-product-seo

GSC shows that it can’t index the page due to “noindex” tag

I eventually gave up and allowed Googlebot to index the page. A few hours later, ChatGPT changed its answer and found the correct URL.

On the top, ChatGPT’s answer when Googlebot was blocked. On the bottom, ChatGPT’s answer after Googlebot was allowed to see the page.

Interestingly enough, the link to the article was presented on my homepage and blog pages, yet ChatGPT couldn’t display it. It only found that the blog post existed based on the text on those pages, even though it didn’t follow the link.

Yet, there’s no harm in setting up your website for success on Bing. They’re one of the search engines that adopted IndexNow, a simple ping that informs search engines that a URL’s content has changed. This implementation allows Bing to reflect updates in their search results quickly. 

While we all suspect (with evidence) that ChatGPT isn’t using Bing’s index, setting up IndexNow is a low-effort task that’s worthwhile.
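An IndexNow submission is just an HTTP GET against a shared endpoint. A minimal sketch of building that ping (the site URL and key below are placeholders; per the IndexNow spec, your key must also be hosted as a text file on your domain):

```python
from urllib.parse import urlencode
from urllib.request import urlopen  # used when you actually send the ping

INDEXNOW_ENDPOINT = "https://api.indexnow.org/indexnow"

def build_indexnow_ping(url: str, key: str) -> str:
    """Build the GET request that tells search engines a URL changed."""
    return f"{INDEXNOW_ENDPOINT}?{urlencode({'url': url, 'key': key})}"

# Placeholder URL and key -- substitute your own
ping = build_indexnow_ping("https://www.example.com/new-post", "your-key")
print(ping)
# Fire it for real with: urlopen(ping)  (a 200/202 response means accepted)
```

The spec also allows batch submission by POSTing a JSON body with a list of URLs, which is the better fit for sites publishing at volume.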

Change the content on a source used by RAG

Clicks are becoming less important. Instead, being mentioned in sources like Google’s AI Mode is emerging as a new KPI for marketing teams. SEOs are testing multiple tactics to “convince” LLMs about a topic—from writing about it on LinkedIn Pulse to running controlled experiments with expired domains and hacked sites. In some ways, it feels like old-school SEO is back.

We’re all talking about being included in AI search results, but what happens when a company or product loses a mention on a page? Imagine a specific model of earbuds is removed from a “top budget earbuds” list—would the product lose its mention, or would Google find a new source to back up its AI answer? 

While the answer could always be different for each user and each situation, I ran another small test to find out.

In a listicle that mentioned multiple certification courses, I identified one course that was no longer relevant, so I removed mentions of it from multiple pages on the same domain. I did this to keep the content relevant, so measuring the changes in AI Mode was a side effect.

Initially, within the first few days of the course getting removed from the cited URL, it continued to be part of the AI answer for a few pre-determined prompts. Google simply found a new URL in another domain to validate its initial view. 

However, within a week, the course disappeared from AI Mode and ChatGPT completely. Basically, even though Google found another URL validating the course listing, because the “original source” (in this case, the listicle) was updated to remove the course, Google (and, by extension, ChatGPT) subsequently updated its results as well.  

This experiment suggests that changing the content on the source cited by LLMs can impact the AI results. But take this conclusion with a pinch of salt, as it was a small test with a highly targeted query. I specifically had a prompt combining “domain + courses” so the answer would come from one domain.

Nonetheless, while in the real world it’s unlikely one citation URL would hold all the power, I’d hypothesize that losing a mention on a few high-authority pages would have the side effect of losing the mention in an AI answer.

Test small, then scale

Tests in small and controlled environments are important for learning and give confidence that your optimization has an effect. Like everything else I do in SEO, I start with an MVP (Minimum Viable Product), learn along the way, and once/if evidence is found, make changes at scale.

Do you want to change the perception of a product on ChatGPT? You won’t get dozens of cited sources to talk about you straight away, so you’d have to reach out to each single source and request a mention. You’ll quickly learn how hard it is to convince these sources to update their content and whether AI optimization becomes a pay-to-play game or if it can be done organically.

Perhaps you’re a source that’s mentioned often when people search for a product, like earbuds. Run your MVPs to understand how much changing your content influences AI answers before you claim your influence at scale, as the changes you make could backfire. For example, what if you stop being a source for a topic due to removing certain claims from your pages?

There’s no set time for these tests to show results. As a general rule, SEOs say results take a few months to appear, yet in the first test in this article, it took just a few hours.

Running LLM tests with larger websites

Working in large teams or on large websites can be a challenge when doing LLM testing. My suggestion is to create specific initiatives and inform all stakeholders about changes to avoid confusion later, as they might question why these changes are happening.

One simple but effective test done by SEER Interactive was to update their footer tagline.

  • From: Remote-first, Philadelphia-founded
  • To: 130+ Enterprise Clients, 97% Retention Rate 

Within 36 hours of the footer change, ChatGPT 5 started mentioning the new tagline for prompts like “tell me about Seer Interactive.” I’ve checked, and while the answer is different every time, it still mentions the “97% retention rate.”

Imagine if you decide to change the content on a number of pages, but someone else has an optimization plan for those same pages. Always run just one test per page, as results will become less reliable if you have multiple variables.

Make sure to research your prompts, have a tracking methodology, and spread the learnings across the company, beyond your SEO counterparts. Everyone is interested in AI right now, all the way up to C-levels.

Another suggestion is to use a tool like Semrush’s AI SEO toolkit to see the key sentiment drivers about a brand. Start with the listed “Areas for Improvement”—this should give you plenty of ideas for tests beyond “SEO Reason,” as it reflects how the brand is perceived beyond organic results.

Checklist: Getting started with LLM optimization

Things are changing fast with AI, and it’s certainly challenging to keep up to date. There’s an overload of content right now, a multitude of claims, and, I dare to say, not even the LLM platforms running them have things fully figured out.

My recommendation is to find the sources you trust (industry news, events, professionals) and run your own tests using the knowledge you have. The results you find for your brands and clients are always more valuable than what others are saying.

It’s a new world of SEO and everyone is trying to figure out what works for them. The best way to follow the curve (or stay ahead of it) is to keep optimizing and documenting your changes.

To wrap it up, here’s a checklist for your LLM optimization:

  • Before starting a test, make sure your selected prompts consistently return the answer you expect (such as not mentioning your brand or a feature of your product). Otherwise, the new brand mention or link could be a coincidence, not a result of your work.
  • If the same claim is made on multiple pages of your website, update them across the board to increase your chances of success.
  • Use your own website and external sources (e.g., via digital PR) to influence your brand perception. It’s unclear if users will cross-check AI answers or just trust what they’re told.


Most ChatGPT links get 0% CTR – even highly visible ones

AI vs organic search referral traffic

A leaked file reveals the user interactions that OpenAI is tracking, including how often ChatGPT displays publisher links and how few users actually click on them.

By the numbers. ChatGPT shows links, but hardly anyone clicks on them. For one top-performing page, the OpenAI file reports:

  • 610,775 total link impressions
  • 4,238 total clicks
  • 0.69% overall CTR
  • Best individual page CTR: 1.68%
  • Most other pages: 0.01%, 0.1%, 0%
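The headline figure is easy to verify from the raw counts reported above:

```python
# CTR from the leaked file's raw counts (figures as reported above)
impressions = 610_775
clicks = 4_238
print(f"{clicks / impressions:.2%}")  # → 0.69%
```

In other words, fewer than 7 clicks per 1,000 link impressions, even on a top-performing page.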

ChatGPT metrics. The leaked file breaks down every place ChatGPT displays links and how users interact with them. It tracks:

  • Date range (date partition, report month, min/max report dates)
  • Publisher and URL details (publisher name, base URL, host, URL rank)
  • Impressions and clicks across:
    • Response
    • Sidebar
    • Citations
    • Search results
    • TL;DR
    • Fast navigation
  • CTR calculations for each display area
  • Total impressions and total clicks across all surfaces

Where the links appear. Interestingly, the most visible placements drive the fewest clicks. The document broke down performance by zone:

  • Main response: Huge impressions, tiny CTR
  • Sidebar and citations: Fewer impressions, higher CTR (6–10%)
  • Search results: Almost no impressions, zero clicks

Why we care. Hoping ChatGPT visibility might replace your lost Google organic search traffic? This data says no. AI-driven traffic is rising, but it’s still a sliver of overall traffic – and it’s unlikely to ever behave like traditional organic search traffic.

About the data. It was shared on LinkedIn by Vincent Terrasi, CTO and co-founder of Draft & Goal, which bills itself as “a multistep workflow to scale your content production.”


Microsoft Advertising adds AI-powered image animation to boost video creation


Microsoft Advertising is rolling out Image Animation, a new Copilot-powered feature that automatically converts static images into short, dynamic video assets — giving advertisers a faster path into video without traditional production.

How it works:

  • Copilot transforms existing static images into scroll-stopping animated video formats.
  • The tool extends the lifespan of strong image creatives by repurposing them for video placements across Microsoft’s global publisher network.
  • The feature is now in global pilot (excluding mainland China) and accessible through Ads Studio’s video templates.

Why we care. Video continues to dominate digital attention, with the average American now watching more than four hours of digital video per day. As video becomes essential in performance campaigns, advertisers need scalable ways to produce it — especially when budgets or resources are tight.

This update reduces production barriers, extends the value of top-performing images, and unlocks broader inventory across Microsoft’s premium video network.

Between the lines. For many advertisers, the biggest bottleneck to entering video isn’t strategy — it’s production. Microsoft is positioning Copilot as a creative multiplier, letting performance marketers upgrade image-based campaigns with lightweight, AI-generated motion.

The bottom line. Microsoft Advertising is leaning on its AI advances to close the gap between static creative and video demand — helping advertisers stay competitive as video consumption accelerates.


6 Best Ad Intelligence Software to Outsmart the Competition

Ad intelligence software promises to show you everything your competitors are doing: their keywords, budgets, creatives, and landing pages.

But many surface insights you could get for free.

Meta’s Ad Library shows what advertisers are currently running. Google’s Transparency Center does the same for search and YouTube. TikTok’s Creative Center reveals top performers by industry.

So, when does paid software earn its cost?

  • You’re tracking multiple competitors across platforms and losing hours to manual checks
  • You need historical data on which ads they tested and killed
  • You rely on spend benchmarks and real-time alerts to catch shifts before your clients do

That’s the gap paid tools fill. If they’re good.

Many aren’t. They bury useful insights under dashboards that create more work than they save. The data looks complete until you actually try to use it.

This guide covers six platforms that deliver real intelligence (if you know what you’re looking at).

We’re not promising magic improvements. We’re showing what each tool reveals, who it’s built for, and what you give up at each price point.

What Is the Best Ad Intelligence Software?

  • Similarweb: Best for stalking competitors’ ads at scale — plus, their SEO, traffic, and market moves. $649+/month (only higher-tier plans come with ad intelligence).
  • Semrush Advertising Toolkit: Best for multi-platform ad intelligence, from Meta and TikTok to Google Shopping. $99–$220/month.
  • SpyFu: Best for affordable Google Ads intelligence with deep historical data. $39–$249/month.
  • PowerAdSpy: Best for analyzing ad engagement across social media platforms. $69–$399/month.
  • Adbeat: Best for tracking competitor display ads and landing pages. $249+/month.
  • Pathmatics: Best for enterprise-level ad spend intelligence across mobile, social, and video. Pricing is not publicly available.

1. Similarweb

Best for stalking competitors’ ads at scale — plus, their SEO, traffic, and market moves

Similarweb – Homepage

Similarweb reveals your competitors’ complete paid strategies, from their winning ad creatives to their most successful publishers.

It also includes SEO and competitive intelligence tools in every subscription, so you get the full picture of how your rivals attract and convert traffic across every channel.

This cross-channel context is especially helpful if you already use native ad libraries but want scalable intel that ties everything together.

Learn Your Competitors’ Highest-Performing Publishers and Ad Networks

If your competitors are running ads, Similarweb shows you where (and how to beat them).

You’ll see:

  • Which ad networks and placements work best for your top competitors
  • Where their ad budgets go, broken down by channel
  • Industry-wide trends that reveal missed opportunities

Similarweb – The Spruce – Website Intelligence

Say Similarweb shows that multiple competitors spend over 50% of their display budgets on a single publisher.

That’s a data-backed signal you can’t ignore.

Use that data to target the same publisher to test similar placements. Or find underused publishers in the same category for more affordable traffic.

Similarweb – Huffpost – Publisher Performance

Get Inspired by Proven Ad Creatives

Similarweb’s database makes it easy to browse display ads by publisher, network, and format.

  • See the messaging and offers competitors use to get conversions
  • Learn how many days each ad was active, so you know which ones excelled (and which ones failed)
  • Find out which formats your competitors are using, including product, display, and video ads

Similarweb – Creatives

Of course, copying your competitors’ ads word-for-word isn’t the goal.

The real value is in spotting patterns: the hooks they repeat, the formats they invest in, and the offers they continually test.

These insights let you design campaigns that build on what already works in your market.

When you’re juggling multiple accounts, this saves hours of creative testing, and points you directly toward proven formats.

Reverse-Engineer Competitors’ Search and Shopping Campaigns

Similarweb shows you which keywords your competitors bid on and how much they’re spending.

This helps you identify high-value keywords that drive conversions and avoid wasting budget on terms that don’t perform.

Similarweb – Paid Keywords

From there, you can build stronger landing pages that target your competitors’ most successful keywords and match intent.

Pros and Cons

Pros:

  • Tracks 500M+ ads across publishers, networks, and formats for deep competitive insights
  • Uncovers competitors’ top-performing publishers and ad placements
  • Includes SEO, traffic, and market data for a full competitive picture

Cons:

  • Ad intelligence tools are only available with the most expensive plan
  • Can feel overwhelming for smaller teams due to the platform’s depth
  • If you only want ad intelligence, you’ll be paying for much more than you need

Pricing

Similarweb – Pricing

Similarweb offers multiple plans, but only the most expensive one includes dedicated tools for ad intelligence.

This plan costs $649/month, or $540/month when billed annually. Similarweb also offers business and enterprise plans, but their pricing and tools are not publicly listed.

2. Semrush Advertising Toolkit

Best for multi-platform ad intelligence, from Meta and TikTok to Google Shopping

Semrush – Advertising Research – Ebay – Positions

When you’re managing multiple clients or campaigns, switching between Meta, TikTok, and Google dashboards gets messy fast.

Semrush’s Advertising Toolkit consolidates that chaos into one workspace — letting you analyze competitor campaigns and build your own in the same place.

You’ll get deep intel on keywords, budgets, ad copy, and creative trends.

Plus, actionable advice on how to turn that data into high-performing campaigns.

Track Competitor Keywords and Budgets

The Advertising Research tool reveals everything, and we mean everything, about your competitors’ Google Ads strategies.

Enter any domain and you’ll see:

  • Estimated ad traffic
  • Cost per click (CPC)
  • Highest- and lowest-performing keywords
  • Organic search position
  • Keyword difficulty
  • URL

No more wasting ad budget on terms that don’t perform. You’ll know exactly which ones to target in your next campaign.

Semrush – Advertising Research – Ebay – Position Changes

The tool also tracks keyword trends over time.

See which keywords competitors continuously invest in month after month.

When a keyword consistently appears in their paid strategy with stable or growing volume, that’s a clear sign it’s profitable.

Semrush – Advertising Research – Ebay – Paid Search Trends – Keywords

With this data, you might test variations of the keyword in multiple ads to capitalize on its success.

Or use them to inform your broader content strategy beyond paid campaigns.

Spy on Google Shopping Ads

Have an ecommerce brand?

The PLA Research tool shows you which products your competitors promote most heavily in Google Shopping.

You’ll see position, volume, price, product titles, URLs, and trend data for each listing.

When a product shows up month after month, it’s likely a top seller.

Semrush – PLA Research – Ebay – PLA Positions

If you don’t carry that product yet, you might consider adding it to your catalog.

Already sell it? Increase your Shopping ads to compete directly.

You can also view all of your competitors’ Google Shopping ads in one place.

Semrush – PLA Research – Ebay – PLA Copies

Analyze their copy, images, and offers.

Then, apply these insights to your own listings:

  • Adjust your product titles to match high-performing formats
  • Test pricing strategies that undercut or match theirs
  • Prioritize ads for products where you have a competitive advantage. Think better reviews, faster shipping, or exclusive features they don’t offer.

Here’s another cool feature:

Instead of bouncing between tools, Semrush’s AI-powered Ad Launch Assistant lets you create and optimize Google and Meta ads directly inside the platform.

Semrush – AI-Powered Ad Launch Assistant

The tool generates copy and visuals tailored to your brand, from attention-grabbing headlines to conversion-focused descriptions.

Instead of writing everything from scratch, all you have to do is review each element:

  • Headlines
  • Descriptions
  • Site links
  • Callouts
  • Images
  • Videos

Simply refine the voice and messaging as needed. You’ll be able to test multiple variations in minutes instead of hours.

Unlock Deeper Insights with AdClarity

AdClarity is Semrush’s advanced cross-channel ad intelligence tool.

Need complete visibility into competitor display, social, and video campaigns?

This is where you’ll find it.

Semrush – AdClarity

You’ll get a lot of data with this tool.

Including how much rivals spend, which publishers drive the most impact, and the exact creatives they’re using across platforms:

  • Facebook
  • Instagram
  • X
  • Google Display
  • Pinterest
  • YouTube
  • TikTok
  • LinkedIn

Say a competitor suddenly doubles their TikTok spend. You’ll spot the shift immediately and can adjust your strategy in real time.

Semrush – AdClarity – Advertising Intelligence

AdClarity also automatically identifies your competitors’ top publishers and campaigns.

So there’s no guessing or testing which ones work well for your target audience.

Semrush – AdClarity – Top Publishers

Pros and Cons

| Pros | Cons |
| --- | --- |
| Combines robust multi-site ad intelligence with Meta and Google campaign execution in one platform | The base plan includes only Google and Meta ad intelligence |
| Google Shopping insights are especially strong for ecommerce brands | AdClarity is only included in the higher-tier plan |
| AdClarity offers advanced ad intelligence across display, social, and video | Doesn’t include SEO tools; you’ll need a separate toolkit for that |

Pricing

Semrush – Advertising Toolkit – Pricing

The Semrush Advertising Toolkit is $99 per month.

It includes Advertising Research, PLA Research, Ads Launch Assistant, and more.

The higher-tier plan ($220/month) includes AdClarity, along with all of the above.

3. SpyFu

Best for affordable Google Ads intelligence with deep historical data

SpyFu – Homepage

SpyFu is built for one thing: uncovering Google Ads strategies.

If your strategy leans heavily on Google, it’s one of the most detailed and budget-friendly advertising intelligence software options available.

Download Competitor Keywords Without Limits

SpyFu shows you everything your competitors do on Google Ads — and lets you export it all with no limits.

Many ad intelligence platforms cap your keyword downloads, so this is a plus.

Type in any competitor’s domain and you’ll see:

  • Every keyword they’ve ever bought on Google Ads
  • Estimated monthly clicks and CPC
  • Total spend on paid search

SpyFu – Monthly PPC Overview

For example, say you’re in SaaS project management and Asana is your top competitor.

Search their domain, and SpyFu shows you their current and historical ad keywords. We’re talking thousands of terms, not just the top 50 or 100.

Download the complete dataset and…

  • Feed it into your analytics tools or Google Sheets
  • Share it with your team for campaign planning
  • Build custom reports for leadership
  • Cross-reference it with your CRM to see which keywords actually convert

SpyFu – Asana – Most Successful Paid Keywords

Spot Overlaps and Waste in PPC Strategies

SpyFu’s Kombat tool compares your PPC strategy against up to two competitors at once.

But instead of having to sift through 10,000 keywords, the ad intelligence tool automatically groups them into helpful buckets:

  • Core Keywords: Terms all competitors are bidding on
  • Consider Buying: Valuable keywords they use, but you don’t
  • Potential Ad Waste: Terms that neither competitor uses but you do
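
Under the hood, these buckets are simple set operations over keyword lists. Here’s a minimal sketch with hypothetical keyword sets — the grouping logic is our reading of Kombat’s buckets, not SpyFu’s published algorithm:

```python
# Hypothetical keyword sets; the bucket logic below is our reading of
# Kombat's groups, not SpyFu's published algorithm.
yours = {"task manager", "gantt chart", "team todo app", "fax software"}
comp1 = {"task manager", "gantt chart", "project tracker", "resource planning"}
comp2 = {"task manager", "gantt chart", "work os", "resource planning"}

core = yours & comp1 & comp2        # Core Keywords: everyone bids on these
consider = (comp1 & comp2) - yours  # Consider Buying: both rivals buy, you don't
waste = yours - (comp1 | comp2)     # Potential Ad Waste: only you bid on these
```

Running this on your own exported keyword lists gives you the same three buckets for any pair of competitors.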

SpyFu – Asana – Kombat tool

So, you know exactly which terms to focus on (and which to remove from your campaigns).

This is especially helpful if you’re newer to paid campaigns.

Or have limited time (or tolerance) for turning data into actionable insights.

SpyFu also tags certain terms as “Great Buys” and estimates how many impressions you’ll get for each one.

Plus, it shows which competitors already bid on them, so you can piggyback on proven opportunities.

SpyFu – Asana – PPC Overview

For example, the report below reveals that Asana’s competitor, Monday.com, uses “top task management apps” and “work time tracker app” in its ad strategy.

Asana could (and probably should) target both terms since SpyFu’s data shows they’re worth the investment.

SpyFu – Asana – Top Google Ads Buy Recommendations

Learn From Ads That Worked (or Failed)

SpyFu’s Ad History tool shows every ad variation competitors have tested for a given keyword.

If an ad copy ran for 14 consecutive months, you know it was effective.

If it vanished after a week? Probably a dud.

This kind of insight lets you write ads with fewer flops and faster wins.

This is especially valuable if you handle multiple accounts. You can skip obvious mistakes and start from proven winners.

SpyFu – Asana – Ad History

Pros and Cons

| Pros | Cons |
| --- | --- |
| Unlimited keyword exports with no download caps | Focused exclusively on Google Ads; no social or display coverage |
| 10+ years of historical ad data for deep competitive analysis | Historical data requires a higher-tier plan |
| Kombat tool automatically identifies keyword overlaps and wasted spend | The base plan doesn’t come with unlimited downloads |

Pricing

SpyFu – Pricing

SpyFu offers three main plans, all of which come with ad intelligence and SEO reports.

The most affordable plan is $39 per month.

However, you’ll need to upgrade to a higher tier to get 10+ years of historical insights ($59-$249/month).

4. PowerAdSpy

Best for analyzing ad engagement across social media platforms

PowerAdSpy – Homepage

PowerAdSpy specializes in social advertising intelligence with one key differentiator: engagement data that shows what’s actually resonating.

You’ll see which competitor social ads are getting likes, shares, and comments across 11 platforms:

  • Facebook
  • Instagram
  • YouTube
  • Google
  • Google Display Network
  • Native
  • Quora
  • Reddit
  • Pinterest
  • LinkedIn
  • TikTok

If you need to understand which creatives are worth replicating at scale, PowerAdSpy is a strong option.

Search Ads by Keyword, Competitor, or Domain

Want to know which competitor ads crush it on Instagram Reels?

Or which offers rivals push hardest on YouTube or TikTok?

Plug in a keyword, competitor’s name, or domain, and you’ll instantly see all of their active and historical campaigns.

That single search can replace hours of platform hopping between ad libraries.

PowerAdSpy – Domain, Advertiser, Keyword – Filter

Reveal What’s Actually Driving Engagement

Every ad includes engagement data specific to the platform you’re analyzing.

Assessing competitors’ or clients’ Facebook ads? Sort by likes, comments, impressions, and popularity.

PowerAdSpy – Likes, Shares – Filter

You can also filter by ad type and call to action, depending on the platform.

This is especially useful for spotting:

  • Whether video or static images dominate your niche
  • Which CTAs (“Learn More” vs. “Sign Up”) consistently get clicks
  • What ad hooks (“Free trial” vs. “Save 50%”) keep resurfacing across competitors

PowerAdSpy – Call to action – Filter

See How Competitors Win Attention on Reddit and Quora

PowerAdSpy tracks sponsored posts on Reddit and Quora.

These platforms matter because buying decisions often start there.

Conversations on these sites can also influence how LLMs (such as ChatGPT and Perplexity) surface your brand in answers.

PowerAdSpy – Ad Spy Tool – Quora

By analyzing these ads, you’ll see:

  • Which threads your competitors target (like “best project management software” on r/productivity)
  • How they position offers in Q&A format
  • Which ads earn upvotes, shares, and comments

PowerAdSpy – Filter with Likes

See what competitors are saying and which conversations are shaping buyer intent.

Spot content angles that consistently earn engagement. Identify threads or audiences they’re overlooking.

Another helpful feature?

Search by topic, like “games,” to find the competitors dominating that ad niche.

PowerAdSpy – Likes sort by filter

Include a custom “like” range to narrow results to the level of popularity you prefer.

Then, zero in on the highest-performing ads and gather details such as ad copy and social engagement to improve your campaigns.

PowerAdSpy – Reddit ad spy tool

Pros and Cons

| Pros | Cons |
| --- | --- |
| Large, frequently updated database of social ads across 11 major platforms | Mainly focused on social media; lacks advanced search or display ad data |
| Engagement metrics (likes, shares, comments) reveal which creatives actually resonate | Advanced filtering options are locked behind higher plans |
| Powerful filters for ad type, placement, geography, and CTA performance | Only the highest-tier plan includes insights from all 11 platforms |

Pricing

PowerAdSpy – Pricing

PowerAdSpy has six different plans.

The one you choose depends on the social platforms you want to analyze, and the features you need.

Only need Facebook, Instagram, Google, and YouTube?

(And don’t mind missing out on features like ad budget, ad type filter, and advanced analytics?)

The most affordable plan ($69/month) might work for you.

Need all the features and platforms? You’ll pay $399 per month.

5. Adbeat

Best for tracking competitor display ads and landing pages

Adbeat – Homepage

Adbeat specializes in display, native, and programmatic advertising.

But it goes beyond ad creatives.

You’ll also see landing page insights, so you get intel on the complete customer journey.

See Which Landing Pages Are Actually Converting

Adbeat shows you which landing pages drive the most ad traffic. And how long each page has been live.

For example, Squarespace’s longest-running landing page has been active for 794 days.

That’s over two years.

Adbeat – Squarespace – Advertiser profile

When a page stays live that long, you know it’s consistently converting.

This intel helps you see which page layouts, offers, and messaging are worth replicating.

If you work for an agency and have multiple clients, this is particularly valuable. It’s a fast way to benchmark what “good” looks like in each vertical.

Reveal Media Buying Strategies and Publisher Insights

The Advertiser Dashboard breaks down where competitors allocate their budgets across channels, networks, and publishers.

You’ll also see share-of-voice data to understand their market presence.

For example, Adbeat found that Squarespace ran 524 ads in one month.

Adbeat – Squarespace – Monthly Ads

And 78% of their spend went to programmatic ads.

Details like this highlight which channels matter most in your niche. And where you can reallocate budget to get better performance for your own campaigns.

Adbeat – Squarespace – Ad Channels breakdown

Benchmark Campaign Performance and Spot Trends

Adbeat’s ad intelligence software lets you monitor how your competitors’ budgets shift over time.

But what’s especially helpful is that they break it down by ad type: standard, native, and video.

For example, Squarespace’s longest-running video ad has been live for 413 days.

Adbeat – Squarespace – Video Ads

If they’ve kept it running that long, it’s a moneymaker.

In other words, it’s worth considering if you’re investing enough in video ads. And studying individual high performers for hooks, visuals, and offers.

Pros and Cons

| Pros | Cons |
| --- | --- |
| Lets you analyze ads and landing pages together for complete funnel insights | Limited coverage of search and social campaigns |
| Reveals media spend, publisher performance, and traffic sources | Pricing is higher than ad-creative-only tools |
| Great for agencies, affiliate marketers, and display-heavy advertisers | Enterprise pricing is not publicly available |

Pricing

Adbeat – Pricing

Adbeat’s pricing starts at $249 per month for display, programmatic, and native ad intelligence.

For advanced filters, alerts, and historical data, you’ll need the higher plan ($399 per month).

There’s also an enterprise plan, but pricing isn’t listed publicly.

6. Pathmatics by Sensor Tower

Best for enterprise-level ad spend intelligence across mobile, social, and video

SensorTower – Pathmatics

Pathmatics is built for large teams and big brands.

Household names like P&G and Unilever use this platform, so expect enterprise-level pricing and complexity.

But if you’re managing high-volume spend or reporting to leadership, it offers the transparency and benchmarking you can’t get from native tools.

Uncover Competitors’ Ad Spend Across Every Channel

Pathmatics shows you where every ad dollar goes in a pretty granular way.

It breaks down spend by platform, campaign, or creative — and tracks impressions, reach, and frequency over time.

Pathmatics – Gain Visibility

Say you notice a competitor’s Instagram spend suddenly increased by a significant amount in a single week during Q4.

That signals a major campaign launch — possibly holiday shopping or Black Friday prep.

With this data, you can adjust your strategy immediately. And compete head-to-head with your main competitors.

Pathmatics also lets you benchmark your ad spend against multiple competitors at once.

If you’re investing $500K on display while your top three competitors each spend $2M+, you’ll see that gap.

Pathmatics – Identify seasonal advertising trend

Use this data to justify budget increases to leadership.

Or to identify where smaller reallocations could close the gap faster.

Benchmark Market Share and Share of Voice

Pathmatics tracks your share of voice against competitors in your industry and region.

If three brands dominate 80% of impressions in your category, you’ll see who owns what percentage.

This data helps you understand your position in the market.

Are you a distant fourth? Or neck-and-neck with the leader?

Pathmatics – Benchmark Market Share

You can also identify which competitors dominate specific channels and spot opportunities where they’re underinvesting.

If the market leader owns Facebook but ignores TikTok, that’s your opening.

Evaluate Creatives That Resonate

Every ad includes details like format, placement, messaging, CTAs, and audience profiles.

See which creatives competitors keep running and which ones they kill after a few days.

Track the exact messaging and offers that stick around for months or years.

Pathmatics – Analyze Top Creatives

Use these insights to refine your own creative strategy.

Double down on formats that consistently deliver, and try localized messaging in new markets where your competitors are seeing success.

Pros and Cons

| Pros | Cons |
| --- | --- |
| Provides cross-channel visibility across social, display, mobile, video, and OTT | Pricing is custom and can be expensive for smaller teams and startups |
| Combines creative data with detailed spend, reach, and audience insights | Steeper learning curve due to platform depth and data complexity |
| Ideal for enterprise-level teams, app publishers, and multi-channel marketers | Some users report data accuracy issues |

Pricing

Pathmatics – Pricing

Pathmatics’ pricing is custom.

Request a quote if you’re interested.

Turn Competitive Intel into Campaign Wins

The right ad intelligence software isn’t the one with the most features.

It’s the one you can trust.

This means reliable data, less manual work, and the ability to scale campaigns across platforms with ease.

On a budget and focused mainly on Google Ads? Start with SpyFu.

Need deep, multi-site advertising intelligence across search and social with campaign execution built in?

Go for Semrush’s Advertising Toolkit.

Once you’ve picked your platform and gathered competitive intel, the next step is making sure your paid and organic strategies work together.

Learn how to align SEO and PPC to maximize visibility, reduce wasted spend, and improve your ROI.

The post 6 Best Ad Intelligence Software to Outsmart the Competition appeared first on Backlinko.


October 2025 Digital Marketing Roundup: What Changed and What You Should Do About It

October showed just how fast AI is reshaping how brands connect, convert, and stay visible. OpenAI turned chats into checkout experiences. Google tested AI-written snippets and agent-driven search. The line between platforms, ads, and transactions keeps disappearing.

Creators gained new credibility. Rebrands proved riskier than ever. Data-driven PR entered a new era.

Here’s what mattered most and how to stay ahead.

Key Takeaways

  • AI is officially a channel, not a tool. Search, shopping, and PR are all happening inside AI environments now.
  • Authenticity outperforms aspiration. Whether you’re selling luxury goods or refreshing your brand, identity and connection drive growth.
  • Visibility depends on AI citations and structure. The brands getting mentioned in AI results are building more trust and traffic everywhere.
  • Automation is powerful, but it still needs control. As Google’s AI Max expands, you need to balance efficiency with oversight to protect budgets and brand safety.
  • Every brand action is a public statement. From rebrands to creator partnerships, perception moves fast. Plan your narratives or risk losing control of them.

Search & AI Evolution 

Search has moved beyond discovery. October’s updates from OpenAI and Google show how AI is collapsing the gap between queries and actions. Visibility means something different now.

OpenAI launches in-chat purchases

OpenAI rolled out Instant Checkout in ChatGPT. U.S. users can now buy products directly inside the chat. Powered by Stripe, the feature starts with Etsy listings and will expand to more merchants soon. Sellers on Shopify are auto-enrolled. Others can join by connecting product feeds and enabling Stripe checkout.

An ad in ChatGPT.

Our POV: ChatGPT shopping changes product discovery completely. If your product data isn’t complete, detailed, and conversational, you won’t show up. The most visible listings will have rich attributes and language that reflects how users naturally describe what they want.

What to do next: Audit your product feeds. Fill every field. Use detailed, long-form descriptions that anticipate real-world queries. Give the e-commerce agent what it needs to surface your products.
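
An audit like that can be scripted. Here’s a minimal sketch; the `audit_feed` helper and the field names are illustrative (loosely modeled on common Merchant-style product attributes), not a documented ChatGPT checkout spec:

```python
# Required fields are an assumption modeled on common product-feed attributes,
# not an official Instant Checkout specification.
REQUIRED = ["id", "title", "description", "price", "availability", "image_link"]

def audit_feed(items):
    """Return {item_id: [missing or empty fields]} for incomplete listings."""
    issues = {}
    for item in items:
        missing = [f for f in REQUIRED if not item.get(f)]
        if missing:
            issues[item.get("id", "<no id>")] = missing
    return issues

feed = [
    {"id": "sku-1", "title": "Ceramic mug", "description": "",
     "price": "14.99 USD", "availability": "in_stock",
     "image_link": "https://example.com/mug.jpg"},
    {"id": "sku-2", "title": "Tote bag", "description": "Organic cotton tote...",
     "price": "24.00 USD", "availability": "in_stock",
     "image_link": "https://example.com/tote.jpg"},
]
problems = audit_feed(feed)  # sku-1 gets flagged for its empty description
```

Run something like this against every feed export before you connect it, and fix flagged listings first.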

Google tests AI-written meta descriptions

Google began testing AI-generated snippets powered by Gemini. Instead of pulling your written meta description, the model writes or summarizes one based on on-page content.

Our POV: Google’s been rewriting descriptions for years. AI just made it smarter and less predictable. Treat your page intros as the new meta description because that’s what AI will pull from.

What to do next: Front-load the first 150 words of each key page with a clear summary of what the page delivers and why it matters. Tighten headings and intros, monitor CTR shifts, and adjust language when AI summaries drift from your brand’s tone.

Google Search Labs adds Agentic AI

Google’s AI Mode now lets users book restaurants and other services directly from results. Search is moving from recommending to acting.

Our POV: This isn’t a traffic killer. But signals are shifting. AI will handle the click path. The brands that win will have structured, verified, action-ready data.

What to do next: Audit structured data, integrate local feeds, and make sure your listings are up to date across booking platforms. When the search agent starts acting on your behalf, data hygiene becomes your conversion strategy.

Paid Media & Automation

AI is taking over ad delivery. Control is the new currency. You have to balance efficiency with visibility to keep performance from becoming unpredictable.

Google doubles down on AI Max

Google refreshed its AI Max ad pitch. The system is fully automated: it matches intent, rewrites copy, and routes users to brand assets. Powerful, but still a black box.

Google AI Max.

Our POV: Automation doesn’t replace strategy. Advertisers need visibility, not just results. Without strict guardrails, budgets can leak into low-value placements or off-brand creative.

What to do next: Run low-risk tests first. Add negative keyword lists, set URL exclusions, and manually review creative. Monitor performance closely until you can prove control before scaling.

Apple launches dedicated Games app

Apple introduced a standalone Games app with iOS 26, bridging Game Center and the App Store. Developers can now feature their games, run dual search visibility, and analyze engagement with new metrics later this year.

Apple's Games app.

Our POV: This isn’t a small tweak: Apple’s essentially building a second storefront. Game publishers who adapt early will own discoverability.

What to do next: Refresh creatives, optimize In-App Events, and plan for dual indexing between the Games app and App Store. When analytics arrive, use them to refine ASO and campaign timing.

Social & Content Trends

Creators and consumers are rewriting the rules. Authenticity, identity, and emotional connection drive engagement across platforms that once ran on aspiration and polish.

TikTok reframes luxury branding

TikTok’s new research shows luxury audiences care more about self-expression than status. It’s about showing who you are, not showing off.

TikTok's 4 Ls of Luxury concept.

Our POV: That shift goes way beyond luxury. Audiences in every category now expect brands to reflect their identity. Connection beats aspiration. Authenticity beats polish.

What to do next: Reevaluate your brand’s emotional identity. Work with creators who reinterpret your message through their lens. Build content that feels participatory, not performative.

UK YouTubers contribute £2.2B to the economy

YouTube creators generated £2.2 billion for the UK economy last year, supporting over 45,000 jobs. Parliament even launched a cross-party group to represent them.

Our POV: Creators aren’t influencers anymore. They’re small businesses with real economic weight. Partnering with them means investing in industries, not individuals.

What to do next: Build collaborations that help creators grow beyond campaigns. Shared education, joint products, or community-driven initiatives create deeper, longer-term value.

PR, Reputation & Brand Risk

Reputation management has become real-time and AI-measurable. From LLM citation tracking to brand backlash, every communication choice now echoes faster and louder.

Notified + Profound launch AI-driven PR monitoring

A first-of-its-kind industry partnership between these two companies now offers a tool that tracks how often press releases are cited by LLMs like ChatGPT and Gemini. It finally gives brands visibility into their “AI footprint.”

Our POV: PR just gained a measurable seat in AI discoverability. Knowing when AI cites your releases helps you shape future narratives.

What to do next: Integrate AI citation metrics into your analytics stack. Identify which stories get surfaced and refine future language to match the tone that earns citations.

Rebrands are riskier than ever

Cracker Barrel’s attempted rebrand backfired almost instantly. Modest design updates triggered outrage and political backlash—proof that brand refreshes now carry reputational stakes.

Our POV: Rebrands still matter, but they demand foresight. A design tweak is a message, whether you mean it or not.

What to do next: Before launching a new look, test reactions across audience segments and scenario-plan your communication strategy. Shape the story before the internet does.

Olivia Brown automates PR outreach

A new AI platform called Olivia Brown is automating nearly every part of digital PR, from writing press releases to pitching journalists and sending aggressive follow-ups. It promises to “democratize publicity,” but its bulk-send approach is flooding inboxes and straining relationships between brands and reporters who value relevance and trust.

The Olivia Brown interface.

Our POV: Automation can scale outreach, but PR runs on relationships. Bulk, untargeted pitching floods inboxes and burns the journalist trust that earns coverage in the first place.

What to do next: If you test AI outreach tools, keep pitch lists tight and personalized. Cap automated follow-ups, monitor journalist responses, and protect the relationships your long-term coverage depends on.

SEO 2.0: The New Search Game

Traditional rankings are giving way to AI visibility. The brands that master structure, credibility, and omnichannel authority are the ones AI systems will learn to trust and users will keep choosing.

Rankings + AI Citations

Traditional SEO metrics can’t capture how visible you are inside AI systems. NP Digital’s SEO 2.0 approach tracks AI citations alongside rankings to see how content performs in generative search.

Our POV: Rankings aren’t the endgame anymore. Visibility inside AI summaries is. The brands that get cited are the ones shaping what users read next.

What to do next: Create original, data-backed content that builds authority across multiple platforms: YouTube, Reddit, TikTok, and forums. These are the signals AI models use to decide who to trust.

America’s favorite new query: “Is it good or bad?”

SEMrush found that U.S. users are now searching in binary terms. Tens of millions of queries every month ask if something is “good” or “bad.”

A graphic showing the main topics behind "Good/Bad" searches from SEMrush.

Source

Our POV: AI Overviews have trained users to expect clear answers. If your content hedges or buries the lead, you’ll lose clicks and credibility.

What to do next: Structure pages for speed and certainty. Use FAQ blocks, schema markup, and straightforward intros that deliver the verdict early. This is how you earn trust in zero-click environments.
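
FAQ blocks pair naturally with schema.org’s FAQPage markup. Here’s a minimal sketch of generating that JSON-LD in Python; the `faq_jsonld` helper and the sample Q&A are ours, illustrative only:

```python
import json

def faq_jsonld(pairs):
    """Build a schema.org FAQPage block from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }

block = faq_jsonld([
    ("Is ad intelligence software worth it?",
     "Yes. For most paid teams it pays for itself by cutting wasted spend."),
])
# Embed in the page head so crawlers and AI systems can parse the verdict.
script_tag = f'<script type="application/ld+json">{json.dumps(block)}</script>'
```

Each question gets a direct, verdict-first answer, which is exactly the structure binary “good or bad” searches reward.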

Conclusion

AI is rewriting the rules of visibility, discovery, and trust. Success no longer depends on who publishes most. It depends on who provides the clearest data, most credible voice, and strongest structure. The brands investing in AI-ready content, authentic storytelling, and measurable strategy will own the next wave of search, social, and PR.

Need help applying these insights? Talk to the NP Digital team. We’re already helping brands adapt as things develop.


SEO vs. AI search: 101 questions that keep me up at night

SEO AI optimization GEO AEO LLMO

Look, I get it. Every time a new search technology appears, we try to map it to what we already know.

  • When mobile search exploded, we called it “mobile SEO.”
  • When voice assistants arrived, we coined “voice search optimization” and told everyone it would be the next big thing.

I’ve been doing SEO for years.

I know how Google works – or at least I thought I did.

Then I started digging into how ChatGPT picks citations, how Perplexity ranks sources, and how Google’s AI Overviews select content.

I’m not here to declare that SEO is dead or to state that everything has changed. I’m here to share the questions that keep me up at night – questions that suggest we might be dealing with fundamentally different systems that require fundamentally different thinking.

The questions I can’t stop asking 

After months of analyzing AI search systems, documenting ChatGPT’s behavior, and reverse-engineering Perplexity’s ranking factors, these are the questions that challenge most of the things I thought I knew about search optimization.

When math stops making sense

I understand PageRank. I understand link equity. But when I discovered Reciprocal Rank Fusion in ChatGPT’s code, I realized I don’t understand this:

  • Why does RRF mathematically reward mediocre consistency over single-query excellence? Is ranking #4 across 10 queries really better than ranking #1 for one?
  • How do vector embeddings determine semantic distance differently from keyword matching? Are we optimizing for meaning or words?
  • Why does temperature=0.7 create non-reproducible rankings? Should we test everything 10 times over now?
  • How do cross-encoder rerankers evaluate query-document pairs differently than PageRank? Is real-time relevance replacing pre-computed authority?

These sound like familiar SEO concepts. But inside LLMs, they appear to be entirely different mathematical frameworks. Or are they?
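
The “mediocre consistency” effect in the first question falls straight out of the RRF formula. Here’s a minimal sketch; the `rrf_fuse` helper and the toy rankings are ours, assuming the standard score of summing 1/(k + rank) with k = 60:

```python
from collections import defaultdict

def rrf_fuse(rankings, k=60):
    """Fuse several per-query rankings into one score per document.

    rankings: list of ranked lists of doc ids (best first).
    Each appearance adds 1 / (k + rank), so a document ranking
    moderately well across many queries can outscore one that
    ranks #1 for a single query.
    """
    scores = defaultdict(float)
    for ranked in rankings:
        for rank, doc in enumerate(ranked, start=1):
            scores[doc] += 1.0 / (k + rank)
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Doc A ranks #1 for one query; doc B ranks #4 across ten queries.
rankings = [["A"]] + [["x", "y", "z", "B"]] * 10
fused = rrf_fuse(rankings)
```

With k = 60, a single #1 contributes 1/61 ≈ 0.016, while #4 across ten queries totals 10/64 ≈ 0.156. The consistent document wins by nearly an order of magnitude.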

When scale becomes impossible

Google indexes hundreds of billions of pages. ChatGPT retrieves 38-65. This isn’t a small difference – it’s a 99.999% reduction, resulting in questions that haunt me:

  • Why do LLMs retrieve 38-65 results while Google indexes billions? Is this temporary or fundamental?
  • How do token limits establish rigid boundaries that don’t exist in traditional searches? When did search results become limited in size?
  • How does the k=60 constant in RRF create a mathematical ceiling for visibility? Is position 61 the new page 2?

Maybe they’re just current limitations. Or maybe, they represent a different information retrieval paradigm.

The 101 questions that haunt me:

  1. Is OpenAI also using CTR for citation rankings?
  2. Does AI read our page layout the way Google does, or only the text?
  3. Should we write short paragraphs to help AI chunk content better?
  4. Can scroll depth or mouse movement affect AI ranking signals?
  5. How do low bounce rates impact our chances of being cited?
  6. Can AI models use session patterns (like reading order) to rerank pages?
  7. How can a new brand be included in offline training data and become visible?
  8. How do you optimize a web/product page for a probabilistic system?
  9. Why are citations continuously changing?
  10. Should we run multiple tests to see the variance?
  11. Can we use long-form questions with the “blue links” on Google to find the exact answer?
  12. Are LLMs using the same reranking process?
  13. Is web_search a switch or a chance to trigger?
  14. Are we chasing ranks or citations?
  15. Is reranking fixed or stochastic?
  16. Are Google & LLMs using the same embedding model? If so, what’s the corpus difference?
  17. Which pages are most requested by LLMs and most visited by humans?
  18. Do we track drift after model updates?
  19. Why is EEAT easily manipulated in LLMs but not in Google’s traditional search?
  20. How many of us drove at least 10x traffic increases after Google’s algorithm leak?
  21. Why does the answer structure always change even when asking the same question within a day’s difference? (If there is no cache)
  22. Does post-click dwell on our site improve future inclusion?
  23. Does session memory bias citations toward earlier sources?
  24. Why are LLMs more biased than Google?
  25. Does offering a downloadable dataset make a claim more citeable?
  26. Why do we still have very outdated information in Turkish, even though we ask very up-to-date questions? (For example, when asking what’s the best e-commerce website in Turkiye, we still see brands from the late 2010s)
  27. How do vector embeddings determine semantic distance differently from keyword matching?
  28. Do we now find ourselves in need to understand the “temperature” value in LLMs?
  29. How can a small website appear inside ChatGPT or Perplexity answers?
  30. What happens if we optimize our entire website solely for LLMs?
  31. Can AI systems read/evaluate images in webpages instantly, or only the text around them?
  32. How can we track whether AI tools use our content?
  33. Can a single sentence from a blog post be quoted by an AI model?
  34. How can we ensure that AI understands what our company does?
  35. Why do some pages show up in Perplexity or ChatGPT, but not in Google?
  36. Does AI favor fresh pages over stable, older sources?
  37. How does AI re-rank pages once it has already fetched them?
  38. Can we train LLMs to remember our brand voice in their answers?
  39. Is there any way to make AI summaries link directly to our pages?
  40. Can we track when our content is quoted but not linked?
  41. How can we know which prompts or topics bring us more citations? What’s the volume?
  42. What would happen if we were to change our monthly client SEO reports by just renaming them to “AI Visibility AEO/GEO Report”?
  43. Is there a way to track how many times our brand is named in AI answers? (Like brand search volumes)
  44. Can we use Cloudflare logs to see if AI bots are visiting our site?
  45. Do schema changes result in measurable differences in AI mentions?
  46. Will AI agents remember our brand after their first visit?
  47. How can we make a local business with a map result more visible in LLMs?
  48. Will Google AI Overviews and ChatGPT web answers use the same signals?
  49. Can AI build a trust score for our domain over time?
  50. Why do we need to be visible in query fanouts? For multiple queries at the same time? Why is there synthetic answer generation by AI models/LLMs even when users are only asking a question?
  51. How often do AI systems refresh their understanding of our site? Do they also have search algorithm updates?
  52. Is the freshness signal sitewide or page-level for LLMs?
  53. Can form submissions or downloads act as quality signals?
  54. Are internal links making it easier for bots to move through our sites?
  55. How does the semantic relevance between our content and a prompt affect ranking?
  56. Can two very similar pages compete inside the same embedding cluster?
  57. Do internal links help strengthen a page’s ranking signals for AI?
  58. What makes a passage “high-confidence” during reranking?
  59. Does freshness outrank trust when signals conflict?
  60. How many rerank layers occur before the model picks its citations?
  61. Can a heavily cited paragraph lift the rest of the site’s trust score?
  62. Do model updates reset past re-ranking preferences, or do they retain some memory?
  63. Why can we find better results via the 10 blue links without any hallucination? (mostly)
  64. Which part of the system actually chooses the final citations?
  65. Do human feedback loops change how LLMs rank sources over time?
  66. When does an AI decide to search again mid-answer? Why do we see more/multiple automatic LLM searches during a single chat window?
  67. Does being cited once make it more likely for our brand to be cited again? If we rank in the top 10 on Google, we can remain visible while staying in the top 10. Is it the same with LLMs?
  68. Can frequent citations raise a domain’s retrieval priority automatically?
  69. Are user clicks on cited links stored as part of feedback signals?
  70. Are Google and LLMs using the same deduplication process?
  71. Can citation velocity (growth speed) be measured like link velocity in SEO?
  72. Will LLMs eventually build a permanent “citation graph” like Google’s link graph?
  73. Do LLMs connect brands that appear in similar topics or question clusters?
  74. How long does it take for repeated exposure to become persistent brand memory in LLMs?
  75. Why doesn’t Google show 404 links in results, while LLMs do in answers?
  76. Why do LLMs fabricate citations while Google only links to existing URLs?
  77. Do LLM retraining cycles give us a reset chance after losing visibility?
  78. How do we build a recovery plan when AI models misinterpret information about us?
  79. Why do some LLMs cite us while others completely ignore us?
  80. Are ChatGPT and Perplexity using the same web data sources?
  81. Do OpenAI and Anthropic rank trust and freshness the same way?
  82. Are per-source limits (max citations per answer) different for LLMs?
  83. How can we determine if AI tools cite us following a change in our content?
  84. What’s the easiest way to track prompt-level visibility over time?
  85. How can we make sure LLMs assert our facts as facts?
  86. Does linking a video to the same topic page strengthen multi-format grounding?
  87. Can the same question suggest different brands to different users?
  88. Will LLMs remember previous interactions with our brand?
  89. Does past click behavior influence future LLM recommendations?
  90. How do retrieval and reasoning jointly decide which citation deserves attribution?
  91. Why do LLMs retrieve 38-65 results per search while Google indexes billions?
  92. How do cross-encoder rerankers evaluate query-document pairs differently than PageRank?
  93. Why can a site with zero backlinks outrank authority sites in LLM responses?
  94. How do token limits create hard boundaries that don’t exist in traditional search?
  95. Why does temperature setting in LLMs create non-deterministic rankings?
  96. Does OpenAI allocate a crawl budget for websites?
  97. How does Knowledge Graph entity recognition differ from LLM token embeddings?
  98. How does crawl-index-serve differ from retrieve-rerank-generate?
  99. How does temperature=0.7 create non-reproducible rankings?
  100. Why is a tokenizer important?
  101. How does knowledge cutoff create blind spots that real-time crawling doesn’t have?
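Several of the questions above (28, 95, 99) circle around temperature. A minimal sketch of how temperature reshapes a softmax distribution over candidate next tokens, and why T=0.7 makes outputs non-reproducible; the logits are invented for illustration:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/T before softmax; T < 1 sharpens, T > 1 flattens."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                         # hypothetical scores for three candidates
sharp = softmax_with_temperature(logits, 0.1)    # near-deterministic: top choice dominates
default = softmax_with_temperature(logits, 0.7)  # runner-up keeps real probability mass
flat = softmax_with_temperature(logits, 5.0)     # close to uniform
# At T=0.7, repeated sampling from `default` can pick different candidates
# on identical prompts, which is one source of shifting citations.
```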

When trust becomes probabilistic

This one really gets me. Google links to URLs that exist, whereas AI systems can completely make things up:

  • Why can LLMs fabricate citations while Google only links to existing URLs?
  • How does a 3-27% hallucination rate compare to Google’s 404 error rate?
  • Why do identical queries produce contradictory “facts” in AI but not in search indices?
  • Why do we still have outdated information in Turkish even though we ask up-to-date questions?

Are we optimizing for systems that might lie to users? How do we handle that?

Where this leaves us

I’m not saying AI search optimization/AEO/GEO is completely different from SEO. I’m just saying that I have 100+ questions that my SEO knowledge can’t answer well, yet.

Maybe you have the answers. Maybe nobody does (yet). But as of now, I don’t have the answers to these questions.

What I do know, however, is this: These questions aren’t going anywhere. And, there will be new ones.

The systems that generate these questions aren’t going anywhere either. We need to engage with them, test against them, and maybe – just maybe – develop new frameworks to understand them.

The winners in this new field won’t be those who have all the answers. They’ll be the ones asking the right questions and testing relentlessly to find out what works.

This article was originally published on metehan.ai (as 100+ Questions That Show AEO/GEO Is Different Than SEO) and is republished with permission.


Tim Berners-Lee warns AI may collapse the ad-funded web

Sir Tim Berners-Lee, who invented the World Wide Web, is worried that the ad-supported web will collapse due to AI. In a new interview with Nilay Patel on Decoder, Berners-Lee said:

  • “I do worry about the infrastructure of the web when it comes to the stack of all the flow of data, which is produced by people who make their money from advertising. If nobody is actually following through the links, if people are not using search engines, they’re not actually using their websites, then we lose that flow of ad revenue. That whole model crumbles. I do worry about that.”

Why we care. There is a split in our industry, where one side thinks “it’s just SEO” and the other sees a near future where visibility in AI platforms has replaced rankings, clicks, and traffic. We know SEO still isn’t dead and people are still using search engines, but the writing is still on the wall (Google execs have said as much in private). Berners-Lee seems to envision the same future, warning that if people stop following links and visiting websites, the entire web model “crumbles,” leaving AI platforms with value while the ad-supported web and SEO fade.

On monopolies. In the same interview, Berners-Lee said a centralized provider or monopoly isn’t good for the web:

  • “When you have a market and a network, then you end up with monopolies. That’s the way markets work.
  • “There was a time before Google Chrome was totally dominant, when there was a reasonable market for different browsers. Now Chrome is dominant.
  • “There was a time before Google Search came along, there were a number of search engines and so on, but now we have basically one search engine.
  • “We have basically one social network. We have basically one marketplace, which is a real problem for people.”

On the semantic web. Berners-Lee worked on the Semantic Web for decades (a web that machines can read as easily as humans). As for where it’s heading next: data by AI, for AI (and also people, but especially AI):

  • “The Semantic Web has succeeded to the extent that there’s the linked open data world of public databases of all kinds of things, about proteins, about geography, the OpenStreetMap, and so on. To a certain extent, the Semantic Web has succeeded in two ways: all of that, and because of Schema.org.
  • “Schema.org is this project of Google. If you have a website and you want it to be recognized by the search engine, then you put metadata in Semantic Web data, you put machine-readable data on your website. And then the Google search engine will build a mental model of your band or your music, whatever it is you’re selling.
  • “In those ways, with the link to the data group and product database, the Semantic Web has been a success. But then we never built the things that would extract semantic data from non-semantic data. Now AI will do that.
  • “Now we’ve got another wave of the Semantic Web with AI. You have a possibility where AIs use the Semantic Web to communicate between one and two possibilities and they communicate with each other. There is a web of data that is generated by AIs and used by AIs and used by people, but also mainly used by AIs.”

On blocking AI crawlers. Discussion turned to Cloudflare, its attempts to block crawlers, and its pay-per-crawl initiative. Berners-Lee was asked whether the web’s architecture could be redesigned so websites and database owners could bake a “not unless you pay me” rule into open standards, forcing AI crawlers and other clients across the ecosystem to honor payment requirements by default. His response:

  • “You could write the protocols. One, in fact, is micropayments. We’ve had micropayments projects in W3C every now and again over the decades. There have been projects at MIT, for example, for micropayments and so on. So, suddenly there’s a “payment required” error code in HTTP. The idea that people would pay for information on the web; that’s always been there. But of course whether you’re an AI crawler or whether you are an individual person, it’s the way you want to pay for things that’s going to be very different.”
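The “payment required” status Berners-Lee refers to is HTTP 402, reserved in the spec since HTTP/1.1. A minimal sketch of a server-side gate along those lines; the crawler names are illustrative, not an exhaustive or authoritative list, and real deployments (e.g., Cloudflare’s) work at the edge rather than in app code:

```python
# Sketch: answer HTTP 402 ("Payment Required") to known AI crawlers
# unless the request carries a (hypothetical) payment credential.
AI_CRAWLER_TOKENS = ("GPTBot", "ClaudeBot", "CCBot", "PerplexityBot")

def status_for(user_agent: str, paid: bool = False) -> int:
    """Return the HTTP status to serve: 402 for unpaid AI crawlers, 200 otherwise."""
    is_ai_crawler = any(tok.lower() in user_agent.lower() for tok in AI_CRAWLER_TOKENS)
    if is_ai_crawler and not paid:
        return 402  # Payment Required
    return 200
```

The hard part, as he notes, isn’t the protocol; it’s agreeing on how a crawler proves payment.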

The interview. Sir Tim Berners-Lee doesn’t think AI will destroy the web


Google expands image search ads with mobile carousel format

Google rolled out AI-powered ad carousels in the Images tab on mobile, now appearing across all categories — not just shopping-related ones.

Why we care. Ads are now showing directly within image search results, giving brands a new, highly visual placement to grab attention where users are actively browsing and comparing visuals. With users often browsing images to explore ideas or compare options, these AI-powered carousels give brands a chance to influence discovery earlier in the journey.

The details:

  • The new format features horizontally scrollable carousels with images, headlines, and links.
  • These carousels are powered by AI-driven ad matching, pulling in visuals relevant to the user’s query — even in non-commerce categories like law or insurance.
  • The feature was first spotted by ADSQUIRE founder Anthony Higman, who shared screenshots of the new layout on X.

The big picture. By integrating ads more seamlessly into visual search, Google is blurring the line between organic and paid discovery, a continued shift toward immersive, image-based ad experiences that go beyond traditional text and product listings.


Why AI availability is the new battleground for brands


GEO, AI SEO, AEO – call it what you like.

The label doesn’t matter nearly as much as understanding the shift behind it.

At the center of that shift lies one idea that explains everything: AI availability – and here’s why it matters.

What is AI availability?

The three pillars of brand availability

The idea of AI availability comes from Byron Sharp, research professor at the Ehrenberg-Bass Institute, who introduced it in a comment on one of my LinkedIn posts.

Sharp’s work underpins modern brand science and shows that growth depends on availability.

Brands grow through sales, and sales grow through two kinds of availability: mental and physical.

  • Mental availability refers to the likelihood of being considered in a purchasing situation.
  • Physical availability refers to the ease and convenience with which an item can be bought.

For years, these two principles have guided brand strategy.

They explain why Coca-Cola invests in constant visibility and why Amazon makes every click lead to a checkout.

But in the era of generative search, there’s now a third kind of availability marketers need to understand – the likelihood that your brand or product will be recommended by an AI system when a user is ready to buy.

That is AI availability – and it changes everything.

AI as the new influencer

If you are still thinking of AI as a technology, you are already behind.

Think of it instead as the world’s most powerful influencer.

ChatGPT alone is used by about 10% of the global adult population, according to recent research from OpenAI, Harvard, and Duke. 

That makes it far more pervasive than any social media platform at a similar stage in its life cycle.

Most people do not use it to code or write poetry – they use it to make decisions. 

Nearly 80% of ChatGPT conversations, the same study found, fall into three categories: 

  • Practical guidance.
  • Seeking information.
  • Writing.

In other words, people are asking AI to help them decide what to do, buy, and believe. 

The study also shows that these conversations are increasingly focused on everyday decisions rather than work. 

The distinction between search, research, and conversation is collapsing.

Source: “How People Use ChatGPT,” OpenAI, Harvard University, and Duke University

The result is simple.

AI systems are now the gatekeepers of modern discovery. They decide what information to surface and which businesses appear in front of consumers.

Forget the Kardashians. Forget influencer marketing.

If you’re invisible to AI, you’re invisible to the market.

AI is the new influencer.

From keywords to fitness signals

The SEO industry has spent two decades optimizing for how humans search with keywords – but that is changing.

Large language models (LLMs) infer meaning from context, probability, and performance.

They are scanning for what we can call fitness signals – a term from network science.

Fitness describes a product or service’s inherent ability to outcompete rivals, allowing one business to dominate a market even if others started earlier or invested more.

Think of how Google overtook Yahoo. 

It wasn’t just about better search algorithms – it was a better business model built on a stronger performance attribute: relevance.

These performance attributes are what make a business fit for survival. They are the qualities that define how well you solve a problem for a customer.

AI deploys search strategies to identify which businesses solve which problems most effectively. 

Because it exists to serve human needs, those same signals determine your AI availability.

Yes, AI uses search strings, fan-out queries, and reciprocal rank fusion, among many other strategies and tactics. 

It doesn’t search like humans because it isn’t bound by the same cognitive and speed limitations.

Humans search by “satisficing.” Keywords + Page 1 rankings = good enough.

Machines operate on an industrial scale – searching, gathering, assessing, and recommending.

Dig deeper: Fame engineering: The key to generative engine optimization

The psychology of performance

To understand why this works, we turn to evolutionary psychology.

Geoffrey Miller, author of “Spent,” explained that humans have always been driven by two fundamental needs. 

  • We seek to display fitness indicators that enhance our status.
  • We chase fitness cues that increase our chances of survival or pleasure.

Consumer products have evolved to meet those needs. Luxury goods signal success. 

Convenience products signal control. Both deliver psychological reassurance.

AI works in a similar way. Its goal is to satisfy human intent. 

When someone types a complex prompt into an LLM, the AI interprets it not as a string of keywords but as a statement of need. 

It then searches its training data and live information to find the most relevant and trustworthy performance attributes that match that need.
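“Relevant” here is typically measured geometrically: the prompt and candidate passages are embedded as vectors, and cosine similarity stands in for semantic match. A toy sketch using hand-made bag-of-words vectors; a real system would use a learned embedding model that captures meaning beyond shared tokens, but the geometry is the same:

```python
import math
from collections import Counter

def bow_vector(text):
    """Toy bag-of-words 'embedding', for illustration only."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical prompt and pages:
prompt = bow_vector("best waterproof hiking boots for wide feet")
page_a = bow_vector("waterproof hiking boots for wide feet reviewed")
page_b = bow_vector("annual report for investors")
# page_a sits far closer to the prompt's statement of need than page_b.
assert cosine(prompt, page_a) > cosine(prompt, page_b)
```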

That is why context matters so much more than content. 

You are no longer competing for blue links – you are competing for cognitive inclusion in an AI’s mental model of your category. 

Your job is to make your brand’s fitness and performance attributes unmissable to that model.

Category entry points and the new SEO

Category entry points are the situations, needs, and triggers that put someone in the market to buy.

In the world of GEO, these are your new keywords.

They are what users express in prompts rather than in search terms. 

“Where can I find sustainable running shoes for flat feet?” is not a keyword query – it is a buying situation.

Your strategy is to:

  • Understand those buying situations.
  • Map them to your own performance attributes.
  • Create enough context that AI can confidently associate your brand with the solution.

That means describing not only what you do, but how you do it, who you do it for, and why you are distinctive.

This isn’t new. It’s the same foundational brand positioning marketers have always needed.

What’s changed is that it now feeds the world’s most sophisticated recommender system.

Dig deeper: AI search is booming, but SEO is still not dead

A local example: The sandwich shop in Stoke

Imagine a small sandwich shop in Stoke. It’s not glamorous, serving sausage sandwiches, bacon rolls, and coffee. 

The owners don’t want to be influencers. They just want customers.

How does a business like this make itself visible to AI?

Turn everyday details into data signals

The first step is to make its performance attributes explicit.

  • What ingredients are used?
  • Where do they come from?
  • What makes the sandwiches good value?
  • How long has the business served the local community?
  • Where is it located?
  • What is the hygiene rating?

All these details are small signals of trust and quality. 

A strong website should describe them in clear, human language. 

Every piece of information tells AI that:

  • This business exists.
  • It serves specific needs.
  • It performs well in doing so.
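One concrete way to state those attributes in machine-readable form is schema.org markup. A sketch built in Python for clarity; the business details are invented, and the property set is a small sample of what LocalBusiness/Restaurant types support:

```python
import json

# Hypothetical sandwich shop expressed as schema.org JSON-LD.
shop = {
    "@context": "https://schema.org",
    "@type": "Restaurant",
    "name": "Stoke Sandwich Co.",  # invented name
    "servesCuisine": "Sandwiches",
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Stoke-on-Trent",
        "addressCountry": "GB",
    },
    "foundingDate": "1998",  # "how long has it served the community"
    "priceRange": "£",       # "good value"
}

# Embed this in a <script type="application/ld+json"> tag on the page.
json_ld = json.dumps(shop, indent=2)
```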

Build reputation where AI listens

Next, build local reputation. 

  • Encourage reviews on Google, TripAdvisor, and social media. 
  • Invite local bloggers to taste and review the food. 
  • Issue a press release about an anniversary or charity event. 

Every third-party mention adds more mutual information between your brand and the market – and that’s what AI learns from.

GEO is where good brand marketing meets intelligent technology.

Embrace both SEO and GEO

And for the “GEO is just SEO” crowd, yes, ranking on Google and in the local pack might be the best bet for increasing AI availability for this shop. 

However, it might also be hosting a relaunch event and inviting 30 local bloggers and press members to secure coverage.

Both are valid tactics with multiple benefits – and you can do both.

Until Google decides what it’s doing with the 10 blue links and AI Mode, bothism is the best plan – SEO and GEO, not just one.

From PR to performance

Larger businesses apply the same logic at scale. The recent wave of acquisitions in the SEO and analytics sector is a testament to this. 

These are deliberate attempts to control information ecosystems.

Owning media outlets, communities, and data platforms increases a company’s visibility in the information that AIs learn from. 

It creates an abundance of references that confirm expertise, authoritativeness, and relevance.

In traditional SEO, this is referred to as off-page optimization. 

In GEO, it is strategic distribution – where performance attributes and PR meet.

Your goal is to describe what you do, while making sure others also describe it.

Fame, distinctive assets, and consistency still matter. But the audience is no longer just human.

Dig deeper: AI search relies on brand-controlled sources, not Reddit: Report

Building AI availability

To make your brand visible to machines that now mediate discovery, you need to understand how and where that visibility is built.

Start with a visibility audit

Diagnose your current presence. 

Identify the category entry points most relevant to your products, and ask what prompts a user might type when they are ready to buy. 

Tools such as Semrush’s AI Enterprise platform can simulate these scenarios and show where your brand appears.

Get listed where AI looks

Identify the sources that AI models reference. 

Many LLMs use a mix of training data and live search, with listicles, directories, and “best of” articles among the most common data sources.

Being included in those lists is a sensible marketing strategy. 

Just as supermarkets stock their own shelves with their best products, you should position your brand among the best available options.

Expand your owned ecosystem

Over time, you’ll find saturation points where every competitor appears in the same lists. 

At that stage, innovation and owned media become essential. 

Start your own publication, commission original research, and contribute to conversations in your category.

Create context that earns recommendations

Digital shelf space isn’t the scarce resource; credible context is what amplifies your fitness signals.

Done well, GEO is efficient, data-led, and creative. But its success depends entirely on having a brand worth recommending. 

That’s why GEO is the outcome of proper marketing. 

Still, it’s proper marketing with a specific focus: increasing the likelihood of being recommended by AI.

The future of visibility

SEO has always been about optimization. 

GEO is about promotion – building and distributing enough credible, distinctive information about your business that an AI can recognize it as a trusted source.

The techniques look familiar: PR, branding, copywriting, partnerships, directories, and reviews. 

The difference lies in intent. You’re not feeding a search engine – you’re training an intelligence.

This requires a new mindset. 

  • You’re no longer optimizing for human users who type short queries into Google. You’re optimizing for a probabilistic model that interprets human intent across millions of contexts. 
  • It doesn’t care about your title tags. It cares about whether you look like the right answer to a real problem.

GEO is both exciting and humbling. 

It reconnects brand marketing and search after years of false division, and reminds us that while the tools evolve, the fundamentals endure.

You still need to be known, available, and distinctive. 

And now your audience includes machines that think like humans but learn on their own terms.

Back to fundamentals, forward with AI

GEO is a return to marketing fundamentals seen through a new lens. 

Businesses still grow by increasing availability. 

Consumers still buy from the brands they notice and can easily access. 

What has changed is the mediator: AI has become the primary distributor of attention.

Your task as a marketer is to make your brand’s performance attributes, category entry points, and distinctive assets visible in the data that AI consumes. 

The goal hasn’t changed – to be chosen. Only the mechanics are new.

Because in the age of AI, the only brands that matter are the ones the machines remember.
