Google tests “Journey Aware Bidding” to optimize Search campaigns

Is it time to rethink your current Google Ads strategy?

Google is preparing a new Search bidding model called Journey Aware Bidding, designed to factor in the entire customer journey — not just the final biddable conversion — to improve prediction accuracy and campaign performance.

How it works:

  • Journey Aware Bidding learns from your primary conversion goal plus additional, non-biddable journey stages.
  • Advertisers who fully track and properly categorize each step of their purchase funnel stand to benefit the most.
  • Google recommends mapping the entire journey — from lead submission to final purchase — and labeling all touchpoints as conversions within standard goals.

Why we care. Performance advertisers have long struggled with fragmented signals across the funnel. Journey Aware Bidding brings more of their conversion funnel into Google’s prediction models, potentially improving efficiency for long, multi-step journeys like lead gen.

Instead of optimizing on a single end-stage signal, Google can learn from every meaningful touchpoint, leading to smarter bids and better alignment with real business outcomes. This update rewards advertisers with strong tracking and could deliver a meaningful performance lift once fully launched.

What advertisers need to do:

  • Choose a single KPI-aligned stage (e.g., purchase, qualified lead) as the optimization target.
  • Mark other journey stages as primary conversions, but exclude them from campaign-level or account-default bidding optimization.
  • Ensure clean tracking and clear categorization of every step.

Pilot status. A closed pilot is due to launch this year for a small group of advertisers, with broader availability expected afterward as Google refines the model.

The bottom line. Journey Aware Bidding could represent a major shift in Search optimization: Google wants its bidding systems to understand not just what converts — but how users get there.

First seen. The details of this new bidding model were shared by Senior Consultant Georgi Zayakov on LinkedIn, among other products featured at Think Week 2025.


Google offers a “less disruptive” fix to EU ad-tech showdown

Google submitted a compliance plan to the European Commission that proposes changes to its ad-tech operations — but rejects calls to break up its business.

How it works:

  • Google is offering product-level changes — for example, giving publishers the ability to set different minimum prices for different bidders in Google Ad Manager.
  • It’s also proposing greater interoperability between Google’s tools and those of rivals, in order to give publishers and advertisers more flexibility.
  • The company says these tweaks would resolve the European Commission’s concerns without a “disruptive break-up.”

Why we care. Google’s proposed “non-disruptive” fixes could preserve platform stability and avoid the turbulence of a forced breakup — but they may also shape future auction dynamics, pricing transparency, and access to competitive tools. In short, the outcome will influence how much control, choice, and cost efficiency advertisers have in Europe’s ad ecosystem.

Between the lines. Google is leaning on technical fixes rather than major structural overhaul — but critics argue that without deeper reform, the power dynamics in ad tech may not fundamentally shift.

The bottom line. Google is trying to strike a compromise: addressing the EU’s antitrust concerns while keeping its integrated ad-tech business intact. Regulators now face a choice: accept the tweaks — or push harder for a breakup.

Dig Deeper. EU fines Google $3.5 billion over anti-competitive ad-tech business


Small tests to yield big answers on what influences LLMs


Undoubtedly, one of the hot topics in SEO over the last few months has been how to influence LLM answers. Every SEO is trying to come up with strategies. Many have created their own tools using “vibe coding,” where they test their hypotheses and engage in heated debates about what each LLM and Google use to pick their sources.

Some of these debates can get very technical, touching on topics like vector embeddings, passage ranking, retrieval-augmented generation (RAG), and chunking. These theories are great—there’s a lot to learn from them and turn into practice. 

However, if some of these AI concepts are going way over your head, let’s take a step back. I’ll walk you through some recent tests I’ve run to help you gain an understanding of what’s going on in AI search without feeling overwhelmed so you can start optimizing for these new platforms.

Create branded content and check for results

A while ago, I went to Austin, Texas, for a business outing. Before the trip, I wondered if I could “teach” ChatGPT about my upcoming travels. There was no public information about the trip on the web, so it was a completely clean test with no competition.

I asked ChatGPT, “is Gus Pelogia going to Austin soon?” The initial answer was what you’d expect: He doesn’t have any trips planned to Austin.

That same day, a few hours later, I wrote a blog post on my website about my trip to Austin. Six hours after I published the post, ChatGPT’s answer changed: Yes, Gus IS going to Austin to meet his work colleagues.

ChatGPT prompts before and after the blog post was published; the new post was enough to change ChatGPT’s answer.

ChatGPT used an AI framework called RAG (Retrieval Augmented Generation) to fetch the latest result. Basically, it didn’t have enough knowledge about this information in its training data, so it scanned the web to look for an up-to-date answer.
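In code, that loop is easy to picture. Here is a minimal retrieve-then-generate sketch; the function names and objects are illustrative placeholders, not OpenAI’s actual pipeline:

```python
# Minimal RAG sketch. `llm` and `search_engine` are hypothetical stand-ins
# for a language model client and a web search tool.

def answer_with_rag(question: str, llm, search_engine) -> str:
    # 1. Retrieve: training data may be stale, so fetch fresh web results.
    documents = search_engine.search(question, top_k=5)

    # 2. Augment: pack the retrieved snippets into the prompt as grounding context.
    context = "\n\n".join(doc.snippet for doc in documents)
    prompt = (
        "Answer using only the sources below.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}"
    )

    # 3. Generate: the model answers conditioned on the retrieved context
    #    rather than on its training data alone.
    return llm.generate(prompt)
```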

Interestingly enough, it took a few days until the actual blog post with detailed information was found by ChatGPT. Initially, ChatGPT had found a snippet of the new blog post on my homepage and reindexed the page within the six-hour range. It was using just the blog post’s page title to change its answer before actually “seeing” the whole content days later.

Some learnings from this experiment:

  • New information on webpages reaches ChatGPT answers in a matter of hours, even for small websites. Don’t think your website is too small or insignificant to get noticed by LLMs—they’ll notice when you add new content or refresh existing pages, so it’s important to have an ongoing brand content strategy.
  • The answers in ChatGPT are highly dependent on the content published on your website. This is especially true for new companies where there are limited sources of information. ChatGPT didn’t confirm that I had upcoming travel until it fetched the information from my blog post detailing the trip.
  • Use your webpages to optimize how your brand is portrayed beyond showing up in competitive keywords for search. This is your opportunity to promote a certain USP or brand tagline. For instance, “The Leading AI-Powered Marketing Platform” and “See everyday moments from your close friends” are used, respectively, by Semrush and Instagram on their homepages. While users probably aren’t searching for these keywords, it’s still an opportunity for brand positioning that will resonate with them.


Test to see if ChatGPT is using Bing or Google’s index

The industry has been ringing alarm bells about whether ChatGPT uses Google’s index instead of Bing’s. So I ran another small test to find out: I added a <meta name="googlebot" content="noindex"> tag on the blog post, allowing only Bingbot for nine days.

If ChatGPT is using Bing’s index, it should find my new page when I prompt about it. Again, this was on a new topic and the prompt specifically asked for an article I wrote, so there wouldn’t be any doubts about what source to show.

The page got indexed by Bing after a couple of days, while Google wasn’t allowed to see it.

New article has been indexed by Bingbot

I kept asking ChatGPT, with multiple prompt variations, if it could find my new article. For nine days, nothing changed—it couldn’t find the article. It got to the point that ChatGPT hallucinated a URL (or, more charitably, offered its best guess).

ChatGPT made-up URL: https://www.guspelogia.com/learnings-from-building-a-new-product-as-an-seo
Real URL: https://www.guspelogia.com/learnings-new-product-seo

GSC shows that it can’t index the page due to the “noindex” tag

I eventually gave up and allowed Googlebot to index the page. A few hours later, ChatGPT changed its answer and found the correct URL.

On the top, ChatGPT’s answer when Googlebot was blocked. On the bottom, ChatGPT’s answer after Googlebot was allowed to see the page.

Interestingly enough, the link to the article was presented on my homepage and blog pages, yet ChatGPT couldn’t display it. It only found that the blog post existed based on the text on those pages, even though it didn’t follow the link.

Yet, there’s no harm in setting up your website for success on Bing. Bing is one of the search engines that adopted IndexNow, a simple ping that informs search engines that a URL’s content has changed. This implementation allows Bing to reflect updates in its search results quickly.

While we all suspect (with evidence) that ChatGPT isn’t using Bing’s index, setting up IndexNow is a low-effort task that’s worthwhile.
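For reference, an IndexNow submission is a single HTTP request to the documented api.indexnow.org endpoint. A minimal Python sketch, with the host, key, and key file as placeholders you’d swap for your own:

```python
import requests

# Ping IndexNow that a URL changed. Participating engines (Bing, Yandex,
# and others) share submissions. Host, key, and keyLocation are placeholders;
# the key is a value you generate and host in a text file at your site root.
payload = {
    "host": "www.example.com",
    "key": "your-indexnow-key",
    "keyLocation": "https://www.example.com/your-indexnow-key.txt",
    "urlList": ["https://www.example.com/your-updated-page"],
}

response = requests.post("https://api.indexnow.org/indexnow", json=payload, timeout=10)
print(response.status_code)  # 200 or 202 means the submission was accepted
```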

Change the content on a source used by RAG

Clicks are becoming less important. Instead, being mentioned in sources like Google’s AI Mode is emerging as a new KPI for marketing teams. SEOs are testing multiple tactics to “convince” LLMs about a topic, from writing about it on LinkedIn Pulse to controlled experiments with expired domains and hacked sites. In some ways, it feels like old-school SEO is back.

We’re all talking about being included in AI search results, but what happens when a company or product loses a mention on a page? Imagine a specific model of earbuds is removed from a “top budget earbuds” list—would the product lose its mention, or would Google find a new source to back up its AI answer? 

While the answer could always be different for each user and each situation, I ran another small test to find out.

In a listicle that mentioned multiple certification courses, I identified one course that was no longer relevant, so I removed mentions of it from multiple pages on the same domain. I did this to keep the content relevant, so measuring the changes in AI Mode was a side effect.

Initially, within the first few days of the course getting removed from the cited URL, it continued to be part of the AI answer for a few pre-determined prompts. Google simply found a new URL in another domain to validate its initial view. 

However, within a week, the course disappeared completely from AI Mode and ChatGPT. Even though Google had found another URL validating the course listing, once the “original source” (in this case, the listicle) was updated to remove the course, Google (and, by extension, ChatGPT) updated its results as well.

This experiment suggests that changing the content on the source cited by LLMs can impact the AI results. But take this conclusion with a pinch of salt, as it was a small test with a highly targeted query. I specifically had a prompt combining “domain + courses” so the answer would come from one domain.

Nonetheless, while in the real world it’s unlikely one citation URL would hold all the power, I’d hypothesize that losing a mention on a few high-authority pages would have the side effect of losing the mention in an AI answer.

Test small, then scale

Tests in small and controlled environments are important for learning and give confidence that your optimization has an effect. Like everything else I do in SEO, I start with an MVP (Minimum Viable Product), learn along the way, and once/if evidence is found, make changes at scale.

Do you want to change the perception of a product on ChatGPT? You won’t get dozens of cited sources to talk about you straight away, so you’d have to reach out to each single source and request a mention. You’ll quickly learn how hard it is to convince these sources to update their content and whether AI optimization becomes a pay-to-play game or if it can be done organically.

Perhaps you’re a source that’s mentioned often when people search for a product, like earbuds. Run your MVPs to understand how much changing your content influences AI answers before you claim your influence at scale, as the changes you make could backfire. For example, what if you stop being a source for a topic due to removing certain claims from your pages?

There’s no set time for these tests to show results. As a general rule, SEOs say results take a few months to appear. In the first test in this article, it took just a few hours.

Running LLM tests with larger websites

Working in large teams or on large websites can be a challenge when doing LLM testing. My suggestion is to create specific initiatives and inform all stakeholders about changes to avoid confusion later, as they might question why these changes are happening.

One simple but effective test done by SEER Interactive was to update their footer tagline.

  • From: Remote-first, Philadelphia-founded
  • To: 130+ Enterprise Clients, 97% Retention Rate 

After the footer change, ChatGPT 5 started mentioning the new tagline within 36 hours for a prompt like “tell me about Seer Interactive.” I’ve checked, and while the answer is different every time, it still mentions the “97% retention rate.”

Imagine if you decide to change the content on a number of pages, but someone else has an optimization plan for those same pages. Always run just one test per page, as results will become less reliable if you have multiple variables.

Make sure to research your prompts, have a tracking methodology, and spread the learnings across the company, beyond your SEO counterparts. Everyone is interested in AI right now, all the way up to C-levels.

Another suggestion is to use a tool like Semrush’s AI SEO toolkit to see the key sentiment drivers about a brand. Start with the listed “Areas for Improvement”—this should give you plenty of ideas for tests beyond “SEO Reason,” as it reflects how the brand is perceived beyond organic results.

Checklist: Getting started with LLM optimization

Things are changing fast with AI, and it’s certainly challenging to keep up to date. There’s an overload of content right now, a multitude of claims, and, I dare to say, not even the LLM platforms running them have things fully figured out.

My recommendation is to find the sources you trust (industry news, events, professionals) and run your own tests using the knowledge you have. The results you find for your brands and clients are always more valuable than what others are saying.

It’s a new world of SEO and everyone is trying to figure out what works for them. The best way to follow the curve (or stay ahead of it) is to keep optimizing and documenting your changes.

To wrap it up, here’s a checklist for your LLM optimization:

  • Before starting a test, make sure your selected prompts consistently return the answer you expect (such as not mentioning your brand or a feature of your product). Otherwise, the new brand mention or link could be a coincidence, not a result of your work.
  • If the same claim is made on multiple pages on your website, update them across the board to increase your chances of success.
  • Use your own website and external sources (e.g., via digital PR) to influence your brand perception. It’s unclear if users will cross-check AI answers or just trust what they’re told.


Most ChatGPT links get 0% CTR – even highly visible ones


A leaked file reveals the user interactions that OpenAI is tracking, including how often ChatGPT displays publisher links and how few users actually click on them.

By the numbers. ChatGPT shows links, but hardly anyone clicks on them. For one top-performing publisher, the OpenAI file reports:

  • 610,775 total link impressions
  • 4,238 total clicks
  • 0.69% overall CTR
  • Best individual page CTR: 1.68%
  • Most other pages: 0.01%, 0.1%, 0%
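Those toplines are internally consistent; a quick back-of-the-envelope check in Python, using the numbers reported above:

```python
impressions = 610_775
clicks = 4_238

ctr = clicks / impressions
print(f"{ctr:.2%}")  # 0.69% -- roughly 1 click per 144 link impressions
```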

ChatGPT metrics. The leaked file breaks down every place ChatGPT displays links and how users interact with them. It tracks:

  • Date range (date partition, report month, min/max report dates)
  • Publisher and URL details (publisher name, base URL, host, URL rank)
  • Impressions and clicks across:
    • Response
    • Sidebar
    • Citations
    • Search results
    • TL;DR
    • Fast navigation
  • CTR calculations for each display area
  • Total impressions and total clicks across all surfaces

Where the links appear. Interestingly, the most visible placements drive the fewest clicks. The document broke down performance by zone:

  • Main response: Huge impressions, tiny CTR
  • Sidebar and citations: Fewer impressions, higher CTR (6–10%)
  • Search results: Almost no impressions, zero clicks

Why we care. Hoping ChatGPT visibility might replace your lost Google organic search traffic? This data says no. AI-driven traffic is rising, but it’s still a sliver of overall traffic – and it’s unlikely to ever behave like traditional organic search traffic.

About the data. It was shared on LinkedIn by Vincent Terrasi, CTO and co-founder of Draft & Goal, which bills itself as “a multistep workflow to scale your content production.”


Microsoft Advertising adds AI-powered image animation to boost video creation


Microsoft Advertising is rolling out Image Animation, a new Copilot-powered feature that automatically converts static images into short, dynamic video assets — giving advertisers a faster path into video without traditional production.

How it works:

  • Copilot transforms existing static images into scroll-stopping animated video formats.
  • The tool extends the lifespan of strong image creatives by repurposing them for video placements across Microsoft’s global publisher network.
  • The feature is now in global pilot (excluding mainland China) and accessible through Ads Studio’s video templates.

Why we care. Video continues to dominate digital attention, with the average American now watching more than four hours of digital video per day. As video becomes essential in performance campaigns, advertisers need scalable ways to produce it — especially when budgets or resources are tight.

This update reduces production barriers, extends the value of top-performing images, and unlocks broader inventory across Microsoft’s premium video network.

Between the lines. For many advertisers, the biggest bottleneck to entering video isn’t strategy — it’s production. Microsoft is positioning Copilot as a creative multiplier, letting performance marketers upgrade image-based campaigns with lightweight, AI-generated motion.

The bottom line. Microsoft Advertising is leaning on its AI advances to close the gap between static creative and video demand, helping advertisers stay competitive as video consumption accelerates.


SEO vs. AI search: 101 questions that keep me up at night


Look, I get it. Every time a new search technology appears, we try to map it to what we already know.

  • When mobile search exploded, we called it “mobile SEO.”
  • When voice assistants arrived, we coined “voice search optimization” and told everyone it would be the next big thing.

I’ve been doing SEO for years.

I know how Google works – or at least I thought I did.

Then I started digging into how ChatGPT picks citations, how Perplexity ranks sources, and how Google’s AI Overviews select content.

I’m not here to declare that SEO is dead or to state that everything has changed. I’m here to share the questions that keep me up at night – questions that suggest we might be dealing with fundamentally different systems that require fundamentally different thinking.

The questions I can’t stop asking 

After months of analyzing AI search systems, documenting ChatGPT’s behavior, and reverse-engineering Perplexity’s ranking factors, these are the questions that challenge most of the things I thought I knew about search optimization.

When math stops making sense

I understand PageRank. I understand link equity. But when I discovered Reciprocal Rank Fusion in ChatGPT’s code, I realized I don’t understand this:

  • Why does RRF mathematically reward mediocre consistency over single-query excellence? Is ranking #4 across 10 queries really better than ranking #1 for one? (See the sketch after this list.)
  • How do vector embeddings determine semantic distance differently from keyword matching? Are we optimizing for meaning or words?
  • Why does temperature=0.7 create non-reproducible rankings? Should we test everything 10 times over now? (A toy sampling sketch follows below.)
  • How do cross-encoder rerankers evaluate query-document pairs differently than PageRank? Is real-time relevance replacing pre-computed authority?
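To make the first question concrete, here is a small sketch of the standard Reciprocal Rank Fusion formula (a document’s fused score is the sum of 1/(k + rank) across query lists), using the k=60 constant discussed below. The two pages and their ranks are hypothetical:

```python
# Reciprocal Rank Fusion: a document's fused score is the sum of
# 1 / (k + rank) over every query list where it appears.
K = 60  # the constant from the original RRF paper

def rrf_score(ranks: list[int], k: int = K) -> float:
    return sum(1 / (k + rank) for rank in ranks)

# Hypothetical page A: rank #4 on 10 fan-out query variants.
consistent = rrf_score([4] * 10)   # 10 * (1/64) = 0.15625
# Hypothetical page B: rank #1 on a single variant, absent elsewhere.
one_hit = rrf_score([1])           # 1/61 ≈ 0.0164

print(f"consistent mediocrity: {consistent:.4f}")  # 0.1562
print(f"single-query winner:   {one_hit:.4f}")     # 0.0164
```

With k=60, the denominators 61 and 64 barely differ, so top positions are flattened: showing up everywhere beats spiking once, by roughly 10x in this toy case.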

These are also SEO concepts. However, they appear to be entirely different mathematical frameworks within LLMs. Or are they?
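As for the temperature question, here is a toy illustration (not any vendor’s actual decoding code) of why sampling at temperature=0.7 is non-reproducible while temperature=0 collapses to a deterministic argmax. The sources and scores are invented:

```python
import math
import random

# Toy decoder: pick one "source" from relevance scores via a
# temperature-scaled softmax. Sources and scores are made up.
sources = ["site-a.com", "site-b.com", "site-c.com"]
scores = [2.1, 2.0, 1.4]

def sample(temperature: float) -> str:
    if temperature == 0:
        return sources[scores.index(max(scores))]  # deterministic argmax
    weights = [math.exp(s / temperature) for s in scores]
    return random.choices(sources, weights=weights, k=1)[0]

print([sample(0.0) for _ in range(5)])  # identical every run
print([sample(0.7) for _ in range(5)])  # varies within and between runs
```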

When scale becomes impossible

Google indexes trillions of pages. ChatGPT retrieves 38-65. This isn’t a small difference – it’s a 99.999% reduction, resulting in questions that haunt me:

  • Why do LLMs retrieve 38-65 results while Google indexes billions? Is this temporary or fundamental?
  • How do token limits establish rigid boundaries that don’t exist in traditional searches? When did search results become limited in size?
  • How does the k=60 constant in RRF create a mathematical ceiling for visibility? Is position 61 the new page 2?

Maybe they’re just current limitations. Or maybe, they represent a different information retrieval paradigm.

The 101 questions that haunt me:

  1. Is OpenAI also using CTR for citation rankings?
  2. Does AI read our page layout the way Google does, or only the text?
  3. Should we write short paragraphs to help AI chunk content better?
  4. Can scroll depth or mouse movement affect AI ranking signals?
  5. How do low bounce rates impact our chances of being cited?
  6. Can AI models use session patterns (like reading order) to rerank pages?
  7. How can a new brand be included in offline training data and become visible?
  8. How do you optimize a web/product page for a probabilistic system?
  9. Why are citations continuously changing?
  10. Should we run multiple tests to see the variance?
  11. Can we use long-form questions with the “blue links” on Google to find the exact answer?
  12. Are LLMs using the same reranking process?
  13. Is web_search a switch or a chance to trigger?
  14. Are we chasing ranks or citations?
  15. Is reranking fixed or stochastic?
  16. Are Google & LLMs using the same embedding model? If so, what’s the corpus difference?
  17. Which pages are most requested by LLMs and most visited by humans?
  18. Do we track drift after model updates?
  19. Why is EEAT easily manipulated in LLMs but not in Google’s traditional search?
  20. How many of us drove at least 10x traffic increases after Google’s algorithm leak?
  21. Why does the answer structure always change even when asking the same question within a day’s difference? (If there is no cache)
  22. Does post-click dwell on our site improve future inclusion?
  23. Does session memory bias citations toward earlier sources?
  24. Why are LLMs more biased than Google?
  25. Does offering a downloadable dataset make a claim more citeable?
  26. Why do we still have very outdated information in Turkish, even though we ask very up-to-date questions? (For example, when asking what’s the best e-commerce website in Turkiye, we still see brands from the late 2010s)
  27. How do vector embeddings determine semantic distance differently from keyword matching?
  28. Do we now find ourselves in need to understand the “temperature” value in LLMs?
  29. How can a small website appear inside ChatGPT or Perplexity answers?
  30. What happens if we optimize our entire website solely for LLMs?
  31. Can AI systems read/evaluate images in webpages instantly, or only the text around them?
  32. How can we track whether AI tools use our content?
  33. Can a single sentence from a blog post be quoted by an AI model?
  34. How can we ensure that AI understands what our company does?
  35. Why do some pages show up in Perplexity or ChatGPT, but not in Google?
  36. Does AI favor fresh pages over stable, older sources?
  37. How does AI re-rank pages once it has already fetched them?
  38. Can we train LLMs to remember our brand voice in their answers?
  39. Is there any way to make AI summaries link directly to our pages?
  40. Can we track when our content is quoted but not linked?
  41. How can we know which prompts or topics bring us more citations? What’s the volume?
  42. What would happen if we were to change our monthly client SEO reports by just renaming them to “AI Visibility AEO/GEO Report”?
  43. Is there a way to track how many times our brand is named in AI answers? (Like brand search volumes)
  44. Can we use Cloudflare logs to see if AI bots are visiting our site?
  45. Do schema changes result in measurable differences in AI mentions?
  46. Will AI agents remember our brand after their first visit?
  47. How can we make a local business with a map result more visible in LLMs?
  48. Will Google AI Overviews and ChatGPT web answers use the same signals?
  49. Can AI build a trust score for our domain over time?
  50. Why do we need to be visible in query fanouts? For multiple queries at the same time? Why is there synthetic answer generation by AI models/LLMs even when users are only asking a question?
  51. How often do AI systems refresh their understanding of our site? Do they also have search algorithm updates?
  52. Is the freshness signal sitewide or page-level for LLMs?
  53. Can form submissions or downloads act as quality signals?
  54. Are internal links making it easier for bots to move through our sites?
  55. How does the semantic relevance between our content and a prompt affect ranking?
  56. Can two very similar pages compete inside the same embedding cluster?
  57. Do internal links help strengthen a page’s ranking signals for AI?
  58. What makes a passage “high-confidence” during reranking?
  59. Does freshness outrank trust when signals conflict?
  60. How many rerank layers occur before the model picks its citations?
  61. Can a heavily cited paragraph lift the rest of the site’s trust score?
  62. Do model updates reset past re-ranking preferences, or do they retain some memory?
  63. Why can we find better results via the 10 blue links without any hallucination? (mostly)
  64. Which part of the system actually chooses the final citations?
  65. Do human feedback loops change how LLMs rank sources over time?
  66. When does an AI decide to search again mid-answer? Why do we see more/multiple automatic LLM searches during a single chat window?
  67. Does being cited once make it more likely for our brand to be cited again? If we rank in the top 10 on Google, we can remain visible while staying in the top 10. Is it the same with LLMs?
  68. Can frequent citations raise a domain’s retrieval priority automatically?
  69. Are user clicks on cited links stored as part of feedback signals?
  70. Are Google and LLMs using the same deduplication process?
  71. Can citation velocity (growth speed) be measured like link velocity in SEO?
  72. Will LLMs eventually build a permanent “citation graph” like Google’s link graph?
  73. Do LLMs connect brands that appear in similar topics or question clusters?
  74. How long does it take for repeated exposure to become persistent brand memory in LLMs?
  75. Why doesn’t Google show 404 links in results while LLMs do in answers?
  76. Why do LLMs fabricate citations while Google only links to existing URLs?
  77. Do LLM retraining cycles give us a reset chance after losing visibility?
  78. How do we build a recovery plan when AI models misinterpret information about us?
  79. Why do some LLMs cite us while others completely ignore us?
  80. Are ChatGPT and Perplexity using the same web data sources?
  81. Do OpenAI and Anthropic rank trust and freshness the same way?
  82. Are per-source limits (max citations per answer) different for LLMs?
  83. How can we determine if AI tools cite us following a change in our content?
  84. What’s the easiest way to track prompt-level visibility over time?
  85. How can we make sure LLMs assert our facts as facts?
  86. Does linking a video to the same topic page strengthen multi-format grounding?
  87. Can the same question suggest different brands to different users?
  88. Will LLMs remember previous interactions with our brand?
  89. Does past click behavior influence future LLM recommendations?
  90. How do retrieval and reasoning jointly decide which citation deserves attribution?
  91. Why do LLMs retrieve 38-65 results per search while Google indexes billions?
  92. How do cross-encoder rerankers evaluate query-document pairs differently than PageRank?
  93. Why can a site with zero backlinks outrank authority sites in LLM responses?
  94. How do token limits create hard boundaries that don’t exist in traditional search?
  95. Why does temperature setting in LLMs create non-deterministic rankings?
  96. Does OpenAI allocate a crawl budget for websites?
  97. How does Knowledge Graph entity recognition differ from LLM token embeddings?
  98. How does crawl-index-serve differ from retrieve-rerank-generate?
  99. How does temperature=0.7 create non-reproducible rankings?
  100. Why is a tokenizer important?
  101. How does knowledge cutoff create blind spots that real-time crawling doesn’t have?

When trust becomes probabilistic

This one really gets me. Google links to URLs that exist, whereas AI systems can completely make things up:

  • Why can LLMs fabricate citations while Google only links to existing URLs?
  • How does a 3-27% hallucination rate compare to Google’s 404 error rate?
  • Why do identical queries produce contradictory “facts” in AI but not in search indices?
  • Why do we still have outdated information in Turkish even though we ask up-to-date questions?

Are we optimizing for systems that might lie to users? How do we handle that?

Where this leaves us

I’m not saying AI search optimization/AEO/GEO is completely different from SEO. I’m just saying that I have 100+ questions that my SEO knowledge can’t answer well, yet.

Maybe you have the answers. Maybe nobody does (yet). But as of now, I don’t have the answers to these questions.

What I do know, however, is this: These questions aren’t going anywhere. And, there will be new ones.

The systems that generate these questions aren’t going anywhere either. We need to engage with them, test against them, and maybe – just maybe – develop new frameworks to understand them.

The winners in this new field won’t be those who have all the answers. They’ll be those asking the right questions and testing relentlessly to find out what works.

This article was originally published on metehan.ai (as 100+ Questions That Show AEO/GEO Is Different Than SEO) and is republished with permission.


Tim Berners-Lee warns AI may collapse the ad-funded web

Sir Tim Berners-Lee, who invented the World Wide Web, is worried that the ad-supported web will collapse due to AI. In a new interview with Nilay Patel on Decoder, Berners-Lee said:

  • “I do worry about the infrastructure of the web when it comes to the stack of all the flow of data, which is produced by people who make their money from advertising. If nobody is actually following through the links, if people are not using search engines, they’re not actually using their websites, then we lose that flow of ad revenue. That whole model crumbles. I do worry about that.”

Why we care. There is a split in our industry, where one side thinks “it’s just SEO” and the other sees a near future where visibility in AI platforms has replaced rankings, clicks, and traffic. We know SEO still isn’t dead and people are still using search engines, but the writing is still on the wall (Google execs have said as much in private). Berners-Lee seems to envision the same future, warning that if people stop following links and visiting websites, the entire web model “crumbles,” leaving AI platforms with value while the ad-supported web and SEO fade.

On monopolies. In the same interview, Berners-Lee said a centralized provider or monopoly isn’t good for the web:

  • “When you have a market and a network, then you end up with monopolies. That’s the way markets work.
  • “There was a time before Google Chrome was totally dominant, when there was a reasonable market for different browsers. Now Chrome is dominant.
  • “There was a time before Google Search came along, there were a number of search engines and so on, but now we have basically one search engine.
  • “We have basically one social network. We have basically one marketplace, which is a real problem for people.”

On the semantic web. Berners-Lee worked on the Semantic Web for decades (a web that machines can read as easily as humans). As for where it’s heading next: data by AI, for AI (and also people, but especially AI):

  • “The Semantic Web has succeeded to the extent that there’s the linked open data world of public databases of all kinds of things, about proteins, about geography, the OpenStreetMap, and so on. To a certain extent, the Semantic Web has succeeded in two ways: all of that, and because of Schema.org.
  • “Schema.org is this project of Google. If you have a website and you want it to be recognized by the search engine, then you put metadata in Semantic Web data, you put machine-readable data on your website. And then the Google search engine will build a mental model of your band or your music, whatever it is you’re selling.
  • “In those ways, with the link to the data group and product database, the Semantic Web has been a success. But then we never built the things that would extract semantic data from non-semantic data. Now AI will do that.
  • “Now we’ve got another wave of the Semantic Web with AI. You have a possibility where AIs use the Semantic Web to communicate between one and two possibilities and they communicate with each other. There is a web of data that is generated by AIs and used by AIs and used by people, but also mainly used by AIs.”
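For anyone who hasn’t used it, the Schema.org markup Berners-Lee describes is machine-readable JSON-LD embedded in the page. A minimal, hypothetical example, generated here in Python (all values are placeholders):

```python
import json

# Minimal Schema.org Organization markup: a machine-readable description
# of what a site is, for search engines (and increasingly AIs) to parse.
# All values below are placeholders.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co",
    "url": "https://www.example.com",
    "sameAs": ["https://www.linkedin.com/company/example-co"],
}

print(f'<script type="application/ld+json">{json.dumps(org, indent=2)}</script>')
```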

On blocking AI crawlers. Discussion turned to Cloudflare, its attempt to block crawlers, and its pay-per-crawl initiative. Berners-Lee was asked whether the web’s architecture could be redesigned so websites and database owners could bake a “not unless you pay me” rule into open standards, forcing AI crawlers and other clients across the ecosystem to honor payment requirements by default. His response:

  • “You could write the protocols. One, in fact, is micropayments. We’ve had micropayments projects in W3C every now and again over the decades. There have been projects at MIT, for example, for micropayments and so on. So, suddenly there’s a “payment required” error code in HTTP. The idea that people would pay for information on the web; that’s always been there. But of course whether you’re an AI crawler or whether you are an individual person, it’s the way you want to pay for things that’s going to be very different.”
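That “payment required” error code is real: HTTP has reserved status 402 (Payment Required) since the protocol’s early days. Here is a toy sketch of a server that uses it to gate AI crawlers; the user-agent list and behavior are illustrative, not any adopted standard:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy pay-per-crawl server: human visitors get the page, known AI crawler
# user agents get HTTP 402 Payment Required. Illustrative only.
AI_CRAWLERS = ("GPTBot", "ClaudeBot", "PerplexityBot")

class PayPerCrawlHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        agent = self.headers.get("User-Agent", "")
        if any(bot in agent for bot in AI_CRAWLERS):
            self.send_response(402)  # Payment Required
            self.end_headers()
            self.wfile.write(b"Payment required to crawl this content.\n")
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<h1>Hello, human reader</h1>\n")

if __name__ == "__main__":
    HTTPServer(("localhost", 8402), PayPerCrawlHandler).serve_forever()
```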

The interview. Sir Tim Berners-Lee doesn’t think AI will destroy the web


Google expands image search ads with mobile carousel format

Google rolled out AI-powered ad carousels in the Images tab on mobile, now appearing across all categories — not just shopping-related ones.

Why we care. Ads are now showing directly within image search results, giving brands a new, highly visual placement to grab attention where users are actively browsing and comparing. Because users often scan images to explore ideas or weigh options, these AI-powered carousels give brands a chance to influence discovery earlier in the journey.

The details:

  • The new format features horizontally scrollable carousels with images, headlines, and links.
  • These carousels are powered by AI-driven ad matching, pulling in visuals relevant to the user’s query — even in non-commerce categories like law or insurance.
  • The feature was first spotted by ADSQUIRE founder Anthony Higman, who shared screenshots of the new layout on X.

The big picture. By integrating ads more seamlessly into visual search, Google is blurring the line between organic and paid discovery, a continued shift toward immersive, image-based ad experiences that go beyond traditional text and product listings.


With negative review extortion scams on the rise, use Google’s report form

Google Business Profiles has a form where you can report negative review extortion scams; the form launched a month ago. You can find access to the form in this help document, and I believe you need to be logged into the Google account with access to the Business Profile you want to report.

Review extortion scams. These negative review extortion scams are on the rise and are a huge concern for local SEOs and businesses. A scammer will message you, likely over WhatsApp or email, and tell you that they left a one-star negative review and that the only way to remove it is to pay them.

Google wrote in its help document, “These scams may involve a sudden increase in 1-star and 2-star reviews on your Google Business Profile, followed by someone demanding money, goods, or services in exchange for removing the negative reviews.”

The form. The form can be accessed while logged into your Google account by clicking here. The form asks for your information, the affected Google Business Profile details, more details on the extortion review, and additional evidence.

Do not engage. Google posted four tips when you are confronted with these scams:

  • Do not engage with or pay the malicious individuals. This can encourage further attempts and doesn’t guarantee the removal of reviews.
  • Do not try to resolve it yourself by offering money or services.
  • Gather all evidence immediately. The sooner you collect proof, the better.
  • Report all relevant communication you receive in the form.

Give it a try. There are some who are doubtful that this form actually does anything. But one local SEO tried it out over the weekend and within a few days, the review in question was removed. So it is worth giving it a shot.

Why we care. Reviews on your local listing, especially on Google Maps and Google Search, can have a huge impact on your business. Negative reviews will encourage customers to look for other businesses, even if those reviews are fraudulent. So, being on top of your reviews and removing the fake and fraudulent reviews is an important task most businesses should do on an ongoing basis.

This form will help you manage some of those fake reviews.


Google tightens rules on fraud-linked phone numbers in ads


Google Ads is updating its Destination requirements policy to block phone numbers tied to fraud or prior policy violations, part of the company’s ongoing effort to curb deceptive advertising practices.

The timeline:

  • Policy update effective: December 10, 2025
  • Enforcement ramp-up: Over roughly 8 weeks after rollout

What’s changing. Phone numbers flagged as fraudulent or with a history of violations will now be deemed unacceptable under the Destination requirements policy, leading to ad disapprovals.

Why we care. The change targets bad actors who use legitimate-looking phone numbers to mislead users or bypass enforcement, a recurring issue in sectors like tech support scams and lead generation. It’s a reminder to audit contact information across campaigns and ensure all numbers are verified and legitimate. Failing to do so could disrupt ad delivery, delay approvals, and hurt campaign performance during the enforcement rollout.

For advertisers. Those impacted will receive disapproval notices and can refer to Google’s help center for guidance on fixing disapproved ads or assets.

First seen. This update was shared by ADSQUIRE founder Anthony Higman on X.

Between the lines. Google continues tightening ad verification and destination standards amid growing scrutiny over scams and consumer trust — showing that accountability for ad content now extends beyond just the landing page.
