SEO vs. AI search: 101 questions that keep me up at night


Look, I get it. Every time a new search technology appears, we try to map it to what we already know.

  • When mobile search exploded, we called it “mobile SEO.”
  • When voice assistants arrived, we coined “voice search optimization” and told everyone it would be the next big thing.

I’ve been doing SEO for years.

I know how Google works – or at least I thought I did.

Then I started digging into how ChatGPT picks citations, how Perplexity ranks sources, and how Google’s AI Overviews select content.

I’m not here to declare that SEO is dead or to state that everything has changed. I’m here to share the questions that keep me up at night – questions that suggest we might be dealing with fundamentally different systems that require fundamentally different thinking.

The questions I can’t stop asking 

After months of analyzing AI search systems, documenting ChatGPT’s behavior, and reverse-engineering Perplexity’s ranking factors, these are the questions that challenge most of the things I thought I knew about search optimization.

When math stops making sense

I understand PageRank. I understand link equity. But when I discovered Reciprocal Rank Fusion in ChatGPT’s code, I realized I don’t understand this:

  • Why does RRF mathematically reward mediocre consistency over single-query excellence? Is ranking #4 across 10 queries really better than ranking #1 for one?
  • How do vector embeddings determine semantic distance differently from keyword matching? Are we optimizing for meaning or words?
  • Why does temperature=0.7 create non-reproducible rankings? Should we test everything 10 times over now?
  • How do cross-encoder rerankers evaluate query-document pairs differently than PageRank? Is real-time relevance replacing pre-computed authority?

These are also SEO concepts. However, they appear to be entirely different mathematical frameworks within LLMs. Or are they?
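
Before going further, it helps to put numbers on that first question. Below is a minimal sketch of RRF scoring – the 1/(k + rank) formula with the conventional k = 60 default from the IR literature. That ChatGPT uses exactly these values is my assumption from what its code exposes, so treat the output as illustrative.

```python
# Minimal sketch of Reciprocal Rank Fusion (RRF) as described in the IR
# literature: score(doc) = sum over result lists of 1 / (k + rank).
# k=60 is the conventional default; that ChatGPT uses exactly these numbers
# is my assumption, so treat the output as illustrative.

def rrf_scores(result_lists, k=60):
    """Fuse several ranked lists (e.g., one per fan-out query) into one."""
    scores = {}
    for ranked_docs in result_lists:
        for rank, doc in enumerate(ranked_docs, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# "Mediocre consistency" vs. "single-query excellence":
k = 60
consistent = sum(1.0 / (k + 4) for _ in range(10))  # ranked #4 across 10 fan-out queries
one_hit = 1.0 / (k + 1)                             # ranked #1 in a single query
print(round(consistent, 4), round(one_hit, 4))      # ~0.1562 vs. ~0.0164
# The ceiling: position #1 only ever adds 1/61 per list, and position #61
# still adds 1/121 – roughly half as much, not zero.
```

Ten modest appearances comfortably out-score one perfect one, and because no single list can contribute more than 1/(k + 1), there is a hard cap on what any individual ranking does for you.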

When scale becomes impossible

Google indexes trillions of pages. ChatGPT retrieves 38-65 results per search. This isn’t a small difference – it’s a reduction of well over 99.999% – and it raises questions that haunt me:

  • Why do LLMs retrieve 38-65 results while Google indexes billions? Is this temporary or fundamental?
  • How do token limits establish rigid boundaries that don’t exist in traditional searches? When did search results become limited in size?
  • How does the k=60 constant in RRF create a mathematical ceiling for visibility? Is position 61 the new page 2?

Maybe they’re just current limitations. Or maybe they represent a different information retrieval paradigm.
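
And on the token-limit question, here is a toy sketch of why a context window behaves as a hard boundary rather than a page 2 you can scroll to. The 8,000-token budget and the four-characters-per-token estimate are assumptions for illustration only, not any vendor’s real numbers.

```python
# Toy illustration of a context window acting as a hard retrieval boundary.
# The 8,000-token budget and the ~4-characters-per-token estimate are
# assumptions for illustration, not any vendor's real limits.

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # crude heuristic; real systems use a tokenizer

def pack_passages(passages, budget_tokens=8000):
    """Greedily keep the highest-ranked passages that fit the budget."""
    kept, used = [], 0
    for passage in passages:  # assumed already sorted by rerank score
        cost = estimate_tokens(passage)
        if used + cost > budget_tokens:
            break  # hard boundary: everything below this rank is invisible to the model
        kept.append(passage)
        used += cost
    return kept

candidates = [f"passage {i}: " + "lorem ipsum " * 120 for i in range(200)]
print(len(pack_passages(candidates)))  # ~22 of 200 candidates survive; the rest never exist for the model
```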

The 101 questions that haunt me

  1. Is OpenAI also using CTR for citation rankings?
  2. Does AI read our page layout the way Google does, or only the text?
  3. Should we write short paragraphs to help AI chunk content better?
  4. Can scroll depth or mouse movement affect AI ranking signals?
  5. How do low bounce rates impact our chances of being cited?
  6. Can AI models use session patterns (like reading order) to rerank pages?
  7. How can a new brand be included in offline training data and become visible?
  8. How do you optimize a web/product page for a probabilistic system?
  9. Why are citations continuously changing?
  10. Should we run multiple tests to see the variance?
  11. Can we use long-form questions with the “blue links” on Google to find the exact answer?
  12. Are LLMs using the same reranking process?
  13. Is web_search a deterministic switch, or is it triggered probabilistically?
  14. Are we chasing ranks or citations?
  15. Is reranking fixed or stochastic?
  16. Are Google & LLMs using the same embedding model? If so, what’s the corpus difference?
  17. Which pages are most requested by LLMs and most visited by humans?
  18. Do we track drift after model updates?
  19. Why is E-E-A-T easily manipulated in LLMs but not in Google’s traditional search?
  20. How many of us drove at least 10x traffic increases after Google’s algorithm leak?
  21. Why does the answer structure always change even when asking the same question within a day’s difference? (If there is no cache)
  22. Does post-click dwell on our site improve future inclusion?
  23. Does session memory bias citations toward earlier sources?
  24. Why are LLMs more biased than Google?
  25. Does offering a downloadable dataset make a claim more citeable?
  26. Why do we still have very outdated information in Turkish, even though we ask very up-to-date questions? (For example, when asking what’s the best e-commerce website in Turkiye, we still see brands from the late 2010s)
  27. How do vector embeddings determine semantic distance differently from keyword matching?
  28. Do we now find ourselves in need to understand the “temperature” value in LLMs?
  29. How can a small website appear inside ChatGPT or Perplexity answers?
  30. What happens if we optimize our entire website solely for LLMs?
  31. Can AI systems read/evaluate images in webpages instantly, or only the text around them?
  32. How can we track whether AI tools use our content?
  33. Can a single sentence from a blog post be quoted by an AI model?
  34. How can we ensure that AI understands what our company does?
  35. Why do some pages show up in Perplexity or ChatGPT, but not in Google?
  36. Does AI favor fresh pages over stable, older sources?
  37. How does AI re-rank pages once it has already fetched them?
  38. Can we train LLMs to remember our brand voice in their answers?
  39. Is there any way to make AI summaries link directly to our pages?
  40. Can we track when our content is quoted but not linked?
  41. How can we know which prompts or topics bring us more citations? What’s the volume?
  42. What would happen if we were to change our monthly client SEO reports by just renaming them to “AI Visibility AEO/GEO Report”?
  43. Is there a way to track how many times our brand is named in AI answers? (Like brand search volumes)
  44. Can we use Cloudflare logs to see if AI bots are visiting our site? (See the log-parsing sketch after this list.)
  45. Do schema changes result in measurable differences in AI mentions?
  46. Will AI agents remember our brand after their first visit?
  47. How can we make a local business with a map result more visible in LLMs?
  48. Will Google AI Overviews and ChatGPT web answers use the same signals?
  49. Can AI build a trust score for our domain over time?
  50. Why do we need to be visible in query fan-outs – for multiple queries at the same time? Why is there synthetic answer generation by AI models/LLMs even when users are only asking a single question?
  51. How often do AI systems refresh their understanding of our site? Do they also have search algorithm updates?
  52. Is the freshness signal sitewide or page-level for LLMs?
  53. Can form submissions or downloads act as quality signals?
  54. Are internal links making it easier for bots to move through our sites?
  55. How does the semantic relevance between our content and a prompt affect ranking?
  56. Can two very similar pages compete inside the same embedding cluster?
  57. Do internal links help strengthen a page’s ranking signals for AI?
  58. What makes a passage “high-confidence” during reranking?
  59. Does freshness outrank trust when signals conflict?
  60. How many rerank layers occur before the model picks its citations?
  61. Can a heavily cited paragraph lift the rest of the site’s trust score?
  62. Do model updates reset past re-ranking preferences, or do they retain some memory?
  63. Why can we (mostly) find better results via the 10 blue links, without any hallucination?
  64. Which part of the system actually chooses the final citations?
  65. Do human feedback loops change how LLMs rank sources over time?
  66. When does an AI decide to search again mid-answer? Why do we see more/multiple automatic LLM searches during a single chat window?
  67. Does being cited once make it more likely for our brand to be cited again? On Google, ranking in the top 10 tends to keep us visible in the top 10. Is it the same with LLMs?
  68. Can frequent citations raise a domain’s retrieval priority automatically?
  69. Are user clicks on cited links stored as part of feedback signals?
  70. Are Google and LLMs using the same deduplication process?
  71. Can citation velocity (growth speed) be measured like link velocity in SEO?
  72. Will LLMs eventually build a permanent “citation graph” like Google’s link graph?
  73. Do LLMs connect brands that appear in similar topics or question clusters?
  74. How long does it take for repeated exposure to become persistent brand memory in LLMs?
  75. Why doesn’t Google show 404 links in results, but LLMs do in answers?
  76. Why do LLMs fabricate citations while Google only links to existing URLs?
  77. Do LLMs’ retraining cycles give us a reset chance after losing visibility?
  78. How do we build a recovery plan when AI models misinterpret information about us?
  79. Why do some LLMs cite us while others completely ignore us?
  80. Are ChatGPT and Perplexity using the same web data sources?
  81. Do OpenAI and Anthropic rank trust and freshness the same way?
  82. Are per-source limits (max citations per answer) different for LLMs?
  83. How can we determine if AI tools cite us following a change in our content?
  84. What’s the easiest way to track prompt-level visibility over time?
  85. How can we make sure LLMs assert our facts as facts?
  86. Does linking a video to the same topic page strengthen multi-format grounding?
  87. Can the same question suggest different brands to different users?
  88. Will LLMs remember previous interactions with our brand?
  89. Does past click behavior influence future LLM recommendations?
  90. How do retrieval and reasoning jointly decide which citation deserves attribution?
  91. Why do LLMs retrieve 38-65 results per search while Google indexes billions?
  92. How do cross-encoder rerankers evaluate query-document pairs differently than PageRank?
  93. Why can a site with zero backlinks outrank authority sites in LLM responses?
  94. How do token limits create hard boundaries that don’t exist in traditional search?
  95. Why does temperature setting in LLMs create non-deterministic rankings?
  96. Does OpenAI allocate a crawl budget for websites?
  97. How does Knowledge Graph entity recognition differ from LLM token embeddings?
  98. How does crawl-index-serve differ from retrieve-rerank-generate?
  99. How does temperature=0.7 create non-reproducible rankings?
  100. Why is a tokenizer important?
  101. How does knowledge cutoff create blind spots that real-time crawling doesn’t have?
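
Question 44 is one you can answer empirically today. Here is a minimal sketch of checking your own server or Cloudflare log exports for AI crawlers. The user-agent substrings are crawler names the vendors document publicly, but treat the exact list as an assumption to verify against each vendor’s current docs.

```python
# Minimal sketch: count hits from known AI crawlers in an access log
# (e.g., exported from your web server or Cloudflare).
# The user-agent substrings below are publicly documented crawler names
# as of this writing; verify the current list against each vendor's docs.
from collections import Counter

AI_BOTS = ["GPTBot", "OAI-SearchBot", "ChatGPT-User", "PerplexityBot", "ClaudeBot"]

def count_ai_bot_hits(log_path: str) -> Counter:
    hits = Counter()
    with open(log_path, encoding="utf-8", errors="ignore") as log:
        for line in log:
            for bot in AI_BOTS:
                if bot in line:
                    hits[bot] += 1
                    break
    return hits

if __name__ == "__main__":
    # "access.log" is a placeholder path for whatever log export you have.
    for bot, count in count_ai_bot_hits("access.log").most_common():
        print(f"{bot}: {count}")
```

Pair the counts with dates and you can also start probing question 51, at least for the retrieval crawlers.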

When trust becomes probabilistic

This one really gets me. Google links to URLs that exist, whereas AI systems can completely make things up:

  • Why can LLMs fabricate citations while Google only links to existing URLs?
  • How does a 3-27% hallucination rate compare to Google’s 404 error rate?
  • Why do identical queries produce contradictory “facts” in AI but not in search indices?
  • Why do we still have outdated information in Turkish even though we ask up-to-date questions?

Are we optimizing for systems that might lie to users? How do we handle that?

Where this leaves us

I’m not saying AI search optimization/AEO/GEO is completely different from SEO. I’m just saying that I have 100+ questions that my SEO knowledge can’t answer well, yet.

Maybe you have the answers. Maybe nobody does (yet). But as of now, I don’t have the answers to these questions.

What I do know, however, is this: These questions aren’t going anywhere. And there will be new ones.

The systems that generate these questions aren’t going anywhere either. We need to engage with them, test against them, and maybe – just maybe – develop new frameworks to understand them.

The winners in this new field won’t be those who have all the answers. They’ll be the ones asking the right questions and testing relentlessly to find out what works.

This article was originally published on metehan.ai (as 100+ Questions That Show AEO/GEO Is Different Than SEO) and is republished with permission.


Tim Berners-Lee warns AI may collapse the ad-funded web

Sir Tim Berners-Lee, who invented the World Wide Web, is worried that the ad-supported web will collapse due to AI. In a new interview with Nilay Patel on Decoder, Berners-Lee said:

  • “I do worry about the infrastructure of the web when it comes to the stack of all the flow of data, which is produced by people who make their money from advertising. If nobody is actually following through the links, if people are not using search engines, they’re not actually using their websites, then we lose that flow of ad revenue. That whole model crumbles. I do worry about that.”

Why we care. There is a split in our industry, where one side thinks “it’s just SEO” and the other sees a near future where visibility in AI platforms has replaced rankings, clicks, and traffic. We know SEO still isn’t dead and people are still using search engines, but the writing is still on the wall (Google execs have said as much in private). Berners-Lee seems to envision the same future, warning that if people stop following links and visiting websites, the entire web model “crumbles,” leaving AI platforms with value while the ad-supported web and SEO fade.

On monopolies. In the same interview, Berners-Lee said a centralized provider or monopoly isn’t good for the web:

  • “When you have a market and a network, then you end up with monopolies. That’s the way markets work.
  • “There was a time before Google Chrome was totally dominant, when there was a reasonable market for different browsers. Now Chrome is dominant.
  • “There was a time before Google Search came along, there were a number of search engines and so on, but now we have basically one search engine.
  • “We have basically one social network. We have basically one marketplace, which is a real problem for people.”

On the semantic web. Berners-Lee worked on the Semantic Web for decades (a web that machines can read as easily as humans). As for where it’s heading next: data by AI, for AI (and also people, but especially AI):

  • “The Semantic Web has succeeded to the extent that there’s the linked open data world of public databases of all kinds of things, about proteins, about geography, the OpenStreetMap, and so on. To a certain extent, the Semantic Web has succeeded in two ways: all of that, and because of Schema.org.
  • “Schema.org is this project of Google. If you have a website and you want it to be recognized by the search engine, then you put metadata in Semantic Web data, you put machine-readable data on your website. And then the Google search engine will build a mental model of your band or your music, whatever it is you’re selling.
  • “In those ways, with the link to the data group and product database, the Semantic Web has been a success. But then we never built the things that would extract semantic data from non-semantic data. Now AI will do that.
  • “Now we’ve got another wave of the Semantic Web with AI. You have a possibility where AIs use the Semantic Web to communicate between one and two possibilities and they communicate with each other. There is a web of data that is generated by AIs and used by AIs and used by people, but also mainly used by AIs.”

On blocking AI crawlers. Discussion turned to Cloudflare, its attempt to block crawlers, and its pay-per-crawl initiative. Berners-Lee was asked whether the web’s architecture could be redesigned so websites and database owners could bake a “not unless you pay me” rule into open standards, forcing AI crawlers and other clients across the ecosystem to honor payment requirements by default. His response:

  • “You could write the protocols. One, in fact, is micropayments. We’ve had micropayments projects in W3C every now and again over the decades. There have been projects at MIT, for example, for micropayments and so on. So, suddenly there’s a “payment required” error code in HTTP. The idea that people would pay for information on the web; that’s always been there. But of course whether you’re an AI crawler or whether you are an individual person, it’s the way you want to pay for things that’s going to be very different.”

The interview. Sir Tim Berners-Lee doesn’t think AI will destroy the web


Google expands image search ads with mobile carousel format

Google rolled out AI-powered ad carousels in the Images tab on mobile, now appearing across all categories — not just shopping-related ones.

Why we care. Ads are now showing directly within image search results, giving brands a new, highly visual placement to grab attention where users are actively browsing and comparing visuals. With users often browsing images to explore ideas or compare options, these AI-powered carousels give brands a chance to influence discovery earlier in the journey.

The details:

  • The new format features horizontally scrollable carousels with images, headlines, and links.
  • These carousels are powered by AI-driven ad matching, pulling in visuals relevant to the user’s query — even in non-commerce categories like law or insurance.
  • The feature was first spotted by ADSQUIRE founder Anthony Higman, who shared screenshots of the new layout on X.

The big picture. By integrating ads more seamlessly into visual search, Google is blurring the line between organic and paid discovery – a continued shift toward immersive, image-based ad experiences that go beyond traditional text and product listings.


With negative review extortion scams on the rise, use Google’s report form

Google Business Profiles has a form where you can report negative review extortion scams; the form launched a month ago. You can find access to the form in this help document, and I believe you need to be logged into your Google account with access to the Business Profile you want to report.

Review extortion scams. These negative review extortion scams are on the rise and are a huge concern for local SEOs and businesses. A scammer will message you, likely over WhatsApp or email, and tell you that they left a one-star negative review and that the only way to remove it is to pay them.

Google wrote in its help document, “These scams may involve a sudden increase in 1-star and 2-star reviews on your Google Business Profile, followed by someone demanding money, goods, or services in exchange for removing the negative reviews.”

The form. The form can be accessed while logged into your Google account by clicking here. The form asks for your information, the affected Google Business Profile details, more details on the extortion review, and additional evidence.

Do not engage. Google posted four tips when you are confronted with these scams:

  • Do not engage with or pay the malicious individuals. This can encourage further attempts and doesn’t guarantee the removal of reviews.
  • Do not try to resolve it yourself by offering money or services.
  • Gather all evidence immediately. The sooner you collect proof, the better.
  • Report all relevant communication you receive in the form.

Give it a try. There are some who are doubtful that this form actually does anything. But one local SEO tried it out over the weekend and within a few days, the review in question was removed. So it is worth giving it a shot.

Why we care. Reviews on your local listing, especially on Google Maps and Google Search, can have a huge impact on your business. Negative reviews will encourage customers to look for other businesses, even if those reviews are fraudulent. So, being on top of your reviews and removing the fake and fraudulent reviews is an important task most businesses should do on an ongoing basis.

This form will help you manage some of those fake reviews.


Google tightens rules on fraud-linked phone numbers in ads


Google Ads is updating its Destination requirements policy to block phone numbers tied to fraud or prior policy violations, part of the company’s ongoing effort to curb deceptive advertising practices.

The timeline:

  • Policy update effective: December 10, 2025
  • Enforcement ramp-up: Over roughly 8 weeks after rollout

What’s changing. Phone numbers flagged as fraudulent or with a history of violations will now be deemed unacceptable under the Destination requirements policy, leading to ad disapprovals.

Why we care. The change targets bad actors who use legitimate-looking phone numbers to mislead users or bypass enforcement, a recurring issue in sectors like tech support scams and lead generation. It’s a reminder to audit contact information across campaigns and ensure all numbers are verified and legitimate. Failing to do so could disrupt ad delivery, delay approvals, and hurt campaign performance during the enforcement rollout.

For advertisers. Those impacted will receive disapproval notices and can refer to Google’s help center for guidance on fixing disapproved ads or assets.

First seen. This update was shared by ADSQUIRE founder Anthony Higman on X.

Between the lines. Google continues tightening ad verification and destination standards amid growing scrutiny over scams and consumer trust — showing that accountability for ad content now extends beyond just the landing page.


Why AI availability is the new battleground for brands


GEO, AI SEO, AEO – call it what you like.

The label doesn’t matter nearly as much as understanding the shift behind it.

At the center of that shift lies one idea that explains everything: AI availability – and here’s why it matters.

What is AI availability?

The three pillars of brand availability

The idea of AI availability comes from Byron Sharp, research professor at the Ehrenberg-Bass Institute, who introduced it in a comment on one of my LinkedIn posts.

Sharp’s work underpins modern brand science and shows that growth depends on availability.

Brands grow through sales, and sales grow through two kinds of availability: mental and physical.

  • Mental availability refers to the likelihood of being considered in a purchasing situation.
  • Physical availability refers to the ease and convenience with which an item can be bought.

For years, these two principles have guided brand strategy.

They explain why Coca-Cola invests in constant visibility and why Amazon makes every click lead to a checkout.

But in the era of generative search, there’s now a third kind of availability marketers need to understand – the likelihood that your brand or product will be recommended by an AI system when a user is ready to buy.

That is AI availability – and it changes everything.

AI as the new influencer

If you are still thinking of AI as a technology, you are already behind.

Think of it instead as the world’s most powerful influencer.

ChatGPT alone is used by about 10% of the global adult population, according to recent research from OpenAI, Harvard, and Duke. 

That makes it far more pervasive than any social media platform at a similar stage in its life cycle.

Most people do not use it to code or write poetry – they use it to make decisions. 

Nearly 80% of ChatGPT conversations, the same study found, fall into three categories: 

  • Practical guidance.
  • Seeking information.
  • Writing.

In other words, people are asking AI to help them decide what to do, buy, and believe. 

The study also shows that these conversations are increasingly focused on everyday decisions rather than work. 

The distinction between search, research, and conversation is collapsing.

Source: “How People Use ChatGPT,” OpenAI, Harvard University, and Duke University

The result is simple.

AI systems are now the gatekeepers of modern discovery. They decide what information to surface and which businesses appear in front of consumers.

Forget the Kardashians. Forget influencer marketing.

If you’re invisible to AI, you’re invisible to the market.

AI is the new influencer.

From keywords to fitness signals

The SEO industry has spent two decades optimizing for how humans search with keywords – but that is changing.

Large language models (LLMs) infer meaning from context, probability, and performance.

They are scanning for what we can call fitness signals – a term from network science.

Fitness describes a product or service’s inherent ability to outcompete rivals, allowing one business to dominate a market even if others started earlier or invested more.

Think of how Google overtook Yahoo. 

It wasn’t just about better search algorithms – it was a better business model built on a stronger performance attribute: relevance.

These performance attributes are what make a business fit for survival. They are the qualities that define how well you solve a problem for a customer.

AI deploys search strategies to identify which businesses solve which problems most effectively. 

Because it exists to serve human needs, those same signals determine your AI availability.

Yes, AI uses search strings, fan-out queries, and reciprocal rank fusion, among many other strategies and tactics. 

It doesn’t search like humans because it isn’t bound by the same cognitive and speed limitations.

Humans search by “satisficing.” Keywords + Page 1 rankings = good enough.

Machines operate on an industrial scale – searching, gathering, assessing, and recommending.

Dig deeper: Fame engineering: The key to generative engine optimization

The psychology of performance

To understand why this works, we turn to evolutionary psychology.

Geoffrey Miller, author of “Spent,” explained that humans have always been driven by two fundamental needs. 

  • We seek to display fitness indicators that enhance our status.
  • We chase fitness cues that increase our chances of survival or pleasure.

Consumer products have evolved to meet those needs. Luxury goods signal success. 

Convenience products signal control. Both deliver psychological reassurance.

AI works in a similar way. Its goal is to satisfy human intent. 

When someone types a complex prompt into an LLM, the AI interprets it not as a string of keywords but as a statement of need. 

It then searches its training data and live information to find the most relevant and trustworthy performance attributes that match that need.

That is why context matters so much more than content. 

You are no longer competing for blue links – you are competing for cognitive inclusion in an AI’s mental model of your category. 

Your job is to make your brand’s fitness and performance attributes unmissable to that model.



Category entry points and the new SEO

Category entry points are the situations, needs, and triggers that put someone in the market to buy.

In the world of GEO, these are your new keywords.

They are what users express in prompts rather than in search terms. 

“Where can I find sustainable running shoes for flat feet?” is not a keyword query – it is a buying situation.

Your strategy is to:

  • Understand those buying situations.
  • Map them to your own performance attributes.
  • Create enough context that AI can confidently associate your brand with the solution.

That means describing not only what you do, but how you do it, who you do it for, and why you are distinctive.

This isn’t new. It’s the same foundational brand positioning marketers have always needed.

What’s changed is that it now feeds the world’s most sophisticated recommender system.

Dig deeper: AI search is booming, but SEO is still not dead

A local example: The sandwich shop in Stoke

Imagine a small sandwich shop in Stoke. It’s not glamorous, serving sausage sandwiches, bacon rolls, and coffee. 

The owners don’t want to be influencers. They just want customers.

How does a business like this make itself visible to AI?

Turn everyday details into data signals

The first step is to make its performance attributes explicit.

  • What ingredients are used?
  • Where do they come from?
  • What makes the sandwiches good value?
  • How long has the business served the local community?
  • Where is it located?
  • What is the hygiene rating?

All these details are small signals of trust and quality. 

A strong website should describe them in clear, human language. 

Every piece of information tells AI that:

  • This business exists.
  • It serves specific needs.
  • It performs well in doing so.

Build reputation where AI listens

Next, build local reputation. 

  • Encourage reviews on Google, TripAdvisor, and social media. 
  • Invite local bloggers to taste and review the food. 
  • Issue a press release about an anniversary or charity event. 

Every third-party mention adds more mutual information between your brand and the market – and that’s what AI learns from.

GEO is where good brand marketing meets intelligent technology.

Embrace both SEO and GEO

And for the “GEO is just SEO” crowd, yes, ranking on Google and in the local pack might be the best bet for increasing AI availability for this shop. 

However, it might also be hosting a relaunch event and inviting 30 local bloggers and press members to secure coverage.

Both are valid tactics with multiple benefits – and you can do both.

Until Google decides what it’s doing with the 10 blue links and AI Mode, bothism is the best plan – SEO and GEO, not just one.

From PR to performance

Larger businesses apply the same logic at scale. The recent wave of acquisitions in the SEO and analytics sector is a testament to this. 

These are deliberate attempts to control information ecosystems.

Owning media outlets, communities, and data platforms increases a company’s visibility in the information that AIs learn from. 

It creates an abundance of references that confirm expertise, authoritativeness, and relevance.

In traditional SEO, this is referred to as off-page optimization. 

In GEO, it is strategic distribution – where performance attributes and PR meet.

Your goal is to describe what you do, while making sure others also describe it.

Fame, distinctive assets, and consistency still matter. But the audience is no longer just human.

Dig deeper: AI search relies on brand-controlled sources, not Reddit: Report

Building AI availability

To make your brand visible to machines that now mediate discovery, you need to understand how and where that visibility is built.

Start with a visibility audit

Diagnose your current presence. 

Identify the category entry points most relevant to your products, and ask what prompts a user might type when they are ready to buy. 

Tools such as Semrush’s AI Enterprise platform can simulate these scenarios and show where your brand appears.
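
If you want to prototype this before committing to a tool, a rough DIY version is sketched below using the OpenAI Python SDK. The model name, prompts, brand, and the plain substring check for mentions are all placeholders and assumptions; a real audit would run far more repetitions and track citations, not just brand mentions.

```python
# Rough DIY sketch of a prompt-level visibility audit.
# Assumptions: the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the model name, brand, and prompts are illustrative placeholders, and a
# plain substring match is a crude stand-in for real mention/citation detection.
from openai import OpenAI

client = OpenAI()

BRAND = "Acme Running Co"  # hypothetical brand
CATEGORY_ENTRY_PROMPTS = [
    "Where can I find sustainable running shoes for flat feet?",
    "Best running shoe brands for beginners in the UK?",
    "What should I look for in a durable trail running shoe?",
]

def brand_mention_rate(prompt: str, runs: int = 3) -> float:
    """Return the share of runs in which the brand shows up in the answer."""
    mentions = 0
    for _ in range(runs):  # repeat: answers are probabilistic, not fixed
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content or ""
        mentions += BRAND.lower() in answer.lower()
    return mentions / runs

for prompt in CATEGORY_ENTRY_PROMPTS:
    print(f"{brand_mention_rate(prompt):.0%}  {prompt}")
```

Run something like this on a schedule and you get a crude trend line for AI availability at the prompt level.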

Get listed where AI looks

Identify the sources that AI models reference. 

Many LLMs use a mix of training data and live search, with listicles, directories, and “best of” articles among the most common data sources.

Being included in those lists is a sensible marketing strategy. 

Just as supermarkets stock their own shelves with their best products, you should position your brand among the best available options.

Expand your owned ecosystem

Over time, you’ll find saturation points where every competitor appears in the same lists. 

At that stage, innovation and owned media become essential. 

Start your own publication, commission original research, and contribute to conversations in your category.

Create context that earns recommendations

Digital shelf space isn’t the problem. Credible context amplifies your fitness signals.

Efficient, data-led, and creative, this is GEO’s manufactured style. But its success depends entirely on having a brand worth recommending. 

That’s why GEO is the outcome of proper marketing. 

Still, it’s proper marketing with a specific focus: increasing the likelihood of being recommended by AI.

The future of visibility

SEO has always been about optimization. 

GEO is about promotion – building and distributing enough credible, distinctive information about your business that an AI can recognize it as a trusted source.

The techniques look familiar: PR, branding, copywriting, partnerships, directories, and reviews. 

The difference lies in intent. You’re not feeding a search engine – you’re training an intelligence.

This requires a new mindset. 

  • You’re no longer optimizing for human users who type short queries into Google. You’re optimizing for a probabilistic model that interprets human intent across millions of contexts. 
  • It doesn’t care about your title tags. It cares about whether you look like the right answer to a real problem.

GEO is both exciting and humbling. 

It reconnects brand marketing and search after years of false division, and reminds us that while the tools evolve, the fundamentals endure.

You still need to be known, available, and distinctive. 

And now your audience includes machines that think like humans but learn on their own terms.

Back to fundamentals, forward with AI

GEO is a return to marketing fundamentals seen through a new lens. 

Businesses still grow by increasing availability. 

Consumers still buy from the brands they notice and can easily access. 

What has changed is the mediator: AI has become the primary distributor of attention.

Your task as a marketer is to make your brand’s performance attributes, category entry points, and distinctive assets visible in the data that AI consumes. 

The goal hasn’t changed – to be chosen. Only the mechanics are new.

Because in the age of AI, the only brands that matter are the ones the machines remember.


7 local SEO wins you get from keyword-rich Google reviews


Keywords in reviews are generally believed to help local rankings, although their impact is still actively debated within the local SEO community.

Regardless of where the truth on ranking impact ultimately lands, keyword-rich reviews can still provide meaningful value for local SEO beyond pure rankings.

Below are seven reasons why you should still encourage keyword-rich reviews.

1. Review justifications

If your reviews consistently mention a keyword related to your business, the likelihood that your Profile will get a Review justification in search increases.

This visibility can boost click-through rates. Higher engagement may lead to a secondary improvement in search engine rankings.

Plumbing Google review justifications

2. Place Topics

Google creates clickable Place Topics from keywords in your reviews. These topics:

  • Highlight your specialties.
  • Filter reviews for customers.
  • Can boost your Profile’s engagement.
Google place topics

3. Review snippets

Google bolds frequently mentioned terms in three review snippets on the Business Profile. This draws users searching for those terms to your Profile, hopefully increasing click-through rates.

Google review snippets

4. Menu Highlights (restaurants)

The Menu Highlights are generated from customer reviews and photos, similar to Place Topics.

Maestro Pasta menu

Recent analysis from Claudia Tomina showed that:

  • The menu highlights section impacts rankings.
  • Keywords in reviews impact the Menu Highlights section.
  • Therefore, when you get a menu highlight for a term mentioned in your reviews, you should rank better for that term.

5. AI editorial summaries

Google’s AI-generated business summaries pull concepts from reviews (e.g., “cozy”) to describe your business.

While Google’s AI summaries aren’t something you can edit, encouraging customers to include specific keywords in their reviews could influence the AI to emphasize aspects most beneficial to your business.

Basta Pesta AI summary

6. AI review summaries

Google’s AI generates review summaries by analyzing common sentiments and tips from customer feedback.

If your customers mention the right keywords in their reviews, your review summary will appear more compelling.

Google AI review summaries

7. Ask Maps about this place feature

Google is phasing out the old Q&A section and replacing it with an AI-powered feature that pulls answers from customer reviews.

This means reviews with detailed info (and the right keywords) are more valuable than ever.

Skyway Roofing Ask Maps about this place

How do you get keywords in your reviews?

It does not make sense to directly ask your customers, “Can you please add [keyword] to your review?” It’s unnatural and weird and will leave the customer wondering what your deal is.

But that doesn’t mean you have no options.

To encourage customers to naturally include relevant keywords in their reviews, begin by upgrading your review request templates.

Miriam Ellis recently wrote a helpful guide all about how to get keyword-rich reviews, which also includes three review request templates to make it extra easy for every business owner.

These templates guide customers on what to say, encouraging longer, more detailed, keyword-rich reviews — and can even prompt them to add photos to their reviews.

Here are three of those templates:

Scenario 1: Requesting reviews of specific products

Hi [customer name],
I’m [your name and job title] from [company name], and I’m writing to check in with you on your purchase of [product]. It’s my job to be sure you’re satisfied, and I wondered if you would be willing to provide your feedback in a review at [link]? 
I’m enclosing a photo of [product] for your use in your review if you don’t have your own photo, and I’d be so grateful if you could review your experience with:
– The features of this product that stand out most to you
– What you like or dislike about it
– How you’ve been using the product since you purchased it
If there’s anything we could have done better for you, please feel free to contact us directly at [phone number or feedback form link]. I want to be sure you’re fully satisfied and we’re so grateful for your business. Thank you very much if you can take the time to tell us about your personal experience in your review.
[review us here link or button]
Sincerely,
[name, job title, business]

Scenario 2: Requesting reviews of specific services

Hello [customer name],
This is [your name and job title] from [company name], and we were so happy to [service provided]. It’s my job to be sure you’re satisfied, and I wondered if you would be willing to provide your feedback in a review at [link]? 
I’m enclosing a photo of [the service that was provided] for your use in your review if you don’t have your own photo, and I’d be so grateful if you could review your experience with:
– Whether the service met your expectations
– What you like/dislike about the service
– How we did with our customer service
If there’s anything we could have done better for you, please feel free to contact us directly at [phone number or feedback form link]. I want to be sure you’re fully satisfied, and we’re so grateful for your business. Thank you very much if you can take the time to tell us about your personal experience in your review.
[review us here link or button]
Sincerely,
[name, job title, business]

Scenario 3: Requesting reviews when you’re not sure what a customer purchased

Email template
Hello [customer name],
Thank you for being our customer. I’m [your name and job title] from [company name]. It’s my job to be sure you’re satisfied, and I wondered if you would be willing to provide your feedback in a review at [link]?
I’m enclosing a photo of [the business premises] for your use in your review if you don’t have your own photo, and I’d love it if you could review:
– Whether you found our customer service helpful
– What you like/dislike about our store
– Why you chose our store
If there’s anything we could have done better for you, please feel free to contact us directly at [phone number or feedback form link]. I want to be sure you’re fully satisfied and we’re so grateful for your business. Thank you very much if you can take the time to tell us about your personal experience in your review.
[review us here link or button]
Sincerely,
[name, job title, business]

Now, make it work for you

By implementing a few simple improvements in your review requests, you will receive more detailed reviews from your customers, and their enhanced feedback will provide numerous benefits.

You may even increase your Google rankings for additional keywords, but I can’t guarantee anything. With all the other benefits, rankings shouldn’t be your primary goal anyway.


Google adds asset-level reporting to display campaigns


Google is rolling out asset-level reporting for Display campaigns, giving advertisers a clearer view of how individual creative assets perform — a move that brings Display closer to the visibility already seen in Performance Max campaigns.

Why we care. Until now, Display campaign insights have been limited to overall ad performance. With this update, advertisers can analyze results at the asset level — images, headlines, descriptions — to pinpoint what’s driving engagement and what’s not.

How it works. A new Assets tab in Google Ads will let users:

  • Compare performance of each creative asset.
  • View when assets were last updated to track iteration history.
  • Decide which assets to keep, refresh, or remove based on data.

The details. A new Google support page, “About asset reporting in Display,” outlines the update with links to:

  • Get started
  • How it works
  • Asset reporting for your Display campaigns
  • Evaluating asset performance

Between the lines. This upgrade mirrors reporting tools available in Performance Max, signaling Google’s continued effort to unify insights across campaign types and improve transparency in automated advertising.

What’s next. The feature hasn’t been spotted live yet, but its appearance on Google’s help center — first noticed by PPC News Feed founder Hana Kobzová — suggests a wider rollout is imminent.



Google’s new AI tool touts creating optimized content in a scalable way

A recent Google blog post announced the expansion of Opal, a Google tool that uses AI to help people create mini apps, and touted that the tool can be used to create “optimized” content in a “scalable way.” Many SEOs are asking if this is against Google search guidelines, specifically the scaled content abuse policy.

What Google wrote. Google wrote on the Google blog about reasons one should use Opal:

  • “Creators and marketers have also quickly adopted Opal to help them create custom content in a consistent, scalable way.”
  • “Marketing asset generators: Tools that take a single product concept and instantly generate optimized blog posts, social media captions and video ad scripts.”

Scaled content abuse policy. Meanwhile, the scaled content abuse policy states:

“Scaled content abuse is when many pages are generated for the primary purpose of manipulating search rankings and not helping users. This abusive practice is typically focused on creating large amounts of unoriginal content that provides little to no value to users, no matter how it’s created.”

The examples Google provided include:

“Using generative AI tools or other similar tools to generate many pages without adding value for users.”

Is this against Google’s policies? So the big question is whether what Google promoted on its blog as a reason to use Opal is actually against Google’s own policies. Google can argue that as long as your “primary purpose” is not “manipulating search rankings” and the content helps users, then it is fine to use Opal or any other AI tool.

In fact, Reddit talked about how it was using AI tools to translate its pages at scale, and it turned out Google was okay with it.

SEOs not happy. Many SEOs feel this is a double standard and think Google should take a strong stance on using AI to generate content; plenty of complaints from the community followed.

Why we care. Everyone is talking about “AI slop” and how it can ruin the web. When it comes to Google Search, Google has said it has algorithms to reward content that is helpful to users and that AI is not necessarily a bad thing.

Ultimately, if you are going to be using an AI tool, like Opal, to help you create content, you should use it as a tool and let it help you but don’t let it do it for you, fully automated, without oversight and at incredible scale.

Be careful with these tools.

I should note, we reached out to Google for a statement; the company’s response is included below.


Google statement. Google sent me the following statement:

This Google Labs experiment helps people develop mini-apps, and we’re seeing people create apps that help them brainstorm narratives and first drafts of marketing content to build upon. In Search, our systems aim to surface original content and our spam policies are focused on fighting content that is designed to manipulate Search while offering little value to users.


The reign of forums: How AI made conversation king


A year and a half ago, I wrote “The rise of forums: Why Google prefers them and how to adapt,” arguing that brands should build their own online forums and communities.

Let’s look at what’s happened since.

  • As of this writing, Reddit’s stock price has risen 177.6%. If you’d bought 100 shares of RDDT then, you’d be $13,113 richer today.
Reddit's stock price
  • In a June 2025 analysis of 150,000 AI citations, Semrush found that Reddit was the top source, appearing in more than 40% of LLM responses.
Top domains cited on LLMs per Semrush

So what happened? It comes down to the law of supply and demand. 

The supply-and-demand crisis of online answers

The demand for answers has skyrocketed as people increasingly turn to LLMs.

ChatGPT, Perplexity, Gemini, and Grok will try to come up with the answers from their training data, and failing that, they’ll search the web.

ChatGPT uses Bing, Gemini uses Google, and Claude, Grok, and Perplexity use their own internal search engines.

The web search engine will quickly find that the supply of long-tail answers is nonexistent. 

And so it will surface the closest thing it can find: a Reddit thread that matches the keywords, but could very well have been written by a novice, an armchair expert, or a troll.

Whose fault is it that the web is devoid of meaningful long-tail content?

Ultimately, it was Google’s. 

Even the best SEO professionals among us were told by our clients and bosses that nothing mattered except for the One Ring – getting ranked at the top for a competitive head term. 

We all started to write the same blog posts to try to grab that top spot, while the vast long tail went ignored.

The irony is that if your brand has any kind of expertise or authority in your space, you always could – and still can – completely own the undiscovered country of the long-tail of search for your industry, a frontier of questions no brand has yet answered.

The advantages of user-generated content

The best way to do this – by far – is through user-generated content (UGC), which has several key characteristics:

  • It matches search intent: Users post the same way they search, using the same words.
  • It’s always up-to-date: New posts keep topics current without constant editorial work.
  • It’s accurate: Assuming your brand can attract experienced experts who contribute, each new reply will add value or correction. 
  • It builds semantic depth: Conversations naturally surface related terms, subtopics, and entities that boost SEO and LLM discovery.
  • It’s trustworthy and AI-proof: Authentic human discussion is the one thing that LLMs can’t replicate.

If this all sounds familiar to you, it’s the same old E-E-A-T that Google has been trying to get us to do for years.

Only now, it really counts. 

Why brands hesitate

Most companies instinctively resist the idea of launching a forum. 

Here are the objections I hear most often – and how I respond.

  • It’s too expensive: Ironically, forum and Q&A software is among the most mature software in the open-source world. You can literally have a production-ready system up and running in a week at a cost less than a few cups of coffee. I’ll share some examples below. 
  • We don’t have the development resources: If you’re not familiar with the concept of open-source, you don’t need development resources other than for tasks like skinning and building single sign-on, which your developers can do in their sleep. 
  • We tried it before, and it didn’t work: In most cases, this is because forums were treated as side projects, and not owned media.
  • There’s no clear ROI: Forums have always reduced support tickets, but because it’s hard to prove a negative, most companies treated both online and offline customer service as cost centers – and the first things to cut. Today, forums still lower service costs and add valuable, search-friendly content. It’s time to redo the math.
  • Moderation is too much of a hassle: Today’s spam filters, coupled with smart heuristics, enforced policies, and AI-supported moderation, can handle 90% of bad actors. A strong community of users and in-house moderators can easily handle the rest. 
  • Everyone’s already on Reddit or Discord: Exactly. And those platforms own your audience, your brand, and your data. It’s time to take it back.
  • Forums are outdated: Reddit is a forum. It has a market cap of $38 billion. Time to re-do the math on that one, too. 

Discussion boards vs. Q&A sites

I tend to use the phrase “forums” interchangeably to refer to two kinds of sites: discussion boards and Q&A sites.

There are key differences, depending on your company’s goals.

A discussion board is built for ongoing conversation. 

It’s a social space where customers can connect, share experiences, swap ideas, and engage in the occasional friendly debate, like an always-on company event or conference.

A Q&A site, by contrast, is built for resolution. Each post centers on a single question from a community member. 

Some brands limit responses to verified experts, while others invite the whole community to contribute and vote on the best answer. 

The goal is clarity: one question, one accepted solution.

Both formats create a treasure trove of owned, uniquely human content. 

While other companies rely on generative AI to churn out soulless copy, with the help of your community, you’ll be building fresh content that feeds AI and, more importantly, reaches real customers. 

As derivative AI-generated content floods the web, that authentic human signal will become a huge competitive edge.



The open-source path to ownership

While many enterprise and SaaS options exist, most businesses can start with open-source software – ideal for small, mid-sized, or cost-conscious enterprises.

Here’s why open source makes sense.

Open source software is free

Every software package I recommend below will be free. 

All you need is a web server or hosting plan (your own infrastructure, a cloud provider, or even a managed host), and you can run it yourself.

Open source software is customizable 

Most mature open-source platforms enable brands to easily customize and extend functionality through plug-ins and extensions – all with a fraction of the development effort required to build a system from scratch.

Instead of building a huge system from scratch, your team can focus on customization, such as: 

  • Customizing the front-end design to match your brand website.
  • Using single sign-on with your existing customer database to make access seamless for your customers (see the DiscourseConnect sketch after this list). 
  • Adding reputation and gamification systems, such as upvotes, leaderboards, and badges, to promote the most credible voices.
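
On the single sign-on point above, here is a rough sketch of the DiscourseConnect (Discourse SSO) handshake from your site’s side, based on Discourse’s published protocol. It’s simplified – verify field names and the redirect flow against the current DiscourseConnect docs before shipping anything.

```python
# Rough sketch of the DiscourseConnect (Discourse SSO) handshake on your side.
# Simplified and based on Discourse's published docs; verify field names and
# the exact flow against the current DiscourseConnect documentation before use.
import base64
import hashlib
import hmac
from urllib.parse import parse_qs, urlencode

SSO_SECRET = b"shared-secret-from-discourse-settings"  # placeholder

def verify_incoming(sso: str, sig: str) -> dict:
    """Check the signature Discourse sent; sso is the URL-decoded query param."""
    expected = hmac.new(SSO_SECRET, sso.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        raise ValueError("bad DiscourseConnect signature")
    return {k: v[0] for k, v in parse_qs(base64.b64decode(sso).decode()).items()}

def build_response(nonce: str, user: dict) -> dict:
    """Sign the user record from your own customer database and send it back."""
    payload = base64.b64encode(urlencode({
        "nonce": nonce,
        "external_id": user["id"],
        "email": user["email"],
        "username": user["username"],
    }).encode()).decode()
    sig = hmac.new(SSO_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"sso": payload, "sig": sig}
```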

You own your own data

When you self-host your forum, you own the data and can export it at any time, with no dependencies on third-party platforms or APIs. 

This is increasingly important as we enter an era where unique content is literally an asset. 

SEO and LLM visibility

Most mature forum and Q&A software have SEO best practices built in, from automatic title tags to best internal linking practices that make it easy for search engines and AI bots to discover content. 
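
One thing worth verifying is whether your platform emits Q&A structured data. Here is a minimal sketch of generating QAPage JSON-LD for an answered thread; the field set follows schema.org’s QAPage/Question/Answer types, and the thread values are placeholders – most mature platforms already output something equivalent, so check before bolting it on yourself.

```python
# Minimal sketch: generate QAPage JSON-LD for an answered forum question.
# Field names follow schema.org's QAPage / Question / Answer types; the
# thread data and URL are placeholders.
import json

def qa_page_jsonld(question, accepted_answer, url):
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "QAPage",
        "mainEntity": {
            "@type": "Question",
            "name": question["title"],
            "text": question["body"],
            "answerCount": question["answer_count"],
            "upvoteCount": question["upvotes"],
            "acceptedAnswer": {
                "@type": "Answer",
                "text": accepted_answer["body"],
                "upvoteCount": accepted_answer["upvotes"],
                "url": url + "#accepted-answer",
            },
        },
    }, indent=2)

print(qa_page_jsonld(
    {"title": "How do I reset the X200 thermostat?", "body": "Full question text…",
     "answer_count": 3, "upvotes": 12},
    {"body": "Hold the reset button for 10 seconds…", "upvotes": 9},
    "https://community.example.com/t/reset-x200-thermostat",
))
```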

Moderation tools

Active moderation is crucial to the success of online communities. 

Choosing the right discussion board software

After extensive research, my go-to recommendations for discussion boards are Flarum and Discourse.

I like Flarum for its sleek, minimalist interface and Reddit-like familiarity. 

Built on PHP with Laravel components, it’s fast, lightweight, and highly extensible, supported by an active developer community. 

It’s ideal for small to mid-sized businesses, startups, and niche communities.

Flarum

Discourse is the gold standard for modern forums, built on Ruby on Rails and Ember.js. 

It offers robust features out of the box, including SSO, analytics, trust levels, and a powerful API, plus a paid option for fully managed deployments. 

Used by major brands like OpenAI, Samsung, and Shopify, it’s ideal for larger organizations, SaaS companies, and professional communities.

Discourse

Honorable mention goes to NodeBB and phpBB, older platforms that require a bit more care and feeding, but also have their advantages. 

Platforms built for Q&A

My go-tos here include Apache Answer and Question2Answer. 

Apache Answer is a modern, actively supported platform from the Apache Software Foundation, with a solid pedigree. 

Built on Go and Vue.js, it offers a full feature set – voting, accepted answers, categories, and a Reddit-style reputation system.

Apache Answer

Question2Answer, first released in 2010 and still actively maintained, is inspired by Stack Overflow, offering features such as voting and tagging. 

Its out-of-the-box interface looks dated, but a good designer can easily modernize it. It’s built in PHP.

Question2Answer

AskBot and Scoold are also worth exploring.

Test them out. They all have links to a demo and real-world client implementations on their sites. 

Find one you like. Pay $50 for a shared web hosting service, and another $50 for pizza for engineers and developers. 

You’ll have a fully functional forum within a week. 

Where most forums succeed – or fail

Unlike most software projects, building a discussion board or Q&A site is relatively straightforward. 

But it’s maintaining and running it that will determine whether it’ll be successful.

I’ve been fortunate enough to have launched, managed, and moderated several successful discussion forums and Q&A sites over the years. 

Here’s some practical advice.

Have a zero tolerance for spam

I mentioned this in my previous article; it’s the number one reason forums fail. 

The moment you launch a discussion board, it will be attacked. 

Fortunately, tools like Akismet, StopForumSpam, CleanTalk, and reCAPTCHA can block most spam before it reaches your site. 

You can even run your server logs through an LLM to generate smart filtering rules for your CDN. 

And if anything slips through, remove it fast – spam spreads apathy faster than any troll.

With Q&A sites, you’ll have a bit more control, depending on how many of the questions and answers you’d like to open up to the public. 

Require detailed and authentic titles 

This is another Achilles’ heel of many forums. 

Discussion boards often have non-descript titles, such as “Help!” or “Need Advice!” You’ll also want to have a zero-tolerance policy toward those. 

Have instructional copy that reminds users to write detailed titles, and if any slip through the cracks, either generate a title for them or reject the post.

Similarly, for Q&A sites, your titles must reflect actual questions that users ask in their own language, not the words of a marketer or other internal voice. 

Seed popular topics

To understand the questions people are asking, review:

  • Your on-site search data.
  • Google Search Console data.
  • Customer service inquiries.
  • External sites like Reddit. 

Post them to the discussion board from a moderator account, provide high-quality answers, and invite comments. 

As long as you’re authentic and transparent, users will respond.
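
For the Search Console piece of that research, here is a rough sketch that pulls question-style queries via the Search Analytics API. It assumes you’ve already set up OAuth credentials for the API; the property URL, date range, and question-word list are placeholders to adapt.

```python
# Sketch: mine question-style queries from Google Search Console to seed
# forum topics. Assumes google-api-python-client is installed and `creds`
# holds authorized OAuth credentials for the Search Console API; the
# property URL, date range, and question words are placeholders.
from googleapiclient.discovery import build

QUESTION_WORDS = ("how", "what", "why", "can", "does", "is", "should", "which")

def question_queries(creds, site_url="https://www.example.com/"):
    service = build("searchconsole", "v1", credentials=creds)
    response = service.searchanalytics().query(
        siteUrl=site_url,
        body={
            "startDate": "2025-01-01",
            "endDate": "2025-03-31",
            "dimensions": ["query"],
            "rowLimit": 5000,
        },
    ).execute()
    rows = response.get("rows", [])
    # Keep queries that start with a question word, sorted by impressions.
    return sorted(
        (
            (row["keys"][0], row.get("impressions", 0))
            for row in rows
            if row["keys"][0].split()[0] in QUESTION_WORDS
        ),
        key=lambda pair: pair[1],
        reverse=True,
    )
```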

Establish clear, public community guidelines

Set rules and boundaries clearly up-front and display them prominently. 

Keep them short enough that real users will read them, ideally 5-7 bullet points. 

Some thought starters:

  • Linking policy: Generally, you’ll want to allow only accounts that have been vetted or passed certain criteria to be able to post links.
  • Reinforce tone: “Disagree without being disagreeable”
  • Rules against harassment and bad language.
  • Rules against off-topic posts.

Establish clear categories

Define categories and tags clearly. 

Take a large pool of typical questions or discussion topics and categorize them. (Hint: Use your favorite LLM to help.)

Ensure that category names are immediately intuitive to users. Move or delete off-topic content quickly. 

Empower trusted regulars

Over time, many forums start to attract regular visitors. 

If this happens to your brand, tap into their passion by inviting them to take on small moderation privileges (e.g., editing titles, retagging, or flagging spam). 

Depending on your relationship with these fans, you can incentivize them with recognition, branded merchandise, free product, or monetary compensation. 

Community self-correction scales far better than centralized policing.

Gamify contributions for everyone with leaderboards, badges, upvote milestones, etc. 

Archive or merge duplicates

Especially in Q&A boards, you’ll want to make sure to avoid repeating questions. 

That causes duplicate content issues for SEO, but worse, it can frustrate visitors. 

Own the conversation before your competitors do

There are plenty more ways to run a successful discussion board or Q&A site. 

But the most important rule is this: don’t treat it as an SEO tactic, an LLM feeder, or a necessary evil. 

Build a destination you and your team would actually want to visit – a place for lively conversation, useful knowledge, and genuine connection with your customers and fans. 

That’s the real formula for success.

A year ago, I suggested that you start a forum. This year, it’s not optional. 

Reddit has proven that conversation has real value, and your competitors will soon catch on. 

Claim the conversations that belong to your brand, and you’ll:

  • Delight customers.
  • Strengthen your reputation.
  • Drive conversions.
  • Become the authority AI learns from – and trusts.
