TikTok limits posts to five hashtags


TikTok is capping hashtags at five per post, a shift some users have recently noticed through in-app notifications.

Details. TikTok hasn’t formally announced the update. A Reddit user said a TikTok notification explained the change is aimed at:

  • Reducing hashtag clutter,
  • Discouraging spammy usage,
  • Improving discovery relevance.

TikTok is the latest social platform to sideline hashtags:

  • X dropped hashtags from ads.
  • Meta’s Threads limits posts to one topic tag, while Instagram is testing a five-hashtag cap.
  • LinkedIn has de-emphasized them.

Why we care. Hashtags have long been used to boost reach, but platforms are dialing them back as algorithms rely more on engagement signals – and as spammy, irrelevant tags clutter feeds. This could improve relevance and reduce spammy competition, but it also raises the stakes for picking the right hashtags to ensure campaigns still surface in discovery.

The big picture. For creators, the change means quality over quantity. Picking the most relevant hashtags matters more than piling on extras. TikTok’s Trends dashboard can help surface the tags most likely to drive discovery.


Winning the local SEO game in the age of AI by Edna Chavira

Search Engine Land live event-- Save your spot!

Google’s AI results are changing everything about how local businesses get discovered—and reviews are now at the center of it all. They shape visibility, build trust, and, when leveraged effectively, drive conversions.

In this live webinar, GatherUp VP of Marketing Mél Attia and renowned Local SEO expert Miriam Ellis will share never-before-seen research findings on how AI and consumer behavior are reshaping local SEO. You’ll discover:

  • How Google’s AI-powered results are prioritizing local businesses
  • What consumers really care about when evaluating businesses
  • Why reputation and reviews are the ranking lever most agencies underutilize
  • New consumer data, benchmarks, and tactical frameworks to boost your clients’ results

Whether you’re helping clients gain visibility, prove trustworthiness, or turn reviews into revenue, this session will equip your agency with actionable insights—and a narrative that makes review strategy impossible to ignore. You can save your seat here!


Want to win at local SEO? Focus on reviews and customer sentiment


Search is changing fast. This year, we’ve seen more instances of search engine results sharing space with AI-powered features that are changing how people find information.

Along with the changes to how search engines display information, we’re also seeing users explore new methods to search for information. Google AI Mode, Gemini, ChatGPT, Perplexity – many large language models (LLMs) are capturing users’ attention and providing new ways for people to discover and make decisions about your brand online.

Customer sentiment, shown through reviews and ratings, is becoming a key part of both local and branded search.

For brands looking to stay ahead, focusing on sentiment, review ratings, and authority signals will be key. These are the items that not only affect rankings but also impact what shows up in search snippets and LLM responses.

LLMs like Google’s AI Mode are pulling together and highlighting customer sentiment within their responses when asked about specific brands or for geo-modified search queries (think “home repair near me”).

For businesses, paying attention to their review strategy and reputation will be key to standing apart in local results, overall organic visibility, and showing up favorably in AI responses. However, even with these changes, many of the tried-and-true best practices that have helped brands succeed in local search in the past still apply. 

Searches with local intent: Google’s AI Mode

When it comes to local search, “near me” queries continue to be highly important. In traditional search, these typically trigger a Local Pack followed by organic blue links.

In Google’s AI Mode, the experience is similar. Users are shown a list of local businesses, often with short descriptions, star ratings, and review summaries.

The links cited are usually citation platforms like Yelp or TripAdvisor, business websites, or publications, and it’s common to find Google Business Profile place cards. Clicking these opens the familiar Google Business Profile interface, keeping users within the Google ecosystem.

[Screenshot: “running store near me” results in Google AI Mode]

What does this mean for businesses aiming to capture visibility in AI-driven local search results? Many of the foundations of local SEO still apply.

  • NAP consistency: Ensure your business name, address, and phone number (NAP) are accurate and consistent across all listings (a structured data sketch follows this list).
  • Citations: Maintain listings on trusted third-party sites like Yelp, TripAdvisor, and local directories to help reinforce credibility.
  • Google Business Profile optimization: Fully complete and regularly update your profile with accurate info, photos, business hours, and relevant categories.
  • Reviews: Generate and respond to reviews to build trust and signal relevance to both users and search engines.
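
One way to reinforce the NAP consistency point above on your own site is LocalBusiness structured data, so search engines and LLMs can read the same name, address, and phone number that appear in your listings. A minimal, illustrative sketch – the business details are placeholders, and schema.org markup is one option rather than a recommendation from this article:

```python
import json

# Hypothetical business details – use the exact NAP that appears on Yelp, TripAdvisor,
# your Google Business Profile, and every other listing.
local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Running Store",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Nashville",
        "addressRegion": "TN",
        "postalCode": "37203",
        "addressCountry": "US",
    },
    "telephone": "+1-615-555-0100",
    "url": "https://www.example-running-store.com",
}

# Print the JSON-LD snippet to paste into the site's <head>.
print('<script type="application/ld+json">')
print(json.dumps(local_business, indent=2))
print("</script>")
```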

Branded search results for local businesses

When searching for a local business using branded terms in AI Mode, it’s common to see many of the same elements and data sources as traditional search. These business overviews often include a description of the company, the products or services offered, and customer sentiment.

Often, the customer sentiment section summarizes review data pulled from multiple sources, such as TripAdvisor, Yelp, industry-specific sites such as Apartments.com, and Google Business Profile.

Rachel's Ginger Beer Google AI Mode reviews

What’s unique about AI Mode is that it provides unbiased summaries of pros and cons about a business based directly on available customer reviews, which can come directly from Google Business Profile or be a mix of review data from trusted online sources. These clear overviews include overall sentiment and often link to the business profiles.

AI Mode isn’t the first time Google has experimented with review summaries.

Some industries, like restaurants, already have “Review Summaries” in organic search results. These generative AI summaries highlight Google Business Profile review data, usually with a more positive tone, alongside the star rating and list of reviews.

Taziki's Mediterranean Cafe Google review summaries

The importance of reviews

Reviews shape how your brand appears online, whether they are displayed front and center on your Google Business Profile or surfaced as snippets in responses from LLMs. Google’s AI Mode, ChatGPT, and Perplexity all returned some information or mention of customer reviews when searching for local businesses, especially for branded queries. 

Von Elrod's Beer Garden Perplexity reviews

These responses emphasize how both positive and negative offline experiences can influence what is said about your brand online and the importance of customer perception, especially when those experiences get highlighted for customers who may be discovering your brand for the first time. 

[Screenshot: ChatGPT describing reviews and the vibe at West46]

Businesses need to pay attention to reviews, if not across all platforms, then at least on Google Business Profile. Review data is being pulled into AI-driven results and also plays a role in local search visibility.

Google is placing more emphasis on reviews. In July, Google updated its documentation on local search rankings, with the most notable change found in the Prominence section:

“Prominence means how well-known a business is. Prominent places are more likely to show up in search results. This factor’s also based on info like how many websites link to your business and how many reviews you have. More reviews and positive ratings can help your business’s local ranking.”

How can businesses adapt?

By following the tactics local businesses should already be using to succeed in local search:

  •  Focus on generating new, recent reviews.
  •  Respond to both positive and negative reviews.
  •  Read reviews to understand the strengths and weaknesses of your business. Seeing a trend in negative reviews? That could indicate it’s time to make some changes and address those weaknesses.
  • Monitor brand mentions not just for backlinks but also to understand what people are saying about your business online, including community forums, social media platforms, and online publications.

In addition to traditional review sites, platforms like Reddit, TikTok, and Quora are showing up more frequently in branded and local search results. These conversations are also being picked up and summarized in tools like Perplexity and ChatGPT. That means the things people are saying about your business in comment threads or short-form videos can influence how your brand is being represented across both organic and AI-powered results.

What else can be done:

  • Look closely at how your business is perceived online and do the same for your competitors.
  • Compare your review count and average star rating to those of businesses showing up alongside you in the Local Pack. How does your business stack up?
  • Check how AI tools like LLMs or Google’s AI Mode describe your competitors during branded searches and identify where they source that information.
  • Try asking AI tools to compare your business and a competitor. The way these tools summarize differences can give insight into strengths, weaknesses, and areas where you may need to improve to stay competitive in the market.

LLM data sources

LLMs pull from a range of online sources to build summaries about businesses. For local and branded search queries, much of the information they use closely mirrors what shows up in traditional organic search results. This includes data from:

  • Google Business Profiles.
  • Third-party review sites.
  • Official business websites.
  • Wikipedia.
  • Online directories and aggregators.
  • News articles.
  • Public conversations on forums or social media.

LLMs don’t use the same ranking algorithm as Google Search, but they rely on much of the same publicly available information.

Why this matters:

  • The efforts businesses make to improve local SEO, such as maintaining accurate listings, collecting reviews, and building authority, also help shape how their brand is represented in AI-generated search results.
  • Reinforces the importance of managing your presence across multiple platforms and staying aware of where your brand is mentioned.
  • Highlights trusted third-party sites where your business may be listed but not actively managed. These listings still influence visibility and should not be overlooked.
  • Identifies which platforms are trusted within your specific industry, revealing opportunities to strengthen your presence on niche or vertical-specific sites.

Managing reputation at scale for multi-location businesses

For multi-location and microbrand businesses, managing sentiment at the local level adds another layer of complexity. It is not just about how the overall brand is perceived, but how each location appears in search results. This is especially important for industries like senior living, apartment communities, and healthcare, where customer experience and trust are crucial in decision-making. 

A few negative reviews tied to a single location can shape perception across the board. That is why reputation strategies need to scale while still staying localized. Each location needs a clear plan to monitor feedback, respond to reviews, and build a strong presence in both traditional and AI-powered search results.

Core local SEO principles remain

Search is evolving fast, and we can expect more LLMs and AI-powered features to continue to shape how information is delivered to users.

Customer sentiment and brand perception are now more important in shaping how a business appears online, whether it’s in traditional organic search results or another platform.

Why?

Because perception matters, both online and in real life. Tools like Google’s AI Mode, Perplexity, Gemini, and ChatGPT are putting reviews, ratings, and sentiment summaries front and center, making customer feedback more visible than ever. 

Now is the time for brands to take a close look at how they appear in LLMs, understand the feedback being surfaced, and identify areas to improve. Doing this not only helps with visibility in AI-driven search but also strengthens your local market presence.

As part of a broader brand reputation and visibility strategy, it’s essential to regularly monitor how your business is showing up in both traditional and AI-powered search results. That includes checking branded SERP features like AI Overviews, People Also Ask, video carousels, and social content pull-ins. These elements shift often, and staying aware of what’s being surfaced helps inform both SEO and reputation efforts. 

You don’t need to reinvent the wheel. To keep up with the changing search landscape, you just need to focus your efforts in the right direction.


Google taps large language models to cut invalid ad traffic by 40%


Google is deploying large language models (LLMs) from its Ad Traffic Quality team, Google Research, and DeepMind to better detect and block invalid traffic – ad activity from non-human or uninterested sources – across its platforms.

Why we care. Invalid traffic drains advertiser budgets, skews publisher revenue, and undermines trust in the digital ad ecosystem. Google’s upgraded defenses aim to identify problematic ad placements more precisely, reducing policy-violating behaviors before they impact campaigns. This would mean fewer wasted impressions, better targeting accuracy, and stronger protection for their budgets.

By the numbers. Google said there was a 40% reduction in invalid traffic tied to deceptive or disruptive ad serving practices. This is due to faster detection of risky placements, which is accomplished in real time by analyzing app/web content, ad placements, and user interactions.

Between the lines: Google already runs extensive automated and manual checks to ensure advertisers aren’t billed for invalid traffic. However, the LLM-powered approach could be a bigger leap in speed and accuracy and could make deceptive ad strategies far harder to profit from.


Perplexity makes $34.5 billion bid to buy Google’s Chrome browser


AI search startup Perplexity today made an unsolicited $34.5 billion all-cash offer to buy Google’s Chrome browser, nearly double its own $18 billion valuation.

  • Perplexity told The Wall Street Journal that multiple large venture capital funds have agreed to finance the deal.
  • Google hasn’t indicated any willingness to sell Chrome.

Driving the news. Google’s parent company, Alphabet, is appealing a 2024 court ruling that it illegally monopolized search.

  • The bid comes as U.S. District Judge Amit Mehta considers whether to force Google to divest Chrome to restore competition in search.
  • Chrome has about 3.5 billion users and more than 60% global browser market share.
  • Perplexity said it would keep Chromium open source, invest $3 billion over two years, and maintain Google as Chrome’s default search engine (though users could change it).

Why we care. Chrome is one of the most powerful gateways to search. Chrome gives Google a massive data advantage, which helps shape everything from ad targeting to SERP features. A new owner could upend default search deals, disrupt traffic patterns, and rewrite the rules for how audiences are tracked, targeted, and monetized.

Yes, but. Analysts say the sale is unlikely. In the meantime, Perplexity will grab some attention and make some headlines.

The big picture. Perplexity recently launched a new browser, Comet. Perplexity believes browsers are strategic control points for the next era of agentic search and online advertising.


Gender exclusions spotted in Google Performance Max campaigns


Advertisers spotted a new beta feature in Google’s Performance Max (PMax) campaigns that allows gender-based audience exclusions – giving marketers more granular control over targeting. It was first announced last week as part of the Google Ads API v21.

Why we care. The gender exclusion option could help brands tailor messaging, product feeds, and creative for different audiences, potentially improving ROAS and conversion rates.

How it could be used:

  • Separate campaigns for men’s and women’s products.
  • More relevant ad copy and creatives per audience.
  • Focused product feeds for higher shopping ad relevance.

Bottom line. If you have access to a Google Ads rep, now’s the time to ask to be added to this beta. Early movers could capture performance gains before rivals know the feature exists.

First seen. This update was first seen by Aleksejus Podpruginas, senior Google Ads campaigns specialist at Teleperformance.


Stop paying the Google tax and lower your CPCs by Edna Chavira

Search Engine Land live event-- Save your spot!

Many search marketers are unknowingly paying a “Google Tax”—overspending on branded keywords even when there’s no competition, due to a flaw in auction dynamics that causes them to bid against themselves.

In Stop Paying the Google Tax–Start Winning Paid Search, Jenn Paterson and John Beresford of BrandPilot AI will break down what they call the Uncontested Paid Search Problem and show you exactly how to detect and eliminate it. You’ll learn why uncontested keywords can still trigger inflated CPCs, how to spot when you’re paying too much for clicks you already own, and proven tactics to stop the waste and improve your paid search ROI.

You’ll take away:

  • Why uncontested keywords can still drive up CPCs
  • How to tell when you’re bidding against yourself
  • The true cost of the “Google Tax” on your brand campaigns
  • Strategies to cut waste and boost ROI

If you’re serious about paid search performance, it’s time to stop overpaying and make every click count. Save your spot here.


Google Preferred Sources rolling out in US and India

After several weeks of testing, Google is rolling out the Preferred Sources feature in the US and India. This feature lets searchers specify which sites they want to see in the Top Stories section of Google Search.

Google announced this feature is now graduating from its Search Labs beta, specifically in the US and India. Google added that it “is designed to give people more control over their Search experience, by enabling them to select the sites they want to see more of in Top Stories, whether that is a favorite blog or their local news outlet.”

How it works. This is currently only available in English in the U.S. and India.

Click the star icon to the right of the Top Stories header in the search results. After you click it, you will have the option to select your preferred sources – provided a site is publishing fresh content.

Google will then start to show you more of the latest updates from your selected sites in Top Stories “when they have new articles or posts that are relevant to your search,” Google added.

The company also noted that “people really value being able to select a range of sources — with over half of users choosing four or more.”

Labs users. If you’ve previously signed up in Labs, your selections will automatically apply and you’ll continue to see more of those sites within Top Stories. You can always change those selections at any time.

Publishers resource. Google also added more details on this in the publisher resource section.

Why we care. Top Stories can send significant traffic to publishers, so being selected as a preferred source can be a great way to capture more of that traffic.

You may want to find an acceptable way to encourage your loyal visitors to select your site as a preferred source.


The $1 trillion generative economy that smart SEOs will own

From SEO to GEO

Search is changing. The industry as we know it will radically alter in almost all aspects as we enter the “generative economy.”

By 2034, the generative AI market is expected to be worth roughly $1 trillion.

This article outlines how SEO professionals can own the generative economy and why they must embrace the change that is coming.

No, it’s not ‘just’ SEO

While there are many cross-over skills, GEO isn’t just SEO.

SEO works on the premise of ranking on Page 1 for the keyword variations that potential customers type into search engines.

It matters because, for the last decade, the best place to hide a body has been on Google’s Page 2.

Humans don’t scroll past Page 1, because it is highly inefficient to do so.

LLMs and AI-powered search platforms don’t have this problem.

They can visit hundreds of websites in seconds for a variety of search terms and use their internal data.

Ranking does not matter in this world.

You can be on Page 5 of a web search and still get found and chosen by the LLMs.

Search engines organize the world’s information, and they do this exceptionally well.

Humans, however, are terrible at searching.

And this is among the largest differences between SEO and GEO.

Humans are being replaced in this process.

In SEO, businesses have been taught to target keywords that drive the largest potential commercial match.

But that doesn’t equal the best customers.

This is because keywords have been the only way for humans to find what they want online, which means the broadest keywords carry the largest commercial volume.

And the long tail tended only to be a few words long for much the same reason.

Businesses only went after lucrative terms to win the most commercial traffic.

Or if deemed “too difficult,” SEO was turned into a blog channel, targeting non-commercial terms.

AI-powered search changes this.

It is easier than ever for consumers to find the products or services that best match their needs.

Ranking on Page 1 is no longer the goal. So what is?

How to win the generative game

AI search is vastly better than human search, so it will likely become the dominant form of search online.

Organic search will not disappear completely. It does a perfect job of surfacing businesses through direct or navigational search.

However, for situations where a chosen product or business is not known, AI-powered search will be able to find the best match – and fast.

It’s easy to see why this is so valuable for businesses.

  • A business can find the consumers it seeks to serve more easily.
  • A business will convert customers faster.
  • A business will be able to identify and indeed expand into new markets where customers are currently being either ignored or poorly serviced by existing providers.

Often, these customers have been ignored because a brand’s profit margins are insufficient in these sectors.

AI-powered search changes this.

It will allow more businesses to activate organic search as a driver of revenue.

This is because GEO has a different value proposition.

Dig deeper: The new SEO imperative: Building your brand

The value proposition of GEO compared to SEO

GEO revolves directly around three core positions.

  • The customers a brand is trying to target.
  • The products and services it sells.
  • How the value it offers is differentiated from competitors.

SEO revolves around keywords.

We have openly observed the “SEOing” of webpages to find ways to force keywords into things. 

More recently, this can be seen in the resurgence of exact match domains ranking.

On top of that, SEO’s value proposition has largely been: “Buy now, rank six months later.”

GEO is different. It is optimizing a business around its strategic brand positioning.

Suppose a business wants to set up as a broker of car insurance for female first-time drivers. 

GEO and AI-powered search will allow the business to find more of the customers it seeks to serve.

Something that can be evidenced today with a quick search comparing Google and ChatGPT.

[Screenshot: Google results for “female first time driver insurance”]

The ChatGPT search gives me exactly what I asked for, whereas a similar keyword-focused search mainly returns the large insurance comparison sites.

[Screenshot: ChatGPT results for a young driver insurance query]

AI-powered search can help humans find businesses that closely match their needs – opening up untapped business and consumer opportunities.

Dig deeper: How AI is reshaping SEO: Challenges, opportunities, and brand strategies for 2025

How GEO works – and why SEOs are best placed to help

Generative engine optimization is all about understanding the information AI search platforms need and supplying this information.

And what they need is “mutual information.”

Machines hate ambiguity.

GEO is about supplying enough information about a brand’s positioning so that, when someone uses an LLM to find a solution to a problem the business solves, the likelihood of that LLM referencing the business increases.

Say you need an employment solicitor offering free online consultations.

Traditional SEO targets a keyword like “free online advice for employee rights.”

An LLM instead (see the toy sketch after this list):

  • Breaks down your request.
  • Searches across multiple queries.
  • Weighs everything from case studies to testimonials before recommending a firm.
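
To make that process concrete, here’s a toy, conceptual sketch of query fan-out – the corpus, sub-queries, and scoring below are all made up for illustration and are not any vendor’s actual pipeline:

```python
# A toy, conceptual sketch of "query fan-out" – not any vendor's actual pipeline.
corpus = {
    "https://firm-a.example/employment-law": (
        "Employment solicitors. Free online consultations. We serve the whole of the UK. "
        "Case studies: sexual discrimination wins."
    ),
    "https://firm-b.example/services": "Family law and conveyancing. Offices in Leeds.",
}

request = "employment solicitor offering free online consultations in the UK"

# 1. The model breaks the request down into several sub-queries.
sub_queries = [
    "employment solicitor free consultation",
    "online employment law advice UK",
    "employment solicitor case studies",
]

# 2. Each sub-query retrieves candidate pages (here: naive keyword matching).
def retrieve(query: str) -> set[str]:
    terms = query.lower().split()
    return {url for url, text in corpus.items() if any(t in text.lower() for t in terms)}

candidates: set[str] = set()
for q in sub_queries:
    candidates |= retrieve(q)

# 3. The model weighs how much each candidate actually covers the request before recommending.
def coverage(url: str) -> float:
    text = corpus[url].lower()
    terms = request.lower().split()
    return sum(t in text for t in terms) / len(terms)

best = max(candidates, key=coverage)
print(best, f"{coverage(best):.0%} of request terms covered")
```

The point is simply that one request becomes many queries, and the winner is whichever page covers the request most completely – not whichever page ranks #1 for a single keyword.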

At a technical level, you can see how this works: you need to supply enough clear, unambiguous information about your positioning for the LLM to match you to queries like this.

However, most businesses are considerably underoptimized for LLMs.

In our example, you might offer a free consultation on employment law.

But leaving out other elements – such as explicitly stating that you service the whole of the UK, that you specialize in sexual discrimination cases, or that you have case studies on your site detailing your wins – might exclude you from being returned in a generative result that matters.

GEO doesn’t just need on-page optimization.

You need those off-page signals as well.

  • Brand mentions related to your positioning. 
  • Success stories. 
  • Listicle placements. 
  • Directory placements. 
  • Podcast interviews. 
  • News stories.

The list goes on.

You must supply the “machines” with enough information to increase the likelihood that they are “certain” you are a good solution for their user’s query.

You must satisfy the machines. AI is the gatekeeper now.

If this sounds just like good SEO, yes, you’re right, it does.

The difference here is that you’re not chasing keywords.

You’re optimizing for online presence in terms of the business’s positioning.

This means that the business needs to have a position in the first place.

Many businesses have been built around organic or paid search keywords.

SEO and/or paid search has allowed them to win big.

AI-powered search changes the game considerably, and as it grows in usage, brands that previously did well in organic and paid search will naturally see a reduction in leads and sales.

And SEOs are the single most experienced people to help brands traverse this new search world.

Dig deeper: LLM perception match: The hurdle before fanout and why it matters

The battle between SEO and GEO is just beginning

The customers are still out there.

That’s key when we talk about SEO and GEO.

And right now, the vast majority of your customers are using traditional search engines.

SEO is still very much needed.

Eventually, Google will need to make AI mode its default mode to prevent the loss of users.

Until this happens, things are going to be messy.

Websites will lose traffic, rankings, and sales.

This will be due to Google’s constant updates and users’ movement between search modes and tools.

It’s not that the customers aren’t there. 

It’s just that those who have mainly enjoyed stable revenue generation from both organic and paid search will see a change happening.

At the same time, you might “accidentally win.”

Your business might gain more leads and sales due to the splintering of search.

We are in a period of transition.

So, what should you do?

Dig deeper: SEO beyond the website: Winning visibility in the AI era

It’s SEO and GEO: The importance of bothism (for now)

Whether SEO keeps the name or evolves into GEO, the reality is that search rankings will lose much of their value. 

That’s the real meaning behind “SEO is dead” when people talk about the rise of GEO.

We are in a turbulent transition period.

But this doesn’t mean you throw SEO in the bin.

We already know that SEO carries value into generative engines.

And you’ll likely see your websites gaining more traffic from LLMs.

So, right now isn’t the time to abandon SEO either. 

Smart brands should be adopting a “bothism” approach.

That means leveraging LLMs, building their brands digitally – and preparing for the eventual switch to a default AI mode.

This won’t be gradual. It’s not a “sometime in the next five years” shift.

It’s happening soon. 

But how can you get your business ready for this?

The ROI of the generative economy

I started this article with a simple premise – that SEOs can own the generative economy.

And they can.

However, we need to tackle the pain in the backside of all marketers: ROI.

Paid search has thrived because it gave business owners certainty around marketing spend.

ROAS was easy to calculate, and as a result, creativity and brand marketing were ignored.

GEO makes tracking attribution from SEO seem like a walk in the park.

And as a result, we are seeing the rise of platforms dedicated to showing the visibility of brand mentions in LLMs.

These tools show potential presence in LLM-based results by using a range of “pseudo” prompts fired into the platforms via APIs.

It’s not a real result, because no actual user ran the prompt.

SEO has always been a hard sell.

Budgets are low because it takes time to see results, and often those results have been harder to attribute to revenue.

GEO is different.

You are paying for deliverables rather than results.

Yes, those deliverables should lead to business results.

And those deliverables will create value.

  • Your on-site optimization is a deliverable.
  • Any content that you create is a deliverable.
  • Being mentioned in a listicle is a deliverable.
  • That directory listing you added is a deliverable.

In this sense, GEO becomes more palatable as a service, because you are doing “sensible and visible” brand marketing from Day 1.

GEO is more closely linked to copywriting, brand marketing, advertising, and PR than SEO ever was.

For SEOs, this is a good thing. You are tethered to activity and not rankings.

At its heart, GEO is about strengthening a brand’s digital presence around its brand positioning.

And this is why businesses should be investing in their online presence.

Now is a perfect time to look at your organic traffic and prepare.

Plan for all scenarios.

  • Prepare for AI Mode being switched on next week, and assess the potential business impact.
  • Anticipate a loss of leads and revenue, and map out how you will recover that lost revenue (which might mean activating other channels).
  • Review your online brand marketing, and identify where your brand is – or isn’t – listed.
  • Evaluate the publicity your business has earned over the last few years, and determine if it’s enough.
  • Check your brand search – does it exist?
  • Audit all other marketing channels – are you leveraging social, email, and more?

Which leads me to the final point.

SEOs can own the generative economy – but it doesn’t mean they will.

Dig deeper: In GEO, brand mentions do what links alone can’t

The competition was paid search – now it’s PR and copywriters

SEOs are indeed well placed to supply services to win the generative economy. But they aren’t the only ones.

As Page 1 rankings lose their value, so will the skill sets of thousands of SEOs.

Despite the weird desperation of some SEO consultants to overcomplicate search, the discipline now needs to become more polished and brand-focused than ever.

Brand marketing, PR, and copywriting are among the most critical skills for SEOs to understand moving forward.

Good GEO will be about managing, building, and increasing the value of a business’s online reputation and presence.

But equally, that also means that SEOs can start to enter markets they haven’t worked in before.

Adding PR and copywriting to an SEO agency’s range of services is a natural step that many are already taking.

And this is where we’re all heading.

But for SEOs – no matter how much they might hate change – the future is brighter and more exciting than ever.

Embrace it.

Dig deeper: Why AI will break the traditional SEO agency model


Most SEO research doesn’t lie – but doesn’t tell the truth either


A mirage in the desert looks like water from a distance and can fool even experienced travelers into chasing something that isn’t there.

SEO research can be the same.

It looks like science, sounds legitimate, and can trick even seasoned marketers into believing they’ve found something real.

Daniel Kahneman once said people would rather use a map of the Pyrenees while lost in the Alps than have no map at all.

In SEO, we take it further: we use a map of the Pyrenees, call it the Alps, and then confidently teach others our “navigation techniques.”

Worse still, most of us rarely question the authorities presenting these maps.

As Albert Einstein said, “Blind obedience to authority is the greatest enemy of the truth.” 

It’s time to stop chasing mirages and start demanding better maps.

This article shows:

  • How unscientific SEO research misleads us.
  • Why we keep falling for it.
  • What we can do to change that.

Spoiler: I’ll also share a prompt I created to quickly spot pseudoscientific SEO studies – so you can avoid bad decisions and wasted time.

The problems with unscientific SEO research

Real research should map the terrain and either validate or falsify your techniques. 

It should show:

  • Which routes lead to the summit and which end in deadly falls.
  • What gear will actually hold under pressure.
  • Where the solid handholds are – versus the loose rock that crumbles when you need it most.

Bad research sabotages all of that. Instead of standing on solid ground, you’re balancing on a shaky foundation.

Take one common example: “We GEO’d our clients to X% more traffic from ChatGPT.” 

These studies often skip a critical factor – ChatGPT’s own natural growth. 

Between September 2024 and July 2025, chatgpt.com’s traffic jumped from roughly 3 billion visits to 5.5 billion – an 83% increase. 

That growth alone could explain the numbers.
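
To make that baseline problem concrete, here’s a minimal sketch – the client figures are hypothetical – of how a headline “X% more traffic from ChatGPT” claim can shrink, or even reverse, once you adjust for the platform’s own growth over the same window:

```python
# Hypothetical before/after referral visits for a client during a "GEO" campaign.
client_before = 1_000
client_after = 1_500

# ChatGPT's own traffic growth over a comparable window (roughly +83%, per the figures above).
platform_growth = 0.83

headline_growth = client_after / client_before - 1
# Growth relative to what platform growth alone would predict for the same starting traffic.
adjusted_growth = (1 + headline_growth) / (1 + platform_growth) - 1

print(f"Headline claim: {headline_growth:+.0%} more ChatGPT traffic")   # +50%
print(f"Adjusted for platform growth: {adjusted_growth:+.0%}")          # about -18%, i.e. losing share
```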

Yet these findings are repackaged into sensational headlines that flood social media, boosted by authoritative accounts with massive followings.

Most of these studies fail the basics. 

They lack replicability and can’t be generalized.

Yet they are presented as if they are the definitive map for navigating the foggy AI mountain we’re climbing.

Let’s look at some examples of dubious SEO research.

AI Overview overlap studies

AI Overview overlap studies try to explain how much influence traditional SEO rankings have on appearing inside AI Overviews – often considered the new peak in organic search. 

Since AI Overviews’ first incarnation as the Search Generative Experience (SGE), dozens of these overlap studies have emerged.

I’ve read through all of them – so you don’t have to – and pulled together my own non-scientific meta study.

My meta study: AI Overviews vs. search overlap

I went back to early 2024, reviewed every study I could find, and narrowed them down to 11 that met three basic criteria:

  • They compare URLs, not domains.
  • They measure the overlap of the organic Top 10 with the AI Overview URLs (a minimal sketch of that calculation follows below).
  • They are based on all URLs in the Top 10, not just position 1.
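
For reference, here’s a minimal sketch of the URL-level overlap calculation these studies perform. The URL lists are placeholders, and note that studies differ on which set they use as the denominator – a choice that by itself can shift the reported numbers:

```python
def aio_overlap(organic_top10: list[str], aio_urls: list[str]) -> float:
    """Share of AI Overview URLs that also rank in the organic Top 10 for the same query."""
    if not aio_urls:
        return 0.0
    top10 = set(organic_top10[:10])
    return sum(url in top10 for url in aio_urls) / len(aio_urls)

# Hypothetical data for a single query.
organic = [f"https://example{i}.com/page" for i in range(1, 11)]
aio = ["https://example1.com/page", "https://example4.com/page", "https://other-site.com/guide"]

print(f"{aio_overlap(organic, aio):.0%}")  # 67% – two of the three AI Overview URLs also rank in the Top 10
```

A study then aggregates this per-query figure across its whole keyword set – which is exactly where keyword selection, sample size, and time frame start to drive the differences below.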

The end result (sorted by overlap in %):

[Chart: meta study – AI Overviews vs. search overlap, by study]
  • Overlap ranged from 5% to 77%.
  • Average: 45.84%
  • Median: 46.40%

These huge discrepancies come down to a few factors:

  • Different numbers of keywords.
  • Different keyword sets in general.
  • Different time frames.
  • Likely different keyword types.

In summary:

  • Most studies focused on the U.S. market. 
  • Only one provided a dataset for potential peer review. 
  • Just two included more than 100,000 keywords.
  • And none explained in detail how the keywords were chosen.

There are only two noteworthy patterns across the studies:

  • Over time, inclusion in the organic Top 10 seems to make it more likely to rank in AI Overviews.

In other words, Google now seems to rely more heavily on Top 10 results for AI Overview content than it did in the early days.

[Chart: AI Overview overlap (%) over time – gradually increasing, with a strong peak and a strong decline at two spots marked in red]

If we exclude these studies (marked in the graph above) that didn’t disclose the number of keywords, we get this graph:

[Chart: “Over time, inclusion is more likely if you rank in the Top 10” – AI Overview overlap (%) over time; despite some peaks, the general trend is upward]
  • Ranking in the Top 10 correlates with being more likely to also rank in an AI Overviews.

That’s it. But even then, there are several reasons why these studies are generally flawed.

  • None of the studies uses a keyword set big enough: The results cannot be generalized – like mapping one cliff face and claiming it applies to the entire mountain range.
  • AI is always changing – and always has been: The insights become outdated quickly, like GPS directions to a road that no longer exists.
  • It’s not always clear what was measured: Some reports are promoted with obscure marketing material, and you wouldn’t understand them without additional context – like a gear review that never mentions what type of rock it was tested on.
  • Too much focus on averages – and averages are dangerous: For one keyword type or niche, the overlap might be low; for others, it might be high. The average sits in the middle. It’s like a bridge built for average traffic – it handles normal loads fine but collapses when the heavy trucks come.
  • They ignore query fan-out in the analysis: These studies give directions for where to go – too bad they’re driving a car while we’re in a boat. All major AI chatbots use query fan-out, yet none of the studies accounted for it.

This isn’t new knowledge. Google filed a patent for generative engine summaries in March 2023, stating that they also use search result documents (SRDs) that are:

  • Related-query-responsive.
  • Recent-query-responsive.
  • Implied-query-responsive.
[Figure 2 from Google’s patent on generative summaries for search results, with related-query-responsive, recent-query-responsive, and implied-query-responsive SRDs highlighted]

Google may not have marketed this until May 2025, but it’s been in plain sight for over two years.

The real overlap of AI Overviews with Google Search depends on the overlap of all queries used, including synthetic queries. 

If you can’t measure that, at least mention it as part of your limitations going forward.

Here are three more examples of recent SEO research that I find questionable.

Profound’s ‘The Surprising Gap Between ChatGPT and Google’

Marketed as “wow, only 8-12% overlap between ChatGPT and Google Search Top 10 results,” this claim is actually based on just two queries repeated a few hundred times. 

I seriously doubt the data provider considered this high-quality research. 

Yet, despite its flaws, it’s been widely shared by creators.

German researchers’ study, ‘Is Google Getting Worse?,’ and multiple surveys on the same question from Statista, The Verge, and WalletHub

I covered these in my article, “Is Google really getting worse? (Actually, it’s complicated).” 

In short, the study has been frequently misquoted.

The surveys:

  • Contradict one another.
  • Often use suggestive framing.
  • Rely on what people say rather than what they actually do.

Adobe’s ‘How ChatGPT is changing the way we search’

A survey with only 1,000 participants – 200 of them marketers and small business owners – all of whom already use ChatGPT.

Yet, they promote the survey, stating that “77% of people in the U.S. use ChatGPT as a search engine.”

Why do we fall victim to these traps?

Not all SEO research is unscientific for the same reasons. I see four main causes.

Ignorance

Ignorance is like darkness.

At night, it’s natural to see poorly.

It means “I don’t know better (yet).”

You currently lack the capability and knowledge to conduct scientific research. It’s more or less neutral.

Stupidity 

This is when you are literally incapable, therefore also neutral. You just can’t. 

Few people who are intellectually capable enough to hold a position where they conduct research then fail to do so purely for lack of ability.

Amathia (voluntary stupidity)

Worse than both is when the lights are on and you still decide not to see. Then you don’t lack knowledge, but deny it. 

This is described as “Amathia” in Greek. You could know better, but actively choose not to.

[Diagram: a pyramid titled “Amathia is voluntary stupidity” – stupidity at the base, ignorance above it, and Amathia at the top; danger increases over time]

While all forms are dangerous, Amathia is the most dangerous. 

Amathia resists correction, insists it is good, and actively misleads others.

Biases, emotions, hidden agendas, and incentives

You want to be right and can’t see clearly, or openly try to deceive others.

You don’t have to lie to not tell the truth. You can deceive yourself just as well as you can deceive others. 

The best way to convince others of something is if you actually believe it yourself. We are masters at self-deception.

Few promote products/services they don’t believe in themselves. 

You just don’t realize the tricks a paycheck plays on your perception of reality.

Reasons why we fall for bad research in SEO

We have the ability to open our minds more than ever before. 

Yet, we decide to shrink ourselves down.

This is encouraged in part by smartphones and social media, both driven by big tech companies, which are also responsible for the greatest theft in human history (you could call it Grand Theft AI, or GTAI).

In a 2017 interview, Facebook’s founding president Sean Parker said:

  • “The thought process that went into building these applications, Facebook being the first of them, … was all about: ‘How do we consume as much of your time and conscious attention as possible?’ And that means that we need to sort of give you a little dopamine hit every once in a while. […] It’s a social-validation feedback loop … exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology.”

They don’t care what kind of engagement they get. Fake news that polarizes? Great, give it a boost. 

Most people are stuck in this hamster wheel of being bombarded with crap all day. 

The only missing piece? A middleman that amplifies. Those are content creators, publishers, news outlets, etc.

Now we have a loop. 

  • Platforms where research providers publish questionable studies.
  • Amplifiers seeking engagement for personal gain.
  • Consumers overwhelmed by a flood of information.

[Diagram: “the loop of doom” – social media platforms as the foundation for research providers and amplifiers, with consumers completing the loop]

We are stuck in social media echo chambers. 

We want simple answers, and we are mostly driven by our emotions. 

And social media plays into all of that.

How to fix all of this

As outlined throughout, we have three points that need fixing. 

  • Conducting the research.
  • Reporting on the research.
  • Consuming the research. 

Conducting SEO research with scientific rigor

Philosopher Karl Popper argued that what scientists do is try to prove themselves wrong in what they do or believe.

Most of us move the other way, trying to prove we’re right. This is a mindset problem. 

Research is more convincing when you try to prove yourself wrong.

Think steelmanning > strawmanning. 

Ask yourself if the opposite of what you believe could also be true, and seek out data and arguments. 

You sometimes also have to accept the fact that you can be wrong or not have an answer.

A few other things that would improve most SEO research:

  • Peer reviews: Provide the dataset you used and let others verify your findings. That automatically increases the believability of your work.
  • Observable behavior: Focus less on what is said and more on what you can see. What people say is almost never what they truly feel, believe, or do.
  • Continuous observation: Search quality and AI vs. search overlap are constantly changing, so they should also be observed and studied continuously.
  • Rock-solid study design: Read a good book on how to do scientific research. (Consider the classic, “The Craft of Research.”) Implement aspects like having test and control groups, randomization, acknowledging limitations, etc.

I know that we can do better.

Reporting more accurately on SEO research – and news in general

Controversial and questionable studies gain traction through attention and a lack of critical thinking.

Responsibility lies not just with the “researchers” but also with those who amplify their work.

What might help bring more balance to the conversation?

  • Avoid sensationalism: It’s likely that 80% of people only read the headline, so while it has to be click-attractive, it should avoid being click-baity.
  • Read it yourself: Don’t parrot what other people say, and be very careful with AI summaries.
  • Check the (primary) sources: Whether it’s an AI chatbot or someone else reporting on something, always check sources.
  • Have a critical stance: There is naive optimism and informed skepticism. Always ask yourself, “Does this make sense?”

Value truth over being first. That’s journalism’s responsibility.

Avoid falling for bad SEO research

A curious mind is your best friend. 

Socrates used to ask a lot of questions to expose gaps in people’s knowledge. 

Using this method, you can uncover whether the researchers have solid evidence for their claims or if they are drawing conclusions that their data doesn’t actually support.

Here are some questions that are worth asking:

  • Who conducted the research?
    • Who are the people behind it?
    • What is their goal?
    • Are there any conflicts of interest?
    • What incentives could influence their judgment?
  • How solid is the methodology of the study?
    • What time frame was used for the study?
    • Did they have test and control groups and were they observing or surveying?
    • Under what criteria was the sample selected?
    • Are the results statistically significant?
  • How generalizable and replicable are the results?
    • Did they differentiate between geolocations?
    • How big was the sample size?
    • Do they talk about replicability and potential peer reviews?
    • In what way are they talking about limitations of their research?

It’s unlikely that you can ask too many questions – or that you’ll end up drinking hemlock like Socrates.

Your research bulls*** detector

To leave you with something actionable, I built a prompt that you can use to assess research.

Copy the following prompt:

# Enhanced Research Evaluation Tool

You are a **critical research analyst**. Your task is to evaluate a research article, study, experiment, or survey based on **methodological integrity, clarity, transparency, bias, reliability, and temporal relevance**.

---

## Guiding Principles

- Always *flag missing or unclear information*.
- Use *explicit comments* for *anything ambiguous* that requires manual follow-up.
- Don't add emojis to headlines unless provided in the prompt.
- Apply *domain-aware scrutiny* to *timeliness*. In rapidly evolving fields (e.g., AI, genomics, quantum computing), data, tools, or models older than *12–18 months* may already be outdated. In slower-moving disciplines (e.g., historical linguistics, geology), older data may still be valid.
- Use your own corpus knowledge to assess what counts as *outdated*, and if uncertain, flag the timeframe as needing expert verification.
- 📈 All scores use the same logic:  
  ➤ *Higher = better*  
  ➤ For bias and transparency, *higher = more transparent and reliable*  
  ➤ For evidence and methodology, *higher = more rigorous and valid*

- *AI-specific guidance*:  
   - Use of *GPT-3.5 or earlier (e.g., GPT-3.5 Turbo, DaVinci-003)* after 2024 should be treated as *outdated unless explicitly justified*.  
   - Models such as *GPT-4o, Claude 4, Gemini 2.5* are considered current *as of mid 2025*.  
   - *Flag legacy model use* unless its relevance is argued convincingly.

---

## 1. Extract Key Claims and Evidence

| *Claim* | *Evidence Provided* | *Quote/Passage* | *Supported by Data?* | *Score (1–6)* | *Emoji* | *Comment* |
|----------|------------------------|--------------------|-------------------------|------------------|-----------|-------------|
|          |                        |                    | Yes / No / Unclear      |                  | 🟥🟧🟩       | Explain rationale. Flag ambiguous or unsupported claims. |

*Legend* (for Claims & Evidence Strength):  
🟥 = Weak (1–2) 🟧 = Moderate (3–4) 🟩 = Strong (5–6) Unclear = Not Provided or Needs Review  
📈 Higher score = better support and stronger evidence

---

## 2. Evaluate Research Design and Methodology

| *Criteria* | *Score (1–6)* | *Emoji* | *Comment / Flag* |
|--------------|------------------|-----------|---------------------|
| Clarity of hypothesis or thesis                        |          | 🟥🟧🟩 |             |
| Sample size adequacy                                    |          | 🟥🟧🟩 |             |
| Sample selection transparency (e.g., age, location, randomization) | | 🟥🟧🟩 |         |
| Presence of test/control groups (or clarity on observational methods) | | 🟥🟧🟩 |       |
| *Time frame of the study (data collection window)*    | ? / 1–6 | Unclear / 🟥🟧🟩 | If not disclosed, mark as Unclear. If disclosed, assess whether the data is still timely for the domain. |
| *Temporal Relevance* (Is the data or model still valid?) | ? / 1–6 | Unclear / 🟥🟧🟩 | Use domain-aware judgment (e.g., AI/biotech: < 12 months preferred; clinical: within 3–5 years; history/philosophy: lenient). For AI, if models like *GPT-3.5 or earlier* are used without explanation, flag as outdated. |
| Data collection methods described                       |          | 🟥🟧🟩 |             |
| Statistical testing / significance explained            |          | 🟥🟧🟩 |             |
| Acknowledgment of limitations                           |          | 🟥🟧🟩 |             |
| Provision of underlying data / replicability info       |          | 🟥🟧🟩 |             |
| Framing and neutrality (no sensationalism or suggestive language) | | 🟥🟧🟩 |   |
| Bias minimization (e.g., blinding, naturalistic observation) |      | 🟥🟧🟩 |             |
| Transparency about research team, funders, affiliations |          | 🟥🟧🟩 |             |
| Skepticism vs. naive optimism                           |          | 🟥🟧🟩 |             |

*Legend* (for Methodology):  
🟥 = Poor (1–2) 🟧 = Moderate (3–4) 🟩 = Good (5–6) Unclear = Not Specified / Requires Manual Review  
📈 Higher score = better design and methodological clarity

---

## 3. Bias Evaluation Tool

| *Bias Type* | *Score (1–6)* | *Emoji* | *Comment* |
|---------------|------------------|-----------|-------------|
| Political Bias or Framing            |          | 🟥🟧🟩 |             |
| Economic/Corporate Incentives       |          | 🟥🟧🟩 |             |
| Ideological/Advocacy Bias           |          | 🟥🟧🟩 |             |
| Methodological Bias (design favors specific outcome) | | 🟥🟧🟩 |     |
| Lack of Disclosure or Transparency  |          | 🟥🟧🟩 |             |

*Legend* (for Bias):  
🟥 = Low transparency (1–2) 🟧 = Moderate (3–4) 🟩 = High transparency (5–6)  
📈 Higher score = less bias, more disclosure

---

## 4. Summary Box

### Scores

| *Category*                  | *Summary* |
|------------------------------|-------------|
| *Average Methodology Score* | X.X / 6 🟥🟧🟩 (higher = better) |
| *Average Bias Score*        | X.X / 6 🟥🟧🟩 (higher = better transparency and neutrality) |
| *Judgment*        | ✅ Trustworthy / ⚠ Needs Caution / ❌ Unreliable |
| *Comment*  | e.g., “Study relies on outdated models (GPT-3.5),” “Time window not disclosed,” “Highly domain-specific assumptions” |

---

### 👍 Strengths
- ...
- ...
- ...

### 👎 Weaknesses
- ...
- ...
- ...

### 🚩 Flag / Warnings
- ...
- ...
- ...
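
If you’d rather run this check programmatically than paste the prompt into a chat window, here’s a minimal sketch using the OpenAI Python SDK – the model name, file names, and environment setup are illustrative assumptions, not part of the prompt itself:

```python
from pathlib import Path

from openai import OpenAI  # pip install openai; expects OPENAI_API_KEY in the environment

# The evaluation prompt above and the study to review, saved to local files (hypothetical names).
evaluation_prompt = Path("research_evaluation_prompt.md").read_text()
study_text = Path("study_under_review.txt").read_text()

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # swap in whatever current model you have access to
    messages=[
        {"role": "system", "content": evaluation_prompt},
        {"role": "user", "content": f"Evaluate the following study:\n\n{study_text}"},
    ],
)

print(response.choices[0].message.content)  # the filled-in tables and summary box
```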

Here’s an example output of the prompt, run against a study on generative engine optimization:

[Screenshot: the evaluation output, with numbered sections covering each area below]

  • What claims are made and how they are supported.
  • How the research design and methodology fare.
  • Potential biases that are visible in the research.
  • A summary box with strengths, weaknesses, and potential flags/warnings.

This study scores high as it follows a robust scientific methodology. The researchers even provided their dataset. (I checked the link.) 

Important notes: 

  • An analysis like this doesn’t replace taking a look yourself or thinking critically about the information presented. What it can do, however, is give you an indication of whether what you’re reading is inherently flawed.
  • If the researchers include some form of prompt injection intended to manipulate the evaluation, you could get a skewed result.

That said, working with a structured prompt like this will yield much better results than “summarize this study briefly.”

Want better, more honest SEO research? Look at the person in the mirror

SEO is not deterministic – it’s not predictable with a clear cause-and-effect relationship.

Most of what we do in SEO is probabilistic. 

Uncertainty and randomness always play a part, even though we often don’t like to admit it.

As a result, SEO research can’t and doesn’t have to meet other disciplines’ standards. 

But the uncomfortable truth is that our industry’s hunger for certainty has created a marketplace for false confidence. 

We’ve built an ecosystem where suspect research gets rewarded with clicks and authority while rigorous honesty gets ignored, left alone in the dark.

The mountain we’re climbing isn’t getting any less foggy. 

But we can choose whether to follow false maps or build better ones together. 

Science isn’t always about having all the answers – it’s about asking better questions.

I like to say that changing someone else’s behavior and standards takes time. 

In contrast, you can immediately change yours. Change begins with the person in the mirror. 

Whether you conduct, report, or consume SEO research.
