Posts

Google shifts Lookalike to AI signals in Demand Gen

The Google Ads Demand Gen playbook for today’s fractured consumer journey

A core targeting lever in Google Demand Gen campaigns is changing. Starting March 2026, Lookalike audiences will act as optimization signals — not hard constraints — potentially widening reach and leaning more heavily on automation to drive conversions.

What is happening. Per an update to Google’s Help documentation, Lookalike segments in Demand Gen are moving from strict similarity-based targeting to an AI-driven suggestion model.

  • Before: Advertisers selected a similarity tier (narrow, balanced, broad), and campaigns targeted users strictly within that Lookalike pool.
  • After: The same tiers act as signals. Google’s system can expand beyond the Lookalike list to reach users it predicts are likely to convert.

Between the lines. This effectively reframes Lookalikes from a fence to a compass. Instead of limiting delivery to a defined cohort, advertisers are feeding intent signals into Google’s automation and allowing it to search for performance outside preset boundaries.

How this interacts with Optimized Targeting. The new Lookalike-as-signal approach resembles Optimized Targeting — but it doesn’t replace it.

  • When advertisers layer Optimized Targeting on top, Google says the system may expand reach even further.
  • In practice, this stacks multiple automation signals, increasing the algorithm’s freedom to pursue lower CPA or higher conversion volume.

Opt-out option. Advertisers who want to preserve legacy behavior can request continued access to strict Lookalike targeting through a dedicated opt-out form. Without that request, campaigns will default to the new signal-based model.

Why we care. This update changes how much control advertisers will have over who their ads reach in Google Demand Gen campaigns. Lookalike audiences will no longer strictly limit targeting — they’ll guide AI expansion — which can significantly affect scale, CPA, and overall performance.

It also signals a broader shift toward automation, similar to trends driven by Meta Platforms. Advertisers will need to test carefully, rethink audience strategies, and decide whether to embrace the added reach or opt out to preserve tighter targeting.

Zoom out. The shift mirrors a broader industry trend toward AI-first audience expansion, similar to moves by Meta Platforms over the past few years. Platforms are steadily trading granular manual controls for machine-led optimization.

Why Google is doing this. Digital marketer Dario Zannoni offers two reasons Google may be making this change:

  • Strict Lookalike targeting can cap scale and constrain performance in conversion-focused campaigns.
  • Maintaining high-quality similarity models is increasingly complex, making broader automation more attractive.

The bottom line. For performance marketers, this is another step toward automation-centric buying. While reduced control may be uncomfortable, comparable platform changes have often produced performance gains in mainstream use cases. Expect a new testing cycle as advertisers measure how expanded Lookalike signals affect CPA, reach, and incremental conversions.

First seen. This update was spotted by Zannoni, who shared his thoughts on LinkedIn.

Dig deeper. Use Lookalike segments to grow your audience

Google’s Jeff Dean: AI Search relies on classic ranking and retrieval

AI search stack

Jeff Dean says Google’s AI Search still works like classic Search: narrow the web to relevant pages, rank them, then let a model generate the answer.

In an interview on Latent Space: The AI Engineer Podcast, Google’s chief AI scientist explained how Google’s AI systems work and how much they rely on traditional search infrastructure.

The architecture: filter first, reason last. Visibility still depends on clearing ranking thresholds. Content must enter the broad candidate pool, then survive deeper reranking before it can be used in an AI-generated response. Put simply, AI doesn’t replace ranking. It sits on top of it.

Dean said an LLM-powered system doesn’t read the entire web at once. It starts with Google’s full index, then uses lightweight methods to identify a large candidate pool — tens of thousands of documents. Dean said:

  • “You identify a subset of them that are relevant with very lightweight kinds of methods. You’re down to like 30,000 documents or something. And then you gradually refine that to apply more and more sophisticated algorithms and more and more sophisticated sort of signals of various kinds in order to get down to ultimately what you show, which is the final 10 results or 10 results plus other kinds of information.”

Stronger ranking systems narrow that set further. Only after multiple filtering rounds does the most capable model analyze a much smaller group of documents and generate an answer. Dean said:

  • “And I think an LLM-based system is not going to be that dissimilar, right? You’re going to attend to trillions of tokens, but you’re going to want to identify what are the 30,000-ish documents that are with the maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the tasks that the user has asked me to do?”

Dean called this the “illusion” of attending to trillions of tokens. In practice, it’s a staged pipeline: retrieve, rerank, synthesize. Dean said:

  • “Google search gives you … not the illusion, but you are searching the internet, but you’re finding a very small subset of things that are relevant.”
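The staged funnel Dean describes (broad retrieval, progressive reranking, final synthesis) can be sketched in a few lines. Everything below, from the tiny corpus to the overlap-based scorers and cutoffs, is a toy stand-in for illustration, not Google's actual system:

```python
# Toy sketch of the staged retrieval funnel: a cheap filter narrows the
# corpus, a costlier reranker narrows it further, and only the small
# surviving set reaches the final "synthesis" step.

def cheap_filter(query, corpus, keep=1000):
    """Lightweight pass: keep documents sharing any query word."""
    words = set(query.lower().split())
    hits = [d for d in corpus if words & set(d.lower().split())]
    return hits[:keep]

def rerank(query, docs, keep=2):
    """More expensive pass: score by word overlap, keep the best."""
    words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(words & set(d.lower().split())),
                    reverse=True)
    return scored[:keep]

def synthesize(query, docs):
    """Final stage: the most capable model only sees the survivors."""
    return f"Answer to {query!r} built from {len(docs)} documents."

corpus = [
    "how to brew pour over coffee",
    "coffee grinder maintenance tips",
    "best hiking trails in utah",
    "pour over coffee water temperature",
    "tax filing deadlines 2026",
]
candidates = cheap_filter("pour over coffee", corpus)
finalists = rerank("pour over coffee", candidates)
print(synthesize("pour over coffee", finalists))
```

In the real pipeline each stage applies progressively more sophisticated signals; the structure, not the scoring, is the point.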

Matching: from keywords to meaning. Nothing new here, but we heard another reminder that covering a topic clearly and comprehensively matters more than repeating exact-match phrases.

Dean explained how LLM-based representations changed how Google matches queries to content.

Older systems relied more on exact word overlap. With LLM representations, Google can move beyond the idea that particular words must appear on the page and instead evaluate whether a page — or even a paragraph — is topically relevant to a query. Dean said:

  • “Going to an LLM-based representation of text and words and so on enables you to get out of the explicit hard notion of particular words having to be on the page. But really getting at the notion of this topic of this page or this page paragraph is highly relevant to this query.”

That shift lets Search connect queries to answers even when wording differs. Relevance increasingly centers on intent and subject matter, not just keyword presence.
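For intuition, here is a toy contrast between the two matching styles. The tiny hand-made two-dimensional "embeddings" are invented stand-ins for learned LLM representations:

```python
import math

# Exact-word matching vs. meaning-based matching. The vectors below are
# hand-picked for illustration; real systems learn them from data.
TOY_VECS = {
    "restaurant": (1.0, 0.1), "bistro": (0.95, 0.2),
    "cafe": (0.9, 0.3), "laptop": (0.0, 1.0),
}

def exact_match(query_word, page_words):
    """Old style: the literal word must appear on the page."""
    return query_word in page_words

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def semantic_match(query_word, page_words, threshold=0.9):
    """New style: any page word close enough in meaning counts."""
    qv = TOY_VECS[query_word]
    return any(cosine(qv, TOY_VECS[w]) >= threshold for w in page_words)

page = ["bistro", "laptop"]
print(exact_match("restaurant", page))     # no literal overlap
print(semantic_match("restaurant", page))  # but "bistro" is close in meaning
```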

Query expansion didn’t start with AI. Dean pointed to 2001, when Google moved its index into memory across enough machines to make query expansion cheap and fast. Dean said:

  • “One of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general. Because if you don’t have the page in your index, you’re going to not do well.
  • “And then we also needed to scale our capacity because we were, our traffic was growing quite extensively. So we had a sharded system where you have more and more shards as the index grows, you have like 30 shards. Then if you want to double the index size, you make 60 shards so that you can bound the latency by which you respond for any particular user query. And then as traffic grows, you add more and more replicas of each of those.
  • “And so we eventually did the math that realized that in a data center where we had say 60 shards and 20 copies of each shard, we now had 1,200 machines with disks. And we did the math and we’re like, Hey, one copy of that index would actually fit in memory across 1,200 machines. So in 2001, we … put our entire index in memory and what that enabled from a quality perspective was amazing.”

Before that, adding terms was expensive because it required disk access. Once the index lived in memory, Google could expand a short query into dozens of related terms — adding synonyms and variations to better capture meaning. Dean said:

  • “Before, you had to be really careful about how many different terms you looked at for a query, because every one of them would involve a disk seek.
  • “Once you have the whole index in memory, it’s totally fine to have 50 terms you throw into the query from the user’s original three- or four-word query. Because now you can add synonyms like restaurant and restaurants and cafe and bistro and all these things.
  • “And you can suddenly start … getting at the meaning of the word as opposed to the exact semantic form the user typed in. And that was … 2001, very much pre-LLM, but really it was about softening the strict definition of what the user typed in order to get at the meaning.”
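The expansion Dean describes can be sketched as a simple lookup: a short user query becomes a broader set of terms before retrieval. The synonym table here is hand-made for illustration; real systems learn these relationships:

```python
# Toy query expansion: widen a short query with synonyms and variants.
# The synonym table is invented, not Google's.
SYNONYMS = {
    "restaurant": ["restaurants", "cafe", "bistro", "eatery"],
    "cheap": ["inexpensive", "affordable", "budget"],
}

def expand_query(query):
    terms = []
    for word in query.lower().split():
        terms.append(word)
        terms.extend(SYNONYMS.get(word, []))
    return terms

# A 2-word query becomes a 9-term set for matching.
print(expand_query("cheap restaurant"))
```

Once the index lived in memory, throwing dozens of extra terms at it like this became cheap, which is exactly the shift Dean describes.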

That change pushed Search toward intent and semantic matching years before LLMs. AI Mode (and its other AI experiences) continues Google’s ongoing shift toward meaning-based retrieval, enabled by better systems and more compute.

Freshness as a core advantage. Dean said one of Search’s biggest transformations was update speed. Early systems refreshed pages as rarely as once a month. Over time, Google built infrastructure that can update pages in under a minute. Dean said:

  • “In the early days of Google, we were growing the index quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most.”

That improved results for news queries and affected the main search experience. Users expect current information, and the system is designed to deliver it. Dean said:

  • “If you’ve got last month’s news index, it’s not actually that useful.”

Google uses systems to decide how often to crawl a page, balancing how likely it is to change with how valuable the latest version is. Even pages that change infrequently may be crawled often if they’re important enough. Dean said:

  • “There’s a whole … system behind the scenes that’s trying to decide update rates and importance of the pages. So, even if the update rate seems low, you might still want to recrawl important pages quite often because the likelihood they change might be low, but the value of having updated is high.”
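The tradeoff Dean describes can be modeled as an expected-value calculation: recrawl priority balances the chance a page changed against the value of having the fresh version. The formula and the numbers below are invented for illustration, not Google's actual crawl scheduler:

```python
# Toy recrawl scheduling: priority = P(page changed) * value of freshness.
# Probabilities and values here are made up for illustration.

def recrawl_priority(change_prob, freshness_value):
    """Expected value of recrawling now."""
    return change_prob * freshness_value

pages = {
    "breaking-news-homepage": recrawl_priority(0.99, 100),
    "rarely-edited-but-critical-policy": recrawl_priority(0.05, 100),
    "abandoned-blog-post": recrawl_priority(0.05, 1),
}
# Even a low-change-rate page outranks a stale low-value one when the
# payoff of having the latest version is high.
ordered = sorted(pages, key=pages.get, reverse=True)
print(ordered)
```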

Why we care. AI answers don’t bypass ranking, crawl prioritization, or relevance signals. They depend on them. Eligibility, quality, and freshness still determine which pages are retrieved and narrowed. LLMs change how content is synthesized and presented — but the competition to enter the underlying candidate set remains a search problem.

The interview. Owning the AI Pareto Frontier — Jeff Dean

Why AI optimization is just long-tail SEO done right

The return of long-tail SEO in the AI era

If you look at job postings on Indeed and LinkedIn, you’ll see a wave of acronyms added to the alphabet soup as companies try to hire people to boost visibility on large language models (LLMs).

Some people are calling it generative engine optimization (GEO). Others call it answer engine optimization (AEO). Still others call it artificial intelligence optimization (AIO). I prefer large model answer optimization (LMAO).

I find these new acronyms a bit ridiculous because while many like to think AI optimization is new, it isn’t. It’s just long-tail SEO — done the way it was always meant to be done.

Why LLMs still rely on search

Most LLMs (e.g., GPT-4o, Claude 4.5, Gemini 1.5, Grok-2) are transformers trained to do one thing: predict the next token given all previous tokens.
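That single objective can be sketched as one decoding step: the model assigns a score (logit) to every candidate next token, and softmax turns the scores into a probability distribution. The context and logits below are invented; a real transformer computes the scores from the context:

```python
import math

# One toy next-token step. A real model would derive these logits from
# the context; here they are hard-coded for illustration.

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

context = "the long tail of"
logits = {"search": 3.1, "keywords": 2.4, "banana": -1.0}  # invented scores
probs = softmax(logits)
next_token = max(probs, key=probs.get)
print(next_token)
```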

AI companies train them on massive datasets from public web crawls, such as:

  • Common Crawl.
  • Digitized books.
  • Wikipedia dumps.
  • Academic papers.
  • Code repositories.
  • News archives.
  • Forums.

The data is heavily filtered to remove spam, toxic content, and low-quality pages. Full pretraining is extremely expensive, so companies run major foundation training cycles only every few years and rely on lighter fine-tuning for more frequent updates.

So what happens when an LLM encounters a question it can’t answer with confidence, despite the massive amount of training data?

AI companies use real-time web search and retrieval-augmented generation (RAG) to keep responses fresh and accurate, bridging the limits of static training data. In other words, the LLM runs a web search.
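A minimal sketch of that fallback loop, with an invented "training knowledge" table and a stubbed-out web search standing in for the real components:

```python
# Toy retrieval-augmented generation loop: answer from "training
# knowledge" when confident, otherwise run a web-style search and
# ground the answer in the retrieved text. All data here is invented.

KNOWN_FACTS = {"capital of france": "Paris"}

WEB = {
    "best space heaters 2026": "Review roundup: Model X tops domestic picks.",
}

def search(query):
    """Stub for a live web search."""
    return WEB.get(query.lower(), "")

def answer(query):
    fact = KNOWN_FACTS.get(query.lower())
    if fact:                   # confident: answer from training knowledge
        return fact
    retrieved = search(query)  # unsure: fall back to live retrieval
    if retrieved:
        return f"Based on a web search: {retrieved}"
    return "I don't know."

print(answer("capital of france"))
print(answer("best space heaters 2026"))
```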

To see this in real time, many LLMs let you click an icon or “Show details” to view the process. For example, when I use Grok to find highly rated domestically made space heaters, it converts my question into a standard search query.

Dig deeper: AI search is booming, but SEO is still not dead

The long-tail SEO playbook is back

Many of us long-time SEO practitioners have praised the value of long-tail SEO for years. But there’s one main reason it never took off for many brands: Google.

As long as Google’s interface was a single text box, users were conditioned to search with one- and two-word queries. Most SEO revenue came from these head terms, so priorities focused on competing for the No. 1 spot for each industry’s top phrase.

Many brands treated long-tail SEO as a distraction. Some cut content production and community management because they couldn’t see the ROI. Most saw more value in protecting a handful of head terms than in creating content to capture the long tail of search.

Fast forward to 2026. People typing LLM prompts do so conversationally, adding far more detail and nuance than they would in a traditional search engine. LLMs take these prompts and turn them into search queries. They won’t stop at a few words. They’ll construct a query that reflects whatever detail their human was looking for in the prompt.

Suddenly, the fat head of the search curve is being replaced with a fat tail. While humans continue to go to search engines for head terms, LLMs are sending these long-tail search queries to search engines for answers.

While AI companies are coy about disclosing exactly who they partner with, most public information points to the following search engines as the ones their LLMs use most often:

  • ChatGPT – Bing Search.
  • Claude – Brave Search.
  • Gemini – Google Search.
  • Grok – X Search and its own internal web search tool.
  • Perplexity – Its own hybrid index.

Right now, humans conduct billions of searches each month on traditional search engines. As more people turn to LLMs for answers, we’ll see exponential growth in LLMs sending search queries on their behalf.

SEO is being reborn.

Dig deeper: Why ‘it’s just SEO’ misses the mark in the era of AI SEO

How to do long-tail SEO with help from AI

The principles of long-tail SEO haven’t changed much. It’s best summed up by Baseball Hall of Famer Wee Willie Keeler: “Keep your eye on the ball and hit ’em where they ain’t.”

Success has always depended on understanding your audience’s deepest needs, knowing what truly differentiates your brand, and creating content at the intersection of the two.

As straightforward as this strategy has been, few have executed it well, for understandable reasons.

Reading your customers’ minds is hard. Keyword research is tedious. Content creation is hard. It’s easy to get lost in the weeds.

Happily, there’s someone to help: your favorite LLM.

Here are a few best practices I’ve used to create strong long-tail content over the years, with a twist. What once took days, weeks, or even months, you can now do in minutes with AI.

1. Ask your LLM what people search when looking for your product or service

The first rule of long-tail SEO has always been to get into your audience’s heads and understand their needs. This once required commissioning surveys and hiring research firms to figure out.

But for most brands and industries, an LLM can handle at least the basics. Here’s a sample prompt you can use.

Act as an SEO strategist and customer research analyst. You're helping with long-tail keyword discovery by modeling real customer questions.

I want to discover long-tail search questions real people might ask about my business, products, and industry. I’m not looking for mere keyword lists. Generate realistic search questions that reflect how people research, compare options, solve problems, and make decisions.

Company name: [COMPANY NAME]
Industry: [INDUSTRY]
Primary product/service: [PRIMARY PRODUCT OR SERVICE]
Target customer: [TARGET AUDIENCE]
Geography (if relevant): [LOCATION OR MARKET]

Generate a list of 75 – 100 realistic, natural-language search queries grouped into the following categories:

AWARENESS
• Beginner questions about the category
• Problem-based questions (pain points, frustrations, confusion)

CONSIDERATION
• Comparison questions (alternatives, competitors, approaches)
• “Best for” and use-case questions
• Cost and pricing questions

DECISION
• Implementation or getting-started questions
• Trust, credibility, and risk questions

POST-PURCHASE
• Troubleshooting questions
• Optimization and advanced/expert questions

EDGE CASES
• Niche scenarios
• Uncommon but realistic situations
• Advanced or expert questions

Guidelines:
• Write queries the way real people search in Google or ask AI assistants.
• Prioritize specificity over generic keywords.
• Include question formats, “how to” queries, and scenario-based searches.
• Avoid marketing language.
• Include emotional, situational, and practical context where relevant.
• Don't repeat the same query structure with minor variations.
• Each query should suggest a clear content angle.

Output as a clean bullet list grouped by category.

You can tweak this prompt for your brand and industry. The key is to force the LLM (and yourself) to think like a customer and avoid the trap of generating keyword lists that are just head-term variations dressed up as long-tail queries.

With a prompt like this, you move away from churning out “keyword ideas” and toward understanding real customer needs you can build useful content around.

Dig deeper: If SEO is rocket science, AI SEO is astrophysics

2. Use your LLM to analyze your search data

Most large brands and sites don’t realize they’ve been sitting on a treasure trove of user intelligence: on-site search data.

When customers type a query into your site’s search box, they’re looking for something they expect your brand to provide.

If you see the same searches repeatedly, it usually means one of two things:

  • You have the information, but users can’t find it.
  • You don’t have it at all.

In both cases, it’s a strong signal you need to improve your site’s UX, add meaningful content, or both.

There’s another advantage to mining on-site search data: it reveals the exact words your audience uses, not the terms your team assumes they use.

Historically, the challenge has been the time required to analyze it. I remember projects where I locked myself in a room for days, reviewing hundreds of thousands of queries line by line to find patterns — sorting, filtering, and clustering them by intent.

If you’ve done the same, you know the pattern. The first few dozen keywords represent unique concepts, but eventually you start seeing synonyms and variations.
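As a rough sketch of that manual clustering work, a few lines of Python can collapse obvious variants before any deeper analysis. The naive singularization here is deliberately crude; real pipelines use stemming, embeddings, or an LLM:

```python
from collections import Counter

# Collapse site-search query variants into countable groups.
# The normalization is intentionally naive (lowercase plus a crude
# trailing-"s" strip), just to show the shape of the work.

def normalize(query):
    words = query.lower().strip().split()
    return " ".join(w.rstrip("s") for w in words)  # "heaters" -> "heater"

queries = [
    "space heater", "Space Heaters", "space heaters",
    "return policy", "returns policy",
]
clusters = Counter(normalize(q) for q in queries)
print(clusters.most_common())
```

Even this crude grouping surfaces the repeated concepts worth building content around; the LLM prompt below takes the same idea much further.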

All of this is buried treasure waiting to be explored. Your LLM can help. Here’s a sample prompt you can use:

You're an SEO strategist analyzing internal site search data.

My goal is to identify content opportunities from what users are searching for on my website – including both major themes and specific long-tail needs within those themes.

I have attached a list of site search queries exported from GA4. Please:

STEP 1 – Cluster by intent
Group the queries into logical intent-based themes.

STEP 2 – Identify long-tail signals inside each theme
Within each theme:
• Identify recurring modifiers (price, location, comparisons, troubleshooting, etc.)
• Identify specific entities mentioned (products, tools, features, audiences, problems)
• Call out rare but high-intent searches
• Highlight wording that suggests confusion or unmet expectations

STEP 3 – Generate content ideas
For each theme:
• Suggest 3 – 5 content ideas
• Include at least one long-tail content idea derived directly from the queries
• Include one “high-intent” content idea
• Include one “problem-solving” content idea

STEP 4 – Identify UX or navigation issues
Point out searches that suggest:
• Users cannot find existing content
• Misleading navigation labels
• Missing landing pages

Output format:
Theme:
Supporting queries:
Long-tail insights:
Content opportunities:
UX observations:

Again, customize this prompt based on what you know about your audience and how they search.

The detail matters. Many SEO practitioners stop at a prompt like “give me a list of topics for my clients,” but this pushes the LLM beyond simple clustering to understand the intent behind the searches.

I used on-site search data because it’s one of the richest, most transparent, and most actionable sources. But similar prompts can uncover hidden value in other keyword lists, such as “striking distance” terms from Google Search Console or competitive keywords from Semrush.

Even better, if your organization keeps detailed customer interaction records (e.g., sales call notes, support tickets, chat transcripts), those can be more valuable. Unlike keyword datasets, they capture problems in full sentences, in the customer’s own words, often revealing objections, confusion, and edge cases that never appear in traditional keyword research.

3. Create great content

The next step is to create great content.

Your goal is to create content so strong and authoritative that it’s picked up by sources like Common Crawl and survives the intense filtering AI companies apply when building LLM training sets. Realistically, only pioneering brands and recognized authorities can expect to operate in this rarefied space.

For the rest of us, the opportunity is creating high-quality long-tail content that ranks at the top across search engines — not just Google, but Bing, Brave, and even X.

This is one area where I wouldn’t rely on LLMs, at least not to generate content from scratch.

Why?

LLMs are sophisticated pattern matchers. They surface and remix information from across the internet, even obscure material. But they don’t produce genuinely original thought.

At best, LLMs synthesize. At worst, they hallucinate.

Many worry AI will take their jobs. And it will — for anyone who thinks “great content” means paraphrasing existing authority sources and competing with Wikipedia-level sites for broad head terms. Most brands will never be the primary authority on those terms. That’s OK.

The real opportunity is becoming the authority on specific, detailed, often overlooked questions your audience actually has. The long tail is still wide open for brands willing to create thoughtful, experience-driven content that doesn’t already exist everywhere else.

We need to face facts. The fat head is shrinking. The land rush is now for the “fat tail.” Here’s what brands need to do to succeed:

Dominate searches for your brand

Search your brand name in a keyword tool like Semrush and review the long-tail variations people type into Google. You’ll likely find more than misspellings. You’ll see detailed queries about pricing, alternatives, complaints, comparisons, and troubleshooting.

If you don’t create content that addresses these topics directly — the good and the bad — someone else will. It might be a Reddit thread from someone who barely knows your product, a competitor attacking your site, a negative Google Business Profile review, or a complaint on Trustpilot.

When people search your brand, your site should be the best place for honest, complete answers — even and especially when they aren’t flattering. If you don’t own the conversation, others will define it for you.

The time for “frequently asked questions” is over. You need to answer every question about your brand — frequent, infrequent, and everything in between.

Go long

Head terms in your industry have likely been dominated by top brands for years. That doesn’t mean the opportunity is gone.

Beneath those competitive terms is a vast layer of unbranded, long-tail searches that have likely been ignored. Your data will reveal them.

Review on-site search, Google Search Console queries, customer support questions, and forums like Reddit. These are real people asking real questions in their own words.

The challenge isn’t finding questions to write about. It’s delivering the best answers — not one-line responses to check a box, but clear explanations, practical examples, and content grounded in real experience that reflects what sets your brand apart.

Dig deeper: Timeless SEO rules AI can’t override: 11 unshakeable fundamentals

Expertise is now a commodity: Lean into experience, authority, and trust

Publishing expert content still matters, but its role has changed. Today, anyone can generate “expert-sounding” articles with an LLM.

Whether that content ranks in Google is increasingly beside the point, as many users go straight to AI tools for answers.

As the “expertise” in E-E-A-T becomes table stakes, differentiation comes from what AI and competitors can’t easily replicate: experience, authority, and trust.

That means publishing:

  • Original insights and genuine thought leadership from people inside your company.
  • Real customer stories with measurable outcomes.
  • Transparent reviews and testimonials.
  • Evidence that your brand delivers what it promises.

This isn’t just about blog content. These signals should appear across your site — from your About page to product pages to customer support content. Every page should reinforce why a real person should trust your brand.

Stop paywalling your best content

I’m seeing more brands put their strongest content behind logins or paywalls. I understand why. Many need to protect intellectual property and preserve monetization. But as a long-term strategy, this often backfires.

If your content is truly valuable, the ideas will spread anyway. A subscriber may paraphrase it. An AI system may summarize it. A crawler may access it through technical workarounds. In the end, your insights circulate without attribution or brand lift.

When your best content is publicly accessible, it can be cited, linked to, indexed, and discussed. That visibility builds authority and trust over time.

In a search- and AI-driven ecosystem, discoverability often outweighs modest direct content monetization.

This doesn’t mean content businesses can’t charge for anything. It means being strategic about what you charge for. A strong model is to make core knowledge and thought leadership open while monetizing things such as:

  • Tools.
  • Community access.
  • Premium analysis or data.
  • Courses or certifications.
  • Implementation support.
  • Early access or deeper insights.

In other words, let your ideas spread freely and monetize the experience, expertise, and outcomes around them.

Stop viewing content as a necessary evil

I still see brands hiding content behind CSS “read more” links or stuffing blocks of “SEO copy” at the bottom of pages, hoping users won’t notice but search engines will.

Spoiler alert: they see it. They just don’t care.

Content isn’t something you add to check an SEO box or please a robot. Every word on your site must serve your customers. When content genuinely helps users understand, compare, and decide, it becomes an asset that builds trust and drives conversions.

If you’d be embarrassed for users to read your content, you’re thinking about it the wrong way. There’s no such thing as content that’s “bad for users but good for search engines.” There never was.

Embrace user-generated content

No article on long-tail SEO is complete without discussing user-generated content. I covered forums and Q&A sites in a previous article (see: The reign of forums: How AI made conversation king), and they remain one of the most efficient ways to generate authentic, unique content.

The concept is simple. You have an audience that’s already passionate and knowledgeable. They likely have more hands-on experience with your brand and industry than many writers you hire. They may already be talking about your brand offline, in customer communities, or on forums like Reddit.

Your goal is to bring some of those conversations onto your site.

User-generated content naturally produces the long-tail language marketing teams rarely create on their own. Customers:

  • Describe problems differently.
  • Ask unexpected questions.
  • Compare products in ways you didn’t anticipate.
  • Surface edge cases, troubleshooting scenarios, and real-world use cases that rarely appear in polished marketing copy.

This is exactly the kind of content long-tail SEO thrives on.

It’s also the kind of content AI systems and search engines increasingly recognize as credible because it reflects real experience rather than brand messaging many dismiss as inauthentic.

Brands that do this well don’t just capture long-tail traffic. They build trust, reduce support costs, and dominate long-tail searches and prompts.

In the age of AI-generated content, real human experience is one of the strongest differentiators.

The new SEO playbook looks a lot like the old one

For years, SEO has been shaped by the limits of the search box. Short queries and head terms dominated strategy, and long-tail content was often treated as optional.

LLMs are changing that dynamic. AI is expanding search, not eliminating it.

AI systems encourage people to express what they actually want to know. Those detailed prompts still need answers, and those answers come from the web.

That means the SEO opportunity is shifting from competing over a small set of keywords to becoming the best source of answers to thousands of specific questions.

Brands that succeed will:

  • Deeply understand their audience.
  • Publish genuinely useful content.
  • Build trust through real engagement and experience.

That’s always been the recipe for SEO success. But our industry has a habit of inventing complex tactics to avoid doing the simple work well.

Most of us remember doorway pages, exact match domains, PageRank sculpting, LSI obsession, waves of auto-generated pages, and more. Each promised an edge. Few replaced the value of helping users.

We’re likely to see the same cycle repeat in the AI era.

The reality is simpler. AI systems aren’t the audience. They’re intermediaries helping humans find trustworthy answers.

If you focus on helping people understand, decide, and solve problems, you’re already optimizing for AI — whatever you call it.

Dig deeper: Is SEO a brand channel or a performance channel? Now it’s both

Google Search Console AI-powered configuration rolling out

Over two months ago, Google began testing its AI-powered configuration tool, which lets you ask AI questions about the Google Search Console performance reports and returns answers for you. Now, Google is rolling out this tool to everyone.

Google said on LinkedIn, “The Search Console’s new AI-powered configuration is now available to everyone!”

AI-powered configuration. AI-powered configuration “lets you describe the analysis you want to see in natural language. Your inputs are then transformed into the appropriate filters and settings, instantly configuring the report for you,” Google said.

Rolling out now. If you log in to your Search Console account and click on the Performance report, you may see a note at the top that says, “New! Customize your Performance report using AI.”

When you click on it, you’re taken into the AI tool.

More details. As we reported earlier, Google said “The AI-powered configuration feature is designed to streamline your analysis by handling three key elements for you.”

  • Selecting metrics: Chooses which of the four available metrics – Clicks, Impressions, Average CTR, and Average Position – to display based on your question.
  • Applying filters: Narrows down data by query, page, country, device, search appearance, or date range.
  • Configuring comparisons: Sets up complex comparisons (like custom date ranges) without manual setup.

Why we care. This is only supported in the Performance report for Search results. It isn’t available for Discover or News reports yet. Plus, it is AI, so the answers may not be perfect. But it can be fun to play with, and it may get you thinking about questions you hadn’t considered yet.

So give it a try.

Rand Fishkin proved AI recommendations are inconsistent – here’s why and how to fix it

Rand Fishkin just published the most important piece of primary research the AI visibility industry has seen so far.

His conclusion – that AI tools produce wildly inconsistent brand recommendation lists, making “ranking position” a meaningless metric – is correct, well-evidenced, and long overdue.

But Fishkin stopped one step short of the answer that matters.

He didn’t explore why some brands appear consistently while others don’t, or what would move a brand from inconsistent to consistent visibility. That solution is already formalized, patent pending, and proven in production across 73 million brand profiles.

When I shared this with Fishkin directly, he agreed. The AI models are pulling from a semi-fixed set of options, and the consistency comes from the data. He just didn’t have the bandwidth to dig deeper, which is fair enough, but the digging has been done – I’ve been doing it for a decade.

Here’s what Fishkin found, what it actually means, and what the data proves about what to do about it.

Fishkin’s data killed the myth of AI ranking position

Fishkin and Patrick O’Donnell ran 2,961 prompts across ChatGPT, Claude, and Google AI, asking for brand recommendations across 12 categories. The findings surprised most of the industry.

Fewer than 1 in 100 runs produced the same list of brands, and fewer than 1 in 1,000 produced the same list in the same order. These are probability engines that generate unique answers every time. Treating them as deterministic ranking systems is – as Fishkin puts it – “provably nonsensical,” and I’ve been saying this since 2022. I’m grateful Fishkin finally proved it with data.

But Fishkin also found something he didn’t fully unpack. Visibility percentage – how often a brand appears across many runs of the same prompt – is statistically meaningful. Some brands showed up almost every time, while others barely appeared at all.
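Visibility percentage is simple to compute from repeated runs of the same prompt. Here is a toy sketch (the run data below is hypothetical, not Fishkin’s dataset):

```python
from collections import Counter

def visibility_percentages(runs):
    """Given repeated runs of one prompt, each a list of recommended
    brands, return the share of runs in which each brand appears."""
    counts = Counter(brand for run in runs for brand in set(run))
    total = len(runs)
    return {brand: counts[brand] / total for brand in counts}

# Hypothetical example: three runs of the same prompt.
runs = [
    ["City of Hope", "Mayo Clinic", "MD Anderson"],
    ["City of Hope", "MD Anderson", "Cleveland Clinic"],
    ["City of Hope", "Mayo Clinic", "Cleveland Clinic"],
]
vis = visibility_percentages(runs)
# "City of Hope" appears in 3 of 3 runs, so its visibility is 1.0.
```

Note that the exact list and ordering differ in every run, yet the per-brand appearance rate is stable and comparable, which is exactly why it is the more meaningful metric.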

That variance is where the real story lies.

Fishkin acknowledged this but framed it as a better metric to track. The real question isn’t how to measure AI visibility, it’s why some brands achieve consistent visibility and others don’t, and what moves your brand from the inconsistent pile to the consistent pile.

That’s not a tracking problem. It’s a confidence problem.

AI systems are confidence engines, not recommendation engines

AI platforms – ChatGPT, Claude, Google AI, Perplexity, Gemini, all of them – generate every response by sampling from a probability distribution shaped by:

  • What the model knows.
  • How confidently it knows it.
  • What it retrieved at the moment of the query.

When the model is highly confident about an entity’s relevance, that entity appears consistently. When the model is uncertain, the entity sits at a low probability weight in the distribution – included in some samples, excluded in others – not because the selection is random but because the AI doesn’t have enough confidence to commit.

That’s the inconsistency Fishkin documented, and I recognized it immediately because I’ve been tracking exactly this pattern since 2015. 

  • City of Hope appearing in 97% of cancer care responses isn’t luck. It’s the result of deep, corroborated, multi-source presence in exactly the data these systems consume. 
  • The headphone brands at 55%-77% are in a middle zone – known, but not unambiguously dominant. 
  • The brands at 5%-10% have low confidence weight, and the AI includes them in some outputs and not others because it lacks the confidence to commit consistently. 

Confidence isn’t just about what a brand publishes or how it structures its content. It’s about where that brand stands relative to every other entity competing for the same query – a dimension I’ve recently formalized as Topical Position.

I’ve formalized this phenomenon as “cascading confidence” – the cumulative entity trust that builds or decays through every stage of the algorithmic pipeline, from the moment a bot discovers content to the moment an AI generates a recommendation. It’s the throughline concept in a framework I published this week.

Dig deeper: Search, answer, and assistive engine optimization: A 3-part approach

Every piece of content passes through 10 gates before influencing an AI recommendation

The pipeline is called DSCRI-ARGDW – discovered, selected, crawled, rendered, indexed, annotated, recruited, grounded, displayed, and won. That sounds complicated, but I can summarize it in a single question that repeats at every stage: How confident is the system in this content?

  • Is this URL worth crawling? 
  • Can it be rendered correctly? 
  • What entities and relationships does it contain? 
  • How sure is the system about those annotations? 
  • When the AI needs to answer a question, which annotated content gets pulled from the index? 

Confidence at each stage feeds the next. A URL from a well-structured, fast-rendering, semantically clean site arrives at the annotation stage with high accumulated confidence before a single word of content is analyzed. A URL from a slow, JavaScript-heavy site with inconsistent information arrives with low confidence, even if the actual content is excellent.

This is pipeline attenuation, and here’s where the math gets unforgiving. The relationship is multiplicative, not additive:

  • C_final = C_initial × ∏τᵢ

In plain English, the final confidence an AI system has in your brand equals the initial confidence from your entity home multiplied by the transfer coefficient at every stage of the pipeline. The entity home – the canonical web property that anchors your entity in every knowledge graph and every AI model – sets the starting confidence, and then each stage either preserves or erodes it. 

Maintain 90% confidence at each of 10 stages, and end-to-end confidence is 0.9¹⁰ = 35%. At 80% per stage, it’s 0.8¹⁰ = 11%. One weak stage – say 50% at rendering because of heavy JavaScript – drops the total from 35% to 19% even if every other stage is at 90%. One broken stage can undo the work of nine good ones.
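The attenuation arithmetic above can be sketched in a few lines. This is a toy illustration of the multiplicative formula, not the patented system:

```python
from math import prod

def end_to_end_confidence(initial, transfer_coeffs):
    """C_final = C_initial * product of per-stage transfer
    coefficients. One near-zero stage collapses the total."""
    return initial * prod(transfer_coeffs)

# Ten stages at 90% each: 0.9 ** 10, roughly 35%.
uniform = end_to_end_confidence(1.0, [0.9] * 10)

# One weak stage at 50% (say, rendering), nine at 90%: roughly 19%.
one_weak = end_to_end_confidence(1.0, [0.9] * 9 + [0.5])
```

The multiplication, not the individual scores, is the point: improving the single weakest stage moves the end-to-end number far more than polishing an already-strong one.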

This multiplicative principle isn’t new, and it doesn’t belong to anyone. In 2019, I published an article, How Google Universal Search Ranking Works: Darwinism in Search, based on a direct explanation from Google’s Gary Illyes. He described how Google calculates ranking “bids” by multiplying individual factor scores rather than adding them. A zero on any factor kills the entire bid, no matter how strong the other factors are.

Google applies this multiplicative model to ranking factors within a single system, and nobody owns multiplication. But what the cascading confidence framework does is apply this principle across the full 10-stage pipeline, across all three knowledge graphs.

The system provides measurable transfer coefficients at every transition and bottleneck detection that identifies exactly where confidence is leaking. The math is universal, but the application to a multi-stage, multi-graph algorithmic pipeline is the invention.

This complete system is the subject of a patent application I filed with the INPI titled “Système et procédé d’optimisation de la confiance en cascade à travers un pipeline de traitement algorithmique multi-étapes et multi-graphes.” It’s not a metaphor, it’s an engineered system with an intellectual lineage going back seven years to a principle a Google engineer confirmed to me in person.

Fishkin measured the output – the inconsistency of recommendation lists. But the output is a symptom, and the cause is confidence loss at specific stages of this pipeline, compounded across multiple knowledge representations.

You can’t fix inconsistency by measuring it more precisely. You can only fix it by building confidence at every stage.

The corroboration threshold is where AI shifts from hesitant to assertive

There’s a specific transition point where AI behavior changes. I call it the “corroboration threshold” – the minimum number of independent, high-confidence sources corroborating the same conclusion about your brand before the AI commits to including it consistently.

Below the threshold, the AI hedges. It says “claims to be” instead of “is,” it includes a brand in some outputs but not others, and the reason isn’t randomness but insufficient confidence.

The brand sits in the low-confidence zone, where inconsistency is the predictable outcome. Above the threshold, the AI asserts – stating relevance as fact, including the brand consistently, operating with the kind of certainty that produces City of Hope’s 97%.

My data across 73 million brand profiles places this threshold at approximately 2-3 independent, high-confidence sources corroborating the same claim as the entity home. That number is deceptively small because “high-confidence” is doing the heavy lifting – these are sources the algorithm already trusts deeply, including Wikipedia, industry databases, and authoritative media. 

Without those high-authority anchors, the threshold rises considerably because more sources are needed and each carries less individual weight. The threshold isn’t a one-time gate. Once crossed, the confidence compounds with every subsequent corroboration, which is why brands that cross it early pull further ahead over time, while brands that haven’t crossed it yet face an ever-widening gap.

Corroboration doesn’t require identical wording, only equivalent conviction. The entity home states, “X is the leading authority on Y,” two or three independent, authoritative third-party sources confirm it with their own framing, and the AI encodes it as fact.

This fact is visible in my data, and it explains exactly why Fishkin’s experiment produced the results it did. In narrow categories like LA Volvo dealerships or SaaS cloud computing providers – where few brands exist and corroboration is dense – AI responses showed higher pairwise correlation. 

In broad categories like science fiction novels – where thousands of options exist and corroboration is thin – responses were wildly diverse. The corroboration threshold aligns with Fishkin’s findings.

Dig deeper: The three AI research modes redefining search – and why brand wins

Authoritas proved that fabricated entities can’t fool AI confidence systems

Authoritas published a study in December 2025 – “Can you fake it till you make it in the age of AI?” – that tested this directly, and the results confirm that cascading confidence isn’t just theory. Where Fishkin’s research shows the output problem – inconsistent lists – Authoritas shows the input side.

Authoritas investigated a real-world case where a UK company created 11 entirely fictional “experts” – made-up names, AI-generated headshots, faked credentials. They seeded these personas into more than 600 press articles across UK media, and the question was straightforward: Would AI models treat these fake entities as real experts?

The answer was absolute: Across nine AI models and 55 topic-based questions – “Who are the UK’s leading experts in X?” – zero fake experts appeared in any recommendation. Six hundred press articles, and not a single AI recommendation. That might seem to contradict a threshold of 2-3 sources, but it confirms it. 

The threshold requires independent, high-confidence sources, and 600 press articles from a single seeding campaign are neither independent – they trace to the same origin – nor high-confidence – press mentions sit in the document graph only.

The AI models looked past the surface-level coverage and found no deep entity signals – no entity home, no knowledge graph presence, no conference history, no professional registration, no corroboration from the kind of authoritative sources that actually move the needle.

The fake personas had volume, they had mentions, but what they lacked was cascading confidence – the accumulated trust that builds through every stage of the pipeline. Volume without confidence means inconsistent appearance at best, while confidence without volume still produces recommendations.

AI evaluates confidence — it doesn’t count mentions. Confidence requires multi-source, multi-graph corroboration that fabricated entities fundamentally can’t build.

AI citability concentration increased 293% in under two months

Authoritas used the weighted citability score, or WCS, a metric that measures how much AI engines trust and cite entities, calculated across ChatGPT, Gemini, and Perplexity using cross-context questions.

I have no influence over their data collection or their results. Fishkin’s methodology and Authoritas’ aren’t identical. Fishkin pinged the same query repeatedly to measure variance, while Authoritas tracks varied queries on the same topic. That said, the directional finding is consistent.

Their dataset includes 143 recognized digital marketing experts, with full snapshots from the original study by Laurence O’Toole and Authoritas in December 2025 and their latest measurement on Feb. 2. The pattern across the entire dataset tells a story that goes far beyond individual scores.

  • The top 10 experts captured 30.9% of all citability in December. By February, they captured 59.5% – a 92% increase in concentration in under two months.
  • The HHI, or Herfindahl-Hirschman Index, the standard measure of market concentration, rose from 0.026 to 0.104 – a 293% increase in concentration. This happened while the total expert pool widened from 123 to 143 tracked entities.
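The HHI itself is straightforward to compute: it is the sum of squared market shares. A toy example (with made-up shares, not the article’s dataset) shows how concentration moves the number:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market
    shares, where shares are fractions summing to 1."""
    return sum(s * s for s in shares)

# Four equal players: 4 * 0.25**2 = 0.25.
even = hhi([0.25] * 4)

# One dominant player: 0.7**2 + 3 * 0.1**2 = 0.52.
concentrated = hhi([0.7, 0.1, 0.1, 0.1])
```

The same total share spread across more players yields a lower HHI, which is why the index can rise sharply even while the tracked pool of experts grows.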

More experts are being cited, the field is getting bigger, and the top is pulling away faster. Dominance is compounding while the long tail grows.

This is cascading confidence at population scale. The experts who actively manage their digital footprint – clean entity home, corroborated claims, consistent narrative across the algorithmic trinity – aren’t just maintaining their position, they’re accelerating away from everyone else.

Each cycle of AI training and retrieval reinforces their advantage – confident entities generate confident AI outputs, which build user trust, which generate positive engagement signals, which further reinforce the AI’s confidence. It’s a flywheel, and once it’s spinning, it becomes very, very hard for competitors to catch up.

At the individual level, the data confirms the mechanism. I lead the dataset at a WCS of 23.50, up from 21.48 in December, a gain of +2.02. That’s not because I’m more famous than everyone else on the list.

It’s because we’ve been systematically building my cascading confidence for years – clean entity home, corroborated claims across the algorithmic trinity, consistent narrative, structured data, deep knowledge graph presence.

I’m the primary test case because I’m in control of all my variables – I have a huge head start. In a future article, I’ll dig into the details of the scores and why the experts have the scores they do.

The pattern across my client base mirrors the population data. Brands that systematically clean their digital footprint, anchor entity confidence through the entity home, and build corroboration across the algorithmic trinity don’t just appear in AI recommendations.

They appear consistently, their advantage compounds over time, and they exit the low-confidence zone to enter the self-reinforcing recommendation set.

Dig deeper: From SEO to algorithmic education: The roadmap for long-term brand authority

AI retrieves from three knowledge representations simultaneously, not one

AI systems pull from what I call the Three Graphs model – the algorithmic trinity – and understanding this explains why some brands achieve near-universal visibility while others appear sporadically.

  • The entity graph, or knowledge graph, contains explicit entities with binary verified edges and low fuzziness – either a brand is in, or it’s not.
  • The document graph, or search engine index, contains annotated URLs with scored and ranked edges and medium fuzziness.
  • The concept graph, or LLM parametric knowledge, contains learned associations with high fuzziness, and this is where the inconsistency Fishkin documented comes from.

When retrieval systems combine results from multiple sources – and they do, using mechanisms analogous to reciprocal rank fusion – entities present across all three graphs receive a disproportionate boost.

The effect is multiplicative, not additive. A brand that has a strong presence in the knowledge graph and the document index and the concept space gets chosen far more reliably than a brand present in only one.
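A minimal sketch of how reciprocal rank fusion rewards multi-graph presence (the brand names and rankings below are hypothetical, and real retrieval stacks are far more elaborate):

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Combine several ranked lists: each entity earns 1/(k + rank)
    per list it appears in, so presence in more lists compounds."""
    scores = {}
    for ranking in rankings:
        for rank, entity in enumerate(ranking, start=1):
            scores[entity] = scores.get(entity, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical: "BrandA" appears in all three graphs' result lists,
# while "BrandB" ranks first in one graph but is absent elsewhere.
entity_graph   = ["BrandA", "BrandC"]
document_graph = ["BrandC", "BrandA"]
concept_graph  = ["BrandB", "BrandA"]
fused = reciprocal_rank_fusion([entity_graph, document_graph, concept_graph])
# BrandA's three moderate ranks outscore BrandB's single first place.
```

Three modest appearances beat one strong one, which is the mechanical reason entities present across all three graphs get a disproportionate boost.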

This explains a pattern Fishkin noticed but didn’t have the framework to interpret – why visibility percentages clustered differently across categories. The brands with near-universal visibility aren’t just “more famous,” they have dense, corroborated presence across all three knowledge representations. The brands in the inconsistent pool are typically present in only one or two. 

The Authoritas fake expert study confirms this from the negative side. The fake personas existed only in the document graph, press articles, with zero entity graph presence and negligible concept graph encoding. One graph out of three, and the AI treated them accordingly.

What I tell every brand after reading Fishkin’s data

Fishkin’s recommendations were cautious – visibility percentage is a reasonable metric, ranking position isn’t, and brands should demand transparent methodology from tracking vendors. All fair, but that’s analyst advice. What follows is practitioner advice, based on doing this work in production.

Stop optimizing outputs and start optimizing inputs

The entire AI tracking industry is fixated on measuring what AI says about you, which is like checking your blood pressure without treating the underlying condition. Measure if it helps, but the work is in building confidence at every stage of the pipeline, and that’s where I focus my clients’ attention from day one.

Start at the entity home

My experience clearly demonstrates that this single intervention produces the fastest measurable results. Your entity home is the canonical web property that should anchor your entity in every knowledge graph and every AI model. If it’s ambiguous, hedging, or contradictory with what third-party sources say about you, it is actively training AI to be uncertain. 

I’ve seen aligning the entity home with third-party corroboration produce measurable changes in bottom-of-funnel AI citation behavior within weeks, and it remains the highest ROI intervention I know.

Cross the corroboration threshold for the critical claims

I ask every client to identify the claims that matter most:

  • Who you are.
  • What you do.
  • Why you’re credible. 

Then, I work with them to ensure each claim is corroborated by at least 2-3 independent, high-authority sources. Not just mentioned, but confirmed with conviction. 

This is what flips AI from “sometimes includes” to “reliably includes,” and I’ve seen it happen often enough to know the threshold is real.

Dig deeper: SEO in the age of AI: Becoming the trusted answer

Build across all three graphs simultaneously

Knowledge graph presence (structured data, entity recognition), document graph presence (indexed, well-annotated content on authoritative sites), and concept graph presence (consistent narrative across the corpus AI trains on) all need attention. 

The Authoritas study showed exactly what happens when a brand exists in only one – the AI treats it accordingly.

Work the pipeline from Gate 1, not Gate 9

Most SEO and GEO advice operates at the display stage, optimizing what AI shows. But if your content is losing confidence at discovery, selection, rendering, or annotation, it will never reach display consistently enough to matter. 

I’ve watched brands spend months on display-stage optimization that produced nothing because the real bottleneck was three stages earlier, and I always start my diagnostic at the beginning of the pipeline, not the end.

Maintain it because the gap is widening

The WCS data across 143 tracked experts shows that AI citability concentration increased 293% in under two months. The experts who maintain their digital footprint are pulling away from everyone else at an accelerating rate. 

Starting now still means starting early, but waiting means competing against entities whose advantage compounds every cycle. This isn’t a one-time project. It’s an ongoing discipline, and the returns compound with every iteration.

Fishkin proved the problem exists. The solution has been in production for a decade.

Fishkin’s research is a gift to the industry. He killed the myth of AI ranking position with data, he validated that visibility percentage, while imperfect, correlates with something real, and he raised the right questions about methodology that the AI tracking vendors should have been answering all along.

But tracking AI visibility without understanding why visibility varies is like tracking a stock price without understanding the business. The price is a signal, and the business is the thing.

AI recommendations are inconsistent when AI systems lack confidence in a brand. They become consistent when that confidence is built deliberately, through:

  • The entity home.
  • Corroborated claims that cross the corroboration threshold.
  • Multi-graph presence.
  • Every stage of the pipeline that processes your content before AI ever generates a response.

This isn’t speculation, and the evidence comes from every direction.

The process behind this approach has been under development since 2015 and is formalized in a peer-review-track academic paper. Several related patent applications have been filed in France, covering entity data structuring, prompt assembly, multi-platform coherence measurement, algorithmic barrier construction, and cascading confidence optimization.

The dataset supporting the work spans 25 billion data points across 73 million brand profiles. In tracked populations, shifts in AI citability have been observed — including cases where the top 10 experts increased their share from 31% to 60% in under two months while the overall field expanded. Independent research from Authoritas reports findings that align with this mechanism.

Fishkin proved the problem exists. My focus over the past decade has been on implementing and refining practical responses to it.

This is the first article in a series. The second piece, “What the AI expert rankings actually tell us: 8 archetypes of AI visibility,” examines how the pipeline’s effects manifest across 57 tracked experts. The third, “The ten gates between your content and an AI recommendation,” opens the DSCRI-ARGDW pipeline itself.

How to create a persona GPT for SEO audience research

In a perfect world, you could call up a top customer to pick their brain about a piece of content. But in reality, it can be extremely difficult and time-consuming to conduct audience interviews every time you need to create a new topic or refresh an old piece. 

A few years ago, content marketing was simpler – keyword intent and quality content were enough to rank at the top of Google’s SERP and earn clicks. But in the new era of AI, expectations are different.

Audience research has become critical. However, some companies may not have the resources to perform it.

One way to better understand your target audience is to create a custom GPT in ChatGPT, configured with your persona research. These aren’t replacements for audience research or interviews, but they can help you quickly identify what might be missing or wrong in your content. 

Below, I’ll explain how GPTs work so you can use them for audience research.

Perform audience research

As the SEO landscape evolves, audience research is one of your strongest tools for understanding the “why” behind search intent.

Here are several easy-to-use methods and tools to get you started on research. 

  • SparkToro: Search by website, interest, or specific URL to segment different audience types. Research can be in-depth or give an overview of your audience. 
  • Review mining: Create automations through various tools and scrape reviews of your company or competitors to see what users are saying, and then analyze them. What does your target customer like? Why did they like it? What didn’t they like? Why?
  • Listen to calls/review leads: Listen to sales team interactions with customers to hear questions in real time and what led up to a call with a particular client.

Dig deeper: How to do audience research for SEO

Create a customer persona

After completing your research, create a persona – a representation of your target audience. Figma and FigJam are strong tools for building them.

Your persona should include: 

  • Name, bio, and trait slider.
  • Interests, influences, goals, pain points.
  • User stories.
  • The emotional journey during and after.
  • Content focus, trigger words, and calls to action (CTAs).
  • Full customer journey steps.
  • Reviews that support the data.

Create a custom GPT of your persona

Now that you have all your research and your persona, it’s time to make a GPT. 

First, log in to ChatGPT, then go to Explore GPTs in the sidebar. 

In the upper right corner, click on Create.

ChatGPT - Create

Once there, prompt ChatGPT with your audience research data and persona information. You can paste in screenshots of your data to make it easier. 
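For instance, the persona instructions you paste in might look something like this (every detail below is a hypothetical placeholder, not real research data):

```
You are "Hank," a persona representing our target customer.

Background: 42-year-old operations manager at a mid-size logistics firm.
Goals: reduce shipping errors; justify software spend to leadership.
Pain points: tools with long onboarding; vague pricing pages.
Tone: practical, skeptical of marketing language.

When I share content, critique it from Hank's perspective: what resonates,
what's missing, and which claims you'd want evidence for. Always cite which
part of this persona drives each piece of feedback.
```

The more specific the goals, pain points, and tone are to your actual research, the more useful the GPT’s critiques will be.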

ChatGPT - Hank persona

Once all your data is in and a GPT is created, you can start talking to it. Under the Configure tab, you can use conversation starters to ask it about changes, updates, and copy.

ChatGPT conversation starters

These GPTs, like all AI models, aren’t 100% accurate. They don’t replace a real audience survey or interview, but they can help you quickly identify issues with a piece of content and how it might not connect with your audience. 

Here’s an example of an optimized page. GPT “Hank” helped make sure the section above the fold did what was intended. 

GPT Hank 1
GPT Hank 2
GPT Hank 3

Hank identified what’s working, what isn’t, and where to improve.

But should you take his advice 100% of the time? Of course not.

Still, the GPT helps you quickly spot issues you may have missed. That’s where the real benefit of using a GPT comes in.

Dig deeper: 7 custom GPT ideas to automate SEO workflows

Ensure data from your GPT is accurate

Nothing analyzed or generated by AI is conclusive evidence. If you’re unsure your GPT is giving you accurate information, double-check by prompting it to provide evidence from the sources you gave it. 

GPT Hank - data accuracy

The GPT can correct itself if the information sounds off. When it does, again ask for evidence from the persona information you provided to double-check the new information. 

Update your persona-based GPT

You can always add more information to your GPT to make it more robust. 

To do this, go back to Explore GPTs in ChatGPT. 

Instead of Create, go to My GPTs in the top right-hand corner. 

Click on your persona. 

GPT Hank Haul

Click on Configure to update, add, or delete your current information.

GPT Hank Haul configuration

Remember, a persona is never one-and-done. The more you learn about your audience, the more information you should feed your GPT to keep it current.

Leverage persona GPTs for SEO content

Personas aren’t absolute, and AI can hallucinate. 

But both tools can still help you optimize content. 

Once you’re comfortable creating personas, you can build them for your general audience, specific segments, and individual campaigns.

SEO and marketing are always changing, and you can’t just set it and forget it. As you gain audience insights or if audience intent shifts, update information or delete anything no longer relevant in your GPT. 

When leveraged correctly, these tools can work with SEO to drive traffic and gain more conversions.

The Step-by-Step Guide to Designing Local Landing Pages That Convert

While the growth of artificial intelligence (AI) and global conveniences like Amazon have been great for society, there’s still an undercurrent of people returning to a local, more personal-feeling shopping experience.

But this “return to local” doesn’t change the fact that we still live in an internet age. Enter local search engine optimization (SEO) and landing pages.

Local SEO tends to work best for businesses with physical locations that require direct customer contact, but it can also work for virtual online businesses that don’t necessarily meet their customers before a business transaction takes place.

This is why local landing pages are so important. They can give customers the convenience of an online transaction while still providing the trust and personal feel of a local business—if your landing page is done right, of course.

Optimizing your landing page design with the proper elements can help you attract local customers to your business, increase lead generation, and boost conversion rates.

Key Takeaways

  • Local landing pages only work when they’re built for real locations and real intent. One page per city or service area, with localized keywords, metadata, and copy that matches how people actually search (“service + city” or “near me”).
  • Trust signals drive both rankings and conversions. Consistent NAP data, real reviews from nearby customers, local photos, and clear business details help you show up in map features and convince visitors to take action.
  • Content needs to feel local, not duplicated. Strong local landing pages include tailored copy, location-specific frequently asked questions (FAQs), social proof, and visuals that prove you serve that area, as opposed to generic pages with city names swapped in.
  • Mobile optimization is nonnegotiable for local SEO. Most local searches happen on mobile and convert fast. Pages must load quickly, display contact info above the fold, and make calling or getting directions effortless.
  • Schema markup and clear calls to action (CTAs) turn visibility into results. Structured data helps search engines and AI tools understand your business, while strong, localized CTAs guide users to call, book, or request a quote immediately.
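As a sketch of that structured-data takeaway, a JSON-LD LocalBusiness block in the page’s head might look like this (all business details below are hypothetical placeholders):

```json
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Plumbing Co.",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Example St.",
    "addressLocality": "Austin",
    "addressRegion": "TX",
    "postalCode": "78701"
  },
  "telephone": "+1-512-555-0100",
  "url": "https://www.example.com/austin",
  "openingHours": "Mo-Fr 08:00-18:00",
  "areaServed": "Austin, TX"
}
```

Matching the locality, phone number, and URL here to the visible page content (and your Google Business Profile) reinforces the NAP consistency the takeaways above call for.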

Why Are Local Landing Pages Important?

Local landing pages help you show up when people search for services near them, and they’re key to winning conversions in your area.

Think about how people search: “best dentist in Austin,” “roof repair near me,” or “24/7 locksmith in Chicago.”

A local landing page.

If you don’t have dedicated pages that target these local queries, you’re invisible in search engine results. In fact, recent stats show 80% of U.S. consumers surveyed search for local businesses online once a week, with about one-third (32%) searching for local businesses multiple times a day. Google’s local algorithm prioritizes relevance and proximity, and a well-optimized local page checks both boxes.

But optimizing your local SEO and landing pages is about more than appeasing Google’s algorithm. These pages can actually convert.

When someone lands on a page with your local address and glowing reviews from nearby customers, trust builds fast. In fact, according to Uberall, 85% of customers visit local businesses within a week of discovering them online, and 17% visit the very next day. That's why smart local businesses treat these like high-converting landing pages, not just generic content dumps.

With large language models (LLMs) and AI tools pulling content to answer local questions, the need for detailed, well-structured local pages becomes even more critical. These models lean on content that clearly signals relevance and authority, something a basic homepage or generic service page won’t do.

An AI Overview answering "What are some of the best locksmiths in Chicago?"

Bottom line: if local traffic matters to you, local landing pages need to be part of your SEO and conversion rate optimization (CRO) strategy.

A chart showing top ranking factors for the Local Pack.

Step 1: Identify where your customers are located

Local landing pages only work when you know exactly which towns, neighborhoods, or service areas you’re trying to win. Otherwise, you can rack up traffic and still feel stuck because the visits come from places you can’t serve and don’t convert.

Start by answering two questions: Which locations do you want customers to come from? And which locations are they actually coming from today? Once you have both, planning local pages gets a lot easier.

Before you even open your reports, define your real-world service area. If you’re a storefront, your address needs to match how you operate in the real world (and be consistent everywhere it appears). If you’re a service-area business (such as a plumber, cleaner, or mobile vet), set a clear service area in your Google Business Profile so you don’t waste time targeting locations you can’t support.

Then, stop relying on a single data source. Use a few location signals together:

  • Google Analytics 4 (GA4) to spot city/region trends for session and key events (keep in mind location and demographics reporting is aggregated and can be limited by consent).
Demographics overview for Google Analytics 4.

  • Google Search Console to see the “intent layer”—which local queries are driving clicks and impressions.
Google Search Console's intent layer.

Finally, turn those insights into simple personas with local references, clear benefits, and social proof, so your page reads like it was made for that person in that place.

Step 2: Use localized keywords and metadata to create relevance

Relevance still matters, but that doesn’t mean you can stuff a city name into every sentence and call it a day. Good local SEO matches what the searcher wants (intent) with what the page promises, starting right in the SERP.

Here’s the key difference: a local landing page usually targets transactional intent (“dentist in Austin,” “emergency plumber near me,” “book HVAC repair”), so your keyword + metadata strategy should read like a clear offer, not a watered-down blog headline.

A landing page for an Austin dentist.

Start with the basics that actually move the needle:

  • Title tag: Make a descriptive, concise, and unique title (Google can rewrite titles, but strong input helps). A simple formula works: Primary service + city + differentiator (and brand if it fits). 
  • Meta description: Google primarily builds snippets from on-page content, but it may use your meta description when it better matches the query. Write unique descriptions per page, include the “what” + “where,” and add a reason to click (pricing, availability, social proof). Avoid long strings of keywords. 
  • Meta keywords: Skip them. Google has said it ignores the keywords meta tag for web ranking.
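Putting the title and description advice together, a local service page's head might look like this sketch (the business name, city, and claims are hypothetical placeholders):

```html
<head>
  <!-- Title formula: primary service + city + differentiator (+ brand) -->
  <title>Emergency Plumbing in Phoenix | Same-Day Service | Example Plumbing Co.</title>
  <!-- Unique per page: the "what," the "where," and a reason to click -->
  <meta name="description" content="Licensed plumbers serving Phoenix, AZ. Upfront pricing, 24/7 availability, and 500+ five-star local reviews. Get a free quote today.">
  <!-- No keywords meta tag: Google ignores it for web ranking -->
</head>
```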

Now, a quick warning: if you're cranking out dozens of near-identical city pages that funnel users to similar destinations, that's exactly what Google's spam policies call doorway abuse. And long lists of cities jammed onto a page can fall into keyword-stuffing territory.

Step 3: Use consistent NAP data

NAP stands for name, address, and phone number, and it needs to be exactly the same everywhere your business appears online. That includes your local landing pages, your Google Business Profile, directories, and social platforms.

Why does this matter? Because Google (and users) rely on NAP consistency to trust your business is legit. Inconsistent info can hurt your rankings and knock you out of key local SERP features like the map pack.

An infographic on how to create NAP data.

Make sure your NAP is crawlable text, not embedded in an image. Add it in the footer or near your CTA, and match it letter-for-letter with your business listings. Even something small, like “Street” vs. “St.”, can throw off search engines.

If you serve multiple locations, each page should have its own unique NAP. No shortcuts here. Clean data builds trust, and trust drives clicks.
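As a sketch, a crawlable NAP block in the footer can be plain HTML text (all details below are placeholders; match yours letter-for-letter with your business listings):

```html
<footer>
  <!-- Plain text, not an image, so crawlers can read it -->
  <address>
    Example Plumbing Co.<br>
    123 Main Street, Phoenix, AZ 85001<br>
    <a href="tel:+16025550123">(602) 555-0123</a>
  </address>
</footer>
```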

Step 4: Create and publish valuable content

Implementing local landing page design best practices in your content does two things: it helps you rank for location-specific searches and gives visitors a reason to trust you.

Start with copy that speaks directly to your audience in that area. Mention the city or neighborhood naturally, highlight the services you offer there, and include local differentiators like special hours or nearby service coverage. Make it feel personal.

Next, layer in content that builds credibility. Local reviews and case studies show real proof that your business delivers. Include names, star ratings, and even short quotes to make the social proof pop. Photos help, too. Real images of your team or completed projects add authenticity.

You should also include a brief FAQ section that answers questions specific to that location. Not only does this help your readers, but it also increases your chances of showing up in featured snippets or AI-generated results.

Step 5: Add an effective CTA

Every local landing page needs a clear call to action. Without it, you’re leaving conversions on the table.

The best CTAs guide visitors to take the next logical step, whether that’s calling your business, booking an appointment, or requesting a quote. To be effective, your CTA must feel local and relevant. “Get a Free Quote” is okay. “Get a Free Plumbing Quote in Phoenix” is better. It reinforces the location and makes the offer feel tailored.

Make sure your CTA stands out visually. Use buttons, bold text, and color contrast to grab attention. And don’t just put it at the bottom. Add it near the top of the page and repeat it throughout, especially after sections like testimonials or service descriptions.

If phone calls are your goal, use a click-to-call button—especially for mobile users. For forms, keep them short. Name, email, and one key question is usually enough.
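A minimal sketch of both patterns, using a hypothetical phone number and form endpoint:

```html
<!-- Click-to-call button: one tap dials the business on mobile -->
<a class="cta-button" href="tel:+16025550123">Call Now: (602) 555-0123</a>

<!-- Short lead form: name, email, and one key question -->
<form action="/request-quote" method="post">
  <input type="text" name="name" placeholder="Your name" required>
  <input type="email" name="email" placeholder="Email address" required>
  <input type="text" name="issue" placeholder="What do you need help with?">
  <button type="submit">Get a Free Plumbing Quote in Phoenix</button>
</form>
```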

Remember, your local landing page should do more than just inform; it should drive action. The CTA is where that happens.

Step 6: Optimize your local landing pages for mobile users

Mobile search isn’t just dominant, it drives action. In fact, 88% of mobile local business searches result in a call or visit within 24 hours, showing how urgent mobile intent has become.

Start with your page performance. Speed is critical. Slow mobile pages frustrate users and push them to competitors. Tools like Google PageSpeed Insights help identify bottlenecks, enabling you to improve load times by compressing images and deferring unused scripts. Fast pages mean better user experience (UX), which, in turn, leads to higher engagement.

Google PageSpeed Insights.

Responsive design is nonnegotiable. Your layout must adapt to screens of all sizes with easily readable text and minimal pop-up interference. Prioritize large, clickable CTAs, and ensure your contact info is visible without scrolling.

Mobile users are often on the go. Clearly display your NAP details front and center, ideally above the fold. Clean navigation and quick access to key info make it easier for people to act immediately.

Step 7: Add schema markup

Schema markup helps search engines understand the context of your content, and that’s a big deal for local SEO.

Schema markup in action.

When you add local business schema to your landing pages, you're giving Google structured data that it can easily read. This increases the chances of your business showing up in rich results like map features or AI-generated summaries. It's not just about visibility. It's about making your information easier to find, trust, and act on.

At a minimum, include schema for your business name, address, phone number (NAP), hours of operation, and service area. This aligns perfectly with the on-page content you’ve already built. The more complete your schema, the more signals you’re sending to Google that your business is real, local, and helpful.

You can generate local business schema using tools like Google’s Structured Data Markup Helper or Schema.org. Then either embed it as JSON-LD in the <head> of your page or use a plugin if you’re on a platform like WordPress.
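Here's what a minimal local business JSON-LD block might look like for a hypothetical Phoenix plumber (every value below is a placeholder; swap in your real NAP, hours, and service area):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Plumber",
  "name": "Example Plumbing Co.",
  "url": "https://www.example.com/plumbing-phoenix",
  "telephone": "+1-602-555-0123",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Main Street",
    "addressLocality": "Phoenix",
    "addressRegion": "AZ",
    "postalCode": "85001",
    "addressCountry": "US"
  },
  "openingHours": "Mo-Fr 08:00-18:00",
  "areaServed": "Phoenix, AZ"
}
</script>
```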

Don’t forget to test it. Use Google’s Rich Results Test to make sure your markup is working as intended.

It takes a few extra steps, but schema markup is one of the easiest technical wins you can add to a local landing page. It won’t guarantee rankings, but it gives your content a better shot at being seen and trusted.

FAQs

How do I create content for local landing pages for SEO?

Start with localized keywords (e.g., “[service] in [city]”) and ensure they appear naturally in your headlines and throughout the copy. Then, write content that actually helps local visitors: include location-specific details, highlight nearby landmarks, and speak directly to the needs of that community. Bonus points if you add customer reviews or links to local pages.

How to make local SEO landing pages

Structure each page around one location or service area with unique URLs (like /plumbing-los-angeles). Don’t forget your Google Business Profile and local schema markup. They help search engines match your page with nearby searchers.

How to optimize landing page for local SEO

Use consistent NAP (name, address, phone) info across the page and the web. Add a local map, embed reviews from customers in that area, and link internally to relevant services. Make sure your page loads fast and works well on mobile because that’s where most local searches happen.

Conclusion

To maximize your search results and lead generation, make sure that you design separate landing pages for each city that you’re targeting.

Above all, create unique, location-specific copy for each page. Building local landing pages takes an investment of time, money, or both, but today's wide range of landing page builders and templates makes the job far easier than it used to be.


Why Entity-Based SEO is a New Way of Thinking About Optimization

Search engine optimization (SEO) was once defined by the number of keywords and synonyms scattered across your content. If you used the right word enough times, you’d rank.

Those days are long gone.

Since the launch of its Knowledge Graph in 2012, Google has been moving away from literal text matching toward deep semantic understanding. 

Search engines no longer evaluate pages as collections of words. They evaluate meaning.

This goes beyond Google and search engine results pages (SERPs). Modern discovery operates on entities—distinct people, places, brands, and concepts connected through context and relationships. Search systems now interpret queries by mapping how these entities relate rather than counting keyword usage.

That’s where entity SEO comes in. Entity-based structures set the groundwork for the more intuitive search results we see today in AI platforms and large language models (LLMs). Grouping queries around one central “thing” gives these platforms a clear reference point they can connect to related concepts.

Ultimately, entity SEO helps these platforms research and provide information in a more human way. It gives us the answers we want quickly, and it powers Google’s more complex search features that take our query results beyond a simple list of blue links.

In this article, we’ll explain what entities are, how to use them, and how they’ll continue to shape the future of SEO.

Key Takeaways

  • Entity SEO focuses on clearly defined people, brands, products, and concepts and the relationships between them, rather than isolated keywords.
  • When Google understands the primary entity behind a page, it can rank that page across a broader range of relevant queries without exact-match targeting.
  • Site structure communicates meaning. Topic clusters, internal links, and consistent terminology help search engines map how content fits together.
  • AI-driven search relies on entity context to disambiguate terms and interpret intent, not keyword strings alone.
  • Maintaining consistent signals across pages and trusted third-party profiles strengthens entity recognition and long-term visibility.

What Is Entity-Based SEO?

Entity-based SEO uses context (not just keywords) to help users find exactly what they’re looking for.

You can see this shift in action every time you type a query. For example, when you type a common name like “Malcolm” into a search bar, Google doesn’t just look for those seven letters. It tries to determine which entity you’re looking for:

A Google search dropdown for the name “Malcolm,” showing a Knowledge Panel for author Malcolm Gladwell alongside various entity-based search suggestions like “Malcolm in the Middle” and “Malcolm X.”

Google offers suggestions to searchers to provide immediate context. It speeds up the search for those looking for popular figures like Malcolm Gladwell or Malcolm X, and it prompts others to add more specific details if their intended “thing” isn’t listed.

Once you select a specific entity, the search engine stops scanning for keywords and starts delivering a comprehensive Knowledge Panel.

A Google search results page for "Malcolm Gladwell" showcasing a comprehensive Knowledge Panel. The layout displays the subject as a defined entity with categorized data points, including a photo gallery, biographical details (age, parents), linked YouTube videos, and a list of his published books, like "The Tipping Point" and "Revenge of the Tipping Point."

This layout displays the subject as a defined entity, grouping biographical details, books, and videos into a single source. While this shift makes search more intuitive for users, it makes things slightly more complicated for content creators. 

Here are three ways entity-based SEO has changed the landscape:

  1. AI visibility: Entity SEO revolves around an entity record. These records compile dozens of data points about a particular entity, making that information easy for AI platforms to access. Brands that structure their data properly make themselves much more visible in LLM search.
  2. Better mobile results: Entity understanding helped Google improve mobile search results and supported the shift to mobile-first indexing.
  3. Translation improvements: Entities can be found regardless of homonyms, synonyms, and foreign language use, thanks to context clues. For instance, a search for “red” will include results for “rouge” or “rojo” if the searcher’s settings allow it.

Let’s dig a little deeper into entity records to understand how they connect to LLMs and search engines like Google.

To start, let’s look at a hypothetical entity record about Taylor Swift:

A hypothetical entity record.

This makes it clear how entity SEO works in practice. Search engines don’t rely on a single page or keyword to understand a brand. They aggregate structured signals across the web to build a unified view of the entity.

The reason behind this is that search systems and LLMs don’t read content the way humans do. They extract discrete facts, attributes, and relationships, then assemble them into a coherent understanding.

The example above illustrates how an entity can be broken into clear, machine-readable components.

Keywords vs. Entities: What's the Difference?

Entities might sound similar to keywords, but they’re actually quite different. Here’s how they differ and why those differences are so important.

Keywords

Keywords are words or phrases people use to express intent in search. They take many forms, including questions, sentences, or single words.

For example, users looking for makeup tutorials might search for “makeup tutorial,” “smokey eye,” “how to do a smokey eye,” or something similar.

Google search results page for “how to do a smokey eye,” showing a video carousel with multiple YouTube makeup tutorials and a step-by-step blog result below.

Today, keywords tend to work best as demand signals rather than quotas to be filled. They show how users frame their intent, whether they want to learn, compare, buy, or solve a problem, and give you language to match your content to that intent.

That’s why long-tail queries and modifiers (“best,” “near me,” “for beginners,” “price,” “vs.”) are still gold. 

These modifiers provide the intent that tells a search engine how to connect a user to your brand. Your goal is to rank for these high-intent terms to drive organic traffic and establish your site as the definitive source of truth for your niche. 

Long-tail and informational (what, how, why) keywords also help you line up your content with where search is heading. 

Data shows that about 90 percent of influential SERP features, like AI summaries and “People also ask,” come from queries like these, making them useful inputs for LLM-powered workflows like content production plans based on real query language.

If your page answers the query fully and clearly, you’re using keywords the modern way.

Entities

Google defines an entity as “a thing or concept that is singular, unique, well-defined, and distinguishable.” They can be people, places, products, companies, or abstract concepts. 

What makes entities powerful is not just what they are, but how they connect. They are defined by their relationships to other entities, which helps search engines and LLMs understand how each concept fits into the “big picture.”

Once Google is confident about what your page is about, it can rank you for searches you never explicitly targeted. That happens because entities carry built-in relationships, including attributes, categories, synonyms, and commonly associated concepts.

This is where entity SEO really starts to differ from keyword-based optimization. Essentially, entity SEO prioritizes mentions and human discussion over keywords. 

For example, a search for the word “apple” could result in pages about the fruit or pages about the company. As interesting as both topics are, reading about iPhones probably won’t be too helpful if you’re trying to figure out whether apple seeds are indeed poisonous. 

You need to add some keywords or modifiers to give crawlers and LLMs context. 

A side-by-side comparison illustrating entity disambiguation. On the left is a realistic photo of a red apple fruit; on the right is the minimalist black logo of Apple Inc., the technology company.

This is also why pages sometimes rank for "weird" keywords. If your content clearly describes the entity (what it is, and its related terms), Google can connect you to unexpected queries that share the same underlying intent. This concept is sometimes loosely referred to as latent semantic indexing (LSI).

That’s not magic. It’s entity understanding plus context signals.

For entities to be useful, search engines map them into knowledge graphs, which are structured systems that connect related information across the web and make retrieval more reliable.

As of May 2024, Google's Knowledge Graph contained about 54 billion entities and 1.6 trillion facts about them. Not only do these data points help answer complex informational or long-tail queries, but they also power Google's Knowledge Panels. Here's an example:

A Google Search Results Page for "Eddie Aikau" featuring a Knowledge Panel highlighted in a red box.

To help search engines or LLMs make sense of which entity fits your query, you want the pages of your website to behave like solid references. Spell out defining details (names, dates, specs, locations), connect related subtopics, and use consistent terminology. 

Add supporting cues like internal links to your own deeper pages and clear headings that map to common questions. Structured data is also key here, making it easier for engines to see specific information that you deem to be important on a given page, like product information, locations, or other items.

How Do Entities and Keywords Work Together?

An effective SEO strategy recognizes that keywords are the signals, but entities are the destination. On-page, you can treat your website as a mini knowledge graph that uses keywords to link to different pages on your site. 

You can further validate your brand by connecting your content to established knowledge graphs like Wikipedia or LinkedIn, which are high in experience, expertise, authoritativeness, and trust (E-E-A-T). While this won’t directly affect your page rank, it can improve your page’s authority in search results.

Practically, this means your keywords should map to specific entity details (features, use cases, comparisons, FAQs, structured data). The clearer those entity connections are, the easier it is for search engines to match your page to related searches. That’s especially the case for those long-tail ones where intent is clear, but the wording is inconsistent.

How To Start Building Up Your Entity-Based SEO

The biggest upside of entity clarity is that it helps your whole site act like a connected knowledge hub. When search systems recognize your brand, products, services, locations, and experts as distinct entities, they can more accurately map your content to complex user intent.

Content Depth and Topical Relevance

Entity-based SEO nudges you away from thin, keyword-targeted pages toward deep, comprehensive content. Instead of fragmented articles, build authoritative topic clusters that cover definitions, use cases, and FAQs. 

This depth reinforces the “identity” of your subject matter, signaling to search engines that your site is the definitive source for that specific entity across all related queries.

Strengthening Relationships via Internal Linking

Internal linking is the connective tissue of your entity strategy. 

Consistently linking supporting content to a central entity page explicitly defines relationships for search engines. That can be as simple as connecting which services belong to which categories or which authors are connected to which brands. 

This internal relationship graph is essential for earning broader semantic visibility and is a core component of reputation management, as it ensures search engines never lose the thread of who you are.

Consistency as a Signal of Authority

Your entity becomes much more powerful when your brand and authors remain consistent across the web. Using the same naming conventions, professional bios, and expertise signals makes it easier for search systems to verify your “identity.” 

Consistency cuts through ambiguity to make sure your authority is attributed to the correct entity. And that goes a long way in preventing your brand from being confused with unrelated concepts.

Trust Signals and Entity Clarity

Trust signals like reviews and citations match up perfectly with entity clarity. Clear, consistent data, like name, address, and phone number (NAP) details, helps search engines attach your content to the right real-world entity for local SEO.

Modern algorithms prioritize clear signals like these when deciding which brands to feature in high-stakes search results and AI-generated overviews.

The Role of AI in Entity SEO

AI-driven search doesn’t “read” the web like a human. It builds a model of the world. 

That model is made of entities (people, brands, products, places, concepts) and the connections between them.

That’s why entities are foundational. A keyword is just a string of text. An entity has a unique identity. 

When Google sees "Jaguar," it has to decide among the animal, the car brand, and the NFL team. AI makes that call by looking at entity context: nearby terms, linked pages, structured data, and known relationships in systems like the Knowledge Graph.

The screenshots below show how that entity resolution plays out in real search results. The same keyword produces entirely different SERPs based on which entity Google identifies as the best match.

Google search results for “jaguar animal,” showing an animal Knowledge Panel with images, facts, and Wikipedia information about the jaguar species.

Google search results for “jaguar car,” displaying a brand Knowledge Panel for Jaguar as a luxury vehicle manufacturer with models, company details, and images.

This is also how AI gets better at interpreting intent. 

Someone searching “best running shoes for flat feet” isn’t asking for a dictionary definition of shoes. They’re signaling a problem, a use case, a set of constraints. 

Entity relationships help AI connect that query to brands, product categories, medical concepts, reviews, and comparisons before picking results that match the implied goal.

You can see the shift in your data. In Google Search Console, queries often widen into themes, with multiple variations driving impressions to the same page. 

In the SERPs, features like Knowledge Panels, AI Overviews, and "People also ask" reflect entity understanding, not exact-match phrasing. Content performance aligns better with topic clusters and user journeys than with single keywords.

Entity SEO future-proofs your content by aligning with how AI systems learn. 

If your pages clearly define the entities you cover, connect them with strong internal linking, and stay consistent in terminology and positioning, they’re easier to interpret, categorize, and reuse as search evolves.

How to Shift Your Strategy to Entity-Based SEO

Understanding entity SEO is only useful if it changes how you work. Here are the concrete changes that move a keyword-first strategy toward an entity-based one.

Identify Core Entities Tied to the Business

A core entity is a small, intentional set of “things” that you want Google to associate with your brand. It goes beyond what you want to rank for. 

Start by pressure testing your site against three questions: 

  • Who is this? (the brand/author entity)
  • What do they do? (the offering entity)
  • Who do they serve? (the audience/market entity)

If the answer to any of these feels fuzzy, your entities are too broad or buried within your content.

Keep core entities limited and intentional. Pick the ones that define your positioning, then give each one a clear home on the site. 

An example structure might be: a homepage for the brand, service pages for offerings, an about page for brand/author credibility, and supporting content that links back to those pillars.

Build Topic Clusters Around Those Entities

One page can define the entity, but topic clusters give it depth and context. The goal is coverage, not volume.

For each core entity, build one primary page that acts as the hub (your “entity’s home”). Then publish supporting pages that answer related questions, common use cases, comparisons, and next-step topics that your audience actually searches for. This is known as the hub and spoke model.

Your supporting content should do three things: 

  • Answer real follow-up questions.
  • Reinforce the same entity from different angles.
  • Link back to the hub page with clear, consistent anchor text. 

That internal structure is what helps search engines connect the dots.

Reinforce Entities Through Internal Links and Content Structure

Internal links are how you “wire” entities together across your site. Structure matters as much as the words on the page.

Link pages with related topics, not whatever feels convenient in the moment. If two articles support the same entity, connect them. If a page is a subtopic, point it to the hub and to other closely related subtopics.

NerdWallet’s credit cards hub shows how internal linking reinforces entities, with a single category page connecting related subtopics like cash back, travel rewards, and balance transfers under one clear concept.

NerdWallet credit cards hub page showing a central “Credit Cards” category with multiple subcategory links, including cash back, travel rewards, balance transfer, and business credit cards.

Keep your anchor text consistent and descriptive. And use the entity name (or a tight variation) instead of vague links like “click here” or “learn more.”

Make sure your cluster works both ways. In other words, supporting pages should link up to the main entity page, and related supporting pages should link to each other where it genuinely helps the reader move to the next logical question.
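In markup terms, the two directions of a cluster are just descriptive internal links; the URLs and labels below are illustrative:

```html
<!-- On a supporting page: link up to the hub with the entity name as anchor text -->
<p>See our full guide to <a href="/credit-cards/">credit cards</a> for the basics.</p>

<!-- On the hub page: link down to closely related subtopics -->
<ul>
  <li><a href="/credit-cards/cash-back/">Cash back credit cards</a></li>
  <li><a href="/credit-cards/travel-rewards/">Travel rewards credit cards</a></li>
  <li><a href="/credit-cards/balance-transfer/">Balance transfer credit cards</a></li>
</ul>
```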

Maintain Entity Consistency Across the Site and Beyond

One way to leverage entity-based SEO is to list your business on directories across the internet. These directory sites are a popular data source for search engine crawlers and LLMs. Your Google Business Profile, for example, is used as a data source for the Google Knowledge Graph.

Other listing services, such as Yelp, can also help create strong, authoritative backlinks for your brand and define a well-known entity. 

Listing sites may vary by location, so do your research when deciding where to list. Additionally, be sure to choose sites with high domain authority to improve your search engine standing. 

Ultimately, consistency is key. Listing your business in multiple locations across the internet eventually turns entity signals into trust signals, but it’s important to list your business carefully.

Avoid using multiple names for the same entity and conflicting descriptions from page to page. Also, make sure your listings stay focused on topics related to entities in your industry. Don’t lose focus or drift to unrelated topics.  

Prioritize Brand Building

Brand building is another essential tactic in entity-based SEO. Offline brand signals should be mirrored online wherever search engines and AI systems look for training data.

This includes your about page, author bios, case studies, podcast/webinar pages, and third-party profiles (Crunchbase, G2, LinkedIn, industry directories, etc.). For LLM optimization, you want consistent, crawlable signals in the places models and search engines pull from. 

Use the same brand description, key services, and leadership names everywhere. That consistency makes it easier for systems to connect the dots.
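One way to keep those signals machine-readable is Organization schema with `sameAs` links pointing to your third-party profiles. A minimal sketch (the brand name, description, and profile URLs are all hypothetical):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Brand",
  "url": "https://www.example.com",
  "description": "The same one-line brand description you use everywhere else.",
  "sameAs": [
    "https://www.linkedin.com/company/example-brand",
    "https://www.crunchbase.com/organization/example-brand",
    "https://www.g2.com/products/example-brand"
  ]
}
</script>
```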

Common Entity SEO Mistakes

Entity SEO fails when you treat it like a checklist instead of a system. These are some of the mistakes that do the most damage:

  • Treating schema as a shortcut. Markup helps Google label what’s on the page. It doesn’t create authority. If the content is thin or unclear, schema just highlights that faster.
  • Publishing thin entity pages. A quick definition page won’t earn trust. Weak entity pages struggle to rank, and they don’t attract links or support clusters.
  • Chasing unrelated entities. Dropping in trendy topics or random brands dilutes relevance. It can also confuse search engines about what you actually do.
  • Ignoring internal linking and structure. Entities need connections. If supporting pages don’t link to the hub (and to each other where it makes sense), Google can’t map the relationship.
  • Sending inconsistent signals. Mixed terminology, shifting positioning, and conflicting service descriptions make your entity harder to identify.

FAQs

What are entities in SEO?

Entities are the “things” search engines recognize—people, places, brands, concepts, and more. Unlike keywords, entities have context and relationships. Google uses them to understand meaning and intent. For example, “Amazon” as a company is an entity, and it’s different from the Amazon rainforest. 

How do you find SEO entities?

Start with your main topic and use tools like Google’s Knowledge Graph, Wikipedia, and Ubersuggest to identify related entities. Look for people, brands, terms, and categories commonly associated with your topic. Also, check competitor content. What entities are they connecting to? Use this to build a structured, semantically rich content plan. 

What is entity SEO?

Entity SEO is the practice of optimizing content around recognizable concepts, not just keywords, so search engines better understand and rank your site.

Conclusion

Entity SEO isn’t some advanced trick. It’s how modern search actually works. 

Search engines no longer rely on traditional keyword research alone. They map concepts, understand relationships, and evaluate authority across connected topics.

If you want to stay visible long term, your content needs more than keywords. 

Clarity and a strong topical focus are the way to go. That’s how you build trust with Google and future-proof your branding strategy as AI continues to reshape the search landscape.

Leaning into entity-focused optimization builds a durable presence that lines up with how users search and how Google works.


Google pushes AI Max tool with in-app ads


Google is now promoting its own AI features inside Google Ads — a rare move that inserts marketing directly into advertisers’ workflow.

What’s happening. Users are seeing promotional messages for AI Max for Search campaigns when they open campaign settings panels.

  • The notifications appear during routine account audits and updates.
  • The placement essentially serves as an internal advertisement for Google’s own tooling.

Why we care. The in-platform placement signals Google is pushing to accelerate AI adoption among advertisers, moving from optional rollouts to active promotion. While Google often introduces AI-driven features, promoting them directly within existing workflows marks a more aggressive adoption strategy.

What to watch. Whether this promotional approach expands to other Google Ads features — and how advertisers respond to marketing within their management interface.

First seen. Julie Bacchini, president and founder of Neptune Moon, spotted the notification and shared it on LinkedIn. She wrote: “Nothing like Google Ads essentially running an ad for AI Max in the settings area of a campaign.”


Bing Webmaster Tools officially adds AI Performance report

Microsoft today launched AI Performance in Bing Webmaster Tools in beta. AI Performance lets you see where, and how often, your content is cited in AI-generated answers across Microsoft Copilot, Bing’s AI summaries, and select partner integrations, the company said.

  • AI Performance in Bing Webmaster Tools shows which URLs are cited, which queries trigger those citations, and how citation activity changes over time.
  • Search Engine Land first reported on Jan. 27 that Microsoft was testing the AI Performance report.

What’s new. AI Performance is a new, dedicated dashboard inside Bing Webmaster Tools. It tracks citation visibility across supported AI surfaces. Instead of measuring clicks or rankings, it shows whether your content is used to ground AI-generated answers.

  • Microsoft framed the launch as an early step toward Generative Engine Optimization (GEO) tooling, designed to help publishers understand how their content shows up in AI-driven discovery.

What it looks like. Microsoft shared a screenshot of the AI Performance dashboard in Bing Webmaster Tools.

What the dashboard shows. The AI Performance dashboard introduces metrics focused specifically on AI citations:

  • Total citations: How many times a site is cited as a source in AI-generated answers during a selected period.
  • Average cited pages: The daily average number of unique URLs from a site referenced across AI experiences.
  • Grounding queries: Sample query phrases AI systems used to retrieve and cite publisher content.
  • Page-level citation activity: Citation counts by URL, highlighting which pages are referenced most often.
  • Visibility trends over time: A timeline view showing how citation activity rises or falls across AI experiences.

These metrics only reflect citation frequency. They don’t indicate ranking, prominence, or how a page contributed to a specific AI answer.
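Microsoft hasn’t published a programmatic API for this report, so any tracking beyond the dashboard would start from manually exported data. As a rough sketch, assuming a hypothetical export of (date, cited URL, citation count) rows — the field layout is an assumption, not Microsoft’s actual format — the page-level and trend metrics above boil down to two aggregations:

```python
from collections import defaultdict

# Hypothetical rows mimicking an AI Performance export:
# (date, cited_url, citations). The layout is an assumption,
# not Microsoft's actual export format.
rows = [
    ("2026-02-01", "https://example.com/guide", 14),
    ("2026-02-01", "https://example.com/faq", 3),
    ("2026-02-02", "https://example.com/guide", 9),
]

def citation_summary(rows):
    """Total citations per URL, plus daily totals for a trend line."""
    per_url, per_day = defaultdict(int), defaultdict(int)
    for day, url, count in rows:
        per_url[url] += count
        per_day[day] += count
    return dict(per_url), dict(per_day)

per_url, per_day = citation_summary(rows)
print(per_url)   # page-level citation activity
print(per_day)   # visibility trend over time
```

Even with this kind of rollup, the caveat above still applies: these are citation counts only, with no signal about clicks or downstream value.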

Why we care. It’s good to know where and how your content gets cited, but Bing Webmaster Tools still won’t reveal how those citations translate into clicks, traffic, or any real business outcome. Without click data, publishers still can’t tell if AI visibility delivers value.

How to use it. Microsoft said publishers can use the data to:

  • Confirm which pages are already cited in AI answers.
  • Identify topics that consistently appear across AI-generated responses.
  • Improve clarity, structure, and completeness on indexed pages that are cited less often.

The guidance mirrors familiar best practices: clear headings, evidence-backed claims, current information, and consistent entity representation across formats.

What’s next. Microsoft said it plans to “improve inclusion, attribution, and visibility across both search results and AI experiences,” and continue to “evolve these capabilities.”

Microsoft’s announcement. Introducing AI Performance in Bing Webmaster Tools Public Preview 
