The Step-by-Step Guide to Designing Local Landing Pages That Convert

While the growth of artificial intelligence (AI) and global conveniences like Amazon have been great for society, there’s still an undercurrent of people returning to a more local, personal shopping experience.

But this “return to local” doesn’t change the fact that we still live in an internet age. Enter local search engine optimization (SEO) and landing pages.

Local SEO tends to work best for businesses with physical locations that require direct customer contact, but it can also work for virtual online businesses that don’t necessarily meet their customers before a business transaction takes place.

This is why local landing pages are so important. They can give customers the convenience of an online transaction while still providing the trust and personal feel of a local business—if your landing page is done right, of course.

Optimizing your landing page design with the proper elements can help you attract local customers to your business, increase lead generation, and boost conversion rates.

Key Takeaways

  • Local landing pages only work when they’re built for real locations and real intent. One page per city or service area, with localized keywords, metadata, and copy that matches how people actually search (“service + city” or “near me”).
  • Trust signals drive both rankings and conversions. Consistent NAP data, real reviews from nearby customers, local photos, and clear business details help you show up in map features and convince visitors to take action.
  • Content needs to feel local, not duplicated. Strong local landing pages include tailored copy, location-specific frequently asked questions (FAQs), social proof, and visuals that prove you serve that area, as opposed to generic pages with city names swapped in.
  • Mobile optimization is nonnegotiable for local SEO. Most local searches happen on mobile and convert fast. Pages must load quickly, display contact info above the fold, and make calling or getting directions effortless.
  • Schema markup and clear calls to action (CTAs) turn visibility into results. Structured data helps search engines and AI tools understand your business, while strong, localized CTAs guide users to call, book, or request a quote immediately.

Why Are Local Landing Pages Important?

Local landing pages help you show up when people search for services near them, and they’re key to winning conversions in your area.

Think about how people search: “best dentist in Austin,” “roof repair near me,” or “24/7 locksmith in Chicago.”

A local landing page.

If you don’t have dedicated pages that target these local queries, you’re invisible in search engine results. In fact, recent stats show 80% of U.S. consumers surveyed search for local businesses online once a week, with about one-third (32%) searching for local businesses multiple times a day. Google’s local algorithm prioritizes relevance and proximity, and a well-optimized local page checks both boxes.

But optimizing your local SEO and landing pages is about more than appeasing Google’s algorithm. These pages can actually convert.

When someone lands on a page with your local address and glowing reviews from nearby customers, trust builds fast. In fact, according to Uberall.com, 85% of customers visit local businesses within a week of discovering them online. 17% of those visit the very next day. That’s why smart local businesses treat these like high-converting landing pages, not just generic content dumps.

With large language models (LLMs) and AI tools pulling content to answer local questions, the need for detailed, well-structured local pages becomes even more critical. These models lean on content that clearly signals relevance and authority, something a basic homepage or generic service page won’t do.

An AI Overview for the query “what are some of the best locksmiths in Chicago.”

Bottom line: if local traffic matters to you, local landing pages need to be part of your SEO and conversion rate optimization (CRO) strategy.

A chart showing top ranking factors for the Local Pack.

Step 1: Identify where your customers are located.

Local landing pages only work when you know exactly which towns, neighborhoods, or service areas you’re trying to win. Otherwise, you can rack up traffic and still feel stuck because the visits come from places you can’t serve and don’t convert.

Start by answering two questions: Which locations do you want customers to come from? And which locations are they actually coming from today? Once you have both, planning local pages gets a lot easier.

Before you even open your reports, define your real-world service area. If you’re a storefront, your address needs to match how you operate in the real world (and be consistent everywhere it appears). If you’re a service-area business (such as a plumber, cleaner, or mobile vet), set a clear service area in your Google Business Profile so you don’t waste time targeting locations you can’t support.

Then, stop relying on a single data source. Use a few location signals together:

  • Google Analytics 4 (GA4) to spot city/region trends for session and key events (keep in mind location and demographics reporting is aggregated and can be limited by consent).
Demographics overview for Google Analytics 4.


  • Google Search Console to see the “intent layer”—which local queries are driving clicks and impressions.
Google Search Console's intent layer.


Finally, turn those insights into simple personas with local references, clear benefits, and social proof, so your page reads like it was made for that person in that place.
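
If you want to pull that query data programmatically, the Search Console API exposes the same report shown above. Here’s a minimal sketch, assuming you’ve created a service account and added it as a user on your property; the credentials file, site URL, and city list are placeholders:

```python
# Sketch: pull local-intent queries from the Search Console API.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES  # placeholder path
)
service = build("searchconsole", "v1", credentials=creds)

response = service.searchanalytics().query(
    siteUrl="https://www.example.com/",  # placeholder property
    body={
        "startDate": "2024-01-01",
        "endDate": "2024-03-31",
        "dimensions": ["query", "page"],
        "rowLimit": 5000,
    },
).execute()

# Flag queries that mention the areas you actually serve (placeholders below).
service_areas = ["austin", "round rock", "cedar park", "near me"]
for row in response.get("rows", []):
    query, page = row["keys"]
    if any(area in query for area in service_areas):
        print(query, page, row["clicks"], row["impressions"])
```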

Step 2: Use localized keywords and metadata to create relevance.

Relevance still matters, but that doesn’t mean you can stuff a city name into every sentence and call it a day. Good local SEO matches what the searcher wants (intent) with what the page promises, starting right in the SERP.

Here’s the key difference: a local landing page usually targets transactional intent (“dentist in Austin,” “emergency plumber near me,” “book HVAC repair”), so your keyword + metadata strategy should read like a clear offer, not a watered-down blog headline.

A landing page for an Austin dentist.

Start with the basics that actually move the needle:

  • Title tag: Write a descriptive, concise, and unique title (Google can rewrite titles, but strong input helps). A simple formula works: primary service + city + differentiator (and brand if it fits), as shown in the sketch after this list. 
  • Meta description: Google primarily builds snippets from on-page content, but it may use your meta description when it better matches the query. Write unique descriptions per page, include the “what” + “where,” and add a reason to click (pricing, availability, social proof). Avoid long strings of keywords. 
  • Meta keywords: Skip them. Google has said it ignores the keywords meta tag for web ranking.
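
Here’s a minimal sketch of that title formula applied at scale, so every city page gets a unique title and description. The services, cities, and brand name are placeholders:

```python
# Sketch: generate unique titles and meta descriptions per city page.
services = {
    "plumbing repair": "Licensed, same-day service",
    "drain cleaning": "Upfront pricing",
}
cities = ["Austin", "Round Rock", "Cedar Park"]
brand = "Acme Plumbing"  # placeholder brand

for service, differentiator in services.items():
    for city in cities:
        title = f"{service.title()} in {city} | {differentiator} | {brand}"
        description = (
            f"Need {service} in {city}? {brand} offers {differentiator.lower()} "
            f"with local technicians. Call today for a free quote."
        )
        print(title)
        print(description[:160])  # keep descriptions near typical snippet length
```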

Now, a quick warning: if you’re cranking out dozens of near-identical city pages that funnel to similar destinations, that’s exactly what Google calls doorway abuse. And lists of cities jammed onto a page can fall into keyword stuffing territory. 

Step 3: Use consistent NAP data

NAP stands for name, address, and phone number, and it needs to be exactly the same everywhere your business appears online. That includes your local landing pages, your Google Business Profile, directories, and social platforms.

Why does this matter? Because Google (and users) rely on NAP consistency to trust your business is legit. Inconsistent info can hurt your rankings and knock you out of key local SERP features like the map pack.

An infographic on how to create NAP data.


Make sure your NAP is crawlable text, not embedded in an image. Add it in the footer or near your CTA, and match it letter-for-letter with your business listings. Even something small, like “Street” vs. “St.”, can throw off search engines.
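
A simple consistency check can catch these mismatches before they spread. This sketch compares placeholder listings field by field; in practice you’d pull the values from your landing pages, Google Business Profile, and directory citations:

```python
# Sketch: flag NAP records that don't match letter-for-letter.
listings = {
    "landing_page": ("Acme Plumbing", "123 Main Street, Austin, TX 78701", "(512) 555-0123"),
    "google_business_profile": ("Acme Plumbing", "123 Main St, Austin, TX 78701", "(512) 555-0123"),
    "yelp": ("Acme Plumbing", "123 Main Street, Austin, TX 78701", "512-555-0123"),
}

reference = listings["landing_page"]
for source, nap in listings.items():
    mismatches = [
        field
        for field, (a, b) in zip(("name", "address", "phone"), zip(reference, nap))
        if a != b
    ]
    status = "OK" if not mismatches else f"MISMATCH in {', '.join(mismatches)}"
    print(f"{source}: {status}")
```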

If you serve multiple locations, each page should have its own unique NAP. No shortcuts here. Clean data builds trust, and trust drives clicks.

Step 4: Create and publish valuable content

Implementing local landing page design best practices in your content does two things: it helps you rank for location-specific searches and gives visitors a reason to trust you.

Start with copy that speaks directly to your audience in that area. Mention the city or neighborhood naturally, highlight the services you offer there, and include local differentiators like special hours or nearby service coverage. Make it feel personal.

Next, layer in content that builds credibility. Local reviews and case studies show real proof that your business delivers. Include names, star ratings, and even short quotes to make the social proof pop. Photos help, too. Real images of your team or completed projects add authenticity.

You should also include a brief FAQ section that answers questions specific to that location. Not only does this help your readers, but it also increases your chances of showing up in featured snippets or AI-generated results.


Step 5: Add an effective CTA

Every local landing page needs a clear call to action. Without it, you’re leaving conversions on the table.

The best CTAs guide visitors to take the next logical step, whether that’s calling your business, booking an appointment, or requesting a quote. To be effective, your CTA must feel local and relevant. “Get a Free Quote” is okay. “Get a Free Plumbing Quote in Phoenix” is better. It reinforces the location and makes the offer feel tailored.

Make sure your CTA stands out visually. Use buttons, bold text, and color contrast to grab attention. And don’t just put it at the bottom. Add it near the top of the page and repeat it throughout, especially after sections like testimonials or service descriptions.

If phone calls are your goal, use a click-to-call button—especially for mobile users. For forms, keep them short. Name, email, and one key question are usually enough.

Remember, your local landing page should do more than just inform; it should drive action. The CTA is where that happens.

Step 6: Optimize your local landing pages for mobile users

Mobile search isn’t just dominant; it drives action. In fact, 88% of mobile local business searches result in a call or visit within 24 hours, showing how urgent mobile intent has become.

Start with your page performance. Speed is critical. Slow mobile pages frustrate users and push them to competitors. Tools like Google PageSpeed Insights help identify bottlenecks, enabling you to improve load times by compressing images and deferring unused scripts. Fast pages mean better user experience (UX), which, in turn, leads to higher engagement.

Google PageSpeed Insights.
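
The same checks can be automated: PageSpeed Insights has a public API, so you can pull mobile scores for every local page in one pass. A minimal sketch, where the page URL and API key are placeholders:

```python
# Sketch: fetch mobile performance data from the PageSpeed Insights API.
import requests

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
params = {
    "url": "https://www.example.com/plumbing-austin",  # placeholder page
    "strategy": "mobile",
    "key": "YOUR_API_KEY",  # optional for light use, required at volume
}

data = requests.get(PSI_ENDPOINT, params=params, timeout=60).json()
lighthouse = data["lighthouseResult"]
print("Performance score:", lighthouse["categories"]["performance"]["score"])
for audit in ("largest-contentful-paint", "total-blocking-time", "cumulative-layout-shift"):
    print(audit, lighthouse["audits"][audit]["displayValue"])
```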

Responsive design is nonnegotiable. Your layout must adapt to screens of all sizes with easily readable text and minimal pop-up interference. Prioritize large, clickable CTAs, and ensure your contact info is visible without scrolling.

Mobile users are often on the go. Clearly display your NAP details front and center, ideally above the fold. Clean navigation and quick access to key info make it easier for people to act immediately.

Step 7: Add schema markup

Schema markup helps search engines understand the context of your content, and that’s a big deal for local SEO.

Schema markup in action.


When you add local business schema to your landing pages, you’re giving Google structured data that it can easily read. This increases the chances of your business showing up in rich results like map features or AI-generated summaries. It’s not just about visibility. It’s about making your information easier to find, trust, and act on.

At a minimum, include schema for your business name, address, phone number (NAP), hours of operation, and service area. This aligns perfectly with the on-page content you’ve already built. The more complete your schema, the more signals you’re sending to Google that your business is real, local, and helpful.

You can generate local business schema using tools like Google’s Structured Data Markup Helper or Schema.org. Then either embed it as JSON-LD in the <head> of your page or use a plugin if you’re on a platform like WordPress.
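
As a reference point, here’s a minimal sketch of LocalBusiness markup assembled in Python and serialized as JSON-LD. Every business detail shown is a placeholder; swap in the exact NAP, hours, and service area used on the page:

```python
# Sketch: build LocalBusiness structured data and emit it as a JSON-LD snippet.
import json

local_business = {
    "@context": "https://schema.org",
    "@type": "Plumber",  # use the most specific LocalBusiness subtype that fits
    "name": "Acme Plumbing",
    "telephone": "+1-512-555-0123",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main Street",
        "addressLocality": "Austin",
        "addressRegion": "TX",
        "postalCode": "78701",
        "addressCountry": "US",
    },
    "openingHours": "Mo-Fr 08:00-18:00",
    "areaServed": ["Austin", "Round Rock", "Cedar Park"],
    "url": "https://www.example.com/plumbing-austin",
}

snippet = f'<script type="application/ld+json">{json.dumps(local_business, indent=2)}</script>'
print(snippet)  # paste into the <head> of the matching landing page
```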

Don’t forget to test it. Use Google’s Rich Results Test to make sure your markup is working as intended.

It takes a few extra steps, but schema markup is one of the easiest technical wins you can add to a local landing page. It won’t guarantee rankings, but it gives your content a better shot at being seen and trusted.

FAQs

How do I create content for local landing pages for SEO?

Start with localized keywords (e.g., “[service] in [city]”) and ensure they appear naturally in your headlines and throughout the copy. Then, write content that actually helps local visitors: include location-specific details, highlight nearby landmarks, and speak directly to the needs of that community. Bonus points if you add customer reviews or links to local pages.

How do I make local SEO landing pages?

Structure each page around one location or service area with unique URLs (like /plumbing-los-angeles). Don’t forget your Google Business Profile and local schema markup. They help search engines match your page with nearby searchers.

How do I optimize a landing page for local SEO?

Use consistent NAP (name, address, phone) info across the page and the web. Add a local map, embed reviews from customers in that area, and link internally to relevant services. Make sure your page loads fast and works well on mobile because that’s where most local searches happen.

Conclusion

To maximize your search results and lead generation, make sure that you design separate landing pages for each city that you’re targeting.

Above all, create unique, location-specific copy for your landing pages. Building a local landing page requires an investment of time, money, or both.

However, it’s become a lot easier these days because of the plethora of landing page creators and landing page templates.


Why Entity-Based SEO is a New Way of Thinking About Optimization

Search engine optimization (SEO) was once defined by the number of keywords and synonyms scattered across your content. If you used the right word enough times, you’d rank.

Those days are long gone.

Since the launch of its Knowledge Graph in 2012, Google has been moving away from literal text matching toward deep semantic understanding. 

Search engines no longer evaluate pages as collections of words. They evaluate meaning.

This goes beyond Google and search engine results pages (SERPs). Modern discovery operates on entities—distinct people, places, brands, and concepts connected through context and relationships. Search systems now interpret queries by mapping how these entities relate rather than counting keyword usage.

That’s where entity SEO comes in. Entity-based structures set the groundwork for the more intuitive search results we see today in AI platforms and large language models (LLMs). Grouping queries around one central “thing” gives these platforms a clear reference point they can connect to related concepts.

Ultimately, entity SEO helps these platforms research and provide information in a more human way. It gives us the answers we want quickly, and it powers Google’s more complex search features that take our query results beyond a simple list of blue links.

In this article, we’ll explain what entities are, how to use them, and how they’ll continue to shape the future of SEO.

Key Takeaways

  • Entity SEO focuses on clearly defined people, brands, products, and concepts and the relationships between them, rather than isolated keywords.
  • When Google understands the primary entity behind a page, it can rank that page across a broader range of relevant queries without exact-match targeting.
  • Site structure communicates meaning. Topic clusters, internal links, and consistent terminology help search engines map how content fits together.
  • AI-driven search relies on entity context to disambiguate terms and interpret intent, not keyword strings alone.
  • Maintaining consistent signals across pages and trusted third-party profiles strengthens entity recognition and long-term visibility.

What Is Entity-Based SEO?

Entity-based SEO uses context (not just keywords) to help users find exactly what they’re looking for.

You can see this shift in action every time you type a query. For example, when you type a common name like “Malcolm” into a search bar, Google doesn’t just look for those seven letters. It tries to determine which entity you’re looking for:

A Google search dropdown for the name “Malcolm,” showing a Knowledge Panel for author Malcolm Gladwell alongside various entity-based search suggestions like “Malcolm in the Middle” and “Malcolm X.”

Google offers suggestions to searchers to provide immediate context. It speeds up the search for those looking for popular figures like Malcolm Gladwell or Malcolm X, and it prompts others to add more specific details if their intended “thing” isn’t listed.

Once you select a specific entity, the search engine stops scanning for keywords and starts delivering a comprehensive Knowledge Panel.

A Google search results page for "Malcolm Gladwell" showcasing a comprehensive Knowledge Panel. The layout displays the subject as a defined entity with categorized data points, including a photo gallery, biographical details (age, parents), linked YouTube videos, and a list of his published books, like "The Tipping Point" and "Revenge of the Tipping Point."

This layout displays the subject as a defined entity, grouping biographical details, books, and videos into a single source. While this shift makes search more intuitive for users, it makes things slightly more complicated for content creators. 

Here are three ways entity-based SEO has changed the landscape:

  1. AI visibility: Entity SEO revolves around an entity record. These records parse dozens of data points about a particular search query, making all information easy for AI platforms to access. Brands that structure their data properly make themselves much more visible in LLM search. 
  • Better mobile capabilities: Entities have helped search engines improve mobile results and supported mobile-first indexing.
  3. Translation improvements: Entities can be found regardless of homonyms, synonyms, and foreign language use, thanks to context clues. For instance, a search for “red” will include results for “rouge” or “rojo” if the searcher’s settings allow it.

Let’s dig a little deeper into entity records to understand how they connect to LLMs and search engines like Google.

To start, let’s look at a hypothetical entity record about Taylor Swift:

A hypothetical entity record.


This makes it clear how entity SEO works in practice. Search engines don’t rely on a single page or keyword to understand a brand. They aggregate structured signals across the web to build a unified view of the entity.

The reason behind this is that search systems and LLMs don’t read content the way humans do. They extract discrete facts, attributes, and relationships, then assemble them into a coherent understanding.

The example above illustrates how an entity can be broken into clear, machine-readable components.
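
To make that concrete, here’s an illustrative sketch of an entity record expressed as machine-readable data. The structure and field names are simplified examples of our own, not an actual Knowledge Graph payload:

```python
# Sketch: an entity record as discrete attributes and relationships.
entity_record = {
    "name": "Taylor Swift",
    "type": "Person",
    "attributes": {
        "occupation": ["singer", "songwriter"],
        "birth_date": "1989-12-13",
    },
    "relationships": [
        {"relation": "created", "target": "1989 (album)"},
        {"relation": "performed_at", "target": "The Eras Tour"},
        {"relation": "born_in", "target": "West Reading, Pennsylvania"},
    ],
    "same_as": ["https://en.wikipedia.org/wiki/Taylor_Swift"],
}

# A system resolving the query "1989 singer" can follow the "created"
# relationship back to the entity instead of matching keywords.
for rel in entity_record["relationships"]:
    print(f'{entity_record["name"]} --{rel["relation"]}--> {rel["target"]}')
```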

Keywords vs. Entities: What’s the Difference?

Entities might sound similar to keywords, but they’re actually quite different. Here’s how they differ and why those differences are so important.

Keywords

Keywords are words or phrases people use to express intent in search. They take many forms, including questions, sentences, or single words.

For example, users looking for makeup tutorials might search for “makeup tutorial,” “smokey eye,” “how to do a smokey eye,” or something similar.

Google search results page for “how to do a smokey eye,” showing a video carousel with multiple YouTube makeup tutorials and a step-by-step blog result below.

Today, keywords tend to work best as demand signals rather than quotas to be filled. They show how users frame their intent, whether they want to learn, compare, buy, or solve a problem, and give you language to match your content to that intent.

That’s why long-tail queries and modifiers (“best,” “near me,” “for beginners,” “price,” “vs.”) are still gold. 

These modifiers provide the intent that tells a search engine how to connect a user to your brand. Your goal is to rank for these high-intent terms to drive organic traffic and establish your site as the definitive source of truth for your niche. 

Long-tail and informational (what, how, why) keywords also help you line up your content with where search is heading. 

Data shows that about 90 percent of influential SERP features, like AI summaries and “People also ask,” come from queries like these, making them useful inputs for LLM-powered workflows like content production plans based on real query language.

If your page answers the query fully and clearly, you’re using keywords the modern way.

Entities

Google defines an entity as “a thing or concept that is singular, unique, well-defined, and distinguishable.” They can be people, places, products, companies, or abstract concepts. 

What makes entities powerful is not just what they are, but how they connect. They are defined by their relationships to other entities, which helps search engines and LLMs understand how each concept fits into the “big picture.”

Once Google is confident about what your page is about, it can rank you for searches you never explicitly targeted. That happens because entities carry built-in relationships, including attributes, categories, synonyms, and commonly associated concepts.

This is where entity SEO really starts to differ from keyword-based optimization. Essentially, entity SEO prioritizes mentions and human discussion over keywords. 

For example, a search for the word “apple” could result in pages about the fruit or pages about the company. As interesting as both topics are, reading about iPhones probably won’t be too helpful if you’re trying to figure out whether apple seeds are indeed poisonous. 

You need to add some keywords or modifiers to give crawlers and LLMs context. 

A side-by-side comparison illustrating entity disambiguation. On the left is a realistic photo of a red apple fruit; on the right is the minimalist black logo of Apple Inc., the technology company.

This is also why pages sometimes rank for “weird” keywords. If your content clearly describes the entity—what it is or related terms—Google can connect you to unexpected queries that share the same underlying intent. This concept is known as latent semantic indexing (LSI).

That’s not magic. It’s entity understanding plus context signals.

For entities to be useful, search engines map them into knowledge graphs, which are structured systems that connect related information across the web and make retrieval more reliable.

As of May 2024, Google’s Knowledge Graph contains about 54 billion entities and roughly 1.6 trillion facts about them. Not only do these data points help answer complex informational or long-tail queries, but they also power Google’s Knowledge Panel. Here’s an example:

A Google Search Results Page for "Eddie Aikau" featuring a Knowledge Panel highlighted in a red box.


To help search engines or LLMs make sense of which entity fits your query, you want the pages of your website to behave like solid references. Spell out defining details (names, dates, specs, locations), connect related subtopics, and use consistent terminology. 

Add supporting cues like internal links to your own deeper pages and clear headings that map to common questions. Structured data is also key here, making it easier for engines to see specific information that you deem to be important on a given page, like product information, locations, or other items.

How Do Entities and Keywords Work Together?

An effective SEO strategy recognizes that keywords are the signals, but entities are the destination. On-page, you can treat your website as a mini knowledge graph that uses keywords to link to different pages on your site. 

You can further validate your brand by connecting your content to established knowledge graphs like Wikipedia or LinkedIn, which are high in experience, expertise, authoritativeness, and trust (E-E-A-T). While this won’t directly affect your page rank, it can improve your page’s authority in search results.
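
One common way to make those connections explicit is Organization markup with sameAs links pointing at your established profiles. A minimal sketch, with a placeholder brand and URLs:

```python
# Sketch: Organization markup with sameAs links to established profiles.
import json

organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "https://www.example.com",
    "sameAs": [
        "https://en.wikipedia.org/wiki/Example_Brand",
        "https://www.linkedin.com/company/example-brand",
        "https://www.crunchbase.com/organization/example-brand",
    ],
}

print(f'<script type="application/ld+json">{json.dumps(organization, indent=2)}</script>')
```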

Practically, this means your keywords should map to specific entity details (features, use cases, comparisons, FAQs, structured data). The clearer those entity connections are, the easier it is for search engines to match your page to related searches. That’s especially the case for those long-tail ones where intent is clear, but the wording is inconsistent.

How To Start Building Up Your Entity-Based SEO

The biggest upside of entity clarity is that it helps your whole site act like a connected knowledge hub. When search systems recognize your brand, products, services, locations, and experts as distinct entities, they can more accurately map your content to complex user intent.

Content Depth and Topical Relevance

Entity-based SEO nudges you away from thin, keyword-targeted pages toward deep, comprehensive content. Instead of fragmented articles, build authoritative topic clusters that cover definitions, use cases, and FAQs. 

This depth reinforces the “identity” of your subject matter, signaling to search engines that your site is the definitive source for that specific entity across all related queries.

Strengthening Relationships via Internal Linking

Internal linking is the connective tissue of your entity strategy. 

Consistently linking supporting content to a central entity page explicitly defines relationships for search engines. That can be as simple as connecting which services belong to which categories or which authors are connected to which brands. 

This internal relationship graph is essential for earning broader semantic visibility and is a core component of reputation management, as it ensures search engines never lose the thread of who you are.

Consistency as a Signal of Authority

Your entity becomes much more powerful when your brand and authors remain consistent across the web. Using the same naming conventions, professional bios, and expertise signals makes it easier for search systems to verify your “identity.” 

Consistency cuts through ambiguity to make sure your authority is attributed to the correct entity. And that goes a long way in preventing your brand from being confused with unrelated concepts.

Trust Signals and Entity Clarity

Trust signals like reviews and citations match up perfectly with entity clarity. Clear, consistent data—like name, address, and phone number (NAP) details—helps search engines attach your content to the right real-world entity for local SEO.

Modern algorithms prioritize clear signals like these when deciding which brands to feature in high-stakes search results and AI-generated overviews.

The Role of AI in Entity SEO

AI-driven search doesn’t “read” the web like a human. It builds a model of the world. 

That model is made of entities (people, brands, products, places, concepts) and the connections between them.

That’s why entities are foundational. A keyword is just a string of text. An entity has a unique identity. 

When Google sees “Jaguar,” it has to decide among the animal, the car brand, and the NFL team. AI makes that call by looking at entity context—nearby terms, linked pages, structured data, and known relationships in systems like the Knowledge Graph.

The screenshots below show how that entity resolution plays out in real search results. The same keyword produces entirely different SERPs based on which entity Google identifies as the best match.

Google search results for “jaguar animal,” showing an animal Knowledge Panel with images, facts, and Wikipedia information about the jaguar species.

Google search results for “jaguar car,” displaying a brand Knowledge Panel for Jaguar as a luxury vehicle manufacturer with models, company details, and images.

This is also how AI gets better at interpreting intent. 

Someone searching “best running shoes for flat feet” isn’t asking for a dictionary definition of shoes. They’re signaling a problem, a use case, a set of constraints. 

Entity relationships help AI connect that query to brands, product categories, medical concepts, reviews, and comparisons before picking results that match the implied goal.

You can see the shift in your data. In Google Search Console, queries often widen into themes, with multiple variations driving impressions to the same page. 

 In the SERPs, features like Knowledge Panels, AI Overviews, and “People also ask” reflect entity understanding, not exact-match phrasing. Content performance aligns better with topic clusters and user journeys than with single keywords.

Entity SEO future-proofs your content by aligning with how AI systems learn. 

If your pages clearly define the entities you cover, connect them with strong internal linking, and stay consistent in terminology and positioning, they’re easier to interpret, categorize, and reuse as search evolves.

How to Shift Your Strategy to Entity-Based SEO

Understanding entity SEO is only useful if it changes how you work. Here are the concrete changes that move a keyword-first strategy toward an entity-based one.

Identify Core Entities Tied to the Business

Your core entities are a small, intentional set of “things” you want Google to associate with your brand. They go beyond what you want to rank for. 

Start by pressure testing your site against three questions: 

  • Who is this? (the brand/author entity)
  • What do they do? (the offering entity)
  • Who do they serve? (the audience/market entity)

If the answer to any of these feels fuzzy, your entities are too broad or buried within your content.

Keep core entities limited and intentional. Pick the ones that define your positioning, then give each one a clear home on the site. 

An example structure might be: a homepage for the brand, service pages for offerings, an about page for brand/author credibility, and supporting content that links back to those pillars.

Build Topic Clusters Around Those Entities

One page can define the entity, but topic clusters give it depth and context. The goal is coverage, not volume.

For each core entity, build one primary page that acts as the hub (your “entity’s home”). Then publish supporting pages that answer related questions, common use cases, comparisons, and next-step topics that your audience actually searches for. This is known as the hub and spoke model.

Your supporting content should do three things: 

  • Answer real follow-up questions.
  • Reinforce the same entity from different angles.
  • Link back to the hub page with clear, consistent anchor text. 

That internal structure is what helps search engines connect the dots.

Reinforce Entities Through Internal Links and Content Structure

Internal links are how you “wire” entities together across your site. Structure matters as much as the words on the page.

Link pages with related topics, not whatever feels convenient in the moment. If two articles support the same entity, connect them. If a page is a subtopic, point it to the hub and to other closely related subtopics.

NerdWallet’s credit cards hub shows how internal linking reinforces entities, with a single category page connecting related subtopics like cash back, travel rewards, and balance transfers under one clear concept.

NerdWallet credit cards hub page showing a central “Credit Cards” category with multiple subcategory links, including cash back, travel rewards, balance transfer, and business credit cards.

Keep your anchor text consistent and descriptive. And use the entity name (or a tight variation) instead of vague links like “click here” or “learn more.”

Make sure your cluster works both ways. In other words, supporting pages should link up to the main entity page, and related supporting pages should link to each other where it genuinely helps the reader move to the next logical question.
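
If you want to audit this systematically, you can treat the cluster as a small link graph and flag missing connections. A minimal sketch with made-up page slugs:

```python
# Sketch: represent a hub-and-spoke cluster and flag missing links.
cluster = {
    "hub": "/credit-cards",
    "spokes": [
        "/credit-cards/cash-back",
        "/credit-cards/travel-rewards",
        "/credit-cards/balance-transfer",
    ],
}

# links[page] = set of internal pages that page links to
links = {
    "/credit-cards": {"/credit-cards/cash-back", "/credit-cards/travel-rewards", "/credit-cards/balance-transfer"},
    "/credit-cards/cash-back": {"/credit-cards", "/credit-cards/travel-rewards"},
    "/credit-cards/travel-rewards": {"/credit-cards"},
    "/credit-cards/balance-transfer": {"/credit-cards/cash-back"},  # missing link to the hub
}

for spoke in cluster["spokes"]:
    if cluster["hub"] not in links.get(spoke, set()):
        print(f"{spoke} does not link back to the hub {cluster['hub']}")
    if spoke not in links.get(cluster["hub"], set()):
        print(f"Hub {cluster['hub']} does not link down to {spoke}")
```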

Maintain Entity Consistency Across the Site and Beyond

One way to leverage entity-based SEO is to list your business on directories across the internet. These directory sites are a popular data source for search engine crawlers and LLMs. Your Google Business Profile, for example, is used as a data source for the Google Knowledge Graph.

Other listing services, such as Yelp, can also help create strong, authoritative backlinks for your brand and define a well-known entity. 

Listing sites may vary by location, so do your research when deciding where to list. Additionally, be sure to choose sites with high domain authority to improve your search engine standing. 

Ultimately, consistency is key. Listing your business in multiple locations across the internet eventually turns entity signals into trust signals, but it’s important to list your business carefully.

Avoid using multiple names for the same entity and conflicting descriptions from page to page. Also, make sure your listings stay focused on topics related to entities in your industry. Don’t lose focus or drift to unrelated topics.  

Prioritize Brand Building

Brand building is another essential tactic in entity-based SEO. Offline brand signals should be mirrored online wherever search engines and AI systems look for training data.

This includes your about page, author bios, case studies, podcast/webinar pages, and third-party profiles (Crunchbase, G2, LinkedIn, industry directories, etc.). For LLM optimization, you want consistent, crawlable signals in the places models and search engines pull from. 

Use the same brand description, key services, and leadership names everywhere. That consistency makes it easier for systems to connect the dots.

Common Entity SEO Mistakes

Entity SEO fails when you treat it like a checklist instead of a system. These are some of the mistakes that do the most damage:

  • Treating schema as a shortcut. Markup helps Google label what’s on the page. It doesn’t create authority. If the content is thin or unclear, schema just highlights that faster.
  • Publishing thin entity pages. A quick definition page won’t earn trust. Weak entity pages struggle to rank, and they don’t attract links or support clusters.
  • Chasing unrelated entities. Dropping in trendy topics or random brands dilutes relevance. It can also confuse search engines about what you actually do.
  • Ignoring internal linking and structure. Entities need connections. If supporting pages don’t link to the hub (and to each other where it makes sense), Google can’t map the relationship.
  • Sending inconsistent signals. Mixed terminology, shifting positioning, and conflicting service descriptions make your entity harder to identify.

FAQs

What are entities in SEO?

Entities are the “things” search engines recognize—people, places, brands, concepts, and more. Unlike keywords, entities have context and relationships. Google uses them to understand meaning and intent. For example, “Amazon” as a company is an entity, and it’s different from the Amazon rainforest. 

How do you find SEO entities?

Start with your main topic and use tools like Google’s Knowledge Graph, Wikipedia, and Ubersuggest to identify related entities. Look for people, brands, terms, and categories commonly associated with your topic. Also, check competitor content. What entities are they connecting to? Use this to build a structured, semantically rich content plan. 

What is entity SEO?

Entity SEO is the practice of optimizing content around recognizable concepts, not just keywords, so search engines better understand and rank your site.

Conclusion

Entity SEO isn’t some advanced trick. It’s how modern search actually works. 

Search engines no longer rely on traditional keyword research alone. They map concepts, understand relationships, and evaluate authority across connected topics.

If you want to stay visible long term, your content needs more than keywords. 

Clarity and a strong topical focus are the way to go. That’s how you build trust with Google and future-proof your branding strategy as AI continues to reshape the search landscape.

Leaning into entity-focused optimization builds a durable presence that lines up with how users search and how Google works.


Google pushes AI Max tool with in-app ads

Google vs. AI systems visitors

Google is now promoting its own AI features inside Google Ads — a rare move that inserts marketing directly into advertisers’ workflow.

What’s happening. Users are seeing promotional messages for AI Max for Search campaigns when they open campaign settings panels.

  • The notifications appear during routine account audits and updates.
  • It essentially serves as an internal advertisement for Google’s own tooling.

Why we care. The in-platform placement signals Google is pushing to accelerate AI adoption among advertisers, moving from optional rollouts to active promotion. While Google often introduces AI-driven features, promoting them directly within existing workflows marks a more aggressive adoption strategy.

What to watch. Whether this promotional approach expands to other Google Ads features — and how advertisers respond to marketing within their management interface.

First seen. Julie Bacchini, president and founder of Neptune Moon, spotted the notification and shared it on LinkedIn. She wrote: “Nothing like Google Ads essentially running an ad for AI Max in the settings area of a campaign.”


Bing Webmaster Tools officially adds AI Performance report

Microsoft today launched AI Performance in Bing Webmaster Tools in beta. AI Performance lets you see where, and how often, your content is cited in AI-generated answers across Microsoft Copilot, Bing’s AI summaries, and select partner integrations, the company said.

  • AI Performance in Bing Webmaster Tools shows which URLs are cited, which queries trigger those citations, and how citation activity changes over time.
  • Search Engine Land first reported on Jan. 27 that Microsoft was testing the AI Performance report.

What’s new. AI Performance is a new, dedicated dashboard inside Bing Webmaster Tools. It tracks citation visibility across supported AI surfaces. Instead of measuring clicks or rankings, it shows whether your content is used to ground AI-generated answers.

  • Microsoft framed the launch as an early step toward Generative Engine Optimization (GEO) tooling, designed to help publishers understand how their content shows up in AI-driven discovery.

What it looks like. Microsoft shared this image of AI Performance in Bing Webmaster Tools:

What the dashboard shows. The AI Performance dashboard introduces metrics focused specifically on AI citations:

  • Total citations: How many times a site is cited as a source in AI-generated answers during a selected period.
  • Average cited pages: The daily average number of unique URLs from a site referenced across AI experiences.
  • Grounding queries: Sample query phrases AI systems used to retrieve and cite publisher content.
  • Page-level citation activity: Citation counts by URL, highlighting which pages are referenced most often.
  • Visibility trends over time: A timeline view showing how citation activity rises or falls across AI experiences.

These metrics only reflect citation frequency. They don’t indicate ranking, prominence, or how a page contributed to a specific AI answer.

Why we care. It’s good to know where and how your content gets cited, but Bing Webmaster Tools still won’t reveal how those citations translate into clicks, traffic, or any real business outcome. Without click data, publishers still can’t tell if AI visibility delivers value.

How to use it. Microsoft said publishers can use the data to:

  • Confirm which pages are already cited in AI answers.
  • Identify topics that consistently appear across AI-generated responses.
  • Improve clarity, structure, and completeness on indexed pages that are cited less often.

The guidance mirrors familiar best practices: clear headings, evidence-backed claims, current information, and consistent entity representation across formats.

What’s next. Microsoft said it plans to “improve inclusion, attribution, and visibility across both search results and AI experiences,” and continue to “evolve these capabilities.”

Microsoft’s announcement. Introducing AI Performance in Bing Webmaster Tools Public Preview 


How to make automation work for lead gen PPC

B2B advertising faces a distinct challenge: most automation tools weren’t built for lead generation.

Ecommerce campaigns benefit from hundreds of conversions that fuel machine learning. B2B marketers don’t have that luxury. They deal with lower conversion volume, longer sales cycles, and no clear cart value to guide optimization.

The good news? Automation can still work.

Melissa Mackey, Head of Paid Search at Compound Growth Marketing, says the right strategy and signals can turn automation into a powerful driver of B2B leads. Below is a summary of the key insights and recommendations she shared at SMX Next.

The fundamental challenge: Why automation struggles with lead gen

Automation systems are built for ecommerce success, which creates three core obstacles for B2B marketers:

  • Customer journey length: Automation performs best with short journeys. A user visits, buys, and checks out within minutes. B2B journeys can last 18 to 24 months. Offline conversions only look back 90 days, leaving a large gap between early engagement and closed revenue.
  • Conversion volume requirements: Google’s automation works best with about 30 leads per campaign per month. Google says it can function with less, but performance is often inconsistent below that level. Ecommerce campaigns easily hit hundreds of monthly conversions. B2B lead gen rarely does.
  • The cart value problem: In ecommerce, value is instant and obvious. A $10 purchase tells the system something very different than a $100 purchase. Lead generation has no cart. True value often isn’t clear until prospects move through multiple funnel stages — sometimes months later.

The solution: Sending the right signals

Despite these challenges, proven strategies can make automation work for B2B lead generation.

Offline conversions: Your number one priority

Connecting your CRM to Google Ads or Microsoft Ads is essential for making automation work in lead generation. This isn’t optional. It’s the foundation. If you haven’t done this yet, stop and fix it first.

In Google Ads’ Data Manager, you’ll find hundreds of CRM integration options. The most common B2B setups include:

  • HubSpot and Salesforce: Both offer native, seamless integrations with Google Ads. Setup is simple. Once connected, customer stages and CRM data flow directly into the platform.
  • Other CRMs: If you don’t use HubSpot or Salesforce, you can build a custom data table with only the fields you want to share. Use connectors like Snowflake to send that data to Google Ads while protecting user privacy and still supplying strong automation signals.
  • Third-party integrations: If your CRM doesn’t integrate directly, tools like Zapier can connect almost anything to Google Ads. There’s a cost, but the performance gains typically pay for it many times over.
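
Whichever integration you use, the end result is the same: click IDs captured from your ads matched to CRM outcomes and sent back to the platform. Here’s a hypothetical sketch of shaping CRM rows into an offline conversion upload file; the column headers follow Google’s click conversion import template, but confirm the exact format and time-zone conventions your account expects before uploading anything:

```python
# Sketch: turn CRM records into an offline conversion upload CSV.
# Placeholder click IDs and values; conversion actions named "MQL" and
# "Customer" are assumed to exist in the ads account.
import csv

crm_rows = [
    {"gclid": "EXAMPLE_GCLID_1", "stage": "MQL", "closed_at": "2024-05-02 14:31:00-05:00", "value": 1000},
    {"gclid": "EXAMPLE_GCLID_2", "stage": "Customer", "closed_at": "2024-05-10 09:12:00-05:00", "value": 50000},
]

with open("offline_conversions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([
        "Google Click ID", "Conversion Name", "Conversion Time",
        "Conversion Value", "Conversion Currency",
    ])
    for row in crm_rows:
        writer.writerow([row["gclid"], row["stage"], row["closed_at"], row["value"], "USD"])
```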

Embrace micro conversions with strategic values

Micro conversions signal intent. They show a “hand raiser” — someone engaged on your site who isn’t an MQL yet but clearly interested.

The key is assigning relative value to these actions, even when you don’t know their exact revenue impact. Use a simple hierarchy to train automation what matters most:

  • Video views (value: 1): Shows curiosity, but qualification is unclear.
  • Ungated asset downloads (value: 10): Indicates stronger engagement and added effort.
  • Form fills (value: 100): Reflects meaningful commitment and willingness to share personal information.
  • Marketing qualified leads (value: 1,000): The highest-value signal and top optimization priority.

This value structure tells automation that one MQL matters more than 999 video views. Without these distinctions, campaigns chase impressive conversion rates driven by low-value actions — while real leads slip through the cracks.

Making Performance Max work for lead generation

You might dismiss Performance Max (PMax) for lead generation — and for good reason. Run it on a basic maximize conversions strategy, and it usually produces junk leads and wastes budget.

But PMax can deliver exceptional results when you combine conversion values and offline conversion data with a Target ROAS bid strategy.

One real client example shows what’s possible. They tracked three offline conversion actions — leads, opportunities, and customers — and valued customers at 50 times a lead. The results were dramatic:

  • Leads increased 150%
  • Opportunities increased 350%
  • Closed deals increased 200%

Closed deals became the campaign’s top-performing metric because they reflected real, paying customers. The key difference? Using conversion values with a Target ROAS strategy instead of basic maximize conversions.

Campaign-specific goals: An underutilized feature

Campaign-specific goals let you optimize campaigns for different conversion actions, giving you far more control and flexibility.

You can set conversion goals at the account level or make them campaign-specific. With campaign-specific goals, you can:

  • Run a mid-funnel campaign optimized only for lead form submissions using informational keywords.
  • Build audiences from those form fills to capture engaged prospects.
  • Launch a separate campaign optimized for qualified leads, targeting that warm audience with higher-value offers like demos or trials.

This approach avoids asking someone to “marry you on the first date.” It also keeps campaigns from competing against themselves by trying to optimize for conflicting goals.

Portfolio bidding: Reaching the data threshold faster

Portfolio bidding groups similar campaigns so you can reach the critical 30-conversions-per-month threshold faster.

For example, four separate campaigns might generate 12, 11, 0, and 15 conversions. On their own, none qualify. Grouped into a single portfolio, they total 38 conversions — giving automation far more data to optimize against.

You may still need separate campaigns for valid reasons — regional reporting, distinct budgets, or operational constraints. Portfolio bidding lets you keep that structure while still feeding the system enough volume to perform.

Bonus benefit: Portfolio bidding lets you set maximum CPCs. This prevents runaway bids when automation aggressively targets high-propensity users. This level of control is otherwise only available through tools like SA360.

First-party audiences: Powerful targeting signals

First-party audiences send strong signals about who you want to reach, which is critical for AI-powered campaigns.

If HubSpot or Salesforce is connected to Google Ads, you can import audiences and use them strategically:

  • Customer lists: Use them as exclusions to avoid paying for existing customers, or as lookalikes in Demand Gen campaigns.
  • Contact lists: Use them for observation to signal ideal audience traits, or for targeting to retarget engaged users.

Audiences make it much easier to trust broad match keywords and AI-driven campaign types like PMax or AI Max — approaches that often feel too loose for B2B without strong audience signals in place.

Leveraging AI for B2B lead generation

AI tools can significantly improve B2B advertising efficiency when you use them with intent. The key is remembering that most AI is trained on consumer behavior, not B2B buying patterns.

The essential B2B prompt addition

Always tell the AI you’re selling to other businesses. Start prompts with clear context, like: “You’re a SaaS company that sells to other businesses.” That single line shifts the AI’s lens away from consumer assumptions and toward B2B realities.

Client onboarding and profile creation

Use AI to build detailed client profiles by feeding it clear inputs, including:

  • What you sell and your core value.
  • Your unique selling propositions.
  • Target personas.
  • Ideal customer profiles.

Create a master template or a custom GPT for each client. This foundation sharpens every downstream AI task and dramatically improves accuracy and relevance.

Competitor research in minutes, not hours

Competitive analysis that once took 20–30 hours can now be done in 10–15 minutes. Ask AI to analyze your competitors and break down:

  • Current offers
  • Positioning and messaging
  • Value propositions
  • Customer sentiment
  • Social proof
  • Pricing strategies

AI delivers clean, well-structured tables you can screenshot for client decks or drop straight into Google Sheets for sorting and filtering. Use this insight to spot gaps, uncover opportunities, and identify clear strategic advantages.

Competitor keyword analysis

Use tools like Semrush or SpyFu to pull competitor keyword lists, then let AI do the heavy lifting. Create a spreadsheet with columns for each competitor’s keywords alongside your client’s keywords. Then ask the AI to:

  • Identify keywords competitors rank for that you don’t to uncover gaps to fill.
  • Identify keywords you own that competitors don’t to surface unique advantages.
  • Group keywords by theme to reveal patterns and inform campaign structure.

What once took hours of pivot tables, filtering, and manual cleanup now takes AI about five minutes.
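
If you want a deterministic pass alongside the AI analysis, the same gap logic fits in a few lines of code. A minimal sketch with made-up keyword sets standing in for Semrush or SpyFu exports:

```python
# Sketch: keyword gap analysis with plain set logic.
client_keywords = {"crm software", "sales pipeline tool", "lead scoring", "lead scoring template"}
competitor_keywords = {
    "competitor_a": {"crm software", "crm for startups", "sales pipeline tool"},
    "competitor_b": {"crm software", "lead scoring", "pipeline forecasting"},
}

all_competitor_terms = set().union(*competitor_keywords.values())

gaps = all_competitor_terms - client_keywords              # they rank, you don't
unique_strengths = client_keywords - all_competitor_terms  # you rank, they don't

print("Gaps to fill:", sorted(gaps))
print("Unique advantages:", sorted(unique_strengths))
```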

Automating routine tasks

  • Negative keyword review: Create an AI artifact that learns your filtering rules and decision logic. Feed it search query reports, and it returns clear add-or-ignore recommendations. You spend time reviewing decisions instead of doing first-pass analysis, which makes SQR reviews faster and easier to run more often.
  • Ad copy generation: Tools like RSA generators can produce headlines and descriptions from sample keywords and destination URLs. Pair them with your custom client GPT for even stronger starting points. Always review AI-generated copy, but refining solid drafts is far faster than writing from scratch.

Experiments: testing what works

The Experiments feature is widely underused. Put it to work by testing:

  • Different bid strategies, including portfolio vs. standard
  • Match types
  • Landing pages
  • Campaign structures

Google Ads automatically reports performance, so there’s no manual math. It even includes insight summaries that tell you what to do next — apply the changes, end the experiment, or run a follow-up test.

Solutions: Pre-built scripts made easy

Solutions are prebuilt Google Ads scripts that automate common tasks, including:

  • Reporting and dashboards
  • Anomaly detection
  • Link checking
  • Flexible budgeting
  • Negative keyword list creation

Instead of hunting down scripts and pasting code, you answer a few setup questions and the solution runs automatically. Use caution with complex enterprise accounts, but for simpler structures, these tools can save a significant amount of time.

Key takeaways

Automation wasn’t built for lead generation, but with the right strategy, you can still make it work for B2B.

  • Send the right signals: Offline conversions with assigned values aren’t optional. First-party audiences add critical targeting context. Together, these signals make AI-driven campaigns work for B2B.
  • AI is your friend: Use AI to automate repetitive work — not to replace people. Take 50 search query reports off your team’s plate so they can focus on strategy instead of tedious analysis.
  • Leverage platform tools: Experiments, Solutions, campaign-specific goals, and portfolio bidding are powerful features many advertisers ignore. Use what’s already built into your ad platforms to get more out of every campaign.

Watch: It’s time to embrace automation for B2B lead gen 



Why governance maturity is a competitive advantage for SEO

How SEO governance shifts teams from reaction to prevention

Let me guess: you just spent three months building a perfectly optimized product taxonomy, complete with schema markup, internal linking, and killer metadata. 

Then, the product team decided to launch a site redesign without telling you. Now half your URLs are broken, the new templates strip out your structured data, and your boss is asking why organic traffic dropped 40%.

Sound familiar?

Here’s the thing: this isn’t an SEO failure, but a governance failure. It’s costing you nights and weekends trying to fix problems that should never have happened in the first place.

This article covers why weak governance keeps breaking SEO, how AI has raised the stakes, and how a visibility governance maturity model helps SEO teams move from firefighting to prevention.

Governance isn’t bureaucracy – it’s your insurance policy

I know what you’re thinking. “Great, another framework that means more meetings and approval forms.” But hear me out.

The Visibility Governance Maturity Model (VGMM) isn’t about creating red tape. It’s about establishing clear ownership, documented processes, and decision rights that prevent your work from being accidentally destroyed by teams who don’t understand SEO.

Think of it this way: VGMM is the difference between being the person who gets blamed when organic traffic tanks versus being the person who can point to documentation showing exactly where the process broke down – and who approved skipping the SEO review.

This maturity model:

  • Protects your work from being undone by releases you weren’t consulted on.
  • Documents your standards so you’re not explaining canonical tags for the 47th time.
  • Establishes clear ownership so you’re not expected to fix everything across six different teams.
  • Gets you a seat at the table when decisions affecting SEO are being made.
  • Makes your expertise visible to leadership in ways they understand.

The real problem: AI just made everything harder

Remember when SEO was mostly about your website and Google? Those were simpler times.

Now you’re trying to optimize for:

  • AI Overviews that rewrite your content.
  • ChatGPT citations that may or may not link back.
  • Perplexity summaries that pull from competitors.
  • Voice assistants that only cite one source.
  • Knowledge panels that conflict with your site.

And you’re still dealing with:

  • Content teams who write AI-generated fluff.
  • Developers who don’t understand crawl budget.
  • Product managers who launch features that break structured data.
  • Marketing directors who want “just one small change” that tanks rankings.

Without governance, you’re the only person who understands how all these pieces fit together. 

When something breaks, everyone expects you to fix it – usually yesterday. When traffic is up, it’s because marketing ran a great campaign. When it’s down, it’s your fault.

You become the hero the organization depends on, which sounds great until you realize you can never take a real vacation, and you’re working 60-hour weeks.

Dig deeper: Why most SEO failures are organizational, not technical

What VGMM actually measures – in terms you care about

VGMM doesn’t care about your keyword rankings or whether you have perfect schema markup. It evaluates whether your organization is set up to sustain SEO performance without burning you out. Below are the five maturity levels that translate to your daily reality:

Level 1: Unmanaged (your current nightmare)

  • Nobody knows who’s responsible for SEO decisions.
  • Changes happen without SEO review.
  • You discover problems after they’ve tanked traffic.
  • You’re constantly firefighting.
  • Documentation doesn’t exist or is ignored.

Level 2: Aware (slightly better)

  • Leadership admits SEO matters.
  • Some standards exist but aren’t enforced.
  • You have allies but no authority.
  • Improvements happen but get reversed next quarter.
  • You’re still the only one who really gets it.

Level 3: Defined (getting somewhere)

  • SEO ownership is documented.
  • Standards exist, and some teams follow them.
  • You’re consulted before major changes.
  • QA checkpoints include SEO review.
  • You’re working normal hours most weeks.

Level 4: Integrated (the dream)

  • SEO is built into release workflows.
  • Automated checks catch problems before they ship.
  • Cross-functional teams share accountability.
  • You can actually take a vacation without a disaster.
  • Your expertise is respected and resourced.

Level 5: Sustained (unicorn territory)

  • SEO survives leadership changes.
  • Governance adapts to new AI surfaces automatically.
  • Problems are caught before they impact traffic.
  • You’re doing strategic work, not firefighting.
  • The organization values prevention over reaction.

Most organizations sit at Level 1 or 2. That’s not your fault – it’s a structural problem that VGMM helps diagnose and fix.

Dig deeper: SEO’s future isn’t content. It’s governance

How VGMM works: The less boring explanation

VGMM coordinates multiple domain-specific maturity models. Think of it as a health checkup that looks at all your vital signs, not just one metric.

It evaluates maturity across domains like:

  • SEO governance: Your core competency.
  • Content governance: Are writers following standards?
  • Performance governance: Is the site actually fast?
  • Accessibility governance: Is the site inclusive?
  • Workflow governance: Do processes exist and work?

Each domain gets scored independently, then VGMM looks at how they work together. Because excellent SEO maturity doesn’t matter if the performance team deploys code that breaks the site every Tuesday or if the content team publishes AI-generated nonsense that tanks your E-E-A-T signals.

VGMM produces a 0–100% score based on:

  • Domain scores: How mature is each area?
  • Weighting: Which domains matter most for your business?
  • Dependencies: Are weaknesses in one area breaking strengths in another?
  • Coherence: Do decision rights and accountability actually align?

The final score isn’t about effort – it’s about whether governance actually works.
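
To make that concrete, here’s a rough sketch of how weighted domain scores and a dependency penalty might roll up into a single governance score. The domain names, weights, and penalty are hypothetical – VGMM doesn’t prescribe this exact formula.

```python
# Rough illustration only: the domain names, weights, and penalty below are
# hypothetical; VGMM does not publish this exact formula.

def governance_score(domain_scores, weights, dependency_penalty=0.0):
    """Combine per-domain maturity scores (0-100) into one weighted score."""
    total_weight = sum(weights[d] for d in domain_scores)
    weighted = sum(domain_scores[d] * weights[d] for d in domain_scores) / total_weight
    # A weak domain can drag down strong ones, so apply a simple penalty
    # proportional to the gap between the best and worst domains.
    gap = max(domain_scores.values()) - min(domain_scores.values())
    return max(0.0, weighted - dependency_penalty * gap)

scores = {"seo": 70, "content": 40, "performance": 55, "accessibility": 30, "workflow": 45}
weights = {"seo": 0.3, "content": 0.25, "performance": 0.2, "accessibility": 0.1, "workflow": 0.15}
print(f"Governance maturity: {governance_score(scores, weights, 0.2):.0f}%")
```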

What this means for your daily life

Before VGMM-style governance:

  • Product launches a redesign → You find out when traffic drops.
  • Content team uses AI → You discover thin content in Search Console.
  • Dev changes URL structure → You spend a week fixing redirects.
  • Marketing wants “quick changes” → You explain why it’s not quick (again).
  • Site goes down → Everyone asks why you didn’t catch it.

After governance maturity improves:

  • Product can’t launch without SEO sign-off.
  • Content AI usage has review checkpoints.
  • URL changes require documented SEO approval.
  • Marketing requests go through defined workflows.
  • Site monitoring includes automated SEO health checks (sketched below).

You move from reactive firefighting to proactive prevention. Your weekends become yours again.
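
For a flavor of what those automated SEO health checks can look like in practice, here’s a minimal sketch that fetches a few pages and flags obvious regressions. The URLs are placeholders, and a real setup would run on a schedule or inside your release pipeline.

```python
# Minimal sketch of an automated SEO health check; the URLs are placeholders
# and a production version would live in CI or a monitoring job.
import urllib.request

PAGES = ["https://www.example.com/", "https://www.example.com/pricing"]

def check_page(url):
    issues = []
    with urllib.request.urlopen(url, timeout=10) as resp:
        if resp.status != 200:
            issues.append(f"status {resp.status}")
        html = resp.read().decode("utf-8", errors="replace").lower()
    if "<title>" not in html:
        issues.append("missing <title>")
    if 'rel="canonical"' not in html:
        issues.append("missing canonical link")
    if "noindex" in html:  # naive check; a real one would parse the robots meta tag
        issues.append("contains noindex")
    return issues

for page in PAGES:
    problems = check_page(page)
    print(page, "OK" if not problems else problems)
```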

The supporting models: What they actually check

VGMM doesn’t score you on technical SEO execution. It checks whether the organization has processes in place to prevent SEO disasters.

SEO Governance Maturity Model (SEOGMM) asks:

  • Are there documented SEO standards?
  • Who can override them, and how?
  • Do templates enforce SEO requirements?
  • Are there QA checkpoints before releases?
  • Can SEO block launches that will cause problems?

Content Governance Maturity Model (CGMM) asks:

  • Are content quality standards documented?
  • Is AI-generated content reviewed?
  • Are writers trained on SEO basics?
  • Is there a process for updating outdated content?

Website Performance Maturity Model (WPMM) asks:

  • Are Core Web Vitals monitored?
  • Can releases be rolled back if they break performance?
  • Is there a performance budget?
  • Are third-party scripts governed?

You get the idea. Each domain has its own checklist, and VGMM shows leadership where gaps create risk.

Dig deeper: SEO execution: Understanding goals, strategy, and planning

How to pitch this to your boss

You don’t need to explain VGMM theory. You need to connect it to problems leadership already knows exist.

  • Frame it as risk reduction: “We’ve had three major traffic drops this year from changes that SEO didn’t review. VGMM helps us identify where our process breaks down so we can prevent this.”
  • Frame it as efficiency: “I’m spending 60% of my time firefighting problems that could have been prevented. VGMM establishes processes so I can focus on growth opportunities instead.”
  • Frame it as a competitive advantage: “Our competitors are getting cited in AI Overviews, and we’re not. VGMM evaluates whether we have the governance structure to compete in AI-mediated search.”
  • Frame it as scalability: “Right now, our SEO capability depends entirely on me. If I get hit by a bus tomorrow, nobody knows how to maintain what we’ve built. VGMM establishes documentation and processes that make our SEO sustainable.”
  • The ask: “I’d like to conduct a VGMM assessment to identify where our processes need strengthening.”

What success actually looks like

Organizations with higher VGMM maturity experience measurably better outcomes:

  • Fewer unexplained traffic drops because changes are reviewed.
  • More stable AI citations because content quality is governed.
  • Less rework after launches because SEO is built into workflows.
  • Clearer accountability because ownership is documented.
  • Better resource allocation because gaps are visible to leadership.

But the real win for you personally: 

  • You stop being the hero who saves the day and become the strategist who prevents disasters. 
  • Your expertise is recognized and properly resourced. 
  • You can take actual vacations. 
  • You work normal hours most of the time.

Your job becomes about building and improving, not constantly fixing.

Getting started: Practical next steps

Step 1: Self-assessment

Look at the five maturity levels. Where is your organization honestly sitting? If you’re at Level 1 or 2, you have evidence for why governance matters.

Step 2: Document current-state pain

Make a list of the last six months of SEO incidents:

  • Changes that weren’t reviewed.
  • Traffic drops from preventable problems.
  • Time spent fixing avoidable issues.
  • Requests that had to be explained multiple times.

This becomes your business case.

Step 3: Start with one domain

You don’t need to implement full VGMM immediately. Start with SEOGMM:

  • Document your standards.
  • Create a review checklist.
  • Establish who can approve exceptions.
  • Get stakeholder sign-off on the process.

Step 4: Show results 

Track prevented problems. When you catch an issue before it ships, document it. When a process prevents a regression, quantify the impact. Build your case for expanding governance.

Step 5: Expand systematically

Once SEOGMM is working, expand to related domains (content, performance, accessibility). Show how integrated governance catches problems that individual domain checks miss.


Why governance determines whether SEO survives

Governance isn’t about making your job harder. It’s about making your organization work better so your job becomes sustainable.

VGMM gives you a framework for diagnosing why SEO keeps getting undermined by other teams and a roadmap for fixing it. It translates your expertise into language that leadership understands. It protects your work from accidental destruction.

Most importantly, it moves you from being the person who’s always fixing emergencies to being the person who builds systems that prevent them.

You didn’t become an SEO professional to spend your career firefighting. VGMM helps you get back to doing the work that actually matters – the strategic, creative, growth-focused work that attracted you to SEO in the first place.

If you’re tired of watching your best work get undone by teams who don’t understand SEO, if you’re exhausted from being the only person who knows how everything works, if you want your expertise to be recognized and protected – start the VGMM conversation with your leadership.

The framework exists. What’s missing is someone in your organization saying, “We need to govern visibility like we govern everything else that matters.”

That someone is you.

Dig deeper: Why 2026 is the year the SEO silo breaks and cross-channel execution starts


Tips and tricks to write SEO-friendly blog posts in the AI era

It is no secret that publishing SEO-friendly blog posts is one of the easiest and most effective ways to drive organic traffic and improve SERP rankings. In the era of artificial intelligence, blog posts matter more than ever: they help establish brand authority by consistently delivering fresh, valuable content that can be cited in AI-generated answers.

In this guide, we will share a practical, detailed approach to writing SEO-friendly blog content that not only ranks on Google SERPs but is also surfaced by AI models.

Key takeaways

  • An SEO-friendly blog post now means writing with search intent, ensuring content is clear and quotable for AI systems
  • Key factors for SEO-friendly blog posts include trustworthiness, machine-readability, answer-first structure, and topical authority
  • Conduct thorough keyword research and find readers’ questions to match search intent effectively
  • Use clear headings, improve readability, include inclusive language, and add relevant media to engage readers
  • Write compelling meta titles and descriptions, link to existing content, and focus on building authority to enhance visibility

What does an SEO-friendly blog post mean in the AI era?

The way people search for information has changed, and with it, the meaning of an SEO-friendly blog post. Before the rise of generative AI, writing an SEO-friendly blog post mostly meant this:

‘Writing content with the intention of ranking highly in search engine results pages (SERPs). The content is optimized for specific target keywords, easy to read, and provides value to the reader.’

That definition is not wrong. But it is no longer complete.

In the AI era, an SEO-friendly blog post is written with search intent first, answering a user’s question clearly and efficiently. It is not just about placing keywords in the right spots. It is about creating an information-dense piece with accurate, well-structured, and quotable sentences that AI systems can confidently extract and surface as direct answers.

The new definition clearly shows that strong SEO foundations still matter, and they matter more than ever. What has changed is how content is evaluated and discovered. Search engines and AI models now look beyond clicks and rankings to understand whether your content is trustworthy, helpful, and easy to interpret.

Here are some factors that play a key role in determining whether a blog post is truly SEO-friendly:

  • Trustworthiness (E-E-A-T): Demonstrating real-world experience, expertise, and credibility helps your content stand out from low-value AI-generated rehashes
  • Machine-readability: Clear structure, clean HTML, and technical signals such as schema markup help search engines and AI systems understand what your content is about (see the sketch after this list)
  • Answer-first structure: Placing concise, direct answers at the beginning of sections makes it easier for AI models to extract and reference your content
  • Topical authority: Publishing interconnected, in-depth content around a subject is far more effective than creating isolated blog posts
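
To make the schema markup point above concrete, here’s a minimal sketch of Article structured data built as a Python dictionary and printed as JSON-LD. All values are placeholders; real markup is usually emitted inside a <script type="application/ld+json"> tag in the page head.

```python
# Minimal sketch of Article structured data; all values are placeholders.
import json

article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Tips and tricks to write SEO-friendly blog posts in the AI era",
    "author": {"@type": "Person", "name": "Jane Example"},
    "datePublished": "2025-01-15",
    "publisher": {"@type": "Organization", "name": "Example Blog"},
}

# Embed this JSON inside a <script type="application/ld+json"> tag in the page head.
print(json.dumps(article_schema, indent=2))
```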

9 tips to write SEO-friendly blogs for LLM and SERP visibility

Now we get to the core of this guide. Below are some foundational tips to help you plan and write SEO-friendly blog posts that are genuinely helpful, easy to understand, and focused on solving real reader problems. When done right, these practices not only improve search visibility but also shape how your brand is perceived by both users and AI systems.

1. Conduct thorough keyword research

Before you start writing a single word, start with solid keyword research. This step helps you understand how people search for a topic, which terms carry demand, and how competitive those searches are. It also ensures your content aligns with real user intent instead of assumptions.

You can use tools like Google Keyword Planner, Ahrefs, or Semrush for this. Personally, I prefer using Semrush’s Keyword Magic Tool because it quickly surfaces thousands of relevant keyword ideas around a single topic.

Keyword Magic Tool by Semrush showing the relevant keyword list

Here’s how I usually approach it. I enter a broad keyword related to my topic, for example, ‘SEO.’ The tool then returns an extensive list of related keywords along with important metrics. I mainly focus on three of them:

  • Search intent, to understand what the user is really looking for
  • Keyword Difficulty (KD%), to estimate how hard it is to rank
  • Search volume, to gauge demand

This combination helps me choose keywords that are realistic to rank for and meaningful for readers.
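
As a rough illustration of that filtering step, the sketch below shortlists keywords from exported research data. The field names and thresholds are made up for the example and will differ per tool and niche.

```python
# Rough illustration of filtering exported keyword data; field names
# and thresholds are hypothetical and will differ per tool and niche.
keywords = [
    {"keyword": "seo basics", "volume": 5400, "kd": 38, "intent": "informational"},
    {"keyword": "seo audit checklist", "volume": 1300, "kd": 22, "intent": "informational"},
    {"keyword": "best seo software", "volume": 2900, "kd": 61, "intent": "commercial"},
]

# Keep terms with real demand that are still realistic to rank for.
shortlist = [k for k in keywords if k["volume"] >= 500 and k["kd"] <= 40]

for k in sorted(shortlist, key=lambda k: k["volume"], reverse=True):
    print(f'{k["keyword"]:25} vol={k["volume"]:>5} KD={k["kd"]}% intent={k["intent"]}')
```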

If you use Yoast SEO, this process becomes even easier. Semrush is integrated into Yoast SEO (both free and Premium), giving you keyword suggestions directly in Yoast SEO. With a single click, you can access relevant keyword data while writing, making it easier to create focused, useful content from the start.

Looking for keyphrase suggestions? When you’ve set a focus keyword in Yoast SEO, you can click on ‘Get related keyphrases’ and our Semrush integration will help you find high-performing keyphrases!

Also read: How to use the Semrush related keyphrases feature in Yoast SEO for WordPress

2. Find readers’ questions

Keyword research tells you what people search for. Questions tell you why they search.

When you actively look for the questions your audience is asking, you move closer to matching search intent. This is especially important in the AI era, where search engines and AI models prioritize clear, answer-driven content.

For example, consider these two queries:

What are the key features of good running shoes?

This shows informational intent. The searcher wants to understand what makes a running shoe good.

What are the best running shoes?

This suggests a transactional or commercial intent. The searcher is likely comparing options before making a purchase.

Both questions are valid, but they require very different content approaches.

There are two simple ways I usually find relevant questions. The first is by checking the People also ask section in Google search results. By typing in a broad keyphrase, you can see related questions that Google itself considers relevant.

The People also ask section showing questions related to the broad keyphrase ‘SEO’

The second method is to use the Questions filter in Semrush’s Keyword Magic Tool. This helps uncover question-based queries directly tied to your main topic.

Apart from these methods, I also like using Google’s AI Overview and AI mode as a quick research layer. When I search for my main topic, I pay close attention to AI-cited sources, as they often surface broad questions people are actively seeking. The structured points and highlighted terms usually reflect the answers and subtopics that matter most to users. If I want to go deeper, I click “Show more,” which reveals additional angles and follow-up questions I might not have considered initially.

AI-cited sources in Google’s AI Overview

Finding and answering these questions helps you do lightweight online audience research and create content that feels genuinely helpful. It also increases the chances of your blog post being referenced in AI-generated answers, since LLMs are designed to surface clear responses to specific questions.

3. Structure your content with headings and subheadings

In our 2026 SEO predictions, we highlighted that editorial quality is no longer just about good writing. It has become a machine-readability requirement. Content that is clearly structured is easier to understand, reuse, and surface across both search and AI-driven experiences.

How LLMs use headings

AI models rely on headings to identify topics, questions, and answers within a page. When your content is broken into clear sections, it becomes easier for them to extract key information and include it in AI-generated summaries.

Why headings still matter for SEO

Headings help search engines understand the hierarchy of your content and the main points you are trying to rank for. They also improve scannability and usability, especially on mobile devices, and increase the chances of earning featured snippets.

Good structure has always been a core SEO principle. In the AI era, it remains one of the simplest and most effective ways to improve visibility and discoverability.
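
As a small illustration of that machine-readability, here’s a sketch that extracts a page’s heading outline the way a crawler or LLM pipeline might see it. The sample HTML is invented.

```python
# Small illustration of how a machine might read a page's heading outline.
# The sample HTML is invented; real pipelines parse the fetched page instead.
from html.parser import HTMLParser

class HeadingOutline(HTMLParser):
    def __init__(self):
        super().__init__()
        self.current = None
        self.outline = []

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self.current = tag

    def handle_data(self, data):
        if self.current and data.strip():
            self.outline.append((self.current, data.strip()))

    def handle_endtag(self, tag):
        if tag == self.current:
            self.current = None

sample = "<h1>Running shoes guide</h1><h2>What makes a good shoe?</h2><h3>Cushioning</h3>"
parser = HeadingOutline()
parser.feed(sample)
for level, text in parser.outline:
    print("  " * (int(level[1]) - 1) + f"{level}: {text}")
```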

4. Focus on readability aspects

An SEO-friendly blog post should be easy to read before it can rank or get picked up by AI systems. Readability helps readers stay engaged and helps search engines and AI models better understand your content.

A few key readability aspects to focus on while writing:

  • Avoid passive voice where possible
    Active sentences are clearer and more direct. They make it easier for readers to understand who is doing what, and they reduce ambiguity for AI systems processing your content.
  • Use transition words
    Transition words like “because,” “for example,” and “however” guide readers through your content. They improve flow and make it easier to follow relationships between sentences and paragraphs.
  • Keep sentences and paragraphs short
    Long, complex sentences reduce clarity. Breaking content into shorter sentences and paragraphs improves scannability and comprehension.
  • Avoid consecutive sentences starting in the same way
    Varying sentence structure keeps your writing engaging and prevents it from sounding repetitive or robotic.

The readability analysis in the Yoast SEO for WordPress metabox

If you are a WordPress or Shopify user, Yoast SEO (and Yoast SEO for Shopify for Shopify users) can help here. Its readability analysis checks for passive voice, transition words, sentence length, and other clarity signals while you write. If you prefer drafting in Google Docs, you can use the Yoast SEO Google Docs add-on to get the same readability feedback before publishing.
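
For a feel of what such checks do (this is nothing like Yoast’s actual analysis), here’s a crude sketch that flags overly long sentences and counts a few transition words.

```python
# Crude illustration of two readability checks; this is not how Yoast's
# analysis works internally, just the general idea.
import re

TRANSITIONS = {"because", "however", "for example", "therefore", "meanwhile"}
MAX_WORDS = 15  # arbitrary threshold for a "long" sentence

text = (
    "Active sentences are clearer. However, long sentences that pile up clause "
    "after clause after clause tend to lose readers before the point arrives."
)

sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
for s in sentences:
    words = len(s.split())
    if words > MAX_WORDS:
        print(f"Long sentence ({words} words): {s[:60]}...")

transition_hits = sum(1 for t in TRANSITIONS if t in text.lower())
print(f"Transition words found: {transition_hits} of {len(TRANSITIONS)} checked")
```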


Good readability is not just about pleasing algorithms. It helps readers understand your message more quickly and makes your content easier to reuse in AI-generated responses.

5. Use inclusive language

Inclusive language helps ensure your content is respectful, clear, and welcoming to a broader audience. It avoids assumptions about gender, ability, age, or background, and focuses on people-first communication.

From an SEO and AI perspective, inclusive language also improves clarity. Content that avoids vague or biased terms is easier to interpret, digest, and trust. This directly supports brand perception, especially when your content is surfaced in AI-generated responses.

Yoast SEO supports this through its inclusive language check, which flags potentially non-inclusive terms and suggests better alternatives. This feature is available in Yoast SEO, Yoast SEO Premium, and in the Yoast SEO Google Docs add-on, making it easier to build inclusive habits directly into your writing workflow.

Inclusive language ensures your content is intentional, thoughtful, and clear, aligning closely with what modern SEO and AI systems value.

6. Add relevant media and interaction points

A well-written blog post should not feel like a long block of text. Adding the right media and interaction points helps guide readers through your content, keeps them engaged, and encourages them to take action.

Why media matters

Media elements such as images, videos, embeds, and infographics make your content easier to consume and more engaging. Blog posts that include images receive 94% more views than those without, simply because visuals break up large blocks of text and make pages easier to scan.

Video content plays an even bigger role. Embedded videos help explain complex ideas faster and can significantly improve organic visibility compared to text-only posts. Together, these elements encourage readers to stay longer on your page, which is a strong signal of content quality for search engines and AI systems alike.

Media also improves accessibility. Properly optimized images with descriptive alt text make content usable for screen readers, while original visuals, screenshots, or diagrams help reinforce credibility and expertise.

Use interaction points to guide and engage readers

Interaction does not always mean complex features. Even simple elements can significantly improve engagement when used well.

Table of contents and sidebar CTA used as interaction points in a Yoast blog post

A table of contents, for example, allows readers to jump directly to the section they care about most.

Other interaction points include clear calls to action (CTAs) that guide readers to the next step, relevant recommendations that encourage users to keep exploring your site, and social sharing buttons that make it easy to amplify your content. Interactive elements like polls, quizzes, or embedded tools further encourage participation and increase time on page.

7. Plan your content length

Content length still matters, but not in the way many people think it does.

A common question is what the ideal word count is for a blog post that performs well. A 2024 study by Backlinko found that while longer content tends to attract more backlinks, the average page ranking on Google’s first page contains around 1,500 words.

That said, this should not be treated as a fixed benchmark. The ideal length is the one that fully answers the user’s question. In an AI-driven era, publishing long content that adds little value or is padded with unnecessary fluff can do more harm than good.

If a topic genuinely requires a longer format, breaking the content into clear subheadings makes a big difference. I personally prefer structuring long articles this way because it improves readability, helps readers navigate the page more easily, and makes the content easier for search engines and AI systems to understand.

Must read: How to use headings on your site

If you use Yoast SEO or Yoast SEO Premium, the paragraph and sentence length checks can help here. These checks exist to prevent pages from being too thin to provide real value. Pages with very low word counts often lack context and struggle to demonstrate relevance or expertise. Yoast SEO flags such cases as a warning, while clearly indicating that adding more words alone does not guarantee better rankings.

Think of word count as a guideline, not a goal. Your focus should always be on clarity, completeness, and usefulness.

8. Link to existing content

Internal linking is one of the most underrated SEO practices, yet it does a lot of heavy lifting behind the scenes.

By linking to relevant content within your site, you help readers discover additional resources and help search engines understand how your content is connected. Over time, this strengthens topical authority and signals that your site consistently covers a subject in depth.

Good internal linking follows a few simple principles:

  • Link only when it adds value and feels natural in context
  • Use clear, descriptive anchor text so users and search engines know what to expect
  • Avoid linking to outdated URLs or pages that redirect, as this wastes crawl signals

Internal links also keep readers engaged longer by guiding them to related articles. This improves overall site engagement while reinforcing your expertise on a topic.

From an AI and search perspective, internal linking plays an even bigger role. Modern search systems analyze content structure, metadata hierarchies, schema markup, and internal links to assess topical depth and clarity. Well-linked content clusters make it easier for search engines and AI systems to understand what your site is about and which pages are most important.

For WordPress users, Yoast SEO Premium offers internal linking suggestions directly in the editor. This makes it easier to spot relevant linking opportunities as you write, helping you build stronger content connections without interrupting your workflow.


9. Write compelling meta titles and descriptions

Meta titles and meta descriptions help users decide whether to click on your content. While meta descriptions are not a direct ranking factor, they strongly influence click-through rates, making them an essential part of writing SEO-friendly blog posts.

A good meta title clearly communicates what the page is about. Place your main keyword near the beginning, keep it concise, and aim for roughly 55-60 characters so it doesn’t get truncated in search results.

Meta descriptions act like a short invitation. They should explain what the reader will gain from clicking and why it matters. Instead of stuffing keywords, focus on clarity and usefulness. Mention what aspects of the topic your content covers and how it helps the reader. Simple language works best.

Pro tip: Using action-oriented verbs such as “learn,” “discover,” or “read” can also encourage clicks and make your description more engaging.
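
Here’s a tiny sketch of those length checks. The 60- and 155-character thresholds are rules of thumb, not hard limits – search engines truncate by pixel width, not character count.

```python
# Tiny sketch of metadata length checks. The 60/155-character limits are
# rules of thumb; search engines actually truncate by pixel width.
def check_metadata(title, description):
    warnings = []
    if len(title) > 60:
        warnings.append(f"Title is {len(title)} chars; aim for ~55-60")
    if len(description) > 155:
        warnings.append(f"Description is {len(description)} chars; aim for ~120-155")
    return warnings or ["Looks good"]

print(check_metadata(
    "Tips and tricks to write SEO-friendly blog posts in the AI era",
    "Learn how to write blog posts that rank in search and get cited by AI systems.",
))
```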

If you use Yoast SEO Premium, this process becomes much easier. The AI-powered meta title and description generation feature helps you create relevant, well-structured metadata in just one click. It follows SEO best practices while producing descriptions and titles that are clear, engaging, and aligned with search intent.

Bonus tips

Once you have the fundamentals in place, a few extra refinements can go a long way. The following bonus tips help improve usability, clarity, and long-term discoverability. They are not mandatory, but when applied thoughtfully, they can make your blog posts more helpful for readers and easier to surface across search engines and AI-driven experiences.

1. Add a table of contents

A table of contents (TOC) helps readers quickly understand what your blog post covers and jump straight to the section they care about. This is especially useful for long-form content, where users often scan rather than scroll from top to bottom.

From an SEO perspective, a TOC improves structure and readability and can create jump links in search results, which may increase click-through rates. It reduces bounce rates by helping users find answers faster and improves accessibility by offering clear navigation.
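
As a quick illustration of how those jump links are built, the sketch below turns headings into anchor IDs and a matching TOC list. In practice, a CMS or plugin usually generates this for you.

```python
# Quick illustration of building jump links from headings; a CMS or plugin
# normally generates this automatically.
import re

headings = ["Why headings still matter", "How LLMs use headings", "Plan your content length"]

def slugify(text):
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

toc_items = [f'<li><a href="#{slugify(h)}">{h}</a></li>' for h in headings]
print("<ul>\n" + "\n".join(toc_items) + "\n</ul>")

for h in headings:
    print(f'<h2 id="{slugify(h)}">{h}</h2>')
```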

By the way, did you know Yoast can help you here too? Yes, the Yoast SEO Internal linking blocks feature lets you add a TOC block to your blog post that automatically includes all the headings with just one click!

2. Add key takeaways

Key takeaways help readers quickly grasp the main points of your blog post without having to read the whole post. This is especially helpful for time-constrained users who want quick, actionable insights.

Summaries also support SEO by reinforcing topic relevance and improving content comprehension for search engines and AI systems. Well-written takeaways might increase visibility in featured snippets and “People also ask” results.

If you use Yoast SEO Premium, the Yoast AI Summarize feature can generate key takeaways for your content in just one click, making it easier to add concise summaries without extra effort.

3. Add an FAQ section

An FAQ section gives you space to answer specific questions your readers may still have after reading your post. This improves user experience by addressing concerns directly and building trust.

FAQs also help search engines better understand your content by clearly outlining common questions and answers related to your topic. While they can support rankings, their real value lies in reducing friction, improving clarity, and even supporting conversions by clearing doubts.
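
If you decide to mark the section up, FAQPage structured data is the usual pattern. Here’s a minimal sketch with placeholder questions and answers, printed as JSON-LD.

```python
# Minimal sketch of FAQPage structured data; questions and answers are placeholders.
import json

faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "How long should a blog post be?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Long enough to fully answer the reader's question, with no padding.",
            },
        },
    ],
}

# Embed inside a <script type="application/ld+json"> tag on the page.
print(json.dumps(faq_schema, indent=2))
```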

4. Short permalinks

A permalink is the permanent URL of your blog post. Short, descriptive permalinks are easier to read, easier to share, and more likely to be clicked.

Good permalinks clearly describe what the page is about, avoid unnecessary words, and include the main topic where relevant. They improve usability and help search engines understand page context at a glance.

5. Focus on building authority (the E-E-A-T aspect)

Building authority is critical, especially for sites that cover sensitive or high-impact topics. Demonstrating Experience, Expertise, Authoritativeness, and Trustworthiness (E-E-A-T) helps both users and search engines trust your content.

This includes citing reliable sources, showing real-world experience, maintaining consistent quality, and clearly communicating who is behind the content. Strong E-E-A-T signals are especially important for YMYL topics, where accuracy and credibility matter most.

6. Plan content distribution

Writing a great blog post is only half the work. Distribution helps your content reach the right audience.

Sharing posts on social media, repurposing key insights into newsletters, and earning backlinks from relevant sites can drive more traffic and visibility. Distribution also increases engagement signals and helps your content gain traction faster, which supports long-term SEO performance.

Always target your readers!

In AI-driven search, retrieval beats ranking. Clarity, structure, and language alignment now decide if your content gets seen. – Carolyn Shelby

This perfectly sums up what writing SEO-friendly blog posts looks like today. Success is no longer just about rankings. It is about being clear, helpful, and easy to understand for both readers and AI systems.

Throughout this guide, we focused on the fundamentals that still matter: understanding search intent, structuring content well, improving readability, using inclusive language, and supporting your writing with media, internal links, and thoughtful metadata. These are not new tricks. They are strong SEO foundations, adapted for how search and discovery work in the AI era.

If there is one takeaway, it is this: always write for your readers first. When your content genuinely helps people, answers their questions, and respects how they search and read, it naturally becomes easier to surface across SERPs and AI-driven experiences.

Good SEO has not changed. It has simply become more human.




Why PPC measurement feels broken (and why it isn’t)

Why PPC measurement works differently in a privacy-first world

If you’ve been managing PPC accounts for any length of time, you don’t need a research report to tell you something has changed. 

You see it in the day-to-day work: 

  • GCLIDs missing from URLs.
  • Conversions arriving later than expected.
  • Reports that take longer to explain while still feeling less definitive than they used to.

When that happens, the reflex is to assume something broke – a tracking update, a platform change, or a misconfiguration buried somewhere in the stack.

But the reality is usually simpler. Many measurement setups still assume identifiers will reliably persist from click to conversion, and that assumption no longer holds consistently.

Measurement hasn’t stopped working. The conditions it depends on have been shifting for years, and what once felt like edge cases now show up often enough to feel like a systemic change.

Why this shift feels so disorienting

I’ve been close to this problem for most of my career. 

Before Google Ads had native conversion tracking, I built my own tracking pixels and URL parameters to optimize affiliate campaigns. 

Later, while working at Google, I was involved in the acquisition of Urchin as the industry moved toward standardized, comprehensive measurement.

That era set expectations that nearly everything could be tracked, joined, and attributed at the click level. Google made advertising feel measurable, controllable, and predictable. 

As the ecosystem now shifts toward more automation, less control, and less data, that contrast can be jarring.

It has been for me. Much of what I once relied on to interpret PPC data no longer applies in the same way. 

Making sense of today’s measurement environment requires rethinking those assumptions, not trying to restore the old ones. This is how I think about it now.

Dig deeper: How to evolve your PPC measurement strategy for a privacy-first future

The old world: click IDs and deterministic matching

For many years, Google Ads measurement followed a predictable pattern. 

  • A user clicked an ad. 
  • A click ID, or gclid, was appended to the URL. 
  • The site stored it in a cookie. 
  • When a conversion fired, that identifier was sent back and matched to the click.

This produced deterministic matches, supported offline conversion imports, and made attribution relatively easy to explain to stakeholders. 

As long as the identifier survived the journey, the system behaved in ways most advertisers could reason about. 

We could literally see what happened with each click and which ones led to individual conversions.
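
If you never worked with that flow, here’s a heavily simplified, server-side sketch of the old deterministic join. The in-memory storage and field names are invented for illustration.

```python
# Heavily simplified sketch of the old click-ID join. Storage is an
# in-memory dict and the field names are invented for illustration.
from urllib.parse import urlparse, parse_qs

clicks = {}  # gclid -> click details captured on the landing page

def record_landing(landing_url):
    params = parse_qs(urlparse(landing_url).query)
    gclid = params.get("gclid", [None])[0]
    if gclid:
        clicks[gclid] = {"landing_url": landing_url}
    return gclid

def record_conversion(gclid, value):
    # Deterministic match: the conversion is tied back to the exact click.
    click = clicks.get(gclid)
    return {"gclid": gclid, "value": value, "matched": click is not None}

gclid = record_landing("https://www.example.com/?gclid=TeSter123")
print(record_conversion(gclid, 49.99))
```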

That reliability depended on a specific set of conditions.

  • Browsers needed to allow parameters through. 
  • Cookies had to persist long enough to cover the conversion window. 
  • Users had to accept tracking by default. 

Luckily, those conditions were common enough that the model worked really well.

Why that model breaks more often now

Browsers now impose tighter limits on how identifiers are stored and passed.

Apple’s Intelligent Tracking Prevention, enhanced tracking protection, private browsing modes, and consent requirements all reduce how long tracking data persists, or whether it’s stored at all.

URL parameters may be stripped before a page loads. Cookies set via JavaScript may expire quickly. Consent banners may block storage entirely.

Click IDs sometimes never reach the site, or they disappear before a conversion occurs.

This is expected behavior in modern browser environments, not an edge case, so we have to account for it.

Trying to restore deterministic click-level tracking usually means working against the constant push toward more privacy and the resulting browser behaviors.

This is another of the many evolutions of online advertising we simply have to get on board with, and I’ve found that designing systems to function with partial data beats fighting the tide.

The adjustment isn’t just technical

On my own team, GA4 is a frequent source of frustration. Not because it’s incapable, but because it’s built for a world where some data will always be missing. 

We hear the same from other advertisers: the data isn’t necessarily wrong, but it’s harder to reason about.

This is the bigger challenge. Moving from a world where nearly everything was observable to one where some things are inferred requires accepting that measurement now operates under different conditions. 

That mindset shift has been uneven across the industry because measurement lives at the periphery of where many advertisers spend most of their time, working in ad platforms.

A lot of effort goes into optimizing ad platform settings when sometimes the better use of time might’ve been fixing broken data so better decisions could be made.

Dig deeper: Advanced analytics techniques to measure PPC

What still works: Client-side and server-side approaches

So what approaches hold up under current constraints? The answer involves both client-side and server-side measurement.

Pixels still matter, but they have limits

Client-side pixels, like the Google tag, continue to collect useful data.

They fire immediately, capture on-site actions, and provide fast feedback to ad platforms, whose automated bidding systems rely on this data.

But these pixels are constrained by the browser. Scripts can be blocked, execution can fail, and consent settings can prevent storage. A portion of traffic will never be observable at the individual level.

When pixel tracking is the only measurement input, these gaps affect both reporting and optimization. Pixels haven’t stopped working. They just no longer cover every case.

Changing how pixels are delivered

Some responses to declining pixel data focus on the mechanics of how pixels are served rather than measurement logic.

Google Tag Gateway changes where tag requests are routed, sending them through a first-party, same-origin setup instead of directly to third-party domains.

This can reduce failures caused by blocked scripts and simplify deployment for teams using Google Cloud.

What it doesn’t do is define events, decide what data is collected, or correct poor tagging choices. It improves delivery reliability, not measurement logic.

This distinction matters when comparing Tag Gateway and server-side GTM.

  • Tag Gateway focuses on routing and ease of setup.
  • Server-side GTM enables event processing, enrichment, and governance. It requires more maintenance and technical oversight, but it provides more control.

The two address different problems.

Here’s the key point: better infrastructure affects how data moves, not what it means.

Event definitions, conversion logic, and consistency across systems still determine data quality.

A reliable pipeline delivers whatever it’s given; feed it garbage, and it will deliver that garbage back just as reliably.

Offline conversion imports: Moving measurement off the browser

Offline conversion imports take a different approach, moving measurement away from the browser entirely. Conversions are recorded in backend systems and sent to Google Ads after the fact.

Because this process is server to server, it’s less affected by browser privacy restrictions. It works for longer sales cycles, delayed purchases, and conversions that happen outside the site. 

This is why Google commonly recommends running offline imports alongside pixel-based tracking. The two cover different parts of the journey. One is immediate, the other persists.

Offline imports also align with current privacy constraints. They rely on data users provide directly, such as email addresses during a transaction or signup.

The data is processed server-side and aggregated, reducing reliance on browser identifiers and short-lived cookies.

Offline imports don’t replace pixels. They reduce dependence on them.

Dig deeper: Offline conversion tracking: 7 best practices and testing strategies

How Google fills the gaps

Even with pixels and offline imports working together, some conversions can’t be directly observed.

Matching when click IDs are missing

When click IDs are unavailable, Google Ads can still match conversions using other inputs.

This often begins with deterministic matching through hashed first-party identifiers such as email addresses, when those identifiers can be associated with signed-in Google users.

This is what Enhanced Conversions help achieve.
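
The general pattern behind enhanced conversions is normalizing a first-party identifier and hashing it before it leaves your systems. Here’s a minimal sketch of that step for an email address; the exact field requirements and upload calls depend on the API you use.

```python
# General pattern only: normalize, then SHA-256 hash the email before it is
# sent. Exact field requirements and upload calls depend on the API used.
import hashlib

def hash_email(email):
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

print(hash_email("  Jane.Doe@Example.com "))
```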

When deterministic matching (an if-this-then-that style of match) isn’t possible, the system relies on aggregated and validated signals rather than reconstructing individual click paths.

These can include session-level attributes and limited, privacy-safe IP information, combined with timing and contextual constraints.

This doesn’t recreate the old click-level model, but it allows conversions to be associated with prior ad interactions at an aggregate level.

One thing I’ve noticed: adding these inputs typically improves matching before it affects bidding.

Bidding systems account for conversion lag and validate new signals over time, which means imported or modeled conversions may appear in reporting before they’re fully weighted in optimization.

Matching, attribution, and bidding are related but separate processes. Improvements in one don’t immediately change the others.

Modeled conversions as a standard input

Modeled conversions are now a standard part of Google Ads and GA4 reporting.

They’re used when direct observation isn’t possible, such as when consent is denied or identifiers are unavailable.

These models are constrained by available data and validated through consistency checks and holdback experiments.

When confidence is low, modeling may be limited or not applied. Modeled data should be treated as an expected component of measurement rather than an exception.

Dig deeper: Google Ads pushes richer conversion imports

Boundaries still matter

Tools like Google Tag Gateway or Enhanced Conversions for Leads help recover measurement signal, but they don’t override user intent. 

Routing data through a first-party domain doesn’t imply consent. Ad blockers and restrictive browser settings are explicit signals. 

Overriding them may slightly increase the measured volume, but it doesn’t align with users’ expectations regarding how your organization uses their data.

Legal compliance and user intent aren’t the same thing. Measurement systems can respect both, but doing so requires deliberate choices.

Designing for partial data

Missing signals are normal. Measurement systems that assume full visibility will continue to break under current conditions.

Redundancy helps: pixels paired with hardened delivery, offline imports paired with enhanced identifiers, and multiple incomplete signals instead of a single complete one.

But here’s where things get interesting. Different systems will see different things, and this creates a tension many advertisers now face daily.

Some clients tell us their CRM data points clearly in one direction, while Google Ads automation, operating on less complete inputs, nudges campaigns another way.

In most cases, neither system is wrong. They’re answering different questions with different data, on different timelines. Operating in a world of partial observability means accounting for that tension rather than trying to eliminate it.

Dig deeper: Auditing and optimizing Google Ads in an age of limited data

Making peace with partial observability

The shift toward privacy-first measurement changes how much of the user journey can be directly observed. That changes our jobs.

The goal is no longer perfect reconstruction of every click, but building measurement systems that remain useful when signals are missing, delayed, or inferred.

Different systems will continue to operate with different views of reality, and alignment comes from understanding those differences rather than trying to eliminate them.

In this environment, durable measurement depends less on recovering lost identifiers and more on thoughtful data design, redundancy, and human judgment.

Measurement is becoming more strategic than ever.



How SEO leaders can explain agentic AI to ecommerce executives

How to communicate agentic AI to ecommerce leadership without the hype

Agentic AI is increasingly appearing in leadership conversations, often accompanied by big claims and unclear expectations. For SEO leaders working with ecommerce brands, this creates a familiar challenge.

Executives hear about autonomous agents, automated purchasing, and AI-led decisions, and they want to know what this really means for growth, risk, and competitiveness.

What they don’t need is more hype. They need clear explanations, grounded thinking, and practical guidance. 

This is where SEO leaders can add real value, not by predicting the future, but by helping leadership understand what is changing, what isn’t, and how to respond without overreacting. Here’s how.

Start by explaining what ‘agentic’ actually means

A useful first step is to remove the mystery from the term itself. Agentic systems don’t replace customers, they act on behalf of customers. The intent, preferences, and constraints still come from a person.

What changes is who does the work.

Discovery, comparison, filtering, and sometimes execution are handled by software that can move faster and process more information than a human can.

When speaking to executive teams, a simple framing works best:

  • “We’re not losing customers, we’re adding a new decision-maker into the journey. That decision-maker is software acting as a proxy for the customer.” 

Once this is clear, the conversation becomes calmer and more practical, and the focus moves away from fear and toward preparation.


Keep expectations realistic and avoid the hype

Another important role for SEO leaders is to slow the conversation down. Agentic behavior will not arrive everywhere at the same time. Its impact will be uneven and gradual.

Some categories will see change earlier because their products are standardized and data is already well structured. Others will move more slowly because trust, complexity, or regulation makes automation harder.

This matters because leadership teams often fall into one of two traps:

  1. Panic, where plans are rewritten too quickly, budgets move too fast, and teams chase futures that may still be some distance away. 
  2. Dismissal, where nothing changes until performance clearly drops, and by then the response is rushed.

SEO leaders can offer a steadier view. Agentic AI accelerates trends that already exist. Personalized discovery, fewer visible clicks, and more pressure on data quality are not new problems. 

Agents simply make them more obvious. Seen this way, agentic AI becomes a reason to improve foundations, not a reason to chase novelty.

Dig deeper: Are we ready for the agentic web?

Change the conversation from rankings to eligibility

One of the most helpful shifts in executive conversations is moving away from rankings as the main outcome of SEO. In an agent-led journey, the key question isn’t “do we rank well?” but “are we eligible to be chosen at all?”

Eligibility depends on clarity, consistency, and trust. An agent needs to understand what you sell, who it is for, how much it costs, whether it is available, and how risky it is to choose you on behalf of a user. This is a strong way to connect SEO to commercial reality.

Questions worth raising include whether product information is consistent across systems, whether pricing and availability are reliable, and whether policies reduce uncertainty or create it. Framed this way, SEO becomes less about chasing traffic and more about making the business easy to select.
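
One concrete piece of that clarity is product structured data that matches your feeds and your pages. Here’s a minimal sketch with placeholder values, printed as JSON-LD; the point is that price and availability stay consistent everywhere an agent looks.

```python
# Minimal sketch of product structured data; values are placeholders, and the
# point is that price and availability stay consistent everywhere an agent looks.
import json

product_schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Trail Running Shoe X",
    "sku": "TRX-001",
    "offers": {
        "@type": "Offer",
        "price": "89.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

print(json.dumps(product_schema, indent=2))
```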

Explain why SEO no longer sits only in marketing

Many executives still see SEO as a marketing channel, but agentic behavior challenges that view.

Selection by an agent depends on factors that sit well beyond marketing. Data quality, technical reliability, stock accuracy, delivery performance, and payment confidence all play a role.

SEO leaders should be clear about this. This isn’t about writing more content. It’s about making sure the business is understandable, reliable, and usable by machines.

Positioned correctly, SEO becomes a connecting function that helps leadership see where gaps in systems or data could prevent the brand from being selected. This often resonates because it links SEO to risk and operational health, not just growth.

Dig deeper: How to integrate SEO into your broader marketing strategy

Be clear that discovery will change first

For most ecommerce brands, the earliest impact of agentic systems will be at the top of the funnel. Discovery becomes more conversational and more personal.

Users describe situations, needs, and constraints instead of typing short search phrases, and the agent then turns that context into actions.

This reduces the value of simply owning category head terms. If an agent knows a user’s budget, preferences, delivery expectations, and past behavior, it doesn’t behave like a first-time visitor. It behaves like a well-informed repeat customer.

This creates a reporting challenge. Some SEO work will no longer look like direct demand creation, even though it still influences outcomes. Leadership teams need to be prepared for this shift.

Reframe consideration as filtering, not persuasion

The middle of the funnel also changes shape. Today, consideration often involves reading reviews, comparing options, and seeking reassurance.

In an agent-led journey, consideration becomes a filtering process, where the agent removes options it believes the user would reject and keeps those that fit.

This has clear implications. Generic content becomes less effective as a traffic driver because agents can generate summaries and comparisons instantly. Trust signals become structural, meaning claims need to be backed by consistent and verifiable information.

In many cases, a brand may be chosen without the user being consciously aware of it. That can be positive for conversion, but risky for long-term brand strength if recognition isn’t built elsewhere.

Dig deeper: How to align your SEO strategy with the stages of buyer intent

Set honest expectations about measurement

Executives care about measurement, and agentic AI makes this harder. As more discovery and consideration happen inside AI systems, fewer interactions leave clean attribution trails. Some impact will show up as direct traffic, and some will not be visible at all.

SEO leaders should address this early. This isn’t a failure of optimization. It reflects the limits of today’s analytics in a more mediated world.

The conversation should move toward directional signals and blended performance views, rather than precise channel attribution that no longer reflects how decisions are made.

Promote a proactive, low-risk response

The most important part of the leadership discussion is what to do next. The good news is that most sensible responses to agentic AI are low risk.

Improving product data quality, reducing inconsistencies across platforms, strengthening reliability signals, and fixing technical weaknesses all help today, regardless of how quickly agents mature.

Investing in brand demand outside search also matters. If agents handle more of the comparison work, brands that users already trust by name are more likely to be selected.

This reassures leaders that action doesn’t require dramatic change, only disciplined improvement.


Agentic AI changes the focus, not the fundamentals

For SEO leaders, agentic AI changes the focus of the role. The work shifts from optimizing pages to protecting eligibility, from chasing visibility to reducing ambiguity, and from reporting clicks to explaining influence.

This requires confidence, clear communication, and a willingness to challenge hype. Agentic AI makes SEO more strategic, not any less important.

Agentic AI should not be treated as an immediate threat or a guaranteed advantage. It’s a shift in how decisions are made.

For ecommerce brands, the winners will be those that stay calm, communicate clearly, and adapt their SEO thinking from driving clicks to earning selection.

That is the conversation SEO leaders should be having now.

Dig deeper: The future of search visibility: What 6 SEO leaders predict for 2026



What repeated ChatGPT runs reveal about brand visibility

We know AI responses are probabilistic – if you ask an AI the same question 10 times, you’ll get 10 different responses.

But how different are the responses?

That’s the question Rand Fishkin explored in some interesting research.

And it has big implications for how we should think about tracking AI visibility for brands.

In his research, he tested prompts asking for recommendations in all sorts of products and services, including everything from chef’s knives to cancer care hospitals and Volvo dealerships in Los Angeles.

Basically, he found that:

  • AIs rarely recommend the same list of brands in the same order twice.
  • For a given topic (e.g., running shoes), AIs recommend a certain handful of brands far more frequently than others.

For my research, as always, I’m focusing exclusively on B2B use cases. Plus, I’m building on Fishkin’s work by addressing these additional questions:

  • Does prompt complexity affect the consistency of AI recommendations?
  • Does the competitiveness of the category affect the consistency of recommendations?

Methodology

To explore those questions, I first designed 12 prompts:

  • Competitive vs. niche: Six of the prompts are about highly competitive B2B software categories (e.g., accounting software), and the other six are about less crowded categories (e.g., user entity behavior analytics (UEBA) software). I identified the categories using Contender’s database, which tracks how many brands ChatGPT associates with 1,775 different software categories.
  • Simple vs. nuanced prompts: Within both sets of “competitive” and “niche” prompts, half of the prompts are simple (“What’s the best accounting software?”) and the other half are nuanced prompts that include a persona and use case (“For a Head of Finance focused on ensuring financial reporting accuracy and compliance, what’s the best accounting software?”)

I ran the 12 prompts 100 times, each, through the logged-out, free version of ChatGPT at chatgpt.com (i.e., not the API). I used a different IP address for each of the 1,200 interactions to simulate 1,200 different users starting new conversations.

Limitations: This research only covers responses from ChatGPT. But given the patterns in Fishkin’s results and the similar probabilistic nature of LLMs, you can probably generalize the directional (not absolute value) findings below to most/all AIs.
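
To make the tallying step concrete, here’s a rough sketch of how mention rates and “dominant” brands can be computed, assuming brand mentions have already been extracted from each response (which is the hard part in practice). The responses below are invented.

```python
# Rough sketch of the tallying step. It assumes brand mentions have already
# been extracted from each response; the responses below are invented.
from collections import Counter

responses = [
    ["QuickBooks", "Xero", "FreshBooks"],
    ["QuickBooks", "Sage", "Xero"],
    ["QuickBooks", "Zoho Books", "Xero"],
]

mention_counts = Counter(brand for brands in responses for brand in set(brands))
total_runs = len(responses)

for brand, count in mention_counts.most_common():
    rate = count / total_runs
    tier = "dominant" if rate >= 0.8 else "long tail" if rate < 0.2 else "middle"
    print(f"{brand:12} mentioned in {rate:.0%} of runs ({tier})")
```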


Findings

So what happens when 100 different people submit the same prompt to ChatGPT, asking for product recommendations?

How many ‘open slots’ in ChatGPT responses are available to brands?

On average, ChatGPT will mention 44 brands across 100 different responses. But one of the response sets included as many as 95 brands – it really depends on the category.

How many brands does ChatGPT draw from, on average?

Competitive vs. niche categories

On that note, for prompts covering competitive categories, ChatGPT mentions about twice as many brands per 100 responses compared to the responses to prompts covering “niche” categories. (This lines up with the criteria I used to select the categories I studied.)

Simple vs. nuanced prompts

On average, ChatGPT mentioned slightly fewer brands in response to nuanced prompts. But this wasn’t a consistent pattern – for any given software category, sometimes nuanced questions ended up with more brands mentioned, and sometimes simple questions did.

This was a bit surprising, since I expected more specific requests (e.g., “For a SOC analyst needing to triage security alerts from endpoints efficiently, what’s the best EDR software?”) to consistently yield a narrower set of potential solutions from ChatGPT.

I think ChatGPT may not be able to tailor a list of solutions to a specific use case because it doesn’t have a deep understanding of most brands. (More on this data in an upcoming note.)

Return of the ’10 blue links’

In each individual response, ChatGPT will, on average, mention only 10 brands.

There’s quite a range, though – a minimum of 6 brands per response and a maximum of 15 when averaging across response sets.

How many brands per response, on average?

But a single response typically names about 10 brands regardless of category or prompt type.

The big difference is in how much the pool of brands rotates across responses – competitive categories draw from a much deeper bench, even though each individual response names a similar count.

Everything old (in SEO) truly is new again (in GEO/AEO). It reminds me of trying to get a placement in one of Google’s “10 blue links”.

Dig deeper: How to measure your AI search brand visibility and prove business impact

How consistent are ChatGPT’s brand recommendations?

When you ask ChatGPT for a B2B software recommendation 100 different times, there are only ~5 brands, on average, that it’ll mention 80%+ of the time.

To put that in context, those consistent picks are just 11% of the ~44 brands it’ll mention at all across those 100 responses.

ChatGPT knows ~44 brands in your category

So it’s quite competitive to become one of the brands ChatGPT consistently mentions whenever someone asks for recommendations in your category.

As you’d expect, these “dominant” brands tend to be big, established brands with strong recognition. For example, the dominant brands in the accounting software category are QuickBooks, Xero, Wave, FreshBooks, Zoho, and Sage.

If you’re not a big brand, you’re better off being in a niche category:

It's easier to get good AI visibility in niche categories

When you operate in a niche category, not only are you literally competing with fewer companies, but there are also more “open slots” available to you to become a dominant brand in ChatGPT’s responses.

In niche categories, 21% of all the brands ChatGPT mentions are dominant brands, getting mentioned 80%+ of the time.

Compare this to just 7% of all brands being dominant in competitive categories, where the majority of brands (72%) are languishing in the long tail, getting mentioned less than 20% of the time.
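
To make the tiering concrete, here’s a small sketch that maps a brand’s mention rate to the three tiers used in this analysis. The 80% and 20% thresholds come from the article; the function name and example figures are mine, for illustration only.

```python
def visibility_tier(mention_rate):
    """Map a mention rate (0.0-1.0 across ~100 responses) to the article's tiers."""
    if mention_rate >= 0.80:
        return "dominant"
    if mention_rate < 0.20:
        return "long tail"
    return "visible middle"

# Illustrative mention rates, not real study data.
example_rates = {"QuickBooks": 0.97, "Sage": 0.83, "Wave": 0.45, "Bench": 0.06}

for brand, rate in example_rates.items():
    print(f"{brand}: {rate:.0%} -> {visibility_tier(rate)}")
```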

The responses to nuanced prompts are harder to dominate

A nuanced prompt doesn’t dramatically change the long tail of little-seen brands (with <20% visibility), but it does change the “winner’s circle.” Adding persona context to a prompt makes it a bit more difficult to reach the dominant tier – you can see the steeper “cliff” a brand has to climb in the “nuanced prompts” graph above.

This makes intuitive sense: when someone asks “best accounting software for a Head of Finance,” ChatGPT has a more specific answer in mind and commits a bit more strongly to fewer top picks.

Still, it’s worth noting that the overall pool doesn’t shrink much – ChatGPT mentions ~42 brands in 100 responses to nuanced prompts, just a handful fewer than the ~46 mentioned in response to simple prompts. If nuanced prompts make the winner’s circle a bit more exclusive, why don’t they also narrow the total field?

Partly, it could be that the “nuanced” questions I fed it weren’t meaningfully narrower or more specific than what was already implied in the simple questions.

But, based on other data I’m seeing, I think this is partly about ChatGPT not knowing enough about most brands to be more selective. I’ll share more on this in an upcoming note.

Dig deeper: 7 hard truths about measuring AI visibility and GEO performance

What does this mean for B2B marketers?

If you’re not a dominant brand, pick your battles – niche down

It’s never been more important to differentiate: 21% of mentioned brands reach dominant status in niche categories, compared with just 7% in competitive ones.

Without time and a lot of money for brand marketing, an upstart tech company isn’t going to become a dominant brand in a broad, established category like accounting software.

But the field is less competitive when you lean into your unique, differentiating strengths. ChatGPT is more likely to treat you like a dominant brand if you work to make your product known as “the best accounting software for commercial real estate companies in North America.”

Most AI visibility tracking tools are grossly misleading

Given the inconsistency of ChatGPT’s recommendations, a single spot-check for any given prompt is nearly meaningless. Unfortunately, checking each prompt just once per time period is exactly what most AI visibility tracking tools do.

If you want anything approaching a statistically significant visibility score for a given prompt, you need to run it at least dozens of times, even 100+ times, depending on how precise you need the data to be.

But that’s obviously not practical for most people, so my suggestion is this: for the key, bottom-of-funnel prompts you’re tracking, run each one ~5 times whenever you pull data.

That’ll at least give you a reasonable sense of whether your brand tends to show up most of the time, some of the time, or never.

Your goal should be to have a confident sense of whether your brand is in the little-seen long tail, the visible middle, or the dominant top-tier for any given prompt. Whether you use my tiers of ‘under 20%’, ‘20–80%’, and ‘80%+’, or your own thresholds, this is the approach that follows the data and common sense.
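
To see why small samples only support rough tiering, here’s a back-of-the-envelope sketch using a normal-approximation confidence interval for an observed mention rate. The 50% rate and the sample sizes are illustrative, not study data.

```python
import math

def ci_half_width(p, n, z=1.96):
    """Rough 95% confidence half-width for an observed mention rate p over n runs."""
    return z * math.sqrt(p * (1 - p) / n)

# How uncertain is a measured 50% mention rate at different sample sizes?
for n in (1, 5, 25, 100):
    print(f"n={n:>3}: 50% +/- {ci_half_width(0.5, n):.0%}")
```

At ~5 runs the interval is still wide (roughly ±44 points around a 50% rate), which is why the realistic goal is a tier (long tail vs. visible middle vs. dominant) rather than a precise visibility score.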

What’s next?

In future newsletters and LinkedIn posts, I’m going to build on these findings with new research:

  • How does ChatGPT talk about the brands it consistently recommends? Is it indicative of how much ChatGPT “knows” about brands?
  • Do different prompts with the same search intent tend to produce the same set of recommendations?
  • How consistent is “rank” in the responses? Do dominant brands tend to get mentioned first?

This article was originally published on Visible on beehiiv (as Most AI visibility tracking is misleading (here’s my new data)) and is republished with permission.
