Content scoring tools work, but only for the first gate in Google’s pipeline

Most SEO professionals give Google too much credit. We assume Google understands content the way we do — that it reads our pages, grasps nuance, evaluates expertise, and rewards quality in some deeply intelligent way. The DOJ antitrust trial told a different story.

Under oath, Google VP of Search Pandu Nayak described a first-stage retrieval system built on inverted indexes and postings lists, traditional information retrieval methods that predate modern AI by decades. Court exhibits from the remedies phase reference “Okapi BM25,” the canonical lexical retrieval algorithm that Google’s system evolved from. The first gate your content has to pass through isn’t a neural network. It’s word matching.

Google does deploy more advanced AI further down the pipeline, including BERT-based models, dense vector embeddings, and entity understanding systems. But those operate only on the much smaller candidate set traditional retrieval produces. We’ll walk through where each technology enters the process.

This matters for content optimization tools like Surfer SEO, Clearscope, and MarketMuse. Their core methodology — a mix of TF-IDF analysis, topic modeling, and entity evaluation — maps directly to how that first retrieval stage scores documents. The tools are built on the right foundation. The problem is that most people use them incorrectly, and the studies backing them have real limitations.

Below, I’ll explain how first-stage retrieval works and why it still matters, what the research on content scoring tools actually shows — and doesn’t show — and most importantly, how to use these tools to produce content that earns its way into the candidate set without wasting time chasing a perfect score.

How first-stage retrieval works and why content tools map to it

Best Matching 25 (BM25) is the retrieval function most commonly associated with Google’s first-stage system. 

Nayak’s testimony described the mechanics it formalizes: an inverted index that walks postings lists and scores topicality across hundreds of billions of indexed pages, narrowing the field to tens of thousands of candidates in milliseconds. 

Here’s what matters for content creators:

  • Term frequency with saturation: The first mention of a relevant term captures roughly 45% of the maximum possible score for that term. Three mentions get you to about 71%. Going from three to thirty adds almost nothing. Repetition has steep diminishing returns.
  • Inverse document frequency: Rare, specific terms carry more scoring weight than common ones. “Pronation” is worth roughly 2.5 times more than “shoes” in a running shoe query because fewer pages contain it.
  • Document length normalization: Longer documents are penalized for the same raw term count. In effect, these scoring functions measure term density relative to document length, which is why every content tool measures it.
  • The zero-score cliff: If a term doesn’t appear in your document at all, your score for that term is exactly zero. Not low. Zero. You’re invisible for every query containing it.
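The four dynamics above can be sketched in a few lines of Python. This is a minimal illustration of standard BM25 mechanics, not Google's actual implementation; the k1 and b values are the textbook defaults, chosen here because they reproduce the saturation percentages cited above.

```python
import math

def bm25_term_score(tf, df, N, doc_len, avg_len, k1=1.2, b=0.75):
    """One term's contribution to one document, BM25-style.
    tf: term count in the document; df: documents containing the term;
    N: total documents; doc_len/avg_len: length normalization inputs."""
    if tf == 0:
        return 0.0  # the zero-score cliff: an absent term contributes exactly zero
    idf = math.log((N - df + 0.5) / (df + 0.5) + 1)  # rarer term -> higher weight
    norm = k1 * (1 - b + b * doc_len / avg_len)      # longer doc -> stronger dampening
    return idf * tf * (k1 + 1) / (tf + norm)

def saturation(tf, k1=1.2):
    """Fraction of the maximum possible term score captured at a given count."""
    return tf / (tf + k1)

print(round(saturation(1), 2))   # ~0.45: the first mention does most of the work
print(round(saturation(3), 2))   # ~0.71
print(round(saturation(30), 2))  # ~0.96: 27 more mentions add almost nothing
```

Plugging in a rare term (low df) versus a common one (high df) with the same term count shows the IDF effect directly: the rare term's score is a multiple of the common term's.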

That last point is the single most important reason content optimization tools have value. If you write a comprehensive rhinoplasty article but never mention “recovery time,” you score zero for that entire cluster of queries, regardless of how good the rest of your content is. 

Google has systems like synonym expansion and Neural Matching — RankEmbed — that can supplement lexical retrieval and surface additional documents. But counting on those systems to rescue a page with vocabulary gaps is a risky strategy when you can simply cover the term.

After first-stage retrieval, the pipeline gets progressively more expensive and more sophisticated. RankEmbed adds candidates keyword matching missed. Mustang applies roughly 100+ signals, including topicality, quality scores, and NavBoost — accumulated click data over 13 months, described by Nayak as “one of the strongest” ranking signals. 

DeepRank applies BERT-based language understanding to only the final 20 to 30 results because these models are too expensive to run at scale. The practical implication is clear: no amount of authority or engagement signals helps if your page never passes the first gate. Content optimization tools help you get through it. What happens after is a different problem.


What the research on content tools actually shows

Three major studies have examined whether content tool scores correlate with rankings: Ahrefs (20 keywords, May 2025), Originality.ai (~100 keywords, October 2025), and Surfer SEO (10,000 queries, July 2025). All found weak positive correlations in the 0.10 to 0.32 range.

A 0.24 to 0.28 correlation is actually meaningful in this context. But these numbers need serious qualification. Every study was conducted by a vendor, and in every case, the vendor’s own tool performed best. 

No study controlled for confounding variables like backlinks, domain authority, or accumulated click data. The methodology is fundamentally circular: the tools generate recommendations by analyzing pages that already rank in the top 10 to 20, then the studies test whether pages in the top 10 to 20 score well on those same tools.

The real question — whether following tool recommendations helps a new, unranked page climb — has never been rigorously tested. Clearscope’s Bernard Huang put it directly: “A 0.26 correlation is not the brag they think it is.” 

He’s right. But a weak positive correlation is exactly what you’d expect if these tools solve the retrieval problem — getting into the candidate set — without solving the ranking problem — beating competitors once there. Understanding that distinction is what makes these tools useful rather than misleading.

Why not skip these tools altogether?

Expert writers are terrible at predicting how their audience actually searches. MIT Sloan’s Miro Kazakoff calls it the curse of knowledge. Once you know something, you forget what it was like before you knew it. 

Clearscope’s case study with Algolia illustrates the problem precisely. Algolia’s writers were technical experts producing genuinely excellent content that sat on Page 9. The problem wasn’t quality. The team was using internal jargon instead of the language their audience actually typed into Google. 

After adopting Clearscope, their SEO manager Vince Caruana said the tool helped the organization “start writing for our audience instead of ourselves” by breaking out of internal vocabulary. Blog posts moved from Page 9 to Page 1 within weeks. Not because the writing improved, but because the vocabulary finally matched search behavior.

Google’s own SEO Starter Guide acknowledges this dynamic, noting that users might search for “charcuterie” while others search for “cheese board.” Content optimization tools surface that gap by showing you the actual vocabulary of pages that have already demonstrated retrieval success. 

You can do everything a tool does manually by reading top results and noting common themes, but the tools automate hours of SERP analysis into minutes. At $79 to $399 per month, the investment is justified when teams publish frequently in competitive niches or assign work to freelancers lacking domain expertise. For a solo blogger publishing once or twice a month, manual analysis works fine.

What about AI-powered retrieval?

Dense vector embeddings are the same core technology behind LLMs and AI-powered search features. They compress a document into a fixed-length numerical representation and can match semantically similar content even without shared keywords. Google uses them via RankEmbed, but they supplement lexical retrieval rather than replace it.

The reason is computational: A 768-dimensional embedding can preserve only so much information, and research from Google DeepMind’s 2025 LIMIT paper showed that single-vector models max out at roughly 1.7 million documents before relevance distinctions break down — a small fraction of Google’s index. Multiple studies, including findings on the BEIR benchmark, show hybrid approaches combining BM25 with dense retrieval outperform either method alone.
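To make the supplement-not-replace relationship concrete, here is a toy sketch of hybrid retrieval scoring in plain Python. The alpha mixing weight and the vectors are illustrative assumptions, not values disclosed by Google or the cited research.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def hybrid_score(lexical_score, query_vec, doc_vec, alpha=0.7):
    """Blend a lexical (BM25-style) score with dense-vector similarity.
    alpha is a hypothetical mixing weight, not a known production value."""
    return alpha * lexical_score + (1 - alpha) * cosine(query_vec, doc_vec)

# A page with zero keyword overlap (lexical score 0) can still surface
# via embedding similarity -- but only as a weaker, supplementary signal.
print(hybrid_score(0.0, [0.9, 0.1, 0.4], [0.8, 0.2, 0.5]))
```

The design point the sketch illustrates: dense similarity can rescue a page the lexical stage missed, but with the lexical term dominating the blend, covering the vocabulary yourself remains the higher-percentage play.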

The bottom line for practitioners is clear: The AI layer matters, but it sits lower in the pipeline, and the traditional retrieval stage your content tools map to still does the heavy lifting at scale.

How to actually use content scoring tools

This is where most guidance on content tools falls short. The typical advice is “use Surfer/Clearscope, get a high score, rank better.” 

That misses the point entirely. Here’s a framework built on how these tools actually intersect with Google’s retrieval mechanics.

Prioritize zero-usage terms over everything else

The highest-leverage action these tools identify is a term with zero mentions in your content. That’s a term where your retrieval score is literally zero, and you’re invisible for every query containing it. Going from zero to one mention is the single most impactful edit you can make. Going from four mentions to eight is nearly worthless because of the saturation curve.

When reviewing tool recommendations, filter for terms you haven’t used at all. Clearscope’s “Unused” filter does this explicitly. 

Ask yourself: Does this missing term represent a subtopic my audience would expect me to cover? If yes, work it in naturally. If the tool suggests a term that doesn’t fit your angle — a beginner’s guide doesn’t need advanced technical terminology — skip it. 

A high score achieved by forcing irrelevant terms into your content is worse than a moderate score with genuinely useful writing. As Ahrefs noted in its 2025 study, “you can literally copy-paste the entire keyword list, draft nothing else, and get a high score.” That tells you everything about the limits of chasing the number.

Be selective about which competitor pages you analyze

Default settings on most tools pull from the top 10 to 20 ranking pages, which frequently includes Wikipedia, major media outlets, and enterprise sites with overwhelming domain authority. These pages often rank despite their content, not because of it. Their term patterns reflect authority advantage, not content quality, and they’ll skew your recommendations.

A better approach: Look for pages that rank for a high number of organic keywords on mid-authority domains. 

Ahrefs’ data shows the average page ranking No. 1 also ranks in the top 10 for nearly 1,000 other keywords. A page ranking for 500 keywords on a DR 35 site has demonstrated broad retrieval success through vocabulary and topical coverage, not just backlinks. Those pages contain term patterns proven effective across hundreds of separate retrieval events, not just one. 

In most tools, you can manually exclude specific URLs from competitor analysis. Remove the Wikipedia pages, the Amazon listings, and any high-authority site where you know authority is doing the work. What’s left gives you a much cleaner picture of what content actually needs to include.

Use tools during research, not during writing

The worst workflow is writing with the scoring editor open, watching your number tick up in real time. That pulls your attention toward keyword insertion instead of communicating expertise. Practitioners reporting the worst experiences with these tools tend to be the ones writing to a live score.

The better workflow: Run the tool first. Review the term list. Identify gaps in your outline, especially terms with zero usage that represent subtopics you should cover. Then close the tool and write for your reader. 

Run it again at the end as a sanity check. Did you miss any major subtopics? Add them. Is the score significantly lower than competitors? That’s information worth investigating. But your job is to build the best page on the internet for this topic, not to match a number.

Understand that content is one player in the game

NavBoost, RankEmbed, PageRank-derived quality scores, site authority, click data, and engagement signals all operate on the candidate set that first-stage retrieval produces. Content optimization gets you through the gate. It doesn’t win the race. 

If you optimize a page, push the score to 90, and don’t see ranking improvements, that doesn’t mean the tool failed. It likely means the other ranking factors — backlinks, domain authority, and click signals — are doing more work for your competitors than content alone can overcome.

This is especially important when scoping on-page optimization projects. Be honest about what content changes can and can’t accomplish. If a page is on a DR 15 domain competing against DR 70+ sites, perfect content optimization is necessary but probably not sufficient. 

When a client asks why they’re not ranking after you pushed their score to 95, the answer shouldn’t be “we need more content.” It should be a clear explanation of which part of the problem content solves — retrieval — which parts it doesn’t — authority, engagement, brand — and what the next strategic move actually is.

Focus on going beyond, not just matching

The philosophy behind these tools — structure your content after what top results cover — is sound. You need to demonstrate topical relevance to enter the candidate set. But the goal isn’t to produce another version of what already exists.

The pages that rank broadly, the ones that show up for hundreds or thousands of keywords, consistently do more than match the competitive baseline. They add original research, practitioner experience, specific examples, or angles the existing results don’t cover.

Surfer SEO’s December 2024 study supports this. It measured “facts coverage” across articles and found that top-performing content by keyword breadth had significantly higher coverage scores than bottom performers.

The content that ranks for the most queries doesn’t just include the right terms. It includes more information, more specifically. Use the tool to establish the floor of topical coverage. Then build the ceiling with value the tool can’t measure.

A note on entities

Google’s Knowledge Graph contains an estimated 54 billion entities. Entity understanding becomes most powerful in the later ranking stages where BERT and DeepRank process final candidates. 

Some content tools are starting to incorporate entity analysis, but even the best versions present entities as flat keyword lists, missing the relationships between entities that Google’s systems actually evaluate. 

Knowing that “Dr. Smith” and “rhinoplasty” appear on your page is different from understanding that Dr. Smith is a board-certified surgeon with published research at a specific institution. That relational depth is what Google processes, and no content scoring tool currently captures it. 

Treat entity coverage as an additional layer beyond what keyword-focused tools measure, not a replacement for the fundamentals.


Retrieval before ranking

Content optimization tools work because they’ve reverse-engineered the vocabulary of the retrieval stage. That’s a less exciting claim than “they’ve cracked Google’s algorithm,” but it’s the honest one, and it’s supported by what the DOJ trial revealed about Google’s infrastructure.

Use these tools to identify missing terms and subtopics. Be skeptical of exact frequency targets. Exclude high-authority outliers from your competitor analysis. Prioritize zero-usage terms over further optimization of terms you’ve already covered. 

Understand that a perfect content score addresses one stage of a multi-stage pipeline and use the competitive baseline as your floor, not your ceiling. The content that ranks the broadest isn’t the content that best matches what already exists. It’s the content that covers what already exists and then goes further.


How to Create a Wikipedia Page for Your Company

Wikipedia is a fascinating experiment. It’s a community-built encyclopedia that’s always in motion. It runs on volunteer energy and openly shared infrastructure, and it’s closer to an open-source project in how it’s built than to a traditional printed encyclopedia. Anyone can write, edit, and debate what belongs on a page.

And that’s the twist. The “truth” on Wikipedia isn’t handed down by a single editor or community member. It’s negotiated in public, guided by community standards, citations, and a whole lot of conversation. Contributors don’t so much control a subject’s story as they continually test it. They’re constantly asking questions: What can we verify? What deserves weight? What’s missing?

When you read a Wikipedia article, you’re seeing a current snapshot of a living, evolving community decision.

This whole experiment has scale, too. As of February 6, 2026, the English Wikipedia had 7.13 million articles, and the project spanned more than 340 languages.

If you’re thinking about creating a Wikipedia page for your company, it helps to know what you’re signing up for. Wikipedia isn’t a marketing channel, and it isn’t designed for companies to shape their narrative. 

It’s designed to summarize what independent, reliable sources have already said about a company, so not every organization qualifies for a stand-alone article. Wikipedia cautions that only a small percentage of organizations meet the requirements for an article in the first place.

The easiest way to orient yourself with the platform is to keep Wikipedia’s “five pillars” top of mind. Wikipedia is, first and foremost, an encyclopedia. It aims for a neutral point of view, the content is free for anyone to use and edit, editors are expected to be civil, and there are no hard-and-fast rules, just policies and guidelines applied with judgment.

If your company is genuinely notable by Wikipedia’s standards and you’re willing to play by its guidelines, there’s a real visibility upside in a solid, well-sourced page that holds up over time.

Key Takeaways

  • Wikipedia isn’t for marketing. If a Wikipedia page reads like company positioning, a feature brochure, or a pricing page, it’ll get rejected, reverted, or flagged. Even if other company pages “get away with it,” focus on creating a deeply researched, informative draft that demonstrates strong notability in Wikipedia’s eyes.
  • Notability = independent coverage. You need multiple strong secondary sources (real reporting with editorial standards). Press releases, paid placements, niche trade mentions, and contributor “interviews” don’t hold up.
  • Sources drive the outline (and the page). Build your outline from what your credible secondary sources already cover. Possible sections could include a lead, history, high-level operations, leadership, or controversies, if documented. Each company’s outline may look different depending on what information can be strongly sourced. If you can’t source a section cleanly, it doesn’t belong.
  • Use Wikipedia’s Articles for Creation (AfC) process to avoid conflict of interest (COI) roadblocks. If you’re connected to a company or paid to write a Wikipedia page for them, you must disclose it and lean on the AfC process instead of directly pushing a company page live.
  • Getting published isn’t the finish line. Volunteers continuously review pages. Expect ongoing edits, scrutiny, and occasional challenges, so monitor a live page and keep it updated with strong, independent citations.

What Are the Benefits of Creating a Wikipedia Page?

The most significant benefit of Wikipedia is its sheer size and reach. It is one of the most visited websites in the world, averaging more than 1.1 billion unique visitors per month.

In addition to the size of its audience, the platform offers other benefits to marketers and company owners:

  • Credibility via independent validation (earned, not claimed): A live Wikipedia page signals that reliable, third-party sources have covered your organization in a meaningful way. For journalists, partners, investors, and enterprise buyers, this can reduce skepticism during research.
  • Search and AI visibility (off-page, long-term): Wikipedia tends to surface prominently in search results and is commonly referenced by knowledge systems. A well-sourced page can support progress in how your company appears in search features, AI overviews (AIOs), and large language model (LLM) output, based on what independent sources say, not what a company wants to say.
  • A neutral orientation page for readers: Wikipedia’s format helps readers quickly understand a company’s basics, including history, products or services, leadership, milestones, and context. The tradeoff is accessible neutrality. Anything included needs support from reliable secondary sources, and promotional language rarely lasts.
  • Clarity and disambiguation: If your name overlaps with other companies, or your story includes mergers, rebrands, or multiple founders, Wikipedia can help people land on the right entity and timeline.
  • A durable reference hub: A good Wikipedia page often becomes a stable directory of the strongest independent sources about you, such as press, books, and other reputable coverage, so readers can verify details without relying on your website alone.
  • Consistency across the web (a quiet multiplier): Wikipedia and related knowledge sources are reused in many downstream places. When the facts are clean, cited, and consistent, it can improve how your company is represented across third-party profiles and information panels over time.

A Wikipedia page is rarely a conversion engine, and it isn’t a place to “own” your story. The value is credibility and discoverability that can compound, but benefits can vary based on the strength of independent coverage and ongoing community scrutiny.

Below, we’ll cover the 10 steps on how to create a Wikipedia page, as well as considerations to keep in mind.

1. Check to See If Your Company Is a Good Fit for a Wikipedia Page

Before you think about how to create a Wikipedia page for your company, you need to answer one question:

Would Wikipedia editors consider your company “notable”?

On Wikipedia, “notability” has nothing to do with how compelling your company story is. It means there’s enough independent, reliable coverage about your company that an article can be written from what third parties have already published, without filling in gaps with interpretation, insider knowledge, or marketing claims.

This is also where a lot of brand teams get tripped up. Again, Wikipedia isn’t a marketing channel. It’s not a place to shape messaging or control a narrative. If the only story you can tell is the one you want to tell, the page will be declined during initial submission review or deleted later.

What Notability Actually Looks Like

A company is usually considered notable when it receives significant coverage in multiple reliable sources independent of the company. “Significant coverage” is the key phrase here. Editors are looking for articles that discuss your company in real depth, not quick mentions or short blurbs.

A helpful way to think about it is this: if you can’t outline a neutral article using independent secondary sources alone, you probably don’t have enough notability yet.

Editors typically want coverage that checks these boxes:

  • Independent: Truly third-party reporting. Not press releases, paid placements, sponsored posts, advertorials, partner blogs, or content your PR team arranged. If a piece exists because the company made it happen, editors tend to discount it.
  • Significant: More than a passing mention. A funding announcement, product launch blurb, or event listing can be real coverage and still not be enough. The strongest sources are the ones that explain context, impact, history, or controversy in detail.
  • Secondary: Sources that analyze, summarize, or report on the company from the outside. Primary sources like your website, blog, press page, or social channels can support basic facts in limited cases, but they do not establish notability.
  • Reliable: Publications with editorial oversight and a reputation for accuracy. Big-name outlets can help, but they are not the only option. Trade and industry publications can be excellent sources when they have real editorial standards and provide in-depth coverage, but you can rarely use them to establish notability.
  • Multiple and sustained: A single great source is rarely enough on its own. Editors want to see more than one strong source, ideally across time, so the page can hold up after more people review it.
  • Neutral tone: Even when a source is independent, it can still be weak if it reads like promotion. Glowing profiles, “thought leadership” posts, or contributor content that feels like marketing often carry less weight than staff-reported coverage.

One nuance that matters a lot in practice is that “lots of links” does not equal notability. Companies can appear all over the internet through routine announcements and PR-driven writeups and still fail Wikipedia’s notability test.

What matters is whether independent sources have treated the company as worthy of real, substantive coverage. It also means trade magazines often can’t serve as reliable coverage to establish notability: many industry leaders run trade organizations, which creates a conflict of interest (COI, in Wikipedia’s terms) if their trade publication covers their own company or the companies of friends or contributors.

If your company does not meet this bar yet, that’s not a judgment on it. It just means a Wikipedia article is likely premature, and the better move is to wait until there is enough independent coverage to support a neutral, well-sourced page.

A Note on Conflict of Interest (COI)

If you’re writing about your own company (or you’re paid to write for a company), Wikipedia considers that a conflict of interest (COI). That doesn’t automatically ban you from participating, but it does change how you should approach it.

When creating a new page, submit it to Articles for Creation (AfC) to ensure community editors review it properly. 

When editing an existing page, draft your changes in a Sandbox (a personal workspace where you can safely draft and refine changes to an article before submitting them for public review). Then post that Sandbox draft on the live article’s Talk page, along with a comment asking community members to review and collaborate on the edits you suggest. Once community consensus is reached, you can push those edits or additions live. 

An example of a sandbox page on Wikipedia.

Source: https://courses.shroutdocs.org/tutorials/editing-your-wikipedia-sandbox/

It’s also a good idea to disclose your COI connection. Your disclosure should be one of the following:

  • A statement on your User page.
  • A statement on the Talk page accompanying any paid contributions.
  • A statement in the edit summary accompanying any paid contributions.

Avoid directly creating or heavily editing an article and stick to Wikipedia’s COI process to request edits for independent editors to review.

Again, this is about expectations. If your team is hoping to just write a draft and hit “publish,” like you do with a blog, you’re going to have a bad time. But if you do have strong, independent coverage from credible outlets, you’ve got a real shot and can move to the next step.

2. Create a Wikipedia Account

Creating an account is a practical next step if you plan to contribute to Wikipedia. While you don’t need an account to read Wikipedia (or even to edit some pages), registering gives you features that make collaboration and transparency easier.

With an account, you can:

  • Create a User page (a simple profile and a place to draft in a Sandbox).
  • Use your Talk page to communicate with other editors.
  • Build an edit history tied to your username (helpful for credibility and continuity).
  • Work through article creation more smoothly, including drafting and submitting via AfC.

If you add images to your User page, make sure they’re properly licensed. Wikipedia generally accepts only freely licensed uploads.

To register, use Wikipedia’s account creation form.

The Create Account Page on Wikipedia.

After that, you’re set up to start editing, drafting, and participating in the community.

3. Contribute to Existing Pages

Quick reminder from earlier: If you’re connected to the company, you’re dealing with a COI. That’s why Wikipedia prefers that company pages undergo independent review before publication.

As a newbie, a good way to get comfortable on Wikipedia is to start by editing existing articles that have nothing to do with your organization. When you spend time improving clarity, tightening wording, and backing up facts with solid sources, you learn how Wikipedia works, and you build a history of helpful contributions.

As you do that, your account may become autoconfirmed. That usually happens automatically after your account has existed for more than four days and you’ve made at least 10 edits. Autoconfirmed status primarily grants a few basic permissions, such as creating pages and editing some semi-protected articles.

An Autoconfirmed Wikipedia account.

Here’s the key point, though: “Autoconfirmed” does not change your COI situation. Even if you can technically publish a page directly, a company-related article should still be written as a draft and submitted through AfC. This is the step that gets you the independent review Wikipedia expects, and it’s the safest, most appropriate route for a company page.

4. Conduct Research and Gather Sources

Before you write a single line of your Wikipedia draft, do the homework. Wikipedia doesn’t reward unsourced storytelling. The platform cares about verifiability, meaning every meaningful claim must be backed by a reliable secondary source that an editor can check. Your company story can play well on Wikipedia, as long as there’s enough reliable evidence to back it up. 

This is where most company pages fall apart. Not because the company isn’t real, but because the sources are thin, biased, or too “inside baseball.”

Why sources matter so much on Wikipedia

Wikipedia runs on two big rules:

  • No original research: You can’t “introduce” new facts, even if they’re true, without proper citation. Which leads to the next point…
  • Cite everything that matters: If it’s notable, controversial, or specific (revenue, awards, history, key dates, acquisitions), you need a secondary source to back it up.

Primary vs. secondary vs. tertiary sources (and how Wikipedia treats them)

Wikipedia breaks sources down into three categories: primary, secondary, and tertiary. Here is a look at each and how they play into the strength of your Wiki page:

  • Primary sources (you): Your website, press releases, investor decks, published reports, and regulatory filings (e.g., Securities and Exchange Commission (SEC) filings).
    • Upside: Can work for basic, factual details (launch dates, historical milestones, etc.).
    • Downside: Biased by default. Editors won’t accept these for “notability” or big claims like “industry leader.”
  • Secondary sources (best for Wikipedia): Independent journalism, books, academic analysis, reputable profiles.
    • Upside: Shows the world noticed you. This is the backbone of the strongest pages.
    • Downside: Harder to earn, and fluff pieces don’t carry much weight.
  • Tertiary sources: Encyclopedias, databases, reputable directories.
    • Upside: Useful for quick confirmation and context.
    • Downside: Often too shallow to prove notability on their own.

Overall, secondary sources matter most to your success. They let you summarize what independent experts have said about a company or topic in Wikipedia’s voice, and relying on them gives you the strongest case for notability in Wikipedia’s eyes.

What Makes a Good Wikipedia Source?

Good Wikipedia sources combine substantive coverage with real editorial standards. Think major publications, local newspapers of record, respected business outlets, and independent industry analysis. If you’re short on that kind of coverage, that’s usually a PR problem, not a Wikipedia problem. Strengthening your digital PR (DPR) efforts can help you earn credible mentions that hold up under editor scrutiny.

But DPR for a Wikipedia use case must be handled carefully. What tends to work is focusing on independent coverage first. This looks like pitching credible story angles to journalists and outlets that genuinely cover your industry, and accepting that they may say no, or cover the story in a way you can’t control.

When an outlet does publish real, editorial reporting, that’s the kind of secondary source Wikipedia editors are more likely to accept.

Reliable Sources at a Glance

After seeing what Wikipedia editors consider reliable, you might wonder where to find sources that hit all of those criteria. It helps to look at real-world examples of the source types that work best for company pages. Here are some of the kinds of sites you can choose from.

For company pages, the sources that matter most are the ones that provide significant, independent coverage; the kind that demonstrates notability and gives editors something substantial to cite.

  • Major national/international newsrooms (strongest for notability + facts): Reuters, AP, BBC, Financial Times, The Wall Street Journal, Bloomberg, The New York Times, The Washington Post, NPR (news reporting over opinion).
  • Reputable business and investigative reporting: Deep dives and investigations from established outlets (e.g., ProPublica) can be highly valuable, especially for controversies, legal issues, and accountability reporting.
  • High-quality trade press with editorial oversight (context-dependent): Useful for industry coverage when it’s independent and more than a product announcement or reposted PR. You cannot use trade press as a primary indicator of notability, though.
  • Books from reputable publishers: Especially helpful for founders, company history, and industry impact when written by independent authors and published by established presses.
  • Government and major non-governmental organization (NGO) reports (within remit): Strong for regulatory actions, enforcement, public contracts, or formal assessments (but not a substitute for independent secondary coverage).
  • Medical/health claims (only when relevant): For biomedical statements, prioritize high-quality secondary sources like systematic reviews and authoritative guidelines (MEDRS standard), not individual studies or marketing claims.

Check out Wikipedia’s Perennial Sources list to see how the community rates outlets that come up often, based on their track record for fact-checking and editorial standards. But remember, those ratings are contextual; the list isn’t a whitelist.

Non-reliable Sources

To paint a clearer picture, here are some of the sources you should avoid:

  • Self-published/user-generated content (UGC): Personal blogs, Substack/Medium posts, self-hosted sites, most social media. 
  • Press releases/advertorial: Company press rooms, PR wires; these are fine to state that an announcement occurred, not to establish third-party facts or notability. 
  • Sensational/tabloid sources: Outlets known for gossip/sensationalism; poor for verifying facts. 
  • Anonymous forums and crowdsourced threads: Message boards, comment sections, most Reddit/4chan/Discord posts. 

Wikipedia views these types of sources as weaker because they aren’t research-backed, trustworthy, or credible. The common thread is that they undergo minimal editorial oversight (if any) or, in Reddit’s case, most of the content is UGC and self-published. 

5. Research Your Competition

Like many things when it comes to Wikipedia, researching your competitors is fine if you do it the right way. As you start your research, view your competitors’ pages through the lens of what Wikipedia editors ultimately want. 

The challenge here is that Wikipedia isn’t perfectly consistent. Some company pages are old, lightly monitored, or haven’t been updated to match today’s standards.

When someone says, “But other pages include feature lists and product tier breakdowns,” that doesn’t really matter. Editors don’t treat “other pages do it” as a justification. They judge your page on whether it reads like an encyclopedia entry and whether it’s backed by independent, reliable sources.

General Competitor Research Rules

Use competing Wiki pages to answer questions like:

  • What’s the typical structure for a company page in your category? Take note of the typical section titles. (We’ll dive into this next.) 
  • What kind of claims survive without getting reverted? (Neutral, sourced, non-promotional.)
  • What sources are doing the heavy lifting on pages that stay live?

A “Wiki-safe” Research Method

Pick 3–5 competitors with live pages, then audit them like an editor would:

  1. Scan the citations first. Are they mostly independent, secondary news coverage, press releases/company sites, or paid placements?
  2. Check the tone. If it reads like a promotional brochure (feature-by-feature, pricing tiers, “best-in-class”), that’s a red flag, even if it hasn’t been removed yet.
  3. Look at the page history and Talk page. Lots of reverts, banners, or sourcing disputes usually mean the page is shaky.
  4. Note what’s missing. If competitors avoid detailed feature lists, that’s usually a sign that those details don’t belong on Wikipedia.
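The citation scan in step 1 can be roughed out in code. Here’s a minimal sketch, assuming you’ve copied a competitor page’s reference URLs into a list; the domain buckets below are illustrative placeholders, not an official classification:

```python
from collections import Counter
from urllib.parse import urlparse

# Illustrative domain buckets -- adjust for your industry.
INDEPENDENT = {"reuters.com", "apnews.com", "bloomberg.com", "nytimes.com"}
PR_WIRES = {"prnewswire.com", "businesswire.com", "globenewswire.com"}

def audit_citations(reference_urls, company_domain):
    """Bucket a page's reference URLs the way a reviewer might scan them."""
    buckets = Counter()
    for url in reference_urls:
        domain = urlparse(url).netloc.removeprefix("www.")
        if domain == company_domain:
            buckets["primary (company site)"] += 1
        elif domain in PR_WIRES:
            buckets["press release / PR wire"] += 1
        elif domain in INDEPENDENT:
            buckets["independent secondary"] += 1
        else:
            buckets["unclassified -- check manually"] += 1
    return buckets

# Hypothetical reference list pulled from a competitor's page.
refs = [
    "https://www.reuters.com/technology/example-story",
    "https://www.prnewswire.com/news-releases/example",
    "https://www.example-company.com/about",
]
print(audit_citations(refs, "example-company.com"))
```

A page whose tally skews toward “primary” and “PR wire” is exactly the shaky kind editors eventually challenge, regardless of how long it has survived.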

6. Create an Outline

Once you’ve got your sources, your outline has a starting point. The hard part is deciding what belongs.

On Wikipedia, an outline is not “everything you want to say.” It’s you making careful decisions about what independent, reliable sources have actually covered, what they have not covered, and what deserves space without turning the page into a brochure. That takes judgment, and it often takes multiple passes.

The mindset you want is simple: Wikipedia pages are built around what reliable secondary sources already said about the subject. Your outline is how you organize those sourced facts into a structure that editors recognize and are willing to review.

Start with the standard Wikipedia “shape”

Most company pages follow a formulaic layout:

  • Infobox (quick facts): Founded, founders, headquarters, industry, key people, website, and similar basics. Only include items you can verify.
  • Lead (opening summary): 2–4 neutral sentences explaining what the company is, where it’s based, what it does at a high level, and why it’s notable. This is not a tagline.
  • History: Founding, major milestones, expansions, acquisitions, funding or IPO, and major pivots, but only if independent sources cover them. Focus on events that third parties actually reported.
  • Operations/Business (optional, and only if sourced): What the company does at a high level and what markets it serves. Avoid feature-by-feature descriptions and pricing tiers.
  • Leadership/Ownership (optional): Only if reliable sources discuss executives, ownership changes, or governance in a meaningful way.
  • Reception/Controversies (only if they exist in sources): Reviews, notable criticism, legal issues, regulatory actions, all written neutrally and backed by sources.
  • See also / References / External links: References do the heavy lifting; external links are usually minimal (often just the official site).
An example company Wikipedia page.

Using Your Sources to Build the Outline

Start with your strongest independent secondary sources and work outward. As you read through them, you’re identifying what the coverage actually emphasizes.

As you review sources, pull out:

  • Events they cover (those become history sections)
  • Claims they support (those become lead and operations sections)
  • Any recurring themes across sources (those become section headings)

Each major section in your outline should be supported by multiple secondary sources, not a single mention. Also, keep an eye on the length as you draft. Wikipedia discourages overly long articles unless the amount of independent coverage truly warrants it. If a section or topic isn’t discussed in depth by reliable secondary sources, it usually doesn’t belong at length in the article.

If you focus on covering the topic from an encyclopedic angle and you leave out anything that feels like marketing, you will give your draft a much better chance of surviving review.

7. Write a Draft of Your Wikipedia Page

Take your time as you write a draft of your Wikipedia page from your outline. You want your content to be source-backed, thorough, thoughtful, and genuinely useful, giving readers the information they came for.

At this stage, it’s best to write your draft in a Wikipedia Sandbox. As mentioned earlier, this is a personal workspace where you can draft safely, revise freely, and share the link with others for informal feedback without accidentally publishing anything live.

While a Wikipedia page can support your broader visibility, the platform’s purpose is encyclopedic and impartial. Anything that reads as emotional, salesy, or promotional is likely to be flagged and can lead to rejection later in the process.

Aim for short, direct sentences that stick to verifiable facts. And those facts need strong secondary sources. For example, if you write, “Spot ran to the big oak tree yesterday,” that claim would need a source. Not just any source, but a credible, independent secondary source that Wikipedia considers reliable.

It’s also critical to remember you’re writing on behalf of Wikipedia. In other words, you’re writing in Wikipedia’s impartial, neutral voice.

Here are some examples to show what this looks like in practice:

Example 1: Product Description

  • Promotional: “XYZ Software is a revolutionary, industry-leading platform that empowers businesses to achieve unprecedented productivity gains. With its cutting-edge AI technology and intuitive interface, XYZ transforms the way teams collaborate, delivering exceptional results that exceed expectations.”
  • Neutral: “XYZ Software is a project management platform that combines task tracking, team messaging, and file sharing. The software is used by businesses to coordinate work across departments.[1][2]”

Example 2: Company History

  • Promotional: “Founded by visionary entrepreneur Jane Smith, the company quickly rose to prominence as a game-changer in the industry. Through relentless innovation and unwavering commitment to excellence, it has become the trusted choice for Fortune 500 companies worldwide.”
  • Neutral: “The company was founded in 2015 by Jane Smith in Seattle.[3] It launched its enterprise tier in 2019 and rebranded from “TaskFlow” to its current name in 2021.[4][5]”

Wikipedia also defines “promotional” language more broadly than most marketers expect. It’s about more than words like “revolutionary” or “legendary.” Factually correct statements can still read as promotional to an editor because of their structure and emphasis:

  • Long, comprehensive feature inventories.
  • Plan/tier breakdowns that resemble packaging (“Free vs. Premium vs. Enterprise”).
  • Performance claims that read like sales positioning.
  • Product-benefit phrasing stacked repeatedly (“includes tools for…,” “enables…,” “helps…”).
  • Details that feel like purchase guidance (pricing, quotas, storage limits, admin entitlements).
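As a rough first pass before a careful human read-through, you could screen your own draft for patterns like the ones above. The regex patterns and sample text below are illustrative assumptions, and no script substitutes for editor judgment:

```python
import re

# Red-flag patterns loosely based on the criteria above -- a crude
# first-pass screen, not Wikipedia's actual definition of "promotional."
RED_FLAGS = [
    r"revolutionary|industry[- ]leading|best[- ]in[- ]class|cutting[- ]edge",
    r"\bempowers?\b|\benables?\b|\bhelps you\b",
    r"free vs\.? premium|pricing tiers?|\$\d+(?:\.\d+)?/(?:mo|month|yr|year)",
]

def flag_promotional(draft_text):
    """Return every red-flag phrase found in a draft."""
    hits = []
    for pattern in RED_FLAGS:
        hits += re.findall(pattern, draft_text, flags=re.IGNORECASE)
    return hits

# Hypothetical draft sentence to screen.
sample = ("XYZ is an industry-leading platform that empowers teams. "
          "Pricing tiers start at $9/mo.")
print(flag_promotional(sample))
```

Anything the screen flags is worth rewriting in the flat, sourced style shown in the examples above.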

Let’s talk about specs and features for a second. If your company is well-known for a particular product or service, it can be tempting to include a specification or feature list on your Wikipedia page. Unfortunately, that can cause problems for several reasons:

  1. Wikipedia isn’t a manual or catalog: Wikipedia tries to avoid becoming vendor documentation. Specs and feature matrices belong on the company site, in the documentation center, in release notes, or on third-party comparison sites, not in an encyclopedia.
  2. Specs change constantly: Feature sets, tiers, storage limits, and admin/security capabilities change frequently. Wikipedia content must remain stable and verifiable over time. Highly granular spec content becomes outdated quickly and attracts disputes.
  3. It’s hard to verify neutrally: If the only source for a feature or tier is the vendor’s own site or press release, Wikipedia considers that primary sourcing; useful for limited factual verification, but not ideal for describing capabilities in detail or making value claims.
  4. “Undue weight” and imbalance: Even accurate feature lists can give a product more prominence than independent sources do. Wikipedia tries to reflect external coverage: if reliable third parties don’t treat a feature as notable, Wikipedia typically won’t either.

What a Company’s Wikipedia Draft Should Look Like

Given all of Wikipedia’s guidelines, it can be hard to picture what an acceptable draft actually looks like. Here’s a brief rundown of what a solid draft should include when you’re done:

  • A clear, high-level description of what the company is (one paragraph, not a feature catalog).
  • A history/timeline of major milestones (launches, renames, major releases) backed by independent sources.
  • Widely covered integrations/partnerships, only when reported by reliable third parties.
  • A short, selective “features” summary, only for capabilities that independent sources treat as notable and cover in depth.

8. Upload Your Page into the Article Wizard

Once your Sandbox draft is in good shape, move over to the Wikipedia Article Wizard. The Wizard is the guided tool that helps you move what you wrote from your Sandbox into Wikipedia’s Draft space, which is where new articles are typically prepared before they go live.

For company-related pages, the key takeaway is that the Wizard is the structured path to getting your draft into the right place so it can be submitted for independent review.

The Wikipedia Article Wizard confirming a page was uploaded.

9. Submit Your Article for Review

Now that your draft is in Draft space, you’re ready for the step that triggers formal evaluation by the community. Submit your draft through Articles for Creation by clicking “Submit for review.” This is when your draft enters the AfC queue, and a volunteer reviewer takes a look.

The timeline can range from a few weeks to a few months, depending on backlog and whether the reviewer requests changes. It’s also common for drafts to be declined at first, with feedback you’ll need to address before approval.

At NPD, we’ve found that sticking with AfC is the best practice for companies looking to go live. Even though autoconfirmed accounts may have the technical ability to publish directly, that path often creates more friction for company-related topics. AfC sets expectations for independent review from the start and helps reduce avoidable issues related to COI and other Wikipedia guidelines.

10. Continue Making Improvements

Once your page is accepted, the work is not really over.

Wikipedia is editable by anyone, so changes can happen at any time. Some edits will be helpful, some will be mistaken, and some may reflect a negative point of view. The best approach is to keep an eye on the page so you can understand what is changing and respond appropriately, usually by suggesting improvements on the Talk page or updating the article with strong, independent sourcing.

As the page gets more visibility and gains traction on Google and LLMs, focus on accuracy and neutrality rather than “updating marketing messaging.” Wikipedia is not the place for routine product updates, but it is the right place to reflect significant, well-covered developments when reliable third-party sources have written about them.

You should also plan for the possibility that your draft will be declined. That is common, especially for company-related topics. If it happens, do not get discouraged. Read the reviewer’s comments carefully, make the requested changes, and resubmit when you have addressed the specific issues that kept the draft from being accepted.

FAQs

Should I build a Wikipedia page for my company?

A Wikipedia page can be a meaningful credibility asset, but it isn’t a fit for every company. The deciding factor is whether there’s enough independent, reliable secondary coverage to support a neutral article. If you can’t outline the page using third-party sources alone, it’s usually too early.

If your company does qualify, the value tends to be indirect: stronger brand legitimacy, clearer “who you are” context in search results, and more consistent entity information across the web. It’s less about immediate conversions and more about long-term visibility and trust signals that can compound.

Is it difficult to create a Wikipedia page for my company?

Yes. Creating, publishing, and maintaining a company page is challenging because Wikipedia is community-reviewed and built around strict expectations: neutral tone, verifiable claims, and high-quality sourcing. You also have to plan for ongoing edits and scrutiny after the page goes live.

The opportunity is achievable if you have strong independent coverage and treat the process as encyclopedic documentation rather than company messaging.

How do I know if my Wikipedia page will be published?

There’s no guaranteed way to know. Even well-prepared drafts can be declined, revised, and resubmitted, especially for company topics.

Your best indicators are practical: you have multiple independent sources with significant coverage, your draft reads neutrally (not like marketing), and you submit through the Articles for Creation (AfC) process so reviewers can evaluate it in draft space.

How long will my Wikipedia article be under review before publication?

Review time varies widely. Some drafts are reviewed quickly, but it’s also common for company-related submissions to take weeks (or longer) depending on backlog and how many revisions are needed. A decline doesn’t mean “never”; it usually means “not yet” or “needs stronger sourcing and a more neutral rewrite.”

Conclusion

If you’re looking to increase traffic, improve your search everywhere visibility, or build credibility, Wikipedia can be part of the equation. But it’s not a marketing channel, and it isn’t built for companies to shape their narratives. It’s a community-edited encyclopedia that summarizes what independent, reliable sources have already said about you.

Where Wikipedia can help is in discovery and trust signals. A stable, well-sourced page often shows up prominently for company and topic queries, and it can reinforce consistent “entity facts” that search engines and other knowledge systems use to understand companies. 

That’s also why Wikipedia often pairs well with entity SEO. When key details about your organization are documented consistently across reputable sources, your company is easier to interpret and surface accurately across platforms, including some LLM-style experiences. Results may vary based on implementation, the strength of independent coverage, and ongoing community review.

As you evaluate whether your company is a good fit for a Wikipedia page, keep in mind that the process is complicated, and it won’t be fully in your control. What matters most is having enough independent, reliable secondary coverage to justify a stand-alone article and being willing to follow Wikipedia’s COI expectations.


How to Build Audience Personas for Modern Search + Template

Search has changed, and so should your audience personas.

Your audience searches across Google, ChatGPT, Reddit, YouTube, and many other channels.

Knowing who they are isn’t enough anymore. You need to know how they search.

Search-focused audience personas fill gaps that traditional personas miss.

Think insights like:

  • Where this person actually goes for answers
  • What triggers them to look for solutions right now
  • Which proof points win their trust

And you don’t need months of research or expensive tools to build them.

An audience persona is a profile of who you’re creating for — what they need, how they search, and what makes them trust (or tune out). Done well, it aligns your team around a shared understanding of who you’re serving.


In this guide, I’ll walk you through nine strategic questions that dig deep into your persona’s search behavior. I’ve also included AI prompts to speed up your analysis.

They’ll help you spot patterns and synthesize findings without the manual work.

By the end, you’ll have a complete audience persona to guide your content strategy.

Free template: Download our audience persona template to document your insights. It includes a persona example for a fictional SaaS brand to guide you through the process.


1. Where Is Your Audience Asking Questions?

Answer this question to find out:

  • Where you need to build authority and presence
  • Which platforms to target for every persona
  • Which formats work well for each persona


Knowing where your persona hangs out tells you which channels influence their decisions.

So, you can show up in places they already trust.

It also reveals how they think and what will resonate with them.

For example, someone posting on Reddit wants honest advice based on lived experiences. But someone searching on TikTok wants visual content like tutorials or unboxing videos.

Where Your Audience Searches Reveals How They Think

How to Answer This Question

Start with an audience intelligence tool that lets you identify your persona’s preferred platforms and communities.

I’ll be using SparkToro.

Note: Throughout this guide, I’ll walk you through this persona-building process using the example of Podlinko, a fictional podcasting software. You’ll see every step of the research in action, so you can replicate it for your own business.


For this example, we’re building out one of Podlinko’s core personas: Marcus, a marketing professional on a one-person or small team, so he’s scrappy and in-the-weeds.

Pro tip: Start with one primary persona and build it completely before adding others. Focus on your most valuable customer segment (the one driving the highest revenue for your business).


In SparkToro, enter a relevant keyword that describes your persona’s professional identity or core interests.

This could be their job title, industry, or a topic they care deeply about.

I went with “how to start a podcast.” Marcus would likely search for this early in his journey.

SparkToro – How to start a podcast

The report gives a pretty solid overview of Marcus’s online behavior.

For example, Google, ChatGPT, YouTube, and Facebook are his primary research channels.

SparkToro – Audience Research

But it could be worth testing a few other platforms too.

Compared to the average user, he’s 24.66% more likely to use X and 12.92% more likely to use TikTok.

SparkToro – Social networks report

The report also tells me the specific YouTube channels where he spends time.

He’s watching automation, editing, and business tutorials.

SparkToro – YouTube Channels & Podcasts

He’s also active in multiple industry-related Reddit communities.

Maybe he’s posting, commenting, or even just lurking to read advice.

SparkToro – SubReddits

Since Marcus uses ChatGPT, I also ran a quick search there to see which sources it frequently cites.

I searched for some prompts he might ask, like “Which podcast hosting platforms should I use for marketing?”

If you see large language models (LLMs) repeatedly mention the same sources, they likely carry authority for the topic.

And by extension, they influence your persona’s research as well.

ChatGPT – Sources – Podcast hosting platforms

Compare these sources to the ones you identified earlier. If they match, you have validation.

If they’re different, assess which ones to add to your persona document.
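The comparison itself is just a set overlap. Here’s a quick sketch; every source name below is a placeholder standing in for whatever your own research surfaces:

```python
# Sources surfaced by audience research (placeholder names).
persona_sources = {"youtube.com", "reddit.com", "podnews.net", "buzzsprout.com"}

# Sources the LLM repeatedly cited for the same prompts (placeholders).
llm_sources = {"podnews.net", "buzzsprout.com", "riverside.fm"}

validated = persona_sources & llm_sources   # overlap = validation
candidates = llm_sources - persona_sources  # new sources worth assessing

print("Validated:", sorted(validated))
print("Consider adding:", sorted(candidates))
```

Sources in the overlap go straight into the persona document; the leftovers are the ones to evaluate before adding.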

Here’s how I filled out the persona template with Marcus’s search behavior:

Persona template – Search behavior

2. What Exact Questions Are They Asking?

Answer this question to find out:

  • What language to mirror in your content
  • How to structure content for AI visibility
  • What content gaps exist in your market


Your buyer persona’s language rarely matches marketing jargon.

Companies might talk about “podcast production tools” and “integrated workflows.”

But personas use more personal and specific language:

  • What’s the cheapest way to record remote podcasts?
  • How long does it take to edit a 30-minute podcast?

Knowing your audience’s actual questions reveals the gap between how you describe your solution and how they experience the problem.

And shows you exactly how to bridge it.

How to Answer This Question

Start by going to the platforms and communities you identified in Question 1.

Search 3-5 topics related to your persona.

Review the context around headlines, posts, and comments:

  • How they phrase questions (exact words matter)
  • What emotions they express
  • What outcomes they’re trying to achieve

Pro tip: As you research, save persona comments, discussions, and reviews in full — not just snippets. You’ll analyze the same sources in Questions 3-5. But through different lenses (challenges, triggers, language patterns). Having everything saved means you won’t need to revisit platforms multiple times.


For example, I searched “how to start a podcast for a business” on Google.

Then, I checked People Also Ask for related questions Marcus might have:

PAA – How to start a podcast for a business

On YouTube, I searched “how to edit a podcast” and reviewed video comments.

Users asked follow-up questions about mic issues and screen sharing.

This gave me insight into language and questions beyond the video’s main topic.

YouTube – How to edit a podcast – Comments

In Facebook Groups, I found users asking questions related to their goals, constraints, and challenges.

It also provided the unfiltered language Marcus uses when he’s stuck.

Facebook – Podcasters on Facebook

Now, use a keyword research tool to visualize how your persona’s questions connect throughout their journey.

I used AlsoAsked for this task. But AnswerThePublic and Semrush’s Topic Research tool would also work.

For Marcus, I searched “Best AI podcasting editing software,” which revealed this path:

Which AI tool is best for audio editing? → Can I use AI to edit audio? → Which software do professionals use for audio editing? → How much does AI audio editor cost?

AlsoAsked – Best Podcast Software

It’s helpful to visualize how Marcus’s questions change as he progresses through his search.

Next, learn the questions your persona asks in AI search.

You’ll need a specialized tool like Semrush’s AI Visibility Toolkit for this task.

It tells you the exact prompts people use when searching topics related to your brand.

(And if your brand appears in the answers.)

If you don’t have a subscription, sign up for a free trial of Semrush One, which includes the AI Visibility Toolkit and Semrush Pro.

Since Podlinko is fictional, I used a real podcasting platform (Zencastr.com) for this example.

Semrush – Visibility Overview – Zencastr

This brand appears often in AI answers for user questions like:

  • What equipment do I need to create a professional podcast setup?
  • Can you recommend popular tools for managing and promoting online radio or podcasts?

Semrush – Visibility Overview – Zencastr – Performing Topics

You’ll also see citation gaps — questions where your brand isn’t mentioned. These reveal content opportunities.

For this brand, one gap includes:

“Which AI tools are best for recording, editing, and distributing an AI-focused podcast?”

Semrush – Topic Opportunities – Questions

After reviewing all the questions I gathered, I narrowed them down to the top 5 for the template:

Top 5 template questions

3. What Challenges Influence Their Search Behavior?

Answer this question to find out:

  • What constraints influence their decision-making process
  • How to anticipate objections before they arise
  • What kind of solutions your persona needs


Challenges are the ongoing issues driving your persona’s search behavior. These overarching problems shape their decisions to find a solution.

Understanding these challenges can help you:

  • Position your solution in the context of these pain points
  • Anticipate and address objections before they come up
  • Structure your campaigns to speak directly to their limitations

How to Answer This Question

Review the questions you collected in Question 2 to identify underlying pain points.

For example, this Facebook Group post contains some telling language for Marcus’s persona:

Facebook – Telling language for Marcus's persona

Specific phrases highlight ongoing challenges:

  • “Tech support is no help”
  • “Can’t find an editing software that consistently works”

Now, visit industry-specific review platforms.

Check G2, Capterra, Trustpilot, Amazon, Yelp, or another site, depending on your niche.

Look for reviews where people describe recurring frustrations.

Positive reviews may mention what drove a user to seek a new solution. For example, this one references poor audio and video quality:

G2 – Riverside – Review

Negative reviews reveal what users constantly struggle with.

Unresolved pain points often push people to find workarounds or alternatives.

This user noted issues with a podcasting tool, including loss of backups, unreliable tech, and more.

G2 – Riverside – Negative review

Pay close attention to the language people use. Word choice can signal underlying feelings and constraints.

When someone asks for the “easiest” and “most cost-effective” solution, they’re signaling:

  • Limited resources
  • Low confidence
  • Risk aversion

After reviewing conversations and communities, you’ll likely have dozens of data points.

Copy the reviews, questions, and phrases into an AI tool to identify your persona’s top challenges.

Use this prompt:

Based on these reviews and discussions, identify the five biggest challenges for this persona.

For each challenge, show:

(1) exact phrases they use to describe it

(2) what constraints make it harder (budget, time, skills)

(3) how it influences where and when they search.

Format as a table.
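If you want to pre-screen the raw text yourself before handing it to an AI tool, a simple frequency pass can surface recurring complaint language. Here’s a minimal sketch; the snippets and frustration terms are made up for illustration:

```python
from collections import Counter
import re

# Saved review/discussion snippets (illustrative, not real data).
snippets = [
    "Tech support is no help and the editing software keeps crashing",
    "Can't find an editing software that consistently works",
    "Support never responds, and exports fail half the time",
]

# Hand-picked frustration terms to count (an assumption, not a standard list).
terms = ["support", "editing software", "crash", "fail", "consistently"]

counts = Counter()
for snippet in snippets:
    text = snippet.lower()
    for term in terms:
        counts[term] += len(re.findall(re.escape(term), text))

# The most frequent terms point at the persona's recurring challenges.
print(counts.most_common(3))
```

The terms that keep surfacing become the challenge candidates you then verify against the full quotes, so the persona stays grounded in real language rather than a tool’s summary.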


This analysis helped me identify Marcus’s recurring challenges:

Persona template – Challenges

4. What Triggers Them to Search Right Now?

Answer this question to find out:

  • What emotional and situational context should you address in your content
  • How to structure content for different urgency levels
  • Which pain points to lead with


Search triggers explain why your audience is ready to take action.

But they’re not the same as challenges.

Challenges are ongoing constraints your persona faces. This could be a limited budget, small team, or skill gap.

Triggers are the specific events or goals that push them to act right now. Like a looming deadline or a competitor launching a podcast.

Understanding triggers helps you reach your persona when they’re most receptive.

Decoding Persona Search Triggers

How to Answer This Question

If you have access to internal data, start there.

Your sales and customer support teams can spot patterns that push prospects from browsing to buying.

For example, your sales conversations might reveal that one of Marcus’s triggers is urgency. His manager might ask him to improve the sound quality by the next episode, prompting his search.

If you don’t have internal intel, use tools like AnswerThePublic, AlsoAsked, or Semrush’s Keyword Magic Tool.

Keyword Magic Tool – Podcast editing

This will help you identify the language people use when they’re ready to act.

For Marcus, my AlsoAsked research led to questions like:

  • “Can I record a podcast with just my phone?” This may suggest a desire to start immediately, without professional equipment.
  • “How to make a podcast with someone far away” could point to a sudden need to work with a remote guest or host.

AlsoAsked – Questions

You can also refer back to your research on community spaces.

(Or conduct additional audience research, if needed.)

These spaces are where people describe the exact moments they decide to take action: plateaus, milestones, and failed attempts.

When I searched “podcast marketing” on Reddit, I found a post from someone experiencing clear triggers:

Reddit – Podcast marketing

This user has been unable to get a consistent flow of organic listeners despite high-quality content.

Trigger: A growth plateau that pushed him to ask for help.

He’s also trying to hit his first 1,000 listeners.

Trigger: A goal that pushed him to look for solutions.

If you collected a lot of content, upload it to an AI tool to quickly identify triggers.

Use this prompt:

Analyze these community posts and discussions. Identify the specific trigger moments that pushed people to actively search for solutions.

For each trigger, show:

  1. The exact moment or event described (quote the language they use)
  2. The type of trigger (situational, temporal, emotional, or goal-driven)
  3. The action they took as a result

Format as a table.
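
If you want a deterministic first pass before (or alongside) the LLM step, a short script can roughly bucket quotes into the four trigger types named in the prompt. This is a toy sketch: the cue phrases are my own guesses, so expand them with language pulled from your actual research.

```python
# Rough first-pass tagger for the four trigger types in the prompt above.
# The cue phrases are illustrative guesses -- replace them with phrases
# from your own community research.
CUES = {
    "temporal": ["deadline", "by next week", "launch date", "this friday"],
    "goal-driven": ["first 1,000", "milestone", "grow my", "hit my"],
    "emotional": ["frustrated", "embarrassed", "stuck", "tired of"],
    "situational": ["manager asked", "new guest", "plateau", "competitor launched"],
}

def tag_trigger(quote):
    """Return the first trigger type whose cue phrases appear in the quote."""
    text = quote.lower()
    for trigger_type, cues in CUES.items():
        if any(cue in text for cue in cues):
            return trigger_type
    return "unclassified"

quotes = [
    "My manager asked me to fix the sound quality by the next episode.",
    "I'm stuck at the same download numbers every month.",
]
print([tag_trigger(q) for q in quotes])  # ['situational', 'emotional']
```

A tagger like this won't catch nuance, but it gives you a quick sanity check on whatever the LLM returns.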


After analyzing the content I gathered, I identified the key triggers pushing Marcus to search:

Persona template – Triggers

5. What Language Resonates (and What Turns Them Off)?

Answer this question to find out:

  • Which messaging angles resonate
  • What tones build trust with your audience
  • Which phrases trigger objections or skepticism


The words you use can affect whether your persona trusts you or tunes out.

The right language makes people feel understood. The wrong language creates friction and drives them away.

When you know what resonates, you can create messaging that builds trust and motivates your personas to act.

How to Answer This Question

Refer back to your research from Questions 3 and 4.

This time, focus specifically on language patterns in reviews and community discussions.

Look at:

  • Exact phrases people use to describe success, relief, or satisfaction
  • Words highlighting frustration, disappointment, and concerns

For example, on Capterra, users praised podcasting platforms that “do a lot” and let them “distribute with ease.”

Capterra – Review on podcasting platform

This language signals Marcus’s preference for all-in-one platforms.

He would likely connect with messaging that emphasizes functionality without complexity.

Next, review the content you previously gathered from community spaces.

In r/podcasting, users like Marcus write with direct, benefit-focused language:

Reddit – r/podcasting – Benefit focused language

Notice what he values: simplicity and concrete outcomes (“automatic transcripts”).

He’s not mentioning jargon like “AI-powered transcription engine” or “enterprise-grade recording infrastructure.”

Plain language that emphasizes quick results over technical capabilities works best with this persona.
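
Before the LLM pass, a quick frequency count can surface the most common cue words across your snippets. A minimal sketch, assuming you've pasted reviews into a list; the cue word sets below are invented examples, not drawn from real review data:

```python
from collections import Counter
import re

# Toy cue lists for a first scan -- tune these to your vertical and
# expand them as you read more reviews.
POSITIVE_CUES = {"easy", "ease", "love", "simple", "fast", "reliable"}
NEGATIVE_CUES = {"frustrating", "slow", "buggy", "confusing", "crash"}

def cue_counts(snippets):
    """Count positive and negative cue words across review snippets."""
    words = Counter()
    for snippet in snippets:
        words.update(re.findall(r"[a-z']+", snippet.lower()))
    positives = {w: n for w, n in words.items() if w in POSITIVE_CUES}
    negatives = {w: n for w, n in words.items() if w in NEGATIVE_CUES}
    return positives, negatives

reviews = [
    "Super easy to set up and the export is fast.",
    "Editing felt slow and the app would crash mid-session.",
    "Love how simple distribution is -- one click and done.",
]
pos, neg = cue_counts(reviews)
print(pos)  # {'easy': 1, 'fast': 1, 'love': 1, 'simple': 1}
print(neg)  # {'slow': 1, 'crash': 1}
```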

Once you have enough data, use this LLM prompt to identify language patterns:

Analyze these customer reviews and community discussions I’ve shared. Identify:

  1. Most common words and phrases people use to describe positive experiences
  2. Most common words and phrases that signal frustration or concerns
  3. Emotional undertones in how they describe problems and solutions

Create a table organizing these insights.


This analysis revealed the specific language that Marcus reacts to positively (and negatively).

Persona template – Language

6. What Content Types Do They Engage With Most?

Answer this question to find out:

  • Content types to prioritize in your content strategy
  • How to structure content for maximum engagement
  • What length and style work best for each format


Knowing the content types your audience prefers has multiple benefits.

It lets you create content that captures your persona’s attention and keeps them engaged.

Think about it: You could write the most comprehensive guide on podcast equipment.

But if your ideal customer prefers video reviews, they’ll scroll right past it.

How to Answer This Question

You identified your persona’s most-used platforms in Question 1. Now analyze which content formats perform best on each.

Conduct a few Google searches to identify popular content types.

You’ll learn what users (and search engines) prefer for specific queries. Look at videos, written guides, infographics, carousels, podcasts, and more.

For example, when I search “how to set up podcast equipment,” the top results are a mix: long-form articles, video tutorials, and community discussions.

Google SERP – How to set up podcast equipment

But organic search rankings don’t tell the full story.

Analyze content directly on your persona’s preferred platforms, too.

I searched “How to distribute a podcast” on YouTube and assessed the top 20 videos and Shorts for:

  • Video length
  • Views
  • Comments
  • Engagement patterns
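
The spreadsheet math behind that assessment is simple: bucket videos by length, then compare engagement (here, comments per 1,000 views). A sketch with invented rows and field names:

```python
from collections import defaultdict

# Hypothetical rows pulled from a YouTube research spreadsheet.
videos = [
    {"title": "Full distribution walkthrough", "minutes": 12, "views": 48_000, "comments": 310},
    {"title": "Submit to Spotify in 60 seconds", "minutes": 1, "views": 90_000, "comments": 95},
    {"title": "RSS feeds explained", "minutes": 8, "views": 22_000, "comments": 180},
]

def length_bucket(minutes):
    if minutes < 5:
        return "short"
    return "mid" if minutes <= 15 else "long"

# Engagement rate = comments per 1,000 views, averaged per length bucket.
rates = defaultdict(list)
for v in videos:
    rates[length_bucket(v["minutes"])].append(1000 * v["comments"] / v["views"])

summary = {bucket: round(sum(r) / len(r), 1) for bucket, r in rates.items()}
print(summary)  # {'mid': 7.3, 'short': 1.1}
```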

Look at the creators your persona follows on each platform (from the SparkToro report in Question 1).

SparkToro – Report

Pay attention to:

  • Which content types drive the most engagement (videos vs. carousels vs. threads)
  • How these creators structure content (length, style, tone)
  • Which topics resonate most with their audience

Once you’ve collected this data, look for patterns.

Or drop your data into an LLM and ask it to find the patterns for you:

Analyze this engagement data I’ve collected for my audience persona.

Identify:

  1. Which video lengths perform best (views, comments, engagement rate) and why
  2. Which content styles generate the most engagement (tutorials, vlogs, behind-the-scenes, etc.)
  3. Any patterns in thumbnails, titles, or formats that consistently perform well

Summarize my persona’s content preferences by video type and rank them as low, medium, or high


For Marcus, I learned that 5- to 15-minute video tutorials generated the highest engagement.

Shorts consistently underperformed for how-to queries, showing his preference for in-depth tutorials.

I documented my findings and ranked each content type by engagement level: high, medium, or low.

Persona template – Content Preferences

7. What Proof Points and Signals Matter?

Answer this question to find out:

  • What proof points influence buyers
  • How to structure case studies and testimonials
  • Where to place proof points to win people’s trust


Proof points can influence whether someone acts on your content or bounces.

They’re also a ranking factor.

Search engines and LLMs reward content that demonstrates Experience, Expertise, Authoritativeness, and Trustworthiness (E-E-A-T).

What is E-E-A-T

But different personas might value different proof points.

Understanding what matters to each persona is crucial to building trust and visibility.

How to Answer This Question

Identify the most common trust markers on your persona’s preferred sites.

Look for:

  • Author credentials: Bylines with relevant expertise
  • Methods: Transparency about the method for creating this content
  • Citations: Links to studies, expert quotes, industry reports, original research
  • Recency signals: Publication and last updated dates
  • Visual proof: Screenshots, before/after comparisons, annotated walkthroughs
  • Social validation: Comment sections, user discussions, engagement metrics

Use Semrush’s Keyword Overview tool to find this information.

Note: A free Semrush account gives you 10 searches in this tool per day. Or you can use this link to access a free Semrush One trial.


Enter your keyword (I used “how to start a podcast”).

Scroll to the SERP Analysis report to view the ranking domains.

Keyword Analytics – How to start a podcast – SERP Analysis – URL

Aim to review 20 to 50 pages for the best results. (Create a spreadsheet to organize the information.)

Identify which proof points they use and how prominently they’re displayed.

Here’s how I did this for one of the articles I assessed:

  • Quantified track record: “Since 2009, Buzzsprout has helped over 400,000 podcasters”
  • First-person experience: “I’ve drawn on lessons from my own podcasts and thousands of conversations with creators”
  • Third-party sources: Expert advice cited from Apple Podcasts on naming conventions
  • Visual demonstrations: Embedded tutorials showing recommendations in action

Buzzsprout – How to start a podcast
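
Once your spreadsheet has a list of proof points per page, tallying them is a one-liner with `Counter`. The page data below is hypothetical:

```python
from collections import Counter

# One list of logged proof points per ranking page (hypothetical data).
pages = [
    ["author byline", "citations", "screenshots"],
    ["author byline", "updated date"],
    ["author byline", "citations", "updated date"],
    ["citations", "screenshots"],
]

counts = Counter(point for page in pages for point in page)
for proof, n in counts.most_common():
    print(f"{n} out of {len(pages)} pages include {proof}")
```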

Then, use an LLM to quickly spot patterns:

I’ve analyzed top-ranking pages for my persona and uploaded my findings.

Identify:

  1. Which proof points appear most frequently (e.g., “8 out of 10 pages include X”)
  2. How these proof points are displayed (above the fold, in sidebar, throughout content)
  3. Which combinations of proof points appear together most often

Format as a summary with the top 5 most common patterns.


Ultimately, you’ll want to infuse your content with these same trust markers to attract and convert your persona.

After identifying Marcus’s top proof points, I ranked them from medium to high in the template:

Persona template – Proof points

8. Where (and How) Should You Distribute Content to Reach This Persona?

Answer this question to find out:

  • Which platforms deserve your investment
  • What content formats work best on each platform
  • How to maximize organic reach through distribution


Where you distribute content determines whether it reaches your audience.

If you only publish content on your website but buyers find solutions on LinkedIn, you’re overlooking key touchpoints.

Even worse, you’re invisible on major platforms that LLMs scan for answers, recommendations, and citations.

How to Answer This Question

By now, you know your audience persona’s top platforms.

These are your initial distribution targets.

But you’ll ideally be able to validate them against real behavioral data.

If possible, survey recent customers to find concrete patterns about their search behavior.

Send a short survey to customers who converted in the last 90 days:

  • Where did you first hear about us?
  • Where do you go for advice about [primary pain points]?
  • What platforms do you use when researching [your product category]?
  • How do you prefer to learn about new solutions in your workflow?

Once responses come in, look for patterns in how each segment discovers, researches, and evaluates solutions.

Here’s a prompt you can use in an AI tool for faster analysis:

I surveyed recent customers about their search and discovery behavior.

Analyze this data and identify:

  1. The top 3-5 platforms where customers discovered us or researched solutions
  2. Common pain points or information needs they mentioned
  3. Preferred content formats for learning about solutions
  4. Any patterns in how different customer segments discover and evaluate us

Highlight the platforms and channels that appear most frequently, and flag any gaps between where customers search and where we currently have a presence.


Next, cross-reference your research against existing data in Google Analytics.

Open Google Analytics and navigate to Reports > Lifecycle > Acquisition > Traffic acquisition.

GA – Traffic acquisition

Sort by engagement rate or average session duration to see which channels drive genuinely engaged visitors.

Look for high time on site (2+ minutes) and multiple pages per session (3+).
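
If you export the report, applying those thresholds is a simple filter. The column names below are illustrative; match them to your own GA4 export:

```python
# Hypothetical rows from a GA4 traffic-acquisition export.
channels = [
    {"channel": "Organic Search", "avg_session_sec": 185, "pages_per_session": 3.4},
    {"channel": "Paid Social", "avg_session_sec": 45, "pages_per_session": 1.2},
    {"channel": "Referral", "avg_session_sec": 110, "pages_per_session": 3.8},
    {"channel": "Email", "avg_session_sec": 260, "pages_per_session": 4.1},
]

# Keep channels meeting both engagement thresholds: 2+ minutes, 3+ pages.
engaged = [
    c["channel"] for c in channels
    if c["avg_session_sec"] >= 120 and c["pages_per_session"] >= 3
]
print(engaged)  # ['Organic Search', 'Email']
```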

Then, map each platform to the content format that performs best there.

Combine insights from Question 1 (preferred platforms) and Question 6 (preferred formats) to build your distribution strategy.

Here’s what this looks like for Marcus:

Persona template – Distribution strategy

9. What Keeps This Persona Coming Back?

Answer this question to find out:

  • What product features or experiences to double down on
  • How to position your solution beyond initial use cases
  • What content to create for existing customers


Winning your audience’s attention once is easy. Earning it repeatedly is the real challenge.

Understanding what keeps your persona engaged is the key to getting them to return.

How to Answer This Question

Review all the audience persona insights you’ve gathered so far to identify recurring needs.

Look at triggers, pain points, content preferences, and community discussions.

Pinpoint problems that can’t be solved with a single article or resource.

This could include:

  • Tasks they do every week (editing, distribution, promotion)
  • Decisions they face with each piece of content (format, platform, messaging)
  • Skills they’re continuously learning (new tools, changing algorithms)
  • Friction points that slow them down every time

Then, outline the content types that repeatedly solve these problems.

Think tools, templates, checklists, and guides they’ll use repeatedly.

If you don’t want to do this manually, drop this prompt into an AI tool to synthesize your findings:

Based on my audience persona research, here’s what I’ve learned:

Questions they ask: [Paste top questions from Q2]

Challenges they face: [Paste challenges from Q3]

Triggers that push them to act: [Paste triggers from Q4]

Their preferred content types: [Paste formats from Q6]

Identify recurring problems they face repeatedly (not one-time issues).

For each recurring problem:

  1. Describe the problem in their own words
  2. Explain why it’s recurring (weekly task, ongoing decision, changing landscape, etc.)
  3. Suggest 2-3 content types that would provide repeatable value each time they face this problem

Format as a table with columns: Problem | Why It’s Recurring | Content Solutions


For Marcus, this could look something like this:

Problem area: Marcus spends too long cleaning audio

Content assets:

  • Editing workflow template (step-by-step, repeatable each week)
  • Breakdown video: “How to Edit a 30-minute Episode in Under 12 Minutes”

Problem area: Marcus wants consistent reach across platforms

Content assets:

  • Podcast distribution checklist (Apple, Spotify, YouTube, LinkedIn, newsletter)
  • Repurposing templates (social snippets, video clips, carousel outlines)

Every time Marcus faces these challenges, he can turn to them for a reliable solution.

These are the content types that have repeatable value for him:

Persona template – What brings Marcus back

Build Audience Personas That Win AI Visibility

Forget surface-level demographics.

These nine audience persona questions give you actionable, in-depth search intelligence.

You now know a lot about your persona.

You’ve uncovered where they search, what language resonates, and which proof points earn trust.

This is everything you need to show up in the right places with the right message.

If you haven’t already, download our audience persona template to organize your research.

Use it to guide your content creation, search strategy, and distribution efforts.

Your next move: Expand your visibility further with our guide to ranking in AI search. Our Seen & Trusted Framework will help you increase mentions, citations, and recommendations for your brand.

The post How to Build Audience Personas for Modern Search + Template appeared first on Backlinko.


How to Leverage Google Natural Language to Boost Your ASO Efforts 

Over the past year, Google has significantly accelerated its investment in artificial intelligence and machine learning across its products and platforms. While most marketers are familiar with ChatGPT, Google has been advancing its own AI capabilities in parallel, including the relaunch of Bard as Gemini and the steady rollout of AI-assisted features across Google Play.

For app marketers and ASO specialists, these developments are not abstract. They represent a fundamental shift in how apps are understood, categorized, and surfaced to users. Google Play is no longer relying primarily on keyword matching. Instead, it is moving toward a deeper, semantic understanding of apps, their functionality, and the problems they solve.

This evolution raises an important question. If Google increasingly generates, interprets, and evaluates app metadata itself, how do ASO teams maintain control, differentiation, and long-term competitive advantage?

One underutilized answer lies in a tool that has existed for years but is rarely discussed in an ASO context: the Google Natural Language API.

Key Takeaways

  • Google Play is moving away from keyword density and toward semantic understanding driven by machine learning and natural language processing.
  • The Google Natural Language API provides valuable insight into how Google interprets app metadata, including entities, sentiment, and category relevance.
  • Optimizing for category confidence and entity relevance can improve keyword coverage and resilience during algorithm updates.
  • ASO teams that align metadata with user intent and natural language patterns are better positioned for long-term discovery performance.
  • Using tools like the Google Natural Language API helps future-proof ASO strategies as automation and AI-driven ranking signals continue to expand.

Why Traditional ASO Signals Are Losing Impact

Before exploring how the Google Natural Language API can support ASO, it is important to understand the broader shifts in Google Play’s ranking algorithms.

Over the past two years, Google Play has shifted away from frequent, visible algorithm swings towards a more continuous learning model. While ASO teams still see volatility, it is now driven less by discrete updates and more by ongoing recalibration as models ingest new behavioural, linguistic, and performance data. Reindexing events still occur, but they are increasingly tied to semantic reassessment rather than simple metadata changes.

At the same time, the effectiveness of traditional optimization levers such as keyword density, exact-match repetition, and rigid keyword placement has continued to erode. These tactics no longer align with how Google Play evaluates relevance.

Like Google Search, Google Play is now firmly optimized for meaning, not mechanics. Its systems are designed to understand intent, function, and audience context rather than rely on surface-level keyword signals. The algorithm is increasingly capable of identifying what an app does, who it serves, and the problems it solves, even when those ideas are expressed using varied, natural language.

This is where natural language processing becomes central to modern ASO tools and practices.

Explanation of Natural Language processing.

What Is the Goal of the Google Natural Language API?

Google Natural Language is designed to help machines understand human language in a way that more closely mirrors human interpretation. It powers a wide range of Google products and capabilities, including sentiment analysis, entity recognition, content classification, and contextual understanding.

In practical terms, it analyzes a body of text and identifies:

  • The overall sentiment and tone.
  • Key entities and their relative importance.
  • The categories and subcategories that the content most strongly aligns with.

For ASO teams, this offers a rare opportunity. Instead of guessing how Google might interpret app metadata, you get a proxy for how Google’s machine learning systems read and categorise text.

Used correctly, it can help ASO specialists align metadata more closely with Google’s evolving ranking logic.

How Google Natural Language Applies to ASO

When applied to app metadata, Google Natural Language can reveal how Google is likely to associate an app with certain concepts, categories, and keyword themes. This insight is particularly valuable as keyword density becomes less influential and semantic relevance takes priority.

Below are the key components that matter most for ASO.

Sentiment Analysis

Sentiment analysis evaluates the emotional tone of a piece of text and categorises it as positive, negative, or neutral. While sentiment is not a primary ranking factor for app discovery, it does provide useful contextual information.

For example, overly promotional, aggressive, or unclear language can introduce noise into metadata. Reviewing sentiment outputs can help teams ensure that descriptions maintain a clear, neutral, and informative tone that supports both user trust and algorithmic interpretation.

Entity Recognition and Salience

Entity recognition identifies specific entities within a text and classifies them into predefined types such as company, product, feature, or concept. Each entity is assigned a salience score, which reflects how central that entity is to the overall content.

In an ASO context, entities might include:

  • Core app features
  • Functional use cases
  • Industry-specific terms
  • Recognisable product or service concepts

Salience scores range from 0 to 1.0. Higher scores indicate that an entity plays a more important role in defining the content.

From an optimization perspective, this is critical. If key features or use cases are not appearing as highly salient, it suggests Google may not be strongly associating the app with those concepts.

Strategically incorporating relevant entities into metadata in a natural, user-focused way can improve clarity and strengthen topical relevance. Placement also matters. Important entities that appear early in descriptions or are reinforced toward the end of the text tend to carry more weight.

Metadata entities.
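
To see which concepts score low, you can rank entities by salience. The response below is hand-written in the shape of an analyzeEntities result, not a live API call, and the entity names and scores are invented:

```python
# Hand-written sample shaped like an analyzeEntities response (not a live call).
response = {
    "entities": [
        {"name": "expense tracking", "type": "OTHER", "salience": 0.41},
        {"name": "receipt scanning", "type": "OTHER", "salience": 0.22},
        {"name": "QuickBooks", "type": "ORGANIZATION", "salience": 0.12},
        {"name": "invoicing", "type": "OTHER", "salience": 0.06},
    ]
}

ranked = sorted(response["entities"], key=lambda e: e["salience"], reverse=True)
for e in ranked:
    print(f'{e["salience"]:.2f}  {e["name"]} ({e["type"]})')

# Flag concepts you expected to be central but that score weakly.
expected = {"expense tracking", "invoicing"}
weak = [e["name"] for e in ranked if e["name"] in expected and e["salience"] < 0.1]
print(weak)  # ['invoicing']
```

A low-salience core feature like “invoicing” here would suggest reworking the description so that feature appears earlier and more prominently.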

Categories and Confidence Scores

Category classification is arguably the most impactful element of Google Natural Language for ASO.

When text is analyzed, the API assigns it to one or more categories and subcategories, each with an associated confidence score. These scores indicate how strongly the content aligns with a given category.

For Google Play, this has major implications. Higher category confidence increases the likelihood that an app will be associated with a broader range of relevant search queries within that category. Rather than ranking for a narrow set of exact keywords, apps can gain visibility across an expanded semantic keyword space.

In practice, we have seen that improving category confidence can significantly enhance keyword coverage and ranking stability, particularly during periods of algorithm change.

To increase category confidence:

  • Use clear, natural language that reflects real user intent
  • Focus on describing functionality and value, not just features
  • Avoid keyword stuffing or forced phrasing
  • Reinforce category-relevant concepts consistently throughout metadata

Hinge's Dating App.
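
In code, this check amounts to filtering category matches by a confidence floor. The response below mimics the shape of a classifyText result with invented values, and the 0.5 threshold is a working default of mine, not an official cutoff:

```python
# Sample shaped like a classifyText response (invented values, not a live call).
CONFIDENCE_FLOOR = 0.5

response = {
    "categories": [
        {"name": "/Finance/Accounting & Auditing", "confidence": 0.83},
        {"name": "/Business & Industrial/Business Services", "confidence": 0.34},
    ]
}

strong = [c for c in response["categories"] if c["confidence"] >= CONFIDENCE_FLOOR]
for c in strong:
    print(f'{c["confidence"]:.2f}  {c["name"]}')  # only the 0.83 match prints
```

Re-run this after each metadata revision and watch whether confidence in your target category trends upward.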

Applying GNL Insights to Metadata Strategy

The real value of Google Natural Language lies not in isolated analysis, but in iterative optimization. By repeatedly testing metadata drafts through the Google Natural Language API, ASO teams can refine language until category confidence, entity salience, and overall clarity improve.

This approach aligns well with broader 2026 ASO best practices, which emphasize:

  • User intent over keyword lists
  • Semantic relevance over repetition
  • Long-term stability over short-term gains

Case Study Insights

We have applied GNL-driven optimisation techniques across multiple app categories. While results vary by vertical, the overall pattern has been consistent.

During periods of significant Google Play algorithm updates, apps optimized around category confidence and entity relevance showed greater resilience. In several cases, visibility improved despite widespread volatility elsewhere in the store.

In one example, keyword coverage expanded substantially following metadata updates that increased confidence across both a core category and secondary related categories. This translated into a more than fivefold increase in organic Explore installs over time.

A Yodel Mobile case study about keyword coverage.

These results reinforce an important principle. When ASO strategies align with how Google understands language, they are better positioned to benefit from algorithm evolution rather than being disrupted by it.

Connecting GNL to 2026 ASO Strategy

Looking ahead, the role of natural language processing in app discovery will only grow. As Google continues to automate metadata creation and interpretation, manual optimization will shift from mechanical execution to strategic guidance.

ASO teams that understand and leverage tools like Google Natural Language will be better equipped to:

  • Guide AI-generated content rather than react to it
  • Maintain differentiation in an increasingly automated ecosystem
  • Build metadata that supports both paid and organic discovery

This approach also complements broader trends such as AI-powered search, cross-platform discovery, and privacy-first measurement frameworks.

Conclusion

The rise of natural language processing does not signal the end of ASO. Instead, it marks a shift in how optimization should be approached.

By moving beyond keyword density and embracing semantic relevance, ASO teams can align more closely with Google’s evolving algorithms. Google Natural Language offers a practical way to understand how app metadata is interpreted and how it can be improved to support discovery, conversion, and long-term stability.

As automation continues to expand across Google Play, the teams that succeed will be those who understand the systems behind it and adapt their strategies accordingly. Natural language optimization is no longer optional. It is becoming a core pillar of modern ASO.


Meta adds Manus AI tools into Ads Manager


Meta Platforms is embedding newly acquired AI agent tech directly into Ads Manager, giving advertisers built-in automation tools for research and reporting as the company looks to show faster returns on its AI investments.

What’s happening. Some advertisers are seeing in-stream prompts to activate Manus AI inside Ads Manager.

  • Manus is now available to all advertisers via the Tools menu.
  • Select users are also getting pop-up alerts encouraging in-workflow adoption.
  • The feature rollout signals deeper integration ahead.

What is Manus. Manus AI is designed to power AI agents that can perform tasks like report building and audience research, effectively acting as an assistant within the ad workflow.

Why we care. Manus AI brings AI-powered automation directly into Meta Platforms Ads Manager, making tasks like report-building, audience research, and campaign analysis faster and more efficient.

Meta is currently prioritizing tying AI investment to measurable ad performance, giving advertisers new ways to optimize campaigns and potentially gain a competitive edge by testing workflow efficiencies early.

Between the lines. Meta is under pressure to demonstrate practical value from its aggressive AI spending. Advertising remains its clearest path to monetization, and embedding Manus into everyday ad tools offers a direct way to tie AI investment to performance gains.

Zoom out. The move aligns with CEO Mark Zuckerberg’s push to weave AI across Meta’s product stack. By positioning Manus as a performance tool for advertisers, Meta is betting that workflow efficiencies will translate into stronger ad results — and a clearer AI revenue story.

The bottom line. For advertisers, Manus adds another layer of built-in automation worth testing. Early adopters may uncover time savings and optimization gains as Meta continues expanding AI inside its ad ecosystem.


Why AI optimization is just long-tail SEO done right

The return of long-tail SEO in the AI era

If you look at job postings on Indeed and LinkedIn, you’ll see a wave of acronyms added to the alphabet soup as companies try to hire people to boost visibility on large language models (LLMs).

Some people are calling it generative engine optimization (GEO). Others call it answer engine optimization (AEO). Still others call it artificial intelligence optimization (AIO). I prefer large model answer optimization (LMAO).

I find these new acronyms a bit ridiculous because while many like to think AI optimization is new, it isn’t. It’s just long-tail SEO — done the way it was always meant to be done.

Why LLMs still rely on search

Most LLMs (e.g., GPT-4o, Claude 4.5, Gemini 1.5, Grok-2) are transformers trained to do one thing: predict the next token given all previous tokens.

AI companies train them on massive datasets from public web crawls, such as:

  • Common Crawl.
  • Digitized books.
  • Wikipedia dumps.
  • Academic papers.
  • Code repositories.
  • News archives.
  • Forums.

The data is heavily filtered to remove spam, toxic content, and low-quality pages. Full pretraining is extremely expensive, so companies run major foundation training cycles only every few years and rely on lighter fine-tuning for more frequent updates.

So what happens when an LLM encounters a question it can’t answer with confidence, despite the massive amount of training data?

AI companies use real-time web search and retrieval-augmented generation (RAG) to keep responses fresh and accurate, bridging the limits of static training data. In other words, the LLM runs a web search.
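
Stripped to its essentials, that retrieve-then-generate loop looks like the sketch below. Real systems call a search API and use far better ranking; this toy scores an in-memory “index” by term overlap just to show the flow, and the documents are made up:

```python
# Toy RAG loop: score documents by query-term overlap, then pack the top
# hits into a prompt for the LLM. Production systems use a real search API.
def retrieve(query, documents, k=2):
    q_terms = set(query.lower().split())
    return sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )[:k]

docs = [
    "The Acme H2 is a highly rated space heater made in the USA.",
    "How to brew pour-over coffee at home.",
    "Top rated space heater picks for small rooms this winter.",
]

query = "highly rated space heater"
context = retrieve(query, docs)
prompt = "Answer using only this context:\n" + "\n".join(context) + f"\n\nQuestion: {query}"
print(prompt)
```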

To see this in real time, many LLMs let you click an icon or “Show details” to view the process. For example, when I use Grok to find highly rated domestically made space heaters, it converts my question into a standard search query.

Dig deeper: AI search is booming, but SEO is still not dead

The long-tail SEO playbook is back

Many of us long-time SEO practitioners have praised the value of long-tail SEO for years. But one main reason it never took off for many brands: Google.

As long as Google’s interface was a single text box, users were conditioned to search with one- and two-word queries. Most SEO revenue came from these head terms, so priorities focused on competing for the No. 1 spot for each industry’s top phrase.

Many brands treated long-tail SEO as a distraction. Some cut content production and community management because they couldn’t see the ROI. Most saw more value in protecting a handful of head terms than in creating content to capture the long tail of search.

Fast forward to 2026. People typing LLM prompts do so conversationally, adding far more detail and nuance than they would in a traditional search engine. LLMs take these prompts and turn them into search queries. They won’t stop at a few words. They’ll construct a query that reflects whatever detail their human was looking for in the prompt.

Suddenly, the fat head of the search curve is being replaced with a fat tail. While humans continue to go to search engines for head terms, LLMs are sending these long-tail search queries to search engines for answers.

While AI companies are coy about disclosing exactly who they partner with, most public information points to the following search engines as the ones their LLMs use most often:

  • ChatGPT – Bing Search.
  • Claude – Brave Search.
  • Gemini – Google Search.
  • Grok – X Search and its own internal web search tool.
  • Perplexity – Uses its own hybrid index.

Right now, humans conduct billions of searches each month on traditional search engines. As more people turn to LLMs for answers, we’ll see exponential growth in LLMs sending search queries on their behalf.

SEO is being reborn.


Dig deeper: Why ‘it’s just SEO’ misses the mark in the era of AI SEO

How to do long-tail SEO with help from AI

The principles of long-tail SEO haven’t changed much. It’s best summed up by Baseball Hall of Famer Wee Willie Keeler: “Keep your eye on the ball and hit ’em where they ain’t.”

Success has always depended on understanding your audience’s deepest needs, knowing what truly differentiates your brand, and creating content at the intersection of the two.

As straightforward as this strategy has been, few have executed it well, for understandable reasons.

Reading your customers’ minds is hard. Keyword research is tedious. Content creation is hard. It’s easy to get lost in the weeds.

Happily, there’s someone to help: your favorite LLM.

Here are a few best practices I’ve used to create strong long-tail content over the years, with a twist. What once took days, weeks, or even months, you can now do in minutes with AI.

1. Ask your LLM what people search when looking for your product or service

The first rule of long-tail SEO has always been to get into your audience’s heads and understand their needs. This once required commissioning surveys and hiring research firms to figure out.

But for most brands and industries, an LLM can handle at least the basics. Here’s a sample prompt you can use.

Act as an SEO strategist and customer research analyst. You're helping with long-tail keyword discovery by modeling real customer questions.

I want to discover long-tail search questions real people might ask about my business, products, and industry. I’m not looking for mere keyword lists. Generate realistic search questions that reflect how people research, compare options, solve problems, and make decisions.

Company name: [COMPANY NAME]
Industry: [INDUSTRY]
Primary product/service: [PRIMARY PRODUCT OR SERVICE]
Target customer: [TARGET AUDIENCE]
Geography (if relevant): [LOCATION OR MARKET]

Generate a list of 75 – 100 realistic, natural-language search queries grouped into the following categories:

AWARENESS
• Beginner questions about the category
• Problem-based questions (pain points, frustrations, confusion)

CONSIDERATION
• Comparison questions (alternatives, competitors, approaches)
• “Best for” and use-case questions
• Cost and pricing questions

DECISION
• Implementation or getting-started questions
• Trust, credibility, and risk questions

POST-PURCHASE
• Troubleshooting questions
• Optimization and advanced/expert questions

EDGE CASES
• Niche scenarios
• Uncommon but realistic situations
• Advanced or expert questions

Guidelines:
• Write queries the way real people search in Google or ask AI assistants.
• Prioritize specificity over generic keywords.
• Include question formats, “how to” queries, and scenario-based searches.
• Avoid marketing language.
• Include emotional, situational, and practical context where relevant.
• Don't repeat the same query structure with minor variations.
• Each query should suggest a clear content angle.

Output as a clean bullet list grouped by category.

You can tweak this prompt for your brand and industry. The key is to force the LLM (and yourself) to think like a customer and avoid the trap of generating keyword lists that are just head-term variations dressed up as long-tail queries.

With a prompt like this, you move away from churning out “keyword ideas” and toward understanding real customer needs you can build useful content around.

Dig deeper: If SEO is rocket science, AI SEO is astrophysics

2. Use your LLM to analyze your search data

Most large brands and sites don’t realize they’ve been sitting on a treasure trove of user intelligence: on-site search data.

When customers type a query into your site’s search box, they’re looking for something they expect your brand to provide.

If you see the same searches repeatedly, it usually means one of two things:

  • You have the information, but users can’t find it.
  • You don’t have it at all.

In both cases, it’s a strong signal you need to improve your site’s UX, add meaningful content, or both.

There’s another advantage to mining on-site search data: it reveals the exact words your audience uses, not the terms your team assumes they use.

Historically, the challenge has been the time required to analyze it. I remember projects where I locked myself in a room for days, reviewing hundreds of thousands of queries line by line to find patterns — sorting, filtering, and clustering them by intent.

If you’ve done the same, you know the pattern. The first few dozen keywords represent unique concepts, but eventually you start seeing synonyms and variations.

All of this is buried treasure waiting to be explored. Your LLM can help. Here’s a sample prompt you can use:

You're an SEO strategist analyzing internal site search data.

My goal is to identify content opportunities from what users are searching for on my website – including both major themes and specific long-tail needs within those themes.

I have attached a list of site search queries exported from GA4. Please:

STEP 1 – Cluster by intent
Group the queries into logical intent-based themes.

STEP 2 – Identify long-tail signals inside each theme
Within each theme:
• Identify recurring modifiers (price, location, comparisons, troubleshooting, etc.)
• Identify specific entities mentioned (products, tools, features, audiences, problems)
• Call out rare but high-intent searches
• Highlight wording that suggests confusion or unmet expectations

STEP 3 – Generate content ideas
For each theme:
• Suggest 3 – 5 content ideas
• Include at least one long-tail content idea derived directly from the queries
• Include one “high-intent” content idea
• Include one “problem-solving” content idea

STEP 4 – Identify UX or navigation issues
Point out searches that suggest:
• Users cannot find existing content
• Misleading navigation labels
• Missing landing pages

Output format:
Theme:
Supporting queries:
Long-tail insights:
Content opportunities:
UX observations:

Again, customize this prompt based on what you know about your audience and how they search.

The detail matters. Many SEO practitioners stop at a prompt like “give me a list of topics for my clients,” but a detailed prompt like the one above pushes the LLM beyond simple clustering to understand the intent behind the searches.

I used on-site search data because it’s one of the richest, most transparent, and most actionable sources. But similar prompts can uncover hidden value in other keyword lists, such as “striking distance” terms from Google Search Console or competitive keywords from Semrush.
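Before handing a large export to an LLM, a short script can do a crude first pass and show which modifier buckets dominate. This is a minimal sketch; the bucket names and terms are illustrative and worth tuning to your own data:

```python
# Hypothetical modifier buckets; tune the terms to your own export.
MODIFIERS = {
    "price": ["price", "cost", "cheap", "pricing"],
    "comparison": ["vs", "versus", "alternative", "best"],
    "troubleshooting": ["error", "fix", "not working", "broken"],
    "location": ["near me", "local"],
}

def bucket_queries(queries):
    """Group site-search queries by the first modifier bucket they match."""
    buckets = {name: [] for name in MODIFIERS}
    buckets["other"] = []
    for query in queries:
        q = query.lower()
        # Substring matching is crude (e.g., "vs" matches inside words),
        # but it's fine for a first pass before deeper LLM analysis.
        for name, terms in MODIFIERS.items():
            if any(term in q for term in terms):
                buckets[name].append(query)
                break
        else:
            buckets["other"].append(query)
    return buckets

grouped = bucket_queries([
    "space heater price",
    "space heater vs oil radiator",
    "heater not working",
    "quiet space heater",
])
```

A pass like this tells you which themes are worth a dedicated prompt, so the LLM spends its context window on the queries that matter.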

Even better, if your organization keeps detailed customer interaction records (e.g., sales call notes, support tickets, chat transcripts), those can be more valuable. Unlike keyword datasets, they capture problems in full sentences, in the customer’s own words, often revealing objections, confusion, and edge cases that never appear in traditional keyword research.


3. Create great content

The next step is to create great content.

Your goal is to create content so strong and authoritative that it’s picked up by sources like Common Crawl and survives the intense filtering AI companies apply when building LLM training sets. Realistically, only pioneering brands and recognized authorities can expect to operate in this rarefied space.

For the rest of us, the opportunity is creating high-quality long-tail content that ranks at the top across search engines — not just Google, but Bing, Brave, and even X.

This is one area where I wouldn’t rely on LLMs, at least not to generate content from scratch.

Why?

LLMs are sophisticated pattern matchers. They surface and remix information from across the internet, even obscure material. But they don’t produce genuinely original thought.

At best, LLMs synthesize. At worst, they hallucinate.

Many worry AI will take their jobs. And it will — for anyone who thinks “great content” means paraphrasing existing authority sources and competing with Wikipedia-level sites for broad head terms. Most brands will never be the primary authority on those terms. That’s OK.

The real opportunity is becoming the authority on specific, detailed, often overlooked questions your audience actually has. The long tail is still wide open for brands willing to create thoughtful, experience-driven content that doesn’t already exist everywhere else.

We need to face facts. The fat head is shrinking. The land rush is now for the “fat tail.” Here’s what brands need to do to succeed:

Dominate searches for your brand

Search your brand name in a keyword tool like Semrush and review the long-tail variations people type into Google. You’ll likely find more than misspellings. You’ll see detailed queries about pricing, alternatives, complaints, comparisons, and troubleshooting.

If you don’t create content that addresses these topics directly — the good and the bad — someone else will. It might be a Reddit thread from someone who barely knows your product, a competitor attacking your site, a negative Google Business Profile review, or a complaint on Trustpilot.

When people search your brand, your site should be the best place for honest, complete answers — even and especially when they aren’t flattering. If you don’t own the conversation, others will define it for you.

The time for “frequently asked questions” is over. You need to answer every question about your brand—frequent, infrequent, and everything in between.

Go long

Head terms in your industry have likely been dominated by top brands for years. That doesn’t mean the opportunity is gone.

Beneath those competitive terms is a vast layer of unbranded, long-tail searches that have likely been ignored. Your data will reveal them.

Review on-site search, Google Search Console queries, customer support questions, and forums like Reddit. These are real people asking real questions in their own words.

The challenge isn’t finding questions to write about. It’s delivering the best answers — not one-line responses to check a box, but clear explanations, practical examples, and content grounded in real experience that reflects what sets your brand apart.

Dig deeper: Timeless SEO rules AI can’t override: 11 unshakeable fundamentals

Expertise is now a commodity: Lean into experience, authority, and trust

Publishing expert content still matters, but its role has changed. Today, anyone can generate “expert-sounding” articles with an LLM.

Whether that content ranks in Google is increasingly beside the point, as many users go straight to AI tools for answers.

As the “expertise” in E-E-A-T becomes table stakes, differentiation comes from what AI and competitors can’t easily replicate: experience, authority, and trust.

That means publishing:

  • Original insights and genuine thought leadership from people inside your company.
  • Real customer stories with measurable outcomes.
  • Transparent reviews and testimonials.
  • Evidence that your brand delivers what it promises.

This isn’t just about blog content. These signals should appear across your site — from your About page to product pages to customer support content. Every page should reinforce why a real person should trust your brand.

Stop paywalling your best content

I’m seeing more brands put their strongest content behind logins or paywalls. I understand why. Many need to protect intellectual property and preserve monetization. But as a long-term strategy, this often backfires.

If your content is truly valuable, the ideas will spread anyway. A subscriber may paraphrase it. An AI system may summarize it. A crawler may access it through technical workarounds. In the end, your insights circulate without attribution or brand lift.

When your best content is publicly accessible, it can be cited, linked to, indexed, and discussed. That visibility builds authority and trust over time.

In a search- and AI-driven ecosystem, discoverability often outweighs modest direct content monetization.

This doesn’t mean content businesses can’t charge for anything. It means being strategic about what you charge for. A strong model is to make core knowledge and thought leadership open while monetizing things such as:

  • Tools.
  • Community access.
  • Premium analysis or data.
  • Courses or certifications.
  • Implementation support.
  • Early access or deeper insights.

In other words, let your ideas spread freely and monetize the experience, expertise, and outcomes around them.

Stop viewing content as a necessary evil

I still see brands hiding content behind CSS “read more” links or stuffing blocks of “SEO copy” at the bottom of pages, hoping users won’t notice but search engines will.

Spoiler alert: they see it. They just don’t care.

Content isn’t something you add to check an SEO box or please a robot. Every word on your site must serve your customers. When content genuinely helps users understand, compare, and decide, it becomes an asset that builds trust and drives conversions.

If you’d be embarrassed for users to read your content, you’re thinking about it the wrong way. There’s no such thing as content that’s “bad for users but good for search engines.” There never was.

Embrace user-generated content

No article on long-tail SEO is complete without discussing user-generated content. I covered forums and Q&A sites in a previous article (see: The reign of forums: How AI made conversation king), and they remain one of the most efficient ways to generate authentic, unique content.

The concept is simple. You have an audience that’s already passionate and knowledgeable. They likely have more hands-on experience with your brand and industry than many writers you hire. They may already be talking about your brand offline, in customer communities, or on forums like Reddit.

Your goal is to bring some of those conversations onto your site.

User-generated content naturally produces the long-tail language marketing teams rarely create on their own. Customers:

  • Describe problems differently.
  • Ask unexpected questions.
  • Compare products in ways you didn’t anticipate.
  • Surface edge cases, troubleshooting scenarios, and real-world use cases that rarely appear in polished marketing copy.

This is exactly the kind of content long-tail SEO thrives on.

It’s also the kind of content AI systems and search engines increasingly recognize as credible, because it reflects real experience rather than the brand messaging many dismiss as inauthentic.

Brands that do this well don’t just capture long-tail traffic. They build trust, reduce support costs, and dominate long-tail searches and prompts.

In the age of AI-generated content, real human experience is one of the strongest differentiators.


The new SEO playbook looks a lot like the old one

For years, SEO has been shaped by the limits of the search box. Short queries and head terms dominated strategy, and long-tail content was often treated as optional.

LLMs are changing that dynamic. AI is expanding search, not eliminating it.

AI systems encourage people to express what they actually want to know. Those detailed prompts still need answers, and those answers come from the web.

That means the SEO opportunity is shifting from competing over a small set of keywords to becoming the best source of answers to thousands of specific questions.

Brands that succeed will:

  • Deeply understand their audience.
  • Publish genuinely useful content.
  • Build trust through real engagement and experience.

That’s always been the recipe for SEO success. But our industry has a habit of inventing complex tactics to avoid doing the simple work well.

Most of us remember doorway pages, exact match domains, PageRank sculpting, LSI obsession, waves of auto-generated pages, and more. Each promised an edge. Few replaced the value of helping users.

We’re likely to see the same cycle repeat in the AI era.

The reality is simpler. AI systems aren’t the audience. They’re intermediaries helping humans find trustworthy answers.

If you focus on helping people understand, decide, and solve problems, you’re already optimizing for AI — whatever you call it.

Dig deeper: Is SEO a brand channel or a performance channel? Now it’s both


Google Search Console AI-powered configuration rolling out

Over two months ago, Google began testing its AI-powered configuration tool, which lets you ask AI questions about the Google Search Console Performance reports and returns answers. Google is now rolling out this tool to everyone.

Google said on LinkedIn, “The Search Console’s new AI-powered configuration is now available to everyone!”

AI-powered configuration. The feature “lets you describe the analysis you want to see in natural language. Your inputs are then transformed into the appropriate filters and settings, instantly configuring the report for you,” Google said.

Rolling out now. If you log in to your Search Console account and click on the Performance report, you may see a note at the top that says “New! Customize your Performance report using AI.”

When you click it, you’re taken into the AI tool.

More details. As we reported earlier, Google said “The AI-powered configuration feature is designed to streamline your analysis by handling three key elements for you.”

  • Selecting metrics: Choose which of the four available metrics – Clicks, Impressions, Average CTR, and Average Position – to display based on your question.
  • Applying filters: Narrow down data by query, page, country, device, search appearance, or date range.
  • Configuring comparisons: Set up complex comparisons (like custom date ranges) without manual setup.

Why we care. This is only supported in the Performance report for Search results. It isn’t available for Discover or News reports, yet. Plus, it is AI, so the answers may not be perfect. But it can be fun to play with and get you thinking about things you may not have thought about yet.

So give it a try.


The Step-by-Step Guide to Designing Local Landing Pages That Convert

While the growth of artificial intelligence (AI) and global conveniences like Amazon have been great for society, there’s still an undercurrent of people returning to a local, more personal-feeling shopping experience.

But this “return to local” doesn’t change the fact that we still live in an internet age. Enter local search engine optimization (SEO) and landing pages.

Local SEO tends to work best for businesses with physical locations that require direct customer contact, but it can also work for virtual online businesses that don’t necessarily meet their customers before a business transaction takes place.

This is why local landing pages are so important. They can give customers the convenience of an online transaction while still providing the trust and personal feel of a local business—if your landing page is done right, of course.

Optimizing your landing page design with the proper elements can help you attract local customers to your business, increase lead generation, and boost conversion rates.

Key Takeaways

  • Local landing pages only work when they’re built for real locations and real intent. One page per city or service area, with localized keywords, metadata, and copy that matches how people actually search (“service + city” or “near me”).
  • Trust signals drive both rankings and conversions. Consistent NAP data, real reviews from nearby customers, local photos, and clear business details help you show up in map features and convince visitors to take action.
  • Content needs to feel local, not duplicated. Strong local landing pages include tailored copy, location-specific frequently asked questions (FAQs), social proof, and visuals that prove you serve that area, as opposed to generic pages with city names swapped in.
  • Mobile optimization is nonnegotiable for local SEO. Most local searches happen on mobile and convert fast. Pages must load quickly, display contact info above the fold, and make calling or getting directions effortless.
  • Schema markup and clear calls to action (CTAs) turn visibility into results. Structured data helps search engines and AI tools understand your business, while strong, localized CTAs guide users to call, book, or request a quote immediately.

Why Are Local Landing Pages Important?

Local landing pages help you show up when people search for services near them, and they’re key to winning conversions in your area.

Think about how people search: “best dentist in Austin,” “roof repair near me,” or “24/7 locksmith in Chicago.”

A local landing page.

If you don’t have dedicated pages that target these local queries, you’re invisible in search engine results. In fact, recent stats show 80% of U.S. consumers surveyed search for local businesses online once a week, with about one-third (32%) searching for local businesses multiple times a day. Google’s local algorithm prioritizes relevance and proximity, and a well-optimized local page checks both boxes.

But optimizing your local SEO and landing pages is about more than appeasing Google’s algorithm. These pages can actually convert.

When someone lands on a page with your local address and glowing reviews from nearby customers, trust builds fast. In fact, according to Uberall.com, 85% of customers visit local businesses within a week of discovering them online. 17% of those visit the very next day. That’s why smart local businesses treat these like high-converting landing pages, not just generic content dumps.

With large language models (LLMs) and AI tools pulling content to answer local questions, the need for detailed, well-structured local pages becomes even more critical. These models lean on content that clearly signals relevance and authority, something a basic homepage or generic service page won’t do.

An AI overview listing some of the best locksmiths in Chicago.

Bottom line: if local traffic matters to you, local landing pages need to be part of your SEO and conversion rate optimization (CRO) strategy.

A chart showing top ranking factors for the Local Pack.

Step 1: Identify where your customers are located.

Local landing pages only work when you know exactly which towns, neighborhoods, or service areas you’re trying to win. Otherwise, you can rack up traffic and still feel stuck because the visits come from places you can’t serve and don’t convert.

Start by answering two questions: Which locations do you want customers to come from? And which locations are they actually coming from today? Once you have both, planning local pages gets a lot easier.

Before you even open your reports, define your real-world service area. If you’re a storefront, your address needs to match how you operate in the real world (and be consistent everywhere it appears). If you’re a service-area business (such as a plumber, cleaner, or mobile vet), set a clear service area in your Google Business Profile so you don’t waste time targeting locations you can’t support.

Then, stop relying on a single data source. Use a few location signals together:

  • Google Analytics 4 (GA4) to spot city/region trends for session and key events (keep in mind location and demographics reporting is aggregated and can be limited by consent).
Demographics overview for Google Analytics 4.


  • Google Search Console to see the “intent layer”—which local queries are driving clicks and impressions.
Google Search Console's intent layer.


Finally, turn those insights into simple personas with local references, clear benefits, and social proof, so your page reads like it was made for that person in that place.

Step 2: Use localized keywords and metadata to create relevance.

Relevance still matters, but that doesn’t mean you can stuff a city name into every sentence and call it a day. Good local SEO matches what the searcher wants (intent) with what the page promises, starting right in the SERP.

Here’s the key difference: a local landing page usually targets transactional intent (“dentist in Austin,” “emergency plumber near me,” “book HVAC repair”), so your keyword + metadata strategy should read like a clear offer, not a watered-down blog headline.

A landing page for an Austin dentist.

Start with the basics that actually move the needle:

  • Title tag: Make a descriptive, concise, and unique title (Google can rewrite titles, but strong input helps). A simple formula works: Primary service + city + differentiator (and brand if it fits). 
  • Meta description: Google primarily builds snippets from on-page content, but it may use your meta description when it better matches the query. Write unique descriptions per page, include the “what” + “where,” and add a reason to click (pricing, availability, social proof). Avoid long strings of keywords. 
  • Meta keywords: Skip them. Google has said it ignores the keywords meta tag for web ranking.
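The title formula above can be expressed as a small template function. This is a sketch, assuming a roughly 60-character display width (a common rule of thumb for how much of a title Google shows) and illustrative inputs:

```python
def title_tag(service, city, differentiator, brand=None, max_len=60):
    """Build a title: primary service + city + differentiator (+ brand if it fits)."""
    parts = [f"{service} in {city}", differentiator]
    if brand:
        parts.append(brand)
    title = " | ".join(parts)
    # Drop the brand rather than let the title truncate mid-word in the SERP.
    if brand and len(title) > max_len:
        title = " | ".join(parts[:-1])
    return title

print(title_tag("Emergency Plumbing", "Phoenix", "24/7 Service", brand="Acme Plumbing"))
```

Generating titles from a template like this keeps every city page unique in structure while guaranteeing the service and location always lead, which is what the transactional searcher scans for.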

Now, a quick warning: if you’re cranking out dozens of near-identical city pages that funnel to similar destinations, that’s exactly what Google calls doorway abuse. And lists of cities jammed onto a page can fall into keyword stuffing territory. 

Step 3: Use consistent NAP data

NAP stands for name, address, and phone number, and it needs to be exactly the same everywhere your business appears online. That includes your local landing pages, your Google Business Profile, directories, and social platforms.

Why does this matter? Because Google (and users) rely on NAP consistency to trust your business is legit. Inconsistent info can hurt your rankings and knock you out of key local SERP features like the map pack.

An infographic on how to create NAP data.


Make sure your NAP is crawlable text, not embedded in an image. Add it in the footer or near your CTA, and match it letter-for-letter with your business listings. Even something small, like “Street” vs. “St.”, can throw off search engines.

If you serve multiple locations, each page should have its own unique NAP. No shortcuts here. Clean data builds trust, and trust drives clicks.
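You can catch many of these small mismatches programmatically before they reach your listings. A minimal normalization sketch, with an illustrative (not exhaustive) abbreviation map:

```python
import re

# Common USPS-style abbreviations; extend this map for your market (illustrative).
ABBREVIATIONS = {"st": "street", "ave": "avenue", "rd": "road", "ste": "suite"}

def normalize_nap(text):
    """Normalize a NAP string so 'Street' vs. 'St.' doesn't register as a mismatch."""
    words = re.sub(r"[^\w\s]", "", text.lower()).split()
    return " ".join(ABBREVIATIONS.get(w, w) for w in words)

def nap_matches(a, b):
    """Compare two NAP strings after normalization."""
    return normalize_nap(a) == normalize_nap(b)
```

For example, `nap_matches("123 Main St., Suite 4", "123 Main Street, Ste 4")` returns `True`, while a genuinely different address still fails the check.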

Step 4: Create and publish valuable content

Implementing local landing page design best practices in your content does two things: it helps you rank for location-specific searches and gives visitors a reason to trust you.

Start with copy that speaks directly to your audience in that area. Mention the city or neighborhood naturally, highlight the services you offer there, and include local differentiators like special hours or nearby service coverage. Make it feel personal.

Next, layer in content that builds credibility. Local reviews and case studies show real proof that your business delivers. Include names, star ratings, and even short quotes to make the social proof pop. Photos help, too. Real images of your team or completed projects add authenticity.

You should also include a brief FAQ section that answers questions specific to that location. Not only does this help your readers, but it also increases your chances of showing up in featured snippets or AI-generated results.


Step 5: Add an effective CTA

Every local landing page needs a clear call to action. Without it, you’re leaving conversions on the table.

The best CTAs guide visitors to take the next logical step, whether that’s calling your business, booking an appointment, or requesting a quote. To be effective, your CTA must feel local and relevant. “Get a Free Quote” is okay. “Get a Free Plumbing Quote in Phoenix” is better. It reinforces the location and makes the offer feel tailored.

Make sure your CTA stands out visually. Use buttons, bold text, and color contrast to grab attention. And don’t just put it at the bottom. Add it near the top of the page and repeat it throughout, especially after sections like testimonials or service descriptions.

If phone calls are your goal, use a click-to-call button—especially for mobile users. For forms, keep them short. Name, email, and one key question is usually enough.

Remember, your local landing page should do more than just inform; it should drive action. The CTA is where that happens.

Step 6: Optimize your local landing pages for mobile users

Mobile search isn’t just dominant; it drives action. In fact, 88% of mobile local business searches result in a call or visit within 24 hours, showing how urgent mobile intent has become.

Start with your page performance. Speed is critical. Slow mobile pages frustrate users and push them to competitors. Tools like Google PageSpeed Insights help identify bottlenecks, enabling you to improve load times by compressing images and deferring unused scripts. Fast pages mean better user experience (UX), which, in turn, leads to higher engagement.

Google PageSpeed Insights.

Responsive design is nonnegotiable. Your layout must adapt to screens of all sizes with easily readable text and minimal pop-up interference. Prioritize large, clickable CTAs, and ensure your contact info is visible without scrolling.

Mobile users are often on the go. Clearly display your NAP details front and center, ideally above the fold. Clean navigation and quick access to key info make it easier for people to act immediately.

Step 7: Add schema markup

Schema markup helps search engines understand the context of your content, and that’s a big deal for local SEO.

Schema markup in action.


When you add local business schema to your landing pages, you’re giving Google structured data that it can easily read. This increases the chances of your business showing up in rich results like map features or AI-generated summaries. It’s not just about visibility. It’s about making your information easier to find, trust, and act on.

At a minimum, include schema for your business name, address, phone number (NAP), hours of operation, and service area. This aligns perfectly with the on-page content you’ve already built. The more complete your schema, the more signals you’re sending to Google that your business is real, local, and helpful.

You can generate local business schema using tools like Google’s Structured Data Markup Helper or Schema.org. Then either embed it as JSON-LD in the <head> of your page or use a plugin if you’re on a platform like WordPress.
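If you’d rather script it than use a generator, you can build the JSON-LD from a Python dict and serialize it, which guarantees valid JSON. A minimal sketch; all values below are placeholders:

```python
import json

# Placeholder business data; replace with your real NAP, hours, and service area.
business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Plumbing Co.",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main Street",
        "addressLocality": "Phoenix",
        "addressRegion": "AZ",
        "postalCode": "85001",
    },
    "telephone": "+1-602-555-0100",
    "openingHours": "Mo-Fr 08:00-18:00",
    "areaServed": "Phoenix metro area",
}

# Paste the output inside <script type="application/ld+json"> in the page's <head>.
json_ld = json.dumps(business, indent=2)
print(json_ld)
```

Because the markup mirrors the on-page NAP exactly, any normalization you apply to the visible address should be applied here too.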

Don’t forget to test it. Use Google’s Rich Results Test to make sure your markup is working as intended.

It takes a few extra steps, but schema markup is one of the easiest technical wins you can add to a local landing page. It won’t guarantee rankings, but it gives your content a better shot at being seen and trusted.

FAQs

How do I create content for local landing pages for SEO?

Start with localized keywords (e.g., “[service] in [city]”) and ensure they appear naturally in your headlines and throughout the copy. Then, write content that actually helps local visitors: include location-specific details, highlight nearby landmarks, and speak directly to the needs of that community. Bonus points if you add customer reviews or links to local pages.

How to make local SEO landing pages

Structure each page around one location or service area with unique URLs (like /plumbing-los-angeles). Don’t forget your Google Business Profile and local schema markup. They help search engines match your page with nearby searchers.
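A slug pattern like that can be generated consistently with a small helper. This is a sketch; the service-city ordering is one common convention, not a requirement:

```python
import re

def location_slug(service, city):
    """Build a lowercase, hyphenated URL path like /plumbing-los-angeles."""
    text = f"{service} {city}".lower()
    # Collapse anything that isn't a letter or digit into single hyphens.
    return "/" + re.sub(r"[^a-z0-9]+", "-", text).strip("-")

print(location_slug("Plumbing", "Los Angeles"))  # → /plumbing-los-angeles
```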

How to optimize landing page for local SEO

Use consistent NAP (name, address, phone) info across the page and the web. Add a local map, embed reviews from customers in that area, and link internally to relevant services. Make sure your page loads fast and works well on mobile because that’s where most local searches happen.

Conclusion

To maximize your search results and lead generation, make sure that you design separate landing pages for each city that you’re targeting.

Above all, create unique, location-specific copy for your landing pages. Building a local landing page requires an investment. It could be the investment of your time, money, or both.

However, it’s become a lot easier these days because of the plethora of landing page creators and landing page templates.


Why Entity-Based SEO is a New Way of Thinking About Optimization

Search engine optimization (SEO) was once defined by the number of keywords and synonyms scattered across your content. If you used the right word enough times, you’d rank.

Those days are long gone.

Since the launch of its Knowledge Graph in 2012, Google has been moving away from literal text matching toward deep semantic understanding. 

Search engines no longer evaluate pages as collections of words. They evaluate meaning.

This goes beyond Google and search engine results pages (SERPs). Modern discovery operates on entities—distinct people, places, brands, and concepts connected through context and relationships. Search systems now interpret queries by mapping how these entities relate rather than counting keyword usage.

That’s where entity SEO comes in. Entity-based structures set the groundwork for the more intuitive search results we see today in AI platforms and large language models (LLMs). Grouping queries around one central “thing” gives these platforms a clear reference point they can connect to related concepts.

Ultimately, entity SEO helps these platforms research and provide information in a more human way. It gives us the answers we want quickly, and it powers Google’s more complex search features that take our query results beyond a simple list of blue links.

In this article, we’ll explain what entities are, how to use them, and how they’ll continue to shape the future of SEO.

Key Takeaways

  • Entity SEO focuses on clearly defined people, brands, products, and concepts and the relationships between them, rather than isolated keywords.
  • When Google understands the primary entity behind a page, it can rank that page across a broader range of relevant queries without exact-match targeting.
  • Site structure communicates meaning. Topic clusters, internal links, and consistent terminology help search engines map how content fits together.
  • AI-driven search relies on entity context to disambiguate terms and interpret intent, not keyword strings alone.
  • Maintaining consistent signals across pages and trusted third-party profiles strengthens entity recognition and long-term visibility.

What Is Entity-Based SEO?

Entity-based SEO uses context (not just keywords) to help users find exactly what they’re looking for.

You can see this shift in action every time you type a query. For example, when you type a common name like “Malcolm” into a search bar, Google doesn’t just look for those seven letters. It tries to determine which entity you’re looking for:

A Google search dropdown for the name “Malcolm,” showing a Knowledge Panel for author Malcolm Gladwell alongside various entity-based search suggestions like “Malcolm in the Middle” and “Malcolm X.”

Google offers suggestions to searchers to provide immediate context. It speeds up the search for those looking for popular figures like Malcolm Gladwell or Malcolm X, and it prompts others to add more specific details if their intended “thing” isn’t listed.

Once you select a specific entity, the search engine stops scanning for keywords and starts delivering a comprehensive Knowledge Panel.

A Google search results page for "Malcolm Gladwell" showcasing a comprehensive Knowledge Panel. The layout displays the subject as a defined entity with categorized data points, including a photo gallery, biographical details (age, parents), linked YouTube videos, and a list of his published books, like "The Tipping Point" and "Revenge of the Tipping Point."

This layout displays the subject as a defined entity, grouping biographical details, books, and videos into a single source. While this shift makes search more intuitive for users, it makes things slightly more complicated for content creators. 

Here are three ways entity-based SEO has changed the landscape:

  1. AI visibility: Entity SEO revolves around an entity record. These records compile dozens of data points about a particular entity, making the information easy for AI platforms to access. Brands that structure their data properly are far more visible in LLM search.
  2. Better mobile capabilities: Entity understanding improved mobile search results and supported Google’s shift to mobile-first indexing.
  3. Translation improvements: Entities can be found regardless of homonyms, synonyms, and foreign-language use, thanks to context clues. For instance, a search for “red” can include results for “rouge” or “rojo” if the searcher’s settings allow it.

Let’s dig a little deeper into entity records to understand how they connect to LLMs and search engines like Google.

To start, let’s look at a hypothetical entity record about Taylor Swift:

A hypothetical entity record.

(Image Source)

This makes it clear how entity SEO works in practice. Search engines don’t rely on a single page or keyword to understand a brand. They aggregate structured signals across the web to build a unified view of the entity.

The reason behind this is that search systems and LLMs don’t read content the way humans do. They extract discrete facts, attributes, and relationships, then assemble them into a coherent understanding.

The example above illustrates how an entity can be broken into clear, machine-readable components.

Keywords vs. Entities: What’s the Difference?

Entities might sound similar to keywords, but they’re actually quite different. Here’s how they differ and why those differences are so important.

Keywords

Keywords are words or phrases people use to express intent in search. They take many forms, including questions, sentences, or single words.

For example, users looking for makeup tutorials might search for “makeup tutorial,” “smokey eye,” “how to do a smokey eye,” or something similar.

Google search results page for “how to do a smokey eye,” showing a video carousel with multiple YouTube makeup tutorials and a step-by-step blog result below.

Today, keywords tend to work best as demand signals rather than quotas to be filled. They show how users frame their intent, whether they want to learn, compare, buy, or solve a problem, and give you language to match your content to that intent.

That’s why long-tail queries and modifiers (“best,” “near me,” “for beginners,” “price,” “vs.”) are still gold. 

These modifiers provide the intent that tells a search engine how to connect a user to your brand. Your goal is to rank for these high-intent terms to drive organic traffic and establish your site as the definitive source of truth for your niche. 

Long-tail and informational (what, how, why) keywords also help you line up your content with where search is heading. 

Data shows that about 90 percent of influential SERP features, like AI summaries and “People also ask,” come from queries like these, making them useful inputs for LLM-powered workflows like content production plans based on real query language.

If your page answers the query fully and clearly, you’re using keywords the modern way.

Entities

Google defines an entity as “a thing or concept that is singular, unique, well-defined, and distinguishable.” They can be people, places, products, companies, or abstract concepts. 

What makes entities powerful is not just what they are, but how they connect. They are defined by their relationships to other entities, which helps search engines and LLMs understand how each concept fits into the “big picture.”

Once Google is confident about what your page is about, it can rank you for searches you never explicitly targeted. That happens because entities carry built-in relationships, including attributes, categories, synonyms, and commonly associated concepts.

This is where entity SEO really starts to differ from keyword-based optimization. Essentially, entity SEO prioritizes mentions and human discussion over keywords. 

For example, a search for the word “apple” could result in pages about the fruit or pages about the company. As interesting as both topics are, reading about iPhones probably won’t be too helpful if you’re trying to figure out whether apple seeds are indeed poisonous. 

You need to add some keywords or modifiers to give crawlers and LLMs context. 

A side-by-side comparison illustrating entity disambiguation. On the left is a realistic photo of a red apple fruit; on the right is the minimalist black logo of Apple Inc., the technology company.

This is also why pages sometimes rank for “weird” keywords. If your content clearly describes the entity—what it is or related terms—Google can connect you to unexpected queries that share the same underlying intent. This idea is often described in SEO circles as latent semantic indexing (LSI).

That’s not magic. It’s entity understanding plus context signals.

For entities to be useful, search engines map them into knowledge graphs, which are structured systems that connect related information across the web and make retrieval more reliable.

As of May 2024, Google’s Knowledge Graph contains about 54 billion entities and 1.6 trillion facts about them. Not only do these data points help answer complex informational or long-tail queries, but they also power Google’s Knowledge Panels. Here’s an example:

A Google Search Results Page for "Eddie Aikau" featuring a Knowledge Panel highlighted in a red box.

(Image Source)

To help search engines or LLMs make sense of which entity fits your query, you want the pages of your website to behave like solid references. Spell out defining details (names, dates, specs, locations), connect related subtopics, and use consistent terminology. 

Add supporting cues like internal links to your own deeper pages and clear headings that map to common questions. Structured data is also key here, making it easier for engines to see specific information that you deem to be important on a given page, like product information, locations, or other items.
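As a rough illustration, an Organization entity marked up this way might look like the following (built in Python for readability; every name and URL below is a placeholder):

```python
import json

# Hypothetical organization entity -- names and URLs are placeholders.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Corp",
    "url": "https://www.example.com",
    "description": "Example Corp builds accounting software for small businesses.",
    # sameAs ties this entity to its profiles on trusted third-party sites,
    # which is exactly the kind of cross-web consistency signal described above.
    "sameAs": [
        "https://www.linkedin.com/company/example-corp",
        "https://en.wikipedia.org/wiki/Example_Corp",
    ],
}

print(json.dumps(organization, indent=2))
```

The `sameAs` property in particular helps search engines resolve your pages to one unambiguous entity rather than several lookalikes.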

How Do Entities and Keywords Work Together?

An effective SEO strategy recognizes that keywords are the signals, but entities are the destination. On-page, you can treat your website as a mini knowledge graph that uses keywords to link to different pages on your site. 

You can further validate your brand by connecting your content to established knowledge graphs like Wikipedia or LinkedIn, which are high in experience, expertise, authoritativeness, and trust (E-E-A-T). While this won’t directly affect your page rank, it can improve your page’s authority in search results.

Practically, this means your keywords should map to specific entity details (features, use cases, comparisons, FAQs, structured data). The clearer those entity connections are, the easier it is for search engines to match your page to related searches. That’s especially the case for those long-tail ones where intent is clear, but the wording is inconsistent.

How To Start Building Up Your Entity-Based SEO

The biggest upside of entity clarity is that it helps your whole site act like a connected knowledge hub. When search systems recognize your brand, products, services, locations, and experts as distinct entities, they can more accurately map your content to complex user intent.

Content Depth and Topical Relevance

Entity-based SEO nudges you away from thin, keyword-targeted pages toward deep, comprehensive content. Instead of fragmented articles, build authoritative topic clusters that cover definitions, use cases, and FAQs. 

This depth reinforces the “identity” of your subject matter, signaling to search engines that your site is the definitive source for that specific entity across all related queries.

Strengthening Relationships via Internal Linking

Internal linking is the connective tissue of your entity strategy. 

Consistently linking supporting content to a central entity page explicitly defines relationships for search engines. That can be as simple as connecting which services belong to which categories or which authors are connected to which brands. 

This internal relationship graph is essential for earning broader semantic visibility and is a core component of reputation management, as it ensures search engines never lose the thread of who you are.

Consistency as a Signal of Authority

Your entity becomes much more powerful when your brand and authors remain consistent across the web. Using the same naming conventions, professional bios, and expertise signals makes it easier for search systems to verify your “identity.” 

Consistency cuts through ambiguity to make sure your authority is attributed to the correct entity. And that goes a long way in preventing your brand from being confused with unrelated concepts.

Trust Signals and Entity Clarity

Trust signals like reviews and citations match up perfectly with entity clarity. Clear, consistent data—like name, address, phone number (NAP) details—helps search engines attach your content to the right real-world entity for local SEO.

Modern algorithms prioritize clear signals like these when deciding which brands to feature in high-stakes search results and AI-generated overviews.

The Role of AI in Entity SEO

AI-driven search doesn’t “read” the web like a human. It builds a model of the world. 

That model is made of entities (people, brands, products, places, concepts) and the connections between them.

That’s why entities are foundational. A keyword is just a string of text. An entity has a unique identity. 

When Google sees “Jaguar,” it has to decide between the animal, the car brand, and the NFL team. AI makes that call by looking at entity context—nearby terms, linked pages, structured data, and known relationships in systems like the Knowledge Graph.

The screenshots below show how that entity resolution plays out in real search results. The same keyword produces entirely different SERPs based on which entity Google identifies as the best match.

Google search results for “jaguar animal,” showing an animal Knowledge Panel with images, facts, and Wikipedia information about the jaguar species.

Google search results for “jaguar car,” displaying a brand Knowledge Panel for Jaguar as a luxury vehicle manufacturer with models, company details, and images.

This is also how AI gets better at interpreting intent. 

Someone searching “best running shoes for flat feet” isn’t asking for a dictionary definition of shoes. They’re signaling a problem, a use case, a set of constraints. 

Entity relationships help AI connect that query to brands, product categories, medical concepts, reviews, and comparisons before picking results that match the implied goal.

You can see the shift in your data. In Google Search Console, queries often widen into themes, with multiple variations driving impressions to the same page. 

In the SERPs, features like Knowledge Panels, AI Overviews, and “People also ask” reflect entity understanding, not exact-match phrasing. Content performance aligns better with topic clusters and user journeys than with single keywords.

Entity SEO future-proofs your content by aligning with how AI systems learn. 

If your pages clearly define the entities you cover, connect them with strong internal linking, and stay consistent in terminology and positioning, they’re easier to interpret, categorize, and reuse as search evolves.

How to Shift Your Strategy to Entity-Based SEO

Understanding entity SEO is only useful if it changes how you work. Here are the concrete changes that move a keyword-first strategy toward an entity-based one.

Identify Core Entities Tied to the Business

A core entity is a small, intentional set of “things” that you want Google to associate with your brand. It goes beyond what you want to rank for. 

Start by pressure testing your site against three questions: 

  • Who is this? (the brand/author entity)
  • What do they do? (the offering entity)
  • Who do they serve? (the audience/market entity)

If the answer to any of these feels fuzzy, your entities are too broad or buried within your content.

Keep core entities limited and intentional. Pick the ones that define your positioning, then give each one a clear home on the site. 

An example structure might be: a homepage for the brand, service pages for offerings, an about page for brand/author credibility, and supporting content that links back to those pillars.

Build Topic Clusters Around Those Entities

One page can define the entity, but topic clusters give it depth and context. The goal is coverage, not volume.

For each core entity, build one primary page that acts as the hub (your “entity’s home”). Then publish supporting pages that answer related questions, common use cases, comparisons, and next-step topics that your audience actually searches for. This is known as the hub and spoke model.

Your supporting content should do three things: 

  • Answer real follow-up questions.
  • Reinforce the same entity from different angles.
  • Link back to the hub page with clear, consistent anchor text. 

That internal structure is what helps search engines connect the dots.
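As a quick sanity check, you can model a cluster as a simple link map and flag spokes that never point back to the hub. A minimal Python sketch, with made-up pages:

```python
# Hypothetical site cluster: each page maps to the internal links it contains.
cluster = {
    "/credit-cards": ["/cash-back", "/travel-rewards", "/balance-transfer"],
    "/cash-back": ["/credit-cards", "/travel-rewards"],
    "/travel-rewards": ["/credit-cards"],
    "/balance-transfer": [],  # orphaned spoke: never links back to the hub
}

hub = "/credit-cards"

def missing_hub_links(cluster, hub):
    """Return spoke pages that fail to link back to the hub."""
    return [page for page, links in cluster.items()
            if page != hub and hub not in links]

print(missing_hub_links(cluster, hub))  # -> ['/balance-transfer']
```

Run against a real crawl export, a check like this surfaces the orphaned spokes that weaken the cluster.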

Reinforce Entities Through Internal Links and Content Structure

Internal links are how you “wire” entities together across your site. Structure matters as much as the words on the page.

Link pages with related topics, not whatever feels convenient in the moment. If two articles support the same entity, connect them. If a page is a subtopic, point it to the hub and to other closely related subtopics.

NerdWallet’s credit cards hub shows how internal linking reinforces entities, with a single category page connecting related subtopics like cash back, travel rewards, and balance transfers under one clear concept.

NerdWallet credit cards hub page showing a central “Credit Cards” category with multiple subcategory links, including cash back, travel rewards, balance transfer, and business credit cards.

Keep your anchor text consistent and descriptive. And use the entity name (or a tight variation) instead of vague links like “click here” or “learn more.”

Make sure your cluster works both ways. In other words, supporting pages should link up to the main entity page, and related supporting pages should link to each other where it genuinely helps the reader move to the next logical question.

Maintain Entity Consistency Across the Site and Beyond

One way to leverage entity-based SEO is to list your business on directories across the internet. These directory sites are a popular data source for search engine crawlers and LLMs. Your Google Business Profile, for example, is used as a data source for the Google Knowledge Graph.

Other listing services, such as Yelp, can also help create strong, authoritative backlinks for your brand and define a well-known entity. 

Listing sites may vary by location, so do your research when deciding where to list. Additionally, be sure to choose sites with high domain authority to improve your search engine standing. 

Ultimately, consistency is key. Listing your business in multiple locations across the internet eventually turns entity signals into trust signals, but it’s important to list your business carefully.

Avoid using multiple names for the same entity and conflicting descriptions from page to page. Also, make sure your listings stay focused on topics related to entities in your industry. Don’t lose focus or drift to unrelated topics.  

Prioritize Brand Building

Brand building is another essential tactic in entity-based SEO. Offline brand signals should be mirrored online wherever search engines and AI systems look for training data.

This includes your about page, author bios, case studies, podcast/webinar pages, and third-party profiles (Crunchbase, G2, LinkedIn, industry directories, etc.). For LLM optimization, you want consistent, crawlable signals in the places models and search engines pull from. 

Use the same brand description, key services, and leadership names everywhere. That consistency makes it easier for systems to connect the dots.

Common Entity SEO Mistakes

Entity SEO fails when you treat it like a checklist instead of a system. These are some of the mistakes that do the most damage:

  • Treating schema as a shortcut. Markup helps Google label what’s on the page. It doesn’t create authority. If the content is thin or unclear, schema just highlights that faster.
  • Publishing thin entity pages. A quick definition page won’t earn trust. Weak entity pages struggle to rank, and they don’t attract links or support clusters.
  • Chasing unrelated entities. Dropping in trendy topics or random brands dilutes relevance. It can also confuse search engines about what you actually do.
  • Ignoring internal linking and structure. Entities need connections. If supporting pages don’t link to the hub (and to each other where it makes sense), Google can’t map the relationship.
  • Sending inconsistent signals. Mixed terminology, shifting positioning, and conflicting service descriptions make your entity harder to identify.

FAQs

What are entities in SEO?

Entities are the “things” search engines recognize—people, places, brands, concepts, and more. Unlike keywords, entities have context and relationships. Google uses them to understand meaning and intent. For example, “Amazon” as a company is an entity, and it’s different from the Amazon rainforest. 

How do you find SEO entities?

Start with your main topic and use tools like Google’s Knowledge Graph, Wikipedia, and Ubersuggest to identify related entities. Look for people, brands, terms, and categories commonly associated with your topic. Also, check competitor content. What entities are they connecting to? Use this to build a structured, semantically rich content plan. 

What is entity SEO?

Entity SEO is the practice of optimizing content around recognizable concepts, not just keywords, so search engines better understand and rank your site.

Conclusion

Entity SEO isn’t some advanced trick. It’s how modern search actually works. 

Search engines no longer rely on traditional keyword research alone. They map concepts, understand relationships, and evaluate authority across connected topics.

If you want to stay visible long term, your content needs more than keywords. 

Clarity and a strong topical focus are the way to go. That’s how you build trust with Google and future-proof your branding strategy as AI continues to reshape the search landscape.

Leaning into entity-focused optimization builds a durable presence that lines up with how users search and how Google works.


How to make automation work for lead gen PPC

B2B advertising faces a distinct challenge: most automation tools weren’t built for lead generation.

Ecommerce campaigns benefit from hundreds of conversions that fuel machine learning. B2B marketers don’t have that luxury. They deal with lower conversion volume, longer sales cycles, and no clear cart value to guide optimization.

The good news? Automation can still work.

Melissa Mackey, Head of Paid Search at Compound Growth Marketing, says the right strategy and signals can turn automation into a powerful driver of B2B leads. Below is a summary of the key insights and recommendations she shared at SMX Next.

The fundamental challenge: Why automation struggles with lead gen

Automation systems are built for ecommerce success, which creates three core obstacles for B2B marketers:

  • Customer journey length: Automation performs best with short journeys. A user visits, buys, and checks out within minutes. B2B journeys can last 18 to 24 months. Offline conversions only look back 90 days, leaving a large gap between early engagement and closed revenue.
  • Conversion volume requirements: Google’s automation works best with about 30 leads per campaign per month. Google says it can function with less, but performance is often inconsistent below that level. Ecommerce campaigns easily hit hundreds of monthly conversions. B2B lead gen rarely does.
  • The cart value problem: In ecommerce, value is instant and obvious. A $10 purchase tells the system something very different than a $100 purchase. Lead generation has no cart. True value often isn’t clear until prospects move through multiple funnel stages — sometimes months later.

The solution: Sending the right signals

Despite these challenges, proven strategies can make automation work for B2B lead generation.

Offline conversions: Your number one priority

Connecting your CRM to Google Ads or Microsoft Ads is essential for making automation work in lead generation. This isn’t optional. It’s the foundation. If you haven’t done this yet, stop and fix it first.

In Google Ads’ Data Manager, you’ll find hundreds of CRM integration options. The most common B2B setups include:

  • HubSpot and Salesforce: Both offer native, seamless integrations with Google Ads. Setup is simple. Once connected, customer stages and CRM data flow directly into the platform.
  • Other CRMs: If you don’t use HubSpot or Salesforce, you can build a custom data table with only the fields you want to share. Use connectors like Snowflake to send that data to Google Ads while protecting user privacy and still supplying strong automation signals.
  • Third-party integrations: If your CRM doesn’t integrate directly, tools like Zapier can connect almost anything to Google Ads. There’s a cost, but the performance gains typically pay for it many times over.

Embrace micro conversions with strategic values

Micro conversions signal intent. They show a “hand raiser” — someone engaged on your site who isn’t an MQL yet but is clearly interested.

The key is assigning relative value to these actions, even when you don’t know their exact revenue impact. Use a simple hierarchy to train automation what matters most:

  • Video views (value: 1): Shows curiosity, but qualification is unclear.
  • Ungated asset downloads (value: 10): Indicates stronger engagement and added effort.
  • Form fills (value: 100): Reflects meaningful commitment and willingness to share personal information.
  • Marketing qualified leads (value: 1,000): The highest-value signal and top optimization priority.

This value structure tells automation that one MQL matters more than 999 video views. Without these distinctions, campaigns chase impressive conversion rates driven by low-value actions — while real leads slip through the cracks.
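Here’s what that hierarchy might look like as a simple value map, using the illustrative values above (a Python sketch; the event names are hypothetical):

```python
# Relative conversion values from the hierarchy above (illustrative, not revenue).
CONVERSION_VALUES = {
    "video_view": 1,
    "ungated_download": 10,
    "form_fill": 100,
    "mql": 1000,
}

def total_value(events):
    """Sum the optimization value of a list of tracked conversion events."""
    return sum(CONVERSION_VALUES[e] for e in events)

# One MQL outweighs even 999 video views -- the point of the hierarchy.
assert total_value(["mql"]) > total_value(["video_view"] * 999)

print(total_value(["video_view", "form_fill", "mql"]))  # -> 1101
```

The exact numbers matter less than the order-of-magnitude gaps between tiers, which is what steers the bidding algorithm toward real leads.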

Making Performance Max work for lead generation

You might dismiss Performance Max (PMax) for lead generation — and for good reason. Run it on a basic maximize conversions strategy, and it usually produces junk leads and wastes budget.

But PMax can deliver exceptional results when you combine conversion values and offline conversion data with a Target ROAS bid strategy.

One real client example shows what’s possible. They tracked three offline conversion actions — leads, opportunities, and customers — and valued customers at 50 times a lead. The results were dramatic:

  • Leads increased 150%
  • Opportunities increased 350%
  • Closed deals increased 200%

Closed deals became the campaign’s top-performing metric because they reflected real, paying customers. The key difference? Using conversion values with a Target ROAS strategy instead of basic maximize conversions.

Campaign-specific goals: An underutilized feature

Campaign-specific goals let you optimize campaigns for different conversion actions, giving you far more control and flexibility.

You can set conversion goals at the account level or make them campaign-specific. With campaign-specific goals, you can:

  • Run a mid-funnel campaign optimized only for lead form submissions using informational keywords.
  • Build audiences from those form fills to capture engaged prospects.
  • Launch a separate campaign optimized for qualified leads, targeting that warm audience with higher-value offers like demos or trials.

This approach avoids asking someone to “marry you on the first date.” It also keeps campaigns from competing against themselves by trying to optimize for conflicting goals.

Portfolio bidding: Reaching the data threshold faster

Portfolio bidding groups similar campaigns so you can reach the critical 30-conversions-per-month threshold faster.

For example, four separate campaigns might generate 12, 11, 0, and 15 conversions. On their own, none qualify. Grouped into a single portfolio, they total 38 conversions — giving automation far more data to optimize against.

You may still need separate campaigns for valid reasons — regional reporting, distinct budgets, or operational constraints. Portfolio bidding lets you keep that structure while still feeding the system enough volume to perform.
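The arithmetic is easy to sketch with the hypothetical campaign counts above:

```python
MONTHLY_THRESHOLD = 30  # rough conversions/month Google's automation wants

# Hypothetical monthly conversion counts for four separate campaigns.
campaigns = {"north": 12, "south": 11, "east": 0, "west": 15}

# Individually, none of the campaigns reach the threshold...
assert all(count < MONTHLY_THRESHOLD for count in campaigns.values())

# ...but pooled into one bid portfolio, they clear it comfortably.
portfolio_total = sum(campaigns.values())
print(portfolio_total, portfolio_total >= MONTHLY_THRESHOLD)  # -> 38 True
```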

Bonus benefit: Portfolio bidding lets you set maximum CPCs. This prevents runaway bids when automation aggressively targets high-propensity users. This level of control is otherwise only available through tools like SA360.

First-party audiences: Powerful targeting signals

First-party audiences send strong signals about who you want to reach, which is critical for AI-powered campaigns.

If HubSpot or Salesforce is connected to Google Ads, you can import audiences and use them strategically:

  • Customer lists: Use them as exclusions to avoid paying for existing customers, or as lookalikes in Demand Gen campaigns.
  • Contact lists: Use them for observation to signal ideal audience traits, or for targeting to retarget engaged users.

Audiences make it much easier to trust broad match keywords and AI-driven campaign types like PMax or AI Max — approaches that often feel too loose for B2B without strong audience signals in place.

Leveraging AI for B2B lead generation

AI tools can significantly improve B2B advertising efficiency when you use them with intent. The key is remembering that most AI is trained on consumer behavior, not B2B buying patterns.

The essential B2B prompt addition

Always tell the AI you’re selling to other businesses. Start prompts with clear context, like: “You’re a SaaS company that sells to other businesses.” That single line shifts the AI’s lens away from consumer assumptions and toward B2B realities.

Client onboarding and profile creation

Use AI to build detailed client profiles by feeding it clear inputs, including:

  • What you sell and your core value.
  • Your unique selling propositions.
  • Target personas.
  • Ideal customer profiles.

Create a master template or a custom GPT for each client. This foundation sharpens every downstream AI task and dramatically improves accuracy and relevance.

Competitor research in minutes, not hours

Competitive analysis that once took 20–30 hours can now be done in 10–15 minutes. Ask AI to analyze your competitors and break down:

  • Current offers
  • Positioning and messaging
  • Value propositions
  • Customer sentiment
  • Social proof
  • Pricing strategies

AI delivers clean, well-structured tables you can screenshot for client decks or drop straight into Google Sheets for sorting and filtering. Use this insight to spot gaps, uncover opportunities, and identify clear strategic advantages.

Competitor keyword analysis

Use tools like Semrush or SpyFu to pull competitor keyword lists, then let AI do the heavy lifting. Create a spreadsheet with columns for each competitor’s keywords alongside your client’s keywords. Then ask the AI to:

  • Identify keywords competitors rank for that you don’t to uncover gaps to fill.
  • Identify keywords you own that competitors don’t to surface unique advantages.
  • Group keywords by theme to reveal patterns and inform campaign structure.

What once took hours of pivot tables, filtering, and manual cleanup now takes AI about five minutes.
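The gap analysis itself is just set arithmetic, which is why AI (or a few lines of code) handles it so quickly. A toy sketch, assuming you've exported each keyword list from a tool like Semrush or SpyFu (all keywords below are made up):

```python
# Keyword lists exported from your research tool (toy data).
ours = {"crm software", "b2b crm", "pipeline management"}
theirs = {"crm software", "sales automation", "pipeline management", "lead scoring"}

gaps = theirs - ours        # competitor keywords we don't target: gaps to fill
advantages = ours - theirs  # keywords only we own: unique advantages
shared = ours & theirs      # overlap: useful for grouping by theme

print(sorted(gaps))  # ['lead scoring', 'sales automation']
```

With real exports you'd load the columns from a spreadsheet instead of hard-coding them, but the three set operations map one-to-one to the three questions above.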

Automating routine tasks

  • Negative keyword review: Create an AI artifact that learns your filtering rules and decision logic. Feed it search query reports, and it returns clear add-or-ignore recommendations. You spend time reviewing decisions instead of doing first-pass analysis, which makes SQR reviews faster and easier to run more often.
  • Ad copy generation: Tools like RSA generators can produce headlines and descriptions from sample keywords and destination URLs. Pair them with your custom client GPT for even stronger starting points. Always review AI-generated copy, but refining solid drafts is far faster than writing from scratch.
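The negative keyword first pass boils down to encoding your filtering rules so a machine applies them consistently. A minimal sketch, where `BLOCK_TERMS` and `ALLOW_TERMS` are hypothetical rules you'd replace with your own decision logic:

```python
# Hypothetical first-pass rules for a search query report (SQR).
BLOCK_TERMS = {"free", "jobs", "salary", "diy"}    # assumed low-intent triggers
ALLOW_TERMS = {"software", "platform", "pricing"}  # assumed B2B intent signals

def first_pass(query: str) -> str:
    """Return an add-or-ignore recommendation for one search query."""
    words = set(query.lower().split())
    if words & BLOCK_TERMS:
        return "add as negative"
    if words & ALLOW_TERMS:
        return "ignore"
    return "needs review"

print(first_pass("free crm jobs"))         # add as negative
print(first_pass("crm platform pricing"))  # ignore
```

An AI artifact plays the same role with fuzzier rules: it learns the logic from your past decisions, and you spend your time reviewing its recommendations instead of doing the first pass yourself.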

Experiments: testing what works

The Experiments feature is widely underused. Put it to work by testing:

  • Different bid strategies, including portfolio vs. standard
  • Match types
  • Landing pages
  • Campaign structures

Google Ads automatically reports performance, so there’s no manual math. It even includes insight summaries that tell you what to do next — apply the changes, end the experiment, or run a follow-up test.

Solutions: Prebuilt scripts made easy

Solutions are prebuilt Google Ads scripts that automate common tasks, including:

  • Reporting and dashboards
  • Anomaly detection
  • Link checking
  • Flexible budgeting
  • Negative keyword list creation

Instead of hunting down scripts and pasting code, you answer a few setup questions and the solution runs automatically. Use caution with complex enterprise accounts, but for simpler structures, these tools can save a significant amount of time.

Key takeaways

Automation wasn’t built for lead generation, but with the right strategy, you can still make it work for B2B.

  • Send the right signals: Offline conversions with assigned values aren’t optional. First-party audiences add critical targeting context. Together, these signals make AI-driven campaigns work for B2B.
  • AI is your friend: Use AI to automate repetitive work — not to replace people. Take 50 search query reports off your team’s plate so they can focus on strategy instead of tedious analysis.
  • Leverage platform tools: Experiments, Solutions, campaign-specific goals, and portfolio bidding are powerful features many advertisers ignore. Use what’s already built into your ad platforms to get more out of every campaign.

Watch: It’s time to embrace automation for B2B lead gen 
