Web Design and Development San Diego

Google Ads support now requires account change authorization

Advertisers contacting Google Ads support may now need to grant explicit authorization before they can even submit a help request — giving a Google specialist permission to access and make changes directly inside their account.

Here’s what’s happening. Users are first routed to a beta AI chat. If they opt to submit a support form instead, they must tick an “Authorisation” box. The wording allows a Google Ads specialist, on behalf of the company, to reproduce and troubleshoot issues by making changes directly in the account.

The fine print is clear. Google doesn’t guarantee results. Any adjustments are made at the advertiser’s own risk. And the advertiser remains solely responsible for the impact on campaign performance and spending.

Why we care. The required checkbox shifts more responsibility onto advertisers at a time when automation and AI already limit hands-on control. If support makes changes, the performance and spend risk still sits with the advertiser.

Between the lines. This creates a trade-off between speed and control. Granting access could accelerate troubleshooting, but it also opens the door to account-level changes that may affect live campaigns — without any assurance of improved outcomes.

The bottom line. Getting support may now mean temporarily handing over the keys — while keeping full accountability for whatever happens next.

First seen. This new caveat to getting support was spotted by PPC specialist Arpan Banerjee, who shared the message on LinkedIn.

Content scoring tools work, but only for the first gate in Google’s pipeline

Most SEO professionals give Google too much credit. We assume Google understands content the way we do — that it reads our pages, grasps nuance, evaluates expertise, and rewards quality in some deeply intelligent way. The DOJ antitrust trial told a different story.

Under oath, Google VP of Search Pandu Nayak described a first-stage retrieval system built on inverted indexes and postings lists, traditional information retrieval methods that predate modern AI by decades. Court exhibits from the remedies phase reference “Okapi BM25,” the canonical lexical retrieval algorithm that Google’s system evolved from. The first gate your content has to pass through isn’t a neural network. It’s word matching.

Google does deploy more advanced AI further down the pipeline, including BERT-based models, dense vector embeddings, and entity understanding systems. But those operate only on the much smaller candidate set traditional retrieval produces. We’ll walk through where each technology enters the process.

This matters for content optimization tools like Surfer SEO, Clearscope, and MarketMuse. Their core methodology — a mix of TF-IDF analysis, topic modeling, and entity evaluation — maps directly to how that first retrieval stage scores documents. The tools are built on the right foundation. The problem is that most people use them incorrectly, and the studies backing them have real limitations.

Below, I’ll explain how first-stage retrieval works and why it still matters, what the research on content scoring tools actually shows — and doesn’t show — and most importantly, how to use these tools to produce content that earns its way into the candidate set without wasting time chasing a perfect score.

How first-stage retrieval works and why content tools map to it

Best Matching 25 (BM25) is the retrieval function most commonly associated with Google’s first-stage system. 

Nayak’s testimony described the mechanics it formalizes: an inverted index that walks postings lists and scores topicality across hundreds of billions of indexed pages, narrowing the field to tens of thousands of candidates in milliseconds. 

Here’s what matters for content creators:

  • Term frequency with saturation: The first mention of a relevant term captures roughly 45% of the maximum possible score for that term. Three mentions get you to about 71%. Going from three to thirty adds almost nothing. Repetition has steep diminishing returns.
  • Inverse document frequency: Rare, specific terms carry more scoring weight than common ones. “Pronation” is worth roughly 2.5 times more than “shoes” in a running shoe query because fewer pages contain it.
  • Document length normalization: Longer documents get penalized for the same raw term count. Scoring functions like this essentially measure term density relative to document length, which is why every content tool tracks it.
  • The zero-score cliff: If a term doesn’t appear in your document at all, your score for that term is exactly zero. Not low. Zero. You’re invisible for every query containing it.

That last point is the single most important reason content optimization tools have value. If you write a comprehensive rhinoplasty article but never mention “recovery time,” you score zero for that entire cluster of queries, regardless of how good the rest of your content is. 
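The saturation curve and the zero-score cliff both fall out of the BM25 formula itself. Here's a toy scorer using the conventional parameter values (k1 = 1.2, b = 0.75); this is a simplified sketch for intuition, not Google's production system, so treat the exact numbers as illustrative only:

```python
import math

def bm25_term_score(tf, doc_len, avg_doc_len, df, num_docs, k1=1.2, b=0.75):
    """One term's contribution to one document's BM25 score."""
    if tf == 0:
        return 0.0  # the zero-score cliff: an absent term contributes nothing
    idf = math.log((num_docs - df + 0.5) / (df + 0.5) + 1)   # rare terms weigh more
    norm = k1 * (1 - b + b * doc_len / avg_doc_len)          # length normalization
    return idf * tf * (k1 + 1) / (tf + norm)                 # saturates as tf grows

# With doc_len == avg_doc_len, the term-frequency component reduces to tf / (tf + k1):
for tf in (1, 3, 30):
    print(tf, round(tf / (tf + 1.2), 2))  # fraction of the maximum possible score
# prints: 1 0.45, 3 0.71, 30 0.96 -- steep diminishing returns
```

The one-mention-captures-45%, three-mentions-reach-71% figures cited above come directly from this tf / (tf + k1) curve with the standard k1 = 1.2.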

Google has systems like synonym expansion and Neural Matching — RankEmbed — that can supplement lexical retrieval and surface additional documents. But counting on those systems to rescue a page with vocabulary gaps is a risky strategy when you can simply cover the term.

After first-stage retrieval, the pipeline gets progressively more expensive and more sophisticated. RankEmbed adds candidates keyword matching missed. Mustang applies roughly 100+ signals, including topicality, quality scores, and NavBoost — accumulated click data over 13 months, described by Nayak as “one of the strongest” ranking signals. 

DeepRank applies BERT-based language understanding to only the final 20 to 30 results because these models are too expensive to run at scale. The practical implication is clear: no amount of authority or engagement signals helps if your page never passes the first gate. Content optimization tools help you get through it. What happens after is a different problem.

What the research on content tools actually shows

Three major studies have examined whether content tool scores correlate with rankings: Ahrefs (20 keywords, May 2025), Originality.ai (~100 keywords, October 2025), and Surfer SEO (10,000 queries, July 2025). All found weak positive correlations in the 0.10 to 0.32 range.

A 0.24 to 0.28 correlation is actually meaningful in this context. But these numbers need serious qualification. Every study was conducted by a vendor, and in every case, the vendor’s own tool performed best. 

No study controlled for confounding variables like backlinks, domain authority, or accumulated click data. The methodology is fundamentally circular: the tools generate recommendations by analyzing pages that already rank in the top 10 to 20, then the studies test whether pages in the top 10 to 20 score well on those same tools.

The real question — whether following tool recommendations helps a new, unranked page climb — has never been rigorously tested. Clearscope’s Bernard Huang put it directly: “A 0.26 correlation is not the brag they think it is.” 

He’s right. But a weak positive correlation is exactly what you’d expect if these tools solve the retrieval problem — getting into the candidate set — without solving the ranking problem — beating competitors once there. Understanding that distinction is what makes these tools useful rather than misleading.

Why not skip these tools altogether?

Expert writers are terrible at predicting how their audience actually searches. MIT Sloan’s Miro Kazakoff calls it the curse of knowledge. Once you know something, you forget what it was like before you knew it. 

Clearscope’s case study with Algolia illustrates the problem precisely. Algolia’s writers were technical experts producing genuinely excellent content that sat on Page 9. The problem wasn’t quality. The team was using internal jargon instead of the language their audience actually typed into Google. 

After adopting Clearscope, their SEO manager Vince Caruana said the tool helped the organization “start writing for our audience instead of ourselves” by breaking out of internal vocabulary. Blog posts moved from Page 9 to Page 1 within weeks. Not because the writing improved, but because the vocabulary finally matched search behavior.

Google’s own SEO Starter Guide acknowledges this dynamic, noting that users might search for “charcuterie” while others search for “cheese board.” Content optimization tools surface that gap by showing you the actual vocabulary of pages that have already demonstrated retrieval success. 

You can do everything a tool does manually by reading top results and noting common themes, but the tools automate hours of SERP analysis into minutes. At $79 to $399 per month, the investment is justified when teams publish frequently in competitive niches or assign work to freelancers lacking domain expertise. For a solo blogger publishing once or twice a month, manual analysis works fine.

What about AI-powered retrieval?

Dense vector embeddings are the same core technology behind LLMs and AI-powered search features. They compress a document into a fixed-length numerical representation and can match semantically similar content even without shared keywords. Google uses them via RankEmbed, but they supplement lexical retrieval rather than replace it.

The reason is computational: A 768-dimensional embedding can preserve only so much information, and research from Google DeepMind’s 2025 LIMIT paper showed that single-vector models max out at roughly 1.7 million documents before relevance distinctions break down — a small fraction of Google’s index. Multiple studies, including findings on the BEIR benchmark, show hybrid approaches combining BM25 with dense retrieval outperform either method alone.

The bottom line for practitioners is clear: The AI layer matters, but it sits lower in the pipeline, and the traditional retrieval stage your content tools map to still does the heavy lifting at scale.
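As an illustration of how a lexical ranking and a dense ranking can be combined, reciprocal rank fusion (RRF) is one widely used technique from the information retrieval literature. This is a generic sketch, not a claim about how Google actually merges RankEmbed candidates with lexical retrieval; the document IDs and k = 60 constant are the conventional defaults:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of doc ids into one ranking.

    Each list is ordered best-first; a document absent from a list simply
    contributes nothing, and k=60 is the customary RRF damping constant.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Toy example: the lexical and dense retrievers partially disagree.
lexical = ["doc_a", "doc_b", "doc_c"]   # exact-keyword matches
dense   = ["doc_d", "doc_b", "doc_a"]   # semantic matches, no shared keywords needed
print(reciprocal_rank_fusion([lexical, dense]))
# prints: ['doc_a', 'doc_b', 'doc_d', 'doc_c']
```

Documents that appear in both lists get boosted, which is the intuition behind the hybrid results on benchmarks like BEIR: each retriever covers the other's blind spots.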

How to actually use content scoring tools

This is where most guidance on content tools falls short. The typical advice is “use Surfer/Clearscope, get a high score, rank better.” 

That misses the point entirely. Here’s a framework built on how these tools actually intersect with Google’s retrieval mechanics.

Prioritize zero-usage terms over everything else

The highest-leverage action these tools identify is a term with zero mentions in your content. That’s a term where your retrieval score is literally zero, and you’re invisible for every query containing it. Going from zero to one mention is the single most impactful edit you can make. Going from four mentions to eight is nearly worthless because of the saturation curve.

When reviewing tool recommendations, filter for terms you haven’t used at all. Clearscope’s “Unused” filter does this explicitly. 

Ask yourself: Does this missing term represent a subtopic my audience would expect me to cover? If yes, work it in naturally. If the tool suggests a term that doesn’t fit your angle — a beginner’s guide doesn’t need advanced technical terminology — skip it. 

A high score achieved by forcing irrelevant terms into your content is worse than a moderate score with genuinely useful writing. As Ahrefs noted in its 2025 study, “you can literally copy-paste the entire keyword list, draft nothing else, and get a high score.” That tells you everything about the limits of chasing the number.
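The zero-usage check itself is simple enough to sketch in a few lines. This is a crude approximation of what the tools automate — real products add stemming, stopword filtering, multi-word phrases, and TF-IDF weighting — and the function name and sample text here are purely illustrative:

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase a page and return its set of alphabetic terms."""
    return set(re.findall(r"[a-z]+", text.lower()))

def zero_usage_terms(draft, competitor_pages, min_pages=2):
    """Terms that at least `min_pages` competitor pages use but the draft
    never mentions -- the highest-leverage gaps, per the saturation curve."""
    draft_terms = tokenize(draft)
    counts = Counter(t for page in competitor_pages for t in tokenize(page))
    return sorted(t for t, n in counts.items()
                  if n >= min_pages and t not in draft_terms)

draft = "Our rhinoplasty guide covers cost, surgeons, and results."
competitors = [
    "Rhinoplasty recovery time, swelling, and cost explained.",
    "What to expect: rhinoplasty recovery time and anesthesia.",
]
print(zero_usage_terms(draft, competitors))
# prints: ['recovery', 'time'] -- subtopics the draft scores zero for
```

Requiring a term to appear on multiple competitor pages is the cheap filter against one outlier page skewing the list; it is the same reason to exclude high-authority outliers from the analysis, as discussed below.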

Be selective about which competitor pages you analyze

Default settings on most tools pull from the top 10 to 20 ranking pages, which frequently includes Wikipedia, major media outlets, and enterprise sites with overwhelming domain authority. These pages often rank despite their content, not because of it. Their term patterns reflect authority advantage, not content quality, and they’ll skew your recommendations.

A better approach: Look for pages that rank for a high number of organic keywords on mid-authority domains. 

Ahrefs’ data shows the average page ranking No. 1 also ranks in the top 10 for nearly 1,000 other keywords. A page ranking for 500 keywords on a DR 35 site has demonstrated broad retrieval success through vocabulary and topical coverage, not just backlinks. Those pages contain term patterns proven effective across hundreds of separate retrieval events, not just one. 

In most tools, you can manually exclude specific URLs from competitor analysis. Remove the Wikipedia pages, the Amazon listings, and any high-authority site where you know authority is doing the work. What’s left gives you a much cleaner picture of what content actually needs to include.

Use tools during research, not during writing

The worst workflow is writing with the scoring editor open, watching your number tick up in real time. That pulls your attention toward keyword insertion instead of communicating expertise. Practitioners reporting the worst experiences with these tools tend to be the ones writing to a live score.

The better workflow: Run the tool first. Review the term list. Identify gaps in your outline, especially terms with zero usage that represent subtopics you should cover. Then close the tool and write for your reader. 

Run it again at the end as a sanity check. Did you miss any major subtopics? Add them. Is the score significantly lower than competitors? That’s information worth investigating. But your job is to build the best page on the internet for this topic, not to match a number.

Understand that content is one player in the game

NavBoost, RankEmbed, PageRank-derived quality scores, site authority, click data, and engagement signals all operate on the candidate set that first-stage retrieval produces. Content optimization gets you through the gate. It doesn’t win the race. 

If you optimize a page, push the score to 90, and don’t see ranking improvements, that doesn’t mean the tool failed. It likely means the other ranking factors — backlinks, domain authority, and click signals — are doing more work for your competitors than content alone can overcome.

This is especially important when scoping on-page optimization projects. Be honest about what content changes can and can’t accomplish. If a page is on a DR 15 domain competing against DR 70+ sites, perfect content optimization is necessary but probably not sufficient. 

When a client asks why they’re not ranking after you pushed their score to 95, the answer shouldn’t be “we need more content.” It should be a clear explanation of which part of the problem content solves — retrieval — which parts it doesn’t — authority, engagement, brand — and what the next strategic move actually is.

Focus on going beyond, not just matching

The philosophy behind these tools — structure your content after what top results cover — is sound. You need to demonstrate topical relevance to enter the candidate set. But the goal isn’t to produce another version of what already exists.

The pages that rank broadly, the ones that show up for hundreds or thousands of keywords, consistently do more than match the competitive baseline. They add original research, practitioner experience, specific examples, or angles the existing results don’t cover.

Surfer SEO’s December 2024 study supports this. It measured “facts coverage” across articles and found that top-performing content by keyword breadth had significantly higher coverage scores than bottom performers.

The content that ranks for the most queries doesn’t just include the right terms. It includes more information, more specifically. Use the tool to establish the floor of topical coverage. Then build the ceiling with value the tool can’t measure.

A note on entities

Google’s Knowledge Graph contains an estimated 54 billion entities. Entity understanding becomes most powerful in the later ranking stages where BERT and DeepRank process final candidates. 

Some content tools are starting to incorporate entity analysis, but even the best versions present entities as flat keyword lists, missing the relationships between entities that Google’s systems actually evaluate. 

Knowing that “Dr. Smith” and “rhinoplasty” appear on your page is different from understanding that Dr. Smith is a board-certified surgeon with published research at a specific institution. That relational depth is what Google processes, and no content scoring tool currently captures it. 

Treat entity coverage as an additional layer beyond what keyword-focused tools measure, not a replacement for the fundamentals.

Retrieval before ranking

Content optimization tools work because they’ve reverse-engineered the vocabulary of the retrieval stage. That’s a less exciting claim than “they’ve cracked Google’s algorithm,” but it’s the honest one, and it’s supported by what the DOJ trial revealed about Google’s infrastructure.

Use these tools to identify missing terms and subtopics. Be skeptical of exact frequency targets. Exclude high-authority outliers from your competitor analysis. Prioritize zero-usage terms over further optimization of terms you’ve already covered. 

Understand that a perfect content score addresses one stage of a multi-stage pipeline and use the competitive baseline as your floor, not your ceiling. The content that ranks the broadest isn’t the content that best matches what already exists. It’s the content that covers what already exists and then goes further.

SerpApi moves to dismiss Google scraping lawsuit

Bot detection maze

SerpApi is asking a federal court to dismiss Google’s lawsuit, arguing the company is misusing copyright law to restrict access to public search results.

  • The motion was filed Feb. 20, according to a blog post by SerpApi CEO and founder Julien Khaleghy.
  • Google sued SerpApi in December, alleging it bypassed technical protections to scrape and resell content from Google Search.

The details: SerpApi argues Google is improperly invoking the Digital Millennium Copyright Act (DMCA). According to Khaleghy:

  • The DMCA protects copyrighted works, not websites or ad businesses.
  • Google doesn’t own the underlying content displayed in search results.
  • Accessing publicly visible pages isn’t “circumvention” under the statute.

Google’s complaint alleged SerpApi:

  • Circumvented bot-detection and crawling controls.
  • Used rotating bot identities and large bot networks.
  • Scraped licensed content from Search features, including images and real-time data.

SerpApi said it doesn’t decrypt systems, disable authentication, or access private data. Khaleghy said SerpApi retrieves the same information available to any user in a browser, without requiring a login.

Khaleghy also argued Google admitted its anti-bot systems protect its advertising business — not specific copyrighted works — which he said undermines the DMCA claim.

SerpApi cites the Ninth Circuit’s hiQ v. LinkedIn decision warning against “information monopolies” over public data. It also cites the Sixth Circuit’s Impression Products v. Lexmark ruling to argue that public-facing content can’t be shielded by technical measures alone.

Catch up quick: The lawsuit follows months of escalating legal fights over scraping and AI data use.

  • Oct. 22: Reddit sued SerpApi, Perplexity, Oxylabs, and AWMProxy in federal court, alleging they scraped Reddit content indirectly from Google Search and reused or resold it. Reddit claimed the companies hid their identities and scraped at “industrial scale.” Reddit said it set a “trap” post visible only to Google’s crawler that later appeared in Perplexity results. Reddit is seeking damages and a ban on further use of previously scraped data.
  • Oct. 29: SerpApi said it would “vigorously defend” itself, calling Reddit’s language “inflammatory” and arguing public search data should remain accessible.
  • Dec. 19: Google sued SerpApi, alleging it bypassed security protections, ignored crawling directives, and scraped licensed Search content for resale. SerpApi responded that it operates lawfully and that accessing public search data is protected by the First Amendment.

By the numbers: SerpApi claims that, under Google’s interpretation of the DMCA, statutory damages could theoretically total $7.06 trillion — a figure it said exceeds U.S. GDP. The number reflects SerpApi’s calculation of potential per-violation penalties, not an actual damages demand.

What’s next. The case now moves to the court’s decision on whether Google’s claims can proceed.

Why we care: The outcome could reshape how SEO platforms, AI tools, and competitive intelligence software access SERP data. A win for Google could make third-party search data harder or riskier to obtain. A win for SerpApi could strengthen arguments that publicly accessible search results can be scraped and collected.

The blog post. Google v. SerpApi: We’re filing a Motion to Dismiss. Here’s why we’re in the right.

Dig deeper. Inside SearchGuard: How Google detects bots and what the SerpAPI lawsuit reveals

How to Create a Wikipedia Page for Your Company

Wikipedia is a fascinating experiment. It’s a community-built encyclopedia that’s always in motion. It runs on volunteer energy and openly shared infrastructure, and it’s closer to an open-source project in how it’s built than a traditional encyclopedia book. Anyone can write, edit, and debate what belongs on a page.

And that’s the twist. The “truth” on Wikipedia isn’t handed down by a single editor or community member. It’s negotiated in public, guided by community standards, citations, and a whole lot of conversation. Contributors don’t so much control a subject’s story as they continually test it. They’re constantly asking questions: What can we verify? What deserves weight? What’s missing?

When you read a Wikipedia article, you’re seeing a current snapshot of a living, evolving community decision.

This whole experiment has scale, too. As of February 6, 2026, the English Wikipedia had 7.13 million articles, and the project spanned more than 340 languages.

If you’re thinking about creating a Wikipedia page for your company, it helps to know what you’re signing up for. Wikipedia isn’t a marketing channel, and it isn’t designed for companies to shape their narrative. 

It’s designed to summarize what independent, reliable sources have already said about a company, so not every organization qualifies for a stand-alone article. Wikipedia cautions that only a small percentage of organizations meet the requirements for an article in the first place.

The easiest way to orient yourself with the platform is to keep Wikipedia’s “five pillars” top of mind. Wikipedia is, first and foremost, an encyclopedia. It aims for a neutral point of view, the content is free for anyone to use and edit, editors are expected to be civil, and there are no hard-and-fast rules. It’s just policies and guidelines applied with unbiased judgment.

If your company is genuinely notable by Wikipedia’s standards and you’re willing to play by its guidelines, there’s a real visibility upside in a solid, well-sourced page that holds up over time.

Key Takeaways

  • Wikipedia isn’t for marketing. If a Wikipedia page reads like company positioning, a feature brochure, or a pricing page, it’ll get rejected, reverted, or flagged. Even if other company pages “get away with it,” focus on a deeply researched, informative draft that demonstrates strong notability in Wikipedia’s eyes. 
  • Notability = independent coverage. You need multiple strong secondary sources (real reporting with editorial standards). Press releases, paid placements, niche trade mentions, and contributor “interviews” don’t hold up.
  • Sources drive the outline (and the page). Build your outline from what your credible secondary sources already cover. Possible sections could include a lead, history, high-level operations, leadership, or controversies, if documented. Each company’s outline may look different depending on what information can be strongly sourced. If you can’t source a section cleanly, it doesn’t belong.
  • Use Wikipedia’s Articles for Creation (AfC) process to avoid conflict of interest (COI) roadblocks. If you’re connected to a company or paid to write a Wikipedia page for them, you must disclose it and lean on the AfC process instead of directly pushing a company page live.
  • Getting published isn’t the finish line. Volunteers continuously review pages. Expect ongoing edits, scrutiny, and occasional challenges, so monitor a live page and keep it updated with strong, independent citations.

What Are the Benefits of Creating a Wikipedia Page?

The most significant benefit of Wikipedia is its sheer size and reach. It is one of the most visited websites in the world, averaging more than 1.1 billion unique visitors per month.

In addition to the size of its audience, the platform offers other benefits to marketers and company owners:

  • Credibility via independent validation (earned, not claimed): A live Wikipedia page signals that reliable, third-party sources have covered your organization in a meaningful way. For journalists, partners, investors, and enterprise buyers, this can reduce skepticism during research.
  • Search and AI visibility (off-page, long-term): Wikipedia tends to surface prominently in search results and is commonly referenced by knowledge systems. A well-sourced page can support progress in how your company appears in search features, AI overviews (AIOs), and large language model (LLM) output, based on what independent sources say, not what a company wants to say.
  • A neutral orientation page for readers: Wikipedia’s format helps readers quickly understand a company’s basics, including history, products or services, leadership, milestones, and context. The tradeoff is accessible neutrality. Anything included needs support from reliable secondary sources, and promotional language rarely lasts.
  • Clarity and disambiguation: If your name overlaps with other companies, or your story includes mergers, rebrands, or multiple founders, Wikipedia can help people land on the right entity and timeline.
  • A durable reference hub: A good Wikipedia page often becomes a stable directory of the strongest independent sources about you, such as press, books, and other reputable coverage, so readers can verify details without relying on your website alone.
  • Consistency across the web (a quiet multiplier): Wikipedia and related knowledge sources are reused in many downstream places. When the facts are clean, cited, and consistent, it can improve how your company is represented across third-party profiles and information panels over time.

A Wikipedia page is rarely a conversion engine, and it isn’t a place to “own” your story. The value is credibility and discoverability that can compound, but benefits can vary based on the strength of independent coverage and ongoing community scrutiny.

Below, we’ll cover the 10 steps on how to create a Wikipedia page, as well as considerations to keep in mind.

1. Check to See If Your Company is a Good Fit for a Wikipedia Page

Before you think about how to create a Wikipedia page for your company, you need to answer one question:

Would Wikipedia editors consider your company “notable”?

On Wikipedia, “notability” has nothing to do with how compelling your company story is. It means there’s enough independent, reliable coverage about your company that an article can be written from what third parties have already published, without filling in gaps with interpretation, insider knowledge, or marketing claims.

This is also where a lot of brand teams get tripped up. Again, Wikipedia isn’t a marketing channel. It’s not a place to shape messaging or control a narrative. If the only story you can tell is the one you want to tell, the page will be declined during initial submission review or deleted later.

What Notability Actually Looks Like

A company is usually considered notable when it receives significant coverage in multiple reliable sources independent of the company. “Significant coverage” is the key phrase here. Editors are looking for articles that discuss your company in real depth, not quick mentions or short blurbs.

A helpful way to think about it is this: if you can’t outline a neutral article using independent secondary sources alone, you probably don’t have enough notability yet.

Editors typically want coverage that checks these boxes:

  • Independent: Truly third-party reporting. Not press releases, paid placements, sponsored posts, advertorials, partner blogs, or content your PR team arranged. If a piece exists because the company made it happen, editors tend to discount it.
  • Significant: More than a passing mention. A funding announcement, product launch blurb, or event listing can be real coverage and still not be enough. The strongest sources are the ones that explain context, impact, history, or controversy in detail.
  • Secondary: Sources that analyze, summarize, or report on the company from the outside. Primary sources like your website, blog, press page, or social channels can support basic facts in limited cases, but they do not establish notability.
  • Reliable: Publications with editorial oversight and a reputation for accuracy. Big-name outlets can help, but they are not the only option. Trade and industry publications can be excellent supporting sources when they have real editorial standards and provide in-depth coverage, but you can rarely use them to establish notability.
  • Multiple and sustained: A single great source is rarely enough on its own. Editors want to see more than one strong source, ideally across time, so the page can hold up after more people review it.
  • Neutral tone: Even when a source is independent, it can still be weak if it reads like promotion. Glowing profiles, “thought leadership” posts, or contributor content that feels like marketing often carry less weight than staff-reported coverage.

One nuance that matters a lot in practice is that “lots of links” does not equal notability. Companies can appear all over the internet through routine announcements and PR-driven writeups and still fail Wikipedia’s notability test.

What matters is whether independent sources have treated the company as worthy of real, substantive coverage. This is also why trade magazines and industry publications often can’t be used as reliable coverage to establish notability: many industry leaders also run trade organizations, creating a conflict of interest (COI, in Wikipedia’s terms) if their trade publication were to cover their own company or the companies of friends or contributors. 

If your company does not meet this bar yet, that’s not a judgment on it. It just means a Wikipedia article is likely premature, and the better move is to wait until there is enough independent coverage to support a neutral, well-sourced page.

A Note on Conflict of Interest (COI)

If you’re writing about your own company (or you’re paid to write for a company), Wikipedia considers that a conflict of interest (COI). That doesn’t automatically ban you from participating, but it does change how you should approach it.

When creating a new page, submit it to Articles for Creation (AfC) to ensure community editors review it properly. 

When editing an existing page, draft your changes in a Sandbox (a personal workspace where you can safely draft and refine changes to an article before submitting them for public review). Then post your Sandbox draft on the live article’s Talk page, along with a comment asking community members to review and collaborate on the edits you’ve suggested. Once a community consensus is reached, you can push those edits or additions live.

An example of a sandbox page on Wikipedia.

Source: https://courses.shroutdocs.org/tutorials/editing-your-wikipedia-sandbox/

It’s also a good idea to disclose your COI connection. Your disclosure should be one of the following:

  • A statement on your User page.
  • A statement on the Talk page accompanying any paid contributions.
  • A statement in the edit summary accompanying any paid contributions.
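If your contributions are paid, a common way to make the User page disclosure is Wikipedia’s {{paid}} template. As a rough sketch (the username and employer below are placeholders; check the template’s current documentation for exact parameters):

```wikitext
{{paid|user=ExampleUsername|employer=Example Corp}}
```

Placed on your User page, this tells other editors up front that your contributions about that company are paid, which is exactly the transparency the COI guidelines ask for.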

Avoid directly creating or heavily editing an article and stick to Wikipedia’s COI process to request edits for independent editors to review.

Again, this is about expectations. If your team is hoping to just write a draft and hit “publish,” like you do with a blog, you’re going to have a bad time. But if you do have strong, independent coverage from credible outlets, you’ve got a real shot and can move to the next step.

2. Create a Wikipedia Account

Creating an account is a practical next step if you plan to contribute to Wikipedia. While you don’t need an account to read Wikipedia (or even to edit some pages), registering gives you features that make collaboration and transparency easier.

With an account, you can:

  • Create a User page (a simple profile and a place to draft in a Sandbox).
  • Use your Talk page to communicate with other editors.
  • Build an edit history tied to your username (helpful for credibility and continuity).
  • Work through article creation more smoothly, including drafting and submitting via AfC.

If you add images to your User page, make sure they’re properly licensed. Wikipedia generally accepts only freely licensed uploads.

To register, use Wikipedia’s account creation form.

The Create Account Page on Wikipedia.

After that, you’re set up to start editing, drafting, and participating in the community.

3. Contribute to Existing Pages

Quick reminder from earlier: If you’re connected to the company, you’re dealing with a COI. That’s why Wikipedia prefers that company pages undergo independent review before publication.

As a newbie, a good way to get comfortable on Wikipedia is to start by editing existing articles that have nothing to do with your organization. When you spend time improving clarity, tightening wording, and backing up facts with solid sources, you learn how Wikipedia works, and you build a history of helpful contributions.

As you do that, your account may become autoconfirmed. That usually happens automatically once your account is at least four days old and you’ve made at least 10 edits. Autoconfirmed status grants a few basic permissions, such as creating pages and editing some semi-protected articles.

An Autoconfirmed Wikipedia account.

Here’s the key point, though: “Autoconfirmed” does not change your COI situation. Even if you can technically publish a page directly, a company-related article should still be written as a draft and submitted through AfC. This is the step that gets you the independent review Wikipedia expects, and it’s the safest, most appropriate route for a company page.

4. Conduct Research and Gather Sources

Before you write a single line of your Wikipedia draft, do the homework. Wikipedia doesn’t reward unsourced storytelling. The platform cares about verifiability, meaning every meaningful claim must be backed by a reliable secondary source that an editor can check. Your company story can play well on Wikipedia, as long as there’s enough reliable evidence to back it up.

This is where most company pages fall apart. Not because the company isn’t real, but because the sources are thin, biased, or too “inside baseball.”

Why sources matter so much on Wikipedia

Wikipedia runs on two big rules:

  • No original research: You can’t “introduce” new facts, even if they’re true, without proper citation. Which leads to the next point…
  • Cite everything that matters: If it’s notable, controversial, or specific (revenue, awards, history, key dates, acquisitions), you need a secondary source to back it up.

Primary vs. secondary vs. tertiary sources (and how Wikipedia treats them)

Wikipedia breaks sources down into three categories: primary, secondary, and tertiary. Here is a look at each and how they play into the strength of your Wiki page:

  • Primary sources (you): Your website, press releases, investor decks, published reports, and regulatory filings (e.g., with the Securities and Exchange Commission (SEC)).
    • Upside: Can work for basic, factual details (launch dates, historical milestones, etc.).
    • Downside: Biased by default. Editors won’t accept these for “notability” or big claims like “industry leader.”
  • Secondary sources (best for Wikipedia): Independent journalism, books, academic analysis, reputable profiles.
    • Upside: Shows the world noticed you. This is the backbone of the strongest pages.
    • Downside: Harder to earn, and fluff pieces don’t carry much weight.
  • Tertiary sources: Encyclopedias, databases, reputable directories.
    • Upside: Useful for quick confirmation and context.
    • Downside: Often too shallow to prove notability on their own.

Overall, secondary sources are the most important to your success. By their nature, they summarize what independent experts have said about a company or topic, which is exactly what Wikipedia wants reflected in its own voice. Relying heavily on these gives you a strong case for notability in Wikipedia’s eyes.

What Makes a Good Wikipedia Source?

Good Wikipedia sources cover topics while maintaining editorial standards. Think major publications, local newspapers of record, respected business outlets, and independent industry analysis. If you’re short on that kind of coverage, that’s usually a PR problem, not a Wikipedia problem. Strengthening your digital PR (DPR) efforts can help you earn credible mentions that hold up under editor scrutiny.

But DPR for a Wikipedia use case must be handled carefully. What tends to work is focusing on independent coverage first. This looks like pitching credible story angles to journalists and outlets that genuinely cover your industry, and accepting that they may say no, or cover the story in a way you can’t control.

When an outlet does publish real, editorial reporting, that’s the kind of secondary source Wikipedia editors are more likely to accept.

Reliable Sources at a Glance

After seeing what Wikipedia editors consider reliable sources, you might be wondering where to find sources that meet all of those criteria. It helps to look at real-world examples of which source types work best for company pages. Here are some of the types of sites you can choose from.

For company pages, the sources that matter most are the ones that provide significant, independent coverage; the kind that demonstrates notability and gives editors something substantial to cite.

  • Major national/international newsrooms (strongest for notability + facts): Reuters, AP, BBC, Financial Times, The Wall Street Journal, Bloomberg, The New York Times, The Washington Post, NPR (news reporting over opinion).
  • Reputable business and investigative reporting: Deep dives and investigations from established outlets (e.g., ProPublica) can be highly valuable, especially for controversies, legal issues, and accountability reporting.
  • High-quality trade press with editorial oversight (context-dependent): Useful for industry coverage when it’s independent and more than a product announcement or reposted PR. You cannot use trade press as a primary indicator of notability, though.
  • Books from reputable publishers: Especially helpful for founders, company history, and industry impact when written by independent authors and published by established presses.
  • Government and major non-governmental organization (NGO) reports (within remit): Strong for regulatory actions, enforcement, public contracts, or formal assessments (but not a substitute for independent secondary coverage).
  • Medical/health claims (only when relevant): For biomedical statements, prioritize high-quality secondary sources like systematic reviews and authoritative guidelines (MEDRS standard), not individual studies or marketing claims.

Check out Wikipedia’s Perennial Sources list, which records the community’s track record with frequently discussed sources, both reliable and unreliable, based on their fact-checking and editorial standards. But remember, these assessments are still contextual; the list isn’t a simple whitelist.

Non-reliable Sources

To paint a clearer picture, here are some of the sources you should avoid:

  • Self-published/user-generated content (UGC): Personal blogs, Substack/Medium posts, self-hosted sites, most social media. 
  • Press releases/advertorial: Company press rooms, PR wires; these are fine to state that an announcement occurred, not to establish third-party facts or notability. 
  • Sensational/tabloid sources: Outlets known for gossip/sensationalism; poor for verifying facts. 
  • Anonymous forums and crowdsourced threads: Message boards, comment sections, most Reddit/4chan/Discord posts. 

Wikipedia views these types of sources as weaker because they aren’t research-backed, trustworthy, or credible. The common thread is that they undergo minimal editorial oversight (if any) or, in Reddit’s case, most of the content is UGC and self-published. 

5. Research Your Competition

Like many things when it comes to Wikipedia, researching your competitors is fine if you do it the right way. As you start your research, view your competitors’ pages through the lens of what Wikipedia editors ultimately want. 

The challenge here is that Wikipedia isn’t perfectly consistent. Some company pages are old, lightly monitored, or haven’t been updated to match today’s standards.

When someone says, “But other pages include feature lists and product tier breakdowns,” that doesn’t really matter. Editors don’t treat “other pages do it” as a justification. They judge your page on whether it reads like an encyclopedia entry and whether it’s backed by independent, reliable sources.

General Competitor Research Rules

Use competing Wiki pages to answer questions like:

  • What’s the typical structure for a company page in your category? Take note of the typical section titles. (We’ll dive into this next.) 
  • What kind of claims survive without getting reverted? (Neutral, sourced, non-promotional.)
  • What sources are doing the heavy lifting on pages that stay live?

A “Wiki-safe” Research Method

Pick 3–5 competitors with live pages, then audit them like an editor would:

  1. Scan the citations first. Are they mostly independent, secondary news coverage, press releases/company sites, or paid placements?
  2. Check the tone. If it reads like a promotional brochure (feature-by-feature, pricing tiers, “best-in-class”), that’s a red flag, even if it hasn’t been removed yet.
  3. Look at the page history and Talk page. Lots of reverts, banners, or sourcing disputes usually mean the page is shaky.
  4. Note what’s missing. If competitors avoid detailed feature lists, that’s usually a sign that those details don’t belong on Wikipedia.

6. Create an Outline

Once you’ve got your sources, your outline has a starting point. The hard part is deciding what belongs.

On Wikipedia, an outline is not “everything you want to say.” It’s you making careful decisions about what independent, reliable sources have actually covered, what they have not covered, and what deserves space without turning the page into a brochure. That takes judgment, and it often takes multiple passes.

The mindset you want is simple: Wikipedia pages are built around what reliable secondary sources already said about the subject. Your outline is how you organize those sourced facts into a structure that editors recognize and are willing to review.

Start with the standard Wikipedia “shape”

Most company pages follow a formulaic layout:

  • Infobox (quick facts): Founded, founders, headquarters, industry, key people, website, and similar basics. Only include items you can verify.
  • Lead (opening summary): 2–4 neutral sentences explaining what the company is, where it’s based, what it does at a high level, and why it’s notable. This is not a tagline.
  • History: Founding, major milestones, expansions, acquisitions, funding or IPO, and major pivots, but only if independent sources cover them. Focus on events that third parties actually reported.
  • Operations/Business (optional, and only if sourced): What the company does at a high level and what markets it serves. Avoid feature-by-feature descriptions and pricing tiers.
  • Leadership/Ownership (optional): Only if reliable sources discuss executives, ownership changes, or governance in a meaningful way.
  • Reception/Controversies (only if they exist in sources): Reviews, notable criticism, legal issues, regulatory actions, all written neutrally and backed by sources.
  • See also / References / External links: References do the heavy lifting; external links are usually minimal (often just the official site).

An example company Wikipedia page.
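To make the infobox item above concrete, company articles typically use Wikipedia’s {{Infobox company}} template. A minimal sketch with hypothetical placeholder values follows; include only fields you can verify, and check parameter names against the template’s current documentation:

```wikitext
{{Infobox company
| name        = Example Corp
| type        = Private
| industry    = Software
| founded     = {{start date and age|2015}}
| founder     = Jane Smith
| hq_location = Seattle, Washington, U.S.
| website     = {{URL|example.com}}
}}
```

Keeping the infobox to sourced basics like these mirrors the rest of the outline: quick facts, not a brochure.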

Using Your Sources to Build the Outline

Start with your strongest independent secondary sources and work outward. As you read through them, you’re identifying what the coverage actually emphasizes.

As you review sources, pull out:

  • Events they cover (those become history sections)
  • Claims they support (those become lead and operations sections)
  • Any recurring themes across sources (those become section headings)

Each major section in your outline should be supported by multiple secondary sources, not a single mention. Also, keep an eye on the length as you draft. Wikipedia discourages overly long articles unless the amount of independent coverage truly warrants it. If a section or topic isn’t discussed in depth by reliable secondary sources, it usually doesn’t belong at length in the article.

If you focus on covering the topic from an encyclopedic angle and you leave out anything that feels like marketing, you will give your draft a much better chance of surviving review.

7. Write a Draft of Your Wikipedia Page

Take your time as you write a draft of your Wikipedia page from your outline. You want your content to be source-backed, thorough, thoughtful, and genuinely useful, giving readers the information they came for.

At this stage, it’s best to write your draft in a Wikipedia Sandbox. As mentioned earlier, this is a personal workspace where you can draft safely, revise freely, and share the link with others for informal feedback without accidentally publishing anything live.

While a Wikipedia page can support your broader visibility, the platform’s purpose is encyclopedic and impartial. Anything that reads as emotional, salesy, or promotional is likely to be flagged and can lead to rejection later in the process.

Aim for short, direct sentences that stick to verifiable facts. And those facts need strong secondary sources. For example, if you write, “Spot ran to the big oak tree yesterday,” that claim would need a source. Not just any source, but a credible, independent secondary source that Wikipedia considers reliable.

It’s also critical to remember you’re writing on behalf of Wikipedia. In other words, you’re writing in Wikipedia’s unbiased, impartial, and neutral voice.

Here are some examples to show what this looks like in practice:

Example 1: Product Description

  • Promotional: “XYZ Software is a revolutionary, industry-leading platform that empowers businesses to achieve unprecedented productivity gains. With its cutting-edge AI technology and intuitive interface, XYZ transforms the way teams collaborate, delivering exceptional results that exceed expectations.”
  • Neutral: “XYZ Software is a project management platform that combines task tracking, team messaging, and file sharing. The software is used by businesses to coordinate work across departments.[1][2]”

Example 2: Company History

  • Promotional: “Founded by visionary entrepreneur Jane Smith, the company quickly rose to prominence as a game-changer in the industry. Through relentless innovation and unwavering commitment to excellence, it has become the trusted choice for Fortune 500 companies worldwide.”
  • Neutral: “The company was founded in 2015 by Jane Smith in Seattle.[3] It launched its enterprise tier in 2019 and rebranded from ‘TaskFlow’ to its current name in 2021.[4][5]”

Wikipedia also defines “promotional” language more broadly than marketers do. It’s more than simply using words like “revolutionary” or “legendary.” Factually correct statements can still read as “promotional” to a Wikipedia editor if they follow certain patterns of structure and emphasis:

  • Long, comprehensive feature inventories.
  • Plan/tier breakdowns that resemble packaging (“Free vs. Premium vs. Enterprise”).
  • Performance claims that read like sales positioning.
  • Product-benefit phrasing stacked repeatedly (“includes tools for…,” “enables…,” “helps…”).
  • Details that feel like purchase guidance (pricing, quotas, storage limits, admin entitlements).

Let’s talk about specs and features for a second. If your company is well-known for a particular product or service, it can be tempting to include a specification or feature list on your Wikipedia page. Unfortunately, that can cause problems with Wikipedia for several reasons.

Here’s why:

  1. Wikipedia isn’t a manual or catalog: Wikipedia tries to avoid becoming vendor documentation. Specs and feature matrices belong on the company site, in the documentation center, in release notes, or on third-party comparison sites, not in an encyclopedia.
  2. Specs change constantly: Feature sets, tiers, storage limits, and admin/security capabilities change frequently. Wikipedia content must remain stable and verifiable over time. Highly granular spec content becomes outdated quickly and attracts disputes.
  3. It’s hard to verify neutrally: If the only source for a feature or tier is the vendor’s own site or press release, Wikipedia considers that primary sourcing; useful for limited factual verification, but not ideal for describing capabilities in detail or making value claims.
  4. “Undue weight” and imbalance: Even accurate feature lists can give a product more prominence than independent sources do. Wikipedia tries to reflect external coverage: if reliable third parties don’t treat a feature as notable, Wikipedia typically won’t either.

What a Company’s Wikipedia Draft Should Look Like

Given all of Wikipedia’s guidelines, it can be hard to picture what an acceptable draft looks like in practice. Here’s a brief rundown of what a solid draft should include when you’re done:

  • A clear, high-level description of what the company is (one paragraph, not a feature catalog).
  • A history/timeline of major milestones (launches, renames, major releases) backed by independent sources.
  • Widely covered integrations/partnerships only when reported by reliable third parties.
  • A short, selective “features” summary only for capabilities that independent sources treat as notable and cover in-depth.

8. Upload Your Page into the Article Wizard

Once your Sandbox draft is in good shape, move over to the Wikipedia Article Wizard. The Wizard is the guided tool that helps you move what you wrote from your Sandbox into Wikipedia’s Draft space, which is where new articles are typically prepared before they go live.

For company-related pages, the key takeaway is that the Wizard is the structured path to getting your draft into the right place so it can be submitted for independent review.

The Wikipedia Article Wizard confirming a page was uploaded.

9. Submit Your Article for Review

Now that your draft is in Draft space, you’re ready for the step that triggers formal evaluation by the community. Submit your draft through Articles for Creation by clicking “Submit for review.” This is when your draft enters the AfC queue, and a volunteer reviewer takes a look.

The timeline can range from a few weeks to a few months, depending on backlog and whether the reviewer requests changes. It’s also common for drafts to be declined at first, with feedback you’ll need to address before approval.

At NPD, we’ve found that sticking with AfC is the best practice for companies looking to go live. Even though autoconfirmed accounts may have the technical ability to publish directly, that path often creates more friction for company-related topics. AfC sets expectations for independent review from the start and helps reduce avoidable issues related to COI and other Wikipedia guidelines.

10. Continue Making Improvements

Once your page is accepted, the work is not really over.

Wikipedia is editable by anyone, so changes can happen at any time. Some edits will be helpful, some will be mistaken, and some may reflect a negative point of view. The best approach is to keep an eye on the page so you can understand what is changing and respond appropriately, usually by suggesting improvements on the Talk page or updating the article with strong, independent sourcing.

As the page gets more visibility and gains traction on Google and LLMs, focus on accuracy and neutrality rather than “updating marketing messaging.” Wikipedia is not the place for routine product updates, but it is the right place to reflect significant, well-covered developments when reliable third-party sources have written about them.

You should also plan for the possibility that your draft will be declined. That is common, especially for company-related topics. If it happens, do not get discouraged. Read the reviewer’s comments carefully, make the requested changes, and resubmit when you have addressed the specific issues that kept the draft from being accepted.

FAQs

Should I build a Wikipedia page for my company?

A Wikipedia page can be a meaningful credibility asset, but it isn’t a fit for every company. The deciding factor is whether there’s enough independent, reliable secondary coverage to support a neutral article. If you can’t outline the page using third-party sources alone, it’s usually too early.

If your company does qualify, the value tends to be indirect: stronger brand legitimacy, clearer “who you are” context in search results, and more consistent entity information across the web. It’s less about immediate conversions and more about long-term visibility and trust signals that can compound.

Is it hard to create a Wikipedia page for my company?

Yes. Creating, publishing, and maintaining a company page is challenging because Wikipedia is community-reviewed and built around strict expectations: neutral tone, verifiable claims, and high-quality sourcing. You also have to plan for ongoing edits and scrutiny after the page goes live.

The opportunity is achievable if you have strong independent coverage and treat the process as encyclopedic documentation rather than company messaging.

How do I know if my Wikipedia page will be published?

There’s no guaranteed way to know. Even well-prepared drafts can be declined, revised, and resubmitted, especially for company topics.

Your best indicators are practical: you have multiple independent sources with significant coverage, your draft reads neutrally (not like marketing), and you submit through the Articles for Creation (AfC) process so reviewers can evaluate it in draft space.

How long will my Wikipedia article be under review before publication?

Review time varies widely. Some drafts are reviewed quickly, but it’s also common for company-related submissions to take weeks (or longer) depending on backlog and how many revisions are needed. A decline doesn’t mean “never”; it usually means “not yet” or “needs stronger sourcing and a more neutral rewrite.”

Conclusion

If you’re looking to increase traffic, improve your search everywhere visibility, or build credibility, Wikipedia can be part of the equation. But it’s not a marketing channel, and it isn’t built for companies to shape their narratives. It’s a community-edited encyclopedia that summarizes what independent, reliable sources have already said about you.

Where Wikipedia can help is in discovery and trust signals. A stable, well-sourced page often shows up prominently for company and topic queries, and it can reinforce consistent “entity facts” that search engines and other knowledge systems use to understand companies. 

That’s also why Wikipedia often pairs well with entity SEO. When key details about your organization are documented consistently across reputable sources, your company is easier to interpret and surface accurately across platforms, including some LLM-style experiences. Results may vary based on implementation, the strength of independent coverage, and ongoing community review.

As you evaluate whether your company is a good fit for a Wikipedia page, keep in mind that the process is complicated, and it won’t be fully in your control. What matters most is having enough independent, reliable secondary coverage to justify a stand-alone article and being willing to follow Wikipedia’s COI expectations.


How to Build Audience Personas for Modern Search + Template

Search has changed, and so should your audience personas.

Your audience searches across Google, ChatGPT, Reddit, YouTube, and many other channels.

Knowing who they are isn’t enough anymore. You need to know how they search.

Search-focused audience personas fill gaps that traditional personas miss.

Think insights like:

  • Where this person actually goes for answers
  • What triggers them to look for solutions right now
  • Which proof points win their trust

And you don’t need months of research or expensive tools to build them.

An audience persona is a profile of who you’re creating for — what they need, how they search, and what makes them trust (or tune out). Done well, it aligns your team around a shared understanding of who you’re serving.


In this guide, I’ll walk you through nine strategic questions that dig deep into your persona’s search behavior. I’ve also included AI prompts to speed up your analysis.

They’ll help you spot patterns and synthesize findings without the manual work.

By the end, you’ll have a complete audience persona to guide your content strategy.

Free template: Download our audience persona template to document your insights. It includes a persona example for a fictional SaaS brand to guide you through the process.


1. Where Is Your Audience Asking Questions?

Answer this question to find out:

  • Where you need to build authority and presence
  • Which platforms to target for every persona
  • Which formats work well for each persona


Knowing where your persona hangs out tells you which channels influence their decisions.

So, you can show up in places they already trust.

It also reveals how they think and what will resonate with them.

For example, someone posting on Reddit wants honest advice based on lived experiences. But someone searching on TikTok wants visual content like tutorials or unboxing videos.

Where Your Audience Searches Reveals How They Think

How to Answer This Question

Start with an audience intelligence tool that lets you identify your persona’s preferred platforms and communities.

I’ll be using SparkToro.

Note: Throughout this guide, I’ll walk you through this persona-building process using the example of Podlinko, a fictional podcasting software. You’ll see every step of the research in action, so you can replicate it for your own business.


For this example, we’re building out one of Podlinko’s core personas: Marcus, a marketing professional on a one-person or small team, so he’s scrappy and in-the-weeds.

Pro tip: Start with one primary persona and build it completely before adding others. Focus on your most valuable customer segment (the one driving the highest revenue for your business).


In SparkToro, enter a relevant keyword that describes your persona’s professional identity or core interests.

This could be their job title, industry, or a topic they care deeply about.

I went with “how to start a podcast.” Marcus would likely search for this early in his journey.

SparkToro – How to start a podcast

The report gives a pretty solid overview of Marcus’s online behavior.

For example, Google, ChatGPT, YouTube, and Facebook are his primary research channels.

SparkToro – Audience Research

But it could be worth testing a few other platforms too.

Compared to the average user, he’s 24.66% more likely to use X and 12.92% more likely to use TikTok.

SparkToro – Social networks report

The report also tells me the specific YouTube channels where he spends time.

He’s watching automation, editing, and business tutorials.

SparkToro – YouTube Channels & Podcasts

He’s also active in multiple industry-related Reddit communities.

Maybe he’s posting, commenting, or even just lurking to read advice.

SparkToro – SubReddits

Since Marcus uses ChatGPT, I also did a quick search on this platform to see which sources the platform frequently cites.

I searched for some prompts he might ask, like “Which podcast hosting platforms should I use for marketing?”

If you see large language models (LLMs) repeatedly mention the same sources, they likely carry authority for the topic.

And by extension, they influence your persona’s research as well.

ChatGPT – Sources – Podcast hosting platforms

Compare these sources to the ones you identified earlier. If they match, you have validation.

If they’re different, assess which ones to add to your persona document.

Here’s how I filled out the persona template with Marcus’s search behavior:

Persona template – Search behavior

2. What Exact Questions Are They Asking?

Answer this question to find out:

  • What language to mirror in your content
  • How to structure content for AI visibility
  • What content gaps exist in your market


Your buyer persona’s language rarely matches marketing jargon.

Companies might talk about “podcast production tools” and “integrated workflows.”

But personas use more personal and specific language:

  • What’s the cheapest way to record remote podcasts?
  • How long does it take to edit a 30-minute podcast?

Knowing your audience’s actual questions reveals the gap between how you describe your solution and how they experience the problem.

And shows you exactly how to bridge it.

How to Answer This Question

Start by going to the platforms and communities you identified in Question 1.

Search 3-5 topics related to your persona.

Review the context around headlines, posts, and comments:

  • How they phrase questions (exact words matter)
  • What emotions do they express
  • What outcomes they’re trying to achieve

Pro tip: As you research, save persona comments, discussions, and reviews in full — not just snippets. You’ll analyze the same sources in Questions 3-5. But through different lenses (challenges, triggers, language patterns). Having everything saved means you won’t need to revisit platforms multiple times.


For example, I searched “how to start a podcast for a business” on Google.

Then, I checked People Also Ask for related questions Marcus might have:

PAA – How to start a podcast for a business

On YouTube, I searched “how to edit a podcast” and reviewed video comments.

Users asked follow-up questions about mic issues and screen sharing.

This gave me insight into language and questions beyond the video’s main topic.

YouTube – How to edit a podcast – Comments

In Facebook Groups, I found users asking questions related to their goals, constraints, and challenges.

It also provided the unfiltered language Marcus uses when he’s stuck.

Facebook – Podcasters on Facebook

Now, use a keyword research tool to visualize how your persona’s questions connect throughout their journey.

I used AlsoAsked for this task. But AnswerThePublic and Semrush’s Topic Research tool would also work.

For Marcus, I searched “Best AI podcasting editing software,” which revealed this path:

Which AI tool is best for audio editing? → Can I use AI to edit audio? → Which software do professionals use for audio editing? → How much does AI audio editor cost?

AlsoAsked – Best Podcast Software

It’s helpful to visualize how Marcus’s questions change as he progresses through his search.

Next, learn the questions your persona asks in AI search.

You’ll need a specialized tool like Semrush’s AI Visibility Toolkit for this task.

It tells you the exact prompts people use when searching topics related to your brand.

(And if your brand appears in the answers.)

If you don’t have a subscription, sign up for a free trial of Semrush One, which includes the AI Visibility Toolkit and Semrush Pro.

Since Podlinko is fictional, I used a real podcasting platform (Zencastr.com) for this example.

Semrush – Visibility Overview – Zencastr

This brand appears often in AI answers for user questions like:

  • What equipment do I need to create a professional podcast setup?
  • Can you recommend popular tools for managing and promoting online radio or podcasts?

Semrush – Visibility Overview – Zencastr – Performing Topics

You’ll also see citation gaps — questions where your brand isn’t mentioned. These reveal content opportunities.

For this brand, one gap includes:

“Which AI tools are best for recording, editing, and distributing an AI-focused podcast?”

Semrush – Topic Opportunities – Questions

After reviewing all the questions I gathered, I narrowed them down to the top 5 for the template:

Top 5 template questions

3. What Challenges Influence Their Search Behavior?

Answer this question to find out:

  • What constraints influence their decision-making process
  • How to anticipate objections before they arise
  • What kind of solutions your persona needs


Challenges are the ongoing issues driving your persona’s search behavior. These overarching problems shape their decisions to find a solution.

Understanding these challenges can help you:

  • Position your solution in the context of these pain points
  • Anticipate and address objections before they come up
  • Structure your campaigns to speak directly to their limitations

How to Answer This Question

Review the questions you collected in Question 2 to identify underlying pain points.

For example, this Facebook Group post contains some telling language for Marcus’s persona:

Facebook – Telling language for Marcus's persona

Specific phrases highlight ongoing challenges:

  • “Tech support is no help”
  • “Can’t find an editing software that consistently works”

Now, visit industry-specific review platforms.

Check G2, Capterra, Trustpilot, Amazon, Yelp, or another site, depending on your niche.

Look for reviews where people describe recurring frustrations.

Positive reviews may mention what drove a user to seek a new solution. For example, this one references poor audio and video quality:

G2 – Riverside – Review

Negative reviews reveal what users constantly struggle with.

Unresolved pain points often push people to find workarounds or alternatives.

This user noted issues with a podcasting tool, including loss of backups, unreliable tech, and more.

G2 – Riverside – Negative review

Pay close attention to the language people use. Word choice can signal underlying feelings and constraints.

When someone asks for the “easiest” and “most cost-effective” solution, they’re signaling:

  • Limited resources
  • Low confidence
  • Risk aversion

After reviewing conversations and communities, you’ll likely have dozens of data points.

Copy the reviews, questions, and phrases into an AI tool to identify your persona’s top challenges.

Use this prompt:

Based on these reviews and discussions, identify the five biggest challenges for this persona.

For each challenge, show:

(1) exact phrases they use to describe it

(2) what constraints make it harder (budget, time, skills)

(3) how it influences where and when they search.

Format as a table.

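If you'd rather see the pattern-finding step mechanically, here's a minimal Python sketch that tallies how often candidate challenge themes appear across your saved snippets. The reviews and keyword lists are hypothetical stand-ins; an LLM pass like the prompt above will catch nuances that simple keyword matching misses.

```python
from collections import Counter

# Hypothetical sample of saved review/discussion snippets (stand-ins for
# the real comments you collected in Question 2)
reviews = [
    "Tech support is no help and the editing software keeps crashing",
    "Can't find an editing software that consistently works",
    "Audio cleanup takes forever every single week",
    "Support never answers, and audio cleanup is painfully slow",
]

# Candidate challenge themes and the phrases that signal them
# (these keyword lists are illustrative assumptions, not a fixed taxonomy)
themes = {
    "unreliable software": ["crash", "consistently works", "doesn't work"],
    "poor support": ["support is no help", "support never answers", "tech support"],
    "slow editing workflow": ["takes forever", "painfully slow", "cleanup"],
}

def score_themes(texts, themes):
    """Count how many snippets mention each theme at least once."""
    counts = Counter()
    for text in texts:
        lowered = text.lower()
        for theme, phrases in themes.items():
            if any(p in lowered for p in phrases):
                counts[theme] += 1
    return counts

for theme, hits in score_themes(reviews, themes).most_common():
    print(f"{theme}: mentioned in {hits} of {len(reviews)} snippets")
```

The themes that surface most often become first-draft candidates for the template's challenge list.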

This analysis helped me identify Marcus’s recurring challenges:

Persona template – Challenges

4. What Triggers Them to Search Right Now?

Answer this question to find out:

  • What emotional and situational context to address in your content
  • How to structure content for different urgency levels
  • Which pain points to lead with


Search triggers explain why your audience is ready to take action.

But they’re not the same as challenges.

Challenges are ongoing constraints your persona faces. This could be a limited budget, small team, or skill gap.

Triggers are the specific events or goals that push them to act right now. Like a looming deadline or a competitor launching a podcast.

Understanding triggers helps you reach your persona when they’re most receptive.

Decoding Persona Search Triggers

How to Answer This Question

If you have access to internal data, start there.

Your sales and customer support teams can spot patterns that push prospects from browsing to buying.

For example, your sales conversations might reveal that one of Marcus’s triggers is urgency. His manager might ask him to improve the sound quality by the next episode, prompting his search.

If you don’t have internal intel, use tools like AnswerThePublic, AlsoAsked, or Semrush’s Keyword Magic Tool.

Keyword Magic Tool – Podcast editing

This will help you identify the language people use when they’re ready to act.

For Marcus, my AlsoAsked research led to questions like:

  • “Can I record a podcast with just my phone?” This may suggest a desire to start immediately, without professional equipment.
  • “How to make a podcast with someone far away” may suggest a sudden need to work with a remote guest or co-host.

AlsoAsked – Questions

You can also refer back to your research on community spaces.

(Or conduct additional audience research, if needed.)

These spaces are where people describe the exact moments they decide to take action. Think plateaus, milestones, and failed attempts.

When I searched “podcast marketing” on Reddit, I found a post from someone experiencing clear triggers:

Reddit – Podcast marketing

This user has been unable to get a consistent flow of organic listeners despite high-quality content.

Trigger: A growth plateau that pushed him to ask for help.

He’s also trying to hit his first 1,000 listeners.

Trigger: A goal that pushed him to look for solutions.

If you collected a lot of content, upload it to an AI tool to quickly identify triggers.

Use this prompt:

Analyze these community posts and discussions. Identify the specific trigger moments that pushed people to actively search for solutions.

For each trigger, show:

  1. The exact moment or event described (quote the language they use)
  2. The type of trigger (situational, temporal, emotional, or goal-driven)
  3. The action they took as a result

Format as a table.


After analyzing the content I gathered, I identified the key triggers pushing Marcus to search:

Persona template – Triggers

5. What Language Resonates (and What Turns Them Off)?

Answer this question to find out:

  • Which messaging angles resonate
  • What tones build trust with your audience
  • Which phrases trigger objections or skepticism


The words you use can affect whether your persona trusts you or tunes out.

The right language makes people feel understood. The wrong language creates friction and drives them away.

When you know what resonates, you can create messaging that builds trust and motivates your personas to act.

How to Answer This Question

Refer back to your research from Questions 3 and 4.

This time, focus specifically on language patterns in reviews and community discussions.

Look at:

  • Exact phrases people use to describe success, relief, or satisfaction
  • Words highlighting frustration, disappointment, and concerns

For example, on Capterra, users praised podcasting platforms that “do a lot” and let them “distribute with ease.”

Capterra – Review on podcasting platform

This language signals Marcus’s preference for all-in-one platforms.

He would likely connect with messaging that emphasizes functionality without complexity.

Next, review the content you previously gathered from community spaces.

In r/podcasting, users like Marcus write with direct, benefit-focused language:

Reddit – r/podcasting – Benefit focused language

Notice what he values: simplicity and concrete outcomes (“automatic transcripts”).

He doesn’t use jargon like “AI-powered transcription engine” or “enterprise-grade recording infrastructure.”

Plain language that emphasizes quick results over technical capabilities works best with this persona.

Once you have enough data, use this LLM prompt to identify language patterns:

Analyze these customer reviews and community discussions I’ve shared. Identify:

  1. Most common words and phrases people use to describe positive experiences
  2. Most common words and phrases that signal frustration or concerns
  3. Emotional undertones in how they describe problems and solutions

Create a table organizing these insights.


This analysis revealed the specific language that Marcus reacts to positively (and negatively).

Persona template – Language

6. What Content Types Do They Engage With Most?

Answer this question to find out:

  • Content types to prioritize in your content strategy
  • How to structure content for maximum engagement
  • What length and style work best for each format


Knowing the content types your audience prefers has multiple benefits.

It lets you create content that captures your persona’s attention and keeps them engaged.

Think about it: You could write the most comprehensive guide on podcast equipment.

But if your ideal customer prefers video reviews, they’ll scroll right past it.

How to Answer This Question

You identified your persona’s most-used platforms in Question 1. Now analyze which content formats perform best on each.

Conduct a few Google searches to identify popular content types.

You’ll learn what users (and search engines) prefer for specific queries. Look at videos, written guides, infographics, carousels, podcasts, and more.

For example, when I search “how to set up podcast equipment,” the top results are a mix: long-form articles, video tutorials, and community discussions.

Google SERP – How to set up podcast equipment

But organic search rankings don’t tell the full story.

Analyze content directly on your persona’s preferred platforms, too.

I searched “How to distribute a podcast” on YouTube and assessed the top 20 videos and Shorts for:

  • Video length
  • Views
  • Comments
  • Engagement patterns
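Once those stats are copied into a spreadsheet or list, the ranking step can be scripted. This sketch uses made-up video data and a simple comments-per-1,000-views proxy for engagement; the length buckets are illustrative assumptions.

```python
# Hypothetical stats gathered for top-ranking videos (stand-ins for the
# real lengths, views, and comment counts you'd collect from YouTube)
videos = [
    {"title": "Edit a podcast in 10 min", "length_min": 9, "views": 40000, "comments": 520},
    {"title": "Full editing walkthrough", "length_min": 28, "views": 15000, "comments": 90},
    {"title": "Podcast editing Short", "length_min": 1, "views": 80000, "comments": 110},
]

def engagement_rate(video):
    """Comments per 1,000 views as a rough engagement proxy."""
    return 1000 * video["comments"] / video["views"]

def length_bucket(minutes):
    """Group videos into illustrative length buckets."""
    if minutes < 4:
        return "Short (<4 min)"
    if minutes <= 15:
        return "Tutorial (5-15 min)"
    return "Long-form (15+ min)"

# Rank videos by engagement, highest first
for v in sorted(videos, key=engagement_rate, reverse=True):
    print(f"{length_bucket(v['length_min']):20} "
          f"{engagement_rate(v):5.1f} comments/1k views  {v['title']}")
```

With the sample numbers above, mid-length tutorials come out on top and the Short trails, matching the kind of pattern you're looking for.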

Look at the creators your persona follows on each platform (from the SparkToro report in Question 1).

SparkToro – Report

Pay attention to:

  • Which content types drive the most engagement (videos vs. carousels vs. threads)
  • How these creators structure content (length, style, tone)
  • Which topics resonate most with their audience

Once you’ve collected this data, look for patterns.

Or drop your data into an LLM and ask it to find the patterns for you:

Analyze this engagement data I’ve collected for my audience persona.

Identify:

  1. Which video lengths perform best (views, comments, engagement rate) and why
  2. Which content styles generate the most engagement (tutorials, vlogs, behind-the-scenes, etc.)
  3. Any patterns in thumbnails, titles, or formats that consistently perform well

Summarize my persona’s content preferences by video type and rank them as low, medium, or high.


For Marcus, I learned that 5- to 15-minute video tutorials generated the highest engagement.

Shorts consistently underperformed for how-to queries, showing his preference for in-depth tutorials.

I documented my findings and ranked each content type by engagement level: high, medium, or low.

Persona template – Content Preferences

7. What Proof Points and Signals Matter?

Answer this question to find out:

  • What proof points influence buyers
  • How to structure case studies and testimonials
  • Where to place proof points to win people’s trust


Proof points can influence whether someone acts on your content or bounces.

They’re also a ranking factor.

Search engines and LLMs reward content that demonstrates Experience, Expertise, Authoritativeness, and Trustworthiness (E-E-A-T).

What is E-E-A-T

But different personas might value different proof points.

Understanding what matters to each persona is crucial to building trust and visibility.

How to Answer This Question

Identify the most common trust markers on your persona’s preferred sites.

Look for:

  • Author credentials: Bylines with relevant expertise
  • Methods: Transparency about how the content was created
  • Citations: Links to studies, expert quotes, industry reports, original research
  • Recency signals: Publication and last updated dates
  • Visual proof: Screenshots, before/after comparisons, annotated walkthroughs
  • Social validation: Comment sections, user discussions, engagement metrics

Use Semrush’s Keyword Overview tool to find this information.

Note: A free Semrush account gives you 10 searches in this tool per day. Or you can use this link to access a free Semrush One trial.


Enter your keyword (I used “how to start a podcast”).

Scroll to the SERP Analysis report to view the ranking domains.

Keyword Analytics – How to start a podcast – SERP Analysis – URL

Aim to review 20 to 50 pages for the best results. (Create a spreadsheet to organize the information.)

Identify which proof points they use and how prominently they’re displayed.

Here’s how I did this for one of the articles I assessed:

  • Quantified track record: “Since 2009, Buzzsprout has helped over 400,000 podcasters”
  • First-person experience: “I’ve drawn on lessons from my own podcasts and thousands of conversations with creators”
  • Third-party sources: Expert advice cited from Apple Podcasts on naming conventions
  • Visual demonstrations: Embedded tutorials showing recommendations in action

Buzzsprout – How to start a podcast
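If you prefer to tally the spreadsheet yourself first, a short script can surface the “8 out of 10 pages include X” patterns. The pages and proof-point tags below are hypothetical examples.

```python
from collections import Counter

# Hypothetical spreadsheet rows: one entry per reviewed page, listing the
# proof points found on it (URLs and tags are illustrative)
pages = [
    {"url": "buzzsprout.com/how-to-start", "proof": ["track record", "first-person", "visuals"]},
    {"url": "example.com/podcast-guide", "proof": ["citations", "visuals"]},
    {"url": "example.com/start-podcast", "proof": ["first-person", "visuals", "recency"]},
]

def proof_point_frequency(rows):
    """Report how many pages include each proof point, most common first."""
    counts = Counter(p for row in rows for p in row["proof"])
    total = len(rows)
    return [(point, f"{n} out of {total} pages") for point, n in counts.most_common()]

for point, freq in proof_point_frequency(pages):
    print(f"{point}: {freq}")
```

The most frequent proof points are the ones worth weighting highest in your template.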

Then, use an LLM to quickly spot patterns:

I’ve analyzed top-ranking pages for my persona and uploaded my findings.

Identify:

  1. Which proof points appear most frequently (e.g., “8 out of 10 pages include X”)
  2. How these proof points are displayed (above the fold, in sidebar, throughout content)
  3. Which combinations of proof points appear together most often

Format as a summary with the top 5 most common patterns.


Ultimately, you’ll want to infuse your content with these same trust markers to attract and convert your persona.

After identifying Marcus’s top proof points, I ranked them from medium to high in the template:

Persona template – Proof points

8. Where (and How) Should You Distribute Content to Reach This Persona?

Answer this question to find out:

  • Which platforms deserve your investment
  • What content formats work best on each platform
  • How to maximize organic reach through distribution


Where you distribute content determines whether it reaches your audience.

If you only publish content on your website but buyers find solutions on LinkedIn, you’re overlooking key touchpoints.

Even worse, you’re invisible on major platforms that LLMs scan for answers, recommendations, and citations.

How to Answer This Question

By now, you know your audience persona’s top platforms.

These are your initial distribution targets.

But you’ll ideally be able to validate them against real behavioral data.

If possible, survey recent customers to find concrete patterns about their search behavior.

Send a short survey to customers who converted in the last 90 days:

  • Where did you first hear about us?
  • Where do you go for advice about [primary pain points]?
  • What platforms do you use when researching [your product category]?
  • How do you prefer to learn about new solutions in your workflow?

Once responses come in, look for patterns in how each segment discovers, researches, and evaluates solutions.

Here’s a prompt you can use in an AI tool for faster analysis:

I surveyed recent customers about their search and discovery behavior.

Analyze this data and identify:

  1. The top 3-5 platforms where customers discovered us or researched solutions
  2. Common pain points or information needs they mentioned
  3. Preferred content formats for learning about solutions
  4. Any patterns in how different customer segments discover and evaluate us

Highlight the platforms and channels that appear most frequently, and flag any gaps between where customers search and where we currently have a presence.


Next, cross-reference your research against existing data in Google Analytics.

Open Google Analytics and navigate to Reports > Lifecycle > Acquisition > Traffic acquisition.

GA – Traffic acquisition

Sort by engagement rate or average session duration to see which channels drive genuinely engaged visitors.

Look for high time on site (2+ minutes) and multiple pages per session (3+).
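If you export the Traffic acquisition rows, those two thresholds are easy to apply in a few lines. The channel names and numbers here are invented for illustration; swap in your real export.

```python
# Hypothetical export of GA4 Traffic acquisition rows (illustrative data)
channels = [
    {"channel": "Organic Search", "avg_session_sec": 185, "pages_per_session": 3.4},
    {"channel": "Paid Social", "avg_session_sec": 45, "pages_per_session": 1.2},
    {"channel": "Referral", "avg_session_sec": 240, "pages_per_session": 4.1},
    {"channel": "Direct", "avg_session_sec": 130, "pages_per_session": 3.0},
]

def engaged_channels(rows, min_seconds=120, min_pages=3):
    """Keep channels meeting both engagement thresholds (2+ min, 3+ pages)."""
    return [
        r["channel"]
        for r in rows
        if r["avg_session_sec"] >= min_seconds and r["pages_per_session"] >= min_pages
    ]

print(engaged_channels(channels))
```

The channels that pass both filters are the ones worth prioritizing in your distribution plan.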

Then, map each platform to the content format that performs best there.

Combine insights from Question 1 (preferred platforms) and Question 6 (preferred formats) to build your distribution strategy.

Here’s what this looks like for Marcus:

Persona template – Distribution strategy

9. What Keeps This Persona Coming Back?

Answer this question to find out:

  • What product features or experiences to double down on
  • How to position your solution beyond initial use cases
  • What content to create for existing customers


Winning your audience’s attention once is easy. Earning it repeatedly is the real challenge.

Understanding what keeps your persona engaged is the key to getting them to return.

How to Answer This Question

Review all the audience persona insights you’ve gathered so far to identify recurring needs.

Look at triggers, pain points, content preferences, and community discussions.

Pinpoint problems that can’t be solved with a single article or resource.

This could include:

  • Tasks they do every week (editing, distribution, promotion)
  • Decisions they face with each piece of content (format, platform, messaging)
  • Skills they’re continuously learning (new tools, changing algorithms)
  • Friction points that slow them down every time

Then, outline the content types that repeatedly solve these problems.

Think tools, templates, checklists, and guides they’ll use repeatedly.

If you don’t want to do this manually, drop this prompt into an AI tool to synthesize your findings:

Based on my audience persona research, here’s what I’ve learned:

Questions they ask: [Paste top questions from Q2]

Challenges they face: [Paste challenges from Q3]

Triggers that push them to act: [Paste triggers from Q4]

Their preferred content types: [Paste formats from Q6]

Identify recurring problems they face repeatedly (not one-time issues).

For each recurring problem:

  1. Describe the problem in their own words
  2. Explain why it’s recurring (weekly task, ongoing decision, changing landscape, etc.)
  3. Suggest 2-3 content types that would provide repeatable value each time they face this problem

Format as a table with columns: Problem | Why It’s Recurring | Content Solutions


For Marcus, this could look something like this:

Problem area: Marcus spends too long cleaning audio

Content assets:

  • Editing workflow template (step-by-step, repeatable each week)
  • Breakdown video: “How to Edit a 30-minute Episode in Under 12 Minutes”

Problem area: Marcus wants consistent reach across platforms

Content assets:

  • Podcast distribution checklist (Apple, Spotify, YouTube, LinkedIn, newsletter)
  • Repurposing templates (social snippets, video clips, carousel outlines)

Every time Marcus faces these challenges, he can turn to them for a reliable solution.

These are the content types that have repeatable value for him:

Persona template – What brings Marcus back

Build Audience Personas That Win AI Visibility

Forget surface-level demographics.

These nine audience persona questions give you actionable, in-depth search intelligence.

You now know a lot about your persona.

You’ve uncovered where they search, what language resonates, and which proof points earn trust.

This is everything you need to show up in the right places with the right message.

If you haven’t already, download our audience persona template to organize your research.

Use it to guide your content creation, search strategy, and distribution efforts.

Your next move: Expand your visibility further with our guide to ranking in AI search. Our Seen & Trusted Framework will help you increase mentions, citations, and recommendations for your brand.

The post How to Build Audience Personas for Modern Search + Template appeared first on Backlinko.


How a 200-Person Company Competes with a $160B Giant in AI Search

At just under 200 employees, Descript is not the biggest name in video editing software.

It’s not the most robust or the most popular, either.

But it’s punching way above its weight, competing with much bigger companies (like Adobe and CapCut) in LLM search.

Using Semrush’s AI Visibility score, you can see that Descript is competing closely with giant brands like Adobe.

Semrush – AI Visibility – Competitor Research – Descript

Descript found the way in.

And so can you.

In this SaaS LLM visibility case study, we’ll break down exactly how Descript is getting seen.

And more importantly, what you can copy to improve visibility for your own product.

Choosing Clear Niche Messaging

For years, Descript has been known as a podcast editing tool.

That matters.

Because when people talk about podcast editing, Descript comes up naturally.

In blog posts.

In forums.

And now, in AI answers.

This isn’t accidental. Descript is clear about who it’s for, and their content reflects that focus.

Their product pages and blog posts consistently speak to one core audience: people who want to edit podcasts easily.

Here’s why this matters:

When I asked Google’s AI Mode for the best software to edit podcasts — specifically as someone with no video editing skills — Descript was one of the first tools mentioned.

Google AI Mode – Video editing software

And what shows up second in the list of sources?

One of Descript’s own blog posts about podcast editing.

Across Descript’s own website and other third-party sources, this tool is regularly mentioned as ideal for podcasters.

This matters because of a key difference between AI search and traditional SEO.

LLMs don’t just surface pages. They base their answers on query fan-out.

Here’s what that means: the AI generates multiple related searches from the original query and tries to find the answer that most directly matches what was asked.

How LLM Query Fan-out Works

That’s why even articles and websites that aren’t ranking well in Google can still get cited by AI when they provide the most relevant, specific answer to what users are asking.

Because Descript’s content is tightly focused on one audience, one use case, one problem, it maps cleanly to those AI queries.

That doesn’t necessarily correlate with higher rankings in traditional search. In fact, Descript’s traffic from traditional SEO has been steadily decreasing since its peak in 2024:

Organic Rankings – Descript – Estimated Traffic Trend

But at the same time, branded traffic has increased.

So even as the brand loses ground in traditional search, more people are becoming aware of Descript and searching for the brand name specifically.

Why? In part, because the brand is known for exactly what it does: podcast editing.

AI knows that too. And I would bet that a higher number of mentions in AI search is helping with brand recognition and driving that increase in branded search traffic.

Here’s the point: Descript isn’t just checking off boxes of what to talk about.

The way they write — and the way they present their product — shows exactly who they’re speaking to. They match the way their audience talks.

Take the blog article on podcast editing that we mentioned above as an example.

The copy flows naturally, includes quotes from an internal expert describing the problem and solution, and speaks in a relaxed tone that matches the audience.

Descript – Copy flows naturally

As a byproduct of this natural way of writing and clear product position, their copy and content semantically matches what their audience is searching for.

And their AI mentions keep increasing.

Visibility Overview – Descript – AI Visibility

Action Item: Identify and Focus on Your Niche Market

Effort vs. Impact: Medium effort. High impact.

If you’re trying to be all things to everyone, AI is less likely to recommend you for anything specific.

Instead, narrow your focus like Descript does:

Descript – Homepage

Of course, you also want to find balance.

For example, “Podcast editing software for true crime hosts who only record on Thursdays” may be a bit too niche.

To get the narrowest viable version of your core audience, look at your most successful customers.

Ask:

  • Who gets the most ROI from our product?
  • Who uses it weekly — or daily?
  • Which customers have become vocal advocates?
  • What do those users have in common? (Role, company size, industry, workflow)

That overlap is your niche.

Once that’s clear, your messaging gets easier.

You stop being an “All-in-one AI-powered platform for creators and teams.”

And start anchoring your product to a specific job: “Edit podcasts and spoken audio, without technical complexity.”

Then, your product becomes easier for AI systems to understand — and recommend — for specific use cases.

Further reading: Learn how to do deep audience research, along with a free audience research tracker template.


Developing Seriously Helpful Content

Once you know who you’re talking to, the next step is obvious: Help them.

That idea isn’t new.

Helpful content has long been a ranking factor in traditional search.

And in 2024, Google confirmed that their algorithm changes had reduced the appearance of low-quality content in search results by 45%.

Google – Low-quality results

But Descript’s example (and plenty of others) shows how this also applies to AI search.

Because clear, useful, unique content also drives LLM visibility.

Descript doesn’t rely on shallow blog posts or surface-level explanations.

They create:

  • Instructional blog content that answers real questions
  • Help Center pages that actually solve problems
  • Product pages that clearly explain what features do — and who they’re for

They also publish content that isn’t strictly about their product, but is highly relevant to their audience.

For example:

When I asked Google’s AI Mode how much YouTubers actually make, one of the cited sources was a Descript blog post on the topic.

Google AI Mode – How much YouTubers make

That article includes:

  • Data from recent studies
  • Real-world examples
  • A YouTube earnings calculator

It’s comprehensive. And it’s written from an expert perspective.

Here’s another example: When I asked how much it costs to start a YouTube channel, I was again directed to an article from Descript.

Google AI Mode – Starting YouTube channel

That page includes a detailed FAQ and embedded video content from Descript’s own YouTube channel.

Descript – Create a YouTube channel

The pattern is clear.

Depth gets cited. Surface-level content gets ignored.

Action Item: Focus Your Content on Being Helpful

Effort vs. Impact: High effort. Medium impact.

Once you’ve defined your niche, focus your content on what actually helps that audience.

Descript doesn’t target video editing professionals. So, they don’t show up in those searches.

ChatGPT – AI tends toward bigger players

They focus on content creators and podcasters. And their content reflects that.

To do the same:

  • Talk to people in your niche industry
  • Ask about their workflows, goals, and sticking points
  • Learn what slows them down

Pro tip: If you can’t speak directly to people in your audience or customer base, talk to your customer-facing teams. Customer success and sales teams have daily contact with your core audience. So, they’re in a better position to give you insights into what this audience cares about.


Online research also helps.

Find relevant subreddits to see what people are talking about. Check the comments section of relevant YouTube videos.

Look for recurring questions and complaints.

For example, the Descript team might peruse the r/podcasting subreddit to learn about their audience’s questions and opinions.

Reddit – r/podcasting – Subreddit

The goal: understanding.

When you deeply understand your audience’s day-to-day reality, creating helpful content becomes much easier.

And your content can become the source for AI answers.

Of course, getting citations back to your website isn’t the same as getting direct brand mentions. However, it’s still an opportunity to build awareness and authority.

Plus, building content around relevant core topics helps reinforce your niche messaging.

Further reading: Read the full guide on how to create helpful content.


Showcasing Images and Videos of Their Product

LLMs don’t just read text anymore.

They interpret visuals too.

With image-processing models like contrastive language–image pre-training (CLIP), AI systems can understand what’s happening inside screenshots and videos — not just the words around them.

And those visuals now show up directly in AI answers. Especially for SaaS product queries in tools like ChatGPT.

For example, when I search for “best CRM software for a small business,” the top AI result includes images of the actual product interface.

ChatGPT – Best CRM software for a small business

That’s a shift.

Highly polished mockups matter less. Real, in-product visuals matter more.

Which is why Descript shows up like this in ChatGPT:

ChatGPT – Best software to edit podcasts

Descript consistently shows real product images and videos across product pages, Help Center articles, and blog content.

These aren’t decorative.

They show:

  • What the product looks like
  • How features work
  • What users should expect when they log in

As a result, those same images and videos get pulled into AI answers — often with a link back to Descript’s site.

ChatGPT – Link back to Descript's site

In this case, the link goes back to a very in-depth Help Center guide to getting started with podcast editing.

Descript Help Center

And most interestingly, that’s a near-perfect semantic match to the original query.

Action Item: Include In-Product Images in Your Marketing Content

Effort vs. Impact: Low effort. Medium impact.

Start with the basics.

For every feature you highlight, ask one question: Can someone see this working?

Then act on it. Add real screenshots of your core product screens to key product pages. Replace abstract diagrams with in-product visuals where possible.

Next, expand beyond product pages.

Mention a feature in a blog post? Include a screenshot of it in use.

Descript – Mention in a blog post

Explaining a workflow in a Help Center article? Show each step visually.

Descript – Importing a Zoom recording into a new project

Teaching a process? Record a short screen capture instead of relying on text alone.

Descript – Short screen capture

The goal is clarity.

Clear visuals help users understand your product faster. And they give AI systems concrete material to reuse in answers.

Which makes your product easier to recommend — and easier to recognize — inside AI search.

Creating Detailed MoFu/BoFu Content

Content mapped to different awareness levels performs especially well in AI search.

Descript understands this.

They don’t just publish top-of-funnel guides. They create content for product-aware and solution-aware searches, too.

When you search in ChatGPT for video creation or editing tools, Descript often appears in the results.

But more importantly, their own content is cited as a source.

ChatGPT – Video Creation & Editing Tools

In this example, the cited source is a Descript-owned “best of” article comparing video tools.

Descript – Blog Article

Instead of generic recommendations, the page:

  • Breaks tools down by specific use cases
  • Includes clear pros and cons
  • Explains who each option is best for

Descript – Best For

Descript follows this same pattern with multiple “best of” lists and comparison pages against their main competitors.

The payoff?

When I asked AI to compare podcast video editing tools, Descript appeared with clear labels explaining:

  • Who it’s best for
  • Key features
  • When it makes sense to choose it

Google AI Mode – Comparison Table

That context helps AI recommend Descript to the right people (not everyone).

Action Item: Create Citable MoFu and BoFu Content

Effort vs. Impact: High effort. High impact.

Different awareness levels need different content.

Customer Awareness Levels

To increase product-level AI visibility, focus on Product Aware and Solution Aware queries.

For Product Aware audiences, create:

  • Comparison pages
  • “Best alternative” posts
  • Owned “best of” lists

Want more ideas?

Talk to your sales team.

Ask them: What features are convincing people to buy? Which competitors are commonly brought up in sales conversations?

Those answers map directly to comparison content AI likes to cite.

For Solution Aware audiences, focus on how-to content that naturally features your product.

For example, when I asked Google’s AI Mode how to reduce background noise from a microphone, it referenced a Descript how-to article.

Google AI Mode – Prompt – Sources

This same pattern repeats itself across many of Descript’s blog posts: Find a clear problem, give a clear solution, add product mentions naturally.

It’s all about finding the right questions to answer.

To find these opportunities faster, use Semrush’s AI Visibility Toolkit. This data is powered by Semrush’s AI prompt database and clickstream data, organized into meaningful topics.

Head to “Competitor Research” and review:

  • Shared topics where competitors appear
  • Prompts where they earn more AI visibility than you

AI Visibility – Competitor Research – Descript – Topics & Prompts

Then, dig into the specific questions behind those prompts.

AI Visibility – Competitor Research – Descript – Prompt

The goal isn’t simply “more content”.

It’s answering the right questions — at the right stage — with content AI can confidently cite.

Building Positive Sentiment With Digital PR and Affiliate Marketing

AI visibility isn’t earned on your website alone.

LLMs look for signals across the web.

This is what we call consensus. And it means that positive sentiment has to exist outside your owned channels.

Descript is doing this in two ways:

  • Digital PR on sites AI already trusts
  • A creator-friendly affiliate program that drives third-party mentions

Here’s how it works: Google’s AI Mode tends to favor certain websites as sources when answering queries about software.

Semrush’s visibility research for AI in SaaS from December 2025 shows these sites dominate citations:

  • Zapier
  • PCMag
  • Gartner
  • LinkedIn
  • G2

Semrush – AI Visibility – Google AI Mode

Here’s what’s interesting.

Descript is mentioned in articles across nearly all of these top sources.

For example, in software listicles like this one on Zapier:

Zapier Blog – Best transcription apps

Or in real-world experience articles like this one on Medium:

Medium – Descript article

Or in their clear listings on reviews sites like Gartner and G2:

Descript – Reviews

When AI systems cite those favored sources, Descript comes along for the ride.

Not because it’s the biggest brand.

But because it’s present where AI is already looking.

Google AI Mode – Software for video transcription

The second lever is Descript’s affiliate program.

It’s simple:

  • $25 per new subscriber
  • 30-day attribution window
  • Monthly payouts
  • No minimums

Descript – Affiliate

Those are solid incentives.

And they lead to more creator-driven content across the web.

For example, a YouTube walkthrough from VP Land explains how to use Descript and includes an affiliate link in the description.

YouTube-video – Descript – Affiliate link in description

When I later asked Google’s AI Mode how to use Descript, that exact video was cited as a source.

Google AI Mode – Video as a source

That’s the pattern.

Affiliate content creates citable, trusted references that AI systems reuse.

Action Item: Build a Strategy to Get More Mentions Online

Effort vs. Impact: High effort. High impact.

Getting third-party mentions is all about building relationships.

First, build relationships with publishers, starting with the ones AI already trusts.

Even if you’re not an enterprise SaaS company with a full-sized PR team, this is still possible.

Granted, it’s not the easy route — but when you find the right websites and perform regular outreach to those teams, you can get your brand on these sites.

Before you start outreach, get your bearings.

Start by going back to Semrush’s AI Visibility Toolkit. Head to the “Competitor Research” tab and select “Sources.”

AI Visibility – Competitor Research – Descript – Sources

This shows you:

  • Which sites LLMs cite for your category
  • Where competitors are already getting mentioned
  • Gaps where your brand doesn’t show up (yet)

Those sites become your shortlist.

Outreach works better when you’re aiming at sources AI already relies on.

Second, build relationships with creators.

Affiliate programs work when creators want to talk about you.

So, build an affiliate program people actually want to be part of.

This means the program has to be easy to join, with clear terms that make it worth their time.

At a minimum, make sure you have:

  • A simple signup
  • Transparent tracking
  • Reliable payouts

Pro tip: Use a tool like PartnerStack to handle all of the details automatically. Better signups, better tracking, and automated payouts build trust with your affiliates.


If you need inspiration, research top affiliate programs to learn more about the conditions creators expect.

But most importantly: Treat affiliates as distribution partners, not just a side channel.

This means enabling them with clear positioning on your product, example use cases, demo workflows, screenshots they can reuse, and other resources.

The better you equip them, the stronger their recommendations will be.

Once you have this set up, track the results.

Use AI visibility data to see:

  • Which publisher relationships are turning into citations in AI search
  • Which creators show up in AI answers
  • Which formats perform best

Then, double down.

Now that we’ve discussed what Descript is doing well, let’s look at where there’s room for improvement.

Where Descript Could Improve: Reddit Marketing

Descript is doing a great job in many areas that are important for AI search visibility.

That said, there’s one area they’re missing out on: Reddit.

And yes, Reddit matters. A lot.

It’s still one of the most-cited sources in Google’s AI Mode.

And in almost all of the searches I tested above, Reddit was cited as a source (especially conversations in the r/podcasting subreddit).

Google AI Mode – Reddit sources

Here’s the problem: right now, Reddit is not doing Descript any favors.

Here are a few thread titles I found just by searching for Descript in a podcasting subreddit:

Reddit – r/podcasting – Negative threads

And yes, there are positive mentions of Descript. But they’re buried under a wave of negative sentiment.

When LLMs scan Reddit for sentiment, that imbalance matters.

AI wants to see consensus. So when Reddit skews negative, recommendations may weaken, and alternatives get surfaced instead.

Even when the product is strong.

That’s why, while Descript’s AI visibility is good, it’s still not as good as it could be. And that vulnerability could hurt them in the long run, even if they’re still doing everything else right.

Here are some ways that Descript (and you) could turn the tides on Reddit:

  • Stop promoting and start participating: Reddit punishes marketing language. Helpful, honest comments perform better than posts.
  • Respond to criticism directly (when appropriate): Not defensively, but with clear explanations and fixes.
  • Be present before there’s a problem: Accounts that only show up during damage control don’t build trust.
  • Focus on comments, not posts: High-value comments in active threads outperform standalone branded posts.
  • Monitor brand mentions weekly: Focus especially on high-intent subreddits. In Descript’s case, that could be r/podcasting.
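
The monitoring step above can be sketched in a few lines. This is a minimal illustration, not Descript's actual process: all thread data below is hypothetical, and in practice the input could come from Reddit's public JSON listings or a social-listening tool.

```python
# Minimal sketch of weekly brand-mention triage. The thread data is
# hypothetical; real data could come from Reddit's public JSON listings
# or a social-listening tool.

def triage_mentions(threads, brand):
    """Return threads mentioning the brand, most engaged first."""
    hits = [t for t in threads if brand.lower() in t["title"].lower()]
    # Rank by combined engagement so the loudest conversations surface first
    return sorted(hits, key=lambda t: t["score"] + t["num_comments"], reverse=True)

threads = [
    {"title": "Best podcast workflow?", "score": 40, "num_comments": 12},
    {"title": "Descript export issues", "score": 8, "num_comments": 55},
    {"title": "Why I switched away from Descript", "score": 90, "num_comments": 30},
]

for t in triage_mentions(threads, "Descript"):
    print(t["title"])
```

Running a pass like this weekly gives the team a short, prioritized list of threads worth a genuine, non-promotional reply.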

To be fair, it seems like Descript is taking steps in the right direction.

As of December 2025, the Descript team has taken control of a dedicated brand subreddit, with PMM Gabe at the helm.

Reddit – r/Descript – Team

And the team’s responses feel very Reddit-friendly, not using marketing jargon or being pushy.

Reddit – r/Descript – Filler Words

But popular threads here still have very little interaction with the Descript team. And there seem to be very few (if any) comments from the Descript team outside of this branded subreddit.

It’s a step in the right direction, but there’s still a lot to work on.

Done right, Reddit becomes a sentiment stabilizer and a stronger input source for AI answers.

Ignore it, and Reddit can become a liability.

Remember: for AI visibility, silence isn’t neutral.

Further reading: If Reddit feels like a whole other world, we’ve got you covered. Read our full guide to Reddit Marketing.


What You Can Take Away from This SaaS LLM Visibility Case Study

Descript isn’t winning AI visibility because it’s the biggest brand.

It’s winning because it’s clear, focused, and consistently helpful.

None of that is accidental.

And none of it requires massive scale.

You can get started on this today by choosing one key action to work on.

Use the effort vs. impact lens from this article to choose where to start.

  • Add in-product screenshots and videos: Low effort, medium impact
  • Tighten your niche messaging: Medium effort, high impact
  • Build citable MoFu/BoFu content: High effort, high impact
  • Invest in digital PR, affiliates, and community participation: High effort, high impact
  • Create seriously helpful content: High effort, high impact

Effort vs. Impact

Pick one, start there. AI search visibility tools for SaaS companies — like Semrush’s AI Visibility Toolkit — can help you see exactly where you stand today, and where you can improve.

Remember: LLM visibility isn’t about chasing algorithms.

It’s about making your product easier to understand, easier to trust, and easier to recommend.

Do that consistently — and AI search will follow.

Want to learn how it all works on a deeper level? Read our LLM visibility guide to discover even more ways to increase your brand mentions and citations in AI search.

The post How a 200-Person Company Competes with a $160B Giant in AI Search appeared first on Backlinko.

Read more at Read More

Content Types & Formats That Earn Mentions in LLMs

Comparative listicles account for 32.5% of all AI citations. Comprehensive guides with data tables achieve 67% citation rates. Pages with […]

The post Content Types & Formats That Earn Mentions in LLMs appeared first on Onely.

Read more at Read More

How to Leverage Google Natural Language to Boost Your ASO Efforts 

Over the past year, Google has significantly accelerated its investment in artificial intelligence and machine learning across its products and platforms. While most marketers are familiar with ChatGPT, Google has been advancing its own AI capabilities in parallel, including the relaunch of Bard as Gemini and the steady rollout of AI-assisted features across Google Play.

For app marketers and ASO specialists, these developments are not abstract. They represent a fundamental shift in how apps are understood, categorized, and surfaced to users. Google Play is no longer relying primarily on keyword matching. Instead, it is moving toward a deeper, semantic understanding of apps, their functionality, and the problems they solve.

This evolution raises an important question. If Google increasingly generates, interprets, and evaluates app metadata itself, how do ASO teams maintain control, differentiation, and long-term competitive advantage?

One underutilized answer lies in a tool that has existed for years but is rarely discussed in an ASO context: the Google Natural Language API.

Key Takeaways

  • Google Play is moving away from keyword density and toward semantic understanding driven by machine learning and natural language processing.
  • The Google Natural Language API provides valuable insight into how Google interprets app metadata, including entities, sentiment, and category relevance.
  • Optimizing for category confidence and entity relevance can improve keyword coverage and resilience during algorithm updates.
  • ASO teams that align metadata with user intent and natural language patterns are better positioned for long-term discovery performance.
  • Using tools like the Google Natural Language API helps future-proof ASO strategies as automation and AI-driven ranking signals continue to expand.

Why Traditional ASO Signals Are Losing Impact

Before exploring how the Google Natural Language API can support ASO, it is important to understand the broader shifts in Google Play’s ranking algorithms.

Over the past two years, Google Play has shifted away from frequent, visible algorithm swings toward a more continuous learning model. While ASO teams still see volatility, it is now driven less by discrete updates and more by ongoing recalibration as models ingest new behavioral, linguistic, and performance data. Reindexing events still occur, but they are increasingly tied to semantic reassessment rather than simple metadata changes.

At the same time, the effectiveness of traditional optimization levers such as keyword density, exact-match repetition, and rigid keyword placement has continued to erode. These tactics no longer align with how Google Play evaluates relevance.

Like Google Search, Google Play is now firmly optimized for meaning, not mechanics. Its systems are designed to understand intent, function, and audience context rather than rely on surface-level keyword signals. The algorithm is increasingly capable of identifying what an app does, who it serves, and the problems it solves, even when those ideas are expressed using varied, natural language.

This is where natural language processing becomes central to modern ASO tools and practices.

Explanation of Natural Language processing.

What Is the Goal of the Google Natural Language API?

The Google Natural Language API (GNL) is designed to help machines understand human language in a way that more closely mirrors human interpretation. It powers a wide range of Google products and capabilities, including sentiment analysis, entity recognition, content classification, and contextual understanding.

In practical terms, it analyzes a body of text and identifies:

  • The overall sentiment and tone.
  • Key entities and their relative importance.
  • The categories and subcategories that the content most strongly aligns with.

For ASO teams, this offers a rare opportunity. Instead of guessing how Google might interpret app metadata, it provides a proxy for understanding how Google’s machine learning systems read and categorize text.

Used correctly, it can help ASO specialists align metadata more closely with Google’s evolving ranking logic.

How the Google Natural Language API Applies to ASO

When applied to app metadata, the Google Natural Language API can reveal how Google is likely to associate an app with certain concepts, categories, and keyword themes. This insight is particularly valuable as keyword density becomes less influential and semantic relevance takes priority.

Below are the key components that matter most for ASO.

Sentiment Analysis

Sentiment analysis evaluates the emotional tone of a piece of text and categorizes it as positive, negative, or neutral. While sentiment is not a primary ranking factor for app discovery, it does provide useful contextual information.

For example, overly promotional, aggressive, or unclear language can introduce noise into metadata. Reviewing sentiment outputs can help teams ensure that descriptions maintain a clear, neutral, and informative tone that supports both user trust and algorithmic interpretation.
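
As a quick illustration, the `documentSentiment` fields from an analyzeSentiment-style response (a `score` from -1.0 to 1.0 plus a `magnitude`) can be mapped to a rough tone label during metadata review. This is a minimal sketch: the field names follow the API's documented response shape, but the sample values and the thresholds are our own assumptions.

```python
# Minimal sketch: turning a sentiment response shaped like the Google Cloud
# Natural Language API's analyzeSentiment output into a tone label.
# The thresholds and sample values below are illustrative assumptions.

def tone_label(score: float, magnitude: float) -> str:
    """Map document sentiment (score -1.0..1.0, magnitude >= 0) to a label."""
    if magnitude < 0.5:
        return "neutral"   # little emotional content either way
    if score >= 0.25:
        return "positive"
    if score <= -0.25:
        return "negative"
    return "mixed"         # high magnitude but near-zero net score

sample_response = {"documentSentiment": {"score": 0.4, "magnitude": 1.2}}

s = sample_response["documentSentiment"]
print(tone_label(s["score"], s["magnitude"]))  # positive
```

A "mixed" or strongly "negative" label on a store description is a useful prompt to review the copy for the noisy, overly promotional language described above.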

Entity Recognition and Salience

Entity recognition identifies specific entities within a text and classifies them into predefined types such as company, product, feature, or concept. Each entity is assigned a salience score, which reflects how central that entity is to the overall content.

In an ASO context, entities might include:

  • Core app features
  • Functional use cases
  • Industry-specific terms
  • Recognizable product or service concepts

Salience scores range from 0 to 1.0. Higher scores indicate that an entity plays a more important role in defining the content.

From an optimization perspective, this is critical. If key features or use cases are not appearing as highly salient, it suggests Google may not be strongly associating the app with those concepts.

Strategically incorporating relevant entities into metadata in a natural, user-focused way can improve clarity and strengthen topical relevance. Placement also matters. Important entities that appear early in descriptions or are reinforced toward the end of the text tend to carry more weight.

Metadata entities.
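
A quick way to audit this is to pull entities and their salience out of an analyzeEntities-style response and flag what the model treats as central. Only the `entities`, `name`, and `salience` field names below follow the API's documented output; the sample data and the 0.1 threshold are illustrative assumptions.

```python
# Minimal sketch: surfacing the most salient entities from a response shaped
# like the Natural Language API's analyzeEntities output. Sample data is
# illustrative, not real API output.

def salient_entities(response: dict, min_salience: float = 0.1):
    """Return (name, salience) pairs above a threshold, highest first."""
    ents = [(e["name"], e["salience"]) for e in response.get("entities", [])]
    return sorted((e for e in ents if e[1] >= min_salience),
                  key=lambda e: e[1], reverse=True)

sample = {"entities": [
    {"name": "podcast editing", "salience": 0.52},
    {"name": "transcription",   "salience": 0.31},
    {"name": "free trial",      "salience": 0.04},
]}

print(salient_entities(sample))
# [('podcast editing', 0.52), ('transcription', 0.31)]
```

If a core feature you want to rank for lands below the threshold (like "free trial" here), that is a signal to move it earlier in the description or reinforce it in the closing copy.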

Categories and Confidence Scores

Category classification is arguably the most impactful element of the Google Natural Language API for ASO.

When text is analyzed, the API assigns it to one or more categories and subcategories, each with an associated confidence score. These scores indicate how strongly the content aligns with a given category.

For Google Play, this has major implications. Higher category confidence increases the likelihood that an app will be associated with a broader range of relevant search queries within that category. Rather than ranking for a narrow set of exact keywords, apps can gain visibility across an expanded semantic keyword space.

In practice, we have seen that improving category confidence can significantly enhance keyword coverage and ranking stability, particularly during periods of algorithm change.

To increase category confidence:

  • Use clear, natural language that reflects real user intent
  • Focus on describing functionality and value, not just features
  • Avoid keyword stuffing or forced phrasing
  • Reinforce category-relevant concepts consistently throughout metadata

Hinge's Dating App.
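
One way to operationalize this check is to sort a classifyText-style response by confidence and verify that the top category matches the positioning you intend. The `categories`, `name`, and `confidence` field names follow the API's documented output; the sample values and the 0.5 threshold are illustrative assumptions.

```python
# Minimal sketch: checking category confidence against a target category,
# using a response shaped like the Natural Language API's classifyText
# output. Sample values and the 0.5 threshold are illustrative.

def category_check(response: dict, target_prefix: str, min_confidence: float = 0.5):
    """Return (ok, top_name, confidence) for the highest-confidence category."""
    cats = sorted(response.get("categories", []),
                  key=lambda c: c["confidence"], reverse=True)
    if not cats:
        return (False, None, 0.0)
    top = cats[0]
    ok = top["name"].startswith(target_prefix) and top["confidence"] >= min_confidence
    return (ok, top["name"], top["confidence"])

draft = {"categories": [
    {"name": "/Arts & Entertainment/Music & Audio", "confidence": 0.78},
    {"name": "/Computers & Electronics/Software",   "confidence": 0.41},
]}

print(category_check(draft, "/Arts & Entertainment"))
```

Re-running metadata drafts through a check like this supports the iterative loop the next section describes: revise the wording, re-classify, and keep the version where the intended category wins with higher confidence.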

Applying GNL Insights to Metadata Strategy

The real value of the Google Natural Language API lies not in isolated analysis, but in iterative optimization. By repeatedly testing metadata drafts through the API, ASO teams can refine language until category confidence, entity salience, and overall clarity improve.

This approach aligns well with broader 2026 ASO best practices, which emphasize:

  • User intent over keyword lists
  • Semantic relevance over repetition
  • Long-term stability over short-term gains

Case Study Insights

We have applied GNL-driven optimization techniques across multiple app categories. While results vary by vertical, the overall pattern has been consistent.

During periods of significant Google Play algorithm updates, apps optimized around category confidence and entity relevance showed greater resilience. In several cases, visibility improved despite widespread volatility elsewhere in the store.

In one example, keyword coverage expanded substantially following metadata updates that increased confidence across both a core category and secondary related categories. This translated into a more than fivefold increase in organic Explore installs over time.

A Yodel Mobile case study about keyword coverage.

These results reinforce an important principle. When ASO strategies align with how Google understands language, they are better positioned to benefit from algorithm evolution rather than being disrupted by it.

Connecting GNL to 2026 ASO Strategy

Looking ahead, the role of natural language processing in app discovery will only grow. As Google continues to automate metadata creation and interpretation, manual optimization will shift from mechanical execution to strategic guidance.

ASO teams that understand and leverage tools like the Google Natural Language API will be better equipped to:

  • Guide AI-generated content rather than react to it
  • Maintain differentiation in an increasingly automated ecosystem
  • Build metadata that supports both paid and organic discovery

This approach also complements broader trends such as AI-powered search, cross-platform discovery, and privacy-first measurement frameworks.

Conclusion

The rise of natural language processing does not signal the end of ASO. Instead, it marks a shift in how optimization should be approached.

By moving beyond keyword density and embracing semantic relevance, ASO teams can align more closely with Google’s evolving algorithms. The Google Natural Language API offers a practical way to understand how app metadata is interpreted and how it can be improved to support discovery, conversion, and long-term stability.

As automation continues to expand across Google Play, the teams that succeed will be those who understand the systems behind it and adapt their strategies accordingly. Natural language optimization is no longer optional. It is becoming a core pillar of modern ASO.

Read more at Read More

TikTok launches AI-powered ad options for entertainment marketers

TikTok SEO: The ultimate guide

TikTok is giving entertainment marketers in Europe new tools to reach audiences with precision, leveraging AI to drive engagement and conversions for streaming and ticketed content.

What’s happening. TikTok is introducing two new ad types for European campaigns:

  • Streaming Ads: AI-driven ads for streaming platforms that show personalized content based on user engagement. Formats include a four-title video carousel or a multi-title media card. With 80% of TikTok users saying the app influences their streaming choices, these ads can directly shape viewing decisions.
  • New Title Launch: Targets high-intent users using signals like genre preference and price sensitivity, helping marketers convert cultural moments into ticket sales, subscriptions, or event attendance.

Context. The rollout coincides with the 76th Berlinale International Film Festival, underscoring TikTok’s growing role in entertainment marketing. In 2025, an average of 6.5 million daily posts were shared about film and TV on TikTok, with 15 of the top 20 European box office films last year being viral hits on the platform.

Why we care. TikTok’s new AI-powered ad formats let streaming platforms and entertainment brands target users with highly personalized content, increasing the likelihood of engagement and conversions.

With 80% of users saying TikTok influences their viewing choices (according to TikTok data), these tools can directly shape audience behavior, helping marketers turn cultural moments into subscriptions, ticket sales, or higher viewership. It’s a chance to leverage TikTok’s viral influence for measurable campaign impact.

The bottom line. For entertainment marketers, TikTok’s AI-driven ad formats provide new ways to engage audiences, boost viewership, and turn trending content into measurable results.

Dig deeper. TikTok Adds New Ad Types for Entertainment Marketers

Read more at Read More

Meta adds Manus AI tools into Ads Manager

Inside Meta’s AI-driven advertising system: How Andromeda and GEM work together

Meta Platforms is embedding newly acquired AI agent tech directly into Ads Manager, giving advertisers built-in automation tools for research and reporting as the company looks to show faster returns on its AI investments.

What’s happening. Some advertisers are seeing in-stream prompts to activate Manus AI inside Ads Manager.

  • Manus is now available to all advertisers via the Tools menu.
  • Select users are also getting pop-up alerts encouraging in-workflow adoption.
  • The feature rollout signals deeper integration ahead.

What is Manus. Manus AI is designed to power AI agents that can perform tasks like report building and audience research, effectively acting as an assistant within the ad workflow.

Why we care. Manus AI brings AI-powered automation directly into Meta Platforms Ads Manager, making tasks like report-building, audience research, and campaign analysis faster and more efficient.

Meta is currently prioritizing tying AI investment to measurable ad performance, giving advertisers new ways to optimize campaigns and potentially gain a competitive edge by testing workflow efficiencies early.

Between the lines. Meta is under pressure to demonstrate practical value from its aggressive AI spending. Advertising remains its clearest path to monetization, and embedding Manus into everyday ad tools offers a direct way to tie AI investment to performance gains.

Zoom out. The move aligns with CEO Mark Zuckerberg’s push to weave AI across Meta’s product stack. By positioning Manus as a performance tool for advertisers, Meta is betting that workflow efficiencies will translate into stronger ad results — and a clearer AI revenue story.

The bottom line. For advertisers, Manus adds another layer of built-in automation worth testing. Early adopters may uncover time savings and optimization gains as Meta continues expanding AI inside its ad ecosystem.

Read more at Read More