TikTok launches AI-powered ad options for entertainment marketers

TikTok is giving entertainment marketers in Europe new tools to reach audiences with precision, leveraging AI to drive engagement and conversions for streaming and ticketed content.

What’s happening. TikTok is introducing two new ad types for European campaigns:

  • Streaming Ads: AI-driven ads for streaming platforms that show personalized content based on user engagement. Formats include a four-title video carousel or a multi-title media card. With 80% of TikTok users saying the app influences their streaming choices, these ads can directly shape viewing decisions.
  • New Title Launch: Targets high-intent users using signals like genre preference and price sensitivity, helping marketers convert cultural moments into ticket sales, subscriptions, or event attendance.

Context. The rollout coincides with the 76th Berlinale International Film Festival, underscoring TikTok’s growing role in entertainment marketing. In 2025, an average of 6.5 million daily posts were shared about film and TV on TikTok, with 15 of the top 20 European box office films last year being viral hits on the platform.

Why we care. TikTok’s new AI-powered ad formats let streaming platforms and entertainment brands target users with highly personalized content, increasing the likelihood of engagement and conversions.

With 80% of users saying TikTok influences their viewing choices (according to TikTok data), these tools can directly shape audience behavior, helping marketers turn cultural moments into subscriptions, ticket sales, or higher viewership. It’s a chance to leverage TikTok’s viral influence for measurable campaign impact.

The bottom line. For entertainment marketers, TikTok’s AI-driven ad formats provide new ways to engage audiences, boost viewership, and turn trending content into measurable results.

Dig deeper. TikTok Adds New Ad Types for Entertainment Marketers

Meta adds Manus AI tools into Ads Manager

Meta Platforms is embedding newly acquired AI agent tech directly into Ads Manager, giving advertisers built-in automation tools for research and reporting as the company looks to show faster returns on its AI investments.

What’s happening. Some advertisers are seeing in-stream prompts to activate Manus AI inside Ads Manager.

  • Manus is now available to all advertisers via the Tools menu.
  • Select users are also getting pop-up alerts encouraging in-workflow adoption.
  • The feature rollout signals deeper integration ahead.

What is Manus. Manus AI is designed to power AI agents that can perform tasks like report building and audience research, effectively acting as an assistant within the ad workflow.

Why we care. Manus AI brings AI-powered automation directly into Meta’s Ads Manager, making tasks like report building, audience research, and campaign analysis faster and more efficient.

Meta is currently prioritizing tying AI investment to measurable ad performance, giving advertisers new ways to optimize campaigns and potentially gain a competitive edge by testing workflow efficiencies early.

Between the lines. Meta is under pressure to demonstrate practical value from its aggressive AI spending. Advertising remains its clearest path to monetization, and embedding Manus into everyday ad tools offers a direct way to tie AI investment to performance gains.

Zoom out. The move aligns with CEO Mark Zuckerberg’s push to weave AI across Meta’s product stack. By positioning Manus as a performance tool for advertisers, Meta is betting that workflow efficiencies will translate into stronger ad results — and a clearer AI revenue story.

The bottom line. For advertisers, Manus adds another layer of built-in automation worth testing. Early adopters may uncover time savings and optimization gains as Meta continues expanding AI inside its ad ecosystem.

Google shifts Lookalike to AI signals in Demand Gen

A core targeting lever in Google Demand Gen campaigns is changing. Starting March 2026, Lookalike audiences will act as optimization signals — not hard constraints — potentially widening reach and leaning more heavily on automation to drive conversions.

What’s happening. Per an update to Google’s help documentation, Lookalike segments in Demand Gen are moving from strict similarity-based targeting to an AI-driven suggestion model.

  • Before: Advertisers selected a similarity tier (narrow, balanced, broad), and campaigns targeted users strictly within that Lookalike pool.
  • After: The same tiers act as signals. Google’s system can expand beyond the Lookalike list to reach users it predicts are likely to convert.

Between the lines. This effectively reframes Lookalikes from a fence to a compass. Instead of limiting delivery to a defined cohort, advertisers are feeding intent signals into Google’s automation and allowing it to search for performance outside preset boundaries.

How this interacts with Optimized Targeting. The new Lookalike-as-signal approach resembles Optimized Targeting — but it doesn’t replace it.

  • When advertisers layer Optimized Targeting on top, Google says the system may expand reach even further.
  • In practice, this stacks multiple automation signals, increasing the algorithm’s freedom to pursue lower CPA or higher conversion volume.

Opt-out option. Advertisers who want to preserve legacy behavior can request continued access to strict Lookalike targeting through a dedicated opt-out form. Without that request, campaigns will default to the new signal-based model.

Why we care. This update changes how much control advertisers will have over who their ads reach in Google Demand Gen campaigns. Lookalike audiences will no longer strictly limit targeting — they’ll guide AI expansion — which can significantly affect scale, CPA, and overall performance.

It also signals a broader shift toward automation, similar to trends driven by Meta Platforms. Advertisers will need to test carefully, rethink audience strategies, and decide whether to embrace the added reach or opt out to preserve tighter targeting.

Zoom out. The shift mirrors a broader industry trend toward AI-first audience expansion, similar to moves by Meta Platforms over the past few years. Platforms are steadily trading granular manual controls for machine-led optimization.

Why Google is doing this. Digital marketer Dario Zannoni sees two reasons why Google is making the change:

  • Strict Lookalike targeting can cap scale and constrain performance in conversion-focused campaigns.
  • Maintaining high-quality similarity models is increasingly complex, making broader automation more attractive.

The bottom line. For performance marketers, this is another step toward automation-centric buying. While reduced control may be uncomfortable, comparable platform changes have often produced performance gains in mainstream use cases. Expect a new testing cycle as advertisers measure how expanded Lookalike signals affect CPA, reach, and incremental conversions.

First seen. This update was spotted by Zannoni, who shared his thoughts on LinkedIn.

Dig deeper. Use Lookalike segments to grow your audience

Google’s Jeff Dean: AI Search relies on classic ranking and retrieval

Jeff Dean says Google’s AI Search still works like classic Search: narrow the web to relevant pages, rank them, then let a model generate the answer.

In an interview on Latent Space: The AI Engineer Podcast, Google’s chief AI scientist explained how Google’s AI systems work and how much they rely on traditional search infrastructure.

The architecture: filter first, reason last. Visibility still depends on clearing ranking thresholds. Content must enter the broad candidate pool, then survive deeper reranking before it can be used in an AI-generated response. Put simply, AI doesn’t replace ranking. It sits on top of it.

Dean said an LLM-powered system doesn’t read the entire web at once. It starts with Google’s full index, then uses lightweight methods to identify a large candidate pool — tens of thousands of documents. Dean said:

  • “You identify a subset of them that are relevant with very lightweight kinds of methods. You’re down to like 30,000 documents or something. And then you gradually refine that to apply more and more sophisticated algorithms and more and more sophisticated sort of signals of various kinds in order to get down to ultimately what you show, which is the final 10 results or 10 results plus other kinds of information.”

Stronger ranking systems narrow that set further. Only after multiple filtering rounds does the most capable model analyze a much smaller group of documents and generate an answer. Dean said:

  • “And I think an LLM-based system is not going to be that dissimilar, right? You’re going to attend to trillions of tokens, but you’re going to want to identify what are the 30,000-ish documents that are with the maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the tasks that the user has asked me to do?”

Dean called this the “illusion” of attending to trillions of tokens. In practice, it’s a staged pipeline: retrieve, rerank, synthesize. Dean said:

  • “Google search gives you … not the illusion, but you are searching the internet, but you’re finding a very small subset of things that are relevant.”
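
A rough sketch of that staged retrieve-rerank-synthesize flow, in Python, with toy scoring functions standing in for Google's real signals (nothing below is Google's actual code; the pool sizes only echo the numbers Dean mentions):

def cheap_score(query, doc):
    # Stand-in for a lightweight relevance signal: simple term overlap.
    return len(set(query.split()) & set(doc.split()))

def rerank_score(query, doc):
    # Stand-in for a more expensive, more sophisticated ranking model.
    return cheap_score(query, doc) / (1 + len(doc.split()))

def retrieve(query, index, pool_size=30000, final_k=10):
    # Stage 1: lightweight methods narrow the full index to a large candidate pool.
    pool = sorted(index, key=lambda d: cheap_score(query, d), reverse=True)[:pool_size]
    # Stage 2: stronger ranking narrows that pool to the final handful of documents.
    finalists = sorted(pool, key=lambda d: rerank_score(query, d), reverse=True)[:final_k]
    # Stage 3: only these finalists reach the model that synthesizes the answer.
    return finalists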

Matching: from keywords to meaning. Nothing new here, but we heard another reminder that covering a topic clearly and comprehensively matters more than repeating exact-match phrases.

Dean explained how LLM-based representations changed how Google matches queries to content.

Older systems relied more on exact word overlap. With LLM representations, Google can move beyond the idea that particular words must appear on the page and instead evaluate whether a page — or even a paragraph — is topically relevant to a query. Dean said:

  • “Going to an LLM-based representation of text and words and so on enables you to get out of the explicit hard notion of particular words having to be on the page. But really getting at the notion of this topic of this page or this page paragraph is highly relevant to this query.”

That shift lets Search connect queries to answers even when wording differs. Relevance increasingly centers on intent and subject matter, not just keyword presence.

Query expansion didn’t start with AI. Dean pointed to 2001, when Google moved its index into memory across enough machines to make query expansion cheap and fast. Dean said:

  • “One of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general. Because if you don’t have the page in your index, you’re going to not do well.
  • “And then we also needed to scale our capacity because we were, our traffic was growing quite extensively. So we had a sharded system where you have more and more shards as the index grows, you have like 30 shards. Then if you want to double the index size, you make 60 shards so that you can bound the latency by which you respond for any particular user query. And then as traffic grows, you add more and more replicas of each of those.
  • “And so we eventually did the math that realized that in a data center where we had say 60 shards and 20 copies of each shard, we now had 1,200 machines with disks. And we did the math and we’re like, Hey, one copy of that index would actually fit in memory across 1,200 machines. So in 2001, we … put our entire index in memory and what that enabled from a quality perspective was amazing.”

Before that, adding terms was expensive because it required disk access. Once the index lived in memory, Google could expand a short query into dozens of related terms — adding synonyms and variations to better capture meaning. Dean said:

  • “Before, you had to be really careful about how many different terms you looked at for a query, because every one of them would involve a disk seek.
  • “Once you have the whole index in memory, it’s totally fine to have 50 terms you throw into the query from the user’s original three- or four-word query. Because now you can add synonyms like restaurant and restaurants and cafe and bistro and all these things.
  • “And you can suddenly start … getting at the meaning of the word as opposed to the exact semantic form the user typed in. And that was … 2001, very much pre-LLM, but really it was about softening the strict definition of what the user typed in order to get at the meaning.”
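
A toy illustration of that softening, using Dean's restaurant example (the synonym table below is invented for the sketch; Google's systems are far richer):

SYNONYMS = {"restaurant": ["restaurants", "cafe", "bistro"]}

def expand_query(query):
    # Start from the user's original three- or four-word query...
    terms = query.lower().split()
    expanded = list(terms)
    # ...and add related terms so matching targets meaning, not exact wording.
    for term in terms:
        expanded.extend(SYNONYMS.get(term, []))
    return expanded

print(expand_query("best restaurant near me"))
# ['best', 'restaurant', 'near', 'me', 'restaurants', 'cafe', 'bistro']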

That change pushed Search toward intent and semantic matching years before LLMs. AI Mode and Google’s other AI experiences continue that shift toward meaning-based retrieval, enabled by better systems and more compute.

Freshness as a core advantage. Dean said one of Search’s biggest transformations was update speed. Early systems refreshed pages as rarely as once a month. Over time, Google built infrastructure that can update pages in under a minute. Dean said:

  • “In the early days of Google, we were growing the index quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most.”

That improved results for news queries and affected the main search experience. Users expect current information, and the system is designed to deliver it. Dean said:

  • “If you’ve got last month’s news index, it’s not actually that useful.”

Google uses systems to decide how often to crawl a page, balancing how likely it is to change with how valuable the latest version is. Even pages that change infrequently may be crawled often if they’re important enough. Dean said:

  • “There’s a whole … system behind the scenes that’s trying to decide update rates and importance of the pages. So, even if the update rate seems low, you might still want to recrawl important pages quite often because the likelihood they change might be low, but the value of having updated is high.”
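
As a hedged illustration only (the weighting below is invented, not Google's actual scheduler), a crawl-priority heuristic balancing those two factors might look something like this:

def recrawl_priority(p_change, value_of_freshness):
    # p_change: estimated probability the page changed since the last crawl (0-1).
    # value_of_freshness: how much it matters to have the latest version (0-1).
    return 0.5 * p_change + 0.5 * value_of_freshness

# A rarely changing but important page still earns a meaningful crawl priority:
print(recrawl_priority(p_change=0.05, value_of_freshness=0.95))  # 0.5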

Why we care. AI answers don’t bypass ranking, crawl prioritization, or relevance signals. They depend on them. Eligibility, quality, and freshness still determine which pages are retrieved and narrowed. LLMs change how content is synthesized and presented — but the competition to enter the underlying candidate set remains a search problem.

The interview. Owning the AI Pareto Frontier — Jeff Dean

Why AI optimization is just long-tail SEO done right

If you look at job postings on Indeed and LinkedIn, you’ll see a wave of acronyms added to the alphabet soup as companies try to hire people to boost visibility on large language models (LLMs).

Some people are calling it generative engine optimization (GEO). Others call it answer engine optimization (AEO). Still others call it artificial intelligence optimization (AIO). I prefer large model answer optimization (LMAO).

I find these new acronyms a bit ridiculous because while many like to think AI optimization is new, it isn’t. It’s just long-tail SEO — done the way it was always meant to be done.

Why LLMs still rely on search

Most LLMs (e.g., GPT-4o, Claude 4.5, Gemini 1.5, Grok-2) are transformers trained to do one thing: predict the next token given all previous tokens.

AI companies train them on massive datasets from public web crawls, such as:

  • Common Crawl.
  • Digitized books.
  • Wikipedia dumps.
  • Academic papers.
  • Code repositories.
  • News archives.
  • Forums.

The data is heavily filtered to remove spam, toxic content, and low-quality pages. Full pretraining is extremely expensive, so companies run major foundation training cycles only every few years and rely on lighter fine-tuning for more frequent updates.

So what happens when an LLM encounters a question it can’t answer with confidence, despite the massive amount of training data?

AI companies use real-time web search and retrieval-augmented generation (RAG) to keep responses fresh and accurate, bridging the limits of static training data. In other words, the LLM runs a web search.

To see this in real time, many LLMs let you click an icon or “Show details” to view the process. For example, when I use Grok to find highly rated domestically made space heaters, it converts my question into a standard search query.
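
A simplified sketch of that loop is below; llm and search_web are placeholder callables for illustration, not any vendor's actual API:

def answer_with_rag(question, llm, search_web, k=5):
    # 1. The model rewrites the conversational prompt as a standard search query.
    query = llm(f"Rewrite as a concise web search query: {question}")
    # 2. A traditional search engine returns the top results for that query.
    results = search_web(query)[:k]
    # 3. The model answers using the retrieved pages as grounding context.
    context = "\n\n".join(r["snippet"] for r in results)
    return llm(f"Answer using only this context:\n{context}\n\nQuestion: {question}")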

Dig deeper: AI search is booming, but SEO is still not dead

The long-tail SEO playbook is back

Many of us long-time SEO practitioners have praised the value of long-tail SEO for years. But one main reason it never took off for many brands: Google.

As long as Google’s interface was a single text box, users were conditioned to search with one- and two-word queries. Most SEO revenue came from these head terms, so priorities focused on competing for the No. 1 spot for each industry’s top phrase.

Many brands treated long-tail SEO as a distraction. Some cut content production and community management because they couldn’t see the ROI. Most saw more value in protecting a handful of head terms than in creating content to capture the long tail of search.

Fast forward to 2026. People typing LLM prompts do so conversationally, adding far more detail and nuance than they would in a traditional search engine. LLMs take these prompts and turn them into search queries. They won’t stop at a few words. They’ll construct a query that reflects whatever detail their human was looking for in the prompt.

Suddenly, the fat head of the search curve is being replaced with a fat tail. While humans continue to go to search engines for head terms, LLMs are sending these long-tail search queries to search engines for answers.

While AI companies are coy about disclosing exactly who they partner with, most public information points to the following search engines as the ones their LLMs use most often:

  • ChatGPT – Bing Search.
  • Claude – Brave Search.
  • Gemini – Google Search.
  • Grok – X Search and its own internal web search tool.
  • Perplexity – Uses its own hybrid index.

Right now, humans conduct billions of searches each month on traditional search engines. As more people turn to LLMs for answers, we’ll see exponential growth in LLMs sending search queries on their behalf.

SEO is being reborn.

Dig deeper: Why ‘it’s just SEO’ misses the mark in the era of AI SEO

How to do long-tail SEO with help from AI

The principles of long-tail SEO haven’t changed much. It’s best summed up by Baseball Hall of Famer Wee Willie Keeler: “Keep your eye on the ball and hit ’em where they ain’t.”

Success has always depended on understanding your audience’s deepest needs, knowing what truly differentiates your brand, and creating content at the intersection of the two.

As straightforward as this strategy has been, few have executed it well, for understandable reasons.

Reading your customers’ minds is hard. Keyword research is tedious. Content creation is hard. It’s easy to get lost in the weeds.

Happily, there’s someone to help: your favorite LLM.

Here are a few best practices I’ve used to create strong long-tail content over the years, with a twist. What once took days, weeks, or even months, you can now do in minutes with AI.

1. Ask your LLM what people search for when looking for your product or service

The first rule of long-tail SEO has always been to get into your audience’s heads and understand their needs. That once required commissioning surveys and hiring research firms.

But for most brands and industries, an LLM can handle at least the basics. Here’s a sample prompt you can use.

Act as an SEO strategist and customer research analyst. You're helping with long-tail keyword discovery by modeling real customer questions.

I want to discover long-tail search questions real people might ask about my business, products, and industry. I’m not looking for mere keyword lists. Generate realistic search questions that reflect how people research, compare options, solve problems, and make decisions.

Company name: [COMPANY NAME]
Industry: [INDUSTRY]
Primary product/service: [PRIMARY PRODUCT OR SERVICE]
Target customer: [TARGET AUDIENCE]
Geography (if relevant): [LOCATION OR MARKET]

Generate a list of 75 – 100 realistic, natural-language search queries grouped into the following categories:

AWARENESS
• Beginner questions about the category
• Problem-based questions (pain points, frustrations, confusion)

CONSIDERATION
• Comparison questions (alternatives, competitors, approaches)
• “Best for” and use-case questions
• Cost and pricing questions

DECISION
• Implementation or getting-started questions
• Trust, credibility, and risk questions

POST-PURCHASE
• Troubleshooting questions
• Optimization and advanced/expert questions

EDGE CASES
• Niche scenarios
• Uncommon but realistic situations
• Advanced or expert questions

Guidelines:
• Write queries the way real people search in Google or ask AI assistants.
• Prioritize specificity over generic keywords.
• Include question formats, “how to” queries, and scenario-based searches.
• Avoid marketing language.
• Include emotional, situational, and practical context where relevant.
• Don't repeat the same query structure with minor variations.
• Each query should suggest a clear content angle.

Output as a clean bullet list grouped by category.

You can tweak this prompt for your brand and industry. The key is to force the LLM (and yourself) to think like a customer and avoid the trap of generating keyword lists that are just head-term variations dressed up as long-tail queries.

With a prompt like this, you move away from churning out “keyword ideas” and toward understanding real customer needs you can build useful content around.

Dig deeper: If SEO is rocket science, AI SEO is astrophysics

2. Use your LLM to analyze your search data

Most large brands and sites don’t realize they’ve been sitting on a treasure trove of user intelligence: on-site search data.

When customers type a query into your site’s search box, they’re looking for something they expect your brand to provide.

If you see the same searches repeatedly, it usually means one of two things:

  • You have the information, but users can’t find it.
  • You don’t have it at all.

In both cases, it’s a strong signal you need to improve your site’s UX, add meaningful content, or both.

There’s another advantage to mining on-site search data: it reveals the exact words your audience uses, not the terms your team assumes they use.

Historically, the challenge has been the time required to analyze it. I remember projects where I locked myself in a room for days, reviewing hundreds of thousands of queries line by line to find patterns — sorting, filtering, and clustering them by intent.

If you’ve done the same, you know the pattern. The first few dozen keywords represent unique concepts, but eventually you start seeing synonyms and variations.

All of this is buried treasure waiting to be explored. Your LLM can help. Here’s a sample prompt you can use:

You're an SEO strategist analyzing internal site search data.

My goal is to identify content opportunities from what users are searching for on my website – including both major themes and specific long-tail needs within those themes.

I have attached a list of site search queries exported from GA4. Please:

STEP 1 – Cluster by intent
Group the queries into logical intent-based themes.

STEP 2 – Identify long-tail signals inside each theme
Within each theme:
• Identify recurring modifiers (price, location, comparisons, troubleshooting, etc.)
• Identify specific entities mentioned (products, tools, features, audiences, problems)
• Call out rare but high-intent searches
• Highlight wording that suggests confusion or unmet expectations

STEP 3 – Generate content ideas
For each theme:
• Suggest 3 – 5 content ideas
• Include at least one long-tail content idea derived directly from the queries
• Include one “high-intent” content idea
• Include one “problem-solving” content idea

STEP 4 – Identify UX or navigation issues
Point out searches that suggest:
• Users cannot find existing content
• Misleading navigation labels
• Missing landing pages

Output format:
Theme:
Supporting queries:
Long-tail insights:
Content opportunities:
UX observations:

Again, customize this prompt based on what you know about your audience and how they search.

The detail matters. Many SEO practitioners stop at a prompt like “give me a list of topics for my clients.” A detailed prompt like the one above pushes the LLM beyond simple clustering to understand the intent behind the searches.

I used on-site search data because it’s one of the richest, most transparent, and most actionable sources. But similar prompts can uncover hidden value in other keyword lists, such as “striking distance” terms from Google Search Console or competitive keywords from Semrush.

Even better, if your organization keeps detailed customer interaction records (e.g., sales call notes, support tickets, chat transcripts), those can be more valuable. Unlike keyword datasets, they capture problems in full sentences, in the customer’s own words, often revealing objections, confusion, and edge cases that never appear in traditional keyword research.

3. Create great content

The next step is to create great content.

Your goal is to create content so strong and authoritative that it’s picked up by sources like Common Crawl and survives the intense filtering AI companies apply when building LLM training sets. Realistically, only pioneering brands and recognized authorities can expect to operate in this rarefied space.

For the rest of us, the opportunity is creating high-quality long-tail content that ranks at the top across search engines — not just Google, but Bing, Brave, and even X.

This is one area where I wouldn’t rely on LLMs, at least not to generate content from scratch.

Why?

LLMs are sophisticated pattern matchers. They surface and remix information from across the internet, even obscure material. But they don’t produce genuinely original thought.

At best, LLMs synthesize. At worst, they hallucinate.

Many worry AI will take their jobs. And it will — for anyone who thinks “great content” means paraphrasing existing authority sources and competing with Wikipedia-level sites for broad head terms. Most brands will never be the primary authority on those terms. That’s OK.

The real opportunity is becoming the authority on specific, detailed, often overlooked questions your audience actually has. The long tail is still wide open for brands willing to create thoughtful, experience-driven content that doesn’t already exist everywhere else.

We need to face facts. The fat head is shrinking. The land rush is now for the “fat tail.” Here’s what brands need to do to succeed:

Dominate searches for your brand

Search your brand name in a keyword tool like Semrush and review the long-tail variations people type into Google. You’ll likely find more than misspellings. You’ll see detailed queries about pricing, alternatives, complaints, comparisons, and troubleshooting.

If you don’t create content that addresses these topics directly — the good and the bad — someone else will. It might be a Reddit thread from someone who barely knows your product, a competitor attacking your site, a negative Google Business Profile review, or a complaint on Trustpilot.

When people search your brand, your site should be the best place for honest, complete answers — even and especially when they aren’t flattering. If you don’t own the conversation, others will define it for you.

The time for “frequently asked questions” is over. You need to answer every question about your brand—frequent, infrequent, and everything in between.

Go long

Head terms in your industry have likely been dominated by top brands for years. That doesn’t mean the opportunity is gone.

Beneath those competitive terms is a vast layer of unbranded, long-tail searches that have likely been ignored. Your data will reveal them.

Review on-site search, Google Search Console queries, customer support questions, and forums like Reddit. These are real people asking real questions in their own words.

The challenge isn’t finding questions to write about. It’s delivering the best answers — not one-line responses to check a box, but clear explanations, practical examples, and content grounded in real experience that reflects what sets your brand apart.

Dig deeper: Timeless SEO rules AI can’t override: 11 unshakeable fundamentals

Expertise is now a commodity: Lean into experience, authority, and trust

Publishing expert content still matters, but its role has changed. Today, anyone can generate “expert-sounding” articles with an LLM.

Whether that content ranks in Google is increasingly beside the point, as many users go straight to AI tools for answers.

As the “expertise” in E-E-A-T becomes table stakes, differentiation comes from what AI and competitors can’t easily replicate: experience, authority, and trust.

That means publishing:

  • Original insights and genuine thought leadership from people inside your company.
  • Real customer stories with measurable outcomes.
  • Transparent reviews and testimonials.
  • Evidence that your brand delivers what it promises.

This isn’t just about blog content. These signals should appear across your site — from your About page to product pages to customer support content. Every page should reinforce why a real person should trust your brand.

Stop paywalling your best content

I’m seeing more brands put their strongest content behind logins or paywalls. I understand why. Many need to protect intellectual property and preserve monetization. But as a long-term strategy, this often backfires.

If your content is truly valuable, the ideas will spread anyway. A subscriber may paraphrase it. An AI system may summarize it. A crawler may access it through technical workarounds. In the end, your insights circulate without attribution or brand lift.

When your best content is publicly accessible, it can be cited, linked to, indexed, and discussed. That visibility builds authority and trust over time.

In a search- and AI-driven ecosystem, discoverability often outweighs modest direct content monetization.

This doesn’t mean content businesses can’t charge for anything. It means being strategic about what you charge for. A strong model is to make core knowledge and thought leadership open while monetizing things such as:

  • Tools.
  • Community access.
  • Premium analysis or data.
  • Courses or certifications.
  • Implementation support.
  • Early access or deeper insights.

In other words, let your ideas spread freely and monetize the experience, expertise, and outcomes around them.

Stop viewing content as a necessary evil

I still see brands hiding content behind CSS “read more” links or stuffing blocks of “SEO copy” at the bottom of pages, hoping users won’t notice but search engines will.

Spoiler alert: they see it. They just don’t care.

Content isn’t something you add to check an SEO box or please a robot. Every word on your site must serve your customers. When content genuinely helps users understand, compare, and decide, it becomes an asset that builds trust and drives conversions.

If you’d be embarrassed for users to read your content, you’re thinking about it the wrong way. There’s no such thing as content that’s “bad for users but good for search engines.” There never was.

Embrace user-generated content

No article on long-tail SEO is complete without discussing user-generated content. I covered forums and Q&A sites in a previous article (see: The reign of forums: How AI made conversation king), and they remain one of the most efficient ways to generate authentic, unique content.

The concept is simple. You have an audience that’s already passionate and knowledgeable. They likely have more hands-on experience with your brand and industry than many writers you hire. They may already be talking about your brand offline, in customer communities, or on forums like Reddit.

Your goal is to bring some of those conversations onto your site.

User-generated content naturally produces the long-tail language marketing teams rarely create on their own. Customers:

  • Describe problems differently.
  • Ask unexpected questions.
  • Compare products in ways you didn’t anticipate.
  • Surface edge cases, troubleshooting scenarios, and real-world use cases that rarely appear in polished marketing copy.

This is exactly the kind of content long-tail SEO thrives on.

It’s also the kind of content AI systems and search engines increasingly recognize as credible because it reflects real experience rather than brand messaging many dismiss as inauthentic.

Brands that do this well don’t just capture long-tail traffic. They build trust, reduce support costs, and dominate long-tail searches and prompts.

In the age of AI-generated content, real human experience is one of the strongest differentiators.

The new SEO playbook looks a lot like the old one

For years, SEO has been shaped by the limits of the search box. Short queries and head terms dominated strategy, and long-tail content was often treated as optional.

LLMs are changing that dynamic. AI is expanding search, not eliminating it.

AI systems encourage people to express what they actually want to know. Those detailed prompts still need answers, and those answers come from the web.

That means the SEO opportunity is shifting from competing over a small set of keywords to becoming the best source of answers to thousands of specific questions.

Brands that succeed will:

  • Deeply understand their audience.
  • Publish genuinely useful content.
  • Build trust through real engagement and experience.

That’s always been the recipe for SEO success. But our industry has a habit of inventing complex tactics to avoid doing the simple work well.

Most of us remember doorway pages, exact match domains, PageRank sculpting, LSI obsession, waves of auto-generated pages, and more. Each promised an edge. Few replaced the value of helping users.

We’re likely to see the same cycle repeat in the AI era.

The reality is simpler. AI systems aren’t the audience. They’re intermediaries helping humans find trustworthy answers.

If you focus on helping people understand, decide, and solve problems, you’re already optimizing for AI — whatever you call it.

Dig deeper: Is SEO a brand channel or a performance channel? Now it’s both

Google Search Console AI-powered configuration rolling out

Over two months ago, Google began testing its AI-powered configuration tool. It lets you ask AI questions about your Google Search Console Performance report data, and it brings back answers for you. Well, Google is now rolling out this tool to everyone.

Google said on LinkedIn, “The Search Console’s new AI-powered configuration is now available to everyone!”

AI-powered configuration. AI-powered configuration “lets you describe the analysis you want to see in natural language. Your inputs are then transformed into the appropriate filters and settings, instantly configuring the report for you,” Google said.

Rolling out now. If you log in to your Search Console account and click on the Performance report, you may see a note at the top that says “New! Customize your Performance report using AI.”

When you click on it, the AI tool opens.

More details. As we reported earlier, Google said “The AI-powered configuration feature is designed to streamline your analysis by handling three key elements for you.”

  • Selecting metrics: Choose which of the four available metrics – Clicks, Impressions, Average CTR, and Average Position – to display based on your question.
  • Applying filters: Narrow down data by query, page, country, device, search appearance, or date range.
  • Configuring comparisons: Set up complex comparisons (like custom date ranges) without manual setup.

Why we care. This is only supported in the Performance report for Search results. It isn’t available for Discover or News reports, yet. Plus, it is AI, so the answers may not be perfect. But it can be fun to play with and get you thinking about things you may not have thought about yet.

So give it a try.

Rand Fishkin proved AI recommendations are inconsistent – here’s why and how to fix it

Rand Fishkin just published the most important piece of primary research the AI visibility industry has seen so far.

His conclusion – that AI tools produce wildly inconsistent brand recommendation lists, making “ranking position” a meaningless metric – is correct, well-evidenced, and long overdue.

But Fishkin stopped one step short of the answer that matters.

He didn’t explore why some brands appear consistently while others don’t, or what would move a brand from inconsistent to consistent visibility. That solution is already formalized, patent pending, and proven in production across 73 million brand profiles.

When I shared this with Fishkin directly, he agreed. The AI models are pulling from a semi-fixed set of options, and the consistency comes from the data. He just didn’t have the bandwidth to dig deeper, which is fair enough, but the digging has been done – I’ve been doing it for a decade.

Here’s what Fishkin found, what it actually means, and what the data proves about what to do about it.

Fishkin’s data killed the myth of AI ranking position

Fishkin and Patrick O’Donnell ran 2,961 prompts across ChatGPT, Claude, and Google AI, asking for brand recommendations across 12 categories. The findings were surprising for most.

Fewer than 1 in 100 runs produced the same list of brands, and fewer than 1 in 1,000 produced the same list in the same order. These are probability engines that generate unique answers every time. Treating them as deterministic ranking systems is – as Fishkin puts it – “provably nonsensical,” and I’ve been saying this since 2022. I’m grateful Fishkin finally proved it with data.

But Fishkin also found something he didn’t fully unpack. Visibility percentage – how often a brand appears across many runs of the same prompt – is statistically meaningful. Some brands showed up almost every time, while others barely appeared at all.

That variance is where the real story lies.

Fishkin acknowledged this but framed it as a better metric to track. The real question isn’t how to measure AI visibility, it’s why some brands achieve consistent visibility and others don’t, and what moves your brand from the inconsistent pile to the consistent pile.

That’s not a tracking problem. It’s a confidence problem.

AI systems are confidence engines, not recommendation engines

AI platforms – ChatGPT, Claude, Google AI, Perplexity, Gemini, all of them – generate every response by sampling from a probability distribution shaped by:

  • What the model knows.
  • How confidently it knows it.
  • What it retrieved at the moment of the query.

When the model is highly confident about an entity’s relevance, that entity appears consistently. When the model is uncertain, the entity sits at a low probability weight in the distribution – included in some samples, excluded in others – not because the selection is random but because the AI doesn’t have enough confidence to commit.

That’s the inconsistency Fishkin documented, and I recognized it immediately because I’ve been tracking exactly this pattern since 2015. 

  • City of Hope appearing in 97% of cancer care responses isn’t luck. It’s the result of deep, corroborated, multi-source presence in exactly the data these systems consume. 
  • The headphone brands at 55%-77% are in a middle zone – known, but not unambiguously dominant. 
  • The brands at 5%-10% have low confidence weight, and the AI includes them in some outputs and not others because it lacks the confidence to commit consistently. 

Confidence isn’t just about what a brand publishes or how it structures its content. It’s about where that brand stands relative to every other entity competing for the same query – a dimension I’ve recently formalized as Topical Position.

I’ve formalized this phenomenon as “cascading confidence” – the cumulative entity trust that builds or decays through every stage of the algorithmic pipeline, from the moment a bot discovers content to the moment an AI generates a recommendation. It’s the throughline concept in a framework I published this week.

Dig deeper: Search, answer, and assistive engine optimization: A 3-part approach

Every piece of content passes through 10 gates before influencing an AI recommendation

The pipeline is called DSCRI-ARGDW – discovered, selected, crawled, rendered, indexed, annotated, recruited, grounded, displayed, and won. That sounds complicated, but I can summarize it in a single question that repeats at every stage: How confident is the system in this content?

  • Is this URL worth crawling? 
  • Can it be rendered correctly? 
  • What entities and relationships does it contain? 
  • How sure is the system about those annotations? 
  • When the AI needs to answer a question, which annotated content gets pulled from the index? 

Confidence at each stage feeds the next. A URL from a well-structured, fast-rendering, semantically clean site arrives at the annotation stage with high accumulated confidence before a single word of content is analyzed. A URL from a slow, JavaScript-heavy site with inconsistent information arrives with low confidence, even if the actual content is excellent.

This is pipeline attenuation, and here’s where the math gets unforgiving. The relationship is multiplicative, not additive:

  • C_final = C_initial × ∏τᵢ

In plain English, the final confidence an AI system has in your brand equals the initial confidence from your entity home multiplied by the transfer coefficient at every stage of the pipeline. The entity home – the canonical web property that anchors your entity in every knowledge graph and every AI model – sets the starting confidence, and then each stage either preserves or erodes it. 

Maintain 90% confidence at each of 10 stages, and end-to-end confidence is 0.9¹⁰ = 35%. At 80% per stage, it’s 0.8¹⁰ = 11%. One weak stage – say 50% at rendering because of heavy JavaScript – drops the total from 35% to 19% even if every other stage is at 90%. One broken stage can undo the work of nine good ones.
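
The arithmetic above is easy to reproduce; here is a worked example of the multiplicative relationship (illustrative numbers only):

from math import prod

def end_to_end_confidence(stage_confidences, initial=1.0):
    # C_final = C_initial x the product of every stage's transfer coefficient.
    return initial * prod(stage_confidences)

print(end_to_end_confidence([0.9] * 10))          # ~0.35 -> 35%
print(end_to_end_confidence([0.8] * 10))          # ~0.11 -> 11%
print(end_to_end_confidence([0.9] * 9 + [0.5]))   # ~0.19 -> one weak stage drags it down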

This multiplicative principle isn’t new, and it doesn’t belong to anyone. In 2019, I published an article, How Google Universal Search Ranking Works: Darwinism in Search, based on a direct explanation from Google’s Gary Illyes. He described how Google calculates ranking “bids” by multiplying individual factor scores rather than adding them. A zero on any factor kills the entire bid, no matter how strong the other factors are.

Google applies this multiplicative model to ranking factors within a single system, and nobody owns multiplication. But what the cascading confidence framework does is apply this principle across the full 10-stage pipeline, across all three knowledge graphs.

The system provides measurable transfer coefficients at every transition and bottleneck detection that identifies exactly where confidence is leaking. The math is universal, but the application to a multi-stage, multi-graph algorithmic pipeline is the invention.

This complete system is the subject of a patent application I filed with the INPI titled “Système et procédé d’optimisation de la confiance en cascade à travers un pipeline de traitement algorithmique multi-étapes et multi-graphes.” It’s not a metaphor, it’s an engineered system with an intellectual lineage going back seven years to a principle a Google engineer confirmed to me in person.

Fishkin measured the output – the inconsistency of recommendation lists. But the output is a symptom, and the cause is confidence loss at specific stages of this pipeline, compounded across multiple knowledge representations.

You can’t fix inconsistency by measuring it more precisely. You can only fix it by building confidence at every stage.

The corroboration threshold is where AI shifts from hesitant to assertive

There’s a specific transition point where AI behavior changes. I call it the “corroboration threshold” – the minimum number of independent, high-confidence sources corroborating the same conclusion about your brand before the AI commits to including it consistently.

Below the threshold, the AI hedges. It says “claims to be” instead of “is,” it includes a brand in some outputs but not others, and the reason isn’t randomness but insufficient confidence.

The brand sits in the low-confidence zone, where inconsistency is the predictable outcome. Above the threshold, the AI asserts – stating relevance as fact, including the brand consistently, operating with the kind of certainty that produces City of Hope’s 97%.

My data across 73 million brand profiles places this threshold at approximately 2-3 independent, high-confidence sources corroborating the same claim as the entity home. That number is deceptively small because “high-confidence” is doing the heavy lifting – these are sources the algorithm already trusts deeply, including Wikipedia, industry databases, and authoritative media. 

Without those high-authority anchors, the threshold rises considerably because more sources are needed and each carries less individual weight. The threshold isn’t a one-time gate. Once crossed, the confidence compounds with every subsequent corroboration, which is why brands that cross it early pull further ahead over time, while brands that haven’t crossed it yet face an ever-widening gap.

Not identical wording, but equivalent conviction. The entity home states, “X is the leading authority on Y,” two or three independent, authoritative third-party sources confirm it with their own framing, and the AI encodes it as fact.

This fact is visible in my data, and it explains exactly why Fishkin’s experiment produced the results it did. In narrow categories like LA Volvo dealerships or SaaS cloud computing providers – where few brands exist and corroboration is dense – AI responses showed higher pairwise correlation. 

In broad categories like science fiction novels – where thousands of options exist and corroboration is thin – responses were wildly diverse. The corroboration threshold aligns with Fishkin’s findings.

Dig deeper: The three AI research modes redefining search – and why brand wins

Authoritas proved that fabricated entities can’t fool AI confidence systems

Authoritas published a study in December 2025 – “Can you fake it till you make it in the age of AI?” – that tested this directly, and the results confirm that Cascading Confidence isn’t just theory. Where Fishkin’s research shows the output problem – inconsistent lists – Authoritas shows the input side.

Authoritas investigated a real-world case where a UK company created 11 entirely fictional “experts” – made-up names, AI-generated headshots, faked credentials. They seeded these personas into more than 600 press articles across UK media, and the question was straightforward: Would AI models treat these fake entities as real experts?

The answer was absolute: Across nine AI models and 55 topic-based questions – “Who are the UK’s leading experts in X?” – zero fake experts appeared in any recommendation. Six hundred press articles, and not a single AI recommendation. That might seem to contradict a threshold of 2-3 sources, but it confirms it. 

The threshold requires independent, high-confidence sources, and 600 press articles from a single seeding campaign are neither independent – they trace to the same origin – nor high-confidence – press mentions sit in the document graph only.

The AI models looked past the surface-level coverage and found no deep entity signals – no entity home, no knowledge graph presence, no conference history, no professional registration, no corroboration from the kind of authoritative sources that actually move the needle.

The fake personas had volume, they had mentions, but what they lacked was cascading confidence – the accumulated trust that builds through every stage of the pipeline. Volume without confidence means inconsistent appearance at best, while confidence without volume still produces recommendations.

AI evaluates confidence — it doesn’t count mentions. Confidence requires multi-source, multi-graph corroboration that fabricated entities fundamentally can’t build.

AI citability concentration increased 293% in under two months

Authoritas used the weighted citability score, or WCS, a metric that measures how much AI engines trust and cite entities, calculated across ChatGPT, Gemini, and Perplexity using cross-context questions.

I have no influence over their data collection or their results. Fishkin’s methodology and Authoritas’ aren’t identical. Fishkin pinged the same query repeatedly to measure variance, while Authoritas tracks varied queries on the same topic. That said, the directional finding is consistent.

Their dataset includes 143 recognized digital marketing experts, with full snapshots from the original study by Laurence O’Toole and Authoritas in December 2025 and their latest measurement on Feb. 2. The pattern across the entire dataset tells a story that goes far beyond individual scores.

  • The top 10 experts captured 30.9% of all citability in December. By February, they captured 59.5% – a 92% increase in concentration in under two months.
  • The HHI, or Herfindahl-Hirschman Index, the standard measure of market concentration, rose from 0.026 to 0.104 – a 293% increase in concentration. This happened while the total expert pool widened from 123 to 143 tracked entities.

More experts are being cited, the field is getting bigger, and the top is pulling away faster. Dominance is compounding while the long tail grows.
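
For readers who want to check the concentration math, the Herfindahl-Hirschman Index is simply the sum of squared shares. The example shares below are invented for illustration, not the Authoritas data:

def hhi(shares):
    # shares: each entity's fraction of total citability, summing to 1.
    return sum(s ** 2 for s in shares)

# An even split across 143 experts would give a very low HHI:
print(hhi([1 / 143] * 143))                  # ~0.007
# One expert holding 30% while the rest split the remainder pushes it up sharply:
print(hhi([0.30] + [0.70 / 142] * 142))      # ~0.09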

This is cascading confidence at population scale. The experts who actively manage their digital footprint – clean entity home, corroborated claims, consistent narrative across the algorithmic trinity – aren’t just maintaining their position, they’re accelerating away from everyone else.

Each cycle of AI training and retrieval reinforces their advantage – confident entities generate confident AI outputs, which build user trust, which generate positive engagement signals, which further reinforce the AI’s confidence. It’s a flywheel, and once it’s spinning, it becomes very, very hard for competitors to catch up.

At the individual level, the data confirms the mechanism. I lead the dataset at a WCS of 23.50, up from 21.48 in December, a gain of +2.02. That’s not because I’m more famous than everyone else on the list.

It’s because we’ve been systematically building my cascading confidence for years – clean entity home, corroborated claims across the algorithmic trinity, consistent narrative, structured data, deep knowledge graph presence.

I’m the primary test case because I’m in control of all my variables – I have a huge head start. In a future article, I’ll dig into the details of the scores and why the experts have the scores they do.

The pattern across my client base mirrors the population data. Brands that systematically clean their digital footprint, anchor entity confidence through the entity home, and build corroboration across the algorithmic trinity don’t just appear in AI recommendations.

They appear consistently, their advantage compounds over time, and they exit the low-confidence zone to enter the self-reinforcing recommendation set.

Dig deeper: From SEO to algorithmic education: The roadmap for long-term brand authority

AI retrieves from three knowledge representations simultaneously, not one

AI systems pull from what I call the Three Graphs model – the algorithmic trinity – and understanding this explains why some brands achieve near-universal visibility while others appear sporadically.

  • The entity graph, or knowledge graph, contains explicit entities with binary verified edges and low fuzziness – either a brand is in, or it’s not.
  • The document graph, or search engine index, contains annotated URLs with scored and ranked edges and medium fuzziness.
  • The concept graph, or LLM parametric knowledge, contains learned associations with high fuzziness, and this is where the inconsistency Fishkin documented comes from.

When retrieval systems combine results from multiple sources – and they do, using mechanisms analogous to reciprocal rank fusion – entities present across all three graphs receive a disproportionate boost.

The effect is multiplicative, not additive. A brand that has a strong presence in the knowledge graph and the document index and the concept space gets chosen far more reliably than a brand present in only one.
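
For reference, standard reciprocal rank fusion is easy to sketch. The brands and rankings below are invented, and the 1 / (k + rank) formula is the textbook version; the author's point is that, however the fusion is actually implemented, presence across all three graphs compounds:

def reciprocal_rank_fusion(rankings, k=60):
    # rankings: one ranked list of entities per source (entity, document, concept graph).
    scores = {}
    for ranking in rankings:
        for rank, entity in enumerate(ranking, start=1):
            scores[entity] = scores.get(entity, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

entity_graph   = ["BrandA", "BrandB"]
document_graph = ["BrandA", "BrandC", "BrandB"]
concept_graph  = ["BrandA", "BrandD"]
print(reciprocal_rank_fusion([entity_graph, document_graph, concept_graph]))
# BrandA, present in all three graphs, accumulates the highest fused score.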

This explains a pattern Fishkin noticed but didn’t have the framework to interpret – why visibility percentages clustered differently across categories. The brands with near-universal visibility aren’t just “more famous,” they have dense, corroborated presence across all three knowledge representations. The brands in the inconsistent pool are typically present in only one or two. 

The Authoritas fake expert study confirms this from the negative side. The fake personas existed only in the document graph, press articles, with zero entity graph presence and negligible concept graph encoding. One graph out of three, and the AI treated them accordingly.

What I tell every brand after reading Fishkin’s data

Fishkin’s recommendations were cautious – visibility percentage is a reasonable metric, ranking position isn’t, and brands should demand transparent methodology from tracking vendors. All fair, but that’s analyst advice. What follows is practitioner advice, based on doing this work in production.

Stop optimizing outputs and start optimizing inputs

The entire AI tracking industry is fixated on measuring what AI says about you, which is like checking your blood pressure without treating the underlying condition. Measure if it helps, but the work is in building confidence at every stage of the pipeline, and that’s where I focus my clients’ attention from day one.

Start at the entity home

My experience clearly demonstrates that this single intervention produces the fastest measurable results. Your entity home is the canonical web property that should anchor your entity in every knowledge graph and every AI model. If it’s ambiguous, hedging, or contradictory with what third-party sources say about you, it is actively training AI to be uncertain. 

I’ve seen aligning the entity home with third-party corroboration produce measurable changes in bottom-of-funnel AI citation behavior within weeks, and it remains the highest ROI intervention I know.

Cross the corroboration threshold for the critical claims

I ask every client to identify the claims that matter most:

  • Who you are.
  • What you do.
  • Why you’re credible. 

Then, I work with them to ensure each claim is corroborated by at least 2-3 independent, high-authority sources. Not just mentioned, but confirmed with conviction. 

This is what flips AI from “sometimes includes” to “reliably includes,” and I’ve seen it happen often enough to know the threshold is real.

Dig deeper: SEO in the age of AI: Becoming the trusted answer

Build across all three graphs simultaneously

Knowledge graph presence (structured data, entity recognition), document graph presence (indexed, well-annotated content on authoritative sites), and concept graph presence (consistent narrative across the corpus AI trains on) all need attention. 

The Authoritas study showed exactly what happens when a brand exists in only one – the AI treats it accordingly.

Work the pipeline from Gate 1, not Gate 9

Most SEO and GEO advice operates at the display stage, optimizing what AI shows. But if your content is losing confidence at discovery, selection, rendering, or annotation, it will never reach display consistently enough to matter. 

I’ve watched brands spend months on display-stage optimization that produced nothing because the real bottleneck was three stages earlier, and I always start my diagnostic at the beginning of the pipeline, not the end.
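
A deliberately simple sketch of what starting at the first gate looks like in practice: walk the stages in order and stop at the first failure, rather than tuning display first. The stage names follow this article; each check is a stub you would replace with a real diagnostic.

```python
# Illustrative only: report the first pipeline stage that fails. The checks
# are stubs standing in for real diagnostics (log analysis, index coverage,
# render testing, entity extraction, answer monitoring).
def check_discovery():  return True   # is the content being crawled at all?
def check_selection():  return True   # is it being indexed and selected?
def check_rendering():  return False  # does the content render completely?
def check_annotation(): return True   # are entities and claims extracted?
def check_display():    return True   # does it surface in AI answers?

PIPELINE = [
    ("discovery", check_discovery),
    ("selection", check_selection),
    ("rendering", check_rendering),
    ("annotation", check_annotation),
    ("display", check_display),
]

for stage, check in PIPELINE:
    if not check():
        print(f"Bottleneck: {stage}")  # fix here before touching display
        break
else:
    print("All gates pass")
```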

Maintain it because the gap is widening

The WCS data across 143 tracked experts shows that AI citability concentration increased 293% in under two months. The experts who maintain their digital footprint are pulling away from everyone else at an accelerating rate. 

Starting now still means starting early, but waiting means competing against entities whose advantage compounds every cycle. This isn’t a one-time project. It’s an ongoing discipline, and the returns compound with every iteration.


Fishkin proved the problem exists. The solution has been in production for a decade.

Fishkin’s research is a gift to the industry. He killed the myth of AI ranking position with data, he validated that visibility percentage, while imperfect, correlates with something real, and he raised the right questions about methodology that the AI tracking vendors should have been answering all along.

But tracking AI visibility without understanding why visibility varies is like tracking a stock price without understanding the business. The price is a signal, and the business is the thing.

AI recommendations are inconsistent when AI systems lack confidence in a brand. They become consistent when that confidence is built deliberately, through:

  • The entity home.
  • Corroborated claims that cross the corroboration threshold.
  • Multi-graph presence.
  • Every stage of the pipeline that processes your content before AI ever generates a response.

This isn’t speculation, and the evidence comes from every direction.

The process behind this approach has been under development since 2015 and is formalized in a peer-review-track academic paper. Several related patent applications have been filed in France, covering entity data structuring, prompt assembly, multi-platform coherence measurement, algorithmic barrier construction, and cascading confidence optimization.

The dataset supporting the work spans 25 billion data points across 73 million brand profiles. In tracked populations, shifts in AI citability have been observed — including cases where the top 10 experts increased their share from 31% to 60% in under two months while the overall field expanded. Independent research from Authoritas reports findings that align with this mechanism.

Fishkin proved the problem exists. My focus over the past decade has been on implementing and refining practical responses to it.

This is the first article in a series. The second piece, “What the AI expert rankings actually tell us: 8 archetypes of AI visibility,” examines how the pipeline’s effects manifest across 57 tracked experts. The third, “The ten gates between your content and an AI recommendation,” opens the DSCRI-ARGDW pipeline itself.

Read more at Read More


Google Ads adds beta data source integrations to conversion settings

Google Ads is rolling out a beta feature that lets advertisers connect external data sources directly inside conversion action settings, tightening the link between first-party data and campaign measurement.

How it works. A new section in conversion action details — labeled “Get deeper insights about your customers’ behavior to improve measurement” — prompts advertisers to connect external databases to their Google tag.

  • Supported integrations include platforms like BigQuery and MySQL.
  • The goal is to enrich conversion metrics and improve performance signals.
  • The feature appears in a highlighted prompt within data attribution settings.
  • Rollout is gradual and currently marked as Beta.

Why we care. Direct integrations could reduce friction in syncing offline or backend data with ad measurement. This beta from Google Ads makes it easier to connect first-party data directly to conversion tracking, which can improve measurement accuracy and campaign optimization.

By integrating sources like BigQuery or MySQL, brands can feed richer customer data into their signals, helping offset data loss from privacy changes. In practical terms, better data in means smarter bidding, clearer attribution, and potentially stronger ROI.
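
For teams that already keep backend conversions in a warehouse, the data side is familiar territory. Below is a hedged sketch using the google-cloud-bigquery Python client to pull recent conversion records; the project, dataset, table, and column names are assumptions, and this illustrates the data being connected, not Google's beta interface itself.

```python
# Illustrative sketch: pulling backend conversion data from BigQuery with the
# google-cloud-bigquery client. Project, dataset, table, and column names are
# placeholders; Google's beta handles the actual connection inside
# conversion action settings.
from google.cloud import bigquery

client = bigquery.Client(project="my-ads-project")  # placeholder project

query = """
    SELECT order_id, user_email_hash, conversion_value, conversion_time
    FROM `my-ads-project.crm.offline_conversions`
    WHERE conversion_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
"""

for row in client.query(query).result():
    print(row.order_id, row.conversion_value)
```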

Between the lines. Embedding data connections inside conversion settings — rather than requiring separate pipelines — makes advanced measurement more accessible to everyday advertisers, not just enterprise teams.

Zoom out. As ad platforms compete on measurement accuracy, native data integrations are becoming a key differentiator, especially for brands investing heavily in proprietary customer data.

Read more at Read More


How to create a persona GPT for SEO audience research

In a perfect world, you could call up a top customer to pick their brain about a piece of content. But in reality, it can be extremely difficult and time-consuming to conduct audience interviews every time you need to tackle a new topic or refresh an old piece.

A few years ago, content marketing was simpler – keyword intent and quality content were enough to rank at the top of Google’s SERP and earn clicks. But in the new era of AI, expectations are different.

Audience research has become critical. However, some companies may not have the resources to perform it.

One way to better understand your target audience is to create a custom GPT in ChatGPT, configured with your persona research. These aren’t replacements for audience research or interviews, but they can help you quickly identify what might be missing or wrong in your content. 

Below, I’ll explain how GPTs work so you can use them for audience research.

Perform audience research

As the SEO landscape evolves, audience research is one of your strongest tools for understanding the “why” behind search intent.

Here are several easy-to-use methods and tools to get you started on research. 

  • SparkToro: Search by website, interest, or specific URL to segment different audience types. Research can be in-depth or give an overview of your audience. 
  • Review mining: Create automations through various tools and scrape reviews of your company or competitors to see what users are saying, and then analyze them. What does your target customer like? Why did they like it? What didn’t they like? Why?
  • Listen to calls and review leads: Sit in on sales team interactions with customers to hear their questions in real time and understand what led up to a call with a particular client.

Dig deeper: How to do audience research for SEO

Create a customer persona

After completing your research, create a persona – a representation of your target audience. Figma and FigJam are strong tools for building them.

Your persona should include (see the structured sketch after this list):

  • Name, bio, and trait slider.
  • Interests, influences, goals, pain points.
  • User stories.
  • The emotional journey during and after.
  • Content focus, trigger words, and calls to action (CTAs).
  • Full customer journey steps.
  • Reviews that support data.
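
If you want to capture the persona in a structured form you can paste straight into a GPT later, here's a minimal sketch mirroring the fields above. Every value is a placeholder from an imaginary persona.

```python
# Hypothetical persona captured as structured data, mirroring the fields
# listed above. All values are placeholders.
persona = {
    "name": "Hank",
    "bio": "Operations manager at a mid-size agency, 12 years in the role.",
    "traits": {"risk_tolerance": 0.3, "tech_savviness": 0.7},  # slider values, 0-1
    "interests": ["workflow automation", "reporting"],
    "influences": ["peer recommendations", "industry newsletters"],
    "goals": ["cut time spent on status reports"],
    "pain_points": ["tool sprawl", "manual data entry"],
    "user_stories": ["As an ops manager, I want one dashboard for every project."],
    "emotional_journey": {"during": "skeptical", "after": "relieved"},
    "content_focus": ["proof points", "case studies"],
    "trigger_words": ["save time", "no new tools to learn"],
    "ctas": ["Book a demo"],
    "journey_steps": ["problem aware", "comparing options", "trial", "purchase"],
    "supporting_reviews": ["'Setup took one afternoon.' (placeholder quote)"],
}
```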


Create a custom GPT of your persona

Now that you have all your research and your persona, it’s time to make a GPT. 

First, log in to ChatGPT, then go to Explore GPTs in the sidebar. 

In the upper right corner, click on Create.

ChatGPT - Create

Once there, prompt ChatGPT with your audience research data and persona information. You can paste in screenshots of your data to make it easier. 

ChatGPT - Hank persona

Once all your data is in and a GPT is created, you can start talking to it. Under the Configure tab, you can use conversation starters to ask it about changes, updates, and copy.

ChatGPT conversation starters

These GPTs, like all AI models, aren’t 100% accurate. They don’t replace a real audience survey or interview, but they can help you quickly identify issues with a piece of content and how it might not connect with your audience. 

Here’s an example of an optimized page. GPT “Hank” helped make sure the section above the fold did what was intended. 

GPT Hank 1
GPT Hank 2
GPT Hank 3

Hank has said what’s working, what isn’t working, and where to improve.

But should you take his advice 100% of the time? Of course not. 

Still, the GPT helps you quickly identify issues you may have missed. That’s where the real benefit of using a GPT comes in.
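
If you'd rather run the same persona outside the ChatGPT interface, here's a hedged sketch using the OpenAI Python SDK: the persona research goes into the system message and the draft copy into the user message. The model name, persona text, and draft are placeholders, and this is an alternative to the custom GPT builder, not a description of it.

```python
# Illustrative sketch: reviewing draft copy against a persona with the
# OpenAI Python SDK. Persona text, model name, and draft are placeholders;
# the custom GPT builder in ChatGPT does the same job without code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

persona = (
    "You are 'Hank', a hypothetical persona built from audience research: "
    "a time-pressed operations manager who skims pages, distrusts hype, "
    "and needs proof before booking a demo."
)

draft_copy = "Our revolutionary platform transforms everything about your workflow."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": persona},
        {
            "role": "user",
            "content": "Review this above-the-fold copy. Tell me what's working, "
                       "what isn't, and why:\n\n" + draft_copy,
        },
    ],
)

print(response.choices[0].message.content)
```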

Dig deeper: 7 custom GPT ideas to automate SEO workflows

Ensure data from your GPT is accurate

Nothing analyzed or generated by AI is conclusive evidence. If you’re unsure your GPT is giving you accurate information, double-check by prompting it to provide evidence from the sources you gave it. 

GPT Hank - data accuracy

The GPT can correct itself if the information sounds off. When it does, again ask for evidence from the persona information you provided to double-check the new information. 

Update your persona-based GPT

You can always add more information to your GPT to make it more robust. 

To do this, go back to Explore GPTs in ChatGPT. 

Instead of Create, go to My GPTs in the top right-hand corner. 

Click on your persona. 

GPT Hank Haul

Click on Configure to update, add, or delete your current information.

GPT Hank Haul configuration

Remember that a persona is never a one-and-done project. The more you learn about your audience, the more information you should feed the GPT to keep it up to date.

Leverage persona GPTs for SEO content

Personas aren’t absolute, and AI can hallucinate. 

But both tools can still help you optimize content. 

Once you’re comfortable creating personas, you can build them for your general audience, specific segments, and individual campaigns.

SEO and marketing are always changing, so you can’t set a persona GPT and forget it. As you gain new audience insights or as intent shifts, update the GPT’s information and delete anything that’s no longer relevant.

When leveraged correctly, these tools can support your SEO efforts, helping you drive traffic and win more conversions.

Read more at Read More


Google Ads tool is automatically re-enabling paused keywords

Why Google Ads auctions now run on intent, not keywords

Some advertisers are reporting that a Google Ads system tool designed for low-activity bulk changes is automatically enabling paused keywords — a behavior many account managers say they haven’t seen before.

What advertisers are seeing. Activity logs show entries tied to Google’s “Low activity system bulk changes” tool that include actions enabling previously paused keywords. The log entries appear as automated bulk updates, with a visible “Undo” option.

Historically, the tool has been associated mainly with pausing inactive elements, not reactivating them.

What we don’t know. Google hasn’t publicly documented the behavior or clarified whether this is an intentional feature, a limited experiment, or a bug.

It’s also unclear what triggers the reactivation or how broadly the behavior is rolling out.

Why we care. Unexpected keyword reactivation can quietly alter campaign delivery, affecting budgets, pacing, and performance — especially in tightly controlled accounts where paused keywords are intentional.

For agencies and in-house teams, the change raises new concerns about automation overriding manual controls.

What advertisers should do now. Account managers may want to review change histories regularly, watch for unexpected keyword activations, and use undo functions quickly if unintended changes appear.

Until Google provides clarification, closer monitoring may be necessary for accounts relying heavily on paused keyword structures.
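
One way to keep that monitoring honest is to pull change history programmatically. Below is a hedged sketch using the official Google Ads Python client to list recent change events; the customer ID, date window, and status filter are assumptions, and field availability can vary by API version.

```python
# Hedged sketch: list recent change events to spot keyword status changes
# you didn't make. Customer ID, dates, and filtering logic are placeholders.
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage()  # reads google-ads.yaml credentials
ga_service = client.get_service("GoogleAdsService")

query = """
    SELECT
      change_event.change_date_time,
      change_event.change_resource_type,
      change_event.resource_change_operation,
      change_event.changed_fields,
      change_event.user_email
    FROM change_event
    WHERE change_event.change_date_time >= '2026-02-01'
      AND change_event.change_date_time <= '2026-02-14'
    LIMIT 1000
"""
# Note: change history only covers a recent window; adjust the dates accordingly.

for batch in ga_service.search_stream(customer_id="1234567890", query=query):
    for row in batch.results:
        event = row.change_event
        # Rough filter: surface anything that touched a status field
        if "status" in str(event.changed_fields):
            print(event.change_date_time, event.user_email,
                  event.resource_change_operation)
```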

First seen. The issue was first flagged by Performance Marketing Consultant Francesco Cifardi on LinkedIn.

Read more at Read More