Why ad approval is not legal protection

Most business owners assume that if an ad is approved by Google or Meta, it is safe. 

The thinking is simple: trillion-dollar platforms with sophisticated compliance systems would not allow ads that expose advertisers to legal risk.

That assumption is wrong, and it is one of the most dangerous mistakes an advertiser can make.

The digital advertising market operates on a legal double standard. 

A federal law known as Section 230 shields platforms from liability for third-party content, while strict liability places responsibility squarely on the advertiser. 

Even agencies have a built-in defense. They can argue that they relied on your data or instructions. You can’t.

In this system, you are operating in a hostile environment. 

  • The landlord (the platform) is immune. 
  • Bad tenants (scammers) inflate the cost of participation. 
  • And when something goes wrong, regulators come after you, the responsible advertiser, not the platform, and often not even the agency that built the ad.

Here is what you need to know to protect your business.

Note: This article was sparked by a recent LinkedIn post from Vanessa Otero regarding Meta’s revenue from “high-risk” ads. Her insights and comments in the post about the misalignment between platform profit and user safety prompted this in-depth examination of the legal and economic mechanisms that enable such a system.

The core danger: Strict liability explained

While the strict liability standard described here is specific to U.S. law (enforced by the FTC), the economic fallout of this system affects anyone buying ads on U.S.-based platforms.

Before we discuss the platforms, it is essential to understand your own legal standing. 

In the eyes of the FTC and state regulators, advertisers are generally held to a standard of strict liability.

What this means: If your ad makes a deceptive claim, you are liable. That’s it.

  • Intent doesn’t matter: You can’t say, “I didn’t mean to mislead anyone.”
  • Ignorance doesn’t matter: You can’t say, “I didn’t know the claim was false.”
  • Delegation doesn’t matter: You can’t say, “My agency wrote it,” or “ChatGPT wrote it.”

The law views the business owner as the “principal” beneficiary of the ad. 

You have a non-delegable duty to ensure your advertising is truthful. 

Even if an agency writes unauthorized copy that violates the law, regulators often fine the business owner first because you are the one profiting from the sale. 

You can try to sue your agency later to get your money back, but that is a separate battle you have to fund yourself.

The unfair shield: Why the platform doesn’t care

If you are strictly liable, why doesn’t the platform help you stay compliant? Because they don’t have to.

Section 230 of the Communications Decency Act provides that “interactive computer services” (platforms) are not to be treated as the publisher or speaker of third-party content.

  • The original intent: This law was passed in 1996 to allow the internet to scale, ensuring that a website wouldn’t be sued every time a user posted a comment. It was designed to protect free speech and innovation.
  • The modern reality: Today, that shield protects a business model. Courts have ruled that even if platforms profit from illegal content, they are generally not liable unless they actively contribute to creating the illegality.
  • The consequence: This creates a “moral hazard.” Because the platform faces no legal risk for the content of your ads, it has no financial incentive to build perfect compliance tools. Their moderation AI is built to protect the platform’s brand safety, not your legal safety.

The liability ladder: Where you stand

To understand how exposed you are, look at the legal hierarchy of the three main players in any ad campaign:

The platform (Google/Meta)

Legal status: Immune.

They accept your money to run the ad. Courts have ruled that providing “neutral tools” like keyword suggestions does not make the platform liable for the fraud that ensues. 

If the FTC sues, they point to Section 230 and walk away.

The agency (The creator)

Legal status: Negligence standard.

If your agency writes a false ad, they are typically only liable if regulators prove they “knew or should have known” it was false. 

They can argue they relied on your product data in good faith.

You (The business owner)

Legal status: Strict liability.

You are the end of the line. 

You can’t pass the buck to the platform (immune) or easily to the agency (negligence defense). 

If the ad is false, you pay the fine.

The hostile environment: Paying to bid against ‘ghosts’

The situation gets worse. 

Because platforms are immune, they allow “high-risk” actors into the auction that legitimate businesses, like yours, have to compete against.

A recent Reuters investigation revealed that Meta internally projected roughly 10% of its ad revenue (approximately $16 billion) would come from “integrity risks”: 

  • Scams.
  • Frauds.
  • Banned goods.

Worse, internal documents reveal that when the platform’s AI suspects an ad is a scam (but isn’t “95% certain”), it often fails to ban the advertiser.

Instead, it charges them a “penalty bid,” a premium price to enter the auction.

You are bidding against scammers who have deep illicit profit margins because they don’t ship real products (zero cost of goods sold). 

This allows them to bid higher, artificially inflating the cost per click (CPC) for every legitimate business owner. 

You are paying a fraud tax just to get your ad seen.
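To see why a zero-cost-of-goods bidder distorts prices, consider a toy second-price auction, where the winner pays the runner-up's bid. The bid values are illustrative assumptions; real ad auctions also weigh quality scores and predicted click-through rates.

```python
# Toy second-price auction: the winner pays the runner-up's bid.
# All bid values are illustrative assumptions, not real platform data.

def clearing_price(bids):
    """Return what the winning bidder pays: the second-highest bid."""
    top_two = sorted(bids, reverse=True)[:2]
    return top_two[1]

# A legitimate seller's maximum bid is capped by real margins;
# a scammer who ships nothing keeps nearly the full sale price,
# so their bid ceiling is far higher.
honest_only  = [2.00, 1.80, 1.50]       # CPC bids from legitimate advertisers
with_scammer = honest_only + [4.50]     # a zero-COGS scammer enters the auction

print(clearing_price(honest_only))   # 1.8 -- honest runner-up sets the price
print(clearing_price(with_scammer))  # 2.0 -- the scammer's entry raises it
```

Even when the scammer does not win, their presence raises the clearing price every legitimate bidder pays: the fraud tax described above.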



The new threat: The AI trap

The most urgent risk for 2026 is the rise of generative AI tools (like “Automatically Created Assets” or “Advantage+ Creative”).

Platforms are pushing you to let their AI rewrite your headlines and generate your images. Do not do this blindly.

If Google’s AI hallucinates a claim, you are strictly liable for it. 

However, the legal shield for platforms is cracking here.

In cases like Forrest v. Meta, courts are signaling that platforms may lose immunity if their tools actively help “develop” the illegality.

We have seen this before. 

In cases like CYBERsitter v. Google, courts refused to dismiss lawsuits when the platform was accused of “developing” the illegal content rather than just hosting it. 

If the AI writes the lie, the platform is arguably the “developer,” which pierces their initial immunity shield.

This liability extends to your entire website. 

By default, Google’s Performance Max campaigns have “Final URL Expansion” turned on. 

This gives their bot permission to crawl any page on your domain, including test pages or joke pages, and turn them into live ads. 

Google’s Terms of Service state that the “Customer is solely responsible” for all assets generated, meaning the bot’s mistake is legally your fault.

Be cautious of programs that blur the line. 

Features like the “Google Guaranteed” badge can create exposure for deceptive marketing. 

Because the platform is no longer a neutral host but is vouching for the business (“Guaranteed”), regulators can argue they have stepped out from behind the Section 230 shield.

By clicking “Auto-apply,” you are effectively signing a blank check for a robot to write legal promises on your behalf.

Risk reality check: Who actually gets investigated?

While strict liability is the law, enforcement is not random. The FTC and State Attorneys General have limited resources, so they prioritize based on harm and scale.

  • If you operate in dietary supplements (i.e., “nutra”), fintech (crypto and loans), or business opportunity offers, your risk is extreme. These industries trigger the most consumer complaints and the swiftest investigations.
  • If you are an HVAC tech or a local florist, you are unlikely to face an FTC probe unless you are engaging in massive fraud (e.g., fake reviews at scale). However, you are still vulnerable to competitor lawsuits and local consumer protection acts.
  • Investigations rarely start from a random audit. They start from consumer complaints (to the BBB or state attorneys general) or viral attention. If your aggressive ad goes viral for the wrong reasons, regulators will see it.

International intricacies

It is vital to remember that Section 230 is a U.S. anomaly. 

If you advertise globally, you’re playing by a different set of rules.

  • The European Union (DSA): The Digital Services Act forces platforms to mitigate “systemic risks.” If they fail to police scams, they face fines of up to 6% of global turnover.
  • The United Kingdom (Online Safety Act): The UK creates a “duty of care.” Senior managers at tech companies can face criminal liability for failing to prevent fraud.
  • Canada (Competition Bureau): Canadian regulators are increasingly aggressive on “drip pricing” and misleading digital claims, without a Section 230 equivalent to shield the platforms.
  • The “Brussels Effect”: Because platforms want to avoid EU fines, they often apply their strictest global policies to your U.S. account. You may be getting flagged in Texas because of a law written in Belgium.

The advertiser’s survival guide

Knowing the deck is stacked, how do you protect your business?

Adopt a ‘zero trust’ policy

Never hit “publish” on an auto-generated asset without human eyes on it first.

If you use an agency, require them to send you a “substantiation PDF” once a quarter that links every claim in your top ads to a specific piece of proof (e.g., a lab report, a customer review, or a supply chain document).

The substantiation file

For every claim you make (“Fastest shipping,” “Best rated,” “Loses 10lbs”), keep a PDF folder with the proof dated before the ad went live. 

This is your only shield against strict liability.

Audit your ‘auto-apply’ settings

Go into your ad accounts today. 

Turn off any setting that allows the platform to automatically rewrite your text or generate new assets without your manual review. 

Efficiency is not worth the liability.

Watch the legislation

Lawmakers are actively debating the SAFE TECH Act, which would carve out paid advertising from Section 230. 

While Congress continues to debate reform, you must protect your own business today.

The responsibility you can’t outsource

The digital ad market is a powerful engine for growth, but it is legally treacherous. 

Section 230 protects the platform. Your contract protects your agency. 

Nothing protects you except your own diligence.

That is why advertisers must stop conflating platform policy with the law. 

  • Platform policies are house rules designed to protect revenue. 
  • Truth in advertising is a federal mandate designed to protect consumers. 

Passing the first does not mean you are safe from the second.


How vibe coding is changing search marketing workflows

Vibe coding for search marketers

Search marketers are starting to build, not just optimize.

Across SEO and PPC teams, vibe coding and AI-powered development tools are shrinking the gap between idea and execution – from weeks of developer queues to hours of hands-on experimentation. 

These tools don’t replace developers, but they do let search teams create and test interactive content on their own timelines.

That matters because Google’s AI Overviews are pulling more answers directly into the SERP, leaving fewer clicks for brand websites.

In a zero-click environment, the ability to build unique, useful, conversion-focused tools is becoming one of the most practical ways search marketers can respond.

What is vibe coding?

Vibe coding is a way of building software by directing AI systems through natural language rather than writing most of the code by hand. 

Instead of working line by line, the builder focuses on intent – what the tool should do, how it should look, and how it should respond – while the AI handles implementation.

The term was popularized in early 2025 by OpenAI co-founder Andrej Karpathy, who described a loose, exploratory style of building where ideas are tested quickly, and code becomes secondary to outcomes. 

His framing captured both the appeal and the risk: AI makes it possible to build functional tools at speed, but it also encourages shortcuts that can lead to fragile or poorly understood systems.

Andrej Karpathy on X

Since then, a growing ecosystem of AI-powered development platforms has made this approach accessible well beyond engineering teams. 

Tools like Replit, Lovable, and Cursor allow non-developers to design, deploy, and iterate on web-based tools with minimal setup. 

The result is a shift in who gets to build – and how quickly ideas can move from concept to production.

That speed, however, doesn’t remove the need for judgment. 

Vibe coding works best when it’s treated as a craft, not a shortcut. 

Blindly accepting AI-generated changes, skipping review, or treating tools as disposable experiments creates technical debt just as quickly as it creates momentum. 

Mastering vibe coding means learning how to guide, question, and refine what the AI produces – not just “see stuff, say stuff, run stuff.”

This balance between speed and discipline is what makes vibe coding relevant for search marketers, and why it demands more than curiosity to use well.

Vibe coding vs. vibe marketing

Vibe coding should not be confused with vibe marketing. 

AI no-code tools used for vibe coding are designed to build things – applications, tools, and interactive experiences. 

AI automation platforms used for vibe marketing, such as n8n, Gumloop, and Make, are built to connect tools and systems together.

For example, n8n can be used to automate workflows between products, content, or agents created with Replit.

These automation platforms extend the value of vibe-coded tools by connecting them to systems like WordPress, Slack, HubSpot, and Meta.

Used together, vibe coding and AI automation allow search teams to both build and operationalize what they create.

Why vibe coding matters for search marketing

The search marketer's guide to vibe coding

In the future, AI-powered coding platforms will likely become a default part of the marketing skill set, much like knowing how to use Microsoft Excel is today. 

AI won’t take your job – but someone who knows how to use AI might. 

We recently interviewed candidates for a director of SEO and AI optimization role.

None of the people we spoke with were actively vibe coding or had used AI-powered development software for SEO or marketing.

That gap was notable. 

As more companies add these tools to their technology stacks and ways of working, hands-on experience with them is likely to become increasingly relevant.

Vibe coding lets search marketers quickly build interactive tools that are useful, conversion-focused, and difficult for Google to replicate through AI Overviews or other SERP features.

For paid search, this means teams can rapidly test interactive content ideas and drive traffic to them to evaluate whether they increase leads or sales. 

These platforms can also be used to build or enhance scripts, improve workflows, and support other operational needs.

For SEO, vibe coding makes it possible to add meaningful utility to pages and websites, which can increase engagement and encourage users to return. 

Returning visitors matter because, according to Google’s AI Mode patent, user state – which includes engagement – plays a significant role in how results are generated in AI Overviews and AI Mode.

Google’s AI Mode patent - Sheet 9 of 11

For agency founders, CEOs, CFOs, and other group leaders, these tools also make it possible to build custom internal systems to support how their businesses actually operate. 

For example, I used Replit to build an internal growth forecasting and management tool.

Internal growth forecasting and management tool - Replit

It allows me to create annual forecasts with assumptions, margins, and P&L modeling to manage the SEO and AI optimization group. 

There isn’t off-the-shelf software that fully supports those needs.

Vibe coding tools can also be cost-effective. 

In one case, I was quoted $55,000 and a three-month timeline to build an interactive calculator for a client. 

Using Replit, I built a more robust version in under a week on a $20-per-month plan.

Beyond efficiency, the most important reason to develop these skills is the ability to teach them. 

Helping clients learn how to build and adapt alongside you is increasingly part of the value agencies provide.

In a widely shared LinkedIn post about how agencies should approach AI, Chime CMO Vineet Mehra argued that agencies and holding companies need to move from “we’ll do it for you” to “we’ll build it with you.”

In-house teams aren’t going away, he wrote, so agencies need to partner with them by offering copilots, playbooks, and embedded pods that help brands become AI-native marketers.

Being early to adopt and understand vibe coding can become a competitive advantage. 

Used well, it allows teams to navigate a zero-click search environment while empowering clients and strengthening long-term working relationships – the kind that make agencies harder to replace.

Top vibe coding platforms for search marketers

There are many vibe coding platforms on the market, with new ones continuing to launch as interest grows. Below are several leading options worth exploring.

AI development tools by experience level:

Google AI Studio (Intermediate)
  • Pros: Direct access to Google’s latest Gemini models; seamless integration with the Google ecosystem (Maps, Sheets, etc.); free tier available for experimentation.
  • Cons: Locked into Google’s ecosystem and Gemini models; limited flexibility compared to open platforms; smaller community and fewer resources than established tools.

Lovable (Beginner)
  • Pros: Rapid full-stack app generation from natural language; handles database setup automatically; minimal coding knowledge required.
  • Cons: Relatively new platform with less maturity; limited customization for complex applications; generated code may need refinement for production.

Figma Make (Intermediate)
  • Pros: Seamless design-to-code workflow within Figma; ideal for teams already using Figma; bridges the gap between designers and developers.
  • Cons: Requires a Figma subscription and ecosystem; newer tool with still-evolving features; code output may need developer review for production.

Replit (Intermediate)
  • Pros: All-in-one platform (code, deploy, host); strong integration capabilities with third-party tools; no local setup required.
  • Cons: Performance can lag compared to local development; free tier has significant limitations; fees can add up based on usage.

Cursor (Advanced)
  • Pros: Powerful AI assistance for experienced developers; works locally with your existing workflow; advanced code understanding and generation.
  • Cons: Steeper learning curve that requires coding knowledge; must be downloaded and installed, with a GitHub dependency for some features.

For beginners:

  • Lovable is the most user-friendly option for those with little coding experience. 
  • Figma Make is also intuitive and works well for teams already using Figma. 
  • Replit is relatively easy to use and does not require prior coding experience.

For developers, Replit and Cursor offer deeper tooling and are better suited for integrations with other systems, such as CRMs and CMS platforms.

Google AI Studio is broader in scope and offers direct connections to Google products, including Google Maps and Gemini, making it useful for teams working within Google’s ecosystem.

You should test several of these tools to find the one that best fits your needs. 

I prefer Replit, but I will be using Figma Make because our creative teams already work in Figma. 

Bubble is also worth exploring if you are new to coding, while Windsurf may be a better fit for more advanced users.

Practical SEO and PPC applications: What you can build today

There is no shortage of things you can build with vibe coding platforms. 

The more important question is what interactive content you should build – tools that do not already exist, solve a real problem, and give users a reason to return. 

Conversion focus matters, but usefulness comes first.

Common use cases include:

  • Lead generation tools
    • Interactive calculators, such as ROI estimators and cost analyzers.
    • Quiz funnels with email capture.
    • Free tools, including word counters and SEO analyzers.
  • Content optimization tools
    • Keyword density checkers.
    • Readability analyzers.
    • Meta title and description generators.
  • Conversion rate optimization
    • Product recommenders.
    • Personalization engines.
  • Data analysis and reporting
    • Custom analytics dashboards.
    • Rank tracking visualizations.
    • Competitor analysis scrapers, with appropriate ethical considerations.
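As one concrete sketch of the “free tools” idea in the list above, here is a minimal keyword density checker in Python. It is a hypothetical starting point, not a production analyzer; a vibe-coded version would wrap this logic in a UI and handle edge cases like stemming and stop words.

```python
# Minimal keyword density checker -- a hypothetical sketch of one of the
# free-tool ideas listed above, not a production-grade analyzer.
import re

def keyword_density(text, keyword):
    """Return the keyword's share of total words, as a percentage."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w == keyword.lower())
    return round(100 * hits / len(words), 2)

sample = "Vibe coding lets search marketers build tools. Vibe coding is fast."
print(keyword_density(sample, "vibe"))  # percentage of words matching "vibe"
```

A tool like this becomes lead-generation content when paired with an input form and an email capture step.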

Articles can only take you so far in a zero-click environment, where AI Overviews increasingly provide direct answers and absorb traffic. 

Interactive content should be an integral part of a modern search and content strategy, particularly for brands seeking to enhance visibility in both traditional and generative search engines, including ChatGPT. 

Well-designed tools can earn backlinks, increase time on site, drive repeat visits, and improve engagement signals that are associated with stronger search performance.

For example, we use AI development software as part of the SEO and content strategy for a client serving accounting firms and bookkeeping professionals. 

Our research led to the development of an AI-powered accounting ROI calculator designed to help accountants and bookkeeping firms understand the potential return on investment from using AI across different parts of their businesses.

The calculator addresses several core questions:

  • Why AI adoption matters for their firm.
  • Where AI can deliver the most impact.
  • What the expected ROI could be.

It fills a gap where clear answers did not previously exist and represents the kind of experience Google AI Overviews cannot easily replace.

AI adoption ROI calculator

The tool is educational by design. 

AI ROI calculator for accounting firms

It explains which tasks can be automated with AI, displays results directly on screen, forecasts a break-even point, and allows users to download a PDF summary of their results.
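The break-even forecast at the heart of such a calculator can be sketched in a few lines of Python. All figures below are hypothetical placeholders, not the client's actual model.

```python
# Sketch of the break-even logic an ROI calculator might use.
# All dollar figures are hypothetical placeholders.

def break_even_month(monthly_savings, upfront_cost, monthly_cost):
    """Return the first month when cumulative net savings cover the upfront cost."""
    cumulative = 0.0
    for month in range(1, 121):  # cap the forecast at 10 years
        cumulative += monthly_savings - monthly_cost
        if cumulative >= upfront_cost:
            return month
    return None  # never breaks even within the horizon

# Example: AI automation saving $2,000/mo, $9,000 setup, $500/mo in fees
print(break_even_month(2000, 9000, 500))  # month 6
```

The real tool layers inputs, on-screen charting, and PDF export on top of this kind of core calculation.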

AI ROI calculator features

AI development software has also enabled us to design additional calculators that deliver practical value to the client’s target audience by addressing problems they cannot easily solve elsewhere.



A 7-step vibe coding process for search marketers

Vibe coding works best when it follows a structured workflow. 

The steps below outline a practical process search marketers can use to plan, build, test, and launch interactive tools using AI-powered development platforms.

Step 1: Research and ideation

Run SERP analysis, competitor research, and customer surveys, and use audience research tools such as SparkToro to identify gaps where AI Overviews leave room for interactive tools. 

Include sales, PR, legal, compliance, and cybersecurity teams early in the process. 

That collaboration is especially important when building tools for clients. 

When possible, involve customers or target audiences during research, ideation, and testing.

Step 2: Create your content specification document

Create a content specification document to define what you want to build before you start. 

This document should outline functionality, inputs, outputs, and constraints to help guide the vibe coding software and reduce errors. 

Include as much training context as possible, such as brand colors, tone of voice, links, PDFs, and reference materials. 

The more detail provided upfront, the better the results.

Use this Interactive Content Specification Brief template, and review the instructions before getting started.

Step 3: Design before functionality

Begin with wireframes and front-end design before building functionality. 

Replit prompts for this approach during setup, and it helps reduce rework later. 

Getting the design close to final before moving into logic makes it easier to evaluate usability. 

Design changes can always be made later.

Step 4: Prompt like a product manager

After submitting the specification document, continue prompting to refine the build. 

Ask the AI why it made specific decisions and how changes affect the system. 

In practice, targeted questions lead to fewer errors and more predictable outcomes.


Step 5: Deploy and test

Deploy the tool to a test URL to confirm it behaves as expected.

If the tool will be embedded on other sites, test it in those environments as well. 

Security configurations can block API calls or integrations depending on the host site. 

I encountered this when integrating a Replit build with Klaviyo. 

After reviewing the deployment context, the issue was resolved.

Step 6: Update the content specification document

Have the AI update the content specification document to reflect the final version of what was built. 

This creates a record of decisions, changes, and requirements and makes future updates or rebuilds easier. 

Save this document for reference.

Step 7: Launch

Push the interactive content live using a custom domain or by embedding it on your site. 

Plan distribution and promotion alongside the launch. 

This is why involving PR, sales, and marketing teams from the beginning of the project matters.

They play a role in ensuring the content reaches the right audience.

The dark side of vibe coding and important watchouts

Vibe coding tools are powerful, but understanding their limitations is just as important as understanding their strengths. 

The main risks fall into three areas: 

  • Security and compliance.
  • Price creep.
  • Technical debt.

Security and compliance 

While impressive, vibe coding tools can introduce security gaps. 

AI-generated code does not always follow best practices for API usage, data encryption, authentication, or regulatory requirements such as GDPR or ADA compliance. 

Any vibe-coded tool should be reviewed by security, legal, and compliance professionals before launch, especially if it collects user data. 

Privacy-by-design principles should also be documented upfront in the content specification document.

These platforms are improving. 

For example, some tools now offer automated security scans that flag issues before deployment and suggest fixes. 

Even so, human review remains essential.

Price creep

Another common risk is what could be described as the “vibe coding hangover.” 

A tool that starts as a quick experiment can quietly become business-critical, while costs scale alongside usage. 

Monthly subscriptions that appear inexpensive at first can grow rapidly as traffic increases, databases expand, or additional API calls are required.

In some cases, self-hosting a vibe-coded project makes more sense than relying on platform-hosted infrastructure. 

Hosting independently can help control costs by avoiding per-use or per-visit charges.
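A back-of-envelope projection shows how per-use pricing creeps as traffic grows. The rates below are made-up assumptions, not any platform's actual pricing.

```python
# "Price creep" sketch: a flat base fee plus per-API-call charges.
# All rates are made-up assumptions for illustration.

def monthly_cost(visits, per_call_fee=0.002, calls_per_visit=3, base=25):
    """Estimated monthly bill as traffic grows."""
    return base + visits * calls_per_visit * per_call_fee

for visits in (1_000, 25_000, 200_000):
    print(f"{visits:>7} visits -> ${round(monthly_cost(visits), 2)}/mo")
```

Self-hosting replaces the per-call term with a roughly fixed infrastructure cost, which is why it often wins once traffic scales.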

Technical debt

Vibe coding can also create technical debt. 

Tools can break unexpectedly, leaving teams staring at code they no longer fully understand – a risk Karpathy highlighted in his original description of the approach. 

This is why “Accept all” should never be the default. 

Reviewing AI explanations, asking why changes were made, and understanding tradeoffs are critical habits.

Most platforms provide detailed change logs, version history, and rollback options, which makes it possible to recover when something breaks. 

Updating the content specification document at major milestones also helps maintain clarity as projects evolve.

Vibe coding is your competitive edge

AI Overviews and zero-click search are changing how value is created in search. 

Traffic is not returning to past norms, and competing on content alone is becoming less reliable. 

The advantage increasingly goes to teams that build interactive experiences Google cannot easily replicate – tools that require user input and deliver specific, useful outcomes.

Vibe coding makes that possible. 

The approach matters: start with research and a clear specification, design before functionality, prompt with intent, and iterate with discipline. 

Speed without structure creates risk, which is why understanding what the AI builds is as important as shipping quickly.

The tools are accessible. Lovable lowers the barrier to entry, Cursor supports advanced workflows, and Replit offers flexibility across use cases. 

Many platforms are free to start. The real cost is not testing what’s possible.

More importantly, vibe coding shifts how teams work together. 

Agencies and in-house teams are moving from “we’ll do it for you” to “we’ll build it with you.” 

Teams that develop this capability can adapt to a zero-click search environment while building stronger, more durable partnerships.

Build something. Learn from it. The competitive advantage is often one prompt away.


Localized SEO for LLMs: How Best Practices Have Evolved

Large language models (LLMs) like ChatGPT, Perplexity, and Google’s AI Overviews are changing how people find local businesses. These systems don’t just crawl your website the way search engines do. They interpret language, infer meaning, and piece together your brand’s identity across the entire web. If your local visibility feels unstable, this shift is one of the biggest reasons.

Traditional local SEO like Google Business Profile optimization, NAP consistency, and review generation still matter. But now you’re also optimizing for models that need better context and more structured information. If those elements aren’t in place, you fade from LLM-generated answers even if your rankings look fine. When you’re focusing on a smaller local audience, it’s essential that you know what you have to do.

Key Takeaways

  • LLMs reshape how local results appear by pulling from entities, schema, and high-trust signals, not just rankings.
  • Consistent information across the web gives AI models confidence when choosing which businesses to include in their answers.
  • Reviews, citations, structured data, and natural-language content help LLMs understand what you do and who you serve.
  • Traditional local SEO still drives visibility, but AI requires deeper clarity and stronger contextual signals.
  • Improving your entity strength helps you appear more often in both organic search and AI-generated summaries.

How LLMs Impact Local Search

Traditional local search results present options: maps, listings, and organic rankings. 

Search results for "Mechanic near Milwaukee."

LLMs don’t simply list choices. They generate an answer based on the clearest, strongest signals available. If your business isn’t sending those signals consistently, you don’t get included.

An AI overview for "Where can I find a good mechanic near Milwaukee?"

If your business information is inconsistent and your content is vague, the model is less likely to confidently associate you with a given search. That hurts visibility, even if your traditional rankings haven’t changed. As shown above, these LLM responses are often the first thing a searcher sees in Google, ahead of any organic listing. And that doesn’t account for the growing number of users who turn to LLMs like ChatGPT directly to answer their queries, never using Google at all.

How LLMs Process Local Intent

LLMs don’t use the same proximity-driven weighting as Google’s local algorithm. They infer local relevance from patterns in language and structured signals.

They look for:

  • Reviews that mention service areas, neighborhoods, and staff names
  • Schema markup that defines your business type and location
  • Local mentions across directories, social platforms, and news sites
  • Content that addresses questions in a city-specific or neighborhood-specific way

If customers mention that you serve a specific district, region, or neighborhood, LLMs absorb that. If your structured data includes service areas or specific location attributes, LLMs factor that in. If your content references local problems or conditions tied to your field, LLMs use those cues to understand where you fit. 

This is important because LLMs don’t use GPS or IP-level location at query time the way Google does. They rely on explicit location mentions and conversational context, plus at most a rough IP-derived location from the app, so their sense of proximity to the searcher is far less precise.

These systems treat structured data as a source of truth. When it’s missing or incomplete, the model fills the gaps and often chooses competitors with stronger signals.

Why Local SEO Still Matters in an AI-Driven World of Search

Local SEO is still foundational. LLMs still need data from Google Business Profiles, reviews, NAP citations, and on-site content to understand your business. 

NAP info from the better business bureau.

These elements supply the contextual foundation that AI relies on.

The biggest difference is the level of consistency required. If your business description changes across platforms or your NAP details don’t match, AI models sense uncertainty. And uncertainty keeps you out of high-value generative answers. If a user asks an LLM a specific branded query about you, a lack of detail can mean the model serves outdated or incorrect information about your business.

Local SEO gives you structure and stability. AI gives you new visibility opportunities. Both matter now, and both improve each other when done right.

Best Practices for Localized SEO for LLMs

To strengthen your visibility in both search engines and AI-generated results, your strategy has to support clarity, context, and entity-level consistency. These best practices help LLMs understand who you are and where you belong in local conversations.

Focus on Specific Audience Needs For Your Target Areas

Generic local pages aren’t as effective as they used to be. LLMs prefer businesses that demonstrate real understanding of the communities they serve.

Write content that reflects:

  • Neighborhood-specific issues
  • Local climate or seasonal challenges
  • Regulations or processes unique to your region
  • Cultural or demographic details

If you’re a roofing company in Phoenix, talk about extreme heat and tile-roof repair. If you’re a dentist in Chicago, reference neighborhood landmarks and common questions patients in that area ask.

The more local and grounded your content feels, the easier it is for AI models to match your business to real local intent.

Phrase and Structure Content In Ways Easy For LLMs to Parse

LLMs work best with content that is structured clearly. That includes:

  • Straightforward headers
  • Short sections
  • Natural-language FAQs
  • Sentences that mirror how people ask questions

Consumers type full questions, so answer full questions.

Instead of writing “Austin HVAC services,” address:
“What’s the fastest way to fix an AC unit that stops working in Austin’s summer heat?”

Google results for "What's the fastest way to fix an AC unit that stops working in Austin's summer heat?"

LLMs understand and reuse content that leans into conversational patterns. The more your structure supports extraction, the more likely the model is to include your business in summaries.

Emphasize Your Localized E-E-A-T Markers

LLMs evaluate credibility through experience, expertise, authority, and trust signals, just as humans do.

Strengthen your E-E-A-T through:

  • Case details tied to real neighborhoods
  • Expert commentary from team members
  • Author bios that reflect credentials
  • Community involvement or partnerships
  • Reviews that speak to specific outcomes

LLMs treat these details as proof you know what you’re talking about. When they appear consistently across your web presence, your business feels more trustworthy to AI and more likely to be recommended.

Use Entity-Based Markup

Schema markup is one of the clearest ways to communicate your identity to AI. LocalBusiness schema, service area definitions, department structures, product or service attributes—all of it helps LLMs recognize your entity as distinct and legitimate.

An example of schema markup.

Source

The more complete your markup is, the stronger your entity becomes. And strong entities show up more often in AI answers.
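As an illustrative sketch, here is how a LocalBusiness JSON-LD payload with service-area and location attributes could be assembled and serialized for a page template. All business details are hypothetical; the property names (`areaServed`, `geo`, `PostalAddress`, and so on) are standard schema.org vocabulary.

```python
import json

# Hypothetical example business; swap in your own NAP details.
local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Lakeside Auto Repair",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 N Water St",
        "addressLocality": "Milwaukee",
        "addressRegion": "WI",
        "postalCode": "53202",
        "addressCountry": "US",
    },
    "telephone": "+1-414-555-0100",
    # Service areas give the model explicit local-relevance signals.
    "areaServed": ["Milwaukee", "Wauwatosa", "Shorewood"],
    "geo": {"@type": "GeoCoordinates", "latitude": 43.0389, "longitude": -87.9065},
    "openingHours": "Mo-Fr 08:00-18:00",
}

# Serialize for embedding in a <script type="application/ld+json"> tag.
jsonld = json.dumps(local_business, indent=2)
print(jsonld)
```

The more attributes you can fill in truthfully, the fewer gaps the model has to guess at.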

Spread and Standardize Your Brand Presence Online

LLMs analyze your entire digital footprint, not just your site. They compare how consistently your brand appears across:

  • Social platforms
  • Industry directories
  • Local organizations
  • Review sites
  • News or community publications

If your name, address, phone number, hours, or business description differ between platforms, AI detects inconsistency and becomes less confident referencing you. It’s also important to keep more subjective factors, like your brand voice and value propositions, consistent across these platforms.
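A consistency audit can start with a simple NAP comparison. This is an illustrative sketch with hypothetical listings: it normalizes cosmetic differences (punctuation, "Street" vs. "St") so only real mismatches are flagged.

```python
import re

def normalize_nap(name: str, address: str, phone: str) -> tuple:
    """Normalize NAP fields so cosmetic differences don't count as mismatches."""
    norm_name = re.sub(r"[^a-z0-9]", "", name.lower())
    norm_addr = re.sub(r"\bstreet\b", "st", address.lower())
    norm_addr = re.sub(r"[^a-z0-9]", "", norm_addr)
    norm_phone = re.sub(r"\D", "", phone)[-10:]  # keep last 10 digits
    return (norm_name, norm_addr, norm_phone)

# Hypothetical listings pulled from different platforms.
listings = {
    "google": ("Lakeside Auto Repair", "123 N Water Street", "(414) 555-0100"),
    "yelp": ("Lakeside Auto Repair", "123 N Water St", "414-555-0100"),
    "bing": ("Lakeside Auto", "123 N Water St", "414.555.0100"),
}

normalized = {site: normalize_nap(*nap) for site, nap in listings.items()}
baseline = normalized["google"]
mismatches = [site for site, nap in normalized.items() if nap != baseline]
print(mismatches)  # → ['bing'] -- the Bing listing uses a different business name
```

In practice you would feed this from a citation-tracking export, but even a spreadsheet run through a check like this surfaces the inconsistencies that erode AI confidence.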

One thing you may not be aware of: ChatGPT uses Bing’s index, so Bing Places is a priority for building your presence. ChatGPT won’t necessarily mirror how Bing displays results in the search engine, but it draws on the same data. Apple Maps, Google Maps, and Waze should also carry your NAP info.

Standardization builds authority. Authority increases visibility.

Use Localized Content Styles Like Comparison Guides and FAQs

LLMs excel at interpreting content formats that break complex ideas into digestible pieces.

Comparison guides, cost breakdowns, neighborhood-specific FAQs, and troubleshooting explainers all translate extremely well into AI-generated answers. These formats help the model understand your business with precision.

A comparison between two plumbing services.

If your content mirrors the structure of how people search, AI can more easily extract, reuse, and reference your insights.

Internal Linking Still Matters

Internal linking builds clarity, something AI depends on. It shows which concepts relate to each other and which topics matter most.

Connect:

  • Service pages to related location pages
  • Blog posts to the services they support
  • Local FAQs to broader category content

Strong internal linking helps LLMs follow the path of your expertise and understand your authority in context.

Tracking Results in the LLM Era

Rankings matter, but they no longer tell the full story. To understand your AI visibility, track:

  • Branded search growth
  • Google Search Console impressions
  • Referral traffic from AI tools
  • Increases in unlinked brand mentions
  • Review volume and review language trends

This is easier with the advent of dedicated AI visibility tools like Profound. 

The Profound Interface.

The goal is a method that reveals whether LLMs are pulling your business into their summaries, even when no clicks occur.

As zero-click results grow, these new metrics become essential.

FAQs

What is local SEO for LLMs?

It’s the process of optimizing your business so LLMs can recognize and surface you for local queries.

How do I optimize my listings for AI-generated results?

Start with accurate NAP data, strong schema, and content written in natural language that reflects how locals ask questions.

What signals do LLMs use to determine local relevance?

Entities, schema markup, citations, review language, and contextual signals such as landmarks or neighborhoods.

Do reviews impact LLM-driven searches?

Yes. The language inside reviews helps AI understand your services and your location.

Conclusion

LLMs are rewriting the rules of local discovery, but strong local SEO still supplies the signals these models depend on. When your entity is clear, your citations are consistent, and your content reflects the real needs of your community, AI systems can understand your business with confidence.

These same principles sit at the core of both effective LLM SEO and modern local SEO strategy. When you strengthen your entity, refine your citations, and create content grounded in real local intent, you improve your visibility everywhere—organic rankings, map results, and AI-generated answers alike.


AI search is growing, but SEO fundamentals still drive most traffic

AI search is growing, but SEO fundamentals still drive most traffic

Generative AI is everywhere right now. It dominates conference agendas, fills LinkedIn feeds, and is reshaping how many businesses think about organic search. 

Brands are racing to optimize for AI Overviews, build vector embeddings, map semantic clusters, and rework content models around LLMs.

What gets far less attention is a basic reality: for most websites, AI platforms still drive a small share of overall traffic. 

AI search is growing, no question. 

But in most cases, total referral sessions from all LLM platforms combined amount to only about 2% to 3% of the organic traffic Google alone delivers.

AI referral sessions vs Google organic clicks

Despite that gap, many teams are spending more time chasing AI strategies than fixing simple, high-impact SEO fundamentals that continue to drive measurable results. 

Instead of improving what matters most today, they are overinvesting in the future while underperforming in the present.

This article examines how a narrow focus on AI can obscure proven SEO tactics and highlights practical examples and real-world data showing how those fundamentals still move the needle today.

1. Quick SEO wins are still delivering outsized gains

In an era where everyone is obsessed with things like vector embeddings and semantic relationships, it’s easy to forget that small updates can have a big impact. 

For example, title tags are still one of the simplest and most effective SEO levers to pull. 

And they are often one of the on-page elements that most websites get wrong, either by targeting the wrong keywords, not including variations, or targeting nothing at all.

Just a few weeks ago, a client saw a win by simply adding “& [keyword]” to the existing title tag on their homepage. Nothing else was changed.

Keyword rankings shot up, as did clicks and impressions for queries containing that keyword.

Results - Updating existing title tags
Results - Updating existing title tags Oct-Nov 2025

This was all achieved simply by changing the title tag on one page. 

Couple that with other tactics, such as on-page copy edits, internal linking, and backlinking across multiple pages, and growth will continue. 

It may seem basic, but it still works. 

And if you only focus on advanced GEO strategies, you may overlook simple tactics that provide immediate, observable impact. 

2. Content freshness and authority still matter for competitive keywords

Another tactic that has faded from view with the rise of AI is what’s often called the skyscraper technique. 

It involves identifying a set of keywords and the pages that already rank for them, then publishing a materially stronger version designed to outperform the existing results.

It’s true that the web is saturated with content on similar topics, especially for keywords visible in most research tools.

But when a site has sufficient authority, a clear right to win, and content freshness, this approach can still be highly effective.

I’ve seen this work repeatedly. 

Here’s Google Search Console data from a recent article we published for a client on a popular, long-standing topic with many competing pages already ranking. 

The post climbed to No. 2 almost immediately and began generating net-new clicks and impressions.

Results - Skyscraper content

Why did it work? 

The site has strong authority, and much of the content ranking ahead of it was outdated and stale.

If you’re hesitant to publish the thousandth article on an established topic, that hesitation is understandable. 

This approach won’t work for every site. But ignoring it entirely can mean passing up clear, high-confidence wins like these.


3. User experience remains a critical conversion lever

Hype around AI-driven shopping experiences has led some teams to believe traditional website optimization is becoming obsolete. 

There is a growing assumption that AI assistants will soon handle most interactions or that users will convert directly within AI platforms without ever reaching a website.

Some of that future is beginning to take shape, particularly for ecommerce brands experimenting with features like Instant Checkout in ChatGPT.

But many websites are not selling products. 

And even for those that are, most brands still receive a significant volume of traffic from traditional search and continue to rely on calls to action and on-page signals to drive conversions.

It also makes little difference how a user arrives – via organic search, paid search, AI referrals, or direct visits. 

A fast site, a strong user experience, and a clear conversion funnel remain essential.

There are also clear performance gains tied to optimizing these elements. 

Here are the results we recently achieved for a client following a simple CTR test:

Results - CTR test

Brands that continue to invest in user experience and conversion rate optimization will outperform those that do not. 

That gap is likely to widen the longer teams wait for AI to fully replace the conversion funnel.

AI is reshaping search, but what works still matters

There is no dispute that AI is reshaping the search landscape. 

It’s changing user behavior, influencing SERPs, and complicating attribution models. 

The bigger risk for many businesses, however, is not underestimating AI but overcorrecting for it.

Traditional organic search remains the primary traffic source for most websites, and SEO fundamentals still deliver when executed well. 

  • Quick wins are real. 
  • Higher-quality content continues to be rewarded. 
  • User experience optimization shows no signs of becoming irrelevant. 

These are just a few examples of tactics that remain effective today.

Importantly, these efforts do not operate in isolation. 

Improving a website’s fundamentals can strengthen organic visibility while also supporting paid search performance and LLM visibility.

Staying informed about AI developments and planning for what’s ahead is essential. 

It should not come at the expense of the strategies that are currently driving measurable growth.


Why Google is deleting reviews at record levels

Why Google is deleting reviews at record levels

In 2025, Google is removing reviews at unprecedented rates – and it is not accidental.

Our industry analysis of 60,000 Google Business Profiles shows that deletions are being driven by a mix of:

  • Automated moderation.
  • Industry-wide risk factors.
  • Increased enforcement against incentivized reviews.
  • Local regulatory pressure.

Together, these forces have significant implications for businesses and local search visibility.

Review deletions are on the up globally

Weekly deleted reviews - Jan to Jul 2025

Data collected from tens of thousands of Google Business Profile listings across multiple countries by GMBapi.com show a sharp increase in deleted reviews between January and July 2025. 

The surge began accelerating toward the end of Q1 and gained momentum mid-year, with a growing share of monitored locations experiencing at least one review removal in a given week.

This is not limited to negative feedback. 

While one-star reviews continue to be taken down, five-star reviews now account for a sizable share of deletions. 

That pattern suggests Google is applying stricter enforcement, including on positive reviews, as it works to maintain authenticity and trust. 

More recently, Google has begun asking members of its Local Guide community whether businesses are incentivizing reviews, likely in response to AI-driven flags for suspicious activity.

Dig deeper: Google’s review deletions: Why 5-star reviews are disappearing

Not all industries are treated the same

Review deletion patterns vary significantly by business category.

Restaurants account for the highest volume of deleted reviews, followed by home services, brick-and-mortar retail, and construction. 

These categories generate large volumes of reviews, and removals occur across both recent and older submissions. 

That distribution points to ongoing enforcement, not isolated cleanup efforts.

By contrast, medical services, beauty, and professional services see fewer deletions overall. 

However, closer analysis reveals distinct and consistent patterns within those categories.

What review ratings reveal about industry bias

Top 10 meta categories- Deleted review rating mix

Looking at deleted reviews as a share of total removals within each category reveals distinct moderation patterns.

In restaurants and general retail, deleted reviews are relatively evenly distributed across one- to five-star ratings. 

By contrast, medical services and home services show a strong skew toward five-star review deletions, with far fewer removals in the middle of the rating spectrum. 

That imbalance suggests positive reviews in higher-risk or regulated categories face closer scrutiny, likely tied to concerns around trust, safety, and compliance.

These differences do not appear to stem from manual, category-specific policy decisions. 

Instead, they reflect how Google’s automated systems adjust enforcement based on perceived industry risk.

Dig deeper: 7 local SEO wins you get from keyword-rich Google reviews


Timing matters: Early vs. retroactive deletions

The age of a review plays a significant role in when it is removed.

In medical and home services, a large share of deleted reviews disappear within the first six months after posting. 

That timing points to early intervention by automated systems evaluating language, reviewer behavior, and other risk signals.

Restaurants and brick-and-mortar retail show a different pattern. 

Many deleted reviews in these categories are more than two years old, suggesting retroactive enforcement as detection systems improve or new suspicious patterns emerge. 

It may also reflect efforts to refresh older review profiles.

For businesses, this means reviews can disappear long after they are posted, often without warning.

Geography adds further complexity

Industry alone does not tell the full story. Location matters.

Top 10 meta categories by deleted reviews (stacked by rating)

In English-speaking markets such as the U.S., UK, Canada, and Australia, deleted reviews skew heavily toward five-star ratings. 

That trend aligns with increased AI-driven moderation aimed at reducing review spam and incentivized positive feedback.

Germany stands apart. 

Analysis of thousands of German business listings shows a higher share of deleted reviews are low-rated, and most are removed within weeks of posting. 

This pattern aligns with Germany’s strict defamation laws, which permit businesses to legally challenge negative reviews and require platforms to take prompt action upon notification.

In short:

  • AI-driven enforcement dominates in many English-speaking markets.
  • Legal takedowns play a much larger role in Germany.

What this means for local SEO and small business owners

The rise in review deletions creates two primary challenges.

  • Trust erosion: When legitimate reviews, whether positive or negative, disappear without explanation, confidence in review platforms begins to weaken.
  • Data distortion: Deleted reviews affect star ratings, performance benchmarks, and conversion signals that businesses rely on for local SEO and reputation management.

For SEO practitioners, small businesses, and multi-location brands, review monitoring is no longer optional. 

Understanding when, where, and which reviews are removed is now as important as generating them.

Dig deeper: Why Google reviews will power up your local SEO

The forces reshaping review visibility

Three developments are shaping review visibility:

  • More automated moderation, with AI evaluating reviews in real time and retroactively.
  • Greater legal influence in regions with strict defamation laws.
  • Increased reliance on third-party monitoring tools as businesses seek independent records of review deletion activity.

As moderation becomes more automated and more influenced by local law, sentiment alone will not guarantee review visibility. 

In local SEO, reviews – especially recent ones with detailed context – remain a critical authority signal for both users and search engines.

Staying ahead now means not only collecting new reviews, but also closely tracking and understanding removals. 

Reputation management increasingly requires attention on both fronts.


Image SEO for multimodal AI

Decoding the machine gaze: Image SEO for multimodal AI

For the past decade, image SEO was largely a matter of technical hygiene:

  • Compressing JPEGs to appease impatient visitors.
  • Writing alt text for accessibility.
  • Implementing lazy loading to keep LCP scores in the green. 

While these practices remain foundational to a healthy site, the rise of large, multimodal models such as ChatGPT and Gemini has introduced new possibilities and challenges.

Multimodal search embeds content types into a shared vector space. 

We are now optimizing for the “machine gaze.” 

Generative search makes most content machine-readable by segmenting media into chunks and extracting text from visuals through optical character recognition (OCR). 

Images must be legible to the machine eye. 

If an AI cannot parse the text on product packaging due to low contrast or hallucinates details because of poor resolution, that is a serious problem.

This article deconstructs the machine gaze, shifting the focus from loading speed to machine readability.

Technical hygiene still matters

Before optimizing for machine comprehension, we must respect the gatekeeper: performance. 

Images are a double-edged sword. 

They drive engagement but are often the primary cause of layout instability and slow speeds. 

The standard for “good enough” has moved beyond WebP. 

Once the asset loads, the real work begins.

Dig deeper: How multimodal discovery is redefining SEO in the AI era

Designing for the machine eye: Pixel-level readability

To large language models (LLMs), images, audio, and video are sources of structured data. 

They use a process called visual tokenization to break an image into a grid of patches, or visual tokens, converting raw pixels into a sequence of vectors.

This unified modeling allows AI to process “a picture of a [image token] on a table” as a single coherent sentence.
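A minimal, pure-Python sketch of that patching step follows. Real vision transformers use 14x14 or 16x16 pixel patches and learned embeddings; the 2x2 patches and the tiny numeric "image" here are purely illustrative.

```python
def patchify(image, patch_size):
    """Split a 2D grid of pixel values into non-overlapping square patches,
    read left-to-right, top-to-bottom -- the sequence order a transformer sees."""
    h, w = len(image), len(image[0])
    patches = []
    for top in range(0, h, patch_size):
        for left in range(0, w, patch_size):
            patch = [row[left:left + patch_size] for row in image[top:top + patch_size]]
            patches.append(patch)
    return patches

# A toy 4x4 grayscale "image".
image = [
    [0, 1, 2, 3],
    [4, 5, 6, 7],
    [8, 9, 10, 11],
    [12, 13, 14, 15],
]

tokens = patchify(image, 2)
print(len(tokens))  # → 4 patches, i.e., 4 visual tokens
print(tokens[0])    # → [[0, 1], [4, 5]] (the top-left patch)
```

Each patch becomes one "visual word" in the model's input sequence, which is why noisy or heavily compressed pixels degrade what the model "reads."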

These systems rely on OCR to extract text directly from visuals. 

This is where quality becomes a ranking factor.

If an image is heavily compressed with lossy artifacts, the resulting visual tokens become noisy.

Poor resolution can cause the model to misinterpret those tokens, leading to hallucinations in which the AI confidently describes objects or text that do not actually exist because the “visual words” were unclear.

Reframing alt text as grounding

For large language models, alt text serves a new function: grounding. 

It acts as a semantic signpost that forces the model to resolve ambiguous visual tokens, helping confirm its interpretation of an image.

As Zhang, Zhu, and Tambe noted:

  • “By inserting text tokens near relevant visual patches, we create semantic signposts that reveal true content-based cross-modal attention scores, guiding the model.” 

Tip: By describing the physical aspects of the image – the lighting, the layout, and the text on the object – you provide the high-quality training data that helps the machine eye correlate visual tokens with text tokens.

The OCR failure points audit

Search agents like Google Lens and Gemini use OCR to read ingredients, instructions, and features directly from images. 

They can then answer complex user queries. 

As a result, image SEO now extends to physical packaging.

Current labeling regulations – FDA 21 CFR 101.2 and EU 1169/2011 – allow type sizes as small as 4.5 pt to 6 pt, or 0.9 mm, on compact packaging. 

  • “In case of packaging or containers the largest surface of which has an area of less than 80 cm², the x-height of the font size referred to in paragraph 2 shall be equal to or greater than 0.9 mm.” 

While this satisfies the human eye, it fails the machine gaze. 

The minimum pixel resolution required for OCR-readable text is far higher. 

Character height should be at least 30 pixels. 

Low contrast is also an issue: the difference between text and background should be at least 40 grayscale values. 

Be wary of stylized fonts, which can cause OCR systems to mistake a lowercase “l” for a “1” or a “b” for an “8.”

Beyond contrast, reflective finishes create additional problems. 

Glossy packaging reflects light, producing glare that obscures text. 

Packaging should be treated as a machine-readability feature.

If an AI cannot parse a packaging photo because of glare or a script font, it may hallucinate information or, worse, omit the product entirely.
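The thresholds above can be wrapped into a simple pre-flight check. This is an illustrative audit helper using the figures discussed (30 px character height, 40 grayscale values of contrast), not an official tool.

```python
def ocr_readable(char_height_px: int, contrast_gray: int,
                 min_height: int = 30, min_contrast: int = 40) -> list:
    """Return the reasons an image's text is likely to fail OCR.
    An empty list means the text is likely machine-readable."""
    failures = []
    if char_height_px < min_height:
        failures.append(f"character height {char_height_px}px < {min_height}px")
    if contrast_gray < min_contrast:
        failures.append(f"contrast {contrast_gray} gray levels < {min_contrast}")
    return failures

print(ocr_readable(42, 90))  # → [] -- passes both checks
print(ocr_readable(18, 25))  # fails both height and contrast
```

Run something like this against measurements from your product photography before publishing, the same way you would run a Core Web Vitals check.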

Originality as a proxy for experience and effort

Originality can feel like a subjective creative trait, but it can be quantified as a measurable data point.

Original images act as a canonical signal. 

The Google Cloud Vision API includes a feature called WebDetection, which returns lists of fullMatchingImages – exact duplicates found across the web – and pagesWithMatchingImages. 

If your URL has the earliest index date for a unique set of visual tokens (i.e., a specific product angle), Google credits your page as the origin of that visual information, boosting its “experience” score.
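As a sketch, pulling those duplicate lists out of a WebDetection response looks like this. The response here is trimmed and hypothetical, but the `fullMatchingImages` and `pagesWithMatchingImages` field names match the Cloud Vision API's documented structure.

```python
# A trimmed, hypothetical WebDetection response from the Cloud Vision API.
response = {
    "webDetection": {
        "fullMatchingImages": [
            {"url": "https://example.com/products/blue-watch.jpg"},
            {"url": "https://reseller.example.net/img/834.jpg"},
        ],
        "pagesWithMatchingImages": [
            {"url": "https://reseller.example.net/blue-watch"},
        ],
    }
}

detection = response["webDetection"]
duplicates = [img["url"] for img in detection.get("fullMatchingImages", [])]
pages = [p["url"] for p in detection.get("pagesWithMatchingImages", [])]

print(len(duplicates))  # → 2 exact copies found across the web
print(pages[0])
```

Comparing these URL lists against your own index dates tells you whether you are being credited as the origin of the image or competing with your own syndicated copies.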

Dig deeper: Visual content and SEO: How to use images and videos


The co-occurrence audit

AI identifies every object in an image and uses their relationships to infer attributes about a brand, price point, and target audience. 

This makes product adjacency a ranking signal. To evaluate it, you need to audit your visual entities.

You can test this using tools such as the Google Vision API. 

For a systematic audit of an entire media library, you need to pull the raw JSON using the OBJECT_LOCALIZATION feature. 

The API returns object labels such as “watch,” “plastic bag” and “disposable cup.”

Google provides this example, where the API returns the following information for the objects in the image:

| Name | mid | Score | Bounds |
| --- | --- | --- | --- |
| Bicycle wheel | /m/01bqk0 | 0.89648587 | (0.32076266, 0.78941387), (0.43812272, 0.78941387), (0.43812272, 0.97331065), (0.32076266, 0.97331065) |
| Bicycle | /m/0199g | 0.886761 | (0.312, 0.6616471), (0.638353, 0.6616471), (0.638353, 0.9705882), (0.312, 0.9705882) |
| Bicycle wheel | /m/01bqk0 | 0.6345275 | (0.5125398, 0.760708), (0.6256646, 0.760708), (0.6256646, 0.94601655), (0.5125398, 0.94601655) |

Good to know: mid contains a machine-generated identifier (MID) corresponding to a label’s Google Knowledge Graph entry. 

The API does not know whether this context is good or bad. 

You do, so check whether the visual neighbors are telling the same story as your price tag.
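A hedged sketch of that co-occurrence audit: the response below is hypothetical (including the mid values), but the `localizedObjectAnnotations` structure with `name`, `mid`, and `score` fields mirrors the OBJECT_LOCALIZATION output shown above.

```python
# A trimmed, hypothetical OBJECT_LOCALIZATION response.
response = {
    "localizedObjectAnnotations": [
        {"name": "Watch", "mid": "/m/000001", "score": 0.93},
        {"name": "Plastic bag", "mid": "/m/000002", "score": 0.81},
        {"name": "Disposable cup", "mid": "/m/000003", "score": 0.34},
    ]
}

MIN_SCORE = 0.5  # ignore low-confidence detections

# The product's "visual neighbors": every confidently detected object.
neighbors = [
    ann["name"]
    for ann in response["localizedObjectAnnotations"]
    if ann["score"] >= MIN_SCORE
]
print(neighbors)  # → ['Watch', 'Plastic bag']
```

Here the confident neighbor of a watch is a plastic bag, which is exactly the kind of dissonant adjacency a heritage brand would want to reshoot.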

Lord Leathercraft blue leather watch band

By photographing a blue leather watch next to a vintage brass compass and a warm wood-grain surface, Lord Leathercraft engineers a specific semantic signal: heritage exploration. 

The co-occurrence of analog mechanics, aged metal, and tactile suede implies a persona of timeless adventure and old-world sophistication.

Photograph that same watch next to a neon energy drink and a plastic digital stopwatch, and the narrative shifts through dissonance. 

The visual context now signals mass-market utility, diluting the entity’s perceived value.

Dig deeper: How to make products machine-readable for multimodal AI search

Quantifying emotional resonance

Beyond objects, these models are increasingly adept at reading sentiment. 

APIs, such as Google Cloud Vision, can quantify emotional attributes by assigning confidence scores to emotions like “joy,” “sorrow,” and “surprise” detected in human faces. 

This creates a new optimization vector: emotional alignment. 

If you are selling fun summer outfits, but the models appear moody or neutral – a common trope in high-fashion photography – the AI may de-prioritize the image for that query because the visual sentiment conflicts with search intent.

For a quick spot check without writing code, use Google Cloud Vision’s live drag-and-drop demo to review the four primary emotions: joy, sorrow, anger, and surprise. 

For positive intents, such as “happy family dinner,” you want the joy attribute to register as VERY_LIKELY.

If it reads POSSIBLE or UNLIKELY, the signal is too weak for the machine to confidently index the image as happy.

For a more rigorous audit:

  • Run a batch of images through the API. 
  • Look specifically at the faceAnnotations object in the JSON response by sending a FACE_DETECTION feature request. 
  • Review the likelihood fields. 

The API returns these values as enums or fixed categories. 

This example comes directly from the official documentation:

          "rollAngle": 1.5912293,
          "panAngle": -22.01964,
          "tiltAngle": -1.4997566,
          "detectionConfidence": 0.9310801,
          "landmarkingConfidence": 0.5775582,
          "joyLikelihood": "VERY_LIKELY",
          "sorrowLikelihood": "VERY_UNLIKELY",
          "angerLikelihood": "VERY_UNLIKELY",
          "surpriseLikelihood": "VERY_UNLIKELY",
          "underExposedLikelihood": "VERY_UNLIKELY",
          "blurredLikelihood": "VERY_UNLIKELY",
          "headwearLikelihood": "POSSIBLE"

The API grades emotion on a fixed scale. 

The goal is to move primary images from POSSIBLE to LIKELY or VERY_LIKELY for the target emotion.

  • UNKNOWN (data gap).
  • VERY_UNLIKELY (strong negative signal).
  • UNLIKELY.
  • POSSIBLE (neutral or ambiguous).
  • LIKELY.
  • VERY_LIKELY (strong positive signal – target this).

Use these benchmarks

You cannot optimize for emotional resonance if the machine can barely see the human. 

If detectionConfidence is below 0.60, the AI is struggling to identify a face. 

As a result, any emotion readings tied to that face are statistically unreliable noise.

  • 0.90+ (Ideal): High-definition, front-facing, well-lit. The AI is certain. Trust the sentiment score.
  • 0.70-0.89 (Acceptable): Good enough for background faces or secondary lifestyle shots.
  • < 0.60 (Failure): The face is likely too small, blurry, side-profile, or blocked by shadows or sunglasses. 

While Google documentation does not provide this guidance, and Microsoft offers limited access to its Azure AI Face service, Amazon Rekognition documentation notes that:

  • “[A] lower threshold (e.g., 80%) might suffice for identifying family members in photos.”
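Putting the likelihood enum and the detectionConfidence gate together, here is a sketch of a batch audit. The field names mirror the faceAnnotations JSON shown earlier; the three annotations below are hypothetical.

```python
# Rank order of the Vision API's likelihood enums, low to high.
LIKELIHOOD_RANK = {
    "UNKNOWN": 0, "VERY_UNLIKELY": 1, "UNLIKELY": 2,
    "POSSIBLE": 3, "LIKELY": 4, "VERY_LIKELY": 5,
}

def audit_face(face: dict, target_emotion: str = "joy") -> str:
    """Classify one faceAnnotations entry against the benchmarks above."""
    if face["detectionConfidence"] < 0.60:
        return "unreliable"  # face too weakly detected; its sentiment is noise
    rank = LIKELIHOOD_RANK[face[f"{target_emotion}Likelihood"]]
    return "target" if rank >= LIKELIHOOD_RANK["LIKELY"] else "weak"

# Hypothetical annotations for three candidate hero images.
faces = [
    {"detectionConfidence": 0.93, "joyLikelihood": "VERY_LIKELY"},
    {"detectionConfidence": 0.88, "joyLikelihood": "POSSIBLE"},
    {"detectionConfidence": 0.41, "joyLikelihood": "VERY_LIKELY"},
]

print([audit_face(f) for f in faces])  # → ['target', 'weak', 'unreliable']
```

Only the first image both passes the confidence gate and hits the target emotion; the third looks joyful but the face detection is too weak to trust.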

Closing the semantic gap between pixels and meaning

Treat visual assets with the same editorial rigor and strategic intent as primary content. 

The semantic gap between image and text is disappearing. 

Images are processed as part of the language sequence.

The quality, clarity, and semantic accuracy of the pixels themselves now matter as much as the keywords on the page.


How to build search visibility before demand exists

How to build search visibility before demand exists

Discovery now happens before search demand is visible in Google.

In 2026, interest forms across social feeds, communities, and AI-generated answers – long before it shows up as keyword search volume. 

By the time demand appears in SEO tools, the opportunity to shape how a concept is understood has already passed.

This creates a problem for how search marketing research is typically done. 

Keyword tools, search volume, and Google Trends are lagging indicators. 

They reveal what people cared about yesterday, not what they are starting to explore now. 

In a landscape shaped by AI Overviews, social SERPs, and shrinking organic real estate, arriving late means competing inside narratives already defined by someone else.

Exploding Topics sits upstream of this shift. 

It helps surface emerging themes, behaviors, and conversations while they are still forming – before they harden into keywords, content clusters, and product categories. 

Used properly, it is not just a trend tool. It is a way to plan SEO, content, digital PR, and social-led search proactively.

This article breaks down how to use Exploding Topics to identify future entities, validate them through social search, and build search visibility before demand peaks.

Use Exploding Topics Trend Analytics to identify future entities – not just topics

Most marketers who use Exploding Topics already understand its value for content ideation, and we will cover that. 

But its bigger opportunity is identifying future entities – concepts that search engines and AI systems will soon recognize as distinct “things,” not just keyword variations.

This matters because modern search no longer operates purely on keywords. 

Google’s AI Overviews, ChatGPT, and other LLM-powered systems organize information around entities and relationships. 

Once an entity is established, the narrative around it hardens. 

Arrive late, and you are competing inside a story that has already been defined. 

Exploding Topics gives you visibility early enough to act before that happens.

Example: Weighted sleep masks

In Exploding Topics, you might notice “weighted sleep mask” rising steadily. 

Search volume remains low, and most keyword tools understate its importance. 

At a glance, it looks like a niche product trend that is easy to ignore.

Look closer, and the signals are stronger:

  • The phrase is consistent and repeatable.
  • Adjacent topics are rising alongside it, including deep pressure sleep, anxiety sleep tools, and vagus nerve stimulation.
  • Questions that signal intent are increasing.
  • Early discussion focuses on understanding the concept, not just buying a product.

This is the point where something shifts from being a product with an adjective to a named solution. In other words, it is becoming an entity.

The traditional play

Most brands wait until:

  • Search demand becomes obvious – acting in December 2025 rather than July 2025.
  • Competitors launch dedicated product pages.
  • Affiliates and publishers surface “best” and “vs.” content.

Only then do they create:

  • A category page.
  • A “What is a weighted sleep mask?” article or social-search activation.
  • SEO content designed to chase presence, such as FAQs, SERP features, and rankings.

By this point, the entity already exists, and the story around it has largely been written by someone else. 

In this case, NodPod is clearly dominating the entity.

Acting earlier, while the entity is forming

Using Exploding Topics well means acting earlier, while the entity is still being defined. Instead of starting with a product page, you:

  • Publish a clear, authoritative explanation of what a weighted sleep mask is.
  • Explain why deep pressure can help with sleep and anxiety.
  • Address who it is for – and who it is not.
  • Create supporting content that adds context, such as comparisons with weighted blankets or safety considerations.

This work can be done quickly and at scale through reactive PR and social search activations. 

You are not optimizing for keywords yet. 

You are teaching social algorithms, search engines, and AI systems what the concept means and associating your brand with that explanation from the start.

This is how brands can win at search in 2026 and beyond. 

This early, proactive approach:

  • Helps search systems understand new concepts faster.
  • Increases the chance your framing is reused in AI-generated answers.
  • Positions your brand as the authority on the entity – not just a seller within the conversation.

Dig deeper: Beyond Google: How to put a total search strategy together

Validate emerging entities through social search

Identifying an emerging entity is only the first step. 

The real risk is not being early to a conversation. It is being early to something that never takes off.

This is where many SEO teams stall. 

They wait for search volume and arrive too late, publish on instinct and hope demand follows, or freeze under uncertainty and do nothing.

There is a better middle ground: validate emerging entities through social search research and activation tests before scaling them into owned SEO and on-site experiences.

The division of labor is straightforward: Exploding Topics shows what might matter; social platforms tell you whether your audience actually cares.

How social search becomes your validation layer

Once Exploding Topics surfaces a potential emerging entity, the next step is not Keyword Planner. 

It is native search across platforms such as TikTok, Reddit, and YouTube, using either built-in trend tools or basic platform search.

You are looking for signals like:

  • Multiple creators independently explaining the same concept.
  • Comment sections filled with questions such as “Does this actually work?” or “Is this safe?”.
  • Repeated framing, metaphors, or demonstrations.
  • Early how-to or comparison content, even if production quality is low.

These signals point to intent. 

Curiosity is turning into understanding. 

Historically, this phase has always preceded measurable search demand.

Revisiting the weighted sleep mask example

After spotting “weighted sleep mask” in Exploding Topics, you might search for it on TikTok.

What you want to see is a lack of heavy brand advertising. 

Mature ecommerce pushes or TikTok Shop funnels suggest the market is already established. 

Instead, look for creators – not brand channels – testing products, discussing solutions, and exploring the underlying problem.

  • Focus on videos that explain pains, needs, and motivations, such as why pressure may help with anxiety. 
  • Check the comments for comparisons to other solutions. 
  • Look for questions raised in videos and comment threads.

Tools like Buzzabout.AI can help do this at scale through topic analysis and AI-assisted research.

These signals answer two critical questions:

  • Are people actively trying to understand this concept?
  • What language, framing, and objections are forming before SEO data exists?

That is validation.

Rethinking how SEO strategy gets built

This is where search strategy shifts. 

Instead of asking, “Is there enough volume to justify content creation?” the better question is, “Is there enough curiosity to justify building authority early?”

If social signals are weak:

  • Pause.
  • De-risk by testing with creators outside your owned channels.
  • Avoid heavy investment in content that takes months to rank.

If signals are strong:

  • Scale with confidence.
  • Work with creators and activate brand channels.
  • Invest in entity pages, hubs, FAQs, comparisons, and PLP optimization.

In this model, fast-moving social platforms become the testing layer.

SEO is not the experiment; it is the compounding layer.

Dig deeper: Social and UGC: The trust engines powering search everywhere


Editorial digital PR that earns links and LLM citations

Most digital PR still works backward.

  • A trend reaches mainstream awareness.
  • Journalists write about it.
  • Brands scramble to comment.
  • PR teams try to extract links from a story that already exists. 

The result is short-term coverage, diluted impact, and little lasting search advantage.

Exploding Topics makes it possible to reverse that dynamic by surfacing editorial narratives before they are obvious and positioning your brand as one of the sources that helps define them.

In 2026, this matters more than ever. 

Links still matter, but they are no longer the only outcome that counts. 

Brand mentions, explanations, and citations increasingly feed the systems behind AI Overviews, ChatGPT, Perplexity, and other LLM-driven discovery experiences.

Why early narratives outperform reactive PR

When a topic is everywhere, journalists are aggregating. When a topic is emerging, they are still asking questions.

Exploding Topics surfaces concepts at the stage where:

  • There is no consensus narrative.
  • Definitions are inconsistent.
  • Journalists are looking for clarity, not quotes.
  • “What is this?” stories have not yet been written.

This is the point where brands can move from commenting on a conversation to shaping it.

From trend-jacker to narrative owner

Instead of pitching “our brand’s take on X,” you lead with early signals you are seeing, why a concept is emerging now, and what it suggests about consumer behavior or the market.

The difference is subtle but important.

You are no longer reacting to coverage that already exists. 

You are creating the framing that journalists, publishers, and, eventually, AI systems reuse. 

LLMs do not learn from rankings alone. 

They learn from editorial context, repeated explanations, and how trusted publications describe and define emerging concepts over time.

Done consistently, this approach compounds. 

As your brand becomes associated with spotting and explaining emerging narratives early, you move from reactive commentary to trusted source. 

Journalists begin to recognize where useful insight comes from, and that trust carries into more established coverage later on. You are no longer pitching for inclusion. 

Your perspective is actively sought out.

The result is early narrative ownership and stronger access when mainstream coverage follows.

An editorial window before mainstream coverage

Before “weighted sleep mask” became a crowded ecommerce term in early 2025, there was a clear editorial window.

Journalists had not yet published stories asking:

  • “What is a weighted sleep mask?”
  • “Are weighted sleep masks safe?”
  • “Do they actually work for anxiety?” 

That was the opportunity.

A PR-led approach at this stage includes:

  • Supplying journalists with expert explanations of deep pressure and sleep.
  • Sharing early insight into why the product category is emerging.
  • Contextualizing it alongside weighted blankets and other anxiety tools.

The result is not just coverage. It connects PR to search, curiosity, and discovery by helping define the concept itself. 

That earns links, builds brand mentions, and signals authority around emerging entities that LLMs are more likely to cite and summarize over time.

Dig deeper: Why PR is becoming more essential for AI search visibility

Content roadmaps and briefs that don’t rely on search volume

Search volume is a poor starting point for content briefing.

It reflects interest only after a topic is established, language has stabilized, and the SERP is already crowded. 

Used as a primary input, it pushes teams to chase demand instead of building authority. 

That is why so many brands end up rewriting the same “What is X?” post year after year.

Better briefs start upstream. 

They use Exploding Topics to spot what is forming and social search to understand how people are trying to make sense of it.

Reframing the briefing process

The core shift is moving away from briefs built around keywords and volumes and toward briefs built around audience intent.

That means focusing on three things:

  • Problems people are beginning to articulate.
  • Concepts that are not yet clearly defined or are actively debated.
  • Language that is inconsistent, emotional, or exploratory.

When content is approached this way, the objective changes. 

It is no longer “create X to rank for Y.” 

It becomes “explain X so the audience does not experience Y.” 

That shift matters.

Designing content that compounds instead of expiring

The goal for SEO content teams in 2026 and beyond should be to brief content that defines a concept clearly. That includes:

  • Connecting it to adjacent ideas.
  • Comparing it to established solutions.
  • Answering questions within conversations that are still forming.

This does not always require written content. 

The same work can happen through social search activations or digital PR.

Approached this way, content grows into demand rather than chasing it.

Instead of being rewritten every time search volume changes, it evolves through updates, expansion, and, where possible, stronger internal linking. 

As interest grows, the content does not need replacing. It needs refining. 

This is the type of material AI and LLMs tend to reference – timely, clear, explanatory, and grounded in real questions.

Publication isn’t the end

Publishing and waiting for content to rank is no longer the end of the brief.

Teams need a clear plan for distribution and reuse.

For emerging topics, that means contributing insight in relevant Reddit threads, Discord communities, niche forums, and creator comment sections. 

Not to drop links, but to answer questions, share explanations, and test framing in public. 

Those conversations feed back into the content itself, improving clarity and increasing the likelihood that your explanation is the one others repeat.

With a social search activation approach, brands can scale messaging quickly by working with partners who interpret and distribute the brief in their own voice. 

When this works, SEO content stops being static and starts acting like a living reference point – one that contributes to culture and builds lasting brand recognition.

Dig deeper: Beyond SERP visibility: 7 success criteria for organic search in 2026

Where this leaves SEO in 2026

Search demand does not appear fully formed. 

It develops across social platforms, communities, and AI-driven discovery long before it registers as keyword volume.

  • Exploding Topics helps surface what is emerging. 
  • Social search shows whether people are trying to understand it. 
  • Digital PR shapes how those ideas are defined and cited. 
  • SEO compounds that work by reinforcing narratives that are already taking shape, rather than trying to test or invent them after the fact.

In this model, SEO is the layer that turns early insight and clear explanation into durable visibility across Google, social platforms, and AI-generated answers.

Search no longer starts on Google. The teams that act on that reality will influence what people search for next.


How to Do B2B Keyword Research Using Ubersuggest

When you target businesses rather than individual customers with your SEO tactics, different formulas come into play.

But the answer is always the same: “Content matters.”

This is especially true in the world of B2B, where conversions tend to take longer to occur, and customers typically have a deeper understanding of their specific niche.

The right keywords mean people can find you when searching for products and services like yours. And, in the modern marketplace, it’s all about personalization.

Choosing keywords worth targeting, meaning ones that will actually lead to conversions, means matching your research to your target audience. Gone are the days when you could simply focus on target keywords for a given industry. You need to get clear on who your ideal customer is (a customer persona is the best way), work backward from there, and conduct your keyword research accordingly.

Let’s see how you can use Ubersuggest to supercharge the conversions in your business.

Key Takeaways

  • Intent beats volume in B2B. Long-tail, comparison, integration, and pain-point keywords bring the highest-quality traffic because they mirror how real buyers evaluate solutions.
  • Your best keywords come from conversations, not tools. Sales teams and customers surface language and questions that keyword tools can’t predict.
  • B2B funnels require keyword mapping. TOFU, MOFU, and BOFU terms attract different stakeholders at different readiness levels. If you skip a stage, you break your pipeline.
  • Clusters win in B2B SEO. Organizing keywords into pillars and supporting clusters builds authority and guides buyers naturally through research and evaluation.
  • Keyword lists are only valuable when activated. Use them for on-page optimization, schema, content hubs, repurposed formats, and now LLMO to appear in AI-generated answers.

B2B vs B2C Keyword Research

With both B2B and B2C keyword research, your ideal user or customer should be at the center of what you do.

With B2B marketing, you focus on various decision-makers, like a team lead, manager, or even the CEO. These keywords are typically lower volume, but are higher value when you rank well for them.

With B2C marketing, the only decision-maker you’re worried about is the customer. Your marketing should be geared directly towards them, which makes understanding your target audience even more important. 

One of the challenges with B2B marketing is the sales cycle. Business-to-business conversions generally take longer than B2C. There’s a big difference between someone buying a pair of socks versus investing in a software suite for a whole company.

There are some parallels, but by and large, B2B buyers have different behavior. This is where accurate intent mapping comes into play. Understanding which keywords are ranking is only half the battle. Matching the intent behind the search for each query gives a much clearer picture of what will move your target customers further along their buyer journey, ultimately leading to a conversion. 

The good news is that in some ways, your best practices stay the same.

Know your product, then move to understand your market and competition to build the best B2B keyword list.

B2B keyword research helps you win over the decision-makers at hand, but this can be tricky.

There’s a different drive to the transaction. You need to take a different approach to earn their buyer intent.

To address their unique needs, you need to demonstrate your expertise not only in the niche but also in the specific pain points within that niche. That means picking the right keywords for your content and pages. To inspire your B2B keyword research, ask yourself:

  • What kind of businesses am I targeting? How big are their teams? Are they in industries I can flourish in?
  • Am I trying to reach businesses at the executive, manager, or employee level?
  • Of the decision-makers I’m targeting, what challenges are they up against? How is their current system failing them?

If you don’t keep these questions in mind during your keyword research, you’ll have a tough time reaching your B2B SEO goals.

Taking the time to get it right is critical to long-term growth.

What Makes the B2B Buyer Journey Unique (and how it impacts keywords)

B2B buyers don’t search like consumers. They ask more questions and involve more decision-makers. That means your keyword strategy needs to map to every stage of the funnel, because each stage comes with its own unique intent.

At the top of the funnel (TOFU), people are looking to understand the problem. Think keywords like “what is lead nurturing” or “how to qualify B2B leads.”

In the middle (MOFU), they’re evaluating options. That’s where terms like “best B2B CRM platforms” or “HubSpot vs. Salesforce” show up.

At the bottom (BOFU), they’re ready to buy. They’ll search for things like “HubSpot onboarding consultant” or “best CRM for B2B SaaS.”

If you skip a stage, you risk confusing or losing your audience. Match your keywords to where buyers actually are, not where you hope they are.
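The stage-to-keyword mapping above can be audited programmatically. A minimal sketch – the stage labels and example keywords come from the funnel described above; the gap check simply flags any stage with nothing mapped to it:

```python
# Map funnel stages to the keywords targeting them, then flag empty stages.
funnel = {
    "TOFU": ["what is lead nurturing", "how to qualify B2B leads"],
    "MOFU": ["best B2B CRM platforms", "HubSpot vs. Salesforce"],
    "BOFU": ["HubSpot onboarding consultant", "best CRM for B2B SaaS"],
}

# An empty stage means a broken pipeline: buyers at that readiness
# level have no content to land on.
gaps = [stage for stage, keywords in funnel.items() if not keywords]
print(gaps if gaps else "All funnel stages covered")
```

Running this against your real keyword map makes skipped stages visible before they cost you pipeline.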

How To Find High-Intent B2B Keywords That Actually Convert

To drive real leads, you need more than traffic. Here’s how to find keywords that match intent and move B2B buyers toward a decision.

Step 1. Interview Your Sales Team and Customers

If you want high-intent keywords, talk to the people on the front lines.

Your sales team knows exactly what questions prospects ask before they buy. They hear the same objections, pain points, and decision criteria repeatedly. That language? It’s keyword fuel. Ask them: What are the top questions you hear? What phrases come up in discovery calls? What signals buying intent?

Then talk to a few current customers. Ask what they Googled before they found you. What words did they use to describe their problem? Why did they choose you over a competitor?

These conversations don’t have to be formal. A quick 15-minute chat can uncover terms your audience actually uses that your keyword tool might miss.

Log every phrase, question, and pain point. You’ll use them later to validate topics and shape content that speaks directly to your buyer’s intent.

Step 2. Use Tools To Expand Your Keyword Set

Once you’ve got seed terms from sales and customers, plug them into keyword tools to scale.

Start with Ubersuggest or Semrush to find related phrases, autocomplete suggestions, and questions your audience is already searching for. 

AnswerThePublic is great for uncovering long-tail keywords phrased as real questions—perfect for B2B blog content and landing pages.

Focus on commercial-intent keywords, terms that suggest the searcher is in buying mode. Look for modifiers like “best,” “vs,” “top,” or “software for [industry].”

Don’t just chase volume. Check keyword difficulty to make sure you can rank, and look at CPC (cost per click) to gauge how valuable a keyword is to advertisers. High CPC usually means it’s converting for someone.

This is where you turn insights into opportunity. The right tools help you see the full landscape and find the gaps your competitors missed.

Step 3. Spy on Competitors (Especially in Niche B2B)

If your competitors are already ranking, reverse-engineer what’s working for them.

Tools like Semrush and Ahrefs let you plug in a competitor’s domain and see the exact keywords they rank for, along with positions, search volume, and traffic estimates. This gives you a fast snapshot of what’s driving their visibility.

Look for content gaps. Are there high-value keywords they missed? Are there topics they cover that you could go deeper on, with more data, better examples, or stronger CTAs?

In niche B2B markets, you won’t find millions of searches—but that’s the point. The right long-tail keyword with even 100 searches a month could drive qualified leads if the intent is strong and the competition is low.

Don’t copy what they’ve done. Use it as a launchpad. Then build something more useful, more specific, and more aligned with your buyer’s needs.

Step 4. Analyze Intent, Not Just Volume

In B2B, high search volume doesn’t always mean high value.

A keyword like “lead generation” might pull in thousands of searches, but it’s broad and packed with top-of-funnel traffic. Instead, go after long-tail keywords that signal real buying intent.

Look for terms like:

  • “SOC 2 vs ISO 27001” – These comparison searches show the buyer is actively evaluating solutions.
  • “Lead scoring software for SaaS” – This one’s specific, solution-aware, and vertical-focused. A perfect match for bottom-of-funnel content.

Intent > volume. That’s the rule.

Use keyword tools to filter by modifiers like “vs,” “best,” “alternatives,” or “[industry] software.” These often have lower volume, but they attract leads who are closer to buying and more likely to convert.
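If you export a keyword list from your tool of choice, the modifier filter above is easy to script. A minimal sketch – the modifier list and sample keywords are illustrative, taken from the examples in this section:

```python
# Flag keywords containing commercial-intent modifiers.
HIGH_INTENT_MODIFIERS = ("vs", "best", "alternatives", "software")

def is_high_intent(keyword: str) -> bool:
    """True when any intent modifier appears as a whole word."""
    tokens = keyword.lower().split()
    return any(modifier in tokens for modifier in HIGH_INTENT_MODIFIERS)

keywords = [
    "lead generation",                  # broad, top-of-funnel
    "hubspot vs salesforce",            # comparison intent
    "best b2b crm platforms",           # evaluation intent
    "lead scoring software for saas",   # solution-aware intent
]
print([k for k in keywords if is_high_intent(k)])  # all but "lead generation"
```

A real pass would also normalize punctuation (so “vs.” matches “vs”) before tokenizing, but the principle is the same: filter by intent signal first, volume second.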

Build your keyword strategy around relevance and readiness, not raw traffic. That’s how you attract the right people at the right time.

Step 5. Group Keywords Into Pillars and Clusters

Don’t just build a list, build a structure.

Once you’ve nailed down your keyword set, organize it into pillars and clusters. A pillar page targets a broad, high-value topic like “email marketing software.” Around it, you build supporting content, think clusters like “email automation for B2B,” “lead nurturing workflows,” and “best B2B email sequences.”

This approach does two things:

  1. It strengthens your SEO by signaling topical authority.
  2. It aligns with the B2B buyer journey, letting prospects go deeper as they move from problem-aware to solution-ready.

Each cluster targets a long-tail, intent-driven keyword and links back to the pillar. The result? Better rankings and clearer paths to conversion.

Use tools like Ubersuggest or Semrush’s keyword grouping to speed this up. Just make sure every piece has a purpose in your funnel.
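Before reaching for a tool, the pillar-and-cluster structure itself can be as simple as a dictionary keyed by pillar topic. A minimal sketch using the examples above (the data structure is illustrative, not any tool’s output format):

```python
# Pillar pages map to the cluster pages that support them.
content_map = {
    "email marketing software": [      # pillar: broad, high-value topic
        "email automation for B2B",    # supporting clusters, each targeting
        "lead nurturing workflows",    # a long-tail, intent-driven keyword
        "best B2B email sequences",
    ],
}

# Each cluster links back to its pillar; iterate to build that linking plan.
for pillar, clusters in content_map.items():
    for cluster in clusters:
        print(f"{cluster} -> links to pillar: {pillar}")
```

Even a plan this simple forces the question every piece must answer: which pillar does it support, and where does it link back to?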

B2B Keyword Types You Should Actually Focus On

Not all keywords are created equal, especially in B2B. Some attract the right audience, move them through the funnel, and convert. Others just bring “fluff traffic” that never turns into leads.

Here are the four keyword types that consistently deliver in B2B:

Comparisons

These are high-intent gold. When someone searches “HubSpot vs Salesforce” or “SOC 2 vs ISO 27001,” they’re in evaluation mode. They’re comparing options and looking for a clear winner.

Create content that breaks down the pros and cons honestly. Side-by-side features, pricing, integrations, and who it’s best for. This is where trust gets built and decisions get made.

Integrations

In B2B, tools rarely stand alone. That’s why keywords like “Slack integration with project management software” or “CRM that integrates with QuickBooks” pull in traffic that’s ready to act.

These searches signal product fit and technical alignment—key for conversion. If your product integrates with other tools, optimize for those terms.

Use-Case Specific

Broad keywords miss the mark. “Lead scoring software” is nice, but “lead scoring software for SaaS” is better. Even better? “lead scoring software for early-stage B2B SaaS.”

The more specific the use case, the higher the intent. Create content that addresses your audience’s specific needs and concerns.

Pain Point Phrases

These are often phrased as questions: “How to reduce churn in B2B SaaS” or “Why aren’t my sales qualified leads converting?” These aren’t just TOFU, they’re strong entry points for solution-aware buyers.

Targeting these keywords helps you show up early in the journey and guide buyers toward your solution.

What to Do After You Have Your Keywords?

Now what?

You know your keyword opportunities. It’s time to put them to work.

Use them to make on-page optimizations in the meta description or body copy.

In addition, implementing keywords in schema markup – such as FAQs or price listings for e-commerce – gives you a listing that is both better optimized and more useful. Use Ubersuggest or AnswerThePublic to pinpoint the questions your target decision-makers may have. (Hint: They’re already searching for them, and these tools will show you what they are.)

As far as working more B2B SEO keywords into your content, make sure the content is directly related to your existing target B2B keywords.

Another quick way to optimize for your target keywords is to structure your internal links in a way that creates content hubs on your site for pieces relevant to your B2B content strategy.

Zapier’s Remote Work Guide is a good example of a content hub touchpoint. The page acts as a content hub, with many “spokes” out to different resources around tools and tactics for the main subject: remote work.

Today, your B2B keyword strategy is more about being the source across search, AI, and voice.

Fortunately, you can also use your keyword list to guide large language model optimization (LLMO). Tools like ChatGPT, Gemini, and Claude often cite content when answering B2B queries. If your page is optimized for specific long-tail or question-based keywords, you increase the odds of being surfaced in AI-generated answers.

Using your keywords to shape new content formats is another smart move. Turn question-based terms into short-form video or slideshows. Repurposing like this builds topical authority across channels and sends strong signals back to your core site.

Finally, don’t let your keyword list sit in a spreadsheet. Plug it into your editorial calendar. Map keywords to specific goals, funnel stages, and audience segments. That’s how you turn SEO research into actual business growth.

FAQs

Does SEO work for B2B?

Yes, SEO is a valuable tactic to use to win over buyers. Good organic visibility throughout the sales funnel is a proven technique to drive growth and, in turn, increase interest.

Why is SEO important for B2B?

SEO generates valuable leads and makes it easier for potential buyers to find you. When they’re searching for products or services in relation to yours, you’re more likely to show up in their search results thanks to SEO tactics like using B2B SEO keywords. 

How do I create a B2B SEO strategy?


If you want a solid B2B SEO strategy, follow these quick tips:
1. Conduct B2B keyword research. (Hint: Use Ubersuggest to help you get valuable results.)
2. Understand what matters to your target decision-makers and nurture them through your sales funnel.
3. Optimize your site to target your ideal audience by updating aspects like meta descriptions and internal linking.
4. From the B2B keywords, formulate content to position yourself as the answer to your audience’s needs.
5. Promote your content and grow your audience and domain authority through backlinks.

Conclusion

Now that you know how to conduct B2B keyword research using Ubersuggest, you can unlock hidden opportunities for your brand.

Getting the lay of the land in your niche will help. From your competitor analysis on your target B2B keywords, ask yourself: Where do you stand? How can you satisfy buyers in a way that your competitors aren’t?

The goal with B2B content tactics is to position yourself as the answer decision-makers need.

Your keyword research will reveal the topics that reel in buyers, and the content you create will help secure conversions.

Folding B2B SEO keywords into your strategy is a core step in gaining the attention and influence of the brands you’re targeting.


What Is LLMs.txt? & Do You Need One?

Most site owners don’t realize how much of their content large language models (LLMs) already gather. ChatGPT, Claude, and Gemini pull from publicly available pages unless you tell them otherwise. That’s where LLMs.txt for SEO comes into the picture.

LLMs.txt gives you a straightforward way to tell AI crawlers how your content can be used. It doesn’t change rankings, but it adds a layer of control over model training, something that wasn’t available before.

This matters as AI-generated answers take up more real estate in search results nowadays. Your content may feed those answers unless you explicitly opt out. LLMs.txt provides clear rules for what’s allowed and what isn’t, giving you leverage in a space that has grown quickly without much input from site owners.

Whether you allow or restrict access, having LLMs.txt in place sets a baseline for managing how your content appears in AI-driven experiences.

Key Takeaways

  • LLMs.txt lets you control how AI crawlers such as GPTBot, ClaudeBot, and Google-Extended use your content for model training.
  • It functions similarly to robots.txt but focuses on AI data usage rather than traditional crawling and indexing.
  • Major LLM providers are rapidly adopting LLMs.txt, creating a clearer standard for consent.
  • Allowing access may strengthen your presence in AI-generated answers; blocking access protects proprietary material.
  • LLMs.txt doesn’t impact rankings now, but it helps define your position in emerging AI search ecosystems. 

What is LLMs.txt?

LLMs.txt is a simple text file you place at the root of your domain to signal how AI crawlers can interact with your content. If robots.txt guides search engine crawlers, LLMs.txt guides LLM crawlers. Its goal is to define whether your public content becomes part of training datasets used by models such as GPT-4, Claude, or Gemini.

LLMs.txt files.

Here’s what the file controls:

  • Access permissions for each AI crawler
  • Whether specific content can be used for training
  • How your site participates in AI-generated answers
  • Transparent documentation of your data-sharing rules

This protocol exists because AI companies gather training data at scale. Your content may already appear in datasets unless you explicitly opt out. LLMs.txt adds a consent layer that didn’t previously exist, giving you a direct way to express boundaries.

OpenAI, Anthropic, and Google introduced support for LLMs.txt in response to rising concerns around ownership and unauthorized data use. Adoption isn’t universal yet, but momentum is growing quickly as more organizations ask for clarity around AI access.

LLMs.txt isn’t replacing robots.txt because the two files handle different responsibilities. Robots.txt manages crawling for search engines, while LLMs.txt manages training permissions for AI models. Together, they help you protect your content, define visibility rules, and prepare for a future where AI-driven search continues to expand.

Why is LLMs.txt a Priority Now?

Model developers gather massive datasets, and most of that comes from publicly accessible content. When OpenAI introduced GPTBot in 2023, it also introduced a pathway for websites to opt out. Google followed with Google-Extended, allowing publishers to restrict their content from AI training. Anthropic and others soon implemented similar mechanisms.

This shift matters for one reason: your content may already be part of the AI ecosystem unless you explicitly say otherwise.

LLMs.txt is becoming a standard because site owners want clarity. Until recently, there was no formal way to express whether your content could be repurposed inside model training pipelines. Now you can define that choice with a single file.

There’s another angle to this. Generative search tools increasingly rely on trained data to produce answers. If you block AI crawlers, your content may not appear in those outputs. If you allow access, your content becomes eligible for reference in conversational responses, something closely tied to how brands approach LLM SEO strategies.

Neither approach is right for everyone. Some companies want tighter content control. Others want stronger visibility in AI-driven areas. LLMs.txt helps you set a position instead of defaulting into one.

As AI-generated search becomes more prominent, the importance of LLMs.txt grows. You can adjust your directives over time, but having the file in place keeps you in control of how your content is used today.

How LLMs.txt Works

LLMs.txt is a plain text file located at the root of your domain. AI crawlers that support the protocol read it to understand which parts of your content they can use. You set the rules, upload the file once, and update it anytime your strategy evolves.

Where it Lives

LLMs.txt must be placed at:

yoursite.com/llms.txt

This mirrors the structure of robots.txt and keeps things predictable for crawlers. Every supported AI bot checks this exact location to find your rules. It must be in the root directory to work correctly; subfolders won’t register.

Robots.txt structure.


The file is intentionally public. Anyone can view it by navigating directly to the URL. This transparency allows AI companies, researchers, and compliance teams to see your stated preferences.

What You Can Control

Inside LLMs.txt, you specify allow or disallow directives for individual AI crawlers. Example:

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Allow: /

You can grant universal permissions or block everything. The file gives you fine-grained control over how your public content flows into AI training datasets.
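Because the allow/disallow grammar above mirrors robots.txt, it is straightforward to interpret programmatically. Here is a rough sketch of how a crawler might read those directives; it borrows robots.txt grouping conventions and is illustrative only, not an official LLMs.txt parser.

```python
# Rough sketch of how a crawler could interpret allow/disallow directives.
# Illustrative only: borrows robots.txt grouping rules, not an official parser.

def parse_rules(text):
    """Parse directive groups into {user_agent: [(directive, path), ...]}."""
    rules, current, expecting_agent = {}, [], True
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        field, _, value = line.partition(":")
        field, value = field.strip().lower(), value.strip()
        if field == "user-agent":
            if not expecting_agent:
                current = []          # a new directive group starts
            current.append(value)
            expecting_agent = True
        elif field in ("allow", "disallow"):
            expecting_agent = False
            for agent in current:
                rules.setdefault(agent, []).append((field, value))
    return rules

def is_allowed(rules, agent, path="/"):
    """Longest matching rule wins; unknown agents fall back to '*'."""
    group = rules.get(agent, rules.get("*", []))
    verdict, longest = True, -1       # allowed unless a rule says otherwise
    for directive, rule_path in group:
        if rule_path and path.startswith(rule_path) and len(rule_path) > longest:
            verdict, longest = (directive == "allow"), len(rule_path)
    return verdict

LLMS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Allow: /
"""
rules = parse_rules(LLMS_TXT)
print(is_allowed(rules, "GPTBot"))           # False: blocked site-wide
print(is_allowed(rules, "Google-Extended"))  # True
```

Note that an agent with no matching group falls through to `*`, and if no wildcard group exists, access defaults to allowed, the same convention robots.txt uses.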

Current LLMs That Respect It

Several major AI crawlers already check LLMs.txt automatically:

  • GPTBot (OpenAI) — supports opt-in and opt-out training rules
  • Google-Extended — used for Google’s generative AI systems
  • ClaudeBot (Anthropic) — honors site-level directives
  • CCBot (Common Crawl) — contributes to datasets used by many models
  • PerplexityBot — early adopter in 2024

Support varies across the industry, but the direction is clear: more crawlers are aligning around LLMs.txt as a standardized method for training consent.

LLMs.txt vs Robots.txt: What’s the Difference?

Robots.txt and LLMs.txt serve complementary but distinct purposes.

Robots.txt controls how traditional search engine crawlers access and index your content. Its focus is SEO: discoverability, crawl budgets, and how pages appear in search results.

Robots.txt example.

LLMs.txt, in contrast, governs how AI models may use your content for training. These directives tell model crawlers whether they can read, store, and learn from your pages.

Here’s how they differ:

  • Different crawlers: Googlebot and Bingbot follow robots.txt; GPTBot, ClaudeBot, and Google-Extended read LLMs.txt.
  • Different outcomes: Robots.txt influences rankings and indexing. LLMs.txt influences how your content appears in generative AI systems.
  • Different risks and rewards: Robots.txt affects search visibility. LLMs.txt affects brand exposure inside AI-generated answers — and your control over proprietary content.

Both files are becoming foundational as search shifts toward blended AI and traditional results. You’ll likely need each one working together as AI-driven discovery expands.

Should You Use LLMs.txt for SEO?

LLMs.txt doesn’t provide a direct ranking benefit today. Search engines don’t interpret it for SEO purposes. Still, it influences how your content participates in generative results, and that matters.

Allowing AI crawlers gives models more context to work with, improving the odds that your content appears in synthesized answers. Blocking crawlers protects proprietary or sensitive content but removes you from those AI-based touchpoints.

Your approach depends on your goals. Brands focused on reach often allow access. Brands focused on exclusivity or IP protection typically restrict it.

LLMs.txt also pairs well with thoughtful LLM optimization work. Content structured for clarity, strong signals, and contextual relevance helps models interpret your material more accurately. LLMs.txt simply defines whether they’re allowed to learn from it.

“LLMs.txt doesn’t shift rankings today, but it sets early rules for how your content interacts with AI systems. Think of it like robots.txt in its early years: small now, foundational later,” explains Anna Holmquist, Senior SEO Manager at NP Digital.

Who Actually Needs LLMs.txt?

Some websites benefit more than others from adopting LLMs.txt early.

  • Content-heavy sites
    Publishers, educators, and documentation libraries often prefer structure around how their content is reused by AI systems.
  • Brands with proprietary material
    If your revenue depends on premium reports, gated content, or specialized datasets, LLMs.txt offers a necessary layer of protection.
  • SEOs planning for AI search
    As generative results become more common, brands want control over how content feeds into those answer engines. LLMs.txt helps set boundaries while still supporting visibility.
  • Industries with compliance requirements
    Healthcare, finance, and legal organizations often need strict data-handling rules. Blocking AI crawlers becomes part of their governance approach.

LLMs.txt doesn’t lock you into a long-term decision. You can update it as AI search evolves.

How To Set Up an LLMs.txt File

Setting up an LLMs.txt file is simple. Here’s the process, and if you’d rather not do it by hand, generator tools can build the file for you.

LLMs.txt generator in action.


1. Create the File

Open a plain text editor and create a new file called llms.txt.

Add a comment at the top for clarity:

# LLMs.txt — AI crawler access rules

2. Add Bot Directives

Define which crawlers can read and train on your content. For example:

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Allow: /

You can open or close access globally:

User-agent: *
Disallow: /

or:

User-agent: *
Allow: /

3. Upload to Your Root Directory

Place the file at:

yoursite.com/llms.txt

This location is required for crawlers to detect it. Subfolders won’t work.

4. Monitor AI Crawler Activity

Check your server logs to confirm activity from:

  • GPTBot
  • ClaudeBot
  • Google-Extended
  • PerplexityBot
  • CCBot

This helps you verify whether your directives are working as expected.
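A quick way to check the logs is to count requests whose user-agent string contains one of the known AI crawler names. The sketch below uses made-up sample lines in the common Apache/Nginx combined log format; point it at your real access log instead.

```python
# Sketch: scan web server access log lines for hits from known AI crawlers.
# The sample log lines below are invented for illustration.
from collections import Counter

AI_CRAWLERS = ["GPTBot", "ClaudeBot", "Google-Extended", "PerplexityBot", "CCBot"]

def count_ai_hits(log_lines):
    """Count log lines whose user-agent mentions a known AI crawler."""
    hits = Counter()
    for line in log_lines:
        for bot in AI_CRAWLERS:
            if bot.lower() in line.lower():
                hits[bot] += 1
    return hits

sample_log = [
    '66.249.0.1 - - [10/May/2025] "GET /llms.txt HTTP/1.1" 200 312 "-" "Mozilla/5.0 (compatible; GPTBot/1.0)"',
    '52.0.0.9 - - [10/May/2025] "GET /blog/post HTTP/1.1" 200 9182 "-" "Mozilla/5.0 (compatible; ClaudeBot/1.0)"',
    '66.249.0.2 - - [10/May/2025] "GET /llms.txt HTTP/1.1" 200 312 "-" "GPTBot/1.1"',
]
print(count_ai_hits(sample_log))  # Counter({'GPTBot': 2, 'ClaudeBot': 1})
```

For production use you would read the log file line by line rather than keep it in memory, but the matching logic is the same.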

AI crawler activity.


FAQs

What is LLMs.txt?

It’s a file that tells AI crawlers whether they can train on your content. It’s similar to robots.txt but designed specifically for LLMs.

Does ChatGPT use LLMs.txt?

Yes. OpenAI’s GPTBot checks LLMs.txt and follows the rules you specify.

How do I create an LLMs.txt file?

Create a plain text file, add crawler rules, and upload it to your site’s root directory. Use the examples above to set your directives.

Conclusion

LLMs.txt gives publishers a way to define how their content interacts with AI training systems. As AI-generated search expands, having explicit rules helps protect your work while giving you control over how your brand appears inside model-generated answers.

This file pairs naturally with stronger LLM SEO strategies as you shape how your content is discovered in AI-driven environments. And if you’re already improving your content structure for model comprehension, LLMs.txt fits neatly beside ongoing LLM optimization efforts.

If you need help setting up LLMs.txt or planning for AI search visibility, my team at NP Digital can guide you.


AI Search for E-commerce: Optimize Product Feeds for Visibility

AI is reshaping how people shop online. Search isn’t just about keywords anymore. Tools like Google’s AI Overviews, ChatGPT shopping features, and Perplexity product recommendations analyze huge amounts of product data to decide what to show users. That shift means e-commerce brands need to rethink the way their product information is structured.

If you want visibility in these AI-powered shopping journeys, your product data has to be clean, complete, and enriched. AI models lean heavily on structured feeds, trusted marketplaces, and high-quality product attributes to understand exactly what you sell.

That’s why AI search for e-commerce matters right now. Brands that optimize their feeds will show up in conversational queries, comparison results, and visual search responses. Brands that don’t will struggle to appear even if they’ve done traditional SEO well.

This foundation will help you give AI systems the clarity they need to recommend your products with confidence.

Key Takeaways

  • AI search engines rely heavily on structured product feed data instead of just site content to understand and surface products.
  • Clean, complete feeds lead to higher visibility across Google Shopping, ChatGPT shopping research, Perplexity results, and other LLMs.
  • Strong titles, enriched attributes, and quality images make it easier for AI systems to match your products to real user needs.
  • Brands with clear, structured product data will outperform competitors in AI-driven shopping experiences.

How AI Search Is Reshaping Product Discovery

AI is changing the way customers find products long before they reach your website. Instead of typing traditional keywords, shoppers now describe what they want in plain language:
“lightweight waterproof hiking boots,”
“a gift for a 12-year-old who loves science,”
“a mid-century floor lamp under $150.”

AI systems interpret these natural-language queries using semantic understanding instead of exact keyword matches. That shift affects everything from Google Shopping listings to ChatGPT’s built-in shopping tools. It also impacts how AI-driven platforms rank your products when answering conversational or comparison-based queries.

Shopping results in ChatGPT.

Source: RetailTouchPoints

If you’ve been following the evolution of AI in e-commerce, you already know AI is moving deeper into product search, recommendation, and personalization. But behind the scenes, the link between your product data and AI visibility is tightening.

AI models rely on structured, trustworthy data sources, including product feeds, schema markup, and marketplace listings. If your feed lacks attributes or clarity, AI can’t confidently connect your product to a user’s need, even if your website is strong.

Optimizing your feed is no longer a backend task. It’s a visibility strategy.

What Is a Product Feed (and Why AI Cares About It)

A product feed is a structured data file that contains detailed information about every item you sell. It includes attributes like product title, description, brand, size, color, price, availability, GTIN, and more. Platforms such as Google Shopping, Meta, Amazon, and TikTok Shops rely on these feeds to understand your inventory and decide when to show your products.

AI systems depend on the same structure. Instead of scanning pages manually, they pull product details from feeds because the information is cleaner, more complete, and easier to interpret at scale.

If your feed includes rich attributes, AI can match your items to complex user queries. When attributes are missing or titles are vague, your products become invisible in AI-driven discovery, regardless of how strong your website content might be.

This is why optimizing product feeds is a priority for e-commerce brands right now. Clean, enriched feeds increase your visibility across AI-powered shopping experiences and visual search tools like Google Lens.

A product feed for e-commerce.


Your product feed is no longer just an ads asset; it’s a core input for AI search.

What AI Needs From Your Product Feed (Titles, Attributes, Images)

AI systems don’t guess what your products are; they analyze the data you provide. These are the elements that matter most.

Titles and Descriptions

AI models prefer natural, descriptive, human-sounding titles. Short, vague titles like “Running Shoes” don’t give AI enough context. But a title such as:

“Women’s Waterproof Trail Running Shoes – Lightweight, Breathable, Blue”

instantly signals the audience, category, and key benefits.

Descriptions should reinforce the title and add details that help AI understand use cases, materials, fit, and core value.

Avoid keyword stuffing. Ambiguous, repetitive copy gives AI systems less to work with, which makes your pages less likely to be referenced.

Product Attributes

AI engines rely heavily on structured attributes such as:

  • Size
  • Color
  • Material
  • Fit
  • Style
  • GTIN/MPN
  • Age range
  • Intended use

Missing attributes = missing visibility.

Attributes help AI refine products when users ask things like:
“Show me a size 8,”
“Only vegan options,”
“Something in walnut or dark wood.”

The more complete your attributes, the better your likelihood of appearing in those filtered results.

Product Images and Alt Text

AI increasingly “reads” images using vision models. Google Lens, Pinterest Lens, and multimodal AI systems analyze colors, textures, shapes, and packaging.

Clear, high-resolution images paired with alt text provide two inputs: visual interpretation and descriptive language.

Example alt text:
“Women’s waterproof trail running shoe with rubber sole, breathable mesh upper, and reinforced toe cap in blue.”

Examples of trail running shoes for women.

Visual clarity improves both AI understanding and user experience.

Steps To Optimize Product Feeds for AI Visibility

Here’s the practical workflow to upgrade your product feed for AI search visibility.

1. Audit Your Current Product Feed

Start with a complete audit using tools like Google Merchant Center, Feedonomics, or GoDataFeed. Look for:

  • Missing GTINs or invalid identifiers
  • Weak or vague product titles
  • Incomplete attributes
  • Duplicate listings
  • Mismatched availability or pricing
  • Blank fields or generic descriptions

AI search systems penalize incomplete or ambiguous data.
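The audit checklist above can be partially automated. The sketch below flags missing GTINs, vague or duplicate titles, and empty attributes; the field names follow common feed specs such as Google Merchant Center, but the sample products and thresholds are invented for illustration.

```python
# Sketch of an automated feed audit for the common problems listed above.
# Field names follow typical feed specs; the products are made up.

def audit_feed(products):
    """Return (product_id, issue) tuples for common feed problems."""
    issues, seen_titles = [], set()
    for p in products:
        pid = p.get("id", "?")
        if not p.get("gtin"):
            issues.append((pid, "missing GTIN"))
        title = p.get("title", "")
        if len(title.split()) < 3:          # crude vagueness heuristic
            issues.append((pid, "vague title"))
        if title in seen_titles:
            issues.append((pid, "duplicate title"))
        seen_titles.add(title)
        for attr in ("color", "size", "material"):
            if not p.get(attr):
                issues.append((pid, f"missing attribute: {attr}"))
    return issues

feed = [
    {"id": "SKU-1", "title": "Running Shoes", "gtin": "",
     "color": "blue", "size": "8", "material": "mesh"},
    {"id": "SKU-2", "title": "Women's Waterproof Trail Running Shoes",
     "gtin": "0123456789012", "color": "blue", "size": "8", "material": "mesh"},
]
for pid, issue in audit_feed(feed):
    print(pid, "->", issue)
```

Running this on the sample data flags SKU-1 for its missing GTIN and vague title, while the fully specified SKU-2 passes clean.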

Google Merchant Center's interface.


2. Improve Title and Description Relevance

Use a clear structure:

Brand + Category + Key Attributes + Value Proposition

Examples:

  • “Nike Men’s Running Shoes – Cushioned, Lightweight, Black”
  • “Organic Cotton Baby Pajamas – Soft, Breathable, Unisex”
  • “Mid-Century Floor Lamp – Walnut, LED Compatible, 60” Height”

Descriptions should expand on the title, adding details AI can use to match queries.

Avoid fluff. Focus on clarity.
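A small helper can enforce the Brand + Category + Key Attributes pattern across a whole feed. This is a sketch based on the examples above, not a platform requirement; the 150-character cap matches Google Shopping’s title limit.

```python
# Sketch of the "Brand + Category + Key Attributes" title pattern above.
# Illustrative helper; 150 chars matches Google Shopping's title limit.

def build_title(brand, category, attributes, max_len=150):
    """Compose a feed title and truncate it to the platform limit."""
    title = f"{brand} {category} – {', '.join(attributes)}"
    return title[:max_len].rstrip()

print(build_title("Nike", "Men's Running Shoes",
                  ["Cushioned", "Lightweight", "Black"]))
# Nike Men's Running Shoes – Cushioned, Lightweight, Black
```

Generating titles this way keeps the most important words at the front, where truncated listings still show them.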

3. Enhance Structured Attributes

Fill out every attribute you have access to, even optional ones. AI uses these to match long-tail, specific user needs.

Add custom labels for:

  • Best sellers
  • Seasonal items
  • High margin
  • Clearance
  • New arrivals

Custom labels help you manage bidding, targeting, and segmentation across Shopping and Performance Max campaigns.

Custom labels for Google Shopping campaigns.


4. Optimize for Rich Results & Visual Search

Include product schema markup on all product pages, especially:

  • Product
  • Review
  • Price
  • Availability

AI search engines treat structured schema as a trust signal.

Also include descriptive alt text on all product images to support accessibility and AI interpretation.
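Product schema is usually emitted as JSON-LD. The sketch below generates it server-side; the `Product` and `Offer` types and their property names come from schema.org, while the product data itself is invented for illustration.

```python
# Sketch: emit schema.org Product JSON-LD for a product page.
# Types and property names are schema.org's; the product data is made up.
import json

def product_jsonld(name, description, image, price, currency, availability, gtin):
    """Build a Product JSON-LD block as a string."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "description": description,
        "image": image,
        "gtin13": gtin,
        "offers": {
            "@type": "Offer",
            "price": price,
            "priceCurrency": currency,
            "availability": f"https://schema.org/{availability}",
        },
    }, indent=2)

print(product_jsonld(
    name="Women's Waterproof Trail Running Shoes",
    description="Lightweight, breathable trail running shoes in blue.",
    image="https://example.com/img/shoe.jpg",
    price="89.99", currency="USD", availability="InStock",
    gtin="0123456789012",
))
```

The resulting JSON goes inside a `<script type="application/ld+json">` tag on the product page.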

Example results for Blue Hiking Shoes for women.

5. Set Up Feed Rules and Automations

Automate cleanup tasks such as:

  • Adding missing colors to titles
  • Appending product type or material
  • Standardizing capitalization
  • Populating missing attributes with known defaults
  • Flagging products with incomplete data

Automation keeps your feed consistent as your catalog changes.
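The cleanup rules above translate directly into code. This sketch applies three of them, standardizing capitalization, appending a missing color to the title, and flagging incomplete records; the rule choices and field names are illustrative, not a feed-platform API.

```python
# Sketch of automated feed cleanup rules: capitalization, color-in-title,
# and incomplete-record flagging. Rule names and fields are illustrative.

def apply_feed_rules(product):
    """Return a cleaned copy of a product record."""
    p = dict(product)                      # don't mutate the input
    title = p.get("title", "").strip()
    title = title[:1].upper() + title[1:]  # standardize leading capitalization
    color = p.get("color")
    if color and color.lower() not in title.lower():
        title = f"{title} – {color.title()}"   # append missing color
    p["title"] = title
    required = ("title", "color", "size", "gtin")
    p["incomplete"] = [f for f in required if not p.get(f)]  # flag gaps
    return p

item = {"title": "trail running shoes", "color": "blue", "size": "8", "gtin": ""}
print(apply_feed_rules(item)["title"])  # Trail running shoes – Blue
```

In practice you would run rules like these on every feed refresh, so new catalog items inherit the same conventions automatically.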

How AI Assistants Use Product Data

AI shopping assistants are rapidly changing how customers discover and compare products. 

To generate shopping answers and recommendations, AI systems pull from:

  • Merchant Center feeds
  • Structured schema markup
  • Marketplace listings
  • Verified product databases
  • High-quality product images
  • Trusted review sources

This creates a composite understanding of your product beyond just what your site says about it.

If you’ve explored the role of AI shopping assistants, you’ve likely seen how quickly they recommend products based on attributes like size, color, performance, ratings, and price. Those signals come directly from your feed and structured product data.

Brands with richer data sets see higher inclusion rates in:

  • Comparison lists
  • “Top choices” summaries
  • Product match queries
  • Visual search results
  • Conversational shopping recommendations

AI shopping results.


AI systems don’t guess. They promote products they can understand clearly and ignore the rest.

Common Mistakes That Hurt AI Visibility

Most feed problems fall into a few categories, and each one reduces visibility in AI search engines.

1. Vague or Duplicated Titles

Titles like “Running Shoes” or “LED Lamp” provide no usable context. AI deprioritizes these compared to richer alternatives.

2. Missing Key Attributes

Many merchants skip fields like size, color, material, GTIN, or gender. AI relies heavily on these attributes when matching products to specific user requests.

3. Keyword-Stuffed or Fluffy Descriptions

Descriptions should be informative, not bloated. AI models prefer specific phrasing over repetitive keywords.

4. Inconsistent Pricing or Availability

If your feed shows “in stock” but your page says “out of stock,” AI systems flag inconsistencies and may reduce your visibility.

5. Low-Quality Images or Missing Alt Text

Visual AI models need clarity. Poor images or missing alt text make your product harder to classify.

Fixing these issues has a measurable impact on how often your products appear in AI-driven recommendations.

FAQs

What is AI e-commerce?

AI e-commerce refers to using artificial intelligence to improve product discovery, recommendations, personalization, and automation throughout the online shopping experience.

How is AI changing e-commerce?

AI is shifting product discovery toward natural-language search, visual identification, and conversational shopping assistants. Brands now need structured, enriched product data to stay visible.

How do you optimize a product feed for AI search?

Create clear titles, use complete attributes, include schema markup, strengthen product images, and use automation to maintain consistency. A detailed feed helps AI understand your products accurately.

Conclusion

Brands that invest in structured data, enriched attributes, and clear product information will outperform competitors as AI-driven shopping grows.

Feed optimization also strengthens your broader search strategy. The same structured data powering AI engines aligns with strong AI in e-commerce practices, and the same clarity helps conversational systems recommend your products more confidently.

Visibility in AI search isn’t random. It comes from data quality. And improving that data is one of the highest-impact steps an e-commerce brand can take today.
