Google Ask Maps is moving from listings to recommendations

Google’s Ask Maps feature does more than help users find nearby businesses.

Based on hands-on testing of local service queries for plumbers, electricians, and HVAC companies, Ask Maps often narrows the field, interprets user intent, and frames businesses around qualities such as responsiveness, specialization, honesty, and repair-first thinking.

In more complex prompts, it sometimes provides guidance before recommending businesses. This shows Google Maps moving beyond simple local retrieval and toward a more recommendation-driven experience.

To evaluate that shift, we tested Ask Maps across five levels of local intent — starting with simple category searches and progressing toward conversational prompts involving uncertainty, trust, and decision-making.

A clear pattern emerged. As query nuance increased, Ask Maps shifted from listing businesses to interpreting which businesses fit and why.

This article draws from hands-on testing across a limited set of local service queries in one geographic area. Treat these findings as an early directional view, not a comprehensive representation across all markets or query types.

The testing framework

To evaluate progression, we built a five-level intent model based on how homeowners and local service customers actually search. Instead of organizing around traditional keyword categories, we structured the framework from simple retrieval toward conversational decision-making.

  • Level 1 focused on basic requests with minimal context.
    • Example: “Looking for an HVAC company near me.” 
  • Level 2 introduced more service specificity.
    • Example: “I need an electrician to upgrade my panel in an older home.” 
  • Level 3 moved into situational queries, where the user described a problem.
    • Example: “My furnace is making a loud banging noise and I’m not sure if it needs to be replaced or repaired.” 
  • Level 4 introduced trust and decision concerns.
    • Example: “I think my furnace might need to be replaced, but I don’t want to get overcharged. Who is honest about that?” 
  • Level 5 combined those elements into fully conversational prompts asking for guidance, validation, and recommendations in the same search.
    • Example: “I was told I need a full furnace replacement, but it feels expensive. How do I know if that’s actually necessary, and who should I call for a second opinion in my area?”

This framework allowed us to evaluate:

  • Which businesses appeared.
  • How Ask Maps interpreted prompts.
  • What attributes it emphasized.
  • When results started to resemble guided recommendations rather than search results.


Ask Maps narrows the field and adds interpretation

One of the clearest patterns across the testing was that Ask Maps consistently returned a relatively small set of businesses while increasing the amount of interpretation as the user’s search intent became more complex.

At Level 1, the average number of businesses shown was 3.6. Level 2 rose to 4.3. Level 3 dropped slightly to 3.3. Level 4 averaged 5.0, and Level 5 averaged 4.6. Across the full set, the range remained fairly tight, generally between three and eight businesses.

That’s a different experience from traditional Maps, where a user can scroll through a much broader set of options and do more of the evaluation work themselves.

Ask Maps narrows choices early and spends more effort explaining why those businesses fit the prompt, but stops short of being fully action-oriented. Even when a phone number is shown, there’s no clickable call button directly in the Ask Maps response. 

To call or access the full set of contact options, the user still has to click into the business’s Google Business Profile. That matters because while Ask Maps is becoming more interpretive, the underlying GBP is still where action happens.

As prompts become more nuanced, uncertain, or trust-sensitive, Ask Maps draws on a broader range of sources. It shows fewer businesses, replacing breadth with interpretation.

Dig deeper: How to build FAQs that power AI-driven local search

Basic queries already go beyond simple listings

Even the simplest queries don’t behave like a traditional Maps result.

At the baseline level, Ask Maps still relies heavily on Google Business Profile data, including: 

  • Business descriptions.
  • Review content.
  • Ratings.
  • Hours.
  • In some cases, posts. 

Website influence is minimal here, and there’s little evidence of outside sourcing. But even within that mostly closed ecosystem, it goes beyond listing nearby businesses.

Instead of just showing names, ratings, and locations, Ask Maps:

  • Generates narrative summaries based on information in the Google Business Profile. 
  • Describes businesses in terms of responsiveness, experience, specialization, or the kinds of situations they seem well-suited for. 
  • Draws on reviews when framing businesses.

Even at the most basic level, Ask Maps isn’t neutral. It’s beginning to interpret businesses for the user.

As queries become more specific, Ask Maps starts matching capability

Once the prompt shifts from a general service search to a specific type of job, Ask Maps becomes more selective in how it matches businesses to the request.

  • A query about an electrical panel upgrade doesn’t behave the same way as a query about urgent AC repair. 
  • Replacement-oriented prompts emphasize installation and system expertise. 
  • Repair-oriented prompts emphasize speed, availability, and responsiveness. 
  • Queries tied to older homes or higher-risk work call for more evidence of specialization.

At this level, Google Business Profile and reviews still carry much of the weight, but websites matter more when the job is more complex or costly. A panel upgrade query produces stronger external link usage than a more straightforward AC repair prompt.

That doesn’t mean websites are always heavily used. It shows more selectivity. As decisions become more complex, Google looks for more supporting evidence before recommending businesses.

Situational queries push Ask Maps toward interpretation

The more noticeable shift begins once the prompts move from service categories to real-world scenarios.

At Level 3, the user is no longer looking for a plumber, electrician, or HVAC company. Instead, they’re describing a problem, such as a loud banging furnace, outdated electrical in an older home, or an AC unit that has stopped working during extreme heat. In those cases, Ask Maps increasingly interprets the problem before introducing businesses.

Some responses provide guidance or context first. Others identify the provider and clarify the work before making recommendations. The businesses that follow aren’t framed as generic providers. They’re framed as possible solutions to the situation.

Review content becomes important here. Rather than simply supporting a business’s credibility, reviews act as evidence that the company has handled similar situations before. Fast arrival times, experience with older homes, communication during stressful repairs, and problem-solving ability all become more meaningful when describing businesses.

This is the point where Ask Maps moves more clearly from retrieval to interpretation.

Dig deeper: 7 local SEO wins you get from keyword-rich Google reviews

Trust-oriented queries change what gets emphasized

When the prompts introduce fear, skepticism, or concern about making the wrong decision, Ask Maps changes again.

At Level 4, the focus is less on the service need itself and more on the emotional context around it. The user is worried about being overcharged, being pushed into unnecessary replacement, or hiring someone who would cut corners. 

Ask Maps doesn’t just return businesses capable of doing the work. It organizes businesses around trust-related qualities such as honesty, transparency, careful workmanship, fairness, and second-opinion value.

This is one of the strongest patterns in the research. At this stage, review language is the primary signal shaping how businesses are framed. Specific phrases and anecdotes matter, elevating businesses that explain options clearly, don’t upsell, offer honest assessments, or deliver careful, professional work.

External sources become more relevant here. In addition to GBP information and reviews, Ask Maps shows more willingness to pull from company websites, testimonials, third-party platforms, and educational resources when the user’s concern involves decision risk rather than just service need.

Once the query becomes trust-driven, the recommendation no longer appears to be based only on who can do the job. It reflects who is most likely to handle the situation in a way that the user feels good about.


Advisory queries show the clearest shift

The strongest example of this progression came at Level 5. These are prompts where the user combines a problem, uncertainty, and a request for recommendations in a single query. 

For example, someone might say they were told they needed a full furnace replacement but were unsure whether that was really necessary and wanted to know who to call for a second opinion. In these cases, Ask Maps moves most clearly into a decision-support role.

Instead of leading with local businesses, it often starts with an explanation, introducing frameworks, safety context, or ways to think about the decision. 

Only after that does it recommend businesses, and those businesses are often grouped not just by rating or proximity, but by approach. Some are framed as repair-first options. Others are framed as second-opinion experts or safety-focused specialists.

This is where Ask Maps feels least like a directory and most like an advisor. The structure of the response looks more like a guided decision process than a traditional local search result.

That doesn’t mean the system is flawless or that every answer is equally strong. But it does suggest that when a prompt includes uncertainty and a need for validation, Ask Maps is trying to do more than match a category. It’s trying to help the user think through what to do next.

Dig deeper: New Google Maps features: Local Guides redesign, AI captions, photo sharing

Where Ask Maps gets its information

Across the testing, several source patterns appear repeatedly, and the mix appears to shift depending on the type of query.

At the foundation, Google Business Profile does much of the early work. Business categories, service descriptions, hours, ratings, and review counts help determine which businesses are eligible to appear and how they are initially framed. In some cases, Ask Maps also pulls from GBP services and products, business descriptions, and occasionally posts when those help reinforce what the business does.

Reviews seem to be one of the most important inputs across nearly every query type, not just through ratings but through how review language shapes the summary.

Ask Maps often draws on review themes tied to:

  • Responsiveness.
  • Honesty.
  • Professionalism.
  • Fast arrival times.
  • Work on older homes.
  • Repair-versus-replace situations.
  • Whether customers feel the company explains options clearly or avoids unnecessary upselling.

In other words, reviews support reputation and help define how a business is positioned in the response.

Business websites matter more once the query becomes more specific, higher-stakes, or more tied to decision-making. In those cases, Ask Maps seems more likely to pull in service pages, testimonial pages, or other on-site business information that helps reinforce specialization, repair-first positioning, second-opinion value, or experience with a particular type of job. 

That’s more noticeable in queries tied to things like panel upgrades, replacement decisions, or older-home electrical concerns than in simpler “near me” searches.

External sources are the most selective layer, but they become more visible when the query involves safety, diagnosis, pricing uncertainty, or broader decision support. 

In those cases, Ask Maps pulls in:

  • Educational content around issues like repair-versus-replace decisions, quote validation, and electrical safety. 
  • Third-party review and directory platforms such as Angi, HomeAdvisor, YouTube, and Facebook.
  • Other publicly available business information, when it helps reinforce trust, workmanship, or reputation. 

In some of the trust-oriented electrician queries in particular, this outside sourcing is more prominent than in simpler local lookups, suggesting Google may broaden its evidence base when evaluating how a business is likely to operate, not just what services it offers.

How Ask Maps mixes sources based on query

Ask Maps isn’t relying on a single source of truth. It appears to be constructing an answer from a mix of Google Business Profile data, review language, business website content, and selectively chosen outside sources, with the balance shifting based on what the user is actually asking.

What this may mean for local visibility

If Ask Maps continues to develop in this direction, it could have meaningful implications for local visibility in Google Maps.

  • Inclusion alone may matter less than interpretation. If Ask Maps is consistently showing a smaller set of businesses and adding more explanation around them, the question is no longer just whether a business appears. It’s also how that business is framed and whether Google has enough confidence to position it as a good fit for the situation.
  • Review content is becoming more important than many businesses realize. The language within reviews appears to influence not just credibility, but the actual way a business is described and recommended.
  • Website content plays a more targeted role than many local businesses assume. It may not be equally important for every prompt, but it matters more when the service is complex, expensive, or tied to greater uncertainty.

More broadly, Ask Maps points toward a version of local search in which retrieval, evaluation, and decision support occur much more closely together. Instead of searching, comparing, researching, and then deciding across several steps, the user may increasingly be guided through much of that process within a single AI-mediated Maps experience.

What businesses and SEOs should tighten up now

If Ask Maps continues moving in this direction, the practical response isn’t to chase a new tactic or treat it like a separate channel. It’s to make the business easier for Google to understand and easier for customers to trust.

Keep the Google Business Profile current and specific

A Google Business Profile may play a bigger role when Ask Maps is trying to decide what a business does, what kinds of jobs it handles, and whether it fits a more nuanced prompt.

  • Review primary and secondary categories to make sure they reflect the core work accurately.
  • Tighten the business description so it clearly explains the services offered, the types of jobs handled, and any specialties or areas of focus.
  • Make sure hours, service areas, and contact details are complete and current.
  • Add photos that reinforce the kinds of jobs the business wants to be associated with.
  • Treat posts and profile updates as another way to reinforce services and activity, not just as optional extras.
  • Use the Services and Products sections fully, adding clear descriptions that reflect the specific jobs, specialties, and situations the business wants to be known for.

Pay closer attention to review language

If Ask Maps uses review language to shape how businesses are positioned, then the wording in reviews may matter more than many businesses realize.

  • Look beyond review volume and average rating.
  • Pay attention to whether reviews naturally mention specific jobs, customer concerns, and outcomes.
  • Watch for language around responsiveness, honesty, professionalism, repair-first thinking, and clear communication.
  • Encourage reviews that reflect real experiences rather than generic praise.
  • Use review trends to understand how the business is likely being framed by Google.

Revisit website content for higher-consideration services

Website content appears more likely to matter when the query is more complex, more expensive, or tied to more uncertainty.

  • Strengthen service pages for the higher-value or higher-risk work the business wants to be known for.
  • Add FAQs that address real decision points, not just basic definitions.
  • Include examples of the kinds of jobs handled, especially where context matters.
  • Reinforce trust signals such as experience, process, reviews, and proof of work.
  • Use language that helps explain situations like repair versus replace, older-home work, or second-opinion scenarios.

Think beyond ranking for a phrase

There’s a broader strategic shift here for local SEO. The question may no longer be only whether a business can rank for a phrase. It may also be whether Google has enough evidence to recommend that business in response to a real-world question.

  • Evaluate whether the business is easy to understand across GBP, reviews, website content, and broader digital mentions.
  • Look at whether the business is clearly associated with the jobs and situations it wants to win.
  • Think about trust and decision support, not just service relevance.
  • Focus on making the business more legible to both Google and potential customers.
  • Treat local optimization less like keyword matching alone and more like building a clear, consistent business profile across sources.

Dig deeper: If your local rankings are off, your map pin may be the reason


The direction of Ask Maps is becoming clearer

The main question behind this research was when Ask Maps stops behaving like a directory and starts behaving more like a recommendation engine. Based on this testing, that shift starts earlier than many might expect.

Even at the most basic level, Ask Maps narrows, summarizes, and interprets. As prompts become more specific, situational, and trust-driven, its responses move further toward guided recommendations. At the highest level of complexity, it begins to look less like traditional local search and more like a system designed to help users make decisions.

That doesn’t mean Google Maps has fully changed into something else. But it does suggest the direction is becoming clearer. For local businesses and the people who support them, that makes this worth watching closely. Visibility inside Maps may increasingly depend not just on being present, but on being understood well enough for Google to explain why the business fits the user’s needs.


The 6 Agentic AI Protocols Every SEO Needs to Know

A user asks Gemini: “Find me a task chair under $400 with lumbar support and free shipping. Order the best one.”

The AI doesn’t open a new tab. It doesn’t ask the user to click anything. Instead, it queries product databases, cross-references reviews, checks real-time inventory, compares shipping policies, and initiates a checkout — all without a human touching a single page.

These are all things the user would have done themselves, but now in a fraction of the time, with as much effort as it took to write the initial prompt.

Okay, we might not be quite at the stage where everyone is letting AI agents make all their purchases for them. But it’s no longer an unrealistic future.

What made that possible isn’t the AI models themselves. It’s the infrastructure we’re seeing become an increasingly important part of how modern websites are built. This infrastructure consists of a stack of protocols that tells AI agents how to find each retailer’s site, understand their catalog, verify their claims, and take action.

These protocols define how AI agents interact with your brand. And most SEOs have no idea they exist.

By the end of this article, you’ll understand what each protocol does, how they differ from one another, and why you need to pay attention to what’s going on underneath the hood of AI search if you want to stay visible going forward.

Why Protocols Matter for SEOs

Protocols determine whether an AI agent can interact with your brand programmatically, or whether it has to guess. Brands that can speak the agent’s language are more likely not just to be surfaced, but also to be recommended and, ultimately, purchased from.

Think of how robots.txt and XML sitemaps became table stakes for search crawlers. Agentic protocols are shaping up to be that for AI agents.

Put simply: if you want agents to be able to take action on your site — whether that’s making a purchase, booking a table, or completing a form — you need to understand these protocols.

Note: We’re not suggesting that without these protocols AI agents and users will never access your site or buy from it. Agentic commerce is still pretty new, and even the protocols themselves are still evolving. But we believe that agents will increasingly act on behalf of users, and that the easier you make it for them to do that on your website, the better positioned you’ll be as agentic commerce becomes the norm.


The Protocol Stack: A Quick Map

These protocols aren’t competing standards fighting for dominance. They operate at different layers of the same stack, and most are designed to work together.

Here’s a quick breakdown of what these protocols do:

| Layer | What It Does | Key Protocols |
| --- | --- | --- |
| Agent / Tool | Connects agents to external data, APIs, and tools | MCP |
| Agent / Agent | Lets agents hand off tasks to other agents | A2A |
| Agent / Website | Lets websites become directly queryable by agents | NLWeb, WebMCP |
| Agent / Commerce | Enables agents to discover products and complete purchases | ACP, UCP |

Note: As with everything AI, the agentic protocols we’ll give more details on below are constantly evolving. This means some platforms are yet to adopt some of the protocols, and the specifics of each protocol could also change over time.


MCP: Model Context Protocol

MCP is the universal connector between AI agents and external tools, data sources, and APIs.

How It Works

Before MCP, every AI tool needed a custom integration for every data source it wanted to access. If you wanted a chatbot to pull live pricing from your database and cross-reference it with your CMS, someone had to build a bespoke connection between those systems. Then rebuild it whenever either one changed.

MCP standardizes that connection. Think of it as USB-C for AI: one protocol that lets any agent plug into any tool, database, or website that supports it.

An agent using MCP can pull live pricing data, check inventory, read structured content from a site, or execute a workflow, all through the same interface.

The website or tool publishes an MCP server, and the agent connects to it. There’s much less need for custom integration work on either side.
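To make that exchange concrete, here is a minimal sketch of the JSON-RPC messages an MCP client and server trade when listing and calling a tool. The method names `tools/list` and `tools/call` come from the MCP specification; the `check_inventory` tool, its arguments, and the exact payload details are hypothetical, chosen only to illustrate the shape of the protocol.

```python
import json

# Illustrative MCP exchange (JSON-RPC 2.0). Only the method names follow
# the spec; the "check_inventory" tool and its fields are hypothetical.

# 1. The agent asks the server what tools it exposes.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# 2. The server advertises its tools, each with a JSON Schema describing
#    the inputs it accepts.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "check_inventory",
                "description": "Return live stock for a SKU.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"sku": {"type": "string"}},
                    "required": ["sku"],
                },
            }
        ]
    },
}

# 3. The agent invokes a tool by name with structured arguments, rather
#    than scraping a page and guessing.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "check_inventory", "arguments": {"sku": "CHAIR-400"}},
}

print(json.dumps(call_request, indent=2))
```

The key point is that both sides speak one standardized message format, so neither needs a bespoke integration.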

Who’s Behind It

MCP was launched by Anthropic in November 2024. It has since been adopted by OpenAI, Google, and Microsoft. MCP is now governed by an open-source community under the Agentic AI Foundation (AAIF), a directed fund under the Linux Foundation.

As of early 2026, there are more than 10K MCP servers out there, making it the de facto standard for agent-to-tool connectivity.

What It Means for Your Brand

Structured data, clean APIs, and accessible HTML have always been good technical SEO. Now they’re also agent compatibility requirements. Brands with MCP-compatible data give agents something to work with. Brands without it force agents to scrape pages and infer meaning, which creates friction and can affect whether they recommend you.

A2A: Agent-to-Agent Protocol

A2A is the standard that lets AI agents from different vendors communicate, delegate tasks, and hand off work to one another.

How It Works

MCP lets an agent talk to tools. A2A lets agents talk to each other.

When a task is complex enough to need multiple specialist agents — like one for research, one for comparison, and one for completing a transaction — A2A is the protocol that coordinates them.

Each A2A-compliant agent publishes an “Agent Card” at a standardized URL (that looks like “/.well-known/agent-card.json”). This card advertises what the agent can do, what inputs it accepts, and how to authenticate with it. Other agents discover these cards and route tasks accordingly.

The result: agents from entirely different companies, built on different frameworks, running on different servers, can collaborate on a single user request. No custom-built connections required.
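For illustration, here is roughly what an Agent Card might contain. The discovery path `/.well-known/agent-card.json` is the standardized location described above; the agent itself (a pricing verifier) and its field values are invented for this sketch, and the field names only approximate the published A2A schema.

```python
import json

# Hypothetical A2A Agent Card, served at /.well-known/agent-card.json.
# The agent, its URL, and its skill are illustrative assumptions.
agent_card = {
    "name": "Pricing Verifier",
    "description": "Cross-checks a product's advertised price against third-party sources.",
    "url": "https://agents.example.com/pricing",  # assumed endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": False},
    "skills": [
        {
            "id": "verify-price",
            "name": "Verify price",
            "description": "Compare a listed price with external listings.",
        }
    ],
}

print(json.dumps(agent_card, indent=2))
```

Another agent reading this card learns what the Pricing Verifier can do and how to hand it a task, without any prior custom integration.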

Who’s Behind It

Google launched A2A in April 2025 with 50+ technology partners, including Salesforce, PayPal, SAP, Workday, and ServiceNow. The Linux Foundation now maintains it under the Apache 2.0 license.

What It Means for Your Brand

As multi-agent workflows become more common, agents may evaluate your brand across multiple checkpoints before a human sees the result.

That chain might look something like this:

  • A research agent surfaces your product from a broad category query
  • An evaluation agent reads your reviews and checks the sentiment
  • A pricing agent verifies your costs against third-party sources
  • A trust agent cross-references your claims for consistency

A2A orchestrates that entire chain. If your data is inconsistent across sources, like if your pricing page says one thing and your G2 profile says another, the AI agent might filter your brand out as a contender. All before the user even sees you as an option.

NLWeb: Natural Language Web

NLWeb is Microsoft’s open protocol that turns any website into a natural language interface, queryable by both humans and AI agents.

How It Works

Right now, when an AI agent visits your website, it might have to make a lot of guesses. It scrapes your HTML, infers meaning from your content, and relies on your page being structured properly to be able to parse it effectively. There’s a lot of room for error.

Once a site implements NLWeb, any agent can send a natural language query to a standard “/ask” endpoint and receive a structured JSON response. Your site then answers the agent’s question directly, rather than the agent interpreting your HTML.

Every NLWeb instance is also an MCP server. A site implementing NLWeb automatically becomes discoverable within the broader MCP agent ecosystem without any additional configuration.
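As a rough sketch of the difference, here is what querying an NLWeb-enabled site might look like compared to scraping it. The `/ask` endpoint comes from the NLWeb design; the site name, query parameter, and response shape below are illustrative assumptions, though NLWeb does lean on Schema.org-style typing.

```python
# Sketch of an agent querying a hypothetical NLWeb-enabled site.
# The "/ask" path reflects the NLWeb design; everything else here
# (site, parameter name, response fields) is an assumption.
from urllib.parse import urlencode


def build_ask_url(site: str, query: str) -> str:
    """Compose a natural language query against a site's /ask endpoint."""
    return f"https://{site}/ask?{urlencode({'query': query})}"


url = build_ask_url("example-hvac.com", "Do you repair older furnaces?")

# The site answers with structured JSON instead of HTML, so the agent
# reads typed results rather than inferring meaning from markup.
mock_response = {
    "results": [
        {
            "@type": "Service",  # Schema.org-style typing
            "name": "Furnace repair",
            "description": "Repair-first service for furnaces 15+ years old.",
        }
    ]
}
```

The agent never parses your page layout; it asks a question and gets typed data back.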

Who’s Behind It

NLWeb was created by R.V. Guha, the same person behind RSS, RDF, and Schema.org. (That’s no coincidence.) NLWeb deliberately builds on web standards that already exist, which means a lot of websites are close to NLWeb-ready right now.

Microsoft announced NLWeb at Build 2025 in May 2025. It’s open-source on GitHub. Early adopters include TripAdvisor, Shopify, Eventbrite, O’Reilly Media, and Hearst.

What It Means for Your Brand

For SEOs, NLWeb is a natural extension of work you may already be doing.

Schema markup, clean RSS feeds, and well-structured content are the foundation NLWeb builds on. Sites that have invested in structured data have a head start. Sites that haven’t are harder for agents to work with, but they can easily catch back up by implementing schema markup now.

Structured data already helps search engines, and it can make it easier for agents to understand and interact with your site too. That increases the value of technical SEO work you may have been putting off.

WebMCP

WebMCP is a proposed W3C standard that lets websites declare their capabilities directly to AI agents through the browser.

How It Works

NLWeb makes your content queryable. WebMCP goes one step further: it lets websites declare what actions they support. These actions could include “add to cart,” “book a demo,” “check availability,” and “start a trial.”

These capabilities are declared in a structured, machine-readable format. Instead of an agent scraping your UI and guessing how your checkout works, WebMCP gives it an explicit map, straight from the source (you).
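The real WebMCP proposal is a JavaScript browser API still in W3C incubation, so treat the following purely as an illustration of the idea: a site publishing an explicit, machine-readable map of the actions it supports, each with a schema for its inputs. The action names and fields are hypothetical.

```python
import json

# Hypothetical WebMCP-style capability declaration. The actual proposal
# is a JavaScript API under W3C incubation; this dict only illustrates
# the kind of structured action map a site might expose to agents.
declared_actions = [
    {
        "name": "add_to_cart",
        "description": "Add a product to the visitor's cart.",
        "inputSchema": {
            "type": "object",
            "properties": {
                "product_id": {"type": "string"},
                "quantity": {"type": "integer", "minimum": 1},
            },
            "required": ["product_id"],
        },
    },
    {
        "name": "book_demo",
        "description": "Schedule a product demo.",
        "inputSchema": {
            "type": "object",
            "properties": {"email": {"type": "string"}},
            "required": ["email"],
        },
    },
]

print(json.dumps(declared_actions, indent=2))
```

With a declaration like this, an agent knows exactly which actions exist and what inputs they require, instead of reverse-engineering your UI.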

Who’s Behind It

Google and Microsoft proposed WebMCP, and the W3C Community Group is currently incubating it. Chrome’s early preview shipped in February 2026, with broader browser support expected by mid-to-late 2026.

What It Means for Your Brand

WebMCP is the clearest preview of where agent-website interaction is heading.

Imagine you have two brands with similar products, similar pricing, and similar reviews. The one whose site declares clear, structured capabilities is easier for an agent to act on. The other requires guesswork.

Agents are likely to take the path of least friction, and WebMCP helps you reduce friction to a minimum.

ACP: Agentic Commerce Protocol

ACP is OpenAI and Stripe’s open standard for enabling AI agents to initiate purchases.

How It Works

ACP focuses specifically on the checkout moment. It creates a standardized way for an AI agent to complete a purchase on a merchant’s behalf, handling payment credentials, authorization, and security through the protocol itself.

Before ACP, an agent that wanted to complete a purchase had to navigate each merchant’s unique checkout flow. A different form, a different payment process, and a different confirmation step for every retailer. ACP standardizes this process.

Merchants integrate with ACP through their commerce platform, and once live, checkout becomes agent-executable. The user doesn’t have to do anything except approve.

ACP originally powered ChatGPT’s instant checkout functionality, but that has since been removed by OpenAI in favor of dedicated merchant apps. ACP may still power product discovery within ChatGPT, and may be used within these apps, but things are evolving fast.
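To make the checkout handoff concrete, here is a hedged sketch of the kind of structured order an agent might submit in an ACP-style flow. The field names are illustrative assumptions, not the official ACP schema; the point is that the order is machine-readable and the human's only job is approval.

```python
# Hypothetical ACP-style checkout payload. Field names are illustrative,
# not the official schema: the agent submits a structured order, and the
# user only approves it.
checkout_request = {
    "line_items": [
        {"sku": "CHAIR-400", "quantity": 1, "unit_price_cents": 38900},
    ],
    "currency": "usd",
    "fulfillment": {"type": "shipping", "free_shipping": True},
    "buyer_approved": True,  # the one step the protocol leaves to the human
}


def order_total_cents(req: dict) -> int:
    """Sum the order so the agent can confirm it respects the user's budget."""
    return sum(i["unit_price_cents"] * i["quantity"] for i in req["line_items"])


# The agent can verify constraints (e.g., an under-$400 budget) before paying.
assert order_total_cents(checkout_request) <= 40000
```

Because every field is structured, the agent can validate price, shipping, and approval programmatically before any money moves.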

Who’s Behind It

OpenAI and Stripe launched ACP in September 2025. It’s open-sourced under Apache 2.0, with platform support still expanding.

What It Means for Your Brand

If an agent has shortlisted your product and the user tells it to go ahead and pay, ACP is what allows the agent to complete the transaction. If your brand isn’t integrated with this workflow, you risk the AI agent getting stuck or being unable to complete that purchase.

The agent can recommend you, but it can’t buy from you. That gap will matter more as agentic commerce becomes the norm.

UCP: Universal Commerce Protocol

UCP is Google and Shopify’s open standard for the full agentic commerce journey, from product discovery through checkout and post-purchase.

How It Works

ACP focuses on the checkout moment, while UCP covers the entire shopping lifecycle.

An agent using UCP can discover a merchant’s capabilities, understand what products are available, check real-time inventory, initiate a checkout with the appropriate payment method, and manage post-purchase events like order tracking and returns. All through a single protocol.

UCP is built to work alongside MCP, A2A, and AP2 (Agent Payments Protocol), meaning it plugs into the broader agent infrastructure rather than replacing it.

Merchants publish a machine-readable capability profile. Agents then discover it, negotiate which capabilities both sides support, and proceed.
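The discovery-and-negotiation step above can be sketched as a simple capability intersection. The `/.well-known/ucp` location is mentioned in the protocol's design; the capability names and the negotiation logic below are illustrative assumptions, not the UCP specification.

```python
# Hypothetical UCP-style capability negotiation. The merchant publishes a
# machine-readable profile (nominally at /.well-known/ucp); the agent then
# proceeds with only the capabilities both sides support. Capability names
# here are assumptions for illustration.
merchant_profile = {
    "capabilities": ["discovery", "checkout", "order_tracking", "returns"],
}
agent_supports = {"discovery", "checkout", "order_tracking"}


def negotiate(merchant: dict, agent: set) -> set:
    """Keep only the capabilities the merchant and the agent share."""
    return set(merchant["capabilities"]) & agent


shared = negotiate(merchant_profile, agent_supports)
# Here the pair can handle discovery, checkout, and order tracking, but the
# agent would fall back to a human flow for returns.
```

Decentralized discovery like this is the key architectural contrast with a centrally onboarded model: any agent can find and negotiate with any merchant that publishes a profile.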

Who’s Behind It

Google and Shopify co-developed UCP, with Google CEO Sundar Pichai announcing it at NRF 2026. More than 20 launch partners signed on, including Target, Walmart, Wayfair, Etsy, Mastercard, Visa, and Stripe.

What It Means for Your Brand

When a user asks Google AI Mode to find and buy something, UCP determines whether your brand is in the conversation, and whether the agent can actually complete the transaction.

The machine-readability of your product data, the consistency of your pricing across sources, the clarity of your inventory signals: all of it feeds directly into whether an agent can successfully transact with you.

ACP vs. UCP: The Key Difference

ACP and UCP are often confused, and they do share some similarities, but here’s where they differ:

| | ACP | UCP |
| --- | --- | --- |
| Built by | OpenAI + Stripe | Google + Shopify |
| Scope | Discovery and checkout layers | Full journey: discovery, checkout, and post-purchase |
| Powers | ChatGPT instant checkout and product discovery | Google AI Mode, Gemini |
| Architecture | Centralized merchant onboarding | Decentralized: merchants publish capabilities at /.well-known/ucp |
| Status (early 2026) | Live, wider rollout in progress | Live, wider rollout in progress |

ACP and UCP are complementary, not competing. A brand may eventually support both — one for ChatGPT’s ecosystem, one for Google’s.

For now, the practical question is: which platforms matter most to your customers, and where does your commerce infrastructure make integration easiest? Choose the protocol that aligns with your answer, or use both.

Example of Agentic Search Protocols in Action

These protocols don't operate in isolation. Here's what they might look like working together (the exact mechanics at each stage may differ; this walkthrough is for illustration):

Scenario: A user asks Gemini: “Find me a comfortable task chair under $400 with lumbar support and free shipping. Order the best option.”

Gemini – Order best chair

Step 1: MCP Activates

The agent uses MCP to connect to external tools: product databases, review platforms, retailer inventory feeds. It can query live data rather than relying on cached or trained knowledge.

Step 2: A2A Coordinates

The agent then coordinates with specialist agents published by brands and review platforms via A2A. One evaluates ergonomics reviews. One checks pricing consistency across sources. One verifies free shipping claims against each retailer’s actual policy page.

Step 3: NLWeb Answers Queries Directly

The agents query each retailer's site. Brands with NLWeb implemented respond to the agent's /ask query with structured data: accurate inventory, real-time pricing, and product attributes. Brands without it force the agent to scrape and infer, which slows it down and can get them skipped altogether.
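To picture the difference structured responses make, here's a hedged sketch of an agent hitting an NLWeb /ask endpoint and filtering on clean fields instead of scraping HTML. The endpoint path is from the scenario above; the query parameter name, response shape, and values are invented for illustration.

```python
import json
from urllib.parse import urlencode

# How an agent might call an NLWeb /ask endpoint. The "query" parameter
# name and the response fields below are assumptions for illustration.
def build_ask_url(site: str, query: str) -> str:
    return f"https://{site}/ask?{urlencode({'query': query})}"

url = build_ask_url("example-store.com", "comfortable task chair under $400")

# Sample structured response an NLWeb-enabled site might return
sample_response = json.loads("""
{
  "results": [
    {"name": "ErgoChair Basic", "price": 349, "inStock": true},
    {"name": "ErgoChair Pro", "price": 449, "inStock": true}
  ]
}
""")

# The agent can filter on clean attributes instead of scraping HTML
matches = [r for r in sample_response["results"]
           if r["price"] < 400 and r["inStock"]]
print(matches)
```

The point isn't the filtering logic, which is trivial; it's that the agent only gets to apply it when the site hands back machine-readable attributes in the first place.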

Step 4: WebMCP Declares Available Actions

The “winning” retailer’s site has declared its checkout capabilities via WebMCP. The agent knows exactly what actions are available and how to initiate them without any guesswork.

Step 5: UCP Completes the Transaction

The purchase is executed via UCP, entirely within Google’s AI experience. The merchant’s backend communicates through the standardized API. The user gets an order confirmation, and they never visited a single product page.

Obviously this is the fully agentic scenario. In reality, not every purchase is going to be left entirely to an AI agent.

But even when a human wants to evaluate options before clicking buy, making it as easy as possible for the agent to make recommendations is still good practice. That’s why these protocols are worth paying attention to.

What SEOs Should Do Now

Understanding the protocol layer is step one. Here’s where to focus next:

1. Prioritize Machine-Readable Content Over Volume

Before adding more pages, make sure your existing pages can be parsed cleanly by an agent. That means:

  • Having your pricing in plain text, not locked behind JavaScript drop-downs
  • Using feature lists that don’t require interaction to reveal
  • Including FAQ content that renders server-side
  • Using schema markup on product and organization pages

An agent that can’t read your page can’t recommend or buy your products.
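Schema markup is the most concrete item on that list. As a hedged sketch, here's a minimal Schema.org Product block generated in Python; the product name and values are placeholders, and the resulting JSON would be embedded in a `<script type="application/ld+json">` tag on the page.

```python
import json

# Minimal Schema.org Product markup -- name and values are placeholders.
# Note the price lives in plain data, not behind a JavaScript drop-down.
product_schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Task Chair",
    "offers": {
        "@type": "Offer",
        "price": "349.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

print(json.dumps(product_schema, indent=2))
```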

2. Audit Your Structured Data

NLWeb builds on Schema.org, RSS, and structured content that sites already publish. If you’ve invested in schema markup, you have a head start on NLWeb compatibility.

If you haven’t, this is now a double reason to prioritize it: it improves your search visibility and makes your site more easily queryable by agents.

3. Check Your Consistency Across Sources

Agents verify claims by cross-referencing your site, review platforms, and third-party content. If your pricing page says one thing and your Capterra profile says another, agents can flag the discrepancy and lose confidence in your brand, making the recommendation or purchase less likely.

Audit for cross-source consistency the same way you’d audit NAP consistency in local SEO. It’s the same underlying principle, just for a different kind of crawler.
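That audit can start as something as simple as a diff across sources. A minimal sketch, with made-up sources and prices; in practice you'd pull these from your own pages, review platforms, and syndicated listings.

```python
# Sketch of a cross-source price consistency check. Sources and prices
# here are invented for illustration.
listed_prices = {
    "your-site.com": 49.00,
    "capterra-profile": 49.00,
    "g2-profile": 59.00,  # stale listing -- the discrepancy an agent would flag
}

def find_discrepancies(prices: dict) -> list:
    """Return sources whose price differs from your own site's."""
    canonical = prices["your-site.com"]
    return [src for src, p in prices.items() if p != canonical]

print(find_discrepancies(listed_prices))  # ['g2-profile']
```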

4. Get on the ACP and UCP Waitlists Now

These protocols are in active rollout. Early adopters benefit from lower competition in agent-mediated commerce while the rest of the ecosystem catches up. Join Stripe’s waitlist for ACP access. And join Google’s UCP waitlist too.

For other protocols like MCP, talk to your dev team about making sure your site supports them.

5. Monitor Your AI Footprint as a Regular Practice

Search your brand in ChatGPT, Perplexity, and Google AI Mode. Are agents describing your product accurately? Is your pricing consistent with what they’re surfacing? Are competitors appearing where you aren’t?

This is the new version of checking your SERP presence, and it needs to become a recurring part of your workflow, not a one-time audit.

Understand how your brand is appearing in AI search right now with Semrush’s AI Visibility Toolkit. It shows you where you’re showing up, where you’re behind your rivals, and exactly what AI tools are saying about your brand.

Brand Performance – Backlinko – Key Business Drivers

What’s Next for Agentic Search Protocols?

The protocols we’ve discussed here are already live, but they’re still evolving.

WebMCP is still in early preview. ACP and UCP are mid-rollout. New protocols — for agent payments, agent identity, agent-to-user interaction — are still being drafted and debated.

But the SEOs who understand and implement these protocols correctly are the ones most likely to see success.

Find out where your brand stands right now with our free AI brand visibility checker.

The post The 6 Agentic AI Protocols Every SEO Needs to Know appeared first on Backlinko.


How to Do Keyword Research for SEO

Key Takeaways

  • Keyword research is the process of finding and analyzing the search terms your audience uses to determine which ones are worth targeting and why.
  • Search intent, keyword difficulty, search volume, and topical authority are the core variables that determine whether a keyword is a viable target for your site.
  • AI Overviews now appear in a significant share of searches and measurably reduce click-through rates. 
  • Long-tail keywords carry more weight than ever. They convey highly specific intent and mirror the natural language patterns behind voice and LLM queries.
  • Prompt research is a discipline that sits alongside traditional keyword research. It accounts for how people interact with AI tools, where query structure and user intent differ meaningfully from traditional search.

Have you been tracking your target keywords, only to watch rankings hold steady while organic traffic falls? 

You’re not imagining it. 

According to SEOClarity, AI Overviews (AIOs) appear for 30 percent of U.S. desktop searches, and according to Ahrefs, that presence alone reduces organic click-through rate (CTR) for position-one results by 58 percent.

You might think that makes keyword research for SEO less important now, but that couldn’t be further from the truth. 

Your research still matters. What’s changed is the goal. High-volume terms alone won’t cut it anymore. 

You need to identify which keywords still drive clicks and understand how large language models (LLM) prompts are reshaping the demand signals you rely on.

This guide covers the full research process, updated for how search works today.

What Is Keyword Research?

Keyword research is the process of identifying and analyzing the search terms your target audience types into search engines and LLMs. The goal is to determine which terms are worth targeting based on factors like the intent behind a user’s query.

Intent is the why behind what people search, and it’s an area many teams underinvest in.

Finding a high-volume keyword is easy enough. The harder part is understanding the true intent behind the keyword. That’s the key to making sure your content satisfies that intent better than what’s already ranking.

Why Is Keyword Research Important for SEO?

Creating content without keyword research is a gamble. 

Sure, you might produce something useful. However, without confirming what people are actually searching for and that you have a realistic shot at ranking, you’re spending resources on content that may never be found.

Keyword research solves for three variables that determine whether a keyword is worth pursuing:

  • Search volume tells you how many people are looking for a term each month. A keyword with zero volume isn’t worth a dedicated page. Search volume alone doesn’t close the case, though. The vast majority (94.74 percent) of keywords receive 10 or fewer monthly searches, which means low-volume, high-relevance terms make up most of search demand and can still drive traffic that converts.
  • Keyword difficulty tells you how competitive a keyword is based on the authority of the pages currently ranking for it. This is where many teams misjudge their opportunities. A keyword with a high difficulty score might be within reach for a high-authority domain but completely out of scope for a site with limited backlink equity. Targeting beyond your domain’s current authority just adds to your backlog.
  • Topical authority has become increasingly important over the past two years. Google has gotten a lot better at evaluating whether a domain demonstrates depth and consistency within a topic area. Keyword research should inform a content strategy that builds clusters of related content rather than targeting disconnected terms.

There’s also the AI layer. 

AIOs now appear in a significant share of searches and reshape the value of a keyword depending on whether one shows up. 

Research from Seer Interactive tracking 3,119 informational queries finds that organic CTR dropped 61 percent for queries with AIOs compared to queries without them.

Notice how a more semantic long-tail keyword for the same subject produces a Google AIO versus a product-based search:

Google AI Overview for how to do keyword research

Source: Google.com

Google results for keyword research tools query

Source: Google.com

See how small differences in keywords can drastically change your results? This is why doing proper keyword research is important.

Long-tail keywords are more likely to trigger AIOs, which means users get their answer without clicking through. 

That’s worth knowing, but it’s not a reason to abandon those keywords. Flag them during analysis and see where they fit in your broader strategy.

Why Search Intent Is Important for Keyword Research

Search intent is the underlying goal behind a query. 

Google organizes intent into four broad categories: 

  • Informational (users want to learn something)
  • Navigational (users are looking for a specific site or brand)
  • Commercial (users are comparing options before a purchase)
  • Transactional (users are ready to buy or act)
Four keyword intent types chart by NP Digital

Intent type is a big deal because Google matches results to intent. 

An e-commerce product page won’t rank for a query that Google interprets as informational. A how-to article won’t win for a transactional query where users want a product listing. 

No amount of optimization compensates for a content-to-intent mismatch.

Use keyword research for SEO to verify intent before you commit to a content format. The fastest way to do this is to run the keyword in Google and see what’s ranking. 

If listicles dominate page one, that’s what Google thinks the searcher wants. If product pages own the top positions, a blog post isn’t going to break through.

“What sort of things do they search for during the awareness, research, and transaction phases of their buying journey? Target each of these clearly in different areas of the website by bucketing groups of terms into these different intent groups,” explains William Kammer, Vice President of SEO at NP Accel.

Bucketing your keyword list by intent before mapping keywords to pages is one of the most practical things you can do to make sure your SEO efforts match how your audience actually moves through the funnel.

Prompt Research and AI Visibility

Traditional keyword research focuses on what people type into Google. 

Prompt research focuses on how people interact with AI tools like ChatGPT, Perplexity, and Gemini. The patterns across them are quite different.

When someone searches Google for “email marketing tools,” they enter that short phrase (or a close variant) and scan a list of results. 

When someone asks ChatGPT the same question, the query looks more like this: “I run a small e-commerce business, and I’m looking for an email marketing tool that integrates with Shopify and has automation features. What would you recommend?”

The intent might be the same, but the structure and the specificity are completely different.

LLMs take these longer queries and break them down into three key components:

  • Persona: Defines who the user is and helps the LLM tailor the response to them
  • Context: Identifies the user’s specific needs and narrows the scope of the answer
  • Question: The actual “ask” contained within the query defines the LLM’s output
Anatomy of an AI prompt persona context question

Source: Claude.ai

This structural difference affects your content strategy. 

LLMs synthesize information from multiple sources to generate a response. They evaluate content for credibility and depth. 

A page optimized around a head keyword might rank well in Google but never appear in an LLM response if it doesn’t fully answer the underlying question a user would actually ask.

Prompt research is the practice of identifying the underlying questions within the full, natural-language queries people use when interacting with AI tools and the keyword-related topic clusters those queries reveal.

Think of it as keyword research for a different interface. LLMs use a process called query fan-out, breaking out a single user prompt into multiple sub-queries to retrieve information. That means your content needs to answer not just the surface question but the related ones surrounding it.
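Query fan-out can be pictured as a simple expansion step: one prompt in, several retrieval sub-queries out. This toy sketch uses invented, fixed templates; real models generate sub-queries dynamically from the prompt's persona, context, and question.

```python
# Toy illustration of query fan-out. The templates are invented for
# illustration -- actual LLMs derive sub-queries dynamically.
def fan_out(topic: str) -> list[str]:
    templates = [
        "best {t} for small business",
        "{t} pricing comparison",
        "{t} integrations",
        "{t} reviews",
    ]
    return [tpl.format(t=topic) for tpl in templates]

for q in fan_out("email marketing tools"):
    print(q)
```

Content built for this pattern answers the surface question and the predictable sub-questions around it on the same page or within the same cluster.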

A quarter of search volume has already shifted toward AI-driven chatbots and answer engines, according to Gartner. 

That shift is gradual, but it’s not stopping. Get ahead of it now by building prompt research into your workflow alongside traditional keyword research.

How to Do Keyword Research

Good keyword research starts with the same core process regardless of where you’re starting. Here’s how to work through it, whether you’re building a content strategy from scratch or auditing an existing one.

Six-step keyword research process by NP Digital

1. Revisit Your SEO Goals

Before you open a keyword tool, get clear on what you’re trying to accomplish. Your keyword strategy should follow from your business goals, not the other way around.

A site prioritizing revenue will have a different keyword mix than one focused on growing organic traffic volume. A brand building topical authority in a new vertical needs different content targets than one trying to hang on to existing rankings. 

Your objectives will dictate the metrics you optimize for and which parts of the keyword funnel you invest in first.

Three common goal types shape keyword priorities:

  • Conversion-focused goals call for commercial and transactional keywords. These terms sit at the bottom of the funnel and carry strong purchase or sign-up intent. They also tend to have higher keyword difficulty. That means traffic volumes are often lower, but the quality is high.
  • Traffic-growth goals point toward informational keywords with higher search volumes. These terms attract users earlier in the funnel and are generally easier to rank for, though they convert at lower rates.
  • Topical authority goals are where keyword clusters shine. These are groups of semantically related terms that together signal depth of expertise to Google. The cluster approach is a longer-term play, but it’s often the only sustainable way to rank for the high-difficulty terms in competitive verticals.

Keep your competition in mind as you match keywords to goals, too. 

If a transactional keyword is out of reach for your domain right now, targeting it could hurt your conversion goals and waste resources. A smarter move is finding long-tail keywords around the same seed and intent as a backdoor into that topic.

2. Keyword Discovery

Keyword discovery is where you build a broad list of potential targets before narrowing it down during analysis. A lot of teams spend too much time here without a clear method. Here’s one that works.

Start by mapping your core topic areas from your audience’s perspective. Consider their pain points and the industry terminology they naturally use. These become your seed keywords: the starting points you’ll expand through tools.

From there, enter your seed keywords into a keyword tool. 

My SEO tool, Ubersuggest, has a Keyword Ideas feature that gives you dozens of variations to shape the focal point of your content. 

Here’s what it delivers for the seed keyword “hiking boots”:

Ubersuggest Keyword Ideas results for hiking boots

Source: https://app.neilpatel.com/en/ubersuggest/keyword_ideas/

Run enough seed keywords through the tool to build a list of hundreds of candidates before you start cutting.

Your competitors are a valuable third-party source, too. Pull competitor domains into Ubersuggest’s Keywords by Traffic feature to see which keywords are driving traffic to their pages. This surfaces real gaps in your strategy rather than theoretical ones.

Here’s what you get when you search my domain, neilpatel.com.

Ubersuggest Keywords by Traffic for neilpatel.com

One caveat to note is that tools may not yet have reliable volume data for trending or emerging topics. 

Jonathan Hoffer, SEO Manager at NP Digital, notes that “in the case of new trends, they might not appear in a tool, so you’ll have to check social media or forums to see if something is trending.”

Long-Tail Keywords

Long-tail keywords are search phrases of three or more words. They carry lower search volumes than head terms, but they’re more specific. That means they face less competition and tend to attract users with clearer intent, which often translates to higher conversion rates.

“Hiking boots skechers” illustrates the point well. The difficulty score is lower than our seed keyword phrase, meaning it’s easier to rank for. 

As you can see below, Ubersuggest rates “hiking boots” 39 in SEO difficulty vs. 27 for “hiking boots skechers.”

Ubersuggest SEO difficulty hiking boots
Ubersuggest SEO difficulty hiking boots skechers

That keyword is still valuable, though, because someone typing “hiking boots skechers” probably knows exactly what they want to buy. That means the odds are good that they’re close to a purchasing decision. 

A page that directly addresses that particular brand is far more likely to rank and convert than a generic “hiking boots” page ever would for that searcher.

The value of long-tail keywords goes beyond traditional SEO.

For starters, voice search queries are naturally long-tail. They’re phrased the way people speak in real life rather than in typed shorthand.

Someone typing might enter “hiking boots waterproof.” The same person using voice search asks, “What are the best waterproof hiking boots for wide feet?”

LLM prompts follow the same conversational pattern. A user asking an AI assistant a question phrases it the way they’d phrase it to a knowledgeable colleague. 

Targeting long-tail keywords in these cases gives you the best shot at matching how your audience searches.

Local Keywords

Local keyword research follows the same core process as broader keyword research. There’s one important distinction, though: Potential competitors and search intent are filtered through geography. 

Someone searching “pizza delivery” in Santa Monica isn’t looking for the same results as someone searching the same term in Chicago. Both are looking to get pizza delivered, yes, but the keyword effectively becomes a different target once location comes into play.

Don’t limit yourself to a single location modifier. 

A pizzeria in Santa Monica can target “pizza delivery Santa Monica” and neighborhood-level variants like “pizza near the pier.” Service-specific combinations like “late night pizza delivery Santa Monica” work, too.

Each geographic variation is a keyword opportunity in its own right.

Local keywords tend to have lower difficulty than non-local ones, but that doesn’t make them uniformly easy. 

Local rankings don’t run on content alone. Your Google Business Profile and the consistency of your name, address, and phone number (NAP) across the web factor in, too.

3. Keyword Analysis

Keyword target criteria checklist by NP Digital

By the end of discovery, you’ll have a long list of potential keywords. Keyword analysis is how you cut it down to a working set.

The primary metrics to evaluate are search volume, keyword difficulty, and search intent alignment.

A tool like Ubersuggest lets you organize all your candidates in a Keywords List and sort by these variables simultaneously, which is faster than evaluating them one at a time.

Ubersuggest Keyword Lists for activewear research

The right search volume floor depends on your goals. Don’t automatically filter out low-volume keywords. A term with 50 monthly searches and clear commercial intent can be worth more than a 5,000-volume informational keyword with no realistic conversion path.

For keyword difficulty, calibrate your threshold to your domain authority. 

Sites with limited backlink equity are usually better off focusing on terms with difficulty scores under 40. Higher-authority domains have more room to compete for scores of 50 and above. What counts as realistic is site-specific.

After sorting by the numbers, run a Google search on each shortlisted keyword and analyze the search engine results page (SERP) directly. Your goal is to answer two questions:

  • Does the content format match what you can produce? If every top-ranking result is a detailed comparison guide and you’re planning a product page, that’s an intent mismatch.
  • Does your domain belong in this conversation? Look at who’s ranking. If the top results are all major publications with significantly more backlink equity than your site has, be realistic about your timeline and consider adjusting your target keyword.

You should also consider whether your target keyword generates an AIO. An AIO’s presence doesn’t make a keyword a bad target, but it does change how you measure success. For those terms, landing an AIO citation matters as much as ranking position.

Nikki Brandemarte, Sr. SEO Strategist and Local SEO Team Lead at NP Digital, offers this guidance: “Pay attention to content coverage for specific topic areas. For example, are your SERP competitors publishing multiple blogs that explain the basics of a topic, or a single comprehensive guide? This can help pinpoint gaps in topical authority.”

By the end of analysis, every keyword on your shortlist should clear these bars:

  • Measurable search volume
  • Relevant to your brand or industry
  • A difficulty score your domain can realistically compete for
  • Clear search intent alignment
  • A content format your site can actually produce
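Those bars translate naturally into a filter you can run over your candidate list. A minimal sketch, with placeholder thresholds and keyword data; calibrate the difficulty ceiling to your own domain.

```python
# Sketch of the shortlist criteria as a filter. Thresholds and keyword
# data are placeholders, not recommendations.
def clears_the_bar(kw: dict, max_difficulty: int = 40) -> bool:
    return (
        kw["volume"] > 0                        # measurable search volume
        and kw["relevant"]                      # relevant to your brand
        and kw["difficulty"] <= max_difficulty  # realistic to compete for
        and kw["intent_match"]                  # clear intent alignment
        and kw["format_feasible"]               # a format you can produce
    )

candidates = [
    {"term": "hiking boots skechers", "volume": 880, "difficulty": 27,
     "relevant": True, "intent_match": True, "format_feasible": True},
    {"term": "hiking boots", "volume": 74000, "difficulty": 39,
     "relevant": True, "intent_match": False, "format_feasible": True},
]

shortlist = [k["term"] for k in candidates if clears_the_bar(k)]
print(shortlist)  # ['hiking boots skechers']
```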

4. Keyword Targeting

Once you have a refined keyword list, you need to decide which keywords to pursue first and which URLs to target them with. 

For prioritization, start with keywords that combine low difficulty with reasonable volume. These are your highest-probability wins. They won’t always be the most valuable keywords on your list, but early traction validates the strategy and gives you ranking data to learn from.

From there, move to high-intent commercial keywords. These carry more difficulty but have the most direct line to revenue. A few hundred visitors from a well-targeted commercial keyword can generate more return than thousands of visits from an informational term.

Finally, layer in top-of-funnel, high-volume informational terms. These are the awareness plays. They’re hard to rank for and have longer time horizons, but they’re important for building topical authority over time.

When assigning keywords to pages, be deliberate about avoiding keyword cannibalization.

Cannibalization happens when two or more pages on your site target the same or nearly identical keywords. This splits ranking signals, creating competition between your own content. 

It’s one of the more common structural problems in mature content programs. Audit for it before you start mapping new keywords to existing pages. If you find two pages competing for the same term, consolidate, redirect, or clearly differentiate the content before adding more.

5. Keyword Optimization

With your keyword targets set, optimization is how you signal relevance to search engines without sacrificing content quality. Here’s a rundown of what current best practices look like.

  • Title tag and H1: Your primary keyword belongs in both. This remains one of the most consistent on-page ranking signals. According to Rankability, 93.5 percent of page-one results use their target keyword in the title or H1.
  • URL slug: Use a clean, keyword-inclusive URL. Research shows that URLs that include the target keyword see up to 45 percent higher click-through rates than those without.
  • Meta description: Your meta descriptions don’t directly influence rankings, but they do influence clicks. The goal is to include the keyword naturally and give searchers a clear reason to click.
  • Body copy: Use your keyword and related semantic terms throughout, but write it for the reader first. Resist the urge to stuff keywords. Density has declined as a ranking factor. Pages in the top 10 today have significantly lower keyword density than those that ranked well even a few years ago. 
  • Image alt text: Include your keyword in at least one image’s alt attribute on the page. Alt text serves accessibility and SEO purposes.
  • Structured data: Schema markup helps search engines and AI systems understand the content type and context of your page. For competitive keywords, structured data improves your eligibility for featured snippets and AIO citations.
  • Content completeness: For any keyword you’re seriously targeting, your content needs to address the topic more thoroughly than what’s currently ranking. That doesn’t mean longer for its own sake. Your piece can be shorter and still outrank what’s currently there if yours is more helpful.

For highly competitive keywords, link building to the specific page will almost certainly be part of the equation. Rankings alone won’t hold in a tough vertical without external authority pointing at the page.

6. Keyword Tracking

Systematically tracking your keyword research is what separates good SEO results from great SEO. 

Rankings change, and competitor or algorithm adjustments can swiftly change the playing field. A tracking system catches those changes before they become problems.

Typically, keyword research tools include a rank-tracking feature that monitors your keyword positions daily and displays ranking distribution or visibility trends across your tracked keyword set. 

Here’s what Ubersuggest’s Rank Tracking feature looks like:

Ubersuggest Rank Tracking dashboard keyword SEO

You can track performance separately by desktop and mobile, which is a big plus given how differently Google’s SERPs behave across devices.

The core metrics to monitor are:

  • Ranking position
  • Organic impressions via Google Search Console
  • CTR

CTR is especially worth watching for any keywords where AIOs are present. 

A stable ranking alongside a declining CTR is a signal that an AIO has entered the picture, but don’t panic. This is less a traffic problem and more an opportunity for content optimization. You may be able to refresh that page with long-tail keywords that better align with AI search.
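That "stable rank, falling CTR" signature is easy to flag programmatically. A minimal sketch with made-up Search Console-style data and an arbitrary 30 percent drop threshold; tune both to your own baselines.

```python
# Flag keywords whose ranking held steady while CTR fell sharply -- a
# common signature of an AIO appearing. Data and thresholds here are
# illustrative; pull real values from Google Search Console.
def flag_possible_aio(rows: list, ctr_drop: float = 0.3) -> list:
    flagged = []
    for r in rows:
        rank_stable = abs(r["rank_now"] - r["rank_prev"]) <= 1
        ctr_fell = r["ctr_now"] < r["ctr_prev"] * (1 - ctr_drop)
        if rank_stable and ctr_fell:
            flagged.append(r["keyword"])
    return flagged

data = [
    {"keyword": "how to do keyword research", "rank_prev": 3, "rank_now": 3,
     "ctr_prev": 0.12, "ctr_now": 0.05},   # stable rank, CTR halved: flag it
    {"keyword": "keyword research tools", "rank_prev": 5, "rank_now": 9,
     "ctr_prev": 0.08, "ctr_now": 0.04},   # rank fell too: a different problem
]

print(flag_possible_aio(data))  # ['how to do keyword research']
```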

For broader keyword programs, tracking AI citation frequency is increasingly worth adding to your reporting stack. Brands cited in AIOs earn 35 percent more organic clicks and 91 percent more paid clicks than brands that aren’t cited on the same queries, according to Seer Interactive. 

Citation is now a meaningful key performance indicator (KPI) alongside position.

The Prompt Research Process: Is It Any Different?

The short answer is yes. Prompt research differs somewhat from traditional keyword research, but the fundamentals overlap.

Prompt and keyword research share the same goal, though: to understand what your audience is looking for and create content that satisfies that need. 

The difference is the interface.

LLM users don’t type compressed keyword strings. They ask full questions and often include specific constraints. 

The prompt below breaks down how each component works together. Notice how far it goes beyond a simple keyword search:

Structured AI prompt example with labeled components

Source: https://www.thevccorner.com/p/guide-writing-powerful-ai-prompts

These added layers change what a good target keyword looks like.

Here’s a practical approach to building prompt research into your workflow:

  • Start with your existing keyword list. Take your top commercial and informational keywords and expand them into full-sentence questions. “Email marketing tools” becomes “What’s the best email marketing tool for a small business that already uses Shopify?” 
  • Mine community forums and Q&A platforms. Reddit threads and Quora discussions show you the actual language your audience uses when asking for help. These tend to be longer and more detailed than keyword tool data, and that specificity is precisely what LLM prompts look like.
  • Use your keywords in LLMs directly. Type your target topics into ChatGPT or Perplexity and observe their results and how they phrase follow-up questions. Those follow-up questions represent the sub-queries the model identified as relevant, which are also the content gaps your pages can fill.
  • Monitor brand mention prompts. Tools like Profound track which prompts lead AI engines to mention your brand or your competitors, and how those mentions change over time. This is the closest thing to rank tracking for LLM visibility.

The content strategy implication is to prioritize completeness. 

Content scoring highly on semantic completeness appears in AI-generated answers at a rate 340 percent higher than content that scores lower, according to recent AIO research data. 

LLMs reward content that fully addresses a topic, which is the same thing Google has been rewarding since the Helpful Content updates. The convergence is not coincidental.

Bonus: More Ways to Find Keywords

As your skills grow or you take on more competitive keywords, the tools below are worth adding to your stack to spot opportunities you might otherwise miss. You’ve already seen a little of what Ubersuggest can do, so let’s start there.

Ubersuggest

One sometimes-overlooked part of Ubersuggest is the Keyword Ideas feature’s ability to filter keyword results by suggestions, related terms, questions, prepositions, and comparisons. 

Each filter uncovers a different angle on how people search for your topic (as shown in our hiking boots example).

Ubersuggest keyword filter tabs for hiking boots

The Questions modifier is particularly useful for content planning.

Ubersuggest keyword questions filter hiking boots

The Questions filter alone gives you 120 variations for “hiking boots.” They range from informational queries like “how long do hiking boots last” to commercial ones like “where to buy hiking boots near me.” 

Each is a potential content angle with its own intent and difficulty profile.

The filter shows you exactly what people are asking about a keyword, giving you ready-made content angles and FAQ targets.

Ahrefs and Semrush

Ahrefs’ Keywords Explorer provides full SERP analysis in one dashboard. 

One feature worth highlighting is the AI visibility filter in Ahrefs’ Site Explorer, which shows exactly which of your ranking keywords are currently triggering AIOs. That filter turns AIO exposure into a specific, actionable list of keywords you can monitor more closely.

Semrush has integrated AI-specific research tools into its platform, too. 

Its tracking functionality enables you to monitor your brand’s performance across ChatGPT, Perplexity, and Google’s Search Generative Experience (SGE) simultaneously. Plus, its AI sentiment feature tells you whether AI-generated responses mention your brand positively or negatively. 

For teams building out an AEO strategy alongside traditional SEO, that cross-platform visibility is difficult to replicate manually.

Many experienced SEOs use multiple tools in parallel, cross-referencing data from Ubersuggest, Ahrefs, and Semrush to build a more complete picture. Because volume figures are estimates and can vary by platform, using multiple tools reduces the risk of making targeting decisions based solely on a single platform’s data.

AnswerThePublic

AnswerThePublic generates question-based keyword ideas from a seed keyword. Enter a topic, and the tool maps the questions people are asking about it, organized by preposition and question type.

The output is useful for building FAQ sections and identifying informational content angles that pure volume-based tools can’t see. 

For example, if you search for “social media marketing,” AnswerThePublic returns questions like “what are the best social media marketing strategies?” and “how to measure ROI in social media marketing?”

AnswerThePublic keyword map social media marketing

Both are strong long-tail targets with real search demand.

LLMs and AI Tools

AI tools have become genuinely useful for scaling keyword research, particularly in the brainstorming and clustering phases.

With Claude or ChatGPT, you can rapidly expand a seed keyword into related angles and intent clusters. Use the persona component of your prompt to make the model think like your target audience.

For example, you might ask an LLM to generate the questions a small business owner would ask before buying a product. Or you might dig into the objections they’d have at each stage of the purchase process. 

LLM output isn’t a replacement for tool-based volume data, but it’s a fast way to surface angles you wouldn’t have thought to search for.

Here’s a sample query I ran in Claude: “What questions would someone ask before buying email marketing software?”

Claude AI keyword brainstorm for email marketing

Source: Claude.ai

This is just a small snippet of the response. The LLM generated questions across a variety of categories, covering the entire buying journey someone might go through when purchasing email marketing software. 

Doing the same could provide you with long-tail keyword opportunities to reach every segment of your target audience exactly where they are. 
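If you want to run this kind of brainstorm across many topics, you can template the persona prompt. A minimal sketch, where the persona, topic, and stage wording are all illustrative assumptions:

```python
# Sketch: fold a buyer persona into a brainstorming prompt so the model
# answers from your audience's point of view. Persona, topic, and stage
# are illustrative placeholders; paste the result into ChatGPT, Claude,
# or any LLM API and split the reply into one question per line.

def build_brainstorm_prompt(persona: str, topic: str, stage: str) -> str:
    return (
        f"You are {persona}. List the questions you would ask "
        f"at the {stage} stage before buying {topic}. "
        "One question per line, no numbering."
    )

prompt = build_brainstorm_prompt(
    persona="the owner of a 10-person ecommerce business",
    topic="email marketing software",
    stage="consideration",
)
print(prompt)
```

Swap the stage for each step of the funnel (awareness, consideration, decision) to cover the whole journey.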

Semrush’s AI-powered keyword clustering tools take this further by grouping related keywords by semantic meaning and search intent. Running your keyword list through clustering before mapping keywords to pages can reveal topical gaps and consolidation opportunities that spreadsheet-based sorting misses.

Of course, you need to keep these tools’ limitations in mind. They’re strong at synthesis and pattern recognition but weaker at providing reliable volume and difficulty data. Use them alongside your keyword tools, not instead of them.
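To see what clustering does conceptually, here is a deliberately naive, stdlib-only sketch that groups keywords by shared tokens. Real clustering tools rely on embeddings and SERP overlap; the stopword list and overlap threshold here are assumptions for illustration only:

```python
# Toy sketch of keyword clustering: greedily group keywords that share
# enough meaningful tokens. Stopwords and the threshold are illustrative;
# production tools use embeddings and SERP overlap instead.

STOPWORDS = {"for", "the", "a", "to", "in", "of", "how", "what", "is"}

def tokens(kw: str) -> frozenset[str]:
    return frozenset(w for w in kw.lower().split() if w not in STOPWORDS)

def cluster_keywords(keywords: list[str], min_shared: int = 3) -> list[list[str]]:
    clusters: list[list[str]] = []
    for kw in keywords:
        for cluster in clusters:
            # Compare against the cluster's first keyword as its "seed."
            if len(tokens(kw) & tokens(cluster[0])) >= min_shared:
                cluster.append(kw)
                break
        else:
            clusters.append([kw])
    return clusters

kws = [
    "best hiking boots for wide feet",
    "hiking boots wide feet reviews",
    "how to clean hiking boots",
    "clean hiking boots at home",
]
print(cluster_keywords(kws))
```

Even this crude version separates the "wide feet" purchase cluster from the "cleaning" care cluster, which is the decision that matters when mapping keywords to pages.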

Search Suggestions

Search engines themselves are a free, always-up-to-date resource for keyword research. Google autocomplete, the People Also Ask box, and the related searches section at the bottom of the SERP all surface real query patterns from real users.

Google autocomplete is particularly useful for long-tail discovery. Enter your seed keyword and add a letter:

Google autocomplete suggestions for hiking boots

Source: Google.com

Google will suggest several popular phrases, each of which is a data point about what people search with that keyword as a root. 
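The letter-by-letter trick can also be scripted. The sketch below only builds the query URLs; the suggest endpoint shown is unofficial (widely used, but undocumented and subject to change or rate limits), so treat it as an assumption and verify results in the browser:

```python
# Sketch: build one autocomplete lookup per letter of the alphabet,
# mirroring the manual "seed keyword + a, + b, ..." technique.
# The suggest endpoint is unofficial and may change; treat it as an assumption.

import string
from urllib.parse import urlencode

def autocomplete_urls(seed: str) -> list[str]:
    base = "https://suggestqueries.google.com/complete/search"
    urls = []
    for letter in string.ascii_lowercase:
        query = urlencode({"client": "firefox", "q": seed + " " + letter})
        urls.append(base + "?" + query)
    return urls

urls = autocomplete_urls("hiking boots")
print(urls[0])
```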

People Also Ask (People also search for) displays related questions that Google considers topically connected to your query, often revealing adjacent content opportunities worth targeting independently.

Google People Also Search For hiking boots results

Source: Google.com

FAQs

What is keyword research?

Keyword research is the practice of finding and analyzing search queries to identify which ones are worth targeting with your content. It involves evaluating search volume, keyword difficulty, and the intent behind each query to build a targeted list of terms that align with your site’s goals and domain authority.

How do I do keyword research?

Start by defining your goals, then build a list of seed keywords based on your audience’s pain points and your core topic areas. Use a tool like Ubersuggest to expand that list and analyze candidates by search volume, difficulty, and intent. Audit the SERP directly for your top candidates before finalizing your targets. Then map keywords to specific pages, create or optimize content, and track performance over time.

Can I do keyword research for free?

Yes. Ubersuggest and AnswerThePublic both offer free keyword data. Google Search Console is also free. If you’re not ready to pay for a tool yet, you can use Google’s built-in search features like autocomplete and People Also Ask (People also search for). Free tools may have volume and feature limitations, but they’re more than sufficient for early-stage research or smaller sites. Paid plans unlock more comprehensive data that you may want as you progress.

What do I do after keyword research?

After completing keyword research, map your keywords to specific URLs, either existing pages you’ll optimize or new content you’ll create. Prioritize by intent and difficulty, then write or update content to match the search intent behind each keyword. Publish, build links where needed, and track performance in a rank tracker. Keyword research isn’t a one-time task. Revisit it regularly as your domain authority grows and as search behavior evolves.

Conclusion

Keyword research has always been the foundation of SEO. 

What’s changed is the complexity of the environment you’re researching. AIOs have changed how clicks are distributed. LLMs have introduced a layer of search behavior that operates under different rules entirely. And topical authority now matters as much as optimizing individual keywords.

The teams navigating this well aren’t researching keywords in isolation anymore. 

They’re combining traditional keyword analysis with prompt research and monitoring AI citation alongside ranking position. They then use that research to build content strategies around topic clusters rather than individual terms.

The process I’ve outlined here covers all that. If you want to go deeper on implementation, my complete SEO checklist walks through how keyword research connects to the rest of your optimization program. 

If you’d rather have an expert team handle the execution, NP Digital’s SEO consulting services are built for exactly this kind of work and dive into keyword research for your site using the process above.


New Google Maps features: Local Guides redesign, AI captions, photo sharing

Google Maps AI updates

Google is rolling out new Google Maps features that make it easier to contribute photos, reviews, and local insights, while adding Gemini-powered caption suggestions.

Local Guides redesign. Contributor profiles are getting more visibility. Total points now appear more prominently, Local Guide levels are easier to spot, and badge designs have been refreshed.

  • Top contributors will also stand out more in reviews with new gold profile indicators.

AI caption drafts. Google is also introducing AI-generated caption drafts. Gemini analyzes selected images and suggests text you can edit or discard.

  • Caption suggestions are available in English on iOS in the U.S., with Android and broader global expansion planned.

Media sharing. Google Maps now shows recent photos and videos directly in the Contribute tab, making uploads faster.

  • If you enable media access, Google Maps will suggest images from your camera roll that are ready to post with a tap.
  • This feature is now live globally on iOS and Android.

Why we care. Google is making it easier to create and scale fresh local content, which can directly affect rankings and visibility. At the same time, stronger contributor signals may influence which reviews users trust and which businesses win clicks.



5 priorities for lead gen in AI-driven advertising


Many of today’s PPC tools were designed with ecommerce in mind. That doesn’t mean lead gen can’t take advantage of them, but it does mean more intentional application is required.

Lead gen with AI still requires a creative approach, and many conventional ecommerce tools still apply — but not always in the same way.

Here are the priorities that matter most for succeeding with lead gen using AI.

Disclosure: I’m a Microsoft employee. While this guidance is platform-agnostic, I’ll reference examples that lean into Microsoft Advertising tooling. The principles apply broadly across platforms.

1. Fix your conversion data first

This is the single most important thing you can do as AI becomes more embedded in media buying.

Between evolving attribution models, privacy changes, different platform connections, and shifts in how consumers engage with brands, it’s reasonable to ask whether your data is still telling an accurate story.

Start by auditing your CRM or lead management system. Make sure the data you pass back to advertising platforms is clean, consistent, and intentional.

In most cases, data issues stem from human choices rather than technical failures. Still, there are a few technical checks that matter:

  • Confirm conversions are firing consistently.
  • Regularly review conversion goal diagnostics.
  • Validate that lead status updates and downstream signals are actually flowing back.

If AI systems are learning from your data, you want to be confident that the feedback loop reflects reality.
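The first check above ("confirm conversions are firing consistently") can be partly automated against an export of daily conversion counts. A minimal sketch, assuming a hypothetical date-to-count export shape; the lookback window is illustrative:

```python
# Sketch: flag recent days with zero recorded conversions, which more
# often signal a broken tag than a genuinely quiet day. The data shape
# (date -> count) is a stand-in for your platform's actual export.

from datetime import date, timedelta

def find_silent_days(daily_counts: dict[date, int], lookback: int = 7) -> list[date]:
    latest = max(daily_counts)
    window = [latest - timedelta(days=i) for i in range(lookback)]
    return [d for d in window if daily_counts.get(d, 0) == 0]

counts = {
    date(2025, 3, 1): 4, date(2025, 3, 2): 6, date(2025, 3, 3): 0,
    date(2025, 3, 4): 5, date(2025, 3, 5): 0, date(2025, 3, 6): 3,
    date(2025, 3, 7): 2,
}
print(find_silent_days(counts))  # dates worth investigating
```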

Dig deeper: How to make automation work for lead gen PPC


2. Make landing pages easy to ingest and easy to understand

Lead gen campaigns often have multiple conversion paths, which can be helpful for users. But from an AI perspective, ambiguity is a risk.

Your landing pages should make it clear:

  • What action you want the user to take.
  • What happens after action is taken.
  • Which conversions matter most.

Redundant or unclear conversion paths can confuse both users and systems. If AI crawlers detect that anticipated outcomes are inconsistent, they may begin to question the accuracy of what your site claims to do. That can limit eligibility for certain placements.

Language clarity matters just as much. Avoid jargon, eccentric terminology, or internally focused phrasing when describing your services. Clear, plain language makes it easier for AI systems to understand who you are, what you offer, and how to match creative to the right audience.

A practical test: Put your website content into a Performance Max campaign builder and review how the system attempts to position your business. If you agree with the messaging, imagery, and framing, your site is likely easy to understand. If not, that feedback is valuable.

You can also paste your site content into AI assistants and ask them to describe your business and services. If the response aligns with reality, you’re in a good place. If it doesn’t, that’s a signal to refine your content.

Behavioral analytics tools, like Clarity, can help you understand exactly how humans are engaging with your site and how often AI tools are crawling it.

Dig deeper: AI tools for PPC, AI search, and social campaigns: What’s worth using now

3. Budget across the entire funnel

Lead gen has always struggled with long conversion cycles. That challenge doesn’t go away, and in some ways, it becomes more pronounced.

AI-driven systems increasingly weigh sentiment, visibility, and contextual signals, not just last-click performance. If all of your budget and reporting focuses on immediate traffic, you may miss meaningful impact higher in the funnel.

That means:

  • Budgeting intentionally across awareness, consideration, and conversion.
  • Applying the right metrics at each stage.
  • Looking beyond traffic as the primary success indicator.

In many lead gen models, citations, qualified leads, and eventual revenue tell a more accurate story than clicks alone.

Dig deeper: Lead gen PPC: How to optimize for conversions and drive results

4. Clean up your feeds and map data

You may not think you have a “feed” in your lead gen setup, but that absence can put you at a disadvantage.

Feeds help AI systems understand your business structure, services, and site architecture. Even if you don’t have hundreds of pages, a simple, well-maintained feed in an Excel document can provide valuable context when uploaded to ad platforms.

Example of a feed for lead gen

Feed hygiene matters. Use clear, specific columns. Follow platform standards for text, images, and categorization. Make sure all relevant categories are represented.
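A basic hygiene check like this can be scripted before every upload. A minimal sketch, assuming hypothetical column names; match them to your ad platform's actual feed specification:

```python
# Sketch: a minimal feed-hygiene check for a lead gen service feed.
# Column names are illustrative assumptions, not any platform's spec.

import csv
import io

REQUIRED = ["service_name", "category", "description", "final_url"]

def validate_feed(csv_text: str) -> list[str]:
    """Return human-readable problems: missing columns or empty cells."""
    reader = csv.DictReader(io.StringIO(csv_text))
    problems = [f"missing column: {c}" for c in REQUIRED if c not in (reader.fieldnames or [])]
    for i, row in enumerate(reader, start=2):  # row 1 is the header
        for col in REQUIRED:
            if col in row and not (row[col] or "").strip():
                problems.append(f"row {i}: empty {col}")
    return problems

feed = """service_name,category,description,final_url
Panel Upgrades,Electrical,Upgrade older panels to 200A,https://example.com/panels
Furnace Repair,HVAC,,https://example.com/furnace
"""
print(validate_feed(feed))
```

An empty result means the feed passes the basic checks; anything else is worth fixing before the platform, or an AI system, ingests it.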

On the local side, claim and maintain all map profiles. Ensure information is accurate and consistent. If you use call tracking in map placements, review your labeling carefully. AI systems may pull data from map listings or your website, and mismatches can create attribution confusion, particularly for phone leads.

Account for potential AI-driven inflation in reporting, whether you’re looking at map pack data, direct reporting, or site-level performance. Any changes you make should also be reflected correctly in your conversion goals.

5. Pressure-test your creative for clarity

Creative assets may be mixed, matched, or shortened using AI. In some cases, you may only get one headline to explain who you are and why someone should contact you.

If your value proposition requires three headlines, or a headline plus a description, to make sense, that’s a risk.

Review your existing creative and identify assets that stand on their own. You should have at least some options where a single headline clearly communicates:

  • What you do
  • Who you help
  • Why it matters

If that clarity isn’t there, AI-driven placements can quickly become confusing.

Dig deeper: Why creative, not bidding, is limiting PPC performance

The fundamentals that still move the needle

Lead gen today doesn’t need to be complicated.

Most of the actions that matter today are things strong advertisers already do: clean data, clear messaging, intentional budgeting, and disciplined execution. What changes is how attribution may shift, and how much weight systems place on different signals.

The fundamentals still win. The difference is that AI makes weaknesses more visible and strengths more scalable.

If you focus on clarity, accuracy, and alignment across your funnel, you give both people and systems the best possible chance to understand your business — and that’s where sustainable performance comes from.



The Mad Men era of SEO: Why AI is shifting search to persuasion


For most people, “Mad Men” means the TV show. But the phrase points to something more specific: Madison Avenue in the 1950s and ‘60s, when agencies grew brands through persuasion, positioning, and earned trust in a world of scarce media channels and powerful gatekeepers. If you wanted attention, you bought your way in, then made your product the obvious choice.

When the internet arrived and Google made the chaos navigable, an entire industry was built on getting brands found. Search and SEO became one of the most commercially valuable disciplines in marketing.

That model isn’t disappearing. But something new is taking shape on top of it — and most of the industry is still using the wrong language to describe what’s happening.

AI is exposing everything SEO has neglected. Brands that win recommendations from AI systems won’t do so by publishing more content. They’ll win through positioning, persuasion, and corroborated proof.

In other words, they’ll win the way Madison Avenue always did.

SEO was never really about content

One of the strangest things about the current industry conversation is how many people talk as if the job of SEO is to create content. It isn’t. Not for most businesses.

If you’re a publisher, content is the product. Traffic is the commercial engine. But for most brands, content never did what people thought.

Early on, people wrote content for customers, and it worked. Then it changed. Content became a keyword vehicle. “Get people to our site” replaced good marketing comms.

Traffic became a proxy for exposure. It worked because search rewarded retrieval: type a query, get a page, get a click. All you needed to sell that model was the belief that any traffic was good traffic, and that it somehow led to revenue your agency could keep delivering.

That model is now under serious pressure. 

Google and ChatGPT are increasingly taking the click. Every serious large language model is trying to satisfy informational intent before the user reaches the source. They aren’t trying to be better search engines. They’re trying to make search engines unnecessary — and that’s the entire point.

There’s too much information on the web. People don’t want to open 10 tabs and read five near-identical blog posts to find a basic answer. They want the answer. The AI systems exist precisely to give it to them.

So if informational retrieval gets absorbed into the interface, what remains? Marketing. That’s the part many SEOs are still not fully grappling with.

Dig deeper: The three AI research modes redefining search – and why brand wins


From place to preference

The cleanest way to understand this shift is through the “4 Ps” of marketing: product, price, place, and promotion.

Traditional SEO has been, almost entirely, a place discipline. It’s been about getting your products, services, or information onto the digital shelf when people go looking.

Keyword rankings are shelf position. Paid search is just a more expensive version of the same principle. In commercial search, you pay for premium placement in a digital aisle.

That still matters enormously.

Buyer-intent search remains valuable. Google hasn’t solved its commercial transition to a fully AI-led interface, and won’t overnight. Search is too important to Google’s revenue to disappear fast. But another layer is emerging above it, and this is the layer that most agencies aren’t yet equipped to compete on.

As AI systems become the first interaction point for more users, the game shifts from being present to being preferred.

Users don’t just search. They ask. They describe a problem. They want the best CRM for a mid-market SaaS company, the best estate agent in their area, the best sandwich shop near the office. And the system responds with recommendations.

If classic SEO was about rankings, the next phase is about recommendations. If classic SEO was about digital placement, the next phase is about shaping preference. And recommendation, in practice, is advertising.

Not a display banner. Not a 30-second TV spot. But advertising in the oldest and most commercially powerful sense: influencing the choice someone makes before they’ve even consciously made it.

An AI-generated recommendation is an invisible ad unit. It doesn’t bill by impression.

Why AI recommendations hit differently

When an LLM recommends a brand, it can’t know with certainty what will work best. So it infers. It weighs signals: past success, prominence, reviews, case studies, corroborating sources, and repeated associations between a brand and a specific type of problem.

Humans do something almost identical. 

Where performance is clearly bounded, we can identify a winner. We know who won the Oscar. We know which film topped the box office.

But when performance isn’t obvious in advance, we rely on proxies. We ask friends, read reviews, and scan for authority. We use familiarity, logic, and social proof to estimate what is likely to be right.

That’s exactly the territory AI recommendation is now entering — the consideration set problem. If I ask an LLM to find me a reliable accountant for a small business, I’m not asking it to retrieve a blog post. I’m asking it to build me a shortlist. 

Unlike traditional search, the recommendation layer is invisible to brands unless they test for it actively. You don’t see the prompt or the source chain. You don’t even know why one brand made the cut and another didn’t.

But the commercial effect is real, possibly stronger than anything traditional search produced. If you’re in the recommendation set, you’re in the running. If you’re absent, you’ve lost the sale before the conversation started.

Dig deeper: Rand Fishkin proved AI recommendations are inconsistent – here’s why and how to fix it

Your website is now an argument for preference

The first practical consequence: your website can no longer function like a polite digital brochure. Despite being optimized for search, many commercial web pages simply:

  • Introduce the company.
  • Gesture vaguely at services.
  • Bury differentiation under generic corporate language.
  • Treat the page as an endpoint for a ranking rather than a persuasive asset.

In other words, they’re weak where it matters most: actual selling.

In the Mad Men era of SEO, your landing pages and service pages need to function like sales pages, not in a cheesy direct-response way, but in the strategic sense that they must clearly answer four things:

  • Who is this for?
  • What problem does it solve?
  • Why is it different?
  • Why choose it over the alternatives?

This comes down to positioning, which is key to GEO. If seven brands do broadly the same thing, the model needs distinctions. It needs enough clarity to say: this brand is best for X kind of buyer with Y kind of problem because it does Z better than everyone else.

Your website copy must surface real performance attributes: the specific things you genuinely do better or more distinctively than competitors. Your pages must become machine-readable arguments for preference.

Copywriting is back

Actual commercial copywriting — not fluffy brand storytelling or word count for its own sake — identifies a target customer, sharpens the problem, articulates the value, and makes the offer easy to recommend.

Good copy isn’t optional.

Take a local sandwich shop. The old SEO conversation runs to “best sandwich near me,” local pack, and review acquisition. It’s useful, but limited. 

The GEO version starts with the shop’s actual performance attributes. 

  • Is it the speed? 
  • The handmade bread? 
  • The office catering? 
  • The locally sourced produce?

Those claims must be clear on the website first. Then they need corroboration everywhere else:

  • Reviews that mention the sourdough specifically.
  • A local food blogger’s write-up.
  • Inclusion in “best lunch spots” roundups.

They’re specific, repeated, retrievable evidence of why this shop is the right recommendation for a particular type of customer.

Scale that logic to a B2B software company, and the principle holds. Build pages that clearly explain who the product is for, which problems it solves, and why it outperforms rivals. Then earn mentions, customer reviews, and trade-press coverage — the body of evidence that supports recommending you to buyers — and let the AI find it.

That’s pretty much GEO in a nutshell.

Keywords don’t disappear, but they lose their throne

Keywords are a human workaround. Approximations of intent, built for a retrieval system that needed exact string matching. LLMs process fuller context, layered needs, and comparative requirements. They move from keyword matching toward problem understanding.

Keyword research still matters for classic search, paid search, and buyer-intent pages. But the center of gravity shifts.

Instead of asking only “what terms should we rank for?”, the better question is: what attributes make us the right recommendation for the buyer we actually want, and what evidence exists across the web to support that claim?

The future of SEO is starting to look like the old agency model, as the work is increasingly promotional. Once your website clearly expresses your positioning, the challenge becomes promoting that position across the wider web through credible, repeated, relevant signals.

  • Digital PR. 
  • Traditional PR. 
  • Expert commentary. 
  • Case studies. 
  • Reviews. 
  • Listicles.
  • Awards. 
  • Trade press.
  • Brand mentions. 
  • Conference speaking. 
  • Events. 
  • Creator coverage. 
  • Product comparisons. 
  • Original data studies that other people actually cite. 

These are the things you go after, create, and encourage. Sadly, many “AI visibility” conversations flatten this into nonsense.

The goal isn’t merely to have content cited by AI. It’s to gather enough market evidence that AI systems repeatedly encounter your brand in the right contexts, with the right associations.

The work stops being optimization and becomes maximization: building the largest possible volume of persuasive, corroborated, retrievable evidence that your brand is a sensible recommendation for a specific kind of buyer.

That’s a fundamentally different model from anything the SEO industry has been selling. It’s promotional and strategic brand marketing.

Dig deeper: How to design content that AI systems prefer and promote

Where SEO still fits

SEOs need to grow up. There’s still significant value in buyer-intent search, technical site architecture, entity clarity, internal linking, and structured data. SEOs are well placed to monitor recommendation environments, test prompts, and identify where visibility is being won or lost.

But the identity crisis is real. Many agencies were built for a world of rankings, informational blogs, and monthly traffic graphs. They aren’t equipped to lead a world defined by positioning, copy, PR, brand evidence, and recommendation science.

Tracking brand citations inside AI outputs isn’t a complete strategy. It’s a temporary metric. 


The new agency model

Winning agencies look like hybrid commercial strategy firms: part SEO, part copywriting, part PR, part brand strategy, part technical infrastructure. They know how to protect buyer-intent search revenue today while building the fame, clarity, and corroborated authority that earns recommendation tomorrow.

This is the Mad Men model of SEO. Persuasion, positioning, and clear claims backed by public proof matter again. And the job is to become recommended by AI.



Search Central Live is Coming to Shanghai in 2026!

We are hosting Search Central Live Shanghai 2026 on May 15. This event is structured for the local community, with a specific focus on optimizing sites that target users outside of China.


Google expands Merchant Center loyalty features to 14 countries and AI surfaces

Google Shopping Ads - Google Ads

Google is giving retailers more firepower to promote loyalty program benefits directly within product listings — expanding the program internationally and into its newest AI-powered shopping experiences.

What’s new. Merchants can now highlight member pricing and exclusive shipping options directly on listings. Loyalty annotations have also expanded to local inventory ads and regional Shopping ads — making it easier to promote in-store or geography-specific perks.

Why we care. The more you can personalize an offer for a shopper, the better. Embedding member perks into the moment of purchase discovery — rather than requiring a separate loyalty app or webpage — makes programs more visible and more likely to drive sign-ups.

By the numbers. According to Google, some retailers have reported up to a 20% lift in click-through rates when showing tailored offers to existing loyalty members.

The big picture. Loyalty benefits will now appear on Google’s AI-first surfaces, including AI Mode and Gemini, putting member offers in front of shoppers at an entirely new layer of the search experience.

Where it’s available. The expansion covers 14 countries — Australia, Brazil, Canada, France, Germany, India, Italy, Japan, Mexico, Netherlands, South Korea, Spain, the UK, and the US.

How to get started. Merchants activate the loyalty add-on in Merchant Center, configure member tiers, and set up pricing and shipping attributes. Connecting Customer Match lists in Google Ads is required to display strikethrough pricing and shipping perks to known members.

Don’t miss. US merchants can apply to join a pilot that uses Customer Match as a relationship data source for free listings — potentially expanding loyalty reach without additional ad spend.


59% of SEO jobs are now senior-level roles: Study

SEO command center

SEO hiring is shifting toward senior, strategy-led roles as AI reshapes search and expands the scope of the job. A new Semrush analysis of 3,900 listings shows companies now prioritize leadership, experimentation, and cross-channel visibility over pure technical execution.

Why we care. SEO hiring, career paths, and required skills are changing. Entry roles focus on execution, while most demand sits at the leadership level — owning strategy across search, AI assistants, and paid channels, with clear revenue impact.

What changed. Senior roles dominated, accounting for 59% of listings. Mid-level roles, such as specialists (15%) and managers (10%), trailed far behind.

  • Companies are shifting budget toward strategy as AI tools absorb more execution work.

The skills shift. In-demand capabilities extend beyond traditional SEO into coordination, testing, and decision-making:

  • Project management appeared in more than 30% of listings.
  • Communication led non-senior roles at 39.4%.
  • Experimentation appeared in 23.9% of senior roles compared with 14% of other roles.
  • Technical SEO appeared in about 6% of listings.

Tools and channels. The SEO tech stack now spans analytics, paid media, and data.

  • Google Analytics appeared in up to 47.7% of listings.
  • Google Ads appeared in 29% of listings.
  • SQL demand grew at the senior level.
  • AI tools like ChatGPT were increasingly listed.

AI expectations. AI literacy is moving from optional to expected:

  • 31% of senior roles mentioned AI.
  • Nearly 10% referenced LLM familiarity.
  • Concepts like AI search and AEO appeared more often.

Pay and positioning. SEO is increasingly treated as a business function.

  • The median salary for senior roles reached $130,000, compared with $71,630 for other roles, and some listings went much higher.
  • Degree preferences skewed toward business and marketing.

Remote work is now standard. More than 40% of listings offered remote options, with little difference by seniority.

About the data: Semrush analyzed 3,900 U.S.-based SEO job listings from Indeed as of Nov. 25. Roles were deduplicated, segmented by seniority, and analyzed using semantic keyword extraction.

The study. What 3,900 SEO Job Listings Reveal for 2026: Experiments, AI, and Six-Figure Salaries


ChatGPT enables location sharing for more precise local responses

OpenAI now lets ChatGPT users share their device location so ChatGPT can know more precisely where the user is and serve better answers and results based on that location.

The feature is called location sharing. OpenAI wrote: “Sharing your device location is completely optional and off until you choose to enable it. You can update device location sharing in Settings > Data Controls at any time.”

What it does. If ChatGPT knows your location, it can return better local results. OpenAI wrote:

  • “Precise location means ChatGPT can use your device’s specific location, such as an exact address, to provide more tailored results.”
  • “For example, if you ask “what are the best coffee shops near me?”, ChatGPT can use your precise location to provide more relevant nearby results. On mobile devices, you can choose to toggle off precise location separately while keeping approximate device location sharing on for additional control.”

Privacy. OpenAI said: “ChatGPT deletes precise location data after it’s used to provide a more relevant response.” Here is how ChatGPT uses that information:

  • “If ChatGPT’s response includes information related to your specific location, such as the names of nearby restaurants or maps, that information becomes part of your conversation like any other response and will remain in your chat history unless you delete the conversation.”

Does it work. Maybe not as well as you’d expect, judging by an example Glenn Gabe shared:

Why we care. Better local results in ChatGPT are a big deal for local search and local SEO. Knowing the user’s location, and better yet their precise location, can produce more relevant local results.

Hopefully this will result in ChatGPT responding with more useful local results for users.
