Meta is on track to overtake Google in global ad revenue for the first time

A major shift is underway in digital advertising: Meta Platforms is projected to generate more ad revenue than Google in 2026, signaling how marketers are increasingly favoring automated, performance-driven platforms.

Driving the news. According to Emarketer, Meta is expected to bring in $243.46 billion in global ad revenue this year, narrowly topping Google’s projected $239.54 billion.

  • Meta is forecast to capture 26.8% of global ad spend.
  • Google is projected to take 26.4%.
  • It would be the first time Google has lost the top spot in digital ad revenue.

Why we care. Meta’s growth suggests brands are getting more value from automated, performance-focused tools, which could influence how they split budgets between Meta and Google. It’s also a reminder that platform dynamics are changing fast, so media strategies need to stay flexible.

Catch up quick: Google has long dominated digital advertising through Search ads, Display ads across the web, and YouTube.

But its core ad business is growing more slowly than in previous years.

Meanwhile, Meta has benefited from AI-powered ad automation, stronger performance measurement tools, and continued scale across Facebook, Instagram, and WhatsApp.

Why Meta is winning now. Advertisers are increasingly prioritizing platforms that can deliver both reach and measurable return.

Meta’s advantage has been its ability to automate creative and targeting faster, optimize campaigns with less manual input, and make it easier for brands to prove ROI.

That’s especially appealing in a tighter economic environment where marketers are under pressure to do more with less.

Yes, but. Google is still enormous — and still growing.

Its search business remains one of the most profitable ad engines in the world, and YouTube continues to attract brand budgets. But the company faces growing pressure from AI search disruption, antitrust scrutiny, and slowing growth in traditional search advertising.

The bottom line. Meta passing Google in ad revenue would mark more than a symbolic milestone — it reflects a broader power shift toward platforms that make advertising easier to automate, measure, and scale.

Why topical authority isn’t enough for AI search

Topical authority is a key concept in SEO, but it doesn’t account for how search and AI systems choose between competing sources.

The missing layer isn’t in content or structure. It’s in the signals that determine selection once a topic is understood — the difference between being eligible and being chosen.

Topical authority explains content, not selection

Topical authority is foundational for SEO and now AEO and AAO. But the framework the industry calls topical authority is incomplete. It covers semantics, content, and structure, but that’s just one part of a three-row, nine-cell model that defines topical ownership.

Topical authority describes what you’ve built. Topical ownership describes whether the system picks you.

Search and AI systems don’t reward content for existing. They reward content for winning a selection process. At Recruitment (Gate 6 in the AI engine pipeline), the system selects candidate answers from everything it has indexed.

Topical ownership has three layers: coverage, architecture, and position.

Everything in this article builds on Koray Tuğberk GÜBÜR’s foundation. He has engineered a rigorous methodology for building content architecture that signals genuine expertise to search engines, and his case studies prove it produces measurable results.

He coined “topical map” and made it a standard SEO deliverable, engineered the semantic content network methodology, and brought mathematical rigor to what had been vague advice about writing comprehensively.

His own formula (topical authority equals topical coverage plus historical data) already acknowledges the temporal dimension I’ll expand on below. He’s the authority on this subject. The expanded framework names the cells he already recognized and adds the one row he hasn’t yet formalized.

Topical ownership: The nine-cell matrix

Topical authority, fully defined, is a three-by-three matrix.

As with everything in this series, the “straight C” principle applies. To compete in any algorithmic selection process, you can’t afford a failing grade in any of the criteria that are being evaluated. 

Excellence in some dimensions doesn’t compensate for absence in others. The system requires a passing grade for each criterion. The three rows aren’t equally weighted above that floor, and position is the dominant row, as we’ll see.


Row 1: Coverage is the entry ticket, not the destination

Coverage in one sentence: Go deep enough that nothing’s left to add, cover every adjacent angle, and bring a perspective nobody else has.

Coverage describes the content itself. 

  • Depth is vertical exhaustiveness and is often underestimated. 
  • Breadth is the horizontal range across subtopics and adjacent areas. GÜBÜR’s topical map concept is the engineering discipline that makes breadth systematic rather than accidental.
  • Original thought is the dimension that is almost always overlooked. Pushing the boundaries of a topic is what makes your coverage non-interchangeable.

An entity that covers a topic with perfect depth and breadth but says nothing new is an encyclopedia: comprehensive, correct, and structurally identical to any other comprehensive source. That advantage erodes over time, because comprehensive-but-unoriginal coverage sooner or later becomes prior knowledge in the AI’s training data. At that point you’re no longer needed and won’t be cited.

Original thought is the key to retaining the attention of the AI — a new framework, a novel angle, or a perspective no one else has articulated gives the system a reason to come back again and again, and ultimately to cite you.

Importantly, original thought doesn’t require being revolutionary, nor do you need to be original on every page. Often it will be as simple as a fresh way of framing a familiar concept.

Define your brand’s specific perspective on specific vocabulary. When done properly, that’s enough.

There are two kinds of original thought, and they carry different risk profiles. 

  • Reframing connects two existing validated truths that nobody has explicitly joined before. Both components are already corroborated; the system can verify them independently, and the originality lives in the framing.
  • True invention is different. There’s nothing for the system to cross-reference and nothing that’s already established to anchor the new claim. The result is that you look fringe until the world catches up.

The window between being right and being recognized can be long and uncomfortable, and to take that risk credibly, you need absolute conviction not only that you’re right, but that you’ll be proven right, and the patience to survive looking wrong in the meantime.

The reframe carries a fraction of that risk: the source truths are already verifiable, so the connection is credible from the moment it’s published.

Row 2: All architecture decisions begin with source context

Architecture in one sentence: Write sentences clearly, make your content flow in a logical manner, and link intelligently.

The three cells in the architecture row are GÜBÜR’s terms, and I’m using them as he defined them.

Source context determines everything that follows:

  • The publisher’s angle.
  • The identity and purpose that shapes what the topical map should contain. 
  • How the semantic network should be constructed. 

GÜBÜR’s insight that a casino affiliate and a casino technology provider need fundamentally different topical maps for the same subject captures the principle: structure follows identity.

Topical map is the structural design of the content: core sections and outer sections, which attributes become standalone pages and which merge together, the direction of internal linking, and the identification and elimination of information gaps.

Semantic network is the interconnected execution that makes the structure machine-readable: contextual flow between sentences and paragraphs, semantic distance minimized between related concepts, and cost of retrieval optimized so that the system can extract facts without unnecessary computational effort.

Good architecture makes coverage legible to the system. You can have thorough coverage that the algorithm can’t parse, and the result is the same as not having the content at all. Architecture is the bridge between what exists and what the system understands.

Where architecture falls short as a complete model is that it’s entirely within what you control. It describes how to organize your own house. It doesn’t address who the neighborhood knows you as.

Row 3: Position is why two equally thorough sources produce different results

Position in one sentence: Be first to stake the claim, be recognized by others as the best at what you do, and do things that ensure you are the person everyone refers to when they talk about your topic.

Position is the competitive layer. It’s the only row that describes the entity rather than the content. That distinction makes it the dominant row, for the same structural reason links were the dominant signal in traditional SEO: external validation at the entity level breaks ties that content quality alone can’t.

Because you’re building entity reputation, the position row requires the greatest investment of resources and must be maintained over time. Because most brands are looking for quick, easy wins and are unwilling to commit to long-term investment in their position, this is where your competitive advantage lies and where you’ll see a real difference.

Two entities can have identical coverage and architecture, and yet one will be treated as the authority and the other won’t. The current definition of topical authority can’t explain why. Position is the huge missing piece.

Position: earned, not claimed

Temporal position is about when you said it. The source that established a claim, coined a term, or described a mechanism before anyone else has a structurally different relationship to that topic than a source that repeated it later. 

GÜBÜR’s formula already acknowledges this: “Historical data” in his equation is the accumulated proof of chronological priority. First-mover advantage in knowledge graphs is an architectural phenomenon we see over and over in our data.

Hierarchical position is about dominance: being recognized by others as the top voice on the topic. Primary sources, practitioners who work in the field, researchers who run studies, and experts who generate knowledge. This isn’t self-declared. Others assign it. When Matt Diggity describes GÜBÜR as “one of the most knowledgeable people” in semantic SEO, that’s a hierarchical position being conferred by a peer.

Narrative position is about centrality: being the person everyone refers to when they talk about the topic. The journalist credits you, the researcher cites you, and the conference features you as the reference voice. 

All roads lead to Rome, and you’re Rome. The system reads these co-citation patterns and builds a picture of where you sit in the source landscape. 

Narrative position can’t be manufactured with first-party content. It’s earned by doing things in the world that others find worth referencing.
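
To keep the whole grid in view, here is a compact sketch in Python that pulls together the nine cells named in this article, along with the “straight C” floor check described earlier. It’s an audit aid, not an algorithm; the grading scale and floor are illustrative.

```python
# The three-by-three topical ownership matrix, as named in this article.
TOPICAL_OWNERSHIP = {
    "coverage":     ["depth", "breadth", "original thought"],
    "architecture": ["source context", "topical map", "semantic network"],
    "position":     ["temporal", "hierarchical", "narrative"],
}

def failing_cells(grades: dict[str, int], floor: int = 3) -> list[str]:
    """Apply the 'straight C' principle: excellence elsewhere can't
    compensate for a failing grade in any single cell."""
    return [cell for cell, grade in grades.items() if grade < floor]

# Example audit: grade each cell 1-5 for your entity on one topic.
grades = {cell: 3 for row in TOPICAL_OWNERSHIP.values() for cell in row}
grades["narrative"] = 1  # few third-party citations yet
print(failing_cells(grades))  # ['narrative']
```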


Topical authority, N-E-E-A-T-T, and topical ownership

N-E-E-A-T-T — Google’s experience, expertise, authoritativeness, and trustworthiness (E-E-A-T) framework, extended with notability and transparency — describes the credibility signals that drive algorithmic confidence and are rightly a huge focus of the industry.

N-E-E-A-T-T describes inputs, not structure. Those signals don’t exist in a vacuum. They attach to an entity that the system has already understood.

I made this argument in a Semrush webinar with Lily Ray, Nik Ranger, and Andrea Volpini in 2020, when we were still talking about E-A-T: entity understanding is a prerequisite to leveraging credibility signals, not an optional layer on top.

The nine-cell matrix shows where each signal lands.

  • The coverage row provides the source material for AI to evaluate your knowledge on your claimed topic. 
  • The architecture row is where your content gets classified and positioned relative to a topic. 
  • The position row is where strong N-E-E-A-T-T signals translate into a competitive advantage because N-E-E-A-T-T is an entity framework: it measures the publisher and author, not the content. Position is the entity row.

Note on the diagram: It could be argued that the four gaps in the diagram are partially covered by inference. 

  • Expertise implies the knowledge to build a topical map and the depth that produces original thought.
  • Experience implies the first-hand involvement that creates temporal priority.
  • Transparency implies the clear structural identity that shapes a semantic network. 

Those arguments aren’t wrong. N-E-E-A-T-T evaluates the person primarily — what they built is an indirect signal.

Where N-E-E-A-T-T signals land

N-E-E-A-T-T maps onto two of the three position dimensions. 

  • Hierarchical position is, in structural terms, what authoritativeness and expertise measure — your level of knowledge and peer recognition of your standing on a topic.
  • Narrative position is what notability captures. The co-citation patterns that tell the system you’re the reference voice.

Temporal position sits outside N-E-E-A-T-T. No credibility signal changes just because you said something first. 

Original thought sits outside it, too. The framework that’s supposed to reward quality has no mechanism for recognizing originality — at least not in the short term. It can reward reframing immediately, because both source truths are already verifiable. 

True invention only registers retroactively, once corroboration has accumulated to the point where assertion becomes position.

That structural gap points to a practical problem. Most practitioners build N-E-E-A-T-T credibility as a general brand exercise — demonstrate expertise, earn trust, and accumulate signals. However, credibility without topical position is a credential without context. The fix is to audit all nine dimensions and focus your N-E-E-A-T-T work on improving your weakest cells.

My own situation is a good example of the difficulties of original thought:

  • Temporal position is well-documented: Brand SERP in 2012, entity home in 2015, answer engine optimization in 2017, the algorithmic trinity and untrained salesforce in 2024, and now assistive agent optimization in 2025. The chronological priority is established and verifiable.
  • Hierarchical position has partial coverage. I’m recognized within specific circles as the reference voice on brand SERPs and algorithmic brand optimization, but not yet broadly enough to call it dominance.
  • Narrative position is the biggest gap. Many people use the terms I coined, but few third-party sources cite me unprompted, and more articles on my own properties won’t change that. The fix I am implementing is doing things in the world that others find worth referencing: keynotes, independent collaborations, corroboration with partners, and articles like this one.

This is why crediting GÜBÜR for source context, topical map, and semantic network is intentional. Accurate attribution from a credible source builds the narrative position of the person being credited (GÜBÜR), and giving credit accurately signals to the system that my own claims are likely to be equally well-founded. 

Crediting well is a position signal, and it’s one most practitioners consistently underuse. My take is that citing the original source is the same as linking out. People resisted linking out for years to protect the mysterious “link juice,” but it’s now accepted that linking out to provide supporting evidence is worth more than the PageRank cost. The same logic applies to citations: the value they bring you is greater than the loss.

This article is itself a demonstration. 

  • GÜBÜR’s architecture framework is validated and extensively corroborated.
  • The AI engine pipeline argument runs across the previous eight articles in this series.
  • The nine-cell connection is new. 

For this article’s original thought, I’m using the safer form: the reframe-cite-and-add technique. I invite you to do the same.

Recruitment (Gate 6) is where position determines the winner

Article 8 in this series covered annotation (Gate 5) — the gate where you’re alone with the machine, where the system classifies your content based on your signals alone, and with no competitor in the frame. Annotation is the last absolute gate. From recruitment onward, you’re always being compared with your competition.

So, recruitment (Gate 6) is where the game changes. Every source that reaches recruitment has cleared the infrastructure gates and survived annotation (hopefully in a healthy, competition-ready state). Now the system is selecting between candidates, and it’s selecting based on relative standing, not absolute quality.

This is the moment the entire matrix resolves into a single question: when the algorithm culls candidates at the recruitment gate, is your entity’s position strong enough to be one of the survivors in that selection? 

In my three-by-three topical ownership grid, coverage gets you into the candidate pool, architecture makes the system confident it understands your content, and position determines whether it picks you ahead of the competition.

Coverage and architecture are content rows. They describe what you published. Position is the entity row. It describes who published it.

At recruitment, the system evaluates the content, and selection is heavily influenced by its assessment of the entity in the context of the topic. You can rewrite the content, but you can’t quickly rewrite who you are.

Darwin described natural selection as the mechanism by which organisms best adapted to their environment survive. An entity that occupies a strong position is an entity best adapted to the system’s selection criteria: temporal priority, hierarchical standing, and narrative centrality.

The system isn’t being arbitrary when it selects one well-structured, comprehensive source over another equally well-structured, equally comprehensive one. It’s selecting the entity best adapted to the query’s requirements, and best adapted means best positioned, not best written.

The signals behind each row have never been equally weighted, and entity is the clearest illustration of that. In traditional SEO, inbound links were the dominant signal. They could sometimes overcome very weak criteria and were almost a guarantee of victory when all other signals were roughly equal.

That dominance gradually diminished as links became one signal among many, table stakes rather than differentiator. Entity has followed the inverse trajectory. It began as a minor signal with the introduction of the knowledge graph and knowledge panels, and has grown steadily in structural importance ever since. 

N-E-E-A-T-T attaches to an entity. Topical ownership attaches to an entity. Agential behavior requires a resolvable entity to function. Co-citation and co-occurrence patterns are only meaningful when the system has an entity to attach them to. 

The AI engine pipeline stalls at the annotation stage (Gate 5) without a resolved entity. That gate is entity classification, and everything downstream depends on it. Brand SERPs, Knowledge panels, and AI résumés are entity constructs. Without a resolved entity, they don’t exist in a meaningful way. 

The future will be more entity-dependent, not less, and the gap between brands that have invested in their entity and those that haven’t will compound. Entity is no longer simply a signal. It’s the substrate that other signals require to operate, and the most important single investment you can make in your long-term search and AI strategy.

To update a common saying: the best time to start was 10 years ago, the next best time is today, and the time it won’t be worth starting is tomorrow.


Topical ownership requires all nine cells, all three rows

Topical ownership is the state where an entity dominates all nine cells of the matrix for a given topic. Not just comprehensive, not just well-structured, but the entity others reference when they write about the subject — ideally the one that got there first, and the one peers defer to by name.

  • Coverage tells the system you’re eligible.
  • Architecture tells the system you’re legible.
  • Position tells the system you’re the right answer.

The industry has been actively optimizing for six of those nine cells. 

Understandability work builds the entity. N-E-E-A-T-T builds credibility. But the position row — the one that determines who wins at recruitment — has been built largely without intent. Practitioners accumulate N-E-E-A-T-T signals as a general credibility exercise and assume that covers the entity layer. 

Position requires deliberate engineering of temporal, hierarchical, and narrative standing on specific topics. Being intentional about all nine, knowing which row each piece of work serves and why, is where the competitive advantage lives now. 

Simply becoming conscious of the grid and the three rows will make your topical ownership, SEO, and N-E-E-A-T-T work more purposeful across all nine cells, because you will implement each signal with specific intent rather than general ambition.

The brands AI consistently recommends aren’t just covering their topics well. They own them.


This is the ninth piece in my AI authority series. 


Claude Skills for PPC: How to turn one-off prompts into scalable systems


Despite all the shiny new capabilities at our disposal, many professionals seem stuck in a cycle of “AI Groundhog Day.” 

You open a chat window, carefully craft a prompt, paste in your context, and get a great result. An hour later, you do it all over again. If this is how you use AI to automate, you’re still doing manual work — you’re just doing it in a chat box.

To move from using AI to building with it, you need to shift from being a human doer to being a human orchestrator. That means abandoning one-off prompts and building systems instead. In this new phase of AI automation, what you really need are AI Skills.

I explore this shift in my new book, “The AI Amplified Marketer,” where I look at how the human element of marketing remains vital even as new AI tools and shifting expectations evolve at a breakneck pace.

Below, I’ll show how to use Skills, a newer AI capability, to make you more efficient when managing PPC.

What’s a Claude Skill?

While many marketers have used ChatGPT’s Custom Instructions to set a general approach for how their AI works, a Skill is a more rigorous definition of how the AI needs to do things. These instructions can help it deliver more predictable outcomes that fit your expectations.

For example, I recently used a standard chat to rate search terms. While the AI’s logic was sound, the output was inconsistent: one session returned letter grades, another gave a percentage out of 100, and a third used a 1-10 scale.

In a professional setting, this inconsistency is a problem. It makes it difficult to integrate that prompt into a larger workflow where unpredictable grading might confuse other tools or team members.

A Skill solves this by providing a reusable set of instructions. It defines which tools and logic to use for a complex task and ensures the results are formatted exactly the same way every time.

It’s what turns the AI from a temperamental assistant into a reliable professional teammate.

And thanks to more recent agentic capabilities in Claude, a Skill is like turning your best multi-step PPC playbook into something an AI can execute on demand by delegating the various tasks to the right tools and subagents.

Whether it’s your agency’s proprietary account audit checklist or your framework for mining search query reports, a Skill encodes that process. It turns your PPC expertise into a scalable system that anyone on your team can use with their AI.

Dig deeper: Agentic AI and vibe coding: The next evolution of PPC management


How to build your first AI Skill

Creating a Skill is more straightforward than it might sound, and you can do it through a simple chat session with your AI. Provide Claude with an account audit checklist, a standard operating procedure (SOP) from your team, or a blueprint, then ask it to convert that process into the formal structure of a Skill.

Interestingly, when you ask Claude to help build a Skill, it uses a specialized Skill-building protocol. This ensures your final output is structured correctly, follows best practices, and remains consistent with Anthropic’s underlying architecture.

Technically, a Skill is saved as a Markdown (.md) file that contains the playbook for the task at hand.
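
As a minimal sketch, assuming the SKILL.md convention of YAML frontmatter (a name and description) followed by the playbook itself, a Skill for the search term grading example above might look like this. Every name and rule here is illustrative.

```markdown
---
name: search-term-grader
description: Grades Google Ads search terms for relevance on a fixed 1-10 scale.
---

# Search term grading playbook

1. Rate each search term's relevance to the account's services on a 1-10 scale.
2. Always return a table with the columns: term, score (1-10), recommended action.
3. The recommended action must be one of: keep, add as negative, needs review.
```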

This file can be stored locally on your computer if you’re concerned about data privacy. Alternatively, you can share it in a central cloud repository. This makes it easy for your team to update and deploy best practices across your entire organization.

You don’t have to start from zero. Many pre-built Skills are available on platforms like GitHub. You can find examples for various marketing tasks, download them, and adapt them to fit your specific needs and workflows.

How to use a Skill in PPC

To use a skill, first make sure there are some available in your account.

Then, just tell the AI the task you want to do.

The AI will look through connected Skills and, if it finds one that matches the task, it will use those instructions to perform the work.

Sidenote: This means it’s important not to have competing Skills in your account. If two Skills both do Google Ads audits, you lose the predictability a Skill was supposed to give you in the first place, because the AI may pick a different one each time and do the work in different ways.

Dig deeper: Agentic PPC: What performance marketing could look like in 2030

PPC Skills need real-time data

A Skill provides powerful logic, but without access to live account data, it remains theoretical.

A Skill can define an analysis, such as “review search terms from the last 14 days with costs over $50 and zero conversions.” However, it doesn’t know how to pull that data from Google Ads on its own.

In the past, the workaround was to manually download static data, like a CSV from the Google Ads interface or a Google Ads Editor file. You would then feed this file to the AI as context. This works, but it’s slow, manual, and the data is outdated the moment you download it.
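
For illustration, here’s what that manual workaround might look like in Python once the CSV is downloaded, using the thresholds from the example above. Column names vary by export, so treat them as placeholders.

```python
import pandas as pd

# Load a manually exported search terms report (file name is a placeholder).
df = pd.read_csv("search_terms_last_14_days.csv")

# Flag terms with more than $50 in cost and zero conversions.
wasted = df[(df["Cost"] > 50) & (df["Conversions"] == 0)]

print(wasted[["Search term", "Cost", "Conversions"]]
      .sort_values("Cost", ascending=False))
```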

A more modern approach uses a Model Context Protocol (MCP) to connect your AI and its Skills to other systems, such as live data sources. For example, using the Optmyzr MCP, your Skill can dynamically pull the exact Google Ads data it needs, when it needs it. This connection turns a static set of instructions into a living, responsive tool. (Disclosure: I’m the cofounder and CEO of Optmyzr.)
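
To sketch the difference, here is roughly what a live pull could look like from the agent side, using the official Python MCP client. The server command, tool name, and arguments are hypothetical stand-ins, not Optmyzr’s actual API.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Hypothetical local MCP server wrapping a Google Ads data source.
    server = StdioServerParameters(command="ads-mcp-server")

    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Request exactly the data the Skill's logic defines.
            result = await session.call_tool(
                "get_search_terms",
                arguments={"lookback_days": 14, "min_cost": 50, "max_conversions": 0},
            )
            print(result.content)

asyncio.run(main())
```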

How Skills tell AI how to do things, and how tools and MCP enable it to do those things more reliably

Dig deeper: From scripts to agents: OpenAI’s new tools unlock the next phase of automation


From grunt work to system oversight

Combining a Skill with a tool like an MCP is where the real transformation happens. Your AI moves from being an assistant that requires constant direction to a system that can manage a process. It transitions from giving you ideas to executing your vision.

Let’s look at a common PPC task:

  • Task: Search Term Analysis to Eliminate Irrelevant Clicks
  • A Skill without tools is a task-oriented assistant: It might instruct you: “Paste in your search term report as a CSV, and I will identify potential negative keywords.” You’re still the one doing the grunt work of retrieving data and implementing the findings.
  • A Skill with tools acts as a junior manager for that specific process: It can be configured to: “Pull the search term report for the last 7 days via the MCP, identify terms with high spend and no conversions, and apply them as exact match negatives to the appropriate campaign.” The entire workflow is handled, and your role shifts to one of oversight.

When you combine structured logic (Skills) with live data and execution capabilities (tools), you’re building more than a chatbot; you’re building a reliable teammate. It’s a grounded, practical system that handles defined tasks, freeing you up to be the orchestrator of your strategy.

Dig deeper: Scaling PPC with AI automation: Scripts, data, and custom tools

4 PPC Skills you can build today

To move from theory to practice, let’s look at four concrete examples of PPC Skills. In each case, notice how connecting these Skills to live tools transforms the AI from a passive analyst into an active participant.

1. Search term mining

This Skill’s logic guides the AI to analyze a search query report to find wasted spend and opportunities.

  • Without tools: You provide a CSV. The Skill returns a structured list of recommended negative keywords and new keyword ideas. You have to implement them manually.
  • With tools (MCP): The Skill automatically pulls the latest search query report data, identifies the negative keywords, and uses a tool function to apply them directly to your Google Ads account.

2. Ad copy generation

This Skill takes a landing page URL and target keywords to generate ad copy variations based on value propositions and user intent.

  • Without tools: The Skill produces headlines and descriptions in a text format. You copy and paste them into Google Ads.
  • With tools (MCP): The Skill finds underperforming ad assets in your account, and then generates the ad copy and pushes the new ads directly into the correct ad groups, potentially even setting up a new ad experiment.

3. Account auditing

This Skill runs a predefined checklist against an account, looking for issues like missing ad extensions, campaigns limited by budget, or ad groups with low CTR.

  • Without tools: The Skill generates a report that lists all the problems it found. You then have to log in to the account and fix each one.
  • With tools (MCP): The Skill not only identifies that an ad group is missing a callout extension but can also apply a relevant, pre-approved extension from extensions used elsewhere in the account. It doesn’t just report the problem; it fixes it.

4. Budget reallocation

This Skill analyzes campaign performance data to find opportunities to shift budget from underperforming campaigns to those with higher potential returns.

  • Without tools: The Skill provides a recommendation, such as: “Decrease Campaign A’s budget by 20% and increase Campaign B’s budget by 15%.”
  • With tools (MCP): The Skill performs a dynamic analysis, pulling in exactly the right data with the appropriate lookback and time segmentation, and then executes the budget change directly, ensuring budgets are optimized as soon as the opportunity is identified.

The future of your role: From PPC doer to PPC designer

The combination of Skills and tools enables you to move from playing with AI to having AI do meaningful work. For years, AI has been good at generating ideas but weak at executing them inside the ad platforms. This solves the “last mile problem” by giving AI the logic, data, and permissions to act.

This also signals a change in the role of the PPC professional. Your job will shift from doing the repetitive work to designing the systems that do the work. Instead of manually analyzing reports and making changes, you will spend more time designing Skills, defining the rules and guardrails for automation, and reviewing the outcomes.

We’re at a point where the large language models are capable, the tools for connecting them to platforms are available, and the interfaces make it possible for non-developers to build. It’s time to rethink your processes and get AI to be a real teammate.

Dig deeper: AI tools for PPC, AI search, and social campaigns: What’s worth using now


The end of endless prompting

The cycle of endless prompting is a dead end. It keeps you in the role of a manual operator when you should be a systems designer. By embracing Claude Skills, you’re doing more than just working faster; you’re changing the very nature of your job. You’re moving from “doing PPC work” to “designing the PPC systems” that perform that work with predictability and at scale.

This is the ultimate expression of the AI-amplified marketer: building a true partner that codifies your expertise into a reliable, efficient engine.

The first step is to look at your daily tasks through the lens of a designer. What repetitive process is ready to be turned into your first Skill?


Google Ask Maps is moving from listings to recommendations


Google’s Ask Maps feature does more than help users find nearby businesses.

Based on hands-on testing of local service queries for plumbers, electricians, and HVAC companies, Ask Maps often narrows the field, interprets user intent, and frames businesses around qualities such as responsiveness, specialization, honesty, and repair-first thinking.

In more complex prompts, it sometimes provides guidance before recommending businesses. This shows Google Maps moving beyond simple local retrieval and toward a more recommendation-driven experience.

To evaluate that shift, we tested Ask Maps across five levels of local intent — starting with simple category searches and progressing toward conversational prompts involving uncertainty, trust, and decision-making.

A clear pattern emerged. As query nuance increased, Ask Maps shifted from listing businesses to interpreting which businesses fit and why.

This article draws from hands-on testing across a limited set of local service queries in one geographic area. Treat these findings as an early directional view, not a comprehensive representation across all markets or query types.

The testing framework

To evaluate progression, we built a five-level intent model based on how homeowners and local service customers actually search. Instead of organizing around traditional keyword categories, we structured the framework from simple retrieval toward conversational decision-making.

  • Level 1 focused on basic requests with minimal context.
    • Example: “Looking for an HVAC company near me.” 
  • Level 2 introduced more service specificity.
    • Example: “I need an electrician to upgrade my panel in an older home.” 
  • Level 3 moved into situational queries, where the user described a problem.
    • Example: “My furnace is making a loud banging noise and I’m not sure if it needs to be replaced or repaired.” 
  • Level 4 introduced trust and decision concerns.
    • Example: “I think my furnace might need to be replaced, but I don’t want to get overcharged. Who is honest about that?” 
  • Level 5 combined those elements into fully conversational prompts asking for guidance, validation, and recommendations in the same search.
    • Example: “I was told I need a full furnace replacement, but it feels expensive. How do I know if that’s actually necessary, and who should I call for a second opinion in my area?”

This framework allowed us to evaluate:

  • Which businesses appeared.
  • How Ask Maps interpreted prompts.
  • What attributes it emphasized.
  • When results started to resemble guided recommendations rather than search results.


Ask Maps narrows the field and adds interpretation

One of the clearest patterns across the testing was that Ask Maps consistently returned a relatively small set of businesses while increasing the amount of interpretation as the user’s search intent became more complex.

At Level 1, the average number of businesses shown was 3.6. Level 2 rose to 4.3. Level 3 dropped slightly to 3.3. Level 4 averaged 5, and Level 5 averaged 4.6. Across the full set, the range remained fairly tight, generally between three and eight businesses.

That’s a different experience from traditional Maps, where a user can scroll through a much broader set of options and do more of the evaluation work themselves.

Ask Maps narrows choices early and spends more effort explaining why those businesses fit the prompt, but stops short of being fully action-oriented. Even when a phone number is shown, there’s no clickable call button directly in the Ask Maps response. 

To call or access the full set of contact options, the user still has to click into the business’s Google Business Profile. That matters because while Ask Maps is becoming more interpretive, the underlying GBP is still where action happens.

As prompts become more nuanced, uncertain, or trust-sensitive, Ask Maps draws on a broader range of sources. It shows fewer businesses, replacing breadth with interpretation.

Dig deeper: How to build FAQs that power AI-driven local search

Basic queries already go beyond simple listings

Even the simplest queries don’t behave like a traditional Maps result.

At the baseline level, Ask Maps still relies heavily on Google Business Profile data, including: 

  • Business descriptions.
  • Review content.
  • Ratings.
  • Hours.
  • In some cases, posts. 

Website influence is minimal here, and there’s little evidence of outside sourcing. But even within that mostly closed ecosystem, it goes beyond listing nearby businesses.

Instead of just showing names, ratings, and locations, Ask Maps:

  • Generates narrative summaries based on information in the Google Business Profile. 
  • Describes businesses in terms of responsiveness, experience, specialization, or the kinds of situations they seem well-suited for. 
  • Draws on reviews when framing businesses.

Even at the most basic level, Ask Maps isn’t neutral. It’s beginning to interpret businesses for the user.

As queries become more specific, Ask Maps starts matching capability

Once the prompt shifts from a general service search to a specific type of job, Ask Maps becomes more selective in how it matches businesses to the request.

  • A query about an electrical panel upgrade doesn’t behave the same way as a query about urgent AC repair. 
  • Replacement-oriented prompts emphasize installation and system expertise. 
  • Repair-oriented prompts emphasize speed, availability, and responsiveness. 
  • Queries tied to older homes or higher-risk work call for more evidence of specialization.

At this level, Google Business Profile and reviews still carry much of the weight, but websites matter more when the job is more complex or costly. A panel upgrade query produces stronger external link usage than a more straightforward AC repair prompt.

That doesn’t mean websites are always heavily used. It shows more selectivity. As decisions become more complex, Google looks for more supporting evidence before recommending businesses.

Situational queries push Ask Maps toward interpretation

The more noticeable shift begins once the prompts move from service categories to real-world scenarios.

At Level 3, the user is no longer looking for a plumber, electrician, or HVAC company. Instead, they’re describing a problem, such as a loud banging furnace, outdated electrical in an older home, or an AC unit that has stopped working during extreme heat. In those cases, Ask Maps increasingly interprets the problem before introducing businesses.

Some responses provide guidance or context first. Others identify the provider and clarify the work before making recommendations. The businesses that follow aren’t framed as generic providers. They’re framed as possible solutions to the situation.

Review content becomes important here. Rather than simply supporting a business’s credibility, reviews act as evidence that the company has handled similar situations before. Fast arrival times, experience with older homes, communication during stressful repairs, and problem-solving ability all become more meaningful when describing businesses.

This is the point where Ask Maps moves more clearly from retrieval to interpretation.

Dig deeper: 7 local SEO wins you get from keyword-rich Google reviews

Trust-oriented queries change what gets emphasized

When the prompts introduce fear, skepticism, or concern about making the wrong decision, Ask Maps changes again.

At Level 4, the focus is less on the service need itself and more on the emotional context around it. The user is worried about being overcharged, being pushed into unnecessary replacement, or hiring someone who would cut corners. 

Ask Maps doesn’t just return businesses capable of doing the work. It organizes businesses around trust-related qualities such as honesty, transparency, careful workmanship, fairness, and second-opinion value.

This is one of the strongest patterns in the research. At this stage, review language is the primary signal shaping how businesses are framed. Specific phrases and anecdotes matter, elevating businesses that explain options clearly, don’t upsell, offer honest assessments, or deliver careful, professional work.

External sources become more relevant here. In addition to GBP information and reviews, Ask Maps shows more willingness to pull from company websites, testimonials, third-party platforms, and educational resources when the user’s concern involves decision risk rather than just service need.

Once the query becomes trust-driven, the recommendation no longer appears to be based only on who can do the job. It reflects who is most likely to handle the situation in a way that the user feels good about.


Advisory queries show the clearest shift

The strongest example of this progression came at Level 5. These are prompts where the user combines a problem, uncertainty, and a request for recommendations in a single query. 

For example, someone might say they were told they needed a full furnace replacement but were unsure whether that was really necessary and wanted to know who to call for a second opinion. In these cases, Ask Maps moves most clearly into a decision-support role.

Instead of leading with local businesses, it often starts with an explanation, introducing frameworks, safety context, or ways to think about the decision. 

Only after that does it recommend businesses, and those businesses are often grouped not just by rating or proximity, but by approach. Some are framed as repair-first options. Others are framed as second-opinion experts or safety-focused specialists.

This is where Ask Maps feels least like a directory and most like an advisor. The structure of the response looks more like a guided decision process than a traditional local search result.

That doesn’t mean the system is flawless or that every answer is equally strong. But it does suggest that when a prompt includes uncertainty and a need for validation, Ask Maps is trying to do more than match a category. It’s trying to help the user think through what to do next.

Dig deeper: New Google Maps features: Local Guides redesign, AI captions, photo sharing

Where Ask Maps gets its information

Across the testing, several source patterns appear repeatedly, and the mix appears to shift depending on the type of query.

At the foundation, Google Business Profile does much of the early work. Business categories, service descriptions, hours, ratings, and review counts help determine which businesses are eligible to appear and how they are initially framed. In some cases, Ask Maps also pulls from GBP services and products, business descriptions, and occasionally posts when those help reinforce what the business does.

Reviews seem to be one of the most important inputs across nearly every query type. Not just in ratings, but in how review language shapes the summary. 

Ask Maps often draws on review themes tied to:

  • Responsiveness.
  • Honesty.
  • Professionalism.
  • Fast arrival times.
  • Work on older homes.
  • Repair-versus-replace situations.
  • Whether customers feel the company explains options clearly or avoids unnecessary upselling.

In other words, reviews support reputation and help define how a business is positioned in the response.

Business websites matter more once the query becomes more specific, higher-stakes, or more tied to decision-making. In those cases, Ask Maps seems more likely to pull in service pages, testimonial pages, or other on-site business information that helps reinforce specialization, repair-first positioning, second-opinion value, or experience with a particular type of job. 

That’s more noticeable in queries tied to things like panel upgrades, replacement decisions, or older-home electrical concerns than in simpler “near me” searches.

External sources are the most selective layer, but they become more visible when the query involves safety, diagnosis, pricing uncertainty, or broader decision support. 

In those cases, Ask Maps pulls in:

  • Educational content around issues like repair-versus-replace decisions, quote validation, and electrical safety. 
  • Third-party review and directory platforms such as Angi, HomeAdvisor, YouTube, and Facebook.
  • Other publicly available business information, when it helps reinforce trust, workmanship, or reputation. 

In some of the trust-oriented electrician queries in particular, this outside sourcing is more prominent than in simpler local lookups, suggesting Google may broaden its evidence base when evaluating how a business is likely to operate, not just what services it offers.

How Ask Maps mixes sources based on query

Ask Maps isn’t relying on a single source of truth. It appears to be constructing an answer from a mix of Google Business Profile data, review language, business website content, and selectively chosen outside sources, with the balance shifting based on what the user is actually asking.

What this may mean for local visibility

If Ask Maps continues to develop in this direction, it could have meaningful implications for local visibility in Google Maps.

  • Inclusion alone may matter less than interpretation. If Ask Maps is consistently showing a smaller set of businesses and adding more explanation around them, the question is no longer just whether a business appears. It’s also how that business is framed and whether Google has enough confidence to position it as a good fit for the situation.
  • Review content is becoming more important than many businesses realize. The language within reviews appears to influence not just credibility, but the actual way a business is described and recommended.
  • Website content plays a more targeted role than many local businesses assume. It may not be equally important for every prompt, but it matters more when the service is complex, expensive, or tied to greater uncertainty.

More broadly, Ask Maps points toward a version of local search in which retrieval, evaluation, and decision support occur much more closely together. Instead of searching, comparing, researching, and then deciding across several steps, the user may increasingly be guided through much of that process within a single AI-mediated Maps experience.

What businesses and SEOs should tighten up now

If Ask Maps continues moving in this direction, the practical response isn’t to chase a new tactic or treat it like a separate channel. It’s to make the business easier for Google to understand and easier for customers to trust.

Keep the Google Business Profile current and specific

A Google Business Profile may play a bigger role when Ask Maps is trying to decide what a business does, what kinds of jobs it handles, and whether it fits a more nuanced prompt.

  • Review primary and secondary categories to make sure they reflect the core work accurately.
  • Tighten the business description so it clearly explains the services offered, the types of jobs handled, and any specialties or areas of focus.
  • Make sure hours, service areas, and contact details are complete and current.
  • Add photos that reinforce the kinds of jobs the business wants to be associated with.
  • Treat posts and profile updates as another way to reinforce services and activity, not just as optional extras.
  • Use the Services and Products sections fully, adding clear descriptions that reflect the specific jobs, specialties, and situations the business wants to be known for.

Pay closer attention to review language

If Ask Maps uses review language to shape how businesses are positioned, then the wording in reviews may matter more than many businesses realize.

  • Look beyond review volume and average rating.
  • Pay attention to whether reviews naturally mention specific jobs, customer concerns, and outcomes.
  • Watch for language around responsiveness, honesty, professionalism, repair-first thinking, and clear communication.
  • Encourage reviews that reflect real experiences rather than generic praise.
  • Use review trends to understand how the business is likely being framed by Google.

Revisit website content for higher-consideration services

Website content appears more likely to matter when the query is more complex, more expensive, or tied to more uncertainty.

  • Strengthen service pages for the higher-value or higher-risk work the business wants to be known for.
  • Add FAQs that address real decision points, not just basic definitions.
  • Include examples of the kinds of jobs handled, especially where context matters.
  • Reinforce trust signals such as experience, process, reviews, and proof of work.
  • Use language that helps explain situations like repair versus replace, older-home work, or second-opinion scenarios.

Think beyond ranking for a phrase

There’s a broader strategic shift here for local SEO. The question may no longer be only whether a business can rank for a phrase. It may also be whether Google has enough evidence to recommend that business in response to a real-world question.

  • Evaluate whether the business is easy to understand across GBP, reviews, website content, and broader digital mentions.
  • Look at whether the business is clearly associated with the jobs and situations it wants to win.
  • Think about trust and decision support, not just service relevance.
  • Focus on making the business more legible to both Google and potential customers.
  • Treat local optimization less like keyword matching alone and more like building a clear, consistent business profile across sources.

Dig deeper: If your local rankings are off, your map pin may be the reason


The direction of Ask Maps is becoming clearer

The main question behind this research was when Ask Maps stops behaving like a directory and starts behaving more like a recommendation engine. Based on this testing, that shift starts earlier than many might expect.

Even at the most basic level, Ask Maps narrows, summarizes, and interprets. As prompts become more specific, situational, and trust-driven, its responses move further toward guided recommendations. At the highest level of complexity, it begins to look less like traditional local search and more like a system designed to help users make decisions.

That doesn’t mean Google Maps has fully changed into something else. But it does suggest the direction is becoming clearer. For local businesses and the people who support them, that makes this worth watching closely. Visibility inside Maps may increasingly depend not just on being present, but on being understood well enough for Google to explain why the business fits the user’s needs.

The 6 Agentic AI Protocols Every SEO Needs to Know

A user asks Gemini: “Find me a task chair under $400 with lumbar support and free shipping. Order the best one.”

The AI doesn’t open a new tab. It doesn’t ask the user to click anything. Instead, it queries product databases, cross-references reviews, checks real-time inventory, compares shipping policies, and initiates a checkout — all without a human touching a single page.

These are all things the user would have done themselves, but now in a fraction of the time, with as much effort as it took to write the initial prompt.

Okay, we might not be quite at the stage where everyone is letting AI agents make all their purchases for them. But it’s no longer an unrealistic future.

What made that possible isn’t the AI models themselves. It’s the infrastructure we’re seeing become an increasingly important part of how modern websites are built. This infrastructure consists of a stack of protocols that tells AI agents how to find each retailer’s site, understand their catalog, verify their claims, and take action.

These protocols define how AI agents interact with your brand. And most SEOs have no idea they exist.

By the end of this article, you’ll understand what each protocol does, how they differ from one another, and why you need to pay attention to what’s going on underneath the hood of AI search if you want to stay visible going forward.

Why Protocols Matter for SEOs

Protocols determine whether an AI agent can interact with your brand programmatically, or whether it has to guess. Brands that can speak the agent’s language are more likely not just to be surfaced, but also to be recommended and, ultimately, transacted with.

Think of how robots.txt and XML sitemaps became table stakes for search crawlers. Agentic protocols are shaping up to be that for AI agents.

Put simply: if you want agents to be able to take action on your site — whether that’s making a purchase, booking a table, or completing a form — you need to understand these protocols.

Note: We’re not suggesting that without these protocols AI agents and users will never access your site or buy from it. Agentic commerce is still pretty new, and even the protocols themselves are still evolving. But we believe that agents will increasingly act on behalf of users, and that the easier you make it for them to do that on your website, the better positioned you’ll be as agentic commerce becomes the norm.


The Protocol Stack: A Quick Map

These protocols aren’t competing standards fighting for dominance. They operate at different layers of the same stack, and most are designed to work together.

Here’s a quick breakdown of what these protocols do:

| Layer | What It Does | Key Protocols |
| --- | --- | --- |
| Agent / Tool | Connects agents to external data, APIs, and tools | MCP |
| Agent / Agent | Lets agents hand off tasks to other agents | A2A |
| Agent / Website | Lets websites become directly queryable by agents | NLWeb, WebMCP |
| Agent / Commerce | Enables agents to discover products and complete purchases | ACP, UCP |

Note: As with everything AI, the agentic protocols we’ll give more details on below are constantly evolving. This means some platforms are yet to adopt some of the protocols, and the specifics of each protocol could also change over time.


MCP: Model Context Protocol

MCP is the universal connector between AI agents and external tools, data sources, and APIs.

How It Works

Before MCP, every AI tool needed a custom integration for every data source it wanted to access. If you wanted a chatbot to pull live pricing from your database and cross-reference it with your CMS, someone had to build a bespoke connection between those systems. Then rebuild it whenever either one changed.

MCP standardizes that connection. Think of it as USB-C for AI: one protocol that lets any agent plug into any tool, database, or website that supports it.

An agent using MCP can pull live pricing data, check inventory, read structured content from a site, or execute a workflow, all through the same interface.

The website or tool publishes an MCP server, and the agent connects to it. There’s much less need for custom integration work on either side.
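
To make that concrete, here’s a minimal sketch of what publishing an MCP server can look like, assuming Anthropic’s TypeScript SDK. The exact import paths and method names can differ between SDK versions, and the getPrice() lookup is a hypothetical stand-in for your own data layer.

```typescript
// A minimal sketch, assuming Anthropic's TypeScript SDK
// (@modelcontextprotocol/sdk). Exact import paths and method names
// can differ between SDK versions; getPrice() is a hypothetical
// stand-in for your own catalog lookup.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "acme-store", version: "1.0.0" });

// Expose a "get_price" tool that any MCP-capable agent can discover and call
server.tool("get_price", { sku: z.string() }, async ({ sku }) => ({
  content: [{ type: "text", text: JSON.stringify(await getPrice(sku)) }],
}));

// Hypothetical lookup against your own product data
async function getPrice(sku: string) {
  return { sku, price: 349.99, currency: "USD", inStock: true };
}

await server.connect(new StdioServerTransport());
```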

Who’s Behind It

MCP was launched by Anthropic in November 2024. It has since been adopted by OpenAI, Google, and Microsoft. MCP is now governed by an open-source community under the Agentic AI Foundation (AAIF), a directed fund under the Linux Foundation.

As of early 2026, there are more than 10K MCP servers out there, making it the de facto standard for agent-to-tool connectivity.

What It Means for Your Brand

Structured data, clean APIs, and accessible HTML have always been good technical SEO. Now they’re also agent compatibility requirements. Brands with MCP-compatible data give agents something to work with. Brands without it force agents to scrape pages and infer meaning, which creates friction and can affect whether they recommend you.

A2A: Agent-to-Agent Protocol

A2A is the standard that lets AI agents from different vendors communicate, delegate tasks, and hand off work to one another.

How It Works

MCP lets an agent talk to tools. A2A lets agents talk to each other.

When a task is complex enough to need multiple specialist agents — like one for research, one for comparison, and one for completing a transaction — A2A is the protocol that coordinates them.

Each A2A-compliant agent publishes an “Agent Card” at a standardized URL (typically /.well-known/agent-card.json). This card advertises what the agent can do, what inputs it accepts, and how to authenticate with it. Other agents discover these cards and route tasks accordingly.

The result: agents from entirely different companies, built on different frameworks, running on different servers, can collaborate on a single user request. No custom-built connections required.
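
For illustration, an Agent Card is just a small JSON document. The fields below follow the general shape the A2A spec describes, but treat the specifics as assumptions rather than a complete, validated card:

```typescript
// Illustrative Agent Card, served at /.well-known/agent-card.json.
// Field names follow the A2A spec at a high level; treat this as a
// sketch, not a complete, validated card.
const agentCard = {
  name: "Acme Pricing Agent",
  description: "Answers pricing and availability questions for Acme products",
  url: "https://agents.acme.com/a2a", // endpoint other agents call
  version: "1.0.0",
  capabilities: { streaming: false },
  skills: [
    {
      id: "check-price",
      name: "Check price",
      description: "Returns current price and stock for a given SKU",
    },
  ],
};

console.log(JSON.stringify(agentCard, null, 2));
```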

Who’s Behind It

Google launched A2A in April 2025 with 50+ technology partners, including Salesforce, PayPal, SAP, Workday, and ServiceNow. The Linux Foundation now maintains it under the Apache 2.0 license.

What It Means for Your Brand

As multi-agent workflows become more common, agents may evaluate your brand across multiple checkpoints before a human sees the result.

That chain might look something like this:

  • A research agent surfaces your product from a broad category query
  • An evaluation agent reads your reviews and checks the sentiment
  • A pricing agent verifies your costs against third-party sources
  • A trust agent cross-references your claims for consistency

A2A orchestrates that entire chain. If your data is inconsistent across sources (say, your pricing page lists one price and your G2 profile lists another), the agents may filter your brand out of contention, all before the user even sees you as an option.

NLWeb: Natural Language Web

NLWeb is Microsoft’s open protocol that turns any website into a natural language interface, queryable by both humans and AI agents.

How It Works

Right now, when an AI agent visits your website, it might have to make a lot of guesses. It scrapes your HTML, infers meaning from your content, and relies on your page being structured properly to be able to parse it effectively. There’s a lot of room for error.

Once a site implements NLWeb, any agent can send a natural language query to a standard “/ask” endpoint and receive a structured JSON response. Your site then answers the agent’s question directly, rather than the agent interpreting your HTML.

Every NLWeb instance is also an MCP server. A site implementing NLWeb automatically becomes discoverable within the broader MCP agent ecosystem without any additional configuration.
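
As a rough sketch, a query against an NLWeb-enabled site might look like the following. Whether /ask accepts GET or POST, and the exact response fields, depend on the implementation; the constant is the idea of one endpoint returning structured, Schema.org-flavored JSON:

```typescript
// A sketch of an agent querying an NLWeb-enabled site. This version
// assumes a JSON POST; implementations vary in transport details.
const res = await fetch("https://www.example-store.com/ask", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    query: "Do you have task chairs with lumbar support under $400 in stock?",
  }),
});

// NLWeb responses lean on Schema.org types, e.g. a list of Product
// items with price and availability
const answer = await res.json();
console.log(answer);
```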

Who’s Behind It

NLWeb was created by R.V. Guha, the same person behind RSS, RDF, and Schema.org. (That’s no coincidence.) NLWeb deliberately builds on web standards that already exist, which means a lot of websites are close to NLWeb-ready right now.

Microsoft announced NLWeb at Build 2025 in May 2025. It’s open-source on GitHub. Early adopters include TripAdvisor, Shopify, Eventbrite, O’Reilly Media, and Hearst.

What It Means for Your Brand

For SEOs, NLWeb is a natural extension of work you may already be doing.

Schema markup, clean RSS feeds, and well-structured content are the foundation NLWeb builds on. Sites that have invested in structured data have a head start. Sites that haven’t are harder for agents to work with, but they can catch up by implementing schema markup now.

Structured data already helps search engines, and it can make it easier for agents to understand and interact with your site too. That increases the value of technical SEO work you may have been putting off.

WebMCP

WebMCP is a proposed W3C standard that lets websites declare their capabilities directly to AI agents through the browser.

How It Works

NLWeb makes your content queryable. WebMCP goes one step further: it lets websites declare what actions they support. These actions could include “add to cart,” “book a demo,” “check availability,” and “start a trial.”

These capabilities are declared in a structured, machine-readable format. Instead of an agent scraping your UI and guessing how your checkout works, WebMCP gives it an explicit map, straight from the source (you).
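
Because WebMCP is still being incubated, any code today is speculative. The hypothetical sketch below shows the general shape the proposal describes (the page registers a named action, its inputs, and a handler), not the final browser API:

```typescript
// Hypothetical sketch only: WebMCP is still being incubated at the W3C,
// so the final browser API will almost certainly differ. The shape is
// the point: the page registers a named action, its inputs, and a handler.
const modelContext = (navigator as any).modelContext;

modelContext?.registerTool({
  name: "add_to_cart",
  description: "Add a product to the shopping cart",
  inputSchema: {
    type: "object",
    properties: {
      sku: { type: "string" },
      quantity: { type: "number" },
    },
    required: ["sku"],
  },
  // The handler calls the same endpoint your own "Add to cart" button uses
  async execute({ sku, quantity = 1 }: { sku: string; quantity?: number }) {
    return fetch("/api/cart", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ sku, quantity }),
    });
  },
});
```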

Who’s Behind It

Google and Microsoft proposed WebMCP, and the W3C Community Group is currently incubating it. Chrome’s early preview shipped in February 2026, with broader browser support expected by mid-to-late 2026.

What It Means for Your Brand

WebMCP is the clearest preview of where agent-website interaction is heading.

Imagine you have two brands with similar products, similar pricing, and similar reviews. The one whose site declares clear, structured capabilities is easier for an agent to act on. The other requires guesswork.

Agents are likely to take the path of least friction, and WebMCP helps you reduce friction to a minimum.

ACP: Agentic Commerce Protocol

ACP is OpenAI and Stripe’s open standard for enabling AI agents to initiate purchases.

How It Works

ACP focuses specifically on the checkout moment. It creates a standardized way for an AI agent to complete a purchase on a merchant’s behalf, handling payment credentials, authorization, and security through the protocol itself.

Before ACP, an agent that wanted to complete a purchase had to navigate each merchant’s unique checkout flow. A different form, a different payment process, and a different confirmation step for every retailer. ACP standardizes this process.

Merchants integrate with ACP through their commerce platform, and once live, checkout becomes agent-executable. The user doesn’t have to do anything except approve.
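
Conceptually, the flow looks something like the sketch below. The endpoint names and payload fields are assumptions based on the evolving spec; the durable idea is that the agent opens a checkout session with the merchant and completes it with a delegated payment token rather than raw card details:

```typescript
// Illustrative only: endpoint names and fields are assumptions based on
// the evolving ACP spec, not a definitive implementation.
const base = "https://merchant.example.com";

// The agent opens a checkout session on the merchant side
const session = await fetch(`${base}/checkout_sessions`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    items: [{ id: "chair-ergo-400", quantity: 1 }],
  }),
}).then((r) => r.json());

// After the user approves, the agent completes the purchase with a
// delegated payment token instead of a raw card number
await fetch(`${base}/checkout_sessions/${session.id}/complete`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    payment: { token: "spt_example_123" }, // hypothetical delegated token
  }),
});
```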

ACP originally powered ChatGPT’s instant checkout functionality, but that has since been removed by OpenAI in favor of dedicated merchant apps. ACP may still power product discovery within ChatGPT, and may be used within these apps, but things are evolving fast.

Who’s Behind It

OpenAI and Stripe launched ACP in September 2025. It’s open-sourced under Apache 2.0, with platform support still expanding.

What It Means for Your Brand

If an agent has shortlisted your product and the user tells it to go ahead and pay, ACP is what allows the agent to complete the transaction. If your brand isn’t integrated with this workflow, you risk the AI agent getting stuck or being unable to complete that purchase.

The agent can recommend you, but it can’t buy from you. That gap will matter more as agentic commerce becomes the norm.

UCP: Universal Commerce Protocol

UCP is Google and Shopify’s open standard for the full agentic commerce journey, from product discovery through checkout and post-purchase.

How It Works

ACP focuses on the checkout moment, while UCP covers the entire shopping lifecycle.

An agent using UCP can discover a merchant’s capabilities, understand what products are available, check real-time inventory, initiate a checkout with the appropriate payment method, and manage post-purchase events like order tracking and returns. All through a single protocol.

UCP is built to work alongside MCP, A2A, and AP2 (Agent Payments Protocol), meaning it plugs into the broader agent infrastructure rather than replacing it.

Merchants publish a machine-readable capability profile. Agents then discover it, negotiate which capabilities both sides support, and proceed.
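
For illustration, a capability profile of the kind UCP describes might look like the sketch below. The field names are assumptions for illustration; the real schema comes from the UCP spec:

```typescript
// Illustrative capability profile of the kind UCP describes, published
// at /.well-known/ucp. Field names are assumptions for illustration.
const ucpProfile = {
  name: "Acme Store",
  version: "2026-01",
  capabilities: [
    "catalog.search",  // product discovery
    "inventory.check", // real-time stock
    "checkout.create", // initiate a purchase
    "orders.track",    // post-purchase events
    "orders.return",
  ],
  endpoints: { api: "https://api.acme.com/ucp" },
  payments: ["card", "ap2"], // designed to work alongside AP2
};

console.log(JSON.stringify(ucpProfile, null, 2));
```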

Who’s Behind It

Google and Shopify co-developed UCP, with Google CEO Sundar Pichai announcing it at NRF 2026. More than 20 launch partners signed on, including Target, Walmart, Wayfair, Etsy, Mastercard, Visa, and Stripe.

What It Means for Your Brand

When a user asks Google AI Mode to find and buy something, UCP determines whether your brand is in the conversation, and whether the agent can actually complete the transaction.

The machine-readability of your product data, the consistency of your pricing across sources, the clarity of your inventory signals: all of it feeds directly into whether an agent can successfully transact with you.

ACP vs. UCP: The Key Difference

ACP and UCP are often confused, and they do share some similarities, but here’s where they differ:

  • Built by: ACP comes from OpenAI and Stripe; UCP comes from Google and Shopify.
  • Scope: ACP covers the discovery and checkout layers; UCP covers the full journey of discovery, checkout, and post-purchase.
  • Powers: ACP powers product discovery in ChatGPT (and previously its instant checkout); UCP powers Google AI Mode and Gemini.
  • Architecture: ACP uses centralized merchant onboarding; UCP is decentralized, with merchants publishing capabilities at /.well-known/ucp.
  • Status (early 2026): both are live, with wider rollouts in progress.

ACP and UCP are complementary, not competing. A brand may eventually support both — one for ChatGPT’s ecosystem, one for Google’s.

For now, the practical question is: which platforms matter most to your customers, and where does your commerce infrastructure make integration easiest? Choose the protocol that aligns with your answer, or use both.

Example of Agentic Search Protocols in Action

These protocols don’t operate in isolation. Here’s what they might look like working together (note that this isn’t necessarily exactly what’s going on at each stage, and is just for illustrative purposes):

Scenario: A user asks Gemini: “Find me a comfortable task chair under $400 with lumbar support and free shipping. Order the best option.”

Step 1: MCP Activates

The agent uses MCP to connect to external tools: product databases, review platforms, retailer inventory feeds. It can query live data rather than relying on cached or trained knowledge.

Step 2: A2A Coordinates

The agent then coordinates with specialist agents published by brands and review platforms via A2A. One evaluates ergonomics reviews. One checks pricing consistency across sources. One verifies free shipping claims against each retailer’s actual policy page.

Step 3: NLWeb Answers Queries Directly

The agents query each retailer’s site. Brands with NLWeb implemented respond to the agent’s /ask query with structured data. This includes things like accurate inventory, real-time pricing, and product attributes. Brands without it force the agent to scrape and infer, which slows the process down and can get them skipped altogether.

Step 4: WebMCP Declares Available Actions

The “winning” retailer’s site has declared its checkout capabilities via WebMCP. The agent knows exactly what actions are available and how to initiate them without any guesswork.

Step 5: UCP Completes the Transaction

The purchase is executed via UCP, entirely within Google’s AI experience. The merchant’s backend communicates through the standardized API. The user gets an order confirmation without ever having visited a single product page.

Obviously, this is the fully agentic scenario. In reality, not every purchase is going to be left entirely to an AI agent.

But even when a human wants to evaluate options before clicking buy, making it as easy as possible for the agent to make recommendations is still good practice. That’s why these protocols are worth paying attention to.

What SEOs Should Do Now

Understanding the protocol layer is step one. Here’s where to focus next:

1. Prioritize Machine-Readable Content Over Volume

Before adding more pages, make sure your existing pages can be parsed cleanly by an agent. That means:

  • Having your pricing in plain text, not locked behind JavaScript drop-downs
  • Using feature lists that don’t require interaction to reveal
  • Including FAQ content that renders server-side
  • Using schema markup on product and organization pages (see the sketch below)

An agent that can’t read your page can’t recommend or buy your products.
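
On the schema markup point above, here’s a minimal Schema.org Product object of the kind agents can parse without executing your UI. All values are placeholders; in practice you’d serialize it into a server-rendered script tag of type application/ld+json on the product page:

```typescript
// A minimal Schema.org Product object; all values are placeholders.
// In practice you'd serialize this into a server-rendered
// <script type="application/ld+json"> tag on the product page.
const productSchema = {
  "@context": "https://schema.org",
  "@type": "Product",
  name: "ErgoFlex Task Chair",
  description: "Task chair with adjustable lumbar support",
  offers: {
    "@type": "Offer",
    price: "349.99",
    priceCurrency: "USD",
    availability: "https://schema.org/InStock",
  },
};

console.log(JSON.stringify(productSchema));
```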

2. Audit Your Structured Data

NLWeb builds on Schema.org, RSS, and structured content that sites already publish. If you’ve invested in schema markup, you have a head start on NLWeb compatibility.

If you haven’t, this is now a double reason to prioritize it: it improves your search visibility and makes your site more easily queryable by agents.

3. Check Your Consistency Across Sources

Agents verify claims by cross-referencing your site, review platforms, and third-party content. If your pricing page says one thing and your Capterra profile says another, agents can flag the discrepancy and lose confidence in your brand, making the recommendation or purchase less likely.

Audit for cross-source consistency the same way you’d audit NAP consistency in local SEO. It’s the same underlying principle, just for a different kind of crawler.
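
If you want to automate part of that audit, a rough sketch might look like this. The URLs and the extractPrice() helper are hypothetical; in practice you’d pull prices from your own pages, feeds, and third-party profiles however your stack allows:

```typescript
// A rough sketch of a cross-source consistency check. The URLs and
// extractPrice() are hypothetical stand-ins.
const sources = [
  "https://www.acme.com/pricing",
  "https://www.example-reviews.com/acme", // stand-in for a review profile
];

async function auditPricing(): Promise<void> {
  const prices = await Promise.all(
    sources.map(async (url) => {
      const html = await fetch(url).then((r) => r.text());
      return { url, price: extractPrice(html) };
    })
  );
  // Flag any disagreement between sources
  const unique = new Set(prices.map((p) => p.price));
  if (unique.size > 1) {
    console.warn("Pricing mismatch across sources:", prices);
  }
}

// Hypothetical: naive regex extraction, for illustration only
function extractPrice(html: string): string | null {
  return html.match(/\$\d+(?:\.\d{2})?/)?.[0] ?? null;
}

await auditPricing();
```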

4. Get on the ACP and UCP Waitlists Now

These protocols are in active rollout. Early adopters benefit from lower competition in agent-mediated commerce while the rest of the ecosystem catches up. Join Stripe’s waitlist for ACP access. And join Google’s UCP waitlist too.

For other protocols like MCP, talk to your dev team about making sure your site supports them.

5. Monitor Your AI Footprint as a Regular Practice

Search your brand in ChatGPT, Perplexity, and Google AI Mode. Are agents describing your product accurately? Is your pricing consistent with what they’re surfacing? Are competitors appearing where you aren’t?

This is the new version of checking your SERP presence, and it needs to become a recurring part of your workflow, not a one-time audit.

Understand how your brand is appearing in AI search right now with Semrush’s AI Visibility Toolkit. It shows you where you’re showing up, where you’re behind your rivals, and exactly what AI tools are saying about your brand.

What’s Next for Agentic Search Protocols?

The protocols we’ve discussed here are already live, but they’re still evolving.

WebMCP is still in early preview. ACP and UCP are mid-rollout. New protocols — for agent payments, agent identity, agent-to-user interaction — are still being drafted and debated.

But the SEOs who understand and implement these protocols correctly are the ones most likely to stay visible as agentic search matures.

Find out where your brand stands right now with our free AI brand visibility checker.

The post The 6 Agentic AI Protocols Every SEO Needs to Know appeared first on Backlinko.

Read more at Read More

Google Ads MCC hacked? Here’s what to do immediately

At midnight on Jan. 5, hackers took over our Google Ads Manager Account (MCC). We weren’t alone. While it’s hard to get an exact count, hundreds, if not thousands, of agencies have been affected by the hacks, in turn affecting tens of thousands of accounts.

While I wouldn’t wish this experience on our worst enemy, having been through it, I have some insights that I hope can help you prevent the same experience from happening to your MCC account.

How we were hacked

Despite having two-factor authentication (2FA) and allowed domains enabled, the hackers were able to get into our account via an employee’s email address. It was clearly a targeted hack: the night of the hack, the hackers tried to get in via two other email accounts at our company before they succeeded with the third.

While phishing or compromised passwords may have originally gotten them into the system — we still don’t know which — we later learned that the account the hackers used had been compromised for months and that they had created their own 2FA that they had been using all along.

Once they gained access to our account, the hackers removed everyone else’s access to the MCC. They then changed the allowed domain to Gmail and granted access to over a dozen people. The hackers then created a new MCC in our company’s name and invited most of our clients. Luckily, none of them accepted.

In the few hours they were in the MCC, the hackers proceeded to create chaos. They removed all the users from some accounts and changed the payment method in others. They launched new campaigns on only a few accounts, yet somehow also attempted half-million-dollar credit card charges on two others (despite not running any ads in those accounts).


What happened after the hack

We were very lucky. The hackers were locked out within eight hours, and we regained access in just over a week. They spent only about $100 across the MCC. Neither crazy credit card charge went through. We were fully recovered from the hack within two weeks. How did we do this? Let’s take a look at the steps we took.

Step 1: We contacted Google

When we were hacked, we immediately contacted our reps at Google. We’re incredibly lucky to have wonderful Google reps with whom we’ve built longstanding relationships, including one we’ve worked with for over three years. 

These long-term relationships helped, and our reps went to bat for us. They continued to put pressure on the support cases until they were resolved and helped connect us to the resources we needed. Not everyone has their own reps, but you can also take these steps on your own.

Step 2: Fill out the forms

Our Google reps immediately directed us to their “What to do if your account is compromised” resource. From there, we filed Account Takeover Forms, alerting Google to the hack. We were directed to file a form for each of our accounts that had been hacked.

We first filed one for our MCC, even though the form, at the time, said not to use it for MCCs. It looks like that language has since been changed, which is great — don’t skip this step. Getting back into the MCC makes it easier to resolve all issues, rather than having to file tickets and coordinate access for each account.

Step 3: Contact clients

At the same time, we directed any clients who still had access to their accounts to disconnect them from our MCC, and to grant access to a non-compromised email account. That way we were able to secure the accounts, work on them, and mitigate any damages immediately. We were also able to triage our accounts to figure out which we were still able to access, and which had no admins left with access.

Step 4: Reset billing

Disconnecting from our MCC wound up being a very important step. That’s because when our accounts were disconnected from the MCC, we were easily able to reset the billing by editing the payment manager and undoing all of the payment chaos that the hackers had created. We were then able to reconnect them without issue.

Step 5: Check change history

When we eventually did get back into the accounts, we immediately checked the change history, which we were able to do at the MCC level for additional speed. All the changes the hackers made during that time were there with time stamps, allowing us to put together a timeline of the hack and remediate any remaining issues.

Best practices for recovering from a hack

During all this activity, a few things were especially critical to our success in recovering the account and mitigating damage. Here’s a quick rundown of best practices to keep in mind.

Make sure clients have access

This isn’t just a best practice, but something we believe should always be the case for ethical reasons. Having additional admins in the account let us regain access immediately, despite being locked out of the MCC, and remediate issues without losing time or momentum. 

Google also pushed back on any access or billing changes that didn’t have approval from an existing admin, so having people still in the accounts was critical.

Keep your MCC clean

Remove old clients and any MCCs for tools you’re no longer using. We didn’t do this and wish we had. We’ve made it a best practice for our accounts moving forward.

Limit team access

Make sure your team only has the minimum access they need. Standard access is great. Admin access should be reserved for as few people as possible. The compromised account belonged to a junior team member who didn’t need admin-level access. 

This isn’t to say they wouldn’t have gotten in through a more senior team member’s account — as mentioned, they did try to get in through several before succeeding — but it would have mitigated risk.

Use credit cards or invoices

Never connect your bank accounts to your MCC. We’ve heard of companies that have lost hundreds of thousands of dollars with this same kind of hack. Because our clients were all either on invoice or credit cards, the hackers couldn’t quickly spend money in a way that hit their accounts. 

As noted earlier, the credit card companies rejected the very suspicious half-million-dollar charges the hackers attempted to make, and notified the credit card holders. The clients we were invoicing were never charged, and everything was captured on the invoices before billing.

Invest in relationships

It’s important to invest in your relationships with your Google reps, and fellow agency owners. We remain incredibly grateful to all of the people who helped us, or even just commiserated with us along the way. This experience would’ve been even more painful if we’d had to go through it alone.

How to prevent being hacked

For those who have yet to be hacked, congratulations! Let’s try to keep it that way. Here are some things you can do to make it much less likely that this will ever happen to your accounts.

Start with a clean reset

Begin by kicking every single user out of your account, and have everybody on the accounts reset their passwords. Make sure you log everyone out of every session they were in on every device. 

The hackers had been auto-logging in and keeping their sessions open for over two months before the night they took over the MCC. If we’d forced a reset and logged everyone out, we would’ve removed their access without even realizing it.

Enable 2FA and allowed domains

Make sure there’s only one 2FA method per person. Methods that use authenticator apps or physical security keys are stronger than push notifications to a device. The hackers had created their own 2FA to get into our employees’ accounts, and we never had any idea it was happening.

Audit and limit access

Make sure the minimum number of people have the minimum access they need to the MCC. This reduces your risk.

Enable multi-party approval

Google rolled out this feature recently to help prevent account takeovers. Essentially, it requires that a second admin verify any big changes before they happen. If you’d like to read up on this feature, here’s a great guide introducing multi-party approval.

Back up your accounts

You can copy and paste your accounts into your preferred spreadsheet app via Google Ads Editor. Make a habit of doing this periodically so you’ll always have a copy of how things were in case of a hack. With backups in place, you can easily revert if you need to.

Use strong passwords

It’s important to use unique passwords that aren’t being used anywhere else. That way, if one site gets hacked, your MCC is still not at risk. We’re still not sure how the hackers passed the initial password stage to be able to create their own 2FA.

Invest in security monitoring

If you want to be extra careful, invest in security software and/or a cybersecurity expert to monitor your system. We have now done this, and it’s been amazing (and scary) to see how many phishing attempts have already been caught in the six weeks since we did it.

A note for clients: If you’re a client and another team is managing your Google Ads, do not accept any Google Ads MCC access requests that you aren’t expecting. Please make sure you always know who and what you’re giving access to. When in doubt, double-check with the team that is managing your account. A little caution can go a long way.


Stay safe out there

The good news is that Google knows about these issues and is actively finding ways to tighten its systems to prevent hacks. In the meantime, I hope this article has helped make our loss your gain. An ounce of prevention can save you a pound of pain.

Read more at Read More

How Google’s removal tools work for SEO and reputation management by Erase Technologies

When a client calls about a damaging search result, you might typically default to one of two responses: “we can suppress it” or “there’s nothing we can do.” Both skip the middle ground — where Google’s removal tools live.

Google provides tools to remove or deindex content from search results. They’re underused, frequently misunderstood, and often conflated.

This guide breaks down what each tool does, when to use it, and what it can’t do — so you can triage client situations accurately and set expectations that hold.

The distinction that changes everything: removal vs. deindexing

Before you use any tool, get one thing right with clients: the difference between two outcomes that look the same but aren’t.

  • Removal at source: The content is deleted from the site where it lives. Once removed, Google will drop it from its index as it re-crawls the page. This is the cleanest outcome — but it requires the site owner to act. Google’s tools can’t force it.
  • Deindexing: Google removes the URL from its index, so it won’t appear in search results — even if the page still exists. Anyone with the direct URL can still access it. This is what most of Google’s self-service tools do.

The practical implication: deindexing fixes a search problem, not a content problem. If the content is the liability — a news article, court record, or damaging forum post — deindexing reduces risk but doesn’t eliminate it. That context matters when you advise clients.

Google’s removal tools, explained one by one

1. The URL removal tool (Search Console)

In Google Search Console under Index > Removals, this tool lets you temporarily hide a URL or directory from search results. Removal lasts about six months. If the URL still exists, it may reappear.

  • Who it’s for: You, if you control the site in Search Console. You can’t use it to remove someone else’s content.
  • Common use case: Your site has an outdated page you don’t want surfacing — old press releases, deprecated product pages, or pages you’ve updated or removed.
  • What it won’t do: Remove content from a site you don’t control. This misconception causes significant client frustration.

2. The outdated content removal tool

This is the public tool to request deindexing of pages already removed or significantly changed at the source.

  • When it works: The content is gone (the page 404s or the content is removed), but Google still shows a cached version. You submit the URL, Google recrawls it, and if the content is gone, it removes the result and cached snippet.
  • When it doesn’t: The page still exists and the content is live. Google will verify it and reject the request.
  • Practical use: After you’ve removed content at the source, use this to speed up deindexing instead of waiting for the next crawl. It’s not a removal tool — it triggers a recrawl.

For a more technical breakdown, see this step-by-step guide to Google’s removal tools.

3. The Results about you tool

Launched in 2022 and expanded in August 2023, the Results About You tool lets you request the removal of specific categories of personal information from Google Search. It added proactive alerts and broader coverage, then expanded again in early 2026 to include government-issued IDs, passport data, Social Security numbers, and improved reporting for non-consensual explicit imagery, including AI-generated deepfakes.

  • What it can remove:
    • Home addresses and precise location data
    • Phone numbers
    • Email addresses
    • Login credentials and passwords
    • Credit card and bank account numbers
    • Images of handwritten signatures
    • Medical records
    • Personal identification documents (passports, driver’s licenses)
    • Explicit or intimate images shared without consent
  • What it can’t remove: General information that falls outside these categories — news articles, reviews, social posts, court records, or professional information. Those require different paths.
  • Why it matters: If you’re dealing with doxxing, data broker sites, or exposed sensitive data, you now have a self-service path. Managing this tool is increasingly part of ORM work.

4. Legal removal requests

For content outside self-service categories, you can submit legal removal requests to Google:

  • Defamation: False statements of fact about an identifiable person.
  • Copyright (DMCA): Unauthorized use of copyrighted material.
  • Court orders: Legally binding orders requiring removal.
  • Right to be Forgotten (EU/UK): Requests under GDPR and UK law, based on the 2014 Google Spain v. AEPD ruling.
  • Other legal grounds: Harassment, illegal imagery, or other violations.

Google’s legal team reviews these requests; they aren’t automatic, and approval isn’t guaranteed. Defamation has a high bar: the content must be false, not just negative. A bad review isn’t defamation; an inaccurate factual claim may be.

Right to be Forgotten applies only if you’re in the EU or UK. It allows deindexing from Google’s European search properties. It doesn’t remove content globally or impact U.S. search.

5. The personal content removal form

Separate from Results About You, this Google form handles requests to remove non-consensual explicit images, doxxing content, and certain sensitive information on other sites.

This process is more manual. Google reviews the external site content rather than just deindexing a URL. Approval rates are higher for explicit imagery than for other categories, but the process is slower and less predictable.

What none of these tools do

Understanding the limits matters as much as knowing the tools. None of Google’s removal tools will:

  • Force a third-party site to delete content.
  • Remove content from other search engines (Bing, Yahoo, DuckDuckGo).
  • Remove content from Google Images, News, or Maps without separate requests.
  • Permanently fix the underlying content problem.
  • Remove results that are accurate, lawful, and in the public interest.

That’s why suppression remains core to reputation management: when you can’t remove content, you push it down with authoritative, well-optimized content.

How to triage a client removal situation

A practical decision flow for incoming removal requests:

Step 1: Can the client control the source site? 

If yes, remove it at the source, then use the outdated content tool to speed up deindexing.

Step 2: Is it personal information in Google’s covered categories? 

Use Results About You.

Step 3: Is there a legal basis? 

Defamation, copyright, court order, or GDPR right to be forgotten. If yes, file the appropriate request and set realistic timelines (weeks to months, not days).

Step 4: Is it none of the above? 

Suppression is likely the primary path. Build a content and link strategy around the branded SERP to displace the result over time. 

For high-stakes cases — like non-consensual content or permanent court records — firms like Erase.com handle direct outreach and legal escalation on a pay-for-success basis, bridging the gap between DIY tools and litigation.

Setting realistic client expectations

The most common client mistake is expecting Google to act like a content moderator. It isn’t. 

Google’s removal tools cover specific, narrow categories. Outside them, Google defaults to indexing what exists on the web.

Set this expectation upfront to protect the client relationship. It also positions suppression not as a fallback, but as the right tool for most ORM situations.

When removal is viable, these tools have improved over the past two years. Results About You has expanded and should be included in your standard ORM audit. The outdated content tool remains underused and is a quick win when source removal has already happened.

Know the tools. Use them where they apply. Suppress where they don’t.

Read more at Read More

Three new tasks, better navigation, and a bug fix in the Yoast SEO Task List 

We launched the Yoast SEO Task List in December to give you a clear, actionable to-do list for your site’s SEO. In this update, we’ve added three new tasks, improved how you navigate to fixes, and resolved a bug that was showing tasks in the wrong language. 

A quick recap: what does the Task List do? 

The Task List scans your site and surfaces specific content that needs attention, ranked by priority with an estimated time to fix. Instead of guessing what to work on next, you click a task and Yoast takes you directly to the right place to make the improvement. Think of it as a personal SEO assistant that knows your site. 

What’s new in this update 

New task: improve your meta descriptions 

Meta descriptions are the short snippets that appear under your page title in Google search results. They don’t directly affect rankings, but they have a significant impact on whether someone clicks your link. The Task List will now flag recent posts where the meta description is missing or could be stronger, and point you to where you can fix it. Premium users can use the AI Generate button to write one in seconds. 

New task: delete your sample page 

Every new WordPress site comes with a default “Sample Page” that most people never delete. It adds no value and can create unnecessary noise for search engines. The Task List will now remind you to remove it if it’s still there. It’s a two-minute job that’s easy to overlook. 

New task: set social sharing images  

Available with Yoast SEO Premium, Yoast WooCommerce SEO, and Yoast SEO AI+

When someone shares your content on Facebook or X, the image that appears alongside it can make a real difference to whether people click. The Task List will now remind you to set a custom social sharing image for your posts and pages, so your content looks its best every time it gets shared. 

Go directly to the right place in the editor 

Previously, clicking a task would open the post editor and leave you to find the right section yourself. Now, Yoast takes you to the exact part of the editor you need: the SEO tab, the readability panel, or the meta description field. Less scrolling, faster fixing. 

Bug fix: tasks now appear in your language 

We fixed a bug where task descriptions were showing up in the site’s language rather than the logged-in user’s language. If you manage a multilingual site, or your personal language settings differ from your site’s default, tasks will now display correctly for you. 

Also in this release 

  • We’ve added a new Yoast tab to the WordPress Plugins screen that groups all your installed Yoast plugins in one place. This requires WordPress 7.0+. 
  • We fixed a bug where alt text changes made via the inline image editor in How-to and FAQ blocks weren’t saving correctly to the frontend. Thanks to @param-chandarana for the report. 

What’s coming next 

We’re continuing to expand the Task List with improvements that surface high-impact changes specific to your content. Users of paid plans will see additional tasks in upcoming releases.

Update to Yoast SEO 27.4 to get these improvements automatically, or download the latest version from the WordPress plugin directory. 

The post Three new tasks, better navigation, and a bug fix in the Yoast SEO Task List  appeared first on Yoast.

Read more at Read More

Google is bringing back a familiar name: Data Studio

In an AI-driven economy, companies have more data than ever but still struggle to turn it into useful daily decisions. Google is betting that a revamped Data Studio can become the place where users quickly explore, organize and act on data across its ecosystem.

Why the switch back. Google says the new Data Studio will serve as a central hub for a range of assets, from traditional reports and dashboards to data apps built in Colab and BigQuery conversational agents. The idea is to give users one place to work with the tools and information that shape their business each day.

Flashback. Three years ago, Google folded Data Studio into its broader analytics push by rebranding it as Looker Studio. Now, it is separating the products again as customer needs evolve.

Two versions. Google is launching two versions of the product.

  • Data Studio will remain free for individuals and small teams that need quick analysis and visualization.
  • Data Studio Pro, meanwhile, is aimed at larger organizations that need stronger security, compliance, management controls and AI capabilities, with licenses sold through the Google Cloud and Workspace admin consoles.

Why we care. The (kind of) new Data Studio could make it much easier to pull together campaign, audience and performance data from across Google’s ecosystem in one place. That means faster reporting, easier ad hoc analysis and quicker answers without relying as heavily on analysts or engineering teams. For brands already using Google Ads, BigQuery or Sheets, it could streamline how teams track performance and make day-to-day budget and creative decisions.

Where Looker fits in. Under the new structure, Looker will remain Google Cloud’s enterprise business intelligence platform, focused on governed data, semantic modeling and large-scale analytics. Data Studio, by contrast, is being positioned as the faster, more flexible option for personal exploration, ad hoc reporting and lightweight dashboards across services like BigQuery, Google Sheets and Ads.

What’s next. For existing users, Google says the transition should be seamless. Current reports, data sources and assets will carry over automatically, with no action required.

Google plans to share more about the relaunch and its broader analytics strategy at Google Cloud Next ’26 later this month.

Dig deeper. Data Studio returns as new home for Data Cloud assets

Read more at Read More

How to Do Keyword Research for SEO

Key Takeaways

  • Keyword research is the process of finding and analyzing the search terms your audience uses to determine which ones are worth targeting and why.
  • Search intent, keyword difficulty, search volume, and topical authority are the core variables that determine whether a keyword is a viable target for your site.
  • AI Overviews now appear in a significant share of searches and measurably reduce click-through rates. 
  • Long-tail keywords carry more weight than ever. They convey highly specific intent and mirror the natural language patterns behind voice and LLM queries.
  • Prompt research is a discipline that sits alongside traditional keyword research. It accounts for how people interact with AI tools, where query structure and user intent differ meaningfully from traditional search.

Have you been tracking your target keywords, only to watch rankings hold steady while organic traffic falls? 

You’re not imagining it. 

According to SEOClarity, AI Overviews (AIOs) appear for 30 percent of U.S. desktop searches, and according to Ahrefs, that presence alone reduces organic click-through rate (CTR) for position-one results by 58 percent.

You might think that makes keyword research for SEO less important now, but that couldn’t be further from the truth. 

Your research still matters. What’s changed is the goal. High-volume terms alone won’t cut it anymore. 

You need to identify which keywords still drive clicks and understand how large language model (LLM) prompts are reshaping the demand signals you rely on.

This guide covers the full research process, updated for how search works today.

What Is Keyword Research?

Keyword research is the process of identifying and analyzing the search terms your target audience types into search engines and LLMs. The goal is to determine which terms are worth targeting based on factors like the intent behind a user’s query.

Intent is the why behind what people search, and it’s an area many teams underinvest in.

Finding a high-volume keyword is easy enough. The harder part is understanding the true intent behind the keyword. That’s the key to making sure your content satisfies that intent better than what’s already ranking.

Why Is Keyword Research Important for SEO?

Creating content without keyword research is a gamble. 

Sure, you might produce something useful. However, without confirming what people are actually searching for and that you have a realistic shot at ranking, you’re spending resources on content that may never be found.

Keyword research solves for three variables that determine whether a keyword is worth pursuing:

  • Search volume tells you how many people are looking for a term each month. A keyword with zero volume isn’t worth a dedicated page. Search volume alone doesn’t close the case, though: the vast majority (94.74 percent) of keywords receive 10 or fewer monthly searches, which means much of the real opportunity sits in low-volume, high-relevance terms that can still drive traffic that converts.
  • Keyword difficulty tells you how competitive a keyword is based on the authority of the pages currently ranking for it. This is where many teams misjudge their opportunities. A keyword with a high difficulty score might be within reach for a high-authority domain but completely out of scope for a site with limited backlink equity. Targeting beyond your domain’s current authority just adds to your backlog.
  • Topical authority has become increasingly important over the past two years. Google has gotten a lot better at evaluating whether a domain demonstrates depth and consistency within a topic area. Keyword research should inform a content strategy that builds clusters of related content rather than targeting disconnected terms.

There’s also the AI layer. 

AIOs now appear in a significant share of searches and reshape the value of a keyword depending on whether one shows up. 

Research from Seer Interactive tracking 3,119 informational queries finds that organic CTR dropped 61 percent for queries with AIOs compared to queries without them.

Notice how a more conversational long-tail keyword on the same subject triggers a Google AIO, while a product-focused search does not:

Google AI Overview for how to do keyword research

Source: Google.com

Google results for keyword research tools query

Source: Google.com

See how small differences in keywords can drastically change your results? This is why doing proper keyword research is important.

Long-tail keywords are more likely to trigger AIOs, which means users get their answer without clicking through. 

That’s worth knowing, but it’s not a reason to abandon those keywords. Flag them during analysis and see where they fit in your broader strategy.

Why Search Intent Is Important for Keyword Research

Search intent is the underlying goal behind a query. 

Google organizes intent into four broad categories: 

  • Informational (users want to learn something)
  • Navigational (users are looking for a specific site or brand)
  • Commercial (users are comparing options before a purchase)
  • Transactional (users are ready to buy or act)

Four keyword intent types chart by NP Digital

Intent type is a big deal because Google matches results to intent. 

An e-commerce product page won’t rank for a query that Google interprets as informational. A how-to article won’t win for a transactional query where users want a product listing. 

No amount of optimization compensates for a content-to-intent mismatch.

Use keyword research for SEO to verify intent before you commit to a content format. The fastest way to do this is to run the keyword in Google and see what’s ranking. 

If listicles dominate page one, that’s what Google thinks the searcher wants. If product pages own the top positions, a blog post isn’t going to break through.

“What sort of things do they search for during the awareness, research, and transaction phases of their buying journey? Target each of these clearly in different areas of the website by bucketing groups of terms into these different intent groups,” explains William Kammer, Vice President of SEO at NP Accel.

Bucketing your keyword list by intent before mapping keywords to pages is one of the most practical things you can do to make sure your SEO efforts match how your audience actually moves through the funnel.
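
If you’re working with a large keyword list, a simple modifier-based heuristic can do a first pass at this bucketing before you verify against the live SERP. The sketch below is illustrative; the modifier lists are assumptions you’d expand with your own audience’s language:

```typescript
// A first-pass intent bucketing sketch using modifier heuristics.
// The modifier lists are assumptions; always verify against the SERP.
type Intent = "transactional" | "commercial" | "navigational" | "informational";

const MODIFIERS: Record<Intent, RegExp> = {
  transactional: /\b(buy|order|coupon|discount|pricing|price)\b/i,
  commercial: /\b(best|top|vs|review|comparison|alternatives)\b/i,
  navigational: /\b(login|sign in|dashboard|app)\b/i,
  informational: /\b(how|what|why|guide|tutorial|examples)\b/i,
};

function bucketByIntent(keywords: string[]): Record<Intent, string[]> {
  const buckets: Record<Intent, string[]> = {
    transactional: [], commercial: [], navigational: [], informational: [],
  };
  for (const kw of keywords) {
    // First matching modifier wins; default to informational and verify later
    const intent =
      (Object.keys(MODIFIERS) as Intent[]).find((i) => MODIFIERS[i].test(kw)) ??
      "informational";
    buckets[intent].push(kw);
  }
  return buckets;
}

console.log(bucketByIntent(["best hiking boots", "buy task chair", "how to lace boots"]));
```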

Prompt Research and AI Visibility

Traditional keyword research focuses on what people type into Google. 

Prompt research focuses on how people interact with AI tools like ChatGPT, Perplexity, and Gemini. The patterns across them are quite different.

When someone searches Google for “email marketing tools,” they enter that short phrase (or a close variant) and scan a list of results. 

When someone asks ChatGPT the same question, the query looks more like this: “I run a small e-commerce business, and I’m looking for an email marketing tool that integrates with Shopify and has automation features. What would you recommend?”

The intent might be the same, but the structure and the specificity are completely different.

LLMs take these longer queries and break them down into three key components:

  • Persona: Defines who the user is and helps the LLM tailor the response to them
  • Context: Identifies the user’s specific needs and narrows the scope of the answer
  • Question: The actual “ask” contained within the query defines the LLM’s output

Anatomy of an AI prompt persona context question

Source: Claude.ai

This structural difference affects your content strategy. 

LLMs synthesize information from multiple sources to generate a response. They evaluate content for credibility and depth. 

A page optimized around a head keyword might rank well in Google but never appear in an LLM response if it doesn’t fully answer the underlying question a user would actually ask.

Prompt research is the practice of identifying the underlying questions within the full, natural-language queries people use when interacting with AI tools and the keyword-related topic clusters those queries reveal.

Think of it as keyword research for a different interface. LLMs use a process called query fan-out, breaking out a single user prompt into multiple sub-queries to retrieve information. That means your content needs to answer not just the surface question but the related ones surrounding it.

A quarter of search volume has already shifted toward AI-driven chatbots and answer engines, according to Gartner. 

That shift is gradual, but it’s not stopping. Get ahead of it now by building prompt research into your workflow alongside traditional keyword research.

How to Do Keyword Research

Good keyword research starts with the same core process regardless of where you’re starting. Here’s how to work through it, whether you’re building a content strategy from scratch or auditing an existing one.

Six-step keyword research process by NP Digital

1. Revisit Your SEO Goals

Before you open a keyword tool, get clear on what you’re trying to accomplish. Your keyword strategy should follow from your business goals, not the other way around.

A site prioritizing revenue will have a different keyword mix than one focused on growing organic traffic volume. A brand building topical authority in a new vertical needs different content targets than one trying to hang on to existing rankings. 

Your objectives will dictate the metrics you optimize for and which parts of the keyword funnel you invest in first.

Three common goal types shape keyword priorities:

  • Conversion-focused goals call for commercial and transactional keywords. These terms sit at the bottom of the funnel and carry strong purchase or sign-up intent. They also tend to have higher keyword difficulty. That means traffic volumes are often lower, but the quality is high.
  • Traffic-growth goals point toward informational keywords with higher search volumes. These terms attract users earlier in the funnel and are generally easier to rank for, though they convert at lower rates.
  • Topical authority goals are where keyword clusters shine. These are groups of semantically related terms that together signal depth of expertise to Google. The cluster approach is a longer-term play, but it’s often the only sustainable way to rank for the high-difficulty terms in competitive verticals.

Keep your competition in mind as you match keywords to goals, too. 

If a transactional keyword is out of reach for your domain right now, targeting it could hurt your conversion goals and waste resources. A smarter move is finding long-tail keywords around the same seed and intent as a backdoor into that topic.

2. Keyword Discovery

Keyword discovery is where you build a broad list of potential targets before narrowing it down during analysis. A lot of teams spend too much time here without a clear method. Here’s one that works.

Start by mapping your core topic areas from your audience’s perspective. Consider their pain points and the industry terminology they naturally use. These become your seed keywords: the starting points you’ll expand through tools.

From there, enter your seed keywords into a keyword tool. 

My SEO tool, Ubersuggest, has a Keyword Ideas feature that gives you dozens of variations to shape the focal point of your content. 

Here’s what it delivers for the seed keyword “hiking boots”:

Ubersuggest Keyword Ideas results for hiking boots

Source: https://app.neilpatel.com/en/ubersuggest/keyword_ideas/

Run enough seed keywords through the tool to build a list of hundreds of candidates before you start cutting.

Your competitors are a valuable source of keyword ideas, too. Pull competitor domains into Ubersuggest’s Keywords by Traffic feature to see which keywords are driving traffic to their pages. This surfaces real gaps in your strategy rather than theoretical ones.

Here’s what you get when you search my domain, neilpatel.com.

Ubersuggest Keywords by Traffic for neilpatel.com

One caveat to note is that tools may not yet have reliable volume data for trending or emerging topics. 

Jonathan Hoffer, SEO Manager at NP Digital, notes that “in the case of new trends, they might not appear in a tool, so you’ll have to check social media or forums to see if something is trending.”

Long-Tail Keywords

Long-tail keywords are search phrases of three or more words. They carry lower search volumes than head terms, but they’re more specific. That means they face less competition and tend to attract users with clearer intent, which often translates to higher conversion rates.

“Hiking boots skechers” illustrates the point well. Its difficulty score is lower than our seed keyword’s, meaning it’s easier to rank for. 

As you can see below, Ubersuggest rates “hiking boots” 39 in SEO difficulty vs. 27 for “hiking boots skechers.”

Ubersuggest SEO difficulty hiking boots
Ubersuggest SEO difficulty hiking boots skechers

That keyword is still valuable, though, because someone typing “hiking boots skechers” probably knows exactly what they want to buy. That means the odds are good that they’re close to a purchasing decision. 

A page that directly addresses that particular brand is far more likely to rank and convert than a generic “hiking boots” page ever would for that searcher.

The value of long-tail keywords goes beyond traditional SEO.

For starters, voice search queries are naturally long-tail. They’re phrased the way people speak in real life rather than in typed shorthand.

Someone typing might enter “hiking boots waterproof.” The same person using voice search asks, “What are the best waterproof hiking boots for wide feet?”

LLM prompts follow the same conversational pattern. A user asking an AI assistant a question phrases it the way they’d phrase it to a knowledgeable colleague. 

Targeting long-tail keywords in these cases gives you the best shot at matching how your audience searches.

Local Keywords

Local keyword research follows the same core process as broader keyword research. There’s one important distinction, though: Potential competitors and search intent are filtered through geography. 

Someone searching “pizza delivery” in Santa Monica isn’t looking for the same results as someone searching the same term in Chicago. Both are looking to get pizza delivered, yes, but the keyword effectively becomes a different target once location comes into play.

Don’t limit yourself to a single location modifier. 

A pizzeria in Santa Monica can target “pizza delivery Santa Monica” and neighborhood-level variants like “pizza near the pier.” Service-specific combinations like “late night pizza delivery Santa Monica” work, too.

Each geographic variation is a keyword opportunity in its own right.

Local keywords tend to have lower difficulty than non-local ones, but that doesn’t make them uniformly easy. 

Local rankings don’t run on content alone. Your Google Business Profile and the consistency of your name, address, and phone number (NAP) across the web factor in, too.

3. Keyword Analysis

Keyword target criteria checklist by NP Digital

By the end of discovery, you’ll have a long list of potential keywords. Keyword analysis is how you cut it down to a working set.

The primary metrics to evaluate are search volume, keyword difficulty, and search intent alignment.

A tool like Ubersuggest lets you organize all your candidates in a Keywords List and sort by these variables simultaneously, which is faster than evaluating them one at a time.

Ubersuggest Keyword Lists for activewear research

The right search volume floor depends on your goals. Don’t automatically filter out low-volume keywords. A term with 50 monthly searches and clear commercial intent can be worth more than a 5,000-volume informational keyword with no realistic conversion path.

For keyword difficulty, calibrate your threshold to your domain authority. 

Sites with limited backlink equity are usually better off focusing on terms with difficulty scores under 40. Higher-authority domains have more room to compete for scores of 50 and above. What counts as realistic is site-specific.

After sorting by the numbers, run a Google search on each shortlisted keyword and analyze the search engine results page (SERP) directly. Your goal is to answer two questions:

  • Does the content format match what you can produce? If every top-ranking result is a detailed comparison guide and you’re planning a product page, that’s an intent mismatch.
  • Does your domain belong in this conversation? Look at who’s ranking. If the top results are all major publications with significantly more backlink equity than your site has, be realistic about your timeline and consider adjusting your target keyword.

You should also consider whether your target keyword generates an AIO. An AIO’s presence doesn’t make a keyword a bad target, but it does change how you measure success. For those terms, landing an AIO citation matters as much as ranking position.

Nikki Brandemarte, Sr. SEO Strategist and Local SEO Team Lead at NP Digital, offers this guidance: “Pay attention to content coverage for specific topic areas. For example, are your SERP competitors publishing multiple blogs that explain the basics of a topic, or a single comprehensive guide? This can help pinpoint gaps in topical authority.”

By the end of analysis, every keyword on your shortlist should clear these bars:

  • Measurable search volume
  • Relevant to your brand or industry
  • A difficulty score your domain can realistically compete for
  • Clear search intent alignment
  • A content format your site can actually produce

4. Keyword Targeting

Once you have a refined keyword list, you need to decide which keywords to pursue first and which URLs to target them with. 

For prioritization, start with keywords that combine low difficulty with reasonable volume. These are your highest-probability wins. They won’t always be the most valuable keywords on your list, but early traction validates the strategy and gives you ranking data to learn from.

From there, move to high-intent commercial keywords. These carry more difficulty but have the most direct line to revenue. A few hundred visitors from a well-targeted commercial keyword can generate more return than thousands of visits from an informational term.

Finally, layer in top-of-funnel, high-volume informational terms. These are the awareness plays. They’re hard to rank for and have longer time horizons, but they’re important for building topical authority over time.
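
If you want to make that prioritization repeatable, you can encode it as a simple tiering pass. The sketch below is illustrative only; the thresholds and the hand-labeled intent column are assumptions you’d tune to your own list:

```python
import pandas as pd

# An illustrative prioritization pass, assuming a shortlist CSV with
# "keyword", "volume", "difficulty", and a hand-labeled "intent" column.
shortlist = pd.read_csv("keyword_shortlist.csv")

def priority_tier(row):
    if row["difficulty"] <= 40 and row["volume"] >= 100:
        return 1  # quick wins: low difficulty, reasonable volume
    if row["intent"] in ("commercial", "transactional"):
        return 2  # high-intent terms with the most direct line to revenue
    return 3      # top-of-funnel informational awareness plays

shortlist["tier"] = shortlist.apply(priority_tier, axis=1)
print(shortlist.sort_values(["tier", "difficulty"]).head(20))
```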

When assigning keywords to pages, be deliberate about avoiding keyword cannibalization.

Cannibalization happens when two or more pages on your site target the same or nearly identical keywords. This splits ranking signals, creating competition between your own content. 

It’s one of the more common structural problems in mature content programs. Audit for it before you start mapping new keywords to existing pages. If you find two pages competing for the same term, consolidate, redirect, or clearly differentiate the content before adding more.
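
A Search Console performance export makes this audit straightforward to script. Here’s a rough sketch, assuming an export with query and page columns; any query where multiple URLs earn impressions is worth a manual review:

```python
import pandas as pd

# A rough cannibalization check over a Search Console performance export.
# The file name and "query"/"page" column names are assumptions.
gsc = pd.read_csv("gsc_performance_export.csv")

# Flag queries where more than one URL earns impressions, then review
# whether those pages genuinely compete for the same term.
pages_per_query = gsc.groupby("query")["page"].nunique()
suspects = pages_per_query[pages_per_query > 1].index

for query in suspects:
    print(query)
    print(gsc.loc[gsc["query"] == query, "page"].unique())
```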

5. Keyword Optimization

With your keyword targets set, optimization is how you signal relevance to search engines without sacrificing content quality. Here’s a rundown of current best practices, with a quick audit sketch after the list.

  • Title tag and H1: Your primary keyword belongs in both. This remains one of the most consistent on-page ranking signals. According to Rankability, 93.5 percent of page-one results use their target keyword in the title or H1.
  • URL slug: Use a clean, keyword-inclusive URL. Research shows that URLs that include the target keyword see up to 45 percent higher click-through rates than those without.
  • Meta description: Your meta descriptions don’t directly influence rankings, but they do influence clicks. The goal is to include the keyword naturally and give searchers a clear reason to click.
  • Body copy: Use your keyword and related semantic terms throughout, but write for the reader first. Resist the urge to stuff keywords: density has declined as a ranking factor, and pages in the top 10 today have significantly lower keyword density than those that ranked well even a few years ago.
  • Image alt text: Include your keyword in at least one image’s alt attribute on the page. Alt text serves accessibility and SEO purposes.
  • Structured data: Schema markup helps search engines and AI systems understand the content type and context of your page. For competitive keywords, structured data improves your eligibility for featured snippets and AIO citations.
  • Content completeness: For any keyword you’re seriously targeting, your content needs to address the topic more thoroughly than what’s currently ranking. That doesn’t mean longer for its own sake. Your piece can be shorter and still outrank what’s currently there if yours is more helpful.
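
Here’s the audit sketch mentioned above: a quick spot check of the on-page items using requests and BeautifulSoup. The example URL and keyword are placeholders:

```python
import requests
from bs4 import BeautifulSoup

# A quick on-page spot check (a sketch, not a full audit): does the keyword
# appear in the title, H1, URL slug, meta description, and any image alt text?
def check_page(url: str, keyword: str) -> dict:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    kw = keyword.lower()
    meta = soup.find("meta", attrs={"name": "description"})
    return {
        "title": kw in (soup.title.string or "").lower() if soup.title else False,
        "h1": any(kw in h.get_text().lower() for h in soup.find_all("h1")),
        "slug": kw.replace(" ", "-") in url.lower(),
        "meta_description": kw in meta["content"].lower() if meta and meta.get("content") else False,
        "alt_text": any(kw in (img.get("alt") or "").lower() for img in soup.find_all("img")),
    }

# Placeholder URL and keyword for illustration.
print(check_page("https://example.com/hiking-boots-guide", "hiking boots"))
```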

For highly competitive keywords, link building to the specific page will almost certainly be part of the equation. Rankings alone won’t hold in a tough vertical without external authority pointing at the page.

6. Keyword Tracking

Systematically tracking your keyword performance is what separates good SEO results from great ones.

Rankings shift, and competitor moves or algorithm updates can quickly change the playing field. A tracking system catches those changes before they become problems.

Typically, keyword research tools include a rank-tracking feature that monitors your keyword positions daily and displays ranking distribution or visibility trends across your tracked keyword set. 

Here’s what Ubersuggest’s Rank Tracking feature looks like:

Ubersuggest Rank Tracking dashboard keyword SEO

You can track performance separately by desktop and mobile, which is a big plus given how differently Google’s SERPs behave across devices.

The core metrics to monitor are:

  • Ranking position
  • Organic impressions via Google Search Console
  • CTR

CTR is especially worth watching for any keywords where AIOs are present. 

A stable ranking alongside a declining CTR is a signal that an AIO has entered the picture, but don’t panic. This is less a traffic problem and more a content optimization opportunity. You may be able to refresh that page with long-tail keywords that better align with AI search.
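
You can automate that watch with two Search Console exports covering comparable periods. The sketch below flags the stable-position, falling-CTR pattern; the file names, column names, and thresholds are all assumptions to adjust:

```python
import pandas as pd

# A sketch for spotting possible AIO impact, assuming two Search Console
# exports for comparable periods with "query", "position", and "ctr" columns.
before = pd.read_csv("gsc_last_quarter.csv")
after = pd.read_csv("gsc_this_quarter.csv")

merged = before.merge(after, on="query", suffixes=("_before", "_after"))

# Stable ranking (within ~2 positions) but CTR down by a third or more:
# a pattern consistent with an AIO entering the SERP.
flagged = merged[
    (abs(merged["position_after"] - merged["position_before"]) <= 2)
    & (merged["ctr_after"] <= merged["ctr_before"] * 0.67)
]
print(flagged[["query", "position_after", "ctr_before", "ctr_after"]])
```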

For broader keyword programs, tracking AI citation frequency is increasingly worth adding to your reporting stack. Brands cited in AIOs earn 35 percent more organic clicks and 91 percent more paid clicks than brands that aren’t cited on the same queries, according to Seer Interactive. 

Citation is now a meaningful key performance indicator (KPI) alongside position.

The Prompt Research Process: Is It Any Different?

The short answer is yes. Prompt research differs somewhat from traditional keyword research, but the fundamentals overlap.

Prompt and keyword research share the same goal: to understand what your audience is looking for and create content that satisfies that need.

The difference is the interface.

LLM users don’t type compressed keyword strings. They ask full questions and often include specific constraints. 

The prompt below breaks down how each component works together. Notice how far it goes beyond a simple keyword search:

Structured AI prompt example with labeled components

Source: https://www.thevccorner.com/p/guide-writing-powerful-ai-prompts

These added layers change what a good target keyword looks like.

Here’s a practical approach to building prompt research into your workflow (a code sketch of the first step follows this list):

  • Start with your existing keyword list. Take your top commercial and informational keywords and expand them into full-sentence questions. “Email marketing tools” becomes “What’s the best email marketing tool for a small business that already uses Shopify?” 
  • Mine community forums and Q&A platforms. Reddit threads and Quora discussions show you the actual language your audience uses when asking for help. These tend to be longer and more detailed than keyword tool data, and that specificity is precisely what LLM prompts look like.
  • Use your keywords in LLMs directly. Type your target topics into ChatGPT or Perplexity, and note both the answers they generate and the follow-up questions they suggest. Those follow-up questions represent the sub-queries the model identified as relevant, which are also the content gaps your pages can fill.
  • Monitor brand mention prompts. Tools like Profound track which prompts lead AI engines to mention your brand or your competitors, and how those mentions change over time. This is the closest thing to rank tracking for LLM visibility.
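
Here’s the sketch of that first step, using the OpenAI Python client as one example. The model name, persona, and prompt wording are all assumptions; any chat-capable LLM API follows the same pattern:

```python
from openai import OpenAI

# A sketch for expanding keywords into the full-sentence questions LLM users
# actually ask. Model name and prompt wording are assumptions to adjust.
client = OpenAI()  # reads OPENAI_API_KEY from the environment

def expand_to_prompts(keyword: str, persona: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                f"Act as {persona}. Write five full-sentence questions, "
                f"with realistic constraints, that you would ask an AI "
                f"assistant about: {keyword}"
            ),
        }],
    )
    return response.choices[0].message.content

print(expand_to_prompts("email marketing tools", "a small business owner who uses Shopify"))
```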

The content strategy implication is to prioritize completeness. 

Content scoring highly on semantic completeness appears in AI-generated answers at a rate 340 percent higher than content that scores lower, according to recent AIO research data. 

LLMs reward content that fully addresses a topic, which is the same thing Google has been rewarding since the Helpful Content updates. The convergence is not coincidental.

Bonus: More Ways to Find Keywords

As your skills grow or you take on more competitive keywords, the tools below are worth adding to your stack to spot opportunities you might otherwise miss. You’ve already seen a little of what Ubersuggest can do, so let’s start there.

Ubersuggest

One sometimes-overlooked part of Ubersuggest is the Keyword Ideas feature’s ability to filter keyword results by suggestions, related terms, questions, prepositions, and comparisons. 

Each filter uncovers a different angle on how people search for your topic (as shown in our hiking boots example).

Ubersuggest keyword filter tabs for hiking boots

The Questions modifier is particularly useful for content planning.

Ubersuggest keyword questions filter hiking boots

The Questions filter alone gives you 120 variations for “hiking boots.” They range from informational queries like “how long do hiking boots last” to commercial ones like “where to buy hiking boots near me.” 

Each is a potential content angle with its own intent and difficulty profile.

In short, the Questions filter shows you exactly what people are asking about a keyword, giving you ready-made content angles and FAQ targets.

Ahrefs and Semrush

Ahrefs’ Keywords Explorer provides full SERP analysis in one dashboard. 

One feature worth highlighting is the AI visibility filter in Ahrefs’ Site Explorer, which shows exactly which of your ranking keywords are currently triggering AIOs. That filter turns AIO exposure into a specific, actionable list of keywords you can monitor more closely.

Semrush has integrated AI-specific research tools into its platform, too. 

Its tracking functionality enables you to monitor your brand’s performance across ChatGPT, Perplexity, and Google’s search generative experience (SGE) simultaneously. Plus, its AI sentiment feature tells you whether AI-generated responses mention your brand positively or negatively.

For teams building out an AEO strategy alongside traditional SEO, that cross-platform visibility is difficult to replicate manually.

Many experienced SEOs use multiple tools in parallel, cross-referencing data from Ubersuggest, Ahrefs, and Semrush to build a more complete picture. Because volume figures are estimates and can vary by platform, using multiple tools reduces the risk of making targeting decisions based solely on a single platform’s data.

AnswerThePublic

AnswerThePublic generates question-based keyword ideas from a seed keyword. Enter a topic, and the tool maps the questions people are asking about it, organized by preposition and question type.

The output is useful for building FAQ sections and identifying informational content angles that pure volume-based tools can’t see. 

For example, if you search for “social media marketing,” AnswerThePublic returns questions like “what are the best social media marketing strategies?” and “how to measure ROI in social media marketing?”

AnswerThePublic keyword map social media marketing

Both are strong long-tail targets with real search demand.

LLMs and AI Tools

AI tools have become genuinely useful for scaling keyword research, particularly in the brainstorming and clustering phases.

Take Claude or ChatGPT: you can rapidly expand a seed keyword into related angles and intent clusters. Use the persona component of your prompt to make the model think like your target audience.

For example, you might ask an LLM to generate the questions a small business owner would ask before buying a product. Or you might dig into the objections they’d have at each stage of the purchase process. 

LLM output isn’t a replacement for tool-based volume data, but it’s a fast way to surface angles you wouldn’t have thought to search for.

Here’s a sample query I ran in Claude: “What questions would someone ask before buying email marketing software?”

Claude AI keyword brainstorm for email marketing

Source: Claude.ai

This is just a small snippet of the output. The LLM generated questions across a variety of categories, covering the entire buying journey someone might go through when purchasing email marketing software.

Doing the same can surface long-tail keyword opportunities for every segment of your target audience.

Semrush’s AI-powered keyword clustering tools take this further by grouping related keywords by semantic meaning and search intent. Running your keyword list through clustering before mapping keywords to pages can reveal topical gaps and consolidation opportunities that spreadsheet-based sorting misses.
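
If you want to experiment with clustering before paying for a tool, here’s a minimal sketch. Commercial platforms cluster by semantic embeddings; TF-IDF plus k-means is a rough stand-in used here purely to illustrate the workflow:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# A minimal clustering sketch. Real tools use semantic embeddings and intent
# signals; TF-IDF + k-means is a rough stand-in to show the workflow.
keywords = [
    "best hiking boots", "hiking boots for wide feet", "how to clean hiking boots",
    "trail running shoes vs hiking boots", "waterproof hiking boots",
    "break in hiking boots", "hiking boot repair near me",
]

vectors = TfidfVectorizer().fit_transform(keywords)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for cluster in range(3):
    print(f"Cluster {cluster}:", [k for k, l in zip(keywords, labels) if l == cluster])
```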

Of course, you need to keep these tools’ limitations in mind. They’re strong at synthesis and pattern recognition but weaker at providing reliable volume and difficulty data. Use them alongside your keyword tools, not instead of them.

Search Suggestions

Search engines themselves are a free, always-up-to-date resource for keyword research. Google autocomplete, the People Also Ask box, and the related searches section at the bottom of the SERP all surface real query patterns from real users.

Google autocomplete is particularly useful for long-tail discovery. Enter your seed keyword and add a letter:

Google autocomplete suggestions for hiking boots

Source: Google.com

Google will suggest several popular phrases, each one a data point about how people search with that keyword as a root.
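
You can harvest these suggestions at scale through Google’s unofficial autocomplete endpoint. The sketch below is a rough example; the endpoint is unsupported and its response format can change without notice:

```python
import json
import string

import requests

# A sketch using Google's unofficial autocomplete endpoint (unsupported and
# subject to change) to collect suggestions for a seed keyword plus each letter.
def autocomplete(query: str) -> list[str]:
    resp = requests.get(
        "https://suggestqueries.google.com/complete/search",
        params={"client": "firefox", "q": query},
        timeout=10,
    )
    # The "firefox" client returns JSON: ["query", ["suggestion", ...]]
    return json.loads(resp.text)[1]

seed = "hiking boots"
suggestions = set()
for letter in string.ascii_lowercase:
    suggestions.update(autocomplete(f"{seed} {letter}"))

print(sorted(suggestions))
```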

People Also Ask and the related People Also Search For panel surface questions and queries Google considers topically connected to your search, often revealing adjacent content opportunities worth targeting independently.

Google People Also Search For hiking boots results

Source: Google.com

FAQs

What is keyword research?

Keyword research is the practice of finding and analyzing search queries to identify which ones are worth targeting with your content. It involves evaluating search volume, keyword difficulty, and the intent behind each query to build a targeted list of terms that align with your site’s goals and domain authority.

How do I do keyword research?

Start by defining your goals, then build a list of seed keywords based on your audience’s pain points and your core topic areas. Use a tool like Ubersuggest to expand that list and analyze candidates by search volume, difficulty, and intent. Audit the SERP directly for your top candidates before finalizing your targets. Then map keywords to specific pages, create or optimize content, and track performance over time.

Can I do keyword research for free?

Yes. Ubersuggest and AnswerThePublic both offer free keyword data, and Google Search Console is also free. If you’re not ready to pay for a tool yet, you can use Google’s built-in search features like autocomplete, People Also Ask, and People Also Search For. Free tools have volume and feature limits, but they’re more than sufficient for early-stage research or smaller sites. Paid plans unlock more comprehensive data as your needs grow.

What do I do after keyword research?

After completing keyword research, map your keywords to specific URLs, either existing pages you’ll optimize or new content you’ll create. Prioritize by intent and difficulty, then write or update content to match the search intent behind each keyword. Publish, build links where needed, and track performance in a rank tracker. Keyword research isn’t a one-time task. Revisit it regularly as your domain authority grows and as search behavior evolves.

Conclusion

Keyword research has always been the foundation of SEO. 

What’s changed is the complexity of the environment you’re researching. AIOs have changed how clicks are distributed. LLMs have introduced a layer of search behavior that operates under different rules entirely. And topical authority now matters as much as optimizing individual keywords.

The teams navigating this well aren’t researching keywords in isolation anymore. 

They’re combining traditional keyword analysis with prompt research and monitoring AI citation alongside ranking position. They then use that research to build content strategies around topic clusters rather than individual terms.

The process I’ve outlined here covers all that. If you want to go deeper on implementation, my complete SEO checklist walks through how keyword research connects to the rest of your optimization program. 

If you’d rather have an expert team handle the execution, NP Digital’s SEO consulting services are built for exactly this kind of work and dive into keyword research for your site using the process above.

Read more at Read More