A major shift is underway in digital advertising: Meta Platforms is projected to generate more ad revenue than Google in 2026, signaling how marketers are increasingly favoring automated, performance-driven platforms.
Driving the news. According to Emarketer, Meta is expected to bring in $243.46 billion in global ad revenue this year, narrowly topping Google’s projected $239.54 billion.
Meta is forecast to capture 26.8% of global ad spend.
Google is projected to take 26.4%.
It would be the first time Google has lost the top spot in digital ad revenue.
Why we care. Meta’s growth suggests brands are getting more value from automated, performance-focused tools, which could influence how they split budgets between Meta and Google. It’s also a reminder that platform dynamics are changing fast, so media strategies need to stay flexible.
Catch up quick: Google has long dominated digital advertising through Search ads, Display ads across the web, and YouTube.
But its core ad business is growing more slowly than in previous years.
Meanwhile, Meta has benefited from AI-powered ad automation, stronger performance measurement tools, and continued scale across Facebook, Instagram, and WhatsApp.
Why Meta is winning now. Advertisers are increasingly prioritizing platforms that can deliver both reach and measurable return.
Meta’s advantage has been its ability to automate creative and targeting faster, optimize campaigns with less manual input, and make it easier for brands to prove ROI.
That’s especially appealing in a tighter economic environment where marketers are under pressure to do more with less.
Yes, but. Google is still enormous — and still growing.
Its search business remains one of the most profitable ad engines in the world, and YouTube continues to attract brand budgets. But the company faces mounting pressure from AI search disruption, antitrust scrutiny, and slowing growth in traditional search advertising.
A growing number of advertisers say their Google Ads campaigns were suddenly hit with mass disapprovals tied to DNS and 500 server errors — even when their sites appeared to be working normally. The issue is raising fresh concerns about platform reliability and the risk of sudden performance disruptions.
Driving the news. PPC advertisers began flagging widespread problems this week across Google Ads accounts, with multiple agency leaders saying clients were affected at the same time.
Ryan Berry, managing director at Cornerhouse Media, said more than 1,500 ads were disapproved in a single account around 1:30 p.m. UTC.
Others said they received overnight emails warning that ads had been disapproved.
Why we care. Sudden mass disapprovals can instantly pause traffic, leads, and revenue — even if nothing is actually wrong with the advertiser's website. If Google's systems are incorrectly flagging DNS or server errors, brands could lose performance and spend valuable time troubleshooting an issue they didn't cause. It also highlights the need for closer monitoring and faster escalation when platform glitches happen.
What advertisers are seeing:
DNS errors, even when internal IT teams found no website issue.
Google Ads trainer Charlotte Osborne said she saw two separate cases this week — one tied to a DNS error and another to a 500 error — with no issues found on the client side.
Google Advertising specialist Joshua Barr said he received “lots of emails overnight” about disapproved ads and has been dealing with similar problems for weeks.
Several Paid Search experts also said they were seeing the same issue across accounts.
What’s likely happening. Google’s ad review systems use automated crawlers to test landing pages. If Googlebot encounters temporary server issues, DNS lookup failures, redirects, or timeout errors, ads can be automatically disapproved under the platform’s “destination not working” policy.
That means advertisers can be penalized even if:
their site is live for users,
the issue is temporary,
or the problem is on Google’s crawler side.
What to do now:
Check Google Ads policy manager for exact disapproval reasons.
Test landing pages using multiple locations and devices.
Review DNS uptime, redirects, and CDN/firewall settings.
Submit appeals for clearly incorrect disapprovals.
Document account-level impacts in case the issue proves platform-wide.
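Several of those checks can be scripted. The sketch below uses only the Python standard library; the "crawler-side" heuristic is an assumption for illustration, not Google's documented logic. It resolves DNS and fetches a landing page, flagging cases where the page looks healthy from your side even though the ad was disapproved:

```python
import socket
import urllib.request
import urllib.error
from urllib.parse import urlparse

def check_landing_page(url, timeout=10):
    """Check one landing page roughly the way an ad crawler might:
    resolve DNS first, then fetch the URL and record the HTTP status."""
    result = {"url": url, "dns_ok": False, "status": None}
    host = urlparse(url).hostname
    try:
        socket.gethostbyname(host)
        result["dns_ok"] = True
    except (socket.gaierror, TypeError):
        return result  # DNS failure: nothing further to test
    try:
        req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            result["status"] = resp.status
    except urllib.error.HTTPError as exc:
        result["status"] = exc.code  # e.g. a real 500 from the server
    except (urllib.error.URLError, TimeoutError):
        pass  # connection-level failure; status stays None
    return result

def looks_crawler_side(result):
    """Heuristic: DNS resolves and the page returns 2xx for us, so a
    'destination not working' disapproval may originate with the crawler."""
    status = result["status"]
    return result["dns_ok"] and status is not None and 200 <= status < 300
```

Running `check_landing_page` from several locations (office, home, a cloud VM) and keeping the results gives you the documentation you need if the issue proves platform-wide.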
The bottom line. For advertisers, this is a reminder that campaign performance can be derailed by platform glitches as much as by strategy — and when Google’s systems misfire, spend and leads can disappear fast.
Google’s legal troubles over its search and ad tech businesses are entering a new phase — one that could expose the company to billions in payouts from advertisers seeking damages after U.S. courts found it illegally monopolized key digital ad markets.
Driving the news. A growing group of advertisers is preparing to file mass arbitration claims against Google, according to attorney Ashley Keller, who said the first filings are expected this week.
Keller says he has already signed up a “significant number” of advertisers.
He estimates potential claims tied to online search and display advertising could exceed $218 billion, based on economic analysis his firm commissioned.
Similar mass arbitration cases typically take 12 to 24 months to resolve.
Catch up quick. Courts in 2024 dealt Google major antitrust blows.
Why we care. This case could open a path to recover money advertisers believe they overpaid for search and display ads due to Google’s alleged monopoly power. Mass arbitration may give businesses more leverage than individual claims and could pressure Google into settlements.
It also signals growing legal scrutiny of the digital ad market, which could eventually lead to more competition and lower costs.
Why arbitration matters. Most advertisers can’t simply sue Google in court because their contracts require disputes to go through arbitration.
That usually favors large companies when claims are handled one by one. But mass arbitration — which bundles 25 or more similar claims — can shift leverage back toward claimants.
It increases pressure to settle.
It can lower legal costs for smaller businesses.
It allows companies with relatively modest individual claims to pursue damages collectively.
What’s new. This case could break new ground because most mass arbitrations to date have involved consumers or workers — not corporate plaintiffs.
A large-scale advertiser action against Google would be among the first major efforts to use the strategy for business-to-business claims.
What Google says. In a recent filing, Google said it faces private damages claims tied to global antitrust cases but cannot yet estimate potential losses.
The company said it believes it has “strong arguments” and plans to defend itself aggressively.
Topical authority is a key concept in SEO, but it doesn’t account for how search and AI systems choose between competing sources.
The missing layer isn’t in content or structure. It’s in the signals that determine selection once a topic is understood — the difference between being eligible and being chosen.
Topical authority explains content, not selection
Topical authority is foundational for SEO and now AEO and AAO. But the framework the industry calls topical authority is incomplete. It covers semantics, content, and structure, but that’s just one part of a three-row, nine-cell model that defines topical ownership.
Topical authority describes what you’ve built. Topical ownership describes whether the system picks you.
Search and AI systems don’t reward content for existing. They reward content for winning a selection process. At Recruitment (Gate 6 in the AI engine pipeline), the system selects candidate answers from everything it has indexed.
Topical ownership has three layers: coverage, architecture, and position.
Everything in this article builds on Koray Tuğberk GÜBÜR’s foundation. He has engineered a rigorous methodology for building content architecture that signals genuine expertise to search engines, and his case studies prove it produces measurable results.
He coined “topical map” as a standard SEO deliverable, engineered the semantic content network methodology, and brought mathematical rigor to what had been vague advice about writing comprehensively.
His own formula (topical authority equals topical coverage plus historical data) already acknowledges the temporal dimension I’ll expand below. He’s the authority on this subject. The expanded framework names the cells he already recognized and adds the one row he hasn’t yet formalized.
Topical authority, fully defined, is a three-by-three matrix.
As with everything in this series, the “straight C” principle applies. To compete in any algorithmic selection process, you can’t afford a failing grade in any of the criteria that are being evaluated.
Excellence in some dimensions doesn’t compensate for absence in others. The system requires a passing grade for each criterion. The three rows aren’t equally weighted above that floor, and position is the dominant row, as we’ll see.
Row 1: Coverage is the entry ticket, not the destination
Coverage in one sentence: Go deep enough that nothing’s left to add, cover every adjacent angle, and bring a perspective nobody else has.
Coverage describes the content itself.
Depth is vertical exhaustiveness and is often underestimated.
Breadth is the horizontal range across subtopics and adjacent areas. GÜBÜR’s topical map concept is the engineering discipline that makes breadth systematic rather than accidental.
Original thought is the dimension that is almost always overlooked. Pushing the boundaries of a topic is what makes your coverage non-interchangeable.
An entity that covers a topic with perfect depth and breadth but says nothing new is an encyclopedia: comprehensive, correct, and structurally identical to any other comprehensive source. That advantage erodes over time: sooner or later the material becomes prior knowledge in the AI’s training data, and you’re no longer needed or cited.
Original thought is the key to retaining the attention of the AI — a new framework, a novel angle, or a perspective no one else has articulated gives it a reason to come back again and again, and ultimately to cite you.
Importantly, original thought doesn’t require being revolutionary, nor do you need to be original on every page. Often it will be as simple as a fresh way of framing a familiar concept.
Define your brand’s specific perspective on specific vocabulary. When done properly, that’s enough.
There are two kinds of original thought, and they carry different risk profiles.
Reframing connects two existing validated truths that nobody has explicitly joined before. Both components are already corroborated; the system can verify them independently, and the originality lives in the framing.
True invention is different. There’s nothing for the system to cross-reference and nothing that’s already established to anchor the new claim. The result is that you look fringe until the world catches up.
The window between being right and being recognized can be long and uncomfortable. To take that risk credibly, you need absolute conviction not only that you’re right, but that you’ll be proven right, plus the patience to survive looking wrong in the meantime.
The reframe carries a fraction of that risk: the source truths are already verifiable, so the connection is credible from the moment it’s published.
Row 2: All architecture decisions begin with source context
Architecture in one sentence: Write sentences clearly, make your content flow in a logical manner, and link intelligently.
The three cells in the architecture row are GÜBÜR’s terms, and I’m using them as he defined them.
Source context determines everything that follows:
The publisher’s angle.
The identity and purpose that shapes what the topical map should contain.
How the semantic network should be constructed.
GÜBÜR’s insight that a casino affiliate and a casino technology provider need fundamentally different topical maps for the same subject captures the principle: structure follows identity.
Topical map is the structural design of the content: core sections and outer sections, which attributes become standalone pages and which merge together, the direction of internal linking, and the identification and elimination of information gaps.
Semantic network is the interconnected execution that makes the structure machine-readable: contextual flow between sentences and paragraphs, semantic distance minimized between related concepts, and cost of retrieval optimized so that the system can extract facts without unnecessary computational effort.
Good architecture makes coverage legible to the system. You can have thorough coverage that the algorithm can’t parse, and the result is the same as not having the content at all. Architecture is the bridge between what exists and what the system understands.
Where architecture falls short as a complete model is that it’s entirely within what you control. It describes how to organize your own house. It doesn’t address who the neighborhood knows you as.
Row 3: Position is why two equally thorough sources produce different results
Position in one sentence: Be first to stake the claim, be recognized by others as the best at what you do, and do things that ensure you are the person everyone refers to when they talk about your topic.
Position is the competitive layer. It’s the only row that describes the entity rather than the content. That distinction makes it the dominant row, for the same structural reason links were the dominant signal in traditional SEO: external validation at the entity level breaks ties that content quality alone can’t.
Because you’re building entity reputation, the position row requires the greatest investment of resources and must be maintained over time. Because most brands are looking for quick, easy wins and are unwilling to commit to long-term investment in their position, this is where your competitive advantage lies and where you’ll see a real difference.
Two entities can have identical coverage and architecture, and yet one will be treated as the authority and the other won’t. The current definition of topical authority can’t explain why. Position is the huge missing piece.
Temporal position is about when you said it. The source that established a claim, coined a term, or described a mechanism before anyone else has a structurally different relationship to that topic than a source that repeated it later.
GÜBÜR’s formula already acknowledges this: “Historical data” in his equation is the accumulated proof of chronological priority. First-mover advantage in knowledge graphs is an architectural phenomenon we see over and over in our data.
Hierarchical position is about dominance: being recognized by others as the top voice on the topic. Primary sources, practitioners who work in the field, researchers who run studies, and experts who generate knowledge. This isn’t self-declared. Others assign it. When Matt Diggity describes GÜBÜR as “one of the most knowledgeable people” in semantic SEO, that’s a hierarchical position being conferred by a peer.
Narrative position is about centrality: being the person everyone refers to when they talk about the topic. The journalist credits you, the researcher cites you, and the conference features you as the reference voice.
All roads lead to Rome, and you’re Rome. The system reads these co-citation patterns and builds a picture of where you sit in the source landscape.
Narrative position can’t be manufactured with first-party content. It’s earned by doing things in the world that others find worth referencing.
Topical authority, N-E-E-A-T-T, and topical ownership
N-E-E-A-T-T — Google’s experience, expertise, authoritativeness, and trustworthiness (E-E-A-T) framework, extended with notability and transparency — describes the credibility signals that drive algorithmic confidence and are rightly a huge focus of the industry.
N-E-E-A-T-T describes inputs, not structure. Those signals don’t exist in a vacuum. They attach to an entity that the system has already understood.
I made this argument in a Semrush webinar with Lily Ray, Nik Ranger, and Andrea Volpini in 2020, when we were still talking about E-A-T: entity understanding is a prerequisite to leveraging credibility signals, not an optional layer on top.
The nine-cell matrix shows where each signal lands.
The coverage row provides the source material for AI to evaluate your knowledge on your claimed topic.
The architecture row is where your content gets classified and positioned relative to a topic.
The position row is where strong N-E-E-A-T-T signals translate into a competitive advantage because N-E-E-A-T-T is an entity framework: it measures the publisher and author, not the content. Position is the entity row.
Note on the diagram: It could be argued that the four gaps in the diagram are partially covered by inference.
Expertise implies the knowledge to build a topical map and the depth that produces original thought.
Experience implies the first-hand involvement that creates temporal priority.
Transparency implies the clear structural identity that shapes a semantic network.
Those arguments aren’t wrong. N-E-E-A-T-T evaluates the person primarily — what they built is an indirect signal.
N-E-E-A-T-T maps onto two of the three position dimensions.
Hierarchical position is, in structural terms, what authoritativeness and expertise measure — your level of knowledge and peer recognition of your standing on a topic.
Narrative position is what notability captures. The co-citation patterns that tell the system you’re the reference voice.
Temporal position sits outside N-E-E-A-T-T. No credibility signal changes just because you said something first.
Original thought sits outside it, too. The framework that’s supposed to reward quality has no mechanism for recognizing originality — at least not in the short term. It can reward reframing immediately, because both source truths are already verifiable.
True invention only registers retroactively, once corroboration has accumulated to the point where assertion becomes position.
That structural gap points to a practical problem. Most practitioners build N-E-E-A-T-T credibility as a general brand exercise — demonstrate expertise, earn trust, and accumulate signals. However, credibility without topical position is a credential without context. The fix is to audit all nine dimensions and focus your N-E-E-A-T-T credibility work on improving the weakest of them.
My own situation is a good example of the difficulties of original thought:
Temporal position is well-documented. Brand SERP in 2012, entity home in 2015, answer engine optimization in 2017, the algorithmic trinity and untrained salesforce in 2024, and now assistive agent optimization in 2025. The chronological priority is established and verifiable.
Hierarchical position has partial coverage. I’m recognized within specific circles as the reference voice on brand SERPs and algorithmic brand optimization, but not yet broadly enough to call it dominance.
Narrative position is the biggest gap. Many people use the terms I coined, but few third-party sources cite me unprompted, and more articles on my own properties won’t change that. The fix I am implementing is doing things in the world that others find worth referencing: keynotes, independent collaborations, corroboration with partners, and articles like this one.
This is why crediting GÜBÜR for source context, topical map, and semantic network is intentional. Accurate attribution from a credible source builds the narrative position of the person being credited (GÜBÜR), and giving credit accurately signals to the system that my own claims are likely to be equally well-founded.
Crediting well is a position signal, and it’s one most practitioners consistently underuse. My take is that citing the original source is the same as linking out. For years, people resisted outbound links to protect the mysterious “link juice,” but it’s now accepted that linking out to supporting evidence is worth more than the PageRank cost. The same logic applies to citations: the value they bring you is greater than the loss.
This article is itself a demonstration.
GÜBÜR’s architecture framework is validated and extensively corroborated.
The AI engine pipeline argument runs across the previous eight articles in this series.
The nine-cell connection is new.
For the original thought in this article, I’m using the safer form of original thought: the reframe-cite-and-add technique. I invite you to do the same.
Recruitment (Gate 6) is where position determines the winner
Article 8 in this series covered annotation (Gate 5) — the gate where you’re alone with the machine, where the system classifies your content based on your signals alone, with no competitor in the frame. Annotation is the last absolute gate. From recruitment onward, you’re always being compared with your competition.
So, recruitment (Gate 6) is where the game changes. Every source that reaches recruitment has cleared the infrastructure gates and survived annotation (hopefully in a healthy, competition-ready state). Now the system is selecting between candidates, and it’s selecting based on relative standing, not absolute quality.
This is the moment the entire matrix resolves into a single question: when the algorithm culls candidates at the recruitment gate, is your entity’s position strong enough to be one of the survivors in that selection?
In my three-by-three topical ownership grid, coverage gets you into the candidate pool, architecture makes the system confident it understands your content, and position determines whether it picks you ahead of the competition.
Coverage and architecture are content rows. They describe what you published. Position is the entity row. It describes who published it.
At recruitment, the system evaluates the content, and selection is heavily influenced by its assessment of the entity in the context of the topic. You can rewrite the content, but you can’t quickly rewrite who you are.
Darwin described natural selection as the mechanism by which organisms best adapted to their environment survive. An entity that occupies a strong position is an entity best adapted to the system’s selection criteria: temporal priority, hierarchical standing, and narrative centrality.
The system isn’t being arbitrary when it selects one well-structured, comprehensive source over another equally well-structured, equally comprehensive one. It’s selecting the entity best adapted to the query’s requirements, and best adapted means best positioned, not best written.
The signals behind each row have never been equally weighted, and entity is the clearest illustration of that. In traditional SEO, inbound links were the dominant signal. They could sometimes overcome very weak criteria and were almost a guarantee of victory when all other signals were roughly equal.
That dominance gradually diminished as links became one signal among many, table stakes rather than differentiator. Entity has followed the inverse trajectory. It began as a minor signal with the introduction of the knowledge graph and knowledge panels, and has grown steadily in structural importance ever since.
N-E-E-A-T-T attaches to an entity. Topical ownership attaches to an entity. Agential behavior requires a resolvable entity to function. Co-citation and co-occurrence patterns are only meaningful when the system has an entity to attach them to.
The AI engine pipeline stalls at the annotation stage (Gate 5) without a resolved entity. That gate is entity classification, and everything downstream depends on it. Brand SERPs, knowledge panels, and AI résumés are entity constructs. Without a resolved entity, they don’t exist in a meaningful way.
The future will be more entity-dependent, not less, and the gap between brands that have invested in their entity and those that haven’t will compound. Entity is no longer simply a signal. It’s the substrate that other signals require to operate, and the most important single investment you can make in your long-term search and AI strategy.
To update a common saying: the best time to start was 10 years ago, the next best time is today, and the time it won’t be worth starting is tomorrow.
Topical ownership requires all nine cells, all three rows
Topical ownership is the state where an entity dominates all nine cells of the matrix for a given topic. Not just comprehensive, not just well-structured, but the entity others reference when they write about the subject — ideally the one that got there first, and the one peers defer to by name.
Coverage tells the system you’re eligible.
Architecture tells the system you’re legible.
Position tells the system you’re the right answer.
The industry has been actively optimizing for six of those nine cells.
Understandability work builds the entity. N-E-E-A-T-T builds credibility. But the position row — the one that determines who wins at recruitment — has been built largely without intent. Practitioners accumulate N-E-E-A-T-T signals as a general credibility exercise and assume that covers the entity layer.
Position requires deliberate engineering of temporal, hierarchical, and narrative standing on specific topics. Being intentional about all nine, knowing which row each piece of work serves and why, is where the competitive advantage lives now.
Simply becoming conscious of the grid and the three rows will make your topical ownership, SEO, and N-E-E-A-T-T work more purposeful across all nine cells, because you will implement each signal with specific intent rather than general ambition.
The brands AI consistently recommends aren’t just covering their topics well. They own them.
This is the ninth piece in my AI authority series.
Despite all the shiny new capabilities at our disposal, many professionals seem stuck in a cycle of “AI Groundhog Day.”
You open a chat window, carefully craft a prompt, paste in your context, and get a great result. An hour later, you do it all over again. If this is how you use AI to automate, you’re still doing manual work — you’re just doing it in a chat box.
To move from using AI to building with it, you need to shift from a human doer to a true human orchestrator. That means stopping one-off prompts and starting to build systems. In this new phase of AI automation, what you really need are AI skills.
I explore this shift in my new book, “The AI Amplified Marketer,” where I look at how the human element of marketing remains vital even as new AI tools and shifting expectations evolve at a breakneck pace.
Below, I’ll show how to use Skills, a newer AI capability, to make you more efficient when managing PPC.
What’s a Claude Skill?
While many marketers have used ChatGPT’s Custom Instructions to set a general approach for how their AI works, a Skill is a more rigorous definition of how the AI needs to do things. These instructions can help it deliver more predictable outcomes that fit your expectations.
For example, I recently used a standard chat to rate search terms. While the AI’s logic was sound, the output was inconsistent: one session returned letter grades, another gave a percentage out of 100, and a third used a 1-10 scale.
In a professional setting, this inconsistency is a problem. It makes it difficult to integrate that prompt into a larger workflow where unpredictable grading might confuse other tools or team members.
A Skill solves this by providing a reusable set of instructions. It defines which tools and logic to use for a complex task and ensures the results are formatted exactly the same way every time.
It’s what turns the AI from a temperamental assistant into a reliable professional teammate.
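The formatting consistency a Skill enforces can also be backstopped in code. As an illustrative sketch (the mapping rules below are my own assumptions, not part of any Skill specification), a small normalizer can coerce the mixed grade formats described above — letter grades, percentages, 1-10 scales — onto a single 0-100 scale before anything downstream consumes them:

```python
import re

def normalize_grade(raw):
    """Coerce heterogeneous AI grades (letter, percentage, or 1-10 scale)
    onto one 0-100 scale so downstream tools always see the same format."""
    # Assumed letter-to-score mapping; adjust to your own rubric.
    letter_map = {"A": 95, "B": 85, "C": 75, "D": 65, "F": 40}
    s = str(raw).strip().upper()
    if s in letter_map:
        return letter_map[s]
    m = re.fullmatch(r"(\d+(?:\.\d+)?)\s*%", s)
    if m:
        return round(float(m.group(1)))
    try:
        val = float(s)
    except ValueError:
        raise ValueError(f"unrecognized grade format: {raw!r}")
    if 0 <= val <= 10:   # assumption: bare values up to 10 are a 1-10 scale
        return round(val * 10)
    if 0 <= val <= 100:  # already a 0-100 score
        return round(val)
    raise ValueError(f"grade out of range: {raw!r}")
```

Note the inherent ambiguity (a bare `7` could mean 7/10 or 7%); a well-written Skill avoids the problem at the source by mandating one output scale.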
And thanks to more recent agentic capabilities in Claude, a Skill is like turning your best multi-step PPC playbook into something an AI can execute on demand by delegating the various tasks to the right tools and subagents.
Whether it’s your agency’s proprietary account audit checklist or your framework for mining search query reports, a Skill encodes that process. It turns your PPC expertise into a scalable system that anyone on your team can use with their AI.
How to build your first AI Skill
Creating a Skill is more straightforward than it might sound, and you can do it through a simple chat session with your AI. Provide an account audit checklist, a standard operating procedure (SOP) from your team, or a blueprint to Claude. You can then ask it to convert that process into the formal structure of a Skill.
Interestingly, when you ask Claude to help build a Skill, it uses a specialized Skill-building protocol. This ensures your final output is structured correctly, follows best practices, and remains consistent with Anthropic’s underlying architecture.
Technically, a Skill is saved as a Markdown (.md) file that contains the playbook for the task at hand.
This file can be stored locally on your computer if you’re concerned about data privacy. Alternatively, you can share it in a central cloud repository. This makes it easy for your team to update and deploy best practices across your entire organization.
You don’t have to start from zero. Many pre-built Skills are available on platforms like GitHub. You can find examples for various marketing tasks, download them, and adapt them to fit your specific needs and workflows.
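To make the file format concrete, here is a minimal sketch of what such a Markdown Skill file might look like. The skill name, fields, and steps below are hypothetical examples for a search-term grading task, and the exact frontmatter conventions may differ from Anthropic's current format:

```markdown
---
name: search-term-grader
description: Grade Google Ads search terms for relevance on a fixed 0-100 scale.
---

# Search term grading

## When to use
Use this Skill whenever the user asks to rate, grade, or score search terms.

## Procedure
1. For each search term, assess relevance to the account's products.
2. Assign a score from 0 to 100. Never use letter grades or 1-10 scales.
3. Return results as a table with columns: term, score, rationale.
```

The body is plain prose instructions; the value comes from pinning down the output contract so every run looks the same.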
How to use a Skill in PPC
To use a Skill, first make sure some are available in your account.
Then, just tell the AI the task you want to do.
The AI will look through connected Skills and, if it finds one that matches the task, it will use those instructions to perform the work.
Sidenote: This means it is important not to have competing Skills in your account. If two Skills both perform Google Ads audits, the AI may pick a different one each time and carry out the work in different ways, losing the predictability a Skill was supposed to give you in the first place.
A Skill provides powerful logic, but without access to live account data, it remains theoretical.
A Skill can define an analysis, such as “review search terms from the last 14 days with costs over $50 and zero conversions.” However, it doesn’t know how to pull that data from Google Ads on its own.
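That analysis is simple to express in code once the data is in hand. A minimal sketch, assuming each row carries `date`, `cost`, and `conversions` fields (the field names are illustrative, not a Google Ads schema):

```python
from datetime import date, timedelta

def wasted_spend_terms(rows, min_cost=50.0, lookback_days=14, today=None):
    """Return search-term rows with spend over the threshold and zero
    conversions inside the lookback window."""
    today = today or date.today()
    cutoff = today - timedelta(days=lookback_days)
    return [
        r for r in rows
        if r["date"] >= cutoff
        and r["cost"] >= min_cost
        and r["conversions"] == 0
    ]
```

The logic is trivial; the hard part, as the next paragraphs explain, is getting `rows` populated with live data rather than a stale export.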
In the past, the workaround was to manually download static data, like a CSV from the Google Ads interface or a Google Ads Editor file. You would then feed this file to the AI as context. This works, but it’s slow, manual, and the data is outdated the moment you download it.
A more modern approach uses a Model Context Protocol (MCP) to connect your AI and its Skills to other systems, such as live data sources. For example, using the Optmyzr MCP, your Skill can dynamically pull the exact Google Ads data it needs, when it needs it. This connection turns a static set of instructions into a living, responsive tool. (Disclosure: I’m the cofounder and CEO of Optmyzr.)
How Skills tell AI what to do, and how tools and MCP let it do those things reliably
Combining a Skill with a tool like an MCP is where the real transformation happens. Your AI moves from being an assistant that requires constant direction to a system that can manage a process. It transitions from giving you ideas to executing your vision.
Let’s look at a common PPC task:
Task: Search Term Analysis to Eliminate Irrelevant Clicks
A Skill without tools is a task-oriented assistant: It might instruct you: “Paste in your search term report as a CSV, and I will identify potential negative keywords.” You’re still the one doing the grunt work of retrieving data and implementing the findings.
A Skill with tools acts as a junior manager for that specific process: It can be configured to: “Pull the search term report for the last 7 days via the MCP, identify terms with high spend and no conversions, and apply them as exact match negatives to the appropriate campaign.” The entire workflow is handled, and your role shifts to one of oversight.
When you combine structured logic (Skills) with live data and execution capabilities (tools), you’re building more than a chatbot; you’re building a reliable teammate. It’s a grounded, practical system that handles defined tasks, freeing you up to be the orchestrator of your strategy.
To move from theory to practice, let’s look at four concrete examples of PPC Skills. In each case, notice how connecting these Skills to live tools transforms the AI from a passive analyst into an active participant.
1. Search term mining
This Skill’s logic guides the AI to analyze a search query report to find wasted spend and opportunities.
Without tools: You provide a CSV. The Skill returns a structured list of recommended negative keywords and new keyword ideas. You have to implement them manually.
With tools (MCP): The Skill automatically pulls the latest search query report data, identifies the negative keywords, and uses a tool function to apply them directly to your Google Ads account.
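Under the hood, that workflow is a loop over tool calls. The sketch below uses a hypothetical call_tool() stand-in for an MCP client; the tool names, payloads, and stubbed data are all illustrative assumptions, not a real MCP server's API:

```python
# Sketch of the Skill-with-tools workflow. call_tool() is a hypothetical
# stand-in for an MCP client call; in a real setup it would go over MCP
# to a connected server exposing Google Ads data.

def call_tool(name, **params):
    # Stubbed responses for illustration.
    if name == "get_search_terms":
        return [
            {"term": "cheap diy fix", "cost": 74.0, "conversions": 0, "campaign": "Repairs"},
            {"term": "furnace repair near me", "cost": 120.0, "conversions": 5, "campaign": "Repairs"},
        ]
    if name == "add_negative_keyword":
        return {"status": "ok", **params}

def mine_search_terms(days=7, min_cost=50.0):
    rows = call_tool("get_search_terms", lookback_days=days)
    applied = []
    for row in rows:
        if row["cost"] >= min_cost and row["conversions"] == 0:
            call_tool(
                "add_negative_keyword",
                campaign=row["campaign"],
                keyword=row["term"],
                match_type="EXACT",
            )
            applied.append(row["term"])
    return applied

print(mine_search_terms())  # ['cheap diy fix']
```

Your oversight role is in setting the thresholds (the lookback window, the cost floor) and reviewing what was applied.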
2. Ad copy generation
This Skill takes a landing page URL and target keywords to generate ad copy variations based on value propositions and user intent.
Without tools: The Skill produces headlines and descriptions in a text format. You copy and paste them into Google Ads.
With tools (MCP): The Skill finds underperforming ad assets in your account, and then generates the ad copy and pushes the new ads directly into the correct ad groups, potentially even setting up a new ad experiment.
3. Account auditing
This Skill runs a predefined checklist against an account, looking for issues like missing ad extensions, campaigns limited by budget, or ad groups with low CTR.
Without tools: The Skill generates a report that lists all the problems it found. You then have to log in to the account and fix each one.
With tools (MCP): The Skill not only identifies that an ad group is missing a callout extension but can also apply a relevant, pre-approved extension from extensions used elsewhere in the account. It doesn’t just report the problem; it fixes it.
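A checklist like this is easy to express as data plus predicates. A minimal Python sketch, with invented check names and an invented account structure:

```python
# Toy checklist-style audit of the kind a Skill could encode.
# The account shape and check definitions are illustrative assumptions.

CHECKS = {
    "missing_callout_extension": lambda ag: not ag.get("callouts"),
    "low_ctr": lambda ag: ag.get("ctr", 0) < 0.01,
}

def audit(ad_groups):
    findings = []
    for ag in ad_groups:
        for name, failed in CHECKS.items():
            if failed(ag):
                findings.append((ag["name"], name))
    return findings

account = [
    {"name": "Panel Upgrades", "callouts": [], "ctr": 0.034},
    {"name": "AC Repair", "callouts": ["24/7 Service"], "ctr": 0.004},
]

print(audit(account))
# [('Panel Upgrades', 'missing_callout_extension'), ('AC Repair', 'low_ctr')]
```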
4. Budget reallocation
This Skill analyzes campaign performance data to find opportunities to shift budget from underperforming campaigns to those with higher potential returns.
Without tools: The Skill provides a recommendation, such as: “Decrease Campaign A’s budget by 20% and increase Campaign B’s budget by 15%.”
With tools (MCP): The Skill performs a dynamic analysis, pulling in exactly the right data with the appropriate lookback and time segmentation, and then executes the budget change directly, ensuring budgets are optimized as soon as the opportunity is identified.
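The arithmetic behind such a recommendation is simple. A small Python sketch with illustrative budget figures:

```python
# Sketch of the budget shift described above: decrease Campaign A by 20%,
# increase Campaign B by 15%. Campaign names and budgets are invented.

budgets = {"Campaign A": 1000.0, "Campaign B": 800.0}

def reallocate(budgets, decreases, increases):
    updated = dict(budgets)  # leave the original untouched
    for name, pct in decreases.items():
        updated[name] = round(updated[name] * (1 - pct), 2)
    for name, pct in increases.items():
        updated[name] = round(updated[name] * (1 + pct), 2)
    return updated

print(reallocate(budgets, {"Campaign A": 0.20}, {"Campaign B": 0.15}))
# {'Campaign A': 800.0, 'Campaign B': 920.0}
```

With an MCP connection, the same function's output would feed an update call rather than a printout.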
The future of your role: From PPC doer to PPC designer
The combination of Skills and tools enables you to move from playing with AI to having AI do meaningful work. For years, AI has been good at generating ideas but weak at executing them inside the ad platforms. This solves the “last mile problem” by giving AI the logic, data, and permissions to act.
This also signals a change in the role of the PPC professional. Your job will shift from doing the repetitive work to designing the systems that do the work. Instead of manually analyzing reports and making changes, you will spend more time designing Skills, defining the rules and guardrails for automation, and reviewing the outcomes.
We’re at a point where the large language models are capable, the tools for connecting them to platforms are available, and the interfaces make it possible for non-developers to build. It’s time to rethink your processes and get AI to be a real teammate.
The end of endless prompting
The cycle of endless prompting is a dead end. It keeps you in the role of a manual operator when you should be a systems designer. By embracing Claude Skills, you’re doing more than just working faster; you’re changing the very nature of your job. You’re moving from “doing PPC work” to “designing the PPC systems” that perform that work with predictability and at scale.
This is the ultimate expression of the AI-amplified marketer: building a true partner that codifies your expertise into a reliable, efficient engine.
The first step is to look at your daily tasks through the lens of a designer. What repetitive process is ready to be turned into your first Skill?
Google’s Ask Maps feature does more than help users find nearby businesses.
Based on hands-on testing of local service queries for plumbers, electricians, and HVAC companies, Ask Maps often narrows the field, interprets user intent, and frames businesses around qualities such as responsiveness, specialization, honesty, and repair-first thinking.
In more complex prompts, it sometimes provides guidance before recommending businesses. This shows Google Maps moving beyond simple local retrieval and toward a more recommendation-driven experience.
To evaluate that shift, we tested Ask Maps across five levels of local intent — starting with simple category searches and progressing toward conversational prompts involving uncertainty, trust, and decision-making.
A clear pattern emerged. As query nuance increased, Ask Maps shifted from listing businesses to interpreting which businesses fit and why.
This article draws from hands-on testing across a limited set of local service queries in one geographic area. Treat these findings as an early directional view, not a comprehensive representation across all markets or query types.
The testing framework
To evaluate progression, we built a five-level intent model based on how homeowners and local service customers actually search. Instead of organizing around traditional keyword categories, we structured the framework from simple retrieval toward conversational decision-making.
Level 1 focused on basic requests with minimal context.
Example: “Looking for an HVAC company near me.”
Level 2 introduced more service specificity.
Example: “I need an electrician to upgrade my panel in an older home.”
Level 3 moved into situational queries, where the user described a problem.
Example: “My furnace is making a loud banging noise and I’m not sure if it needs to be replaced or repaired.”
Level 4 introduced trust and decision concerns.
Example: “I think my furnace might need to be replaced, but I don’t want to get overcharged. Who is honest about that?”
Level 5 combined those elements into fully conversational prompts asking for guidance, validation, and recommendations in the same search.
Example: “I was told I need a full furnace replacement, but it feels expensive. How do I know if that’s actually necessary, and who should I call for a second opinion in my area?”
This framework allowed us to evaluate:
Which businesses appeared.
How Ask Maps interpreted prompts.
What attributes it emphasized.
When results started to resemble guided recommendations rather than search results.
Ask Maps narrows the field and adds interpretation
One of the clearest patterns across the testing was that Ask Maps consistently returned a relatively small set of businesses while increasing the amount of interpretation as the user’s search intent became more complex.
At Level 1, the average number of businesses shown was 3.6. Level 2 rose to 4.3. Level 3 dropped slightly to 3.3. Level 4 averaged 5, and Level 5 averaged 4.6. Across the full set, the range remained fairly tight, generally between three and eight businesses.
That’s a different experience from traditional Maps, where a user can scroll through a much broader set of options and do more of the evaluation work themselves.
Ask Maps narrows choices early and spends more effort explaining why those businesses fit the prompt, but stops short of being fully action-oriented. Even when a phone number is shown, there’s no clickable call button directly in the Ask Maps response.
To call or access the full set of contact options, the user still has to click into the business’s Google Business Profile. That matters because while Ask Maps is becoming more interpretive, the underlying GBP is still where action happens.
As prompts become more nuanced, uncertain, or trust-sensitive, Ask Maps draws on a broader range of sources. It shows fewer businesses, replacing breadth with interpretation.
Even the simplest queries don’t behave like a traditional Maps result.
At the baseline level, Ask Maps still relies heavily on Google Business Profile data, including:
Business descriptions.
Review content.
Ratings.
Hours.
In some cases, posts.
Website influence is minimal here, and there’s little evidence of outside sourcing. But even within that mostly closed ecosystem, it goes beyond listing nearby businesses.
Instead of just showing names, ratings, and locations, Ask Maps:
Generates narrative summaries based on information in the Google Business Profile.
Describes businesses in terms of responsiveness, experience, specialization, or the kinds of situations they seem well-suited for.
Draws on reviews when framing businesses.
Even at the most basic level, Ask Maps isn’t neutral. It’s beginning to interpret businesses for the user.
As queries become more specific, Ask Maps starts matching capability
Once the prompt shifts from a general service search to a specific type of job, Ask Maps becomes more selective in how it matches businesses to the request.
A query about an electrical panel upgrade doesn’t behave the same way as a query about urgent AC repair.
Replacement-oriented prompts emphasize installation and system expertise.
Repair-oriented prompts emphasize speed, availability, and responsiveness.
Queries tied to older homes or higher-risk work call for more evidence of specialization.
At this level, Google Business Profile and reviews still carry much of the weight, but websites matter more when the job is more complex or costly. A panel upgrade query produces stronger external link usage than a more straightforward AC repair prompt.
That doesn’t mean websites are always heavily used. It shows more selectivity. As decisions become more complex, Google looks for more supporting evidence before recommending businesses.
The more noticeable shift begins once the prompts move from service categories to real-world scenarios.
At Level 3, the user is no longer looking for a plumber, electrician, or HVAC company. Instead, they’re describing a problem, such as a loud banging furnace, outdated electrical in an older home, or an AC unit that has stopped working during extreme heat. In those cases, Ask Maps increasingly interprets the problem before introducing businesses.
Some responses provide guidance or context first. Others identify the provider and clarify the work before making recommendations. The businesses that follow aren’t framed as generic providers. They’re framed as possible solutions to the situation.
Review content becomes important here. Rather than simply supporting a business’s credibility, reviews act as evidence that the company has handled similar situations before. Fast arrival times, experience with older homes, communication during stressful repairs, and problem-solving ability all become more meaningful when describing businesses.
This is the point where Ask Maps moves more clearly from retrieval to interpretation.
Trust-oriented queries change what gets emphasized
When the prompts introduce fear, skepticism, or concern about making the wrong decision, Ask Maps changes again.
At Level 4, the focus is less on the service need itself and more on the emotional context around it. The user is worried about being overcharged, being pushed into unnecessary replacement, or hiring someone who would cut corners.
Ask Maps doesn’t just return businesses capable of doing the work. It organizes businesses around trust-related qualities such as honesty, transparency, careful workmanship, fairness, and second-opinion value.
This is one of the strongest patterns in the research. At this stage, review language is the primary signal shaping how businesses are framed. Specific phrases and anecdotes matter, elevating businesses that explain options clearly, don’t upsell, offer honest assessments, or deliver careful, professional work.
External sources become more relevant here. In addition to GBP information and reviews, Ask Maps shows more willingness to pull from company websites, testimonials, third-party platforms, and educational resources when the user’s concern involves decision risk rather than just service need.
Once the query becomes trust-driven, the recommendation no longer appears to be based only on who can do the job. It reflects who is most likely to handle the situation in a way that the user feels good about.
The strongest example of this progression came at Level 5. These are prompts where the user combines a problem, uncertainty, and a request for recommendations in a single query.
For example, someone might say they were told they needed a full furnace replacement but were unsure whether that was really necessary and wanted to know who to call for a second opinion. In these cases, Ask Maps moves most clearly into a decision-support role.
Instead of leading with local businesses, it often starts with an explanation, introducing frameworks, safety context, or ways to think about the decision.
Only after that does it recommend businesses, and those businesses are often grouped not just by rating or proximity, but by approach. Some are framed as repair-first options. Others are framed as second-opinion experts or safety-focused specialists.
This is where Ask Maps feels least like a directory and most like an advisor. The structure of the response looks more like a guided decision process than a traditional local search result.
That doesn’t mean the system is flawless or that every answer is equally strong. But it does suggest that when a prompt includes uncertainty and a need for validation, Ask Maps is trying to do more than match a category. It’s trying to help the user think through what to do next.
Across the testing, several source patterns appear repeatedly, and the mix appears to shift depending on the type of query.
At the foundation, Google Business Profile does much of the early work. Business categories, service descriptions, hours, ratings, and review counts help determine which businesses are eligible to appear and how they are initially framed. In some cases, Ask Maps also pulls from GBP services and products, business descriptions, and occasionally posts when those help reinforce what the business does.
Reviews seem to be one of the most important inputs across nearly every query type. Not just in ratings, but in how review language shapes the summary.
Ask Maps often draws on review themes tied to:
Responsiveness.
Honesty.
Professionalism.
Fast arrival times.
Work on older homes.
Repair-versus-replace situations.
Whether customers feel the company explains options clearly or avoids unnecessary upselling.
In other words, reviews support reputation and help define how a business is positioned in the response.
Business websites matter more once the query becomes more specific, higher-stakes, or more tied to decision-making. In those cases, Ask Maps seems more likely to pull in service pages, testimonial pages, or other on-site business information that helps reinforce specialization, repair-first positioning, second-opinion value, or experience with a particular type of job.
That’s more noticeable in queries tied to things like panel upgrades, replacement decisions, or older-home electrical concerns than in simpler “near me” searches.
External sources are the most selective layer, but they become more visible when the query involves safety, diagnosis, pricing uncertainty, or broader decision support.
In those cases, Ask Maps pulls in:
Educational content around issues like repair-versus-replace decisions, quote validation, and electrical safety.
Third-party review and directory platforms such as Angi, HomeAdvisor, YouTube, and Facebook.
Other publicly available business information, when it helps reinforce trust, workmanship, or reputation.
In some of the trust-oriented electrician queries in particular, this outside sourcing is more prominent than in simpler local lookups, suggesting Google may broaden its evidence base when evaluating how a business is likely to operate, not just what services it offers.
Ask Maps isn’t relying on a single source of truth. It appears to be constructing an answer from a mix of Google Business Profile data, review language, business website content, and selectively chosen outside sources, with the balance shifting based on what the user is actually asking.
What this may mean for local visibility
If Ask Maps continues to develop in this direction, it could have meaningful implications for local visibility in Google Maps.
Inclusion alone may matter less than interpretation. If Ask Maps is consistently showing a smaller set of businesses and adding more explanation around them, the question is no longer just whether a business appears. It’s also how that business is framed and whether Google has enough confidence to position it as a good fit for the situation.
Review content is becoming more important than many businesses realize. The language within reviews appears to influence not just credibility, but the actual way a business is described and recommended.
Website content plays a more targeted role than many local businesses assume. It may not be equally important for every prompt, but it matters more when the service is complex, expensive, or tied to greater uncertainty.
More broadly, Ask Maps points toward a version of local search in which retrieval, evaluation, and decision support occur much more closely together. Instead of searching, comparing, researching, and then deciding across several steps, the user may increasingly be guided through much of that process within a single AI-mediated Maps experience.
What businesses and SEOs should tighten up now
If Ask Maps continues moving in this direction, the practical response isn’t to chase a new tactic or treat it like a separate channel. It’s to make the business easier for Google to understand and easier for customers to trust.
Keep the Google Business Profile current and specific
A Google Business Profile may play a bigger role when Ask Maps is trying to decide what a business does, what kinds of jobs it handles, and whether it fits a more nuanced prompt.
Review primary and secondary categories to make sure they reflect the core work accurately.
Tighten the business description so it clearly explains the services offered, the types of jobs handled, and any specialties or areas of focus.
Make sure hours, service areas, and contact details are complete and current.
Add photos that reinforce the kinds of jobs the business wants to be associated with.
Treat posts and profile updates as another way to reinforce services and activity, not just as optional extras.
Use the Services and Products sections fully, adding clear descriptions that reflect the specific jobs, specialties, and situations the business wants to be known for.
Pay closer attention to review language
If Ask Maps uses review language to shape how businesses are positioned, then the wording in reviews may matter more than many businesses realize.
Look beyond review volume and average rating.
Pay attention to whether reviews naturally mention specific jobs, customer concerns, and outcomes.
Watch for language around responsiveness, honesty, professionalism, repair-first thinking, and clear communication.
Encourage reviews that reflect real experiences rather than generic praise.
Use review trends to understand how the business is likely being framed by Google.
Revisit website content for higher-consideration services
Website content appears more likely to matter when the query is more complex, more expensive, or tied to more uncertainty.
Strengthen service pages for the higher-value or higher-risk work the business wants to be known for.
Add FAQs that address real decision points, not just basic definitions.
Include examples of the kinds of jobs handled, especially where context matters.
Reinforce trust signals such as experience, process, reviews, and proof of work.
Use language that helps explain situations like repair versus replace, older-home work, or second-opinion scenarios.
Think beyond ranking for a phrase
There’s a broader strategic shift here for local SEO. The question may no longer be only whether a business can rank for a phrase. It may also be whether Google has enough evidence to recommend that business in response to a real-world question.
Evaluate whether the business is easy to understand across GBP, reviews, website content, and broader digital mentions.
Look at whether the business is clearly associated with the jobs and situations it wants to win.
Think about trust and decision support, not just service relevance.
Focus on making the business more legible to both Google and potential customers.
Treat local optimization less like keyword matching alone and more like building a clear, consistent business profile across sources.
The direction of Ask Maps is becoming clearer
The main question behind this research was when Ask Maps stops behaving like a directory and starts behaving more like a recommendation engine. Based on this testing, that shift starts earlier than many might expect.
Even at the most basic level, Ask Maps narrows, summarizes, and interprets. As prompts become more specific, situational, and trust-driven, it moves further toward guided recommendations. At the highest level of complexity, it begins to look less like traditional local search and more like a system designed to help users make decisions.
That doesn’t mean Google Maps has fully changed into something else. But it does suggest the direction is becoming clearer. For local businesses and the people who support them, that makes this worth watching closely. Visibility inside Maps may increasingly depend not just on being present, but on being understood well enough for Google to explain why the business fits the user’s needs.
A user asks Gemini: “Find me a task chair under $400 with lumbar support and free shipping. Order the best one.”
The AI doesn’t open a new tab. It doesn’t ask the user to click anything. Instead, it queries product databases, cross-references reviews, checks real-time inventory, compares shipping policies, and initiates a checkout — all without a human touching a single page.
These are all things the user would have done themselves, but now in a fraction of the time, with as much effort as it took to write the initial prompt.
Okay, we might not be quite at the stage where everyone is letting AI agents make all their purchases for them. But it’s no longer an unrealistic future.
What made that possible isn’t the AI models themselves. It’s the infrastructure we’re seeing become an increasingly important part of how modern websites are built. This infrastructure consists of a stack of protocols that tells AI agents how to find each retailer’s site, understand their catalog, verify their claims, and take action.
These protocols define how AI agents interact with your brand. And most SEOs have no idea they exist.
By the end of this article, you’ll understand what each protocol does, how they differ from one another, and why you need to pay attention to what’s going on underneath the hood of AI search if you want to stay visible going forward.
Why Protocols Matter for SEOs
Protocols determine whether an AI agent can interact with your brand programmatically or has to guess. Brands that speak the agent's language are more likely not just to be surfaced, but to be recommended and, ultimately, purchased from.
Think of how robots.txt and XML sitemaps became table stakes for search crawlers. Agentic protocols are shaping up to be that for AI agents.
Put simply: if you want agents to be able to take action on your site — whether that’s making a purchase, booking a table, or completing a form — you need to understand these protocols.
Note: We’re not suggesting that without these protocols AI agents and users will never access your site or buy from it. Agentic commerce is still pretty new, and even the protocols themselves are still evolving. But we believe that agents will increasingly act on behalf of users, and that the easier you make it for them to do that on your website, the better positioned you’ll be as agentic commerce becomes the norm.
The Protocol Stack: A Quick Map
These protocols aren’t competing standards fighting for dominance. They operate at different layers of the same stack, and most are designed to work together.
Here’s a quick breakdown of what these protocols do:
Agent / Tool: connects agents to external data, APIs, and tools. Key protocol: MCP.
Agent / Agent: lets agents hand off tasks to other agents. Key protocol: A2A.
Agent / Website: lets websites become directly queryable by agents. Key protocols: NLWeb, WebMCP.
Agent / Commerce: enables agents to discover products and complete purchases. Key protocols: ACP, UCP.
Note: As with everything AI, the agentic protocols we’ll give more details on below are constantly evolving. This means some platforms are yet to adopt some of the protocols, and the specifics of each protocol could also change over time.
MCP: Model Context Protocol
MCP is the universal connector between AI agents and external tools, data sources, and APIs.
How It Works
Before MCP, every AI tool needed a custom integration for every data source it wanted to access. If you wanted a chatbot to pull live pricing from your database and cross-reference it with your CMS, someone had to build a bespoke connection between those systems. Then rebuild it whenever either one changed.
MCP standardizes that connection. Think of it as USB-C for AI: one protocol that lets any agent plug into any tool, database, or website that supports it.
An agent using MCP can pull live pricing data, check inventory, read structured content from a site, or execute a workflow, all through the same interface.
The website or tool publishes an MCP server, and the agent connects to it. There’s much less need for custom integration work on either side.
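Concretely, MCP messages travel as JSON-RPC 2.0. The sketch below shows roughly what an agent sends when it invokes a tool on an MCP server; the tool name and arguments are illustrative, not a real server's API:

```python
import json

# The MCP wire format is JSON-RPC 2.0. This is the general shape of a
# tool invocation; "get_campaign_stats" is a hypothetical tool a server
# might expose, not part of the protocol itself.

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_campaign_stats",
        "arguments": {"lookback_days": 14},
    },
}

print(json.dumps(request, indent=2))
```

Because every server speaks this same shape, the agent needs no bespoke integration per data source.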
Who’s Behind It
MCP was launched by Anthropic in November 2024. It has since been adopted by OpenAI, Google, and Microsoft. MCP is now governed by an open-source community under the Agentic AI Foundation (AAIF), a directed fund under the Linux Foundation.
As of early 2026, there are more than 10K MCP servers out there, making it the de facto standard for agent-to-tool connectivity.
What It Means for Your Brand
Structured data, clean APIs, and accessible HTML have always been good technical SEO. Now they’re also agent compatibility requirements. Brands with MCP-compatible data give agents something to work with. Brands without it force agents to scrape pages and infer meaning, which creates friction and can affect whether they recommend you.
A2A: Agent2Agent Protocol
A2A is the standard that lets AI agents from different vendors communicate, delegate tasks, and hand off work to one another.
How It Works
MCP lets an agent talk to tools. A2A lets agents talk to each other.
When a task is complex enough to need multiple specialist agents — like one for research, one for comparison, and one for completing a transaction — A2A is the protocol that coordinates them.
Each A2A-compliant agent publishes an “Agent Card” at a standardized URL (that looks like “/.well-known/agent-card.json”). This card advertises what the agent can do, what inputs it accepts, and how to authenticate with it. Other agents discover these cards and route tasks accordingly.
The result: agents from entirely different companies, built on different frameworks, running on different servers, can collaborate on a single user request. No custom-built connections required.
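To illustrate, here is roughly what such a card could contain, built in Python. The field values are invented, and the exact schema should be checked against the current A2A spec before relying on it:

```python
import json

# Sketch of an A2A Agent Card, the document served at
# /.well-known/agent-card.json. Values are invented for illustration.

agent_card = {
    "name": "Pricing Checker",
    "description": "Verifies advertised prices against third-party sources.",
    "url": "https://agents.example.com/pricing",
    "version": "1.0.0",
    "skills": [
        {
            "id": "verify_price",
            "name": "Verify price",
            "description": "Cross-checks a quoted price for a given product.",
        }
    ],
}

print(json.dumps(agent_card, indent=2))
```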
Who’s Behind It
Google launched A2A in April 2025 with 50+ technology partners, including Salesforce, PayPal, SAP, Workday, and ServiceNow. The Linux Foundation now maintains it under the Apache 2.0 license.
What It Means for Your Brand
As multi-agent workflows become more common, agents may evaluate your brand across multiple checkpoints before a human sees the result.
That chain might look something like this:
A research agent surfaces your product from a broad category query.
An evaluation agent reads your reviews and checks the sentiment.
A pricing agent verifies your costs against third-party sources.
A trust agent cross-references your claims for consistency.
A2A orchestrates that entire chain. If your data is inconsistent across sources, like if your pricing page says one thing and your G2 profile says another, the AI agent might filter your brand out as a contender. All before the user even sees you as an option.
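Conceptually, that chain is a sequence of filters, and a brand must pass all of them to reach the user. A toy Python sketch with invented stages and data:

```python
# Toy model of a multi-agent evaluation chain: each stage is a filter,
# and only brands that pass every stage reach the user. Stage logic and
# brand data are illustrative assumptions.

def positive_sentiment(brand):
    return brand["review_score"] >= 4.0

def consistent_pricing(brand):
    # A trust agent might compare the site's price with third-party listings.
    return brand["site_price"] == brand["listed_price"]

STAGES = [positive_sentiment, consistent_pricing]

def surviving_brands(brands):
    return [b["name"] for b in brands if all(stage(b) for stage in STAGES)]

brands = [
    {"name": "Acme", "site_price": 99, "listed_price": 99, "review_score": 4.5},
    {"name": "Globex", "site_price": 99, "listed_price": 129, "review_score": 4.8},
]

print(surviving_brands(brands))  # ['Acme']
```

Note that Globex is dropped despite the better reviews: one inconsistent signal is enough.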
NLWeb: Natural Language Web
NLWeb is Microsoft’s open protocol that turns any website into a natural language interface, queryable by both humans and AI agents.
How It Works
Right now, when an AI agent visits your website, it might have to make a lot of guesses. It scrapes your HTML, infers meaning from your content, and relies on your page being structured properly to be able to parse it effectively. There’s a lot of room for error.
Once a site implements NLWeb, any agent can send a natural language query to a standard “/ask” endpoint and receive a structured JSON response. Your site then answers the agent’s question directly, rather than the agent interpreting your HTML.
Every NLWeb instance is also an MCP server. A site implementing NLWeb automatically becomes discoverable within the broader MCP agent ecosystem without any additional configuration.
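The contract can be sketched as a function from a query to structured JSON. The stub below only keyword-matches to show the request/response shape; a real NLWeb instance interprets the query with an LLM and returns richer schema.org results:

```python
# Sketch of the NLWeb contract: a natural language query goes to /ask and
# structured JSON comes back. The response shape (a list of
# schema.org-typed items) is a simplified illustration, not the full spec.

CATALOG = [
    {"@type": "Product", "name": "ErgoTask Chair", "price": 379, "lumbarSupport": True},
    {"@type": "Product", "name": "Basic Stool", "price": 89, "lumbarSupport": False},
]

def ask(query: str) -> dict:
    # Stubbed keyword matching stands in for real query interpretation.
    if "lumbar" in query.lower():
        results = [item for item in CATALOG if item["lumbarSupport"]]
    else:
        results = CATALOG
    return {"query": query, "results": results}

print(ask("task chairs with lumbar support under $400"))
```

The point is that the site answers in its own structured vocabulary instead of leaving the agent to parse HTML.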
Who’s Behind It
NLWeb was created by R.V. Guha, the same person behind RSS, RDF, and Schema.org. (That’s no coincidence.) NLWeb deliberately builds on web standards that already exist, which means a lot of websites are close to NLWeb-ready right now.
Microsoft announced NLWeb at Build 2025 in May 2025. It’s open-source on GitHub. Early adopters include TripAdvisor, Shopify, Eventbrite, O’Reilly Media, and Hearst.
What It Means for Your Brand
For SEOs, NLWeb is a natural extension of work you may already be doing.
Schema markup, clean RSS feeds, and well-structured content are the foundation NLWeb builds on. Sites that have invested in structured data have a head start. Sites that haven’t are harder for agents to work with, but they can easily catch back up by implementing schema markup now.
Structured data already helps search engines, and it can make it easier for agents to understand and interact with your site too. That increases the value of technical SEO work you may have been putting off.
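As a reference point, here is a minimal schema.org Product snippet of the kind that work produces, built as a Python dict and serialized to JSON-LD. The product values are placeholders; in practice the JSON would be emitted inside a `<script type="application/ld+json">` tag on the product page.

```python
import json

# Minimal schema.org Product + Offer markup. Values are placeholders.
product_ld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "ErgoChair 2",
    "description": "Task chair with adjustable lumbar support.",
    "offers": {
        "@type": "Offer",
        "price": "379.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

print(json.dumps(product_ld, indent=2))
```

This is the same markup search engines already consume, which is why sites with mature structured data are close to NLWeb-ready.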
WebMCP is a proposed W3C standard that lets websites declare their capabilities directly to AI agents through the browser.
How It Works
NLWeb makes your content queryable. WebMCP goes one step further: it lets websites declare what actions they support. These actions could include “add to cart,” “book a demo,” “check availability,” and “start a trial.”
These capabilities are declared in a structured, machine-readable format. Instead of an agent scraping your UI and guessing how your checkout works, WebMCP gives it an explicit map, straight from the source (you).
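To show the idea (not the spec), here is an entirely hypothetical capability declaration. The field names and structure below are invented for illustration; consult the W3C proposal for the actual format.

```python
# Hypothetical illustration of a site declaring supported actions in a
# machine-readable structure. Field names are invented, not WebMCP's.
capabilities = {
    "site": "https://example-store.com",
    "actions": [
        {"name": "add_to_cart",
         "params": {"sku": "string", "quantity": "integer"}},
        {"name": "check_availability", "params": {"sku": "string"}},
        {"name": "start_checkout", "params": {"cart_id": "string"}},
    ],
}

def supported(action_name):
    """An agent checks the declared map instead of guessing from the UI."""
    return any(a["name"] == action_name for a in capabilities["actions"])

print(supported("add_to_cart"))  # True
```

An agent reading this knows what it can do and what parameters each action expects, with zero scraping.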
Who’s Behind It
Google and Microsoft proposed WebMCP, and the W3C Community Group is currently incubating it. Chrome’s early preview shipped in February 2026, with broader browser support expected by mid-to-late 2026.
What It Means for Your Brand
WebMCP is the clearest preview of where agent-website interaction is heading.
Imagine you have two brands with similar products, similar pricing, and similar reviews. The one whose site declares clear, structured capabilities is easier for an agent to act on. The other requires guesswork.
Agents are likely to take the path of least friction, and WebMCP helps you reduce friction to a minimum.
ACP is OpenAI and Stripe’s open standard for enabling AI agents to initiate purchases.
How It Works
ACP focuses specifically on the checkout moment. It creates a standardized way for an AI agent to complete a purchase on a merchant’s behalf, handling payment credentials, authorization, and security through the protocol itself.
Before ACP, an agent that wanted to complete a purchase had to navigate each merchant’s unique checkout flow. A different form, a different payment process, and a different confirmation step for every retailer. ACP standardizes this process.
Merchants integrate with ACP through their commerce platform, and once live, checkout becomes agent-executable. The user doesn’t have to do anything except approve.
ACP originally powered ChatGPT’s instant checkout functionality, but that has since been removed by OpenAI in favor of dedicated merchant apps. ACP may still power product discovery within ChatGPT, and may be used within these apps, but things are evolving fast.
Who’s Behind It
OpenAI and Stripe launched ACP in September 2025. It’s open-sourced under Apache 2.0, with platform support still expanding.
What It Means for Your Brand
If an agent has shortlisted your product and the user tells it to go ahead and pay, ACP is what allows the agent to complete the transaction. If your brand isn’t integrated with this workflow, you risk the AI agent getting stuck or being unable to complete that purchase.
The agent can recommend you, but it can’t buy from you. That gap will matter more as agentic commerce becomes the norm.
UCP is Google and Shopify’s open standard for the full agentic commerce journey, from product discovery through checkout and post-purchase.
How It Works
ACP focuses on the checkout moment, while UCP covers the entire shopping lifecycle.
An agent using UCP can discover a merchant’s capabilities, understand what products are available, check real-time inventory, initiate a checkout with the appropriate payment method, and manage post-purchase events like order tracking and returns. All through a single protocol.
UCP is built to work alongside MCP, A2A, and AP2 (Agent Payments Protocol), meaning it plugs into the broader agent infrastructure rather than replacing it.
Merchants publish a machine-readable capability profile. Agents then discover it, negotiate which capabilities both sides support, and proceed.
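A rough sketch of that discover-and-negotiate flow, assuming a merchant profile published at `/.well-known/ucp` (the path comes from the comparison later in this article): the profile fields and negotiation logic below are invented for illustration, not the UCP schema.

```python
# Illustrative discover-and-negotiate flow. The profile contents and
# capability names are placeholders, not the real UCP format.

merchant_profile = {  # what a merchant might publish at /.well-known/ucp
    "capabilities": ["discovery", "inventory", "checkout", "returns"],
}

agent_supported = {"discovery", "checkout", "order_tracking"}

def negotiate(profile, agent_caps):
    """Both sides proceed with only the capabilities they share."""
    return sorted(set(profile["capabilities"]) & agent_caps)

print(negotiate(merchant_profile, agent_supported))  # ['checkout', 'discovery']
```

The design choice to negotiate, rather than assume, is what lets agents and merchants with different maturity levels still transact.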
Who’s Behind It
Google and Shopify co-developed UCP, with Google CEO Sundar Pichai announcing it at NRF 2026. More than 20 launch partners signed on, including Target, Walmart, Wayfair, Etsy, Mastercard, Visa, and Stripe.
What It Means for Your Brand
When a user asks Google AI Mode to find and buy something, UCP determines whether your brand is in the conversation, and whether the agent can actually complete the transaction.
The machine-readability of your product data, the consistency of your pricing across sources, the clarity of your inventory signals: all of it feeds directly into whether an agent can successfully transact with you.
ACP and UCP are often confused, and they do share some similarities, but here's where they differ:
Built by: ACP comes from OpenAI and Stripe; UCP comes from Google and Shopify.
Scope: ACP covers the discovery and checkout layers; UCP covers the full journey: discovery, checkout, and post-purchase.
Powers: ACP powers ChatGPT instant checkout and product discovery; UCP powers Google AI Mode and Gemini.
Architecture: ACP uses centralized merchant onboarding; UCP is decentralized, with merchants publishing capabilities at /.well-known/ucp.
Status (early 2026): Both are live, with wider rollout in progress.
ACP and UCP are complementary, not competing. A brand may eventually support both — one for ChatGPT’s ecosystem, one for Google’s.
For now, the practical question is: which platforms matter most to your customers, and where does your commerce infrastructure make integration easiest? Choose the protocol that aligns with your answer, or use both.
Example of Agentic Search Protocols in Action
These protocols don’t operate in isolation. Here’s what they might look like working together (a simplified illustration, not an exact account of what happens at each stage):
Scenario: A user asks Gemini: “Find me a comfortable task chair under $400 with lumbar support and free shipping. Order the best option.”
Step 1: MCP Activates
The agent uses MCP to connect to external tools: product databases, review platforms, retailer inventory feeds. It can query live data rather than relying on cached or trained knowledge.
Step 2: A2A Coordinates
The agent then coordinates with specialist agents published by brands and review platforms via A2A. One evaluates ergonomics reviews. One checks pricing consistency across sources. One verifies free shipping claims against each retailer’s actual policy page.
Step 3: NLWeb Answers Queries Directly
The agents query each retailer’s site. Brands with NLWeb implemented respond to the agent’s /ask query with structured data. This includes things like accurate inventory, real-time pricing, and product attributes. Brands without it force the agent to scrape and infer, slowing it down and potentially leading to them being skipped altogether.
Step 4: WebMCP Declares Available Actions
The “winning” retailer’s site has declared its checkout capabilities via WebMCP. The agent knows exactly what actions are available and how to initiate them without any guesswork.
Step 5: UCP Completes the Transaction
The purchase is executed via UCP, entirely within Google’s AI experience. The merchant’s backend communicates through the standardized API. The user gets an order confirmation, and they never visited a single product page.
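Stitched together, the five steps might look like this toy pipeline. Every function body is a stand-in invented for illustration; the point is the hand-off between protocol layers, not any real API.

```python
# Toy end-to-end pipeline for the scenario above. All data is invented.

def mcp_fetch_candidates():          # Step 1: live data via external tools
    return [{"name": "ChairCo Pro", "price": 349, "free_shipping": True,
             "lumbar": True}]

def a2a_verify(chair):               # Step 2: specialist agents check claims
    return chair["lumbar"] and chair["free_shipping"]

def nlweb_confirm_stock(chair):      # Step 3: structured /ask answer
    return {"inStock": True}

def webmcp_checkout_action():        # Step 4: declared checkout capability
    return "start_checkout"

def ucp_purchase(chair):             # Step 5: transaction completes
    return {"status": "confirmed", "item": chair["name"]}

candidates = [c for c in mcp_fetch_candidates()
              if c["price"] < 400 and a2a_verify(c)]
best = candidates[0]
if nlweb_confirm_stock(best)["inStock"] and webmcp_checkout_action():
    print(ucp_purchase(best))
```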
Obviously this is the fully agentic scenario. In reality, not every purchase is going to be left entirely to an AI agent.
But even when a human wants to evaluate options before clicking buy, making it as easy as possible for the agent to make recommendations is still good practice. That’s why these protocols are worth paying attention to.
What SEOs Should Do Now
Understanding the protocol layer is step one. Here’s where to focus next:
1. Prioritize Machine-Readable Content Over Volume
Before adding more pages, make sure your existing pages can be parsed cleanly by an agent. That means:
Having your pricing in plain text, not locked behind JavaScript drop-downs
Using feature lists that don’t require interaction to reveal
Including FAQ content that renders server-side
Using schema markup on product and organization pages
An agent that can’t read your page can’t recommend or buy your products.
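A quick way to sanity-check the first point above is to look at what's actually in the server-rendered HTML, before any JavaScript runs. This sketch uses Python's standard-library HTML parser; the "$" check is deliberately naive, and a real audit would be more thorough.

```python
from html.parser import HTMLParser

# Naive check: does a price appear in the raw HTML text, or would it
# only exist after a JavaScript widget renders it?

class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.text = []

    def handle_data(self, data):
        self.text.append(data)

def price_visible(html):
    parser = TextExtractor()
    parser.feed(html)
    return any("$" in chunk for chunk in parser.text)

server_rendered = "<main><h1>ErgoChair 2</h1><p>Price: $379</p></main>"
js_only = '<main><h1>ErgoChair 2</h1><div id="price-widget"></div></main>'

print(price_visible(server_rendered), price_visible(js_only))  # True False
```

If your own product pages look like the second example when fetched without JavaScript, agents see them that way too.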
2. Audit Your Structured Data
NLWeb builds on Schema.org, RSS, and structured content that sites already publish. If you’ve invested in schema markup, you have a head start on NLWeb compatibility.
If you haven’t, this is now a double reason to prioritize it: it improves your search visibility and makes your site more easily queryable by agents.
3. Check Your Consistency Across Sources
Agents verify claims by cross-referencing your site, review platforms, and third-party content. If your pricing page says one thing and your Capterra profile says another, agents can flag the discrepancy and lose confidence in your brand, making the recommendation or purchase less likely.
Audit for cross-source consistency the same way you’d audit NAP consistency in local SEO. It’s the same underlying principle, just for a different kind of crawler.
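A minimal sketch of that audit: collect the price each source lists and flag any disagreement. The source names and values below are placeholders.

```python
# Toy cross-source price audit. Sources and prices are placeholders.

def audit_prices(listings):
    prices = set(listings.values())
    if len(prices) > 1:
        return {"consistent": False, "sources": listings}
    return {"consistent": True, "price": prices.pop()}

listings = {
    "pricing_page": "$49/mo",
    "g2_profile": "$49/mo",
    "capterra_profile": "$59/mo",  # the discrepancy an agent would flag
}

print(audit_prices(listings)["consistent"])  # False
```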
4. Get on the ACP and UCP Waitlists Now
These protocols are in active rollout. Early adopters benefit from lower competition in agent-mediated commerce while the rest of the ecosystem catches up. Join Stripe’s waitlist for ACP access, and join Google’s waitlist for UCP.
For other protocols like MCP, talk to your dev team about making sure your site supports them.
5. Monitor Your AI Footprint as a Regular Practice
Search your brand in ChatGPT, Perplexity, and Google AI Mode. Are agents describing your product accurately? Is your pricing consistent with what they’re surfacing? Are competitors appearing where you aren’t?
This is the new version of checking your SERP presence, and it needs to become a recurring part of your workflow, not a one-time audit.
Understand how your brand is appearing in AI search right now with Semrush’s AI Visibility Toolkit. It shows you where you’re showing up, where you’re behind your rivals, and exactly what AI tools are saying about your brand.
What’s Next for Agentic Search Protocols?
The protocols we’ve discussed here are already live, but they’re still evolving.
WebMCP is still in early preview. ACP and UCP are mid-rollout. New protocols — for agent payments, agent identity, agent-to-user interaction — are still being drafted and debated.
But the SEOs who understand and implement these protocols correctly will be the best positioned as agentic search matures.
At midnight on Jan. 5, hackers took over our Google Ads Manager Account (MCC). We weren’t alone. While it’s hard to get an exact count, hundreds, if not thousands, of agencies have been affected by the hacks, in turn affecting tens of thousands of accounts.
While I wouldn’t wish this experience on our worst enemy, having been through it, I have some insights that I hope can help you prevent the same experience from happening to your MCC account.
How we were hacked
Despite having two-factor authentication (2FA) and allowed domains enabled, the hackers were able to get into our account via an employee’s email address. It was clearly a targeted hack: the night of the hack, the hackers tried to get in via two other email accounts at our company before they succeeded with the third.
While phishing or compromised passwords may have originally gotten them into the system — we still don’t know which — we later learned that the account the hackers used had been compromised for months and that they had created their own 2FA that they had been using all along.
Once they gained access to our account, the hackers removed everyone else’s access to the MCC. They then changed the allowed domain to Gmail and granted access to over a dozen people. The hackers then created a new MCC in our company’s name and invited most of our clients. Luckily, none of them accepted.
In the few hours they were in the MCC, the hackers proceeded to create chaos. They removed all the users from some accounts and changed the payment method in others. They launched new campaigns on only a few accounts, yet somehow also attempted half-million-dollar credit card charges on two others (despite not running any ads in those accounts).
What happened after the hack
We were very lucky. The hackers were locked out within eight hours, and we regained access in just over a week. They spent only about $100 across the MCC. Neither crazy credit card charge went through. We were fully recovered from the hack within two weeks. How did we do this? Let’s take a look at the steps we took.
Step 1: We contacted Google
When we were hacked, we immediately contacted our reps at Google. We’re incredibly lucky to have wonderful Google reps with whom we’ve built longstanding relationships, including one we’ve worked with for over three years.
These long-term relationships helped, and our reps went to bat for us. They continued to put pressure on the support cases until they were resolved and helped connect us to the resources we needed. Not everyone has their own reps, but you can also take these steps on your own.
Step 2: Fill out the forms
Our Google reps immediately directed us to their “What to do if your account is compromised” resource. From there, we filed Account Takeover Forms, alerting Google to the hack. We were directed to file a form for each of our accounts that had been hacked.
We first filed one for our MCC, even though the form, at the time, said not to use it for MCCs. It looks like that language has since been changed, which is great — don’t skip this step. Getting back into the MCC makes it easier to resolve all issues, rather than having to file tickets and coordinate access for each account.
Step 3: Contact clients
At the same time, we directed any clients who still had access to their accounts to disconnect them from our MCC, and to grant access to a non-compromised email account. That way we were able to secure the accounts, work on them, and mitigate any damages immediately. We were also able to triage our accounts to figure out which we were still able to access, and which had no admins left with access.
Step 4: Reset billing
Disconnecting from our MCC wound up being a very important step. That’s because when our accounts were disconnected from the MCC, we were easily able to reset the billing by editing the payment manager and undoing all of the payment chaos that the hackers had created. We were then able to reconnect them without issue.
Step 5: Check change history
When we eventually did get back into the accounts, we immediately checked the change history, which we were able to do at the MCC level for additional speed. All the changes the hackers made during that time were there with time stamps, allowing us to put together a timeline of the hack and remediate any remaining issues.
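A sketch of how you might turn those time-stamped entries into a timeline. The rows below are made-up stand-ins for what the Google Ads change history actually shows; the sorting logic is the useful part.

```python
from datetime import datetime

# Reconstruct a chronological timeline from change-history rows.
# The entries here are invented examples.

events = [
    {"time": "2025-01-05 02:14", "change": "Removed user access"},
    {"time": "2025-01-05 00:03", "change": "Allowed domain changed to gmail.com"},
    {"time": "2025-01-05 01:27", "change": "Payment method changed"},
]

timeline = sorted(events,
                  key=lambda e: datetime.strptime(e["time"], "%Y-%m-%d %H:%M"))
for e in timeline:
    print(e["time"], "-", e["change"])
```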
During all this activity, a few things were especially critical to our success in recovering the account and mitigating damage. Here’s a quick rundown of best practices to keep in mind.
Make sure clients have access
This isn’t just a best practice, but something we believe should always be the case for ethical reasons. Having additional admins in the account let us regain access immediately, despite being locked out of the MCC, and remediate issues without losing time or momentum.
Google also pushed back on any access or billing changes that didn’t have approval from an existing admin, so having people still in the accounts was critical.
Keep your MCC clean
Remove old clients, and any other MCCs for tools you’re no longer using. We didn’t do this, and wish we had. We’ve made it a best practice for our accounts moving forward.
Limit team access
Make sure your team only has the minimum access they need. Standard access is great. Admin access should be reserved for as few people as possible. The compromised account belonged to a junior team member who didn’t need admin-level access.
This isn’t to say they wouldn’t have gotten in through a more senior team member’s account — as mentioned, they did try to get in through several before succeeding — but it would have mitigated risk.
Use credit cards or invoices
Never connect your bank accounts to your MCC. We’ve heard of companies that have lost hundreds of thousands of dollars with this same kind of hack. Because our clients were all either on invoice or credit cards, the hackers couldn’t quickly spend money in a way that hit their accounts.
As noted earlier, the credit card companies rejected the very suspicious half-million-dollar charges the hackers attempted to make, and notified the credit card holders. The clients we were invoicing were never charged, and everything was captured on the invoices before billing.
Invest in relationships
It’s important to invest in your relationships with your Google reps, and fellow agency owners. We remain incredibly grateful to all of the people who helped us, or even just commiserated with us along the way. This experience would’ve been even more painful if we’d had to go through it alone.
How to prevent being hacked
For those who have yet to be hacked, congratulations! Let’s try to keep it that way. Here are some things you can do to make it much less likely that this will ever happen to your accounts.
Start with a clean reset
Begin by kicking every single user out of your account, and have everybody on the accounts reset their passwords. Make sure you log everyone out of every session they were in on every device.
Our hackers were sitting around auto-logging in and keeping their sessions open for over two months prior to the night they took over the MCC. If we’d forced a reset and logged everyone off, we would’ve removed their access without even realizing it.
Enable 2FA and allowed domains
Make sure there’s only one 2FA per person. 2FA methods that use authenticator apps or physical keys are better than pinging a device. The hackers had created their own 2FA on our employees’ accounts, and we had no idea it was happening.
Audit and limit access
Make sure the minimum number of people have the minimum access they need to the MCC. This reduces your risk.
Enable multi-party approval
Google rolled out this new feature quite recently to help prevent account takeovers. Essentially, the feature requires that a second admin verifies any big changes before they happen. If you’d like to read up on this feature, here’s a great guide introducing multi-party approval.
Back up your accounts
You can copy and paste your accounts into your preferred spreadsheet app via Google Ads Editor. Make a habit of doing this periodically so that you’ll always have a copy of how things were in case of a hack. With the backups, you can easily revert back if you need to.
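One way to make that habit stick is a small script that snapshots each export under a dated filename, so you can diff or restore after an incident. This is an illustrative sketch: the CSV columns below are placeholders, not the real Google Ads Editor export format.

```python
import csv
import io
from datetime import date

# Snapshot an exported campaign CSV under a dated backup name.
# Columns are placeholders; substitute your real export.

export = io.StringIO("Campaign,Status,Budget\nBrand,Enabled,50\n")
rows = list(csv.DictReader(export))

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["Campaign", "Status", "Budget"])
writer.writeheader()
writer.writerows(rows)

backup_name = f"ads-backup-{date.today().isoformat()}.csv"
# In practice: write buf.getvalue() to backup_name on disk (or cloud storage).
print(backup_name.endswith(".csv"))  # True
```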
Use strong passwords
It’s important to use unique passwords that aren’t being used anywhere else. That way, if one site gets hacked, your MCC is still not at risk. We’re still not sure how the hackers passed the initial password stage to be able to create their own 2FA.
Invest in security monitoring
If you want to be extra careful, invest in security software and/or a cybersecurity expert to monitor your system. We have now done this, and it’s been amazing (and scary) to see how many phishing attempts have already been caught in the six weeks since we did it.
A note for clients: If you’re a client and another team is managing your Google Ads, do not accept any Google Ads MCC access requests that you aren’t expecting. Please make sure you always know who and what you’re giving access to. When in doubt, double-check with the team that is managing your account. A little caution can go a long way.
Stay safe out there
The good news is that Google knows about these issues and is actively working to tighten its systems to prevent hacks. In the meantime, I hope this article has helped make our loss your gain. As the saying goes, an ounce of prevention is worth a pound of cure.
When a client calls about a damaging search result, it’s easy to default to one of two responses: “we can suppress it” or “there’s nothing we can do.” Both skip the middle ground — where Google’s removal tools live.
Google provides tools to remove or deindex content from search results. They’re underused, frequently misunderstood, and often conflated.
This guide breaks down what each tool does, when to use it, and what it can’t do — so you can triage client situations accurately and set expectations that hold.
The distinction that changes everything: removal vs. deindexing
Before you use any tool, get one thing right with clients: the difference between two outcomes that look the same but aren’t.
Removal at source: The content is deleted from the site where it lives. Once removed, Google will drop it from its index as it re-crawls the page. This is the cleanest outcome — but it requires the site owner to act. Google’s tools can’t force it.
Deindexing: Google removes the URL from its index, so it won’t appear in search results — even if the page still exists. Anyone with the direct URL can still access it. This is what most of Google’s self-service tools do.
The practical implication: deindexing fixes a search problem, not a content problem. If the content is the liability — a news article, court record, or damaging forum post — deindexing reduces risk but doesn’t eliminate it. That context matters when you advise clients.
Google’s removal tools, explained one by one
1. The URL removal tool (Search Console)
In Google Search Console under Index > Removals, this tool lets you temporarily hide a URL or directory from search results. Removal lasts about six months. If the URL still exists, it may reappear.
Who it’s for: You, if you control the site in Search Console. You can’t use it to remove someone else’s content.
Common use case: Your site has an outdated page you don’t want surfacing — old press releases, deprecated product pages, or pages you’ve updated or removed.
What it won’t do: Remove content from a site you don’t control. This misconception causes significant client frustration.
2. The outdated content tool
When it works: The content is gone (the page 404s or the content is removed), but Google still shows a cached version. You submit the URL, Google recrawls it, and if the content is gone, it removes the result and cached snippet.
When it doesn’t: The page still exists and the content is live. Google will verify it and reject the request.
Practical use: After you’ve removed content at the source, use this to speed up deindexing instead of waiting for the next crawl. It’s not a removal tool — it triggers a recrawl.
3. The Results About You tool
Launched in 2022 and expanded in August 2023, the Results About You tool lets you request the removal of specific categories of personal information from Google Search. It added proactive alerts and broader coverage, then expanded again in early 2026 to include government-issued IDs, passport data, Social Security numbers, and improved reporting for non-consensual explicit imagery, including AI-generated deepfakes.
What it can remove:
Home addresses and precise location data
Phone numbers
Email addresses
Login credentials and passwords
Credit card and bank account numbers
Images of handwritten signatures
Medical records
Personal identification documents (passports, driver’s licenses)
Explicit or intimate images shared without consent
What it can’t remove: General information that falls outside these categories — news articles, reviews, social posts, court records, or professional information. Those require different paths.
Why it matters: If you’re dealing with doxxing, data broker sites, or exposed sensitive data, you now have a self-service path. Managing this tool is increasingly part of ORM work.
4. Legal removal requests
For content outside self-service categories, you can submit legal removal requests to Google:
Defamation: False statements of fact about an identifiable person.
Copyright (DMCA): Unauthorized use of copyrighted material.
Other legal grounds: Harassment, illegal imagery, or other violations.
Google’s legal team reviews these requests; they aren’t automatic, and approval isn’t guaranteed. Defamation has a high bar: the content must be false, not just negative. A bad review isn’t defamation; an inaccurate factual claim may be.
Right to be Forgotten applies only if you’re in the EU or UK. It allows deindexing from Google’s European search properties. It doesn’t remove content globally or impact U.S. search.
5. The personal content removal form
Separate from Results About You, this Google form handles requests to remove non-consensual explicit images, doxxing content, and certain sensitive information on other sites.
This process is more manual. Google reviews the external site content rather than just deindexing a URL. Approval rates are higher for explicit imagery than for other categories, but the process is slower and less predictable.
What none of these tools do
Understanding the limits matters as much as knowing the tools. None of Google’s removal tools will:
Force a third-party site to delete content.
Remove content from other search engines (Bing, Yahoo, DuckDuckGo).
Remove content from Google Images, News, or Maps without separate requests.
Permanently fix the underlying content problem.
Remove results that are accurate, lawful, and in the public interest.
That’s why suppression remains core to reputation management: when you can’t remove content, you push it down with authoritative, well-optimized content.
How to triage a client removal situation
A practical decision flow for incoming removal requests:
Step 1: Can the client control the source site?
If yes, remove it at the source, then use the outdated content tool to speed up deindexing.
Step 2: Is it personal information in Google’s covered categories?
Use Results About You.
Step 3: Is there a legal basis?
Defamation, copyright, court order, or GDPR right to be forgotten. If yes, file the appropriate request and set realistic timelines (weeks to months, not days).
Step 4: Is it none of the above?
Suppression is likely the primary path. Build a content and link strategy around the branded SERP to displace the result over time.
For high-stakes cases — like non-consensual content or permanent court records — firms like Erase.com handle direct outreach and legal escalation on a pay-for-success basis, bridging the gap between DIY tools and litigation.
Setting realistic client expectations
The most common client mistake is expecting Google to act like a content moderator. It isn’t.
Google’s removal tools cover specific, narrow categories. Outside them, Google defaults to indexing what exists on the web.
Set this expectation upfront to protect the client relationship. It also positions suppression not as a fallback, but as the right tool for most ORM situations.
When removal is viable, these tools have improved over the past two years. Results About You has expanded and should be included in your standard ORM audit. The outdated content tool remains underused and is a quick win when source removal has already happened.
Know the tools. Use them where they apply. Suppress where they don’t.
We launched the Yoast SEO Task List in December to give you a clear, actionable to-do list for your site’s SEO. In this update, we’ve added two new tasks, improved how you navigate to fixes, and resolved a bug that was showing tasks in the wrong language.
A quick recap: what does the Task List do?
The Task List scans your site and surfaces specific content that needs attention, ranked by priority with an estimated time to fix. Instead of guessing what to work on next, you click a task and Yoast takes you directly to the right place to make the improvement. Think of it as a personal SEO assistant that knows your site.
What’s new in this update
New task: improve your meta descriptions
Meta descriptions are the short snippets that appear under your page title in Google search results. They don’t directly affect rankings, but they do have a significant impact on whether someone clicks your link. The Task List will now flag recent posts where the meta description is missing or could be stronger, and point you to where you can fix it. Premium users can use the AI Generate button to write one in seconds.
New task: delete your sample page
Every new WordPress site comes with a default “Sample Page” that most people never delete. It adds no value and can create unnecessary noise for search engines. The Task List will now remind you to remove it if it’s still there. It’s a two-minute job that’s easy to overlook.
New task: set a social sharing image
When someone shares your content on Facebook or X, the image that appears alongside it can make a real difference to whether people click. The Task List will now remind you to set a custom social sharing image for your posts and pages, so your content looks its best every time it gets shared.
Go directly to the right place in the editor
Previously, clicking a task would open the post editor and leave you to find the right section yourself. Now, Yoast takes you to the exact part of the editor you need: the SEO tab, the readability panel, or the meta description field. Less scrolling, faster fixing.
Bug fix: tasks now appear in your language
We fixed a bug where task descriptions were showing up in the site’s language rather than the logged-in user’s language. If you manage a multilingual site, or your personal language settings differ from your site’s default, tasks will now display correctly for you.
Also in this release
We’ve added a new Yoast tab to the WordPress Plugins screen that groups all your installed Yoast plugins in one place. This requires WordPress 7.0+.
We fixed a bug where alt text changes made via the inline image editor in How-to and FAQ blocks weren’t saving correctly to the frontend. Thanks to @param-chandarana for the report.
What’s coming next
We’re continuing to expand the Task List with improvements that surface high-impact changes specific to your content. Users of paid plans will see additional tasks in upcoming releases.
Update to Yoast SEO 27.4 to get these improvements automatically, or download the latest version from the WordPress plugin directory.