Google will begin enforcing a minimum daily budget for Demand Gen campaigns starting April 1, 2026.
What’s happening: The Google Ads API will require a minimum daily budget of $5 USD (or local equivalent) for all Demand Gen campaigns. The change is designed to help campaigns move through the “cold start” phase with enough spend for Google’s models to learn and optimize effectively. The update will roll out as an unversioned API change, applying across all buying paths.
Technical details:
In API v21 and above, campaigns set below the threshold will trigger a BUDGET_BELOW_DAILY_MINIMUM error, with additional details available in the error metadata.
In API v20, advertisers will receive a generic UNKNOWN error, with the specific validation failure referenced in the unpublished error code field.
The rule applies when modifying budgets, start dates, or end dates in ways that push daily spend below the $5 floor — covering both daily and flighted budgets.
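Teams that automate budget changes can pre-validate the floor before calling the API. Here is a minimal Python sketch; the helper names are illustrative and not part of the official google-ads client, but the micros convention is real: the API expresses money in millionths of the currency unit, so the $5.00 floor is 5,000,000 micros.

```python
# Pre-flight validation sketch. Assumption: these helpers are illustrative,
# not part of the official google-ads client library.
# The Google Ads API expresses money in micros, so $5.00 = 5_000_000 micros.
DEMAND_GEN_MIN_DAILY_MICROS = 5_000_000  # $5 USD floor from the announcement

def validate_demand_gen_budget(amount_micros: int) -> None:
    """Raise locally before the API rejects with BUDGET_BELOW_DAILY_MINIMUM."""
    if amount_micros < DEMAND_GEN_MIN_DAILY_MICROS:
        raise ValueError(
            f"Demand Gen daily budget {amount_micros / 1_000_000:.2f} USD is "
            f"below the {DEMAND_GEN_MIN_DAILY_MICROS / 1_000_000:.2f} USD minimum"
        )

def flighted_daily_micros(total_micros: int, flight_days: int) -> int:
    """For flighted budgets, the effective daily spend is total / days."""
    return total_micros // flight_days
```

The same check should run whenever a budget, start date, or end date edit changes the effective daily spend, since all three paths can trip the validation.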
Impact on existing campaigns. Current Demand Gen campaigns running below the minimum will continue serving. However, any future edits to budgets or scheduling will require compliance with the new floor.
Why we care. For advertisers and developers, this adds a new compliance layer to campaign management workflows. Systems will need updating to catch and handle the new validation errors before deployment.
The bottom line. Google is standardizing a minimum investment threshold for Demand Gen — prioritizing performance stability, while requiring advertisers to adjust budgets and automation accordingly.
AI recommendations are inconsistent for some brands and reliable for others because of cascading confidence: entity trust that accumulates or decays at every stage of an algorithmic pipeline.
The mechanics behind that shift sit inside the AI engine pipeline. Here’s how it works.
The AI engine pipeline: 10 gates and a feedback loop
Every piece of digital content passes through 10 gates before it becomes an AI recommendation. I call this the AI engine pipeline, DSCRI-ARGDW, which stands for:
Discovered: The bot finds you exist.
Selected: The bot decides you’re worth fetching.
Crawled: The bot retrieves your content.
Rendered: The bot translates what it fetched into what it can read.
Indexed: The algorithm commits your content to memory.
Annotated: The algorithm classifies what your content means across dozens of dimensions.
Recruited: The algorithm pulls your content to use.
Grounded: The engine verifies your content against other sources.
Displayed: The engine presents you to the user.
Won: The engine gives you the perfect click at the zero-sum moment in AI.
After “won” comes an 11th gate that belongs to the brand, not the engine: served. What happens after the decision feeds back into the AI engine pipeline as entity confidence, making the next cycle stronger or weaker.
DSCRI is absolute. Are you creating a friction-free path for the bots?
ARGDW is relative. How do you compare to your competition? Are you creating a situation in which you’re relatively more “tasty” to the algorithms?
Both sides of the AI engine pipeline are sequential. Each gate feeds the next.
Content entering DSCRI through the traditional pull path passes through every gate. Content entering through structured feeds or direct data push can skip some or all of the infrastructure gates entirely, arriving at the competitive phase with minimal attenuation.
Skipped gates are a huge win, so take that option wherever and whenever you can. You “jump the queue” and start at a later stage without the degraded confidence of the previous ones. That changes the economics of the entire pipeline, and I’ll come back to why.
Why the four-step model falls short
The four-step model the SEO industry inherited from 1998 — crawl, index, rank, display — collapses five distinct infrastructure processes into “crawl and index” and five distinct competitive processes into “rank and display.”
It might feel like I’m overcomplicating this, but I’m not. Each gate has nuance that merits its standalone position. If you have empathy for the bots, algorithms, and engines, remove friction, and make the content digestible, they’ll move you through each gate cleanly and without losing speed.
Each gate is an opportunity to fail, and each point of potential failure needs a different diagnosis. The industry has been optimizing a four-room house when it lives in a 10-room building, and the rooms it never enters are the ones where the pipes leak the worst.
Most SEO advice operates at the selection, crawling, and rendering gates. Most GEO advice operates at “displayed” and “won,” which is why I’m not a fan of the term.
Most teams aren’t yet working on annotation and recruitment, which are actually where the biggest structural advantages are created.
Three audiences you need to cater to and three acts you need to master
The AI engine pipeline has an entry condition — discovery — and nine processing gates organized in three acts of three, each with a different primary audience.
Act I: Retrieval (selection, crawling, rendering)
The primary audience is the bot, and the optimization objective is frictionless accessibility.
Act II: Memory (indexing, annotation, recruitment)
The primary audience is the algorithm, and the optimization objective is being worth remembering: verifiably relevant, confidently annotated, and worth recruiting over the competition.
Act III: Execution (grounding, display, won)
The primary audience is the engine and, by extension, the person using the engine, where the optimization objective is being convincing enough that the engine chooses and the person acts.
Frictionless for bots, worth remembering for algorithms, and convincing for people. Content must pass every machine gate and still persuade a human at the end.
The audiences are nested, not parallel. Content can only reach the algorithm through the bot and can only reach the person through the algorithm. You can have the most impeccable expertise and authority credentials in the world. If the bot can’t process your page cleanly, the algorithm will never see it.
This is the nested audience model: bot, then algorithm, then person. Every optimization strategy should start by identifying which audience it serves and whether the upstream audiences are already satisfied.
Discovery: The system learns you exist
Discovery is binary. Either the system has encountered your URL or it hasn’t. Fabrice Canel, principal program manager at Microsoft responsible for Bing’s crawling infrastructure, confirmed:
“You want to be in control of your SEO. You want to be in control of a crawler. And IndexNow, with sitemaps, enable this control.”
The entity home website, the canonical web property you control, is the primary discovery anchor. The system doesn’t just ask, “Does this URL exist?” It asks, “Does this URL belong to an entity I already trust?” Content without entity association arrives as an orphan, and orphans wait at the back of the queue.
The push layer — IndexNow, MCP, structured feeds — changes the economics of this gate entirely. A later piece in this series is dedicated to what changes when you stop waiting to be found.
Act I: The bot decides whether to fetch your content
Selection: The system decides whether your content is worth crawling
Not everything that’s discovered gets crawled. The system makes a triage decision based on countless signals, including entity authority, freshness, crawl budget, perceived value, and predicted cost.
Selection is where entity confidence first translates into a concrete pipeline advantage. The system already has an opinion about you before it crawls a single page. That opinion determines how many of your pages it bothers to look at.
Crawling: The bot arrives and fetches your content
Every technical SEO understands this gate. Server response time, robots.txt, redirect chains. Foundational, but not differentiating.
What most practitioners miss is that the bot doesn’t arrive in a vacuum. Canel confirmed that context from the referring page can be carried forward during crawling. With highly relevant links, the bot carries more context than it would from a link on an unrelated directory.
Rendering: The bot builds the page the algorithm will see
This is where everything changes and where most teams aren’t yet paying attention. The bot executes JavaScript if it chooses to, builds the Document Object Model (DOM), and produces the full rendered page.
But here’s a question you probably haven’t considered: how much of your published content does the bot actually see after this step? If bots don’t execute your code, your content is invisible. More subtly, if they can’t parse your DOM cleanly, that content loses significant value.
Google and Bing have extended a favor for years: they render JavaScript. Most AI agent bots don’t. If your content sits behind client-side rendering, a growing proportion of the systems that matter simply never see it.
Representatives from both Google and Bing have also discussed the efforts they make to interpret messy HTML. Here’s one way to look at it: search was built on favors, and those favors aren’t being offered by the new players in AI.
Importantly, content lost at rendering can’t be recovered at any downstream gate. Every annotation, grounding decision, and display outcome depends on what survives rendering. If rendering is your weakest gate, it’s your F on the report card. Everything downstream inherits that grade.
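You can approximate what a non-rendering bot sees by extracting the visible text from raw HTML without executing any JavaScript. This standard-library sketch is far cruder than a real crawler; it only demonstrates the failure mode, where content injected client-side never appears in the static DOM.

```python
# Rough check of what a non-rendering bot "sees": visible text from the raw
# HTML, with no JavaScript execution. Sketch only, using the stdlib parser.
from html.parser import HTMLParser

class StaticTextExtractor(HTMLParser):
    SKIP = {"script", "style", "noscript"}  # never visible to a non-rendering bot

    def __init__(self):
        super().__init__()
        self.text = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.text.append(data.strip())

def visible_without_js(html: str, phrase: str) -> bool:
    """True if the phrase is present in the server-rendered HTML itself."""
    parser = StaticTextExtractor()
    parser.feed(html)
    return phrase in " ".join(parser.text)
```

Running this against your own templates is a quick way to find copy that only exists after client-side rendering.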
Act II: The algorithm decides whether your content is worth remembering
This is where most brands are losing out because most optimization advice doesn’t address the next two gates. And remember, if your content fails to pass any single gate, it’s no longer in the race.
Indexing: Where HTML stops being HTML
Rendering produces the full page as the bot sees it. Indexing then transforms that DOM into something the system can store. Two things happen here that the industry often misses:
The system strips the navigation, header, footer, and sidebar — elements that repeat across multiple pages on your site. These aren’t stored per page. The system’s primary goal is to identify the core content. This is why I’ve talked about the importance of semantic HTML5 for years. It matters at a mechanical level: <nav>, <header>, <footer>, <aside>, <main>, and <article> tell the system where to cut. Without semantic markup, it has to guess. Gary Illyes confirmed at BrightonSEO in 2017, possibly 2018, that this was one of the hardest problems they had at the time.
The system chunks and converts. The core content is broken into blocks or passages of text, images with associated text, video, and audio. Each chunk is transformed into a proprietary internal format. Illyes described the result as something like a folder with subfolders, each containing a typed chunk. The page becomes a hierarchical structure of typed content blocks.
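The strip-and-chunk sequence described above can be sketched in a few lines. This is an illustration of why semantic tags tell the system where to cut, not a reconstruction of any engine's actual indexer: boilerplate regions are discarded, and what survives is split into typed blocks.

```python
# Sketch of the strip-and-chunk step: boilerplate regions are discarded,
# and the surviving core content becomes a list of typed blocks.
# Real indexers are vastly more complex; only the principle is shown.
from html.parser import HTMLParser

BOILERPLATE = {"nav", "header", "footer", "aside", "script", "style"}
BLOCK = {"h1", "h2", "h3", "p", "li", "figcaption"}  # chunk boundaries

class Chunker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []   # list of (tag, text) typed blocks
        self._skip = 0     # depth inside boilerplate regions
        self._tag = None   # current block-level tag being collected
        self._buf = []

    def handle_starttag(self, tag, attrs):
        if tag in BOILERPLATE:
            self._skip += 1
        elif not self._skip and tag in BLOCK:
            self._tag, self._buf = tag, []

    def handle_endtag(self, tag):
        if tag in BOILERPLATE and self._skip:
            self._skip -= 1
        elif tag == self._tag:
            text = " ".join(self._buf).strip()
            if text:
                self.chunks.append((tag, text))
            self._tag = None

    def handle_data(self, data):
        if self._tag and not self._skip:
            self._buf.append(data.strip())

def chunk_core_content(html: str):
    parser = Chunker()
    parser.feed(html)
    return parser.chunks
```

Without `<nav>`, `<footer>`, and friends, a system running logic like this has to guess where the core content starts, which is exactly the hard problem Illyes described.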
I call this conversion fidelity: how much semantic information survives the strip, chunk, convert, and store sequence. Rendering fidelity (Gate 3) measures whether the bot could consume your content. Conversion fidelity (Gate 4) measures whether the system preserved it accurately when filing it away.
Both fidelity losses are irreversible, but they fail differently. Rendering fidelity fails when JavaScript doesn’t execute or content is too difficult for the bot to parse. Conversion fidelity fails when the system can’t identify which parts of your page are core content, when your structure doesn’t chunk cleanly, or when semantic relationships between elements don’t survive the format conversion.
Something we often overlook is that even after a successful crawl, indexing isn’t guaranteed. Content that passes through crawl and render may still not be indexed.
That might sound bad enough, but here’s a distinction that should concern you: indexing and annotation are separate processes. Content may be indexed but poorly annotated — stored in the system but semantically misclassified. Non-indexed content is invisible. Misannotated content actively confuses the system about who you are, which can be worse.
Annotation: Where entity confidence is built or broken
This is the gate most of the industry has yet to address.
Think of annotations as sticky notes on the indexed “folders” created at the indexing gate. Indexing algorithms add multiple annotations to every piece of content in the index.
I identified 24 annotation dimensions I felt confident sharing with Canel. When I asked him, his response was, “Oh, there is definitely more.”
Those 24 dimensions were organized across five annotation layers:
Gatekeepers (scope classification).
Core identity (semantic extraction).
Selection filters (content categorization).
Confidence multipliers (reliability assessment).
Extraction quality (usability evaluation).
There are certainly more layers, and each layer likely includes more dimensions than I’ve mapped. Hundreds, probably thousands. This is an open model. The community is invited to map the dimensions I’ve missed.
Annotation is where the system decides the facts:
What your content is about.
Where it fits into the wider world.
How useful it is.
Which entity it belongs to.
What claims it makes.
How those claims relate to claims from other sources.
Credibility signals — notability, experience, expertise, authority, trust, transparency — are evaluated here. Topical authority is assessed here, too, along with much more.
Annotation operates on what survives rendering and conversion. If critical information was lost at either gate, the annotation system is working with degraded raw material. It annotates what the annotation engine received, not what you originally published.
Canel confirmed a principle I suggested that should reshape how we think about this gate: “The bot tags without judging. Filtering happens at query time.” Annotation quality determines your eligibility for every downstream triage.
I have a full piece coming on annotation alone. For now, annotation is the gate where most brands silently lose and the one most worth working on.
Recruitment: Where the algorithmic trinity decides whether to absorb you
This is the first explicitly competitive gate. After annotation, the pipeline feeds into three systems simultaneously.
Search engines recruit content for results pages (the document graph).
Knowledge graphs recruit structured facts for entity representation (the entity graph).
Large language models recruit patterns for training data and grounding retrieval (the concept graph).
Before recruitment, the system found, crawled, stored, and classified your content. At recruitment, it decides whether your content is worth keeping over alternatives that serve the same purpose.
Being recruited by all three elements of the algorithmic trinity gives you a disproportionate advantage at grounding because the grounding system can find you through multiple retrieval paths, and at display because there are multiple opportunities for visibility.
Recruitment is the structural advantage that separates brands with consistent AI visibility from brands that appear inconsistently.
Act III: The engine presents and the decision-maker commits
Grounding: Where AI checks its confidence in the content against real-time evidence
This is the gate that separates traditional search from AI recommendations.
Ihab Rizk, who works on Microsoft’s Clarity platform, described the grounding lifecycle this way:
The user asks a question.
The LLM checks its internal confidence. If it’s insufficient, it sends cascading queries, multiple angles of intent designed to triangulate the answer, which many people call fan-out queries.
Bots are dispatched to scrape selected pages in real time.
The answer is generated from a combination of training data and fresh retrieval.
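That lifecycle reduces to a simple control flow. Everything in this sketch is a stub standing in for proprietary systems; only the branching logic, answer from memory versus fan out and retrieve, is the point.

```python
# Grounding lifecycle sketch. All callables here are stand-ins for
# proprietary systems; only the control flow is meaningful.
def ground(question, internal_confidence, fan_out, fetch, threshold=0.8):
    """If internal confidence is low, fan out queries and retrieve fresh
    evidence in real time; otherwise answer from training data alone."""
    if internal_confidence >= threshold:
        return {"sources": [], "grounded": False}   # answer from memory
    queries = fan_out(question)                     # cascading fan-out queries
    evidence = [doc for q in queries for doc in fetch(q)]  # real-time scrape
    return {"sources": evidence, "grounded": True}
```

The practical consequence: your content only matters here if `fetch` can return it, which is decided by every upstream gate.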
But grounding isn’t just search results, as many people believe. The other two technologies in the algorithmic trinity play a role.
The knowledge graph is used to ground facts. AI Overviews explicitly showed information grounded in the knowledge graph. It’s reasonable to assume specialized small language models are used to ground user-facing large language models.
The takeaway is that your content’s performance from discovery through recruitment determines whether your pages are in the candidate pool when grounding begins. If your content isn’t indexed, isn’t well annotated, or isn’t associated with a high-confidence entity, it won’t be in the retrieval set for any part of the trinity. The engine will ground its answer on someone else’s content instead.
You can’t optimize for grounding if your content never reaches the grounding stage.
Display: The output of the pipeline
Display is where most AI tracking tools operate. They measure what AI says about you. But by the time you’re measuring display, the decisions were already made upstream, from discovery through grounding.
Display is where AI meets the user. It also covers the acquisition funnel, which is easy to understand and meaningful for marketers. This is where most businesses focus because it’s visible and sits just before the click. I’ll write a full article on that later in this series.
Won: The moment the decision-maker commits
Won is the terminal processing gate in the AI engine pipeline. Ten gates of processing, three acts of audience satisfaction, and it comes down to this: Did the system trust you enough to commit?
The accumulated confidence at this gate is called “won probability,” the system’s calculated likelihood that committing to you is the right decision. Three resolutions are possible, and they form a spectrum. To understand why that spectrum matters, you need to understand the 95/5 rule.
Professor John Dawes at the Ehrenberg-Bass Institute demonstrated that at any given moment, only about 5% of potential buyers are actively in-market. The other 95% aren’t ready to purchase. You sell to the 5%, but the real job of marketing is staying top of mind for the other 95% so that when they decide to move to purchase, on their schedule, not yours, you’re the brand they think of.
The three scenarios that follow show how AI takes over the job of being top of mind at the critical moment for the 95%. I call this top of algorithmic mind.
The imperfect click: The person browses a list of options, pogo-sticks between results, and decides. Traditional search and what Google called the zero moment of truth. The system doesn’t know who is ready. It shows everyone the same list and hopes. The 95/5 efficiency is low. You’re hitting and hoping, and so is the engine.
The perfect click: The AI recommends one solution and the person takes it. I call this the zero-sum moment in AI. This is where we are right now with assistive engines like ChatGPT, Perplexity, and AI Mode. The system has filtered for intent, context, and readiness. It presents one answer to a person moving from the 95% into the 5% with much higher precision.
The agential click: The agent commits, either after pausing for human approval, “Shall I book this?” or autonomously. The agent caught the moment of readiness, did the work, and closed it. Maximum precision. This is the ultimate solution to the 95/5 problem: AI catches the exact moment and acts.
Search won’t disappear. Most people will always want to browse some of the time. Window shopping is fun, and emotionally charged decisions aren’t something people will always delegate.
The trajectory, however, moves from imperfect to perfect to agential. Brands need to optimize for all three outcomes on that spectrum, starting now. Optimizing for agents should already be part of your strategy, as should optimizing for assistive engines and search engines. AAO covers them all.
Search engines, AI assistive engines, and assistive agents are your untrained salesforce. Your job is to train them well enough that you’re top of algorithmic mind at the moment the 95% become the 5%, and the AI either lists you (the imperfect click), recommends you (the perfect click), or acts on your behalf (the agential click).
Served: The gate that closes the loop
After conversion, the brand takes over. You should optimize the post-won feedback gate. The processing pipeline, the DSCRI-ARGDW spine, gets you to the decision. Served sits outside that spine as the gate that closes the loop, turning the line into a circle.
Every “won” that produces a positive outcome strengthens the next cycle’s cascading confidence. Every “won” that produces a negative outcome weakens it. Ten gates get you to the decision. The 11th, served, determines whether the decision repeats and your advantage compounds.
This is where the business lives. Acquisition without retention is a leak, both directly and indirectly through the AI engine pipeline feedback loop.
Brands that engineer their post-won experience to generate positive evidence, reviews, repeat engagement, low return rates, and completion signals, build a flywheel. Brands that neglect post-won burn confidence with every cycle.
Diagnosing failure in the pipeline
The three acts describe who you’re speaking to: the bot, the algorithm, and the engine (with the person behind it). The two phases describe what kind of test you’re taking.
Phase 1: Infrastructure, discovery through indexing
Absolute tests. You either pass or fail. A page that can’t be rendered doesn’t get partially indexed. Infrastructure gates are binary: pass or stall.
Phase 2: Competitive, annotation through won
Relative tests. Winning depends not just on how good your content is but on how good the competition is at the same gate.
The practical implication is infrastructure first, competitive second. If your content isn’t being found, rendered, or indexed correctly, fixing annotation quality is wasted effort. You’re decorating a room the building inspector hasn’t cleared.
In practice, brands tend to fail in three predictable ways.
Opportunity cost (Act I: Bot failures)
Your content isn’t in the system, so you have zero opportunity. Cheapest to fix, most expensive to ignore.
Competitive loss (Act II: Algorithm failures)
Your content is in the system, but competitors’ content is preferred. The brand believes it’s doing everything right while AI systems consistently choose a competitor at recruitment, grounding, and display.
Conversion leak (Act III: Engine failures)
Your content is presented, but the system hedges or fumbles the recommendation. In short, you lose the sale.
Every gate you pass still costs you signal
In 2019, I published How Google Universal Search Ranking Works: Darwinism in Search, based on a direct explanation from Google’s Illyes about how Google calculates ranking bids by multiplying individual factor scores. A zero on any factor kills the entire bid.
Darwin’s natural selection works the same way: fitness is the product across all dimensions, and a single zero kills the organism. Brent D. Payne made this analogy: “Better to be a straight C student than three As and an F.”
As with Google’s bidding system, cascading confidence is multiplicative, not additive. Here’s what that means:
| Per-gate confidence | Surviving signal at the won gate |
|---|---|
| 90% | 34.9% |
| 80% | 10.7% |
| 70% | 2.8% |
| 60% | 0.6% |
| 50% | 0.1% |
Illustrative math, not a measurement. The principle is what matters: strengths don’t compensate for weaknesses in a multiplicative chain.
A single weak gate destroys everything. Nine gates at 90% plus one at 50% drops you from 34.9% to 19.4%. If that gate drops to 10%, it kills the surviving signal entirely. A near-zero anywhere in a multiplicative chain makes the whole chain near-zero.
This is competitive math. If your competitors are all at 50% per gate and you’re at 60%, you win: 0.6% surviving signal against their 0.1%. Not because you’re excellent, but because you’re less bad.
Most brands aren’t at 90%. The worse your gates are, the bigger the gap a small improvement opens. Here’s an example.
| Gate | Discovered | Selected | Crawled | Rendered | Indexed | Annotated | Recruited | Grounded | Displayed | Won | Surviving signal |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Your brand | 75% | 80% | 70% | 85% | 75% | 5% | 80% | 70% | 75% | 80% | 0.4% |
| Competitor | 65% | 60% | 65% | 70% | 60% | 60% | 65% | 60% | 65% | 60% | 1.0% |
I chose annotated as the “F” grade in this example for demonstrative purposes.
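Recomputing the two rows shows the mechanics: the products land near 0.45% and 1.0% before rounding, so the competitor wins despite scoring lower at nine of ten gates, purely because it has no near-zero gate.

```python
# Recomputing the example table's rows (percentages as fractions).
def chain(confidences) -> float:
    signal = 1.0
    for c in confidences:
        signal *= c
    return signal

#             D     S     C     R     I     A     Re    G     Di    W
brand      = [0.75, 0.80, 0.70, 0.85, 0.75, 0.05, 0.80, 0.70, 0.75, 0.80]
competitor = [0.65, 0.60, 0.65, 0.70, 0.60, 0.60, 0.65, 0.60, 0.65, 0.60]
```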
Annotation is the phase-boundary gate. It’s the hinge of the whole pipeline. If the system doesn’t understand what your content is, nothing downstream matters.
Applying this Darwinian principle across a 10-gate pipeline, where confidence is measurable at every transition, is my diagnostic model. I recently filed a patent for the mechanical implementation.
Improving gates versus skipping them
There are two ways to increase your surviving signal through the pipeline, and they aren’t equal.
Improving your gates
Better rendering, cleaner markup, faster servers, and schema help the system classify your content more accurately. These are real gains, single-digit to low double-digit percentage improvements in surviving signal.
For many brands and SEOs, this is maintenance rather than transformation. It matters, and most brands aren’t doing it well, but it’s incremental.
Skipping gates entirely
Structured feeds, such as Google Merchant Center and the OpenAI product feed specification, bypass discovery, selection, crawling, and rendering altogether, delivering your content to the competitive phase with minimal attenuation.
MCP connections skip even further, making data available from recruitment onward with triple-digit percentage advantages over the pull path.
If you’re only improving gates, you’re leaving an order of magnitude on the table.
The highest-value target is always the weakest gate
Improving your best gate from 95% to 98% is nearly invisible in the pipeline math. Improving your worst gate from 50% to 80% transforms your entire surviving signal. That’s the Darwinian principle at work: fitness is multiplicative, the weakest dimension determines the outcome, and strengths elsewhere can’t compensate.
Most teams are optimizing the wrong gate. Technical SEO, content marketing, and GEO each address different gates. Each is necessary, but none is sufficient because the pipeline requires all 10 to perform. Teams pouring budget into the two or three gates they understand are ignoring the ones that are actually killing their signal.
Then there’s the single-system mistake. At recruitment, the pipeline feeds into three graphs, the algorithmic trinity. Missing one graph means one entire retrieval path doesn’t include you.
You can be perfectly optimized for search engine recruitment and completely absent from the knowledge graph and the LLM training corpus. In a multiplicative system, that gap compounds with every cycle.
Most of the AI tracking industry is measuring outputs without diagnosing inputs, tracking what AI says about you at display when the decisions were already made upstream. That’s like checking your blood pressure without diagnosing the underlying condition.
The tools to do this properly are emerging. Authoritas, for example, can inspect the network requests behind ChatGPT to understand which content is actually formulating answers. But the real work is at the gates upstream of display, where your content either passed or stalled before the engine ever opened its mouth.
Audit your pipeline: Earliest failure first
The correct audit order is pipeline order. Start at discovery and work forward.
If content isn’t being discovered, nothing downstream matters. If it’s discovered but not selected for crawling, rendering fixes are wasted effort. If it’s crawled but renders poorly, every annotation and grounding decision downstream inherits that degradation.
This is your new plan: Find the weakest gate. Fix it. Repeat.
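That plan is mechanical enough to express as a loop. The gate names follow the DSCRI-ARGDW spine; the scores would come from your own diagnostics, and the 0.5 floor is an arbitrary illustration of a hard failure threshold.

```python
# "Find the weakest gate. Fix it. Repeat." as a greedy audit rule.
# Scores are hypothetical per-gate confidences from your own diagnostics;
# the 0.5 floor for a "hard failure" is an arbitrary illustration.
GATES = ["discovered", "selected", "crawled", "rendered", "indexed",
         "annotated", "recruited", "grounded", "displayed", "won"]

def next_fix(scores: dict, floor: float = 0.5) -> str:
    """Audit in pipeline order: the earliest hard failure first,
    otherwise the single weakest gate overall."""
    for gate in GATES:                 # upstream failures invalidate downstream work
        if scores[gate] < floor:
            return gate
    return min(GATES, key=lambda g: scores[g])
```

Repeating this after each fix naturally walks you from infrastructure problems to competitive ones, in pipeline order.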
The inconsistency Fishkin documented is a training deficit. The AI engine pipeline is trainable. The training compounds. The walled gardens increase their lock-in with every cycle.
The brand that trains its AI salesforce better than the competition doesn’t just win the next recommendation. It makes the next one easier to win, and the one after that, until the gap widens to the point where competitors can’t close it without starting from scratch.
Without entity understanding, nothing else in this pipeline works. The system needs to know who you are before it can evaluate what you publish. Get that right, build from the brand up through the funnel, and the compounding does the rest.
Next: The five infrastructure gates the industry compressed into ‘crawl and index’
The next piece opens the infrastructure gates in full: rendering fidelity, conversion fidelity, JavaScript as a favor, not a standard, structured data as the native language of the infrastructure phase, and the investment comparison that puts numbers on improving gates versus skipping them entirely.
The sequential audit shows where your content is dying before the algorithm ever sees it, and once you see the leaks, you can start plugging them in the order that moves your surviving signal the most.
To help you prevent a single policy issue from snowballing into a full account suspension, here’s how Google’s three-strike system works and what you should do at every stage to keep your ads running.
Case study: Appealing a Google Ads strike
Over the past 10+ years, I’ve helped thousands of advertisers identify and resolve Google’s policy concerns so that their businesses can resume running ads. One such situation involved helping a business that sells ceremonial swords for military dress uniforms.
Google’s Other Weapons policy prohibits advertising swords intended for combat. However, that same policy permits the advertising of non-sharpened, ceremonial swords, which is what this business sells. Even though this business was properly advertising its products within Google’s ad policy parameters, Google issued them a warning for violating the Other Weapons policy.
After the warning, we documented for Google that the business wasn’t violating Google’s policy. We also added specific disclaimers to the business’s sword product pages, noting that the swords were only ceremonial. Frustratingly, Google decided to issue a first strike to the business anyway.
We appealed the strike because the business wasn’t violating Google’s policy. But Google quickly denied that appeal. We tried appealing again, and Google denied the second appeal. The ad account remained on hold with no ads serving, and the business was losing revenue.
Ultimately, we had to “acknowledge” the strike to Google (I’ll explain what that means later) so that the ads would resume serving. We then worked with Google to craft more precise disclaimer language, stating that the swords for sale were ceremonial blades and not sharpened for use as weapons. This disclaimer was added to the business’s website footer so that both Google’s robots and human reviewers could see it on every single page (regardless of whether swords were for sale on a particular page).
Because of all these changes, Google’s concerns were satisfied and the business has never received any subsequent warnings or strikes. The end result was a success, even though technically there should never have been a warning or strike issued because an actual policy violation never occurred.
Key takeaway: Google will sometimes incorrectly issue warnings and strikes, and even reject appeals, and will often require excessive website disclaimers to convince them that all is well.
Navigating Google’s three-strikes system
Understanding Google’s strikes system can save your ads account from suspension. The search giant adheres to a system that begins with an initial warning and is followed by a “three strikes and you’re out” protocol.
The warning: Your ‘mulligan’ opportunity
Before issuing your ad account an initial strike, Google will first send you a warning notification.
This warning informs you that there’s a problem and allows you to address and resolve Google’s concern before your account is penalized with an official strike.
The penalty: None (yet). Your ads can continue to run.
What to do: Appeal any ad/asset disapprovals if you’re confident Google made a mistake, or identify the issue and replace the disapproved ads/assets with fully compliant versions.
Treat warnings seriously — ignoring them likely ensures your account will begin receiving strikes.
Strike 1: At least three days without ads
If Google decides that the same policy violation still exists after a warning was issued, your ad account will receive its first official strike.
The penalty: All ads will stop serving for three full days.
What to do: Acknowledge or appeal the strike.
Acknowledge the strike
This is your fastest path back to serving ads. But Google counts strikes as cumulative over a 90-day period.
If you acknowledge the strike rather than successfully appeal it, you’ve started the clock on the possibility of three strikes and a permanent suspension. Deciding which approach is best is a case-by-case determination.
To acknowledge the strike, you must:
Remove all ads/assets that violate Google’s cited policy
Strike 2: At least seven days without ads
If Google decides there’s been another policy violation within 90 days of resolving your first strike, or if your original violation was unresolved during those 90 days, your account will receive a second strike.
The penalty: All ads will stop serving for seven full days.
What to do: Your options are the same as for Strike 1: acknowledge or appeal the strike.
Strike 3: Your account is suspended
If Google decides there’s been another policy violation within 90 days of resolving your second strike, or if your previous violation was unresolved during those 90 days, your account will receive a third strike.
The penalty: Your account is suspended, and you may not run any ads or create a new ad account.
What to do: Your only recourse now is to appeal the suspension.
Successfully appealing a suspension is definitely possible. But the process is often a nightmare, and the results are never guaranteed.
Important: Once suspended, you’re unable to make any changes to your ad account.
Google is sometimes inconsistent in following its own rules. Here are two examples I’ve seen first-hand.
Successfully appealing a strike doesn’t always reset the 90-day clock
I have a client who acknowledged a first strike on June 25. They received a second strike on July 26, which they successfully appealed. You would think that should reset the 90-day counter back to June 25.
However, Google gave them another second strike on October 16, far beyond 90 days from the date of the first strike, but within 90 days from the date of the “first” second strike, which they successfully appealed.
Google sometimes automatically returns your account to ‘warning’ status after a first strike expires
I have a client who received a warning on August 7, followed by a first strike on September 7. They acknowledged the first strike, and that strike expired on December 6, 90 days after it was issued.
However, the account immediately reentered “warning” status, with a new 90-day clock starting from when the first strike expired. There was no new email notification about this warning, and the warning didn’t appear on the Strike history tab.
How do I know if my account has a warning or strike?
Look for a notification at the top of your Google Ads account.
Check the Policy manager page in your Google Ads account.
How do I see my history of strikes?
Go to the Strike history tab on the Policy manager page in your Google Ads account.
Can you get a strike without having ad disapprovals?
Yes. Google can issue strikes even if no ads are formally disapproved.
How are Google’s three- and seven-day ad holds calculated?
Google counts full days. For example, if you receive and acknowledge a first strike (a three-day hold) on January 1, your ads won’t be eligible to resume serving until January 4th.
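The full-day rule above can be sketched with stdlib date arithmetic (the dates are illustrative, matching the January example):

```python
from datetime import date, timedelta

def resume_date(strike_date: date, hold_days: int) -> date:
    """Ads stay paused for `hold_days` full days after the strike,
    so they become eligible again `hold_days` days later."""
    return strike_date + timedelta(days=hold_days)

# First strike (three-day hold) received January 1:
# ads are not eligible to resume until January 4.
print(resume_date(date(2026, 1, 1), 3))  # 2026-01-04
```

The same function covers the seven-day hold for a second strike.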
Are account strikes worse than ad disapprovals?
Yes, account strikes are significantly worse than individual ad disapprovals. A strike prevents all your account’s ads from serving and can easily escalate to a full account suspension.
Which Google policies have the three-strikes rule?
Enabling dishonest behavior.
Unapproved substances.
Guns, gun parts, and related products.
Explosives.
Other weapons.
Tobacco.
Compensated sexual acts.
Mail-order brides.
Clickbait.
Misleading ad design.
Bail bond services.
Call directories, forwarding services, and recording services.
Credit repair services.
Binary options.
Personal loans.
Important: If you violate one of Google’s many other policies not listed above, you could find your ad account suspended immediately, with no warning or three-strikes system.
What you can do to prevent and navigate Google Ads strikes
Follow these best practices and tips to minimize the chances of receiving a Google Ads strike:
Read the Google Ads policies that apply to your industry so that you know what to do and what not to do.
Delete old ads and assets you no longer need, so they can’t trigger strikes unexpectedly.
Add clear and comprehensive disclaimers to your website that will help Google understand you’re complying with any ad policies you think they might otherwise decide you aren’t.
Save copies of any appeals you submit because Google won’t show them to you after they’re submitted.
If you receive an account strike, closely monitor the 90-day clock so you know when you’re safely out of the previous “strike” window.
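Monitoring the 90-day clock from the last tip can be sketched as a simple date check (a minimal aid, not a guarantee; as the examples above show, Google doesn’t always honor its own clock):

```python
from datetime import date, timedelta

STRIKE_WINDOW_DAYS = 90  # per the three-strikes system described above

def window_end(strike_resolved: date) -> date:
    """Day the 90-day escalation window closes, counted from resolution."""
    return strike_resolved + timedelta(days=STRIKE_WINDOW_DAYS)

def escalates(prev_resolved: date, new_violation: date) -> bool:
    """A new violation inside the window earns the next strike tier."""
    return new_violation <= window_end(prev_resolved)

# Using the dates from the client example: a first strike acknowledged
# June 25 puts the account at risk of a second strike through September 23.
print(window_end(date(2026, 6, 25)))  # 2026-09-23
```

By this arithmetic, the October 16 strike in the example above fell outside the window, which is exactly why that case was an inconsistency.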
Google understandably cares deeply about its reputation and the safety of its users. That’s why Google’s policy team often strictly enforces its advertising policies, and why they’re sometimes over-aggressive when interpreting and applying their own policy language.
To keep our Google Ads accounts in good health and our ads running, the best thing we can do as advertisers is to deeply understand Google’s advertising policies and requirements.
Always be ready to jump through hoops to explain your unique situations, and over-comply with Google’s edicts whenever feasible.
Meta is updating its ad measurement framework, aiming to simplify attribution in what it calls a “social-first” advertising world.
What’s happening. Meta is narrowing its definition of click-through attribution for website and in-store conversions. Going forward, only link clicks — not likes, shares, saves or other interactions — will count toward click-through attribution. The change is designed to reduce discrepancies between Meta Ads Manager and third-party tools like Google Analytics.
Between the lines. Social media has overtaken search as the world’s largest ad channel, according to WARC, but many attribution systems were built for search-era behaviors. On social platforms, engagement extends beyond link clicks. Historically, Meta counted all click types toward click-through conversions, while many third-party tools only counted link clicks — creating reporting misalignment.
What’s changing. Conversions previously attributed to non-link interactions will now fall under a renamed “engage-through attribution” (formerly engaged-view attribution). Meta is also shortening the video engaged-view window from 10 seconds to 5 seconds, reflecting faster conversion behavior — particularly on Reels. The company says 46% of Reels purchase conversions happen within the first two seconds of attention.
Why we care. This update makes it easier to see which actions actually drive conversions, reducing confusion between Meta reporting and third-party analytics like Google Analytics. By separating link clicks from other social interactions, marketers get a clearer view of campaign performance, while the new engage-through attribution captures the value of likes, shares, and saves.
This gives advertisers more confidence in their data and helps them make smarter, more impactful decisions.
Third-party tie-ins. Meta is partnering with analytics providers like Northbeam and Triple Whale to incorporate both clicks and views into attribution models, aiming to give advertisers a more complete performance picture.
The rollout. Changes will begin later this month for campaigns optimizing toward website or in-store conversions. Billing will not change, but reporting inside Ads Manager may shift as attribution definitions update.
The bottom line. Meta is attempting to balance clearer, search-aligned click reporting with better visibility into uniquely social interactions — giving advertisers cleaner comparisons across platforms while still capturing the incremental impact of engagement-driven conversions.
For more than a decade, the dominant model was simple — identify a keyword, write an article, publish, promote, rank, capture traffic, convert a fraction of visitors, and repeat. But that model is breaking.
Content marketing is collapsing and rebuilding simultaneously. AI systems now answer informational queries directly inside search results. Large language models (LLMs) synthesize known information instantly. Information production is accelerating faster than distribution capacity. Public feeds are already saturated.
The cost of producing content has fallen to nearly zero, while the cost of being seen has never been higher. That changes everything.
Here’s a system for content marketing in a world where being found is increasingly unlikely.
The decline of informational SEO
Informational SEO used to be treated as a growth opportunity. Publish enough articles targeting informational queries, and traffic would compound.
But traffic was always a proxy metric. It felt productive because dashboards moved. In reality, most content was never read deeply, rarely linked to, and often indistinguishable from competitors. Page 1 often contained 10 variations of the same article, each rewritten with minor differences.
Now, AI answers absorb demand directly. Users receive summaries without clicking. The known information layer of the web is becoming commoditized.
If your strategy relies on answering known informational questions, you’re competing with a machine trained on the entire web. Informational SEO is over as a strategy.
Search content will still matter, but its role shifts. It becomes closer to customer service and sales enablement. It exists to support conversion once intent is clear. It doesn’t build fame.
Content marketing, properly understood, must do something else entirely.
All content marketing is advertising
Growth hackers came in and took over SEO. Driven by the desire to show impressive charts to the board, they turned SEO from a practical channel into a landfill of skyscrapered, informational content that did little for real growth.
So, we need a reset. There are only two reasons to create content:
You’re in the publishing business.
You’re marketing a business.
If you’re in the second category, your content is advertising. That doesn’t mean banner ads. It means its job is to build mental availability. As advertising science has repeatedly shown, brands grow by increasing the likelihood of being thought of in buying situations and making themselves easy to purchase from.
The advertising analytics company System1 describes the three drivers of profit growth from advertising as fame, feeling, and fluency.
Fame means broad awareness.
Feeling means positive emotional association.
Fluency means easy recognition and processing.
If your content doesn’t contribute to those outcomes, it’s activity, not growth.
SEO teams optimized for clicks, but clicks aren’t the objective. Being remembered is. In an AI era, this distinction becomes decisive.
Historically, content marketing relied heavily on pull: Someone searched, you ranked, and you pulled them from Google to your website. That channel is narrowing.
As AI summaries answer queries directly, the ability to pull strangers through informational search decreases. Pull remains critical for transactional queries and high-intent keywords, but the gravitational pull of informational content is weakening.
Push becomes more important. You have to push your content to people, distributing it intentionally through media, partnerships, events, advertising, communities, and networks rather than waiting to be discovered. It must be placed directly in front of people.
The paradox is this: We once believed gatekeeping had disappeared. Social media and Google created the illusion of fair and direct access. Now, gatekeepers are back — algorithms, publishers, influencers, media outlets, and even AI systems themselves.
When channels are flooded, selection mechanisms tighten.
Kevin Kelly wrote in his book “The Inevitable” that work has no value unless it’s seen. An unfound masterpiece, after all, is worthless.
As tools improve and creation becomes frictionless, the number of works competing for attention expands exponentially, with each new work adding value while increasing noise.
Kelly’s point was that in a world of infinite choice, filtering becomes the dominant force. Recommendation systems, algorithms, media editors, and social networks become the arbiters of visibility. When there are millions of books, songs, apps, videos, and articles, abundance concentrates attention, creating a structural shift.
When production is scarce, quality alone can surface work. When production is abundant, discoverability depends on networks, signals, and amplification. The value is migrating from creation to curation and distribution. In practical terms, every additional AI-generated article makes it harder for any single article to be noticed.
The supply curve has shifted outward dramatically. Demand hasn’t. Human attention remains finite. As supply approaches infinity and attention remains fixed, the probability of being found declines.
Being found is now an economic problem of scarcity rather than a technical exercise in optimization. When production is abundant, attention is scarce. When attention is scarce, distinctiveness and distribution become currency.
This is where Rory Sutherland’s concept of powerful messaging becomes essential for us. In his book, “Alchemy,” he argues that rational behavior conveys limited meaning.
When everything is optimized, efficient, and frictionless, nothing signals importance. Powerful messages must contain elements of absurdity, illogicality, costliness, inefficiency, scarcity, difficulty, or extravagance — qualities that serve as signals. They tell the market that something matters.
Consider a wedding invitation. The rational option is an email — instant, free, and efficient. Yet most couples choose heavy paper, embossed type, textured envelopes, even wax seals. The cost and inefficiency are the point. They signal commitment and create emotional weight. The medium amplifies the meaning.
The same logic applies to marketing. When everyone can publish a competent article in seconds, competence carries no signal. A 1,000-word blog post answering a known question communicates efficiency, not importance. Scarcity and effort change perception.
MrBeast built early fame by counting to extreme numbers on camera. The act was irrational. It was inefficient and difficult. That difficulty was the hook. It signaled commitment and created memorability. The content spread not because it was informational, but because it was remarkable.
In an AI-saturated environment, rational content becomes invisible. If 10,000 companies publish summaries of the same topic, none stand out.
But if one brand commissions original research, prints a limited run of a physical report, hosts a live event around the findings, and strategically distributes it, the signal is different. The effort itself becomes part of the message.
Scarcity also changes economics. Sherwin Rosen’s work on the economics of superstars demonstrated that small differences in recognition can lead to disproportionate returns because markets reward the most recognized participants disproportionately.
Moving from being chosen 1% of the time to 2% can double outcomes because fame compounds. In crowded markets, the most recognized option captures an outsized share and reinforces its own dominance.
This is why being found is fundamentally different now. In the past, discoverability was a function of production and optimization. Today, it hinges on distinctiveness and signal strength. When production approaches zero cost, attention becomes the only scarce resource, which means you should be aiming for fame rather than optimization.
Paul Feldwick, in “Why Does The Pedlar Sing?” argues that fame is built through four components:
The offer must be interesting and appealing.
It must reach large audiences.
It must be distinctive and memorable.
The public and media must engage voluntarily.
These four elements provide a practical framework for content marketing in an AI era. Here’s how that works in practice.
Create something interesting
You must create new information, not restate existing information. That could mean:
Proprietary data studies.
Original research.
Indexes updated annually.
Experiments conducted publicly.
Tools that solve real problems.
Physical artifacts with limited distribution.
Events that convene a specific community.
Consider the origins of the Michelin Guide. A tire company created a restaurant guide that became a cultural authority.
Awards ceremonies, industry rankings, annual reports, and indexes all function as content marketing. These are fame engines.
The key is the perception of effort and distinctiveness. A limited-edition printed book sent to 100 target prospects can carry more weight than 1,000 blog posts. Costliness signals meaning.
Reach mass or concentrated influence
Interest without distribution is invisible. Distribution options include:
Media coverage.
Partnerships.
Paid advertising.
Events.
Webinars.
Physical mail.
Community amplification.
If you lack a budget, focus on the smallest viable market. Concentrate on a defined audience and saturate it.
Many iconic technology companies began by dominating narrow communities before expanding outward. Public relations and content marketing converge here.
Earned media multiplies reach.
Paid media accelerates it.
Community activation sustains it.
If your content is never placed intentionally in front of people, it can’t build fame.
Be distinctive and memorable
SEO content historically failed on distinctiveness. Ten articles answering the same question looked interchangeable. But in an AI era, repetition disappears into the model.
Distinctiveness can come from:
A recurring annual report with a recognizable format.
A proprietary scoring system.
A unique visual identity.
A specific tone.
A tool that becomes habitual.
An award or certification owned by your brand.
Memorability drives mental availability. Fluency increases recall. When someone recognizes your brand instantly, you reduce cognitive effort. Repetition of distinctive assets compounds over time.
You have to continually go to market with distinctive, memorable content. If you don’t, your brand will fade from memory and lose its distinctiveness.
Enable voluntary engagement
You can’t force people to share, but you can design for shareability. Content spreads when it carries social currency, enhances the sharer’s identity, rewards participation, and makes access feel exclusive.
Referral loops, limited access programs, community recognition, and public acknowledgment can all increase spread. The key is that the message must move freely between humans. It must be portable, discussable, and referencable.
Memetics matters. If it can’t be passed along, it can’t compound.
If content must be designed for distinctiveness, distribution, and voluntary engagement, search leaders need a different playbook. Here’s a five-step framework.
Step 1: Separate infrastructure from fame
Maintain search infrastructure for high-intent queries, optimize product pages, support conversion, and provide clear answers where necessary. But stop confusing informational volume with brand growth.
Audit your content portfolio: identify what builds mental availability and what merely fills space, then cut the waste.
Step 2: Invest in originality
Allocate budget to proprietary research, data collection, and creative initiatives. If everyone can generate competent summaries, originality becomes leverage.
This may require shifting the budget from content volume to creative depth.
Step 3: Design for distribution first
Before creating content, define distribution.
Who needs to see this?
How will it reach them?
Which gatekeepers matter?
What media outlets might care?
Reverse engineer reach.
Step 4: Build distinctive assets
Create repeatable formats that become associated with your brand.
An annual index.
A recurring event.
A recognizable report structure.
A named methodology.
Consistency builds fluency.
Step 5: Measure fame
Track:
Brand search volume.
Direct traffic growth.
Share of voice in media.
Unaided awareness, where possible.
Traffic alone is insufficient.
If content doesn’t increase the probability that someone thinks of you in a buying moment, it’s not performing its primary job.
The return of creativity
We’re entering a period where automation handles the average, freeing humans to focus on the exceptional. The future of content marketing isn’t high-volume AI-generated articles. It’s the creation of new information, new experiences, new events, and new signals that machines can’t fabricate credibly.
It requires a partnership with PR, a strategic use of physical and digital channels, disciplined distribution, and a commitment to fame. Budgets will need to shift from volume production to creative impact.
In a world where information is infinite and attention is finite, the brands that win will be those that understand that being found is more valuable than being published. Content marketing in the AI era isn’t about producing more. It’s about becoming known.
Content marketing in an AI era: From SEO volume to brand fame (published 2026-03-03)
What do conversion rate optimization (CRO) and findability look like for an AI agent versus a human, and how different do your strategies really need to be?
More and more marketers are embracing the agentic web, and discovery increasingly happens through AI-powered experiences. That raises a fair question: what do CRO and findability look like for an AI agent compared with a human?
Several considerations matter, but the core takeaway is clear: serving people supports AI findability. AI systems are designed to surface useful, grounded information for people. Technical mechanics still matter, but you don’t need entirely different strategies to be findable or to improve CRO for AI versus humans.
What CRO looks like beyond the website
If a consumer does business directly through an agent or an AI assistant, your business needs to make the right information available in a way that can be understood and used. Your products or services need to be represented through clean, well-structured data, with information formatted in ways that downstream systems can process reliably.
As more people explore doing business with AI assistants, part of the work involves making sure your products and services can connect cleanly. Standards, such as Model Context Protocol (MCP), can help by enabling agents to interact with shared sources of information.
In many cases, a human may still decide to engage directly on a brand’s site. In that context, content and formatting choices matter. Whether you focus on paid media or organic, ensuring your humans can take desired actions — and will want to — is important.
Optimization 1: How much text is on the page?
Old‑school SEO encouraged the idea that more keywords and larger walls of text would perform better. That approach no longer holds.
Wayfair does a great job using accessible fonts, a call to action when the user shifts to a transactional mindset, and easy-to-understand language.
Both humans and AI systems tend to work better with clearly structured, modular content. Large blocks of uninterrupted text can be harder for people to scan and understand. Clear sections, spacing, layout, and visual hierarchy help users quickly understand what they can do and how to accomplish the goal that brought them to the page.
There’s no fixed minimum or maximum amount of text that works best. You should use the amount of content needed to clearly explain what you offer, why it’s useful, and what sets it apart.
A technical topic will need more text, broken into smaller paragraphs. There are great calls to action as well.
Visual components can be helpful when paired with useful alt text. Lead gen forms should be easy for humans to complete and regularly audited for spam or friction. Content that’s hard for people to use is also harder for automated systems to interpret as helpful or relevant.
Optimization 2: How are you communicating with your humans?
One of the best ways to communicate clearly to systems is to communicate clearly to people. Lean into what makes you an expert, but avoid unnecessary jargon or overly complex language. Descriptions should stay specific, accurate, and on-brand.
A simple gut check: if a 10-year-old couldn’t broadly understand what you do, why it matters, and how to engage with you, you’re probably making things harder than necessary. Even though AI systems are sophisticated, clarity still matters because the goal is ultimately to support a human outcome.
If you’re unsure, try putting your positioning copy into an AI assistant and asking it to critique its clarity. Ask for simplification and clearer explanations, not for new claims or embellishment.
Visual components matter here as well. Comparison tables can help when they genuinely support understanding, but they can hurt when they’re used as a gimmick rather than a guide. Accessibility principles matter, too. Color contrast, readable font sizes, and restrained font choices reduce the risk that someone can’t process your site.
IAMS has a thoughtful quiz to find the right dog breed and offers additional close matches. High-contrast color, easy-to-understand buttons, and high-quality photos help.
Images should be easy to understand and clearly connected to the surrounding text. Alt text helps people using assistive technologies and reinforces the relationship between visuals and written content.
Optimization 3: Make the desired action clear
A user comes to your site to do something. They might want to buy, request a quote, or speak with your team. That action should be clear.
When the intended action is unclear, it becomes harder for both people and automated systems to understand what your site enables.
Tarte Cosmetics does a great job of leaning into CRO principles, including inclusivity, accessibility, and social proof.
Shopping experiences tend to surface in conversations with shopping intent because assistants are trying to complete the task they were given. If it’s unclear how to add an item to a cart or complete a purchase, you make it harder for a human to do business with you. You also make it harder for systems to understand that you’re a transactional site rather than a catalog of items without a clear path forward.
Lead generation requires similar clarity. If the goal is to talk to your team, include a phone number that can be clicked to call. You might also include a form that submits directly into your lead system or a flow that opens an email client. Forcing users through multiple form pages often frustrates people and adds unnecessary complexity to the experience.
Optimization 4: Support the technical layer
I cover technical considerations last for a reason. The most important work you can do is support the humans you serve. Technical improvements help, but they rarely succeed on their own.
Tips from the Microsoft AI guidebook. (Disclosure: I’m the Ads Liaison at Microsoft Advertising.)
Excessive imagery, low contrast between text and background, or unstable layouts can create challenges.
Make sure your site renders consistently and meaningfully. Large layout shifts after load, measured in cumulative layout shift (CLS), can frustrate users. Pages overloaded with ads or pop-ups can distract from the reason someone arrived in the first place and may introduce trust concerns.
Security matters as well. Malware warnings, broken rendering, or incomplete page loads can raise red flags for both users and automated systems.
Tools like IndexNow can help notify search systems of content changes more quickly. Microsoft Clarity is a free tool that shows how users behave on your site, surfacing friction you might otherwise miss, and it includes Brand Agents that help your humans have more meaningful chatbot experiences.
One useful check is to review how your site appears when used as input for ad platforms or auto-generated creative tools, such as Performance Max campaigns or audience ads.
These can provide a helpful lens into how platforms interpret your content. When the resulting positioning and creative align with what you intend, you’re usually doing a good job serving both crawlers and people. When they don’t, it’s often a signal to revisit clarity, structure, or user flow.
4 CRO strategies that work for humans and AI (published 2026-03-03)
Google is rolling out Video Reach Campaign (VRC) Non-Skip ads, expanding how brands reach connected TV audiences on YouTube.
What’s happening. VRC Non-Skips are now live globally in Google Ads and Display & Video 360. Built for the living room experience, they run as non-skippable placements optimized for connected TV (CTV) screens.
Why we care. YouTube has been the No. 1 streaming platform in the U.S. for three straight years, making the TV screen a critical battleground for your brand budget. With guaranteed, non-skippable delivery, you can ensure your full message reaches viewers in premium, lean-back environments.
AI in the mix. Google AI dynamically optimizes across 6-second bumper ads, 15-second standard spots, and 30-second CTV-only non-skippable formats. Instead of manually splitting your budget by format, you can rely on AI to allocate impressions for maximum reach and efficiency.
Bottom line. Advertisers now have a simpler way to secure guaranteed, full-message delivery on the biggest screen in the house — using AI to maximize reach and efficiency across non-skippable formats without manually managing the mix.
Google launches non-skippable Video Reach campaigns for connected TV (published 2026-03-03 11:46:47)
Google is expanding its recurring billing policy to allow certified U.S. online pharmacies to promote prescription drugs with subscriptions and bundled services.
What’s happening. Certified merchants can now offer:
Prescription drug subscriptions — recurring billing for prescription medications.
Prescription drug bundles — combining drugs with services like coaching or treatment programs, as long as the drug is the primary product.
Prescription drug consultation services — recurring consults to determine prescription eligibility, either standalone or bundled with medications.
Requirements for eligibility. Merchants must maintain certified status, submit subscription costs in Merchant Center using the [subscription_cost] attribute, include clear terms and transparent fees on landing pages, and comply with all existing Healthcare & Medicine and recurring billing policies. Accounts previously disapproved can request a review once requirements are met.
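For feed-based merchants, the recurring price is supplied per item. Here is a minimal sketch of an XML feed entry; the IDs, titles, and values are placeholders, and the sub-attributes follow Google's published subscription_cost structure (period, period_length, amount):

```xml
<item>
  <g:id>RX-001</g:id>
  <g:title>Example prescription subscription</g:title>
  <g:price>35.00 USD</g:price>
  <!-- Recurring billing details; placeholder values -->
  <g:subscription_cost>
    <g:period>month</g:period>
    <g:period_length>1</g:period_length>
    <g:amount>35.00 USD</g:amount>
  </g:subscription_cost>
</item>
```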
Why we care. The update opens new revenue opportunities for online pharmacies, letting them leverage recurring models and bundled services while staying compliant with Google policies.
The bottom line. Certified U.S. online pharmacies can now run recurring prescription and bundled offers, giving them more flexibility to reach patients and scale subscription-based services.
Google updated both its image SEO best practices and Google Discover help documents to clarify that Google uses both schema.org markup and the og:image meta tag as sources when determining image thumbnails in Google Search and Discover.
Image SEO best practices. Google added a new section to the image SEO best practices help document, “Specify a preferred image with metadata.” In that section, Google wrote:
“Google’s selection of an image preview is completely automated and takes into account a number of different sources to select which image on a given page is shown on Google (for example, a text result image or the preview image in Discover).”
Here is how you influence the thumbnails Google chooses:
Specify the schema.org primaryImageOfPage property with a URL or ImageObject.
Or specify an image URL or ImageObject property and attach it to the main entity (using the schema.org mainEntity or mainEntityOfPage properties).
Here are the overall best practices when choosing these methods:
Choose an image that’s relevant and representative of the page.
Avoid using a generic image (for example, your site logo) or an image with text in the schema.org markup or og:image meta tag.
Avoid using an image with an extreme aspect ratio (such as images that are too narrow or overly wide).
Use a high resolution, if possible.
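The first method above can be sketched in JSON-LD; the page type, image URL, and dimensions below are placeholders, not values from Google's documentation:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebPage",
  "primaryImageOfPage": {
    "@type": "ImageObject",
    "url": "https://example.com/images/article-hero-1600x900.jpg",
    "width": 1600,
    "height": 900
  }
}
</script>
```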
Google Discover image selection. In the Discover documentation Google added a section that reads:
“Include compelling, high-quality images in your content that are relevant, especially large images that are more likely to generate visits from Discover. We recommend using images that meet the following specifications: At least 1200 px wide, High resolution (at least 300K) and 16×9 aspect ratio”
“Google tries to automatically crop the image for use in Discover. If you choose to crop your images yourself, be sure your images are well-cropped and positioned for landscape usage, and avoid automatically applying an aspect ratio. For example, if you crop a vertical image into 16×9 aspect ratio, be sure the important details are included in the cropped version that you specify in the og:image meta tag.”
“Use either schema.org markup or the og:image meta tag to specify a large image that’s relevant and representative of the web page, as this can influence which image is chosen as the thumbnail in Discover. Learn more about how to specify your preferred image. Avoid using generic images (for example, your site logo) in the schema.org markup or og:image meta tag. Avoid using images with text in the schema.org markup or og:image meta tag.”
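In practice, the og:image route is a single meta tag in the page head. A minimal sketch, with a placeholder URL sized to Discover's recommendations (at least 1200 px wide, 16×9):

```html
<head>
  <!-- Placeholder URL; use a relevant, representative image at least 1200 px wide -->
  <meta property="og:image" content="https://example.com/images/article-hero-1600x900.jpg" />
</head>
```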
Why we care. Images can have a big impact on click-through rates from both Google Search and Google Discover. Here, Google is telling us ways we can encourage Google to select a specific image for that thumbnail. So review these help documents and see if any of this can help you with the images Google selects in Search and Discover.
Google uses both schema.org markup and og:image meta tag for thumbnails in Google Search and Discover (published 2026-03-02 18:19:59)
If you’re not actively managing your branded search campaigns, you’re leaving money on the table and your reputation in the hands of competitors, review aggregators, and affiliate marketers.
Brand protection through PPC isn’t just about bidding on your own name. It’s a strategy that spans defensive bidding, query monitoring, ad copy testing, and reputation management across the entire customer research journey.
Why brand search deserves more than basic defense
Most PPC managers treat brand campaigns as an afterthought. Set up a campaign, bid on the exact brand name, maybe add some close variants, and call it done.
But the reality is far more complex, especially when we’re talking about bigger, well-known brands. Your brand exists across dozens of query contexts, each representing a different stage of the customer journey and requiring a different strategic approach.
Consider what happens when someone searches for your brand. They’re not just typing your company name; they’re asking questions, seeking validation, comparing alternatives, and researching specific features.
If you’re only covering exact-match brand terms, you’re missing the majority of brand-related searches and leaving those high-intent users exposed to competitor messaging.
Third-party sites like review aggregators and affiliate comparison websites actively bid on your brand terms to capture traffic and redirect it to their comparison pages, where your competitors pay for prominence.
The cost? Your brand equity, customer trust, and ultimately, conversion rates.
4 categories of branded searches you need to cover
Based on user intent and competitive vulnerability, branded searches fall into four strategic categories. Each requires different bid strategies, ad copy approaches, and landing page experiences.
Let’s break down each category and the specific PPC tactics that can work.
Brand trust and reputation queries
“Is [Brand] good?”
“[Brand] reviews.”
“Is [Brand] legit?”
“Is [Brand] worth it?”
These searchers are in the validation phase. They’ve heard of your brand but want social proof before committing.
The competitive threat here comes from review aggregators and affiliate sites that will happily show your reviews alongside competitor CTAs.
PPC strategy
Bid aggressively — these are high-intent users who are close to converting.
Use review extensions and star ratings in your ads.
Highlight trust signals in ad copy (years in business, customer count, awards).
Send users to dedicated testimonial or case study landing pages, not your homepage.
Test callout extensions with specific proof points.
Product features queries
“What is [Brand] known for?”
“Pros and cons of [Brand].”
“Does [Brand] offer [feature]?”
Users searching for feature-specific information are evaluating whether your solution meets their requirements. Competitors often bid on these queries with ads suggesting they offer superior features.
PPC strategy
Create feature-specific ad groups with tailored ad copy.
Use sitelink extensions to direct users to specific feature pages.
Address the specific feature in headline 1; don’t waste space on your brand name.
Include feature demos or video on the landing page.
Test whether these queries warrant higher bids than core brand terms.
Comparison queries
“Alternatives to [Brand].”
“How does [Brand] compare?”
“Is [Brand] better than [Competitor]?”
“Is [Brand] right for [use case]?”
This is the most competitive category. Users are actively comparing you to alternatives, and both direct competitors and third-party comparison sites are bidding heavily. This is where you’re most vulnerable to losing customers who were already considering you.
PPC strategy
Bid at or above top-of-page estimates to maintain Position 1.
Create dedicated comparison landing pages for each major competitor.
Include pricing transparency if it’s a competitive advantage.
Monitor auction insights obsessively to identify new competitive threats.
Consider category-level comparison ads for “best [category] tools/products” searches.
Niche questions
“Is [Brand] expensive?”
“Does [Brand] offer discounts?”
“Is [Brand] secure?”
These queries reveal specific concerns or evaluation criteria. They’re often low-volume but extremely high-intent because they represent genuine decision-making criteria.
PPC strategy
Develop FAQ landing pages that address multiple related concerns.
Test lower bids — these queries often have less competition.
Use search query reports to identify emerging concerns and address them proactively.
The traditional single-brand campaign approach doesn’t give you enough control or insight at scale. Instead, structure your brand defense across four specialized campaigns, each targeting different intent signals and requiring distinct bid strategies.
Core brand defense
This covers exact-match brand terms and common misspellings with aggressive bidding to maintain 95%+ impression share and top positions. Never let this campaign be budget-limited.
Use multiple RSAs to test different value propositions. Monitor lost impression share due to rank as your primary competitive threat indicator.
Brand + category
Capture phrase-match queries like “[Brand] CRM” or “[Brand] for [use case],” where users are researching you within a specific product context.
Bid slightly lower than core brand terms, but ensure ad copy acknowledges the category and emphasizes your category leadership. Test whether category-specific landing pages outperform your homepage for these queries.
Brand reputation and reviews
These intercept validation-phase users searching “[Brand] reviews,” “[Brand] ratings,” or “is [Brand] good” before they click through to third-party aggregators. Bid aggressively here — these comparison-shopping clicks are worth more than core brand searches.
Use review extensions prominently, include specific social proof metrics in ad copy (4.8 stars, 10,000+ reviews), and send traffic to dedicated testimonial pages rather than your homepage. Test video testimonials on landing pages.
Competitive comparison defense
Control the narrative for queries like “[Brand] vs [Competitor],” “[Brand] alternative,” or “better than [Brand].” These are users you’re at risk of losing, so pay up to your maximum acceptable CPA.
Create unique landing pages for each major competitor with honest comparisons that emphasize your advantages, include side-by-side feature tables, and offer special conversion incentives like extended trials or migration assistance.
Defensive tactics against third-party aggregators
Sites like G2, Capterra, and other affiliate comparison sites actively bid on your brand terms without violating trademark policy because they legitimately have content about your brand.
But they’re siphoning off your traffic and often presenting biased or incomplete information. Your defense requires three coordinated approaches.
Bid aggressively on review keywords
Review aggregators bid heavily on “[Brand] reviews” and “[Brand] ratings” because these are their money keywords, so you need to bid even higher.
Run the math: a review aggregator might pay $3 for that click, then send the user to a comparison page where your competitor pays $50 for placement. Against that, $10 per click on your own review keywords is a deal.
Calculate the lifetime value of a customer versus the cost of letting them click through to a third-party site where competitors can advertise. Also keep in mind that it’s cheaper for you to bid on your own brand than for competitors to outbid you.
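The economics can be sketched with a quick calculation. The rates and dollar values below are illustrative assumptions, not benchmarks:

```python
def breakeven_review_cpc(close_rate, customer_value, leak_rate, competitor_win_rate):
    """Rough ceiling on what a click on your own review keywords is worth.

    close_rate: share of review-keyword clicks that eventually convert
    customer_value: average value of one converted customer
    leak_rate: share of users lost to an aggregator if you don't win the click
    competitor_win_rate: share of leaked users a competitor's ad captures
    """
    value_if_won = close_rate * customer_value
    value_if_lost = (1 - leak_rate * competitor_win_rate) * close_rate * customer_value
    # Incremental value of owning the click instead of ceding it
    return value_if_won - value_if_lost

# Hypothetical inputs: 5% close rate, $2,000 customer, 60% of ceded users
# land on an aggregator, half of those are captured by a competitor.
max_cpc = breakeven_review_cpc(0.05, 2000, 0.60, 0.50)
print(round(max_cpc, 2))  # prints 30.0
```

Under these assumptions, paying up to roughly $30 per click to keep validation-phase users on owned pages still pencils out, which is why $10 looks cheap.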
Claim and optimize your profiles on major review platforms you want to work with
Even if you can’t prevent them from bidding on your brand, ensure that when users click through, they see optimized content, strong ratings, and an active presence with responses to reviews.
Many review platforms offer advertising options — test running ads on your own profile pages to capture users who arrive via organic search or competitor ads.
Build dedicated testimonial and customer story pages
Make yours more compelling than third-party review aggregators. Include video testimonials, detailed case studies with metrics, filterable reviews by industry or use case, and verified customer badges.
Then use your PPC ads to drive users to these owned properties instead of letting them discover review aggregators organically.
Your brand campaign ad copy needs to do more than confirm your brand name. It needs to preempt objections, differentiate from competitors, and provide compelling reasons to click your ad instead of a competitor’s or third-party site. Three frameworks deliver results.
The preemptive strike
Identify the top 3-5 objections that come up in your sales process and address them directly in your ad copy before users encounter them on competitor or review sites.
If implementation time is a concern, use “Live in 5 days, not 5 months.”
If pricing is opaque, try “Transparent pricing, no hidden fees.”
If enterprise readiness is questioned, lead with “Trusted by 500+ enterprise customers.”
If ease of use is a concern, emphasize “No training required, start today.”
The competitive differentiator
Don’t just state features; state the features your competitors don’t have or can’t match. This is especially critical for comparison queries where you know competitors are showing ads. Examples include:
“Only platform with native [unique integration].”
“Industry’s fastest performance, verified by [third party].”
If you can’t identify any unique features or USPs, that’s a signal to improve your product positioning or capabilities. Without clear differentiation, PPC alone won’t drive sustainable conversions.
Social proof stacking
Combine multiple types of social proof to build credibility quickly. Don’t just pick one element; stack them. Try:
“4.8 stars from 10,000+ reviews. G2 leader 5 years running.”
“Join 50,000+ companies. Featured in Forbes and TechCrunch.”
“Winner: Best [category] 2025. 98% customer satisfaction.”
Sending all brand traffic to your homepage is a missed opportunity. Different branded queries represent different user intents and concerns, and your landing pages should address those specific intents.
Feature-specific pages
When users search “[Brand] + [feature],” send them to dedicated pages that explain the feature in detail, show it in action, and provide clear next steps.
Include a hero section explaining the feature in one sentence, a video demo or animated screenshot, technical specifications for enterprise buyers, integration details if relevant, and customer examples using this specific feature.
Comparison pages
Create dedicated comparison landing pages for each major competitor. Be honest about differences while emphasizing your advantages. Include side-by-side feature tables, pricing comparisons if advantageous, and customer testimonials from switchers.
Acknowledge competitor strengths without being dismissive, highlight 3-5 key differentiators where you excel, and offer migration assistance or switch incentives. Make your CTA clear and prominent, offering a trial or demo.
Trust and validation pages
For review and reputation queries, create dedicated pages that aggregate social proof rather than linking to your G2 profile or hoping users browse scattered testimonials.
Display aggregate ratings prominently (average of G2, Capterra, etc.), place video testimonials above the fold, show recent reviews with verified badges, make reviews filterable by industry, company size, and use case, include case studies with concrete metrics, and highlight third-party awards and recognition.
Monitoring and optimization: The ongoing battle
Brand protection isn’t a set-it-and-forget-it strategy. The competitive landscape constantly evolves, new competitors emerge, third-party sites adjust their strategies, and user search behavior shifts. You need systematic monitoring and rapid response capabilities across three time horizons.
Weekly monitoring
Review:
Search term reports to identify new query patterns.
Auction insights for increased competitor presence.
Impression share metrics to diagnose declining performance.
Lost impression share breakdowns by budget and rank.
Manual searches of your top 10 brand queries to see what ads are showing.
Quality score checks for brand keywords to diagnose landing page or ad relevance issues.
Monthly deep dives
Analyze conversion paths to understand how brand search fits into the broader customer journey.
Review assisted conversions since brand campaigns often contribute to non-brand conversions.
Audit landing pages for relevance and conversion performance.
Gather competitive intelligence on what landing pages competitors use for brand conquesting.
Test new ad copy variations focused on emerging objections or competitive threats.
Analyze search impression share by device and location to identify gaps.
Quarterly strategic reviews
Audit your complete branded query coverage to identify missing categories or query types.
Assess whether your coverage across the four query categories remains comprehensive.
Conduct competitive conquest analysis to determine which competitors most aggressively target your brand.
Evaluate ROI of different brand campaign types to optimize budget allocation.
Review third-party aggregator presence for new sites bidding on your brand.
Advanced tactics for sophisticated brand protection
Dynamic keyword insertion
For validation queries like “is [Brand] good” or “does [Brand] work,” use dynamic keyword insertion to echo the user’s specific question in your ad copy, creating higher relevance and click-through rates. Try headlines like “Yes, {KeyWord:[Brand]} Is Excellent” or “Absolutely, {KeyWord:[Brand]} Works.”
Geo-modified campaigns
If you have location-specific offerings or competitors vary by geography, create geo-modified brand campaigns. Users searching “[Brand] New York” or “[Brand] enterprise” may have different needs than general brand searchers.
Audience layering
Apply audience segments to brand campaigns to adjust bids based on user quality. Users who’ve visited your pricing page before should get higher bids on brand searches than first-time visitors. Similarly, prioritize users who match your ideal customer profile demographics.
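The layering logic can be sketched as follows. The segment names and modifier values are hypothetical, not recommendations:

```python
# Illustrative mapping of audience signals to brand-campaign bid adjustments
AUDIENCE_BID_MODIFIERS = {
    "visited_pricing_page": 1.40,   # +40% for high-intent returning visitors
    "icp_demographic_match": 1.20,  # +20% for ideal-customer-profile matches
    "all_visitors": 1.10,           # +10% for any prior site visitor
}

def adjusted_bid(base_bid, audiences):
    # Apply the single strongest modifier rather than stacking them,
    # to avoid compounding adjustments on overlapping segments.
    modifier = max((AUDIENCE_BID_MODIFIERS.get(a, 1.0) for a in audiences), default=1.0)
    return round(base_bid * modifier, 2)

print(adjusted_bid(2.50, ["visited_pricing_page", "all_visitors"]))  # prints 3.5
print(adjusted_bid(2.50, []))                                        # prints 2.5
```

Taking the maximum modifier rather than multiplying them is a design choice; overlapping audiences would otherwise compound into bids you never intended.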
Trademark enforcement
While Google generally allows competitors to bid on your brand terms, using your trademarked brand name in their ad copy is often prohibited.
Monitor competitor ads and file trademark complaints when they use your brand name in headlines or descriptions. This is particularly effective against smaller competitors and affiliates who may not realize they’re violating policy.
Problem/solution queries
Capture queries where users are researching whether your solution addresses a specific problem. These are often high-intent and represent clear use case alignment.
Target queries like:
“[Brand] for [problem].”
“How to [solve problem] with [Brand].”
“[Brand] [use case] solution.”
“Can [Brand] help with [challenge].”
Budget allocation and ROI considerations
How much should you invest in brand protection versus acquisition campaigns? The answer depends on three factors:
Competitive pressure.
Brand strength.
Customer lifetime value.
If you operate in a highly competitive category where multiple well-funded competitors actively bid on your brand terms, invest more in brand protection. Review auction insights weekly or monthly to quantify competitive presence.
If competitors show in 40% or more of your brand auctions, you’re in a high-threat environment requiring aggressive defense. Stronger brands with dominant organic presence can afford to spend less on core brand defense because their organic listings provide natural protection. That protection doesn’t extend to reputation and comparison queries, however, where third-party sites often rank organically.
High LTV businesses should invest more aggressively in brand protection because the cost of losing a customer to a competitor or having them influenced by negative review sites is substantial. If your average customer is worth $50,000 over their lifetime, paying $50 per click to defend against comparison queries is economically rational.
For most B2B SaaS and high-consideration products, allocate approximately 15-25% of total paid search budget to comprehensive brand protection. Within that allocation, dedicate 40% to core brand defense (exact match), 25% to competitive comparison defense, 20% to reputation and review queries, and 15% to feature and niche question queries.
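Those percentages translate into dollars straightforwardly. A quick sketch, where the $50,000 budget and 20% protection share are placeholder inputs:

```python
def brand_protection_allocation(total_paid_search_budget, protection_share=0.20):
    """Split a paid search budget using the 40/25/20/15 brand-protection mix."""
    protection = total_paid_search_budget * protection_share
    split = {
        "core_brand_defense": 0.40,       # exact-match brand terms
        "competitive_comparison": 0.25,   # "[Brand] vs [Competitor]" defense
        "reputation_reviews": 0.20,       # reviews and validation queries
        "feature_niche": 0.15,            # feature and niche question queries
    }
    return {campaign: round(protection * share, 2) for campaign, share in split.items()}

print(brand_protection_allocation(50000))
# {'core_brand_defense': 4000.0, 'competitive_comparison': 2500.0,
#  'reputation_reviews': 2000.0, 'feature_niche': 1500.0}
```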
Brand protection as competitive moat
Brand protection through PPC isn’t just defensive marketing. It’s a competitive moat. When you control the narrative across branded search contexts, you ensure high-intent users see accurate information instead of competitor ads or third-party pages monetizing your brand equity.
The brands that win treat this as strategy, not maintenance. They segment branded queries by intent, build landing pages to match, monitor threats continuously, and defend high-value search real estate aggressively.
Start with an audit using the four-category framework. Close coverage gaps, align campaigns and landing pages to intent, and commit to weekly monitoring, monthly optimization, and quarterly strategic reviews.
If you don’t own your branded searches, someone else will.
Own your branded search: Building a competitive PPC defense (published 2026-03-02 16:00:00)