Google pushes Demand Gen deeper into performance marketing


Google Ads’ Demand Gen campaigns – once thought of as mid-funnel discovery tools – are evolving into full-funnel, conversion-focused campaigns, with YouTube at the core.

Why we care. Marketers are under pressure to prove ROI across channels. Demand Gen now blends social-style ad formats with Google’s AI-driven targeting, giving advertisers new ways to drive sales, leads, and app installs from audiences they can’t reach elsewhere.

What’s new:

  • Target CPC bidding: Advertisers can now align Demand Gen with social campaigns for apples-to-apples budget comparisons.
  • Channel controls. Run ads only on YouTube, or expand to Display, Discover, Gmail, and even Maps.
  • Creative tools. Features like trimming and flipping help repurpose social assets for Shorts and other placements, lowering barriers to entry.
  • Feeds + app support. Product feeds in Merchant Center show a 33% conversion lift; Web-to-App Connect now extends to iOS for smoother in-app conversions.

By the numbers. According to Google, advertisers using Demand Gen have seen, on average:

  • 26% YoY increase in conversions per dollar
  • 33% uplift when attaching product feeds

Between the lines. Google says Demand Gen’s shift from contextual matching to predicting user intent and purchase propensity has made it a contender for bottom-funnel performance. In short: YouTube is no longer just discovery – it’s decision-making.

What’s next. Expect more AI-driven creative tools, expanded shoppable formats, and deeper integrations across channels.

The takeaway. Don’t wait for “perfect” YouTube creative. Lift, adapt, and test now — Demand Gen is no longer a mid-funnel experiment, it’s a performance channel.


Your GEO content audit template


My SEO mantra in the age of GEO is from the great Lil Wayne, “Real G’s move in silence like lasagna.”

Translation for SEO marketers: the most effective GEO moves aren’t loud growth hacks. They’re the subtle edits and formatting that make AI cite you without fanfare.

To help with your GEO audit, here’s an inside peek into my secret menu.

Take a look at my GEO content audit template.

It’s an evolution of my SEO content audit.

As Google’s Danny Sullivan has been telling rooms full of marketers, “Good SEO is good GEO.”

That’s why I like to think of GEO as SEO’s MTV Unplugged version. It’s the same band, same lyrics, just stripped down, reimagined, and way more personal.

Alright, enough philosophy. You came here for the secret recipe. Let’s crack open the GEO content audit template and see how it works in practice.

Use this GEO content audit template

Cool. You’ve made a copy of the GEO content audit template. Now what?

Here are the key sheets you’ll work through:

  • Summary: High-level snapshot once you’ve scored everything.
  • Action list: Quick recap of next steps that summarizes all your findings from the other tabs.
  • Content inventory: This is the backbone. Filters include:
    • URL.
    • Action.
    • Strategy.
    • Page title.
    • Last updated date.
    • Author.
    • Word count.
    • Readability.
    • Average words per sentence.
    • Keywords.
    • Canonical.
    • Internal links.
    • And more.
  • Indexability/architecture/URL design/on-page: Your technical health is still important.
  • Structured data: Markup needed and where.
  • International: Hreflang, language, local cues (currency, units, spelling, trust marks).
  • Speed: Yes, page speed is still important.
  • Content and gaps: Quality scoring and what’s missing.
  • Linking: Internal link plans and external targets.
  • Refresh: Cadence schedules by asset type.

You’ll bounce between content inventory, structured data, content, content gaps, and linking the most.

Tools you’ll want on hand

  • Crawling and gaps: Screaming Frog, Ahrefs, or Semrush for keywords and links.
  • Search Console: Build a regex brand view (brand + misspellings + product names) to watch demand move as AI answers spread.
  • Prompt testing: Manually test core buyer prompts in ChatGPT, Google AI Overviews/AI Mode, Gemini, Copilot, Perplexity. Log inclusions and citations. BrightEdge’s dataset shows that you’ll see a lot of disagreement across platforms. Expect that.
  • Attribution: Roadway AI (or similar) to connect topics/pages to pipeline and revenue for your QBR.
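The brand regex view for Search Console can be built programmatically. A minimal Python sketch – the brand terms below are hypothetical, so swap in your own brand, common misspellings, and product names:

```python
import re

# Hypothetical brand terms: your brand, misspellings, and product names.
BRAND_TERMS = ["acme", "acmee", "akme", "acme analytics"]

# Case-insensitive alternation with word boundaries. Paste the printed
# pattern into GSC's "Custom (regex)" query filter to build the brand view.
brand_regex = r"(?i)\b(" + "|".join(re.escape(t) for t in BRAND_TERMS) + r")\b"
print(brand_regex)

def is_branded(query: str) -> bool:
    """Classify an exported GSC query as branded or non-branded."""
    return re.search(brand_regex, query) is not None
```

The same pattern works on exported query data, so you can trend branded vs. non-branded demand over time as AI answers spread.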

How to do a GEO content audit (with the template)

1. Set goals

Pick outcomes that map to how people actually find you now:

  • Inclusion rate: Percentage of target prompts where your brand is mentioned inside ChatGPT/AI Overviews/AI Mode/Perplexity/Copilot.
  • Homepage and direct lift: Buyers often go to AI → Google → your homepage. This is why you’ll want to watch branded impressions and homepage sessions.
  • Revenue by topic/page: Wire this to your attribution tool.

Why this mix?

Because AI boxes change, and engines disagree.

A blended scoreboard helps you avoid chasing one fragile metric.

Pro move: Add “ChatGPT/AI Search” to “How did you hear about us?” in forms and sales notes and review weekly. Many teams report this is where the hint of AI-assisted discovery shows up first.

2. Build your content inventory

Using Screaming Frog, export every URL with: title/H1, last updated, author, canonical, word count, readability metrics, internal links, and a target query/theme.

Add a few custom fields:

  • Direct-answer present (Y/N): Is there a <120-word summary up top that answers the main question?
  • FAQ present (Y/N): Does it mirror prompt fragments and include FAQ schema?

Why?

If your page is tidy, answer-first, and properly marked up, it’s far easier to reference.
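As a rough illustration, both custom fields can be scored straight from a crawl export. The function names, thresholds, and sample inputs below are simplifications, not part of the template:

```python
# Illustrative scorer for the two custom inventory fields. The inputs would
# come from your crawler (e.g., a custom extraction of the opening text and
# raw HTML); everything here is a hypothetical sketch.

def direct_answer_present(first_block: str) -> bool:
    """Y/N: does the opening text answer the main question in <120 words?"""
    words = first_block.split()
    return 0 < len(words) < 120

def faq_schema_present(page_html: str) -> bool:
    """Y/N: crude string check for FAQPage markup in the raw HTML."""
    return "FAQPage" in page_html

# One inventory row, as it might look after scoring a crawled page.
row = {
    "url": "https://example.com/pricing",
    "direct_answer": "Y" if direct_answer_present(
        "Plans start at $49/month for up to five seats."
    ) else "N",
    "faq_schema": "Y" if faq_schema_present(
        '<script type="application/ld+json">{"@type": "FAQPage"}</script>'
    ) else "N",
}
print(row)
```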

3. Segment by site, market, and language 

Break your inventory into:

  • Domain/subdomain/folder (e.g., .co.uk, /fr/).
  • Market language variations (U.S. vs. U.K. English, Spain vs. Mexico Spanish).
  • Indexability quirks (hidden duplicates, parameters, session IDs).

For international pages, score:

  • Hreflang implementation (pointing to the right alternates; reciprocal).
  • Local cues (currency, units, spelling, trust marks like local badges, VAT specifics).
  • CTAs (country-specific copy, phone numbers, store links).

A shaky international setup is a fast way to look sloppy to users and AI models.
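To make the reciprocity check concrete, here is a minimal sketch in Python. It assumes you have already extracted each page's declared hreflang alternates into a dict (the URLs and data shape are hypothetical):

```python
# Each URL maps to the hreflang alternates it declares: {lang: target_url}.
# Hypothetical data; in practice this comes from your crawler's hreflang report.
pages = {
    "https://example.com/en/": {"en": "https://example.com/en/",
                                "fr": "https://example.com/fr/"},
    "https://example.com/fr/": {"fr": "https://example.com/fr/",
                                "en": "https://example.com/en/"},
}

def non_reciprocal(pages: dict) -> list:
    """Return (source, target) pairs where the target does not link back."""
    problems = []
    for url, alternates in pages.items():
        for target in alternates.values():
            if target != url and url not in pages.get(target, {}).values():
                problems.append((url, target))
    return problems

print(non_reciprocal(pages))  # [] when every alternate points back
```

Any pair it returns is a non-reciprocal annotation – exactly the kind of shaky setup to flag in the International sheet.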

4. Pull the numbers

Look at more than organic:

  • Organic sessions and conversions.
  • Direct sessions and homepage trend.
  • GSC clicks/impressions/queries and brand regex trendline.
  • Manual AI inclusion log (engine, prompt, did we show, who else got named?).

Google says the AI experiences drive “higher-quality clicks,” while many SEO marketers report general traffic decline. 

Read both, and measure your own reality.

5. Judge the substance

Score every high-value page for “citable signal”:

  • Direct answer up top (<120 words).
  • Evidence: Proprietary data, SME quotes, methods, and links out to credible sources.
  • Trade-offs: Where your product is not the best fit.
  • FAQ block that mirrors prompt syntax (e.g., “best X for Y,” “X vs Y,” “pricing,” “implementation time”).
  • Schema: FAQ, HowTo, Product, Organization/Author with published/updated dates.
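A FAQ block that mirrors prompt syntax can be emitted as JSON-LD. A minimal Python sketch – the questions and answers are placeholders, not a definitive implementation:

```python
import json

# Minimal FAQPage structured data mirroring prompt fragments
# ("best X for Y", "pricing"). Swap in your real questions and answers.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is the best X for Y?",
            "acceptedAnswer": {"@type": "Answer",
                               "text": "A direct answer in under 120 words."},
        },
        {
            "@type": "Question",
            "name": "How much does X cost?",
            "acceptedAnswer": {"@type": "Answer",
                               "text": "A short, specific pricing answer."},
        },
    ],
}

# Embed the printed JSON in a <script type="application/ld+json"> tag.
print(json.dumps(faq_schema, indent=2))
```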

6. Map gaps and conflicts

Create a hit list:

  • Duplicates and cannibalization: Merge or redirect. If two pages answer the same thing, decide which one lives.
  • Missing BOFU pages:
    • “[Competitor] alternatives.”
    • “X vs Y.”
    • “Pricing.”
    • “Industry-specific use cases.”
  • Offsite holes: Are you absent from “Best of” lists, comparison hubs, review sites, and relevant forum threads? That’s where AI models shop for context. The more you appear on those domains, the likelier you are to get named in answers.

7. Establish next steps

Turn findings into a real plan:

  • Fixes
    • Hreflang clean-up.
    • Canonicals.
    • FAQ/HowTo/Product/Organization/Author schema.
    • Direct-answer summaries added to target pages.
  • Net-new assets
    • “Alternatives,” “X vs Y,” pricing explainer, implementation guide.
    • Video explainers (YouTube) with clear chapters.
    • Region-specific FAQs and CTAs.
  • Earned presence
    • Shortlist the publishers and communities your buyers read.
    • Pitch data-led pieces. Offer SME quotes and screenshots.
    • For review sites (G2/Capterra), set up a gentle ask after X days live.
  • Attribution
    • Connect the page/topic to the pipeline and revenue so that GEO progress is reflected in QBRs (e.g., Roadway or similar).



Worked example: Filling the template

Let’s see what this looks like in practice. Here’s a sample workflow that uses the GEO content audit template step by step. 

  • Create your goals:
    • Hit 40% inclusion across 50 priority prompts in AI Overviews/ChatGPT
    • +15% homepage sessions QoQ
    • +25% topic revenue for X cluster.
  • Load all URLs. Pick the top 100 URLs to tackle first. Manually update the columns and complete the Action field for each: keep, update, merge, or redirect.
  • Plan to add FAQ where you mirror prompt fragments. Think about adding Organization/Author (with bios and dates).
  • Check hreflang and copy cues (currency, units, etc.). Flag any market where your “local” page reads like a machine translation or uses the wrong signals.
  • List missing BOFU pages and industry variants. Prioritize by buyer impact.
  • Add internal links from top-traffic pages to the pages you want cited. Short, descriptive anchors that mirror the question asked.
  • Turn your action list into tickets. Dates. Owners. Status. Nothing lingers.
  • For each of your 50 prompts, record: engine, date, question, inclusion (Y/N), snippet, and other brands named. Check weekly for movement. Why weekly? Because Google keeps tinkering with AI Mode, links, carousels, and new UI, your presence can shift with those tweaks.
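The prompt log described above fits comfortably in a simple CSV. A small Python sketch – the column names and the single sample row are placeholders, not a fixed schema:

```python
import csv
import io
from datetime import date

# Hypothetical log schema for weekly prompt checks across engines.
FIELDS = ["engine", "date", "question", "included", "snippet", "other_brands"]

rows = [
    {"engine": "AI Overviews",
     "date": date(2025, 9, 1).isoformat(),   # placeholder check date
     "question": "best crm for startups",
     "included": "N",
     "snippet": "",
     "other_brands": "HubSpot; Pipedrive"},  # who got named instead
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

Append one row per engine per prompt each week, and week-over-week diffs on the `included` column become your inclusion-rate trendline.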

Keep these moves in mind to keep your audit on beat

Refresh cadence

Fast vertical (finance, travel, fast-moving SaaS)? Aim for quarterly.

Other verticals can run biannual or annual content refreshes.

Fresh, cited, and updated content tends to fare better for AI Overviews and Perplexity. 

Both are leaning hard on recency and clarity, and Google is actively testing more visible links in those AI blocks.

Local content beats translation

U.K. ≠ U.S.

Spain ≠ Mexico. 

Adjust spelling, units, currency, trust marks, and examples.

Tune the FAQ to local search habits and buyer objections. 

If your Canadian page says “sales tax” instead of GST/HST, people notice – so do models.

Track what matters, not just what’s easy

  • Inclusion rate across engines
  • Brand regex impressions in GSC
  • Homepage/direct lift
  • Revenue by topic/page

Chasing “LLM rankings” is sketchy. Use trackers as signals, not gospel.

What’s really the difference between a GEO content audit and an SEO content audit?

A GEO content audit is a review of your content and your offsite footprint through an AI lens: Is your stuff scannable, citable, up-to-date, and backed by authority that LLMs trust?

An SEO audit focuses on ranking and traffic on your own site. You crawl, fix indexing, resolve cannibalization, etc. It’s the classic playbook.

A GEO audit focuses on representation and citability. You still care about structure and technicals, but you also score whether your brand appears in AI answers, even when the cited page isn’t yours.

You check if your content opens with a direct answer, mirrors prompt questions (and has FAQ schema), and is referenced by publishers, YouTube videos, Reddit threads, and review sites.

You need both

Skipping either is like training your upper body only. You’ll look fine in a tank top, but probably should avoid shorts.

Rankings still matter for discovery and for the content AI scrapes.

GEO pushes you to become answer-worthy across the broader web.

Or in Sullivan’s phrasing, good SEO already points you toward good GEO.

Pour one out for your old SEO friends – GEO is part of the scene

GEO is here to stay. Call it bad news delivered with a good whiskey. 

Visibility is shifting to AI answers. If you’re referenced in AI answers, you’ll feel it in your top-funnel numbers.

Competitors can “win” even when their site isn’t ranking because third-party pages that mention them get cited.


This is your GEO content audit curtain call 

ChatGPT doesn’t even have a SERP. It has an answer. If Google leans further into AI Mode, that answer becomes the main act.

Your job: be the source cited.

You want a content plan that doesn’t involve sipping vodka Red Bulls like it’s still 2015 and blacking out the second AI changes the playlist.

So run the audit, tighten structure, add proof, win some offsite mentions, and track inclusion, not just rankings.

Tie this to revenue so nobody calls this a science project.


Google Ads streamlines scripts documentation


Google has refreshed its Ads scripts documentation to make it easier for advertisers and developers to build, test, and customize automations.

Why we care. Scripts help advertisers save time and scale campaigns, but the old documentation was clunky and fragmented. The overhaul puts guides, references, and examples into a more intuitive flow that reduces friction for both beginners and power users.

What’s new:

  • Guides are now grouped by experience level and campaign type.
  • A dedicated reference tab makes it easier to browse available script objects.
  • Solutions and examples have been merged, centralizing sample code in one place.

Bottom line: Advertisers and agencies relying on automation should be able to work faster and with less guesswork, while new users have a smoother entry point into scripts.


Microsoft clarifies nonprofit ad grant program status


Microsoft Ads Liaison Navah Hopkins confirmed that the company’s Ads for Social Impact program, which grants nonprofits ad credits across Microsoft’s ad inventory, is currently on a waitlist.

Why we care. The program gives nonprofits free ad credits to reach audiences across Bing, Outlook, MSN, Microsoft Edge, and even Yahoo and AOL via syndicated partners. With no strict feature requirements, charities can apply the credit to strategies that best fit their goals, while also accessing training and optional AI tools for creative support.

Eligibility: To qualify, nonprofits must be registered 501(c)(3) or equivalent, have a functioning mission-aligned website, and not fall into excluded categories like hospitals, schools, or government entities. Applying is not a guarantee of acceptance, and all applications are reviewed on their merits.

Bottom line. While details around nonprofit support have been murky, Microsoft is reaffirming its commitment to helping charities stretch their marketing budgets and amplify their missions through free ad spend.


LinkedIn Company Intelligence API links ads to pipeline, revenue


B2B marketers under pressure to prove ROI now have a new tool from LinkedIn – the Company Intelligence API. It is designed to connect campaign performance directly to sales pipeline and revenue outcomes.

Why we care. Traditional attribution models struggle with complex B2B buying journeys, often missing early signals and undervaluing campaigns. LinkedIn’s new API aims to bridge the gap between ad performance and real business outcomes, letting marketers see which companies are actually moving through the funnel, prove ROI with hard numbers, and make smarter budget shifts toward what drives pipeline and revenue.

By the numbers. Early beta users reported (per LinkedIn data):

  • 288% increase in companies engaged
  • 93% increase in pipeline value
  • 30% boost in ROI
  • 37% reduction in cost per acquisition

How it works: Advertisers can access aggregated company-level data (e.g., impressions, clicks) through LinkedIn’s certified analytics partners (Channel99, Octane11, Dreamdata, Factors.ai, Fibbler). The data is ingested into CRM-connected dashboards, giving marketers clearer visibility into ROI, pipeline acceleration, and company engagement across the funnel.

What they’re saying:

  • DataSnipper: “We can now clearly see the impact on pipeline and revenue, uncovering nearly twice as much influenced pipeline as before.”
  • Eftsure: “Reductions in cost per SQL give me strong evidence to justify investment to leadership.”
  • Inovalon: “We plan to shift budget from other channels to LinkedIn.”

What’s next. The Company Intelligence API is now available globally through LinkedIn’s B2B attribution and analytics partners. Adoption could grow as marketers seek stronger proof of performance in a tight-budget environment.


Schema and AI Overviews: Does structured data improve visibility?


A controlled test compared three nearly identical pages: one with strong schema, one with poor schema, and one with none. 

Only the page with well-implemented schema appeared in an AI Overview and achieved the best organic ranking. 

The results suggest that schema quality – not just its presence – may play a role in AI Overview visibility.

Schema, AI Overviews, and the need for proof

AI Overview visibility is becoming increasingly important to businesses.

One debate within the SEO community has stood out: Does adding schema improve the chances of being cited in an AI Overview?

Schema was created to make webpages more machine-readable, and it has even been shown to help large language models – like Microsoft’s – better interpret content freshness. 

That makes it tempting to assume schema is a best practice for AI visibility. 

Still, AI Overviews are the result of complex and layered processes. 

It’s difficult to draw firm conclusions from logic alone or from limited glimpses into one part of a model’s behavior.

That uncertainty is what motivated us to run a controlled experiment.

  • In earlier work, Molly analyzed 100 healthcare sites and found a slight correlation between schema use and AI Overview visibility. But the correlation was not statistically significant, and the analysis had two limitations: it didn’t assess the quality of the schema, and because it wasn’t an experiment, site differences in content, structure, and audience couldn’t be controlled.
  • At the same time, Benjamin’s experiments showed that ChatGPT retrieved information more thoroughly and accurately from pages with structured data. Those findings pointed to schema’s role in AI visibility, but they didn’t address Google’s AI Overviews.

With those perspectives in mind, we decided to collaborate on a test that would build on Molly’s earlier analysis and extend Benjamin’s experiments into Google Search – focusing directly on whether schema quality plays a role in AI Overview visibility.

Dig deeper: AI visibility: An execution problem in the making

The setup: Three sites, three schema approaches

We built three single-page sites to compare schema directly: 

  • One with well-implemented schema.
  • One with poorly implemented schema.
  • One with none. 

Aside from schema, the pages were kept as similar as possible, with keywords chosen to match in difficulty and search volume. 

After publishing, we submitted all three for indexing to see whether they would rank – and, more importantly, whether any would appear in an AI Overview.



The result: Only the page with well-implemented schema appeared in an AI Overview

The page with well-implemented schema was the only one to appear in an AI Overview. 

It also ranked for six keywords in traditional search, reaching as high as Position 3. 

Position 3 was the highest conventional search rank achieved by any page in our experiment, and the query that reached it was also the one that triggered the AI Overview appearance.

Google AI Overviews - data pool vs data lake

The page with poorly implemented schema ranked for 10 keywords and peaked at Position 8, but none of its queries surfaced in an AI Overview.

The page with no schema was crawled by Google within minutes of the others, but was not indexed. 

Without indexing, it didn’t rank for any keywords and could not appear in AI Overviews.

Methodology: How we controlled for variables and defined ‘good schema’

To isolate schema as the variable, we kept everything else about the test pages as consistent as possible – from keyword choice to site setup.

Keyword selection

We used Ahrefs to choose three keywords with identical metrics. Each returned an AI Overview at the time of selection:

  • “How much does a marketing team cost.”
  • “What are common elements in the promotional mix.”
  • “Data pool vs. data lake.”

Metrics (Ahrefs)

  • Keyword difficulty: 3
  • Monthly search volume: 60
  • Traffic potential: 20

We also chose keywords that were qualitatively similar and within the same general industry (marketing/martech).

Site build controls

All three were single-page sites deployed on Vercel, with the following constraints applied consistently:

  • No JavaScript.
  • No custom domain name or homepage.
  • No sitemap.
  • No robots.txt file.
  • No canonical tags.

Schema treatments

To create a page that exemplified a solid implementation of schema best practices, we included:

  • Complete Article schema with all required fields.
  • FAQ schema for common questions.
  • Breadcrumb navigation schema.
  • Proper date formatting.
  • Author and publisher information.
  • Educational level and audience targeting.
  • Related topics and mentions.
  • Word count and reading time.
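As a rough illustration of that treatment, here is a JSON-LD sketch in the same spirit, emitted from Python. The values are placeholders, not the actual markup from the test pages:

```python
import json

# Illustrative Article schema covering the properties listed above.
# All values are placeholders.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Data pool vs. data lake",
    "datePublished": "2025-08-29",   # proper ISO 8601 date formatting
    "dateModified": "2025-08-29",
    "author": {"@type": "Person", "name": "Jane Doe"},  # placeholder author
    "publisher": {"@type": "Organization", "name": "Example Site"},
    "educationalLevel": "beginner",
    "audience": {"@type": "Audience", "audienceType": "marketers"},
    "mentions": [{"@type": "Thing", "name": "data warehouse"}],
    "wordCount": 1200,               # placeholder count
    "timeRequired": "PT6M",          # reading time as an ISO 8601 duration
}
print(json.dumps(article_schema, indent=2))
```

The "poor schema" treatment is essentially this object with required fields deleted and the dates malformed.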

We deliberately introduced errors into the poor schema page, including:

  • Incomplete Article schema (missing required fields).
  • No FAQ schema despite having FAQ-like content.
  • Missing breadcrumb navigation schema.
  • Incorrect date format.
  • Missing essential properties.

The third site was built without any schema at all. 

All three sites were submitted to Google on Aug. 29 and crawled the same night.

Interpreting the results: Promising, but inconclusive

We don’t consider these results to be absolute proof that well-implemented schema plays a role in AI Overview presence. 

However, the story is clear: the page with well-implemented schema was the winner in our small, carefully controlled test. It achieved the best organic rank and was the only page to appear in an AI Overview.

We don’t see any obvious alternative explanation for why this happened, either. 

The “no schema” page had the lowest word count of the three pages, but a modest word-count difference shouldn’t, on its own, explain why it failed to be indexed.

What’s next

There’s still more to do. 

Unseen variables could have muddied the waters, and there’s always the possibility that our results were simply a coin-flip-style fluke of the Google algorithm.

As a follow-up, we plan to de-index the pages, create new pages with identical content, and then swap the schema. 

We want to see if putting schema on the “no schema” page gets it indexed and ranked. That would be a very compelling result indeed.

Appendix

For those who want to review the test materials directly, here are the URLs of the sites and supporting documentation:

Test pages

Code repositories

Google Search Console screenshots 

The following screenshots show indexing and enhancement status:

GSC - Well-implemented schema page
GSC - No schema page
GSC - Poor schema page


The must-have social media tool for multi-location brands in 2026 by Rallio

With increased competition, stricter Google guidelines, and the rise of AI-powered search, standing out online is more challenging than ever. 

For multi-location brands, this task is even harder as they must maintain a unified brand presence across all locations, yet fulfill consumers’ desire for personalized, local engagement.

Rallio, Powered by Ignite Visibility, is the solution. 

More than just a social media tool

Rallio isn’t just another scheduling platform – it’s the next generation of AI-powered tools revolutionizing how multi-location businesses stand out online.

  • AI-powered insights: Rallio’s AI Assistant operates 24/7, analyzing your data to uncover your strengths and growth opportunities. It also compares your performance against competitors, providing actionable strategies to outperform them.
  • AI-generated posts: Rallio’s AI instantly generates social media posts tailored to your brand’s style and messaging. Fresh, approved content is just a click away. It can also generate captions and hashtags from any image in your media library, saving time while boosting engagement.
  • AI-generated captions: Rallio makes creating compelling captions easy. With a simple prompt, it crafts engaging captions complete with relevant emojis and hashtags, driving interactions with your audience.
  • AI playbook: With the Rallio AI Playbook, you are able to customize your own AI engine that powers your brand from top to bottom throughout the platform. Your Playbook captures your brand voice, tone, and content preferences – everything the AI needs to create effective on-brand, tailored posts just for you.
  • Reputation management: Rallio simplifies managing reviews across multiple locations by consolidating all reviews into a single dashboard. Its AI generates personalized responses based on review context, saving time and ensuring consistency.
  • Employee advocacy: Rallio’s mobile app empowers your team to contribute authentic, hyper-local content by submitting photos and videos. This employee-driven content boosts engagement and local relevance, which are key for improving local SEO.
  • REVV – review acceleration: Positive reviews are crucial for visibility in search results, especially in Google’s Map Pack. Rallio’s REVV platform helps businesses collect and manage reviews through smart surveys, driving up review volume and improving online reputation.

By automating content creation, enhancing employee engagement, and streamlining review management, Rallio helps multi-location businesses build authentic relationships with local audiences while strengthening their national presence. Watch our free demo to see it in action.

How Rallio helps brands gain visibility in AI-powered search

Savvy marketers are focusing on generative engine optimization (GEO) to gain visibility in AI-powered search engines like ChatGPT, Perplexity, and more. A principle of GEO is to prioritize relevance, engagement, and authority, and that’s precisely how Rallio helps boost visibility. 

  • Social signals: Rallio generates content that drives likes, shares, and comments, increasing engagement with your brand.
  • Local SEO: By focusing on localized posts and employee-driven content, Rallio boosts visibility in local search results and Google’s Map Pack.
  • Authority: Rallio ensures consistent, high-quality content across platforms, which signals trust and authority to search engines.
  • Reputation: Managing and responding to reviews effectively enhances local SEO and reinforces brand credibility.

If your brand is leveraging GEO strategies, Rallio can be your secret weapon to boost engagement, relevance, and authority, helping you stand out in search results.

Ready to see Rallio in action? Get instant access to our free demo

Take the first step toward transforming your brand’s online presence with Rallio. Get instant access to our free demo video here.


Google launches seasonal bid adjustments for app campaigns

Google Ads rolled out a beta feature that lets app marketers apply Seasonality Adjustments to Smart Bidding, giving advertisers more control during short, high-impact events like flash sales or product launches.

Why we care. App campaigns often see sharp conversion swings during promotions, but Smart Bidding typically learns reactively. This beta gives them the ability to proactively boost bids during predictable conversion spikes, ensuring they capture maximum value from short-term promotions and avoid leaving revenue on the table.

How it works:

  • Works across all App campaign bid strategies.
  • Best for short, intense periods (1–7 days).
  • Not meant for minor fluctuations (Smart Bidding already accounts for those).

Bottom line. Advertisers now have a lever to prevent missed opportunities during critical promotional windows, making Smart Bidding more predictable when the stakes are highest.

First seen. This was announced by Qais Haddad, senior app growth manager at Google, on LinkedIn.


Google Ads doubles negative keyword list limit: Glitch or quiet policy change?


Google’s documentation says advertisers can only add 5,000 keywords to a campaign-level negative keyword list. But one advertiser has reported successfully adding more – raising questions about whether this is a glitch or an unannounced update.

Why we care. Negative keyword lists are critical for advertisers, helping them cut wasted spend and prevent ads from showing on irrelevant searches. A higher limit could be a welcome change for large accounts managing thousands of exclusions – but only if Google confirms it’s intentional.

Driving the news. Stan Oppenheimer, paid search specialist at Dallas SEO Dogs, spotted a search campaign with a negative keyword list containing more than 5,000 negatives – above the published limit.

  • Oppenheimer flagged the issue to Google, asking for clarification and for the official help docs to be updated.

Between the lines. If this is more than a glitch, it could be part of Google’s broader push to standardize campaign limits across formats. But the lack of clarity leaves advertisers unsure whether they can rely on the higher cap.

What’s next. Until Google confirms, advertisers should proceed cautiously – and assume the official 5,000-keyword cap still applies to search campaigns.

What Google is saying. Ginny Marvin, Google Ads Liaison, confirmed on X: “The threshold remains 5,000 keywords per negative keyword list, but there may be some cases in which lists a bit over the limit are accepted.”


Google’s ad tech monopoly remedies trial begins

A trial many expected to fizzle has delivered a bombshell: Judge Leonie Brinkema ruled Google illegally monopolized digital advertising, setting up a remedies phase that could force major changes to its ad tech stack. But with Google already losing ground in ad tech and the web fragmenting into retail media, walled gardens, and AI-native platforms, the remedies may feel like too little, too late.

Why we care. The DOJ wants to unwind Google’s dominance by weakening its ad exchange (AdX) and prying open its auction logic. Publishers and advertisers argue this could level the playing field. If auction logic is opened up and interoperability enforced, advertisers may see more competition, better pricing, and greater transparency. But if the remedies stall or prove symbolic, the status quo remains – while spend continues shifting toward walled gardens and retail media networks.

Zoom in:

  • The DOJ’s asks. Strip AdX from DFP, open-source auction logic, and revisit divestiture if competition doesn’t improve.
  • Google’s counter. Interoperability with rival ad servers, no “first look” or “last look” privileges, and scrapping unified pricing rules—without divestiture.
  • Witnesses. Executives from DailyMail.com, AWS, PubMatic, and Index Exchange will testify against Google, while Google leans on its own engineers and Columbia University experts.

Between the lines. Even if the court forces remedies, Google’s grip on display ads has already slipped as advertisers shift spend into walled gardens and AI-driven platforms. The ruling could end up more symbolic than transformative.

What’s next. Testimony runs Sept. 22–30, with a ruling expected in 2026. Until then, the ad industry is bracing for a decision that could either shake up—or barely dent—the future of the open web.

Read more at Read More