Google launches Data Manager API


Google is rolling out a new Data Manager API that lets you plug first-party data into Google’s AI-powered ad tools with less friction. The goal: stronger measurement, smarter targeting, and better performance without the hassle of managing multiple systems.

Why we care. The Data Manager API helps you get more value from the data you already have by sending reliable first-party data into Google’s AI. This improves your targeting, measurement, and bidding. It also replaces several separate APIs with one easy connection, cutting down on engineering work and getting insights back into your campaigns faster.

About the Data Manager API. It will replace several separate Google platform APIs with one centralized integration point for advertisers, agencies, and developers. It builds on Google’s existing codeless Data Manager tool, which tens of thousands of advertisers already use to activate their first-party data.

You can use it to:

  • Upload and refresh audience lists.
  • Send offline conversions to improve measurement.
  • Improve bidding performance by giving Google AI richer signals (a rough integration sketch follows this list).
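
For teams scoping the engineering work, here is a rough sketch of what an audience upload could look like. The endpoint path, resource names, and payload fields below are assumptions modeled on Google’s other ads APIs, not the documented contract – check the official Data Manager API reference before building anything.

```python
import hashlib
import requests

# Illustrative only: the real Data Manager API endpoint, resource names, and payload
# shape may differ -- verify against Google's developer documentation.
ENDPOINT = "https://datamanager.googleapis.com/v1/audienceMembers:ingest"  # assumed path
ACCESS_TOKEN = "ya29.example-oauth-token"  # obtained via your usual Google OAuth flow

def normalize_and_hash(email: str) -> str:
    """Trim, lowercase, then SHA-256 hash -- the usual prep for first-party identifiers."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

payload = {
    # Hypothetical shape: a destination (ad account + user list) plus hashed members.
    "destination": {"productAccountId": "customers/1234567890", "userListId": "9876"},
    "members": [{"hashedEmail": normalize_and_hash(e)} for e in ["user@example.com"]],
}

response = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```

The hashing step is the part least likely to change: Google’s first-party data uploads generally expect normalized, SHA-256-hashed identifiers rather than raw emails.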

Partnership push. To speed adoption, Google is launching with integrations from AdSwerve, Customerlabs, Data Hash, Fifty Five, Hightouch, Jellyfish, Lytics, Tealium, Treasure Data, Zapier, and others.

Available today. The API is available starting today across Google Ads, Google Analytics and Display & Video 360, with more product integrations on the way.

Google’s announcement. Data Manager API helps advertisers improve measurement and get better results from Google AI


Google AI cites retailers 4% vs. ChatGPT at 36%: Data


Google cites retailers only 4% of the time, while ChatGPT does it 36% of the time. That 9x gap means shoppers on each platform get steered in very different ways, according to new BrightEdge data.

Why we care. Millions of shoppers now turn to AI for deals and gift ideas, but product discovery works differently on the two leading AI search platforms. Google leans on what people say, while ChatGPT focuses more on where you can buy it.

What each AI prioritizes. Google AI Overviews cite YouTube reviews, Reddit threads, and editorial sites, while ChatGPT cites retail giants like Amazon, Walmart, Target, and Best Buy.

Google AI Overviews prioritize:

  • YouTube reviewers and unboxings.
  • Reddit threads and community consensus.
  • Editorial reviews and category experts.

ChatGPT prioritizes:

  • Major retailer listings.
  • Brand and manufacturer product pages.
  • Editorial sources (secondary).

The citation divide. On Google, retailers appear only about 4% of the time. Its citations lean toward user-generated content and expert reviews. Google AI Overviews serve more as a research tool than a purchase assistant. Top sources included:

  • YouTube
  • Reddit
  • Quora
  • Editorial sites like CNET, The Spruce Eats, and Wirecutter

On ChatGPT, retailers appear about 36% of the time. ChatGPT acts as both the explainer and the shopping assistant, so retailer links show up far more often. Its top sources included:

  • Amazon
  • Target
  • Walmart
  • Home Depot
  • Best Buy

About the data. BrightEdge analyzed tens of thousands of ecommerce prompts across Google AI Overviews and ChatGPT during the 2025 holiday shopping season, then extracted and categorized citation sources. Domains were classified by type (retailer, UGC/social, editorial, brand) and compared across identical prompts.

The report. Who Does AI Trust When You Search for Deals? Google vs. ChatGPT Citation Patterns Reveal Different Shopping Philosophies


Mentions, citations, and clicks: Your 2026 content strategy


Generative systems like ChatGPT, Gemini, Claude, and Perplexity are quietly taking over the early parts of discovery – the “what should I know?” stage that once sent millions of people to your website. 

Visibility now isn’t just about who ranks. It’s about who gets referenced inside the models that guide those decisions.

The metrics we’ve lived by – impressions, sessions, CTR – still matter, but they no longer tell the full story. 

Mentions, citations, and structured visibility signals are becoming the new levers of trust and the path to revenue.

This article pulls together data from Siege Media’s two-year content performance study, Grow and Convert’s conversion findings, Seer Interactive’s AI Overview research, and what we’re seeing firsthand inside generative platforms. 

Together, they offer a clearer view of where visibility, engagement, and buying intent are actually moving as AI takes over more of the user journey – and has its eye on even more.

Content type popularity and engagement trends

In a robust study, the folks at Siege Media analyzed two years of performance across various industry blogs, covering more than 7.2 million sessions. It’s an impressive dataset, and kudos to them for sharing it publicly.

A disclaimer worth noting: the data focuses on blog content, so these trends may not map directly to other formats such as videos, documentation, or landing pages.

With that in mind, here’s a run-through of what they surfaced.

TL;DR of the Siege Media study

Pricing and cost content saw the strongest growth over the past two years, while top-of-funnel guides and “how-to” posts declined sharply.

They suggest that pricing pages gained ground at the expense of TOFU content. I interpret this differently. 

Pricing content didn’t simply replace TOFU because the relationship isn’t zero-sum. 

As user patterns evolve, buyers increasingly start with generative research, then move to high-intent queries like pricing or comparisons as they get closer to a decision.

That distinction – correlation vs. causation – matters a lot in understanding what’s really changing.

The data shows major growth in pricing pages, calculators, and comparison content. 

Meanwhile, guides and tutorials – the backbone of legacy SEO – took a sharp hit. 

Keep that drop in mind. We’ll circle back to it later.

Interestingly, every major content category saw an increase in engagement. That makes sense. 

As users complete more of their research inside generative engines, they reach your site later in the journey or for additional details, when they’re already motivated and ready to act.

If you’re a data-driven SEO, this might sound like a green light to focus exclusively on bottom-of-funnel content. 

Why bother with top-of-funnel “traffic” that doesn’t convert? 

Leave that for the suckers chasing GEO visibility metrics for vanity, right?

But of course, this is SEO, so I have to say it …

Did you expect me to say, “It depends?”

Here’s a question instead: when that high-intent user typed the query that surfaced a case study, pricing page, or comparison page, where did they first learn the brand existed?

Dig deeper: AI agents in SEO: What you need to know

Don’t forget the TOFU!

I can’t believe I’m saying this, but you’ll have to keep making TOFU content. 

You might need to make even more of it.

Let’s think about legacy SEO.

If we look back – waaaaay back – to 2023 and a study from Grow and Convert, we see that while there is far more TOFU traffic, it converts far worse.

Note: They only looked at one client, so take it with a grain of salt. However, the direction still aligns with other studies and our instincts.

This pattern also shows up across channels like PPC, which is why TOFU keywords are generally cheaper than BOFU.

The conversion rate is higher at the bottom of the funnel.

Now we’re seeing this shift carry over to generative engines, except that generative engines cover the TOFU journey almost entirely. 

Rather than clicking through a series of low-conversion content pieces as they move through the funnel, users stay inside the generative experience through TOFU and often MOFU, then click through or shift to another channel (search or direct) only when it’s time to convert.

For example, I asked ChatGPT to help me plan a trip to the Outer Banks.

After a dozen back-and-forths planning a trip and deciding what to eat, I wanted to find out where to stay.

That journey took me through many steps and gave me multiple chances to encounter different brands and filtering or refinement options. 

I eventually landed on my BOFU prompt, “Some specific companies would be great.” 

From there, I might click the links or search for the company names on Google.

What matters about this journey – apart from the fact that my final query would be practically useless as insight in something like Search Console – is that throughout the TOFU and MOFU stages, I was seeing citations and encountering brands I would rely on later. 

Once I switched into conversion mode, I wanted help making decisions. That’s where I’m likely to click through to a few companies to find a rental.

So, when we read statistics like Pew’s finding that AI Overviews reduce CTR by upwards of 50%, and then consider what happens when AI Mode hits the browser, it’s easy to worry about where your traffic goes. Add to that ChatGPT’s 700 million weekly active users (and growing).

Their research on how users engage with it shows a clear TOFU hit and very little BOFU usage.

So, on top of the ~50% hit you may be taking from AI Overviews, 700+ million people are going to ChatGPT and other generative platforms for their top-of-funnel needs. 

I did exactly that above with my trip planning to the OBX.

Dig deeper: 5 B2B content types AI search engines love

But wait!

The good news is that while that vacation rental company or blue widget manufacturer might not see me on their site when I’m figuring out what to do – or what a blue widget even is – I’m still going to take the same number of holidays and buy the same number of products I would have without AI Overviews or ChatGPT, Claude, Perplexity, etc.

Unless you’re a publisher or make money off impressions, you’ll still have the same amount of money to be made. 

It just might take fewer website visits to do it.

More about TOFU

Traffic at the bottom of the funnel is holding steady for now (more on that below), but the top of the funnel is being replaced quickly by generative conversations rather than visits. 

The question is whether being included in those conversations affects your CTR further down the funnel.

The folks at Seer Interactive found that organic CTR rose from 0.6% to 1.08% when a site was cited in AI Overviews.

And while the traffic was far lower, ChatGPT had a conversion rate of 16% compared with Google organic’s 1.8%.

If we look at the conversion rate for organic traffic at the bottom of the funnel – which we saw above – it was 4.78%. 

Users who engage with generative engines clearly get further into their decision-making than users who reach BOFU queries through organic search. 

But why?

While I can’t be certain, I agree with Seer’s conclusion that AI-driven users are pre-sold during the TOFU stage. 

They’ve already encountered your brand and trust the system to interpret their needs. When it’s time to convert, they’re almost ready with their credit card.

Why bottom-funnel stability won’t last much longer

Above, I noted that “traffic at the bottom of the funnel is holding steady for now.”

It’s only fair to warn you that through 2026 and 2027, we’ll likely see this erode. 

The same number of people will still travel and still buy blue widgets. 

They just won’t book or buy them themselves. And at best, attribution will be even worse than it is today.

I spoke at SMX Advanced last spring about the rise of AI agents. 

I won’t get into all the gory details here, but the CliffsNotes version is this:

Agents are AI systems with some autonomy that complete tasks humans otherwise would. 

They’re rising quickly – it’s the dominant topic for those of us working in AI – and that growth isn’t slowing anytime soon. You need to be ready.

A few concepts to familiarize yourself with, if you want to understand what’s coming, are:

  • AP2 (Agent Payments Protocol): A standard that allows agents to securely execute payments on your behalf. Think of it as a digital letter of credit that ensures the agent can only buy the specific “blue widget” you approved within the price limit you set. Before you say, “But I’d never send a machine to do a human’s job,” let me tell you, you will. And if you somehow prove me wrong individually out of spite, your customers will.
  • Gemini Computer Use Model API: A model with reasoning and image understanding that can navigate and engage with user interfaces like websites. While many agentic systems access data via APIs, this model (OpenAI has one too, as do others) lets the agent interact with visual interfaces to access information it normally couldn’t – navigating filters, logins, and more if given the power.
  • MCP (Model Context Protocol): An emerging standard acting as a universal USB port for AI apps. It lets agents safely connect to your internal data (like checking your calendar or reading your emails) to make purchasing decisions with full context and to work interactively with other agents (a schematic of the underlying exchange follows this list). Hat tip to Ahrefs for building an awesome MCP server.
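
To make MCP less abstract, here is a schematic of the JSON-RPC 2.0 exchange the protocol standardizes. The method names (initialize, tools/list, tools/call) come from the public MCP spec; the “rate_lookup” tool and its arguments are invented for illustration, and a real client would normally use an official SDK rather than hand-rolled messages.

```python
import json

# What an MCP client (an AI agent) sends to a server, expressed as raw JSON-RPC 2.0
# messages. Method names follow the MCP spec; the tool name and arguments are made up.
handshake = {
    "jsonrpc": "2.0", "id": 1, "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",  # example spec revision
        "clientInfo": {"name": "example-agent", "version": "0.1"},
        "capabilities": {},
    },
}
list_tools = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}
call_tool = {
    "jsonrpc": "2.0", "id": 3, "method": "tools/call",
    "params": {
        "name": "rate_lookup",  # hypothetical tool a rentals site might expose
        "arguments": {"location": "Outer Banks", "nights": 5},
    },
}

for message in (handshake, list_tools, call_tool):
    print(json.dumps(message))
```

The point for strategists: the agent discovers what your systems can do (tools/list) and then acts on them (tools/call) without ever rendering a web page.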

Dig deeper: How Model Context Protocol is shaping the future of AI and search marketing

Why do these protocols matter to a content strategist?

Because once AP2 and Computer Use hit critical mass, the click – that sacred metric we’ve optimized for two decades – changes function. 

It stops being a navigation step for a human exploring a website and becomes a transactional step for a machine executing a task.

If an agent uses Computer Use to navigate your pricing page and AP2 to pay for the subscription, the human user never sees your bottom-of-the-funnel content. 

So in that world, who – or rather, what – are you optimizing for?

This brings us back to the Siege Media data. 

Right now, pricing pages and calculators are winning because humans are using AI to research (TOFU and MOFU) and then manually visiting sites to convert (BOFU). 

But as agents take over execution, that manual visit disappears. The “traffic” to your pricing page may be bots verifying costs, not humans persuaded by your copy.

The 2026 strategy

This reality pushes value back up the funnel. 

If the agent handles the purchase, the human decision – the “moment of truth” – happens entirely inside the chat interface or agentic system during the research phase.

In this world, you don’t win by having the flashiest pricing page. 

You win by being the brand the LLM recommends when the user asks, “Who should I trust?”

Your strategy for 2026 requires a two-pronged approach:

  • For the agent (the execution): Ensure your BOFU content is technically flawless. Use clean schema, accessible APIs, and clear data structures so that when an agent arrives via MCP or Computer Use to execute a transaction, it encounters no friction (a minimal markup sketch follows this list).
  • For the human (the selection): Double down on TOFU. Focus on mentions and citations. You need to be the entity referenced in the generative answer so that users – and agents – trust you.
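
As a concrete example of what “clean schema” means for an agent, here is a minimal script that emits schema.org Product and Offer markup. The product, price, and URLs are placeholders; the exact properties worth exposing depend on your catalog.

```python
import json

# Minimal schema.org Product + Offer markup that an agent (or an AI Overview) can
# parse without scraping your page layout. All values below are placeholders.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Blue Widget Pro",
    "sku": "BW-PRO-001",
    "brand": {"@type": "Brand", "name": "ExampleCo"},
    "offers": {
        "@type": "Offer",
        "url": "https://www.example.com/blue-widget-pro",
        "price": "49.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# Embed the output in a <script type="application/ld+json"> tag in the page template.
print(json.dumps(product_jsonld, indent=2))
```

Machine-readable price and availability is exactly the friction-free data an executing agent needs; the persuasion happens upstream, in the mention.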

As we move toward 2026 and then 2027 (it’ll be here sooner than you think), the “click” will become a commodity more often handled by machines. 

The mention, however, remains the domain of human trust. And in my opinion, that’s where your next battle for visibility will be fought.

Time to start – or hopefully keep – making the TOFU.


How to evaluate your SEO tools in 2026 – and avoid budget traps


Evaluating SEO tools has never been more complicated. 

Costs keep rising, and promises for new AI features are everywhere.

This combination is hardly convincing when you need leadership to approve a new tool or expand the budget for an existing one. 

Your boss still expects SEO to show business impact – not how many keywords or prompts you can track, how fast you can optimize content, or what your visibility score is. 

That is exactly where most tools still fail miserably.

The landscape adds even more friction. 

Features are bundled into confusing packages and add-on models, and the number of solutions has grown sharply in the last 12 months. 

Teams can spend weeks or even months comparing platforms only to discover they still cannot demonstrate clear ROI or the tools are simply out of budget.

If this sounds familiar, keep reading.

This article outlines a practical framework for evaluating your SEO tool stack in 2026, focusing on:

  • Must-have features.
  • A faster way to compare multiple tools.
  • How to approach vendor conversations.

The new realities of SEO tooling in 2026

Before evaluating vendors, it helps to understand the forces reshaping the SEO tooling landscape – and why many platforms are struggling to keep pace.

Leadership wants MQLs, not rankings

Both traditional and modern SEO tools still center on keyword and prompt tracking and visibility metrics. These are useful, but they are not enough to justify the rising prices.

In 2026, teams need a way to connect searches to traffic and then to MQLs and revenue. 

Almost no tool provides that link, which makes securing larger budgets nearly impossible. 

(I say “almost” because I have not tested every platform, so the unicorn may exist somewhere.)

AI agents raise expectations

With AI platforms like ChatGPT, Claude, and Perplexity – along with the ability to build custom GPTs, Gems, and Agents – teams can automate a wide range of tasks. 

That includes everything from simple content rewriting and keyword clustering to more complex competitor analysis and multi-step workflows.

Because of this, SEO tools now need to explain why they are better than a well-trained AI agent. 

Many can’t. This means that during evaluation, you inevitably end up asking a simple question: do you spend the time training your own agent, or do you buy a ready-made one?

Small teams need automation that truly saves time

If you want real impact, your automation shouldn’t be cosmetic. 

You can’t rely on generic checklists or basic AI recommendations, yet many tools still provide exactly that – fast checklists with no context.

Without context, automation becomes noise. It generates generic insights that are not tailored to your company, product, or market, and those insights will not save time or drive results.

Teams need automation that removes repetitive work and delivers better insights while genuinely giving time back.

Dig deeper: 11 of the best free tools every SEO should know about

A note on technical SEO tools

Technical SEO tools remain the most stable part of the SEO stack. 

The vendor landscape has not shifted dramatically, and most major platforms are innovating at a similar pace. 

Because of this, they do not require the same level of reevaluation as newer AI-driven categories.

That said, budgeting for them may still become challenging. 

Leadership often assumes AI can solve every problem, but we know that without strong technical performance, SEO, content, and AI efforts can easily fail.

I will also make one bold prediction – we should be prepared to expect the unexpected in this category. 

These platforms can crawl almost any site at scale and extract structured information, which could make them some of the most important and powerful tools in the stack.

Many already pull data from GA and GSC, and integrating with CRM or other data platforms may be only a matter of time. 

I see that as a likely 2026 development.

What must-have features actually look like in 2026

To evaluate tools effectively, it helps to focus on the capabilities that drive real impact. These are the ones worth prioritizing in 2026.

Advanced data analysis and blended data capabilities

Data analysis will play a much bigger role. 

Tools that let you blend data from GA, GSC, Salesforce, and similar sources will move you closer to the Holy Grail of SEO – understanding whether a prompt or search eventually leads to an MQL or a closed-won deal. 

This will never be a perfect science, but even a solid guesstimate is more useful than another visibility chart.
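
As a rough illustration of what blended data looks like in practice, here is a pandas sketch that joins landing-page clicks exported from GSC with MQLs exported from a CRM. The file names, column names, and join key are assumptions about your exports, not a standard.

```python
import pandas as pd

# Hypothetical exports -- adjust file names and columns to match your own reports.
gsc = pd.read_csv("gsc_landing_pages.csv")   # columns: page, clicks, impressions
crm = pd.read_csv("crm_mqls.csv")            # columns: landing_page, mqls, closed_won

blended = gsc.merge(crm, left_on="page", right_on="landing_page", how="left").fillna(0)
blended["mqls_per_1k_clicks"] = 1000 * blended["mqls"] / blended["clicks"].clip(lower=1)

# Pages that earn clicks but never show up in pipeline are the ones to investigate.
print(blended.sort_values("mqls_per_1k_clicks").head(10))
```

Even this crude join answers a question no visibility chart can: which pages earn traffic that the business never sees again.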

Integration maturity is becoming a competitive differentiator. 

Disconnected data remains the biggest barrier between SEO work and business attribution.

SERP intelligence for keywords and prompts

Traditional SERP intelligence remains essential. You still need:

  • Topic research and insights for top-ranking pages.
  • Competitor analysis.
  • Content gap insights.
  • Technical issues and ways to fix them.

You also need AI SERP intelligence, which analyzes:

  • How AI tools answer specific prompts.
  • What sources they cite.
  • Whether your brand appears and whether your competitors are also mentioned.

In an ideal world, these two groups should appear side by side and provide you with a 360-degree view of your performance.

Automation with real-time savings

Prioritize tools that:

  • Cluster automatically.
  • Detect anomalies.
  • Provide prioritized recommendations for improvements.
  • Turn data into easy-to-understand insights.

These are just some examples of practical AI that can genuinely guide you and save you time.

Strong multilingual support

This applies to SEO experts who work with websites in languages other than English. 

Many tools are still heavily English-centric. Before choosing a tool, make sure the databases, SERP tracking, and AI insights work across languages, not just English.

Transparent pricing and clear feature lists

Hidden pricing, confusing bundles, and multiple add-ons make evaluation frustrating. 

Tools should communicate clearly:

  • Which features they have.
  • All related limitations.
  • Whether a feature is part of the standard plan or an add-on.
  • When something from the standard plan moves to an add-on. 

Many vendors change these things quietly, which makes the required investment hard to calculate and even harder to justify.

Dig deeper: How to choose the best AI visibility tool

Plus, some features that might be overhyped

AI writing

If you can’t input detailed information about your brand, product, and persona, the content you produce will be the same as everyone else’s. 

Many tools already offer this and can make your content sound as if it were written by one of your writers. 

So the question is whether you need a specialized tool or if a custom GPT can do the job.

Prompt tracking 

It’s positioned as the new rank tracking, but it is like looking at one pixel of your monitor. 

It gives you only a tiny clue of the whole picture. 

AI answers change based on personalization and small differences in prompts, and the variations are endless.

Still, this tactic is helpful in:

  • Providing directional signals.
  • Helping you benchmark brand presence.
  • Highlighting recurring themes AI platforms use.
  • Allowing competitive analysis within a controlled sample.

Large keyword databases

They still matter for directional research, but are not a true competitive differentiator. 

Most modern tools have enough coverage to guide your strategy. 

The value now stems from the practical insights derived from the data.

How to compare 10 tools without wasting your time

Understanding features is only half the equation. 

The real challenge is knowing how to evaluate specialized tools and all-in-one platforms without losing your sanity or blocking your team for weeks. 

After going through this process for the tenth time, I’ve found an approach that works for me.

Step 1: Start with the pricing page

I always begin my evaluation on the pricing page. 

With one page, you can get a clear sense of: 

  • All features.
  • Limitations.
  • Which ones fall under add-ons.
  • The general structure of the pricing tiers. 

Even if you need a demo to get the exact price, the framework should still be relatively transparent.

Step 2: Test using your normal weekly work

No checklist will show you more than trying your regular BAU tasks with a couple of tools in parallel. 

This reveals:

  • How long each task takes.
  • What insights appear or disappear.
  • What feels smoother or more clunky.
  • How difficult the setup is – including whether the learning curve is huge.

I work in a small team, and a tool that takes many hours just to set up likely will not make my final list.

Not all evaluations can rely on BAU tasks. 

For example, when we researched tools for prompt and AI visibility tracking, we tested more than ten platforms. 

This capability did not exist in our stack, and at first, we had no idea what to check. 

In those cases, you need to define a small set of test scenarios from scratch and compare how each tool performs. 

Continue refining your scenarios, because each new evaluation will teach you something new.

Dig deeper: Want to improve rankings and traffic? Stop blindly following SEO tool recommendations

Step 3: Always get a free trial

Demos are polished. Reality often is not. 

If there is no option for a free trial, either walk away or, if the tool is not too expensive, pay for a month.

Step 4: Involve only the people who will actually use the tool

Always ask yourself who truly needs to be involved in the evaluation. 

For example, we are currently assessing a platform used not only by the SEO team but also by two other teams. 

We asked those teams for a brief summary of their requirements, but until we have a shortlist, there is no reason to involve them further or slow the process. 

And if your company has a heavy procurement or security review, involving too many people too early will slow everything down even more.

At the same time, involve the whole SEO team, because each person will see different strengths and weaknesses and everyone will rely on the tool.

Step 5: Evaluate results, not features

Many features sound like magic wands. 

In reality, the magic often works only sometimes, or it works but is very expensive. To understand what you truly need, always ask yourself:

  • Did the tool save time?
  • Did it surface insights that my current stack does not?
  • Could a custom GPT do this instead?
  • Does the price make sense for my team, and can I prove its ROI?

These questions turn the decision into a business conversation rather than a feature debate and help you prepare your “sales” pitch for your boss.

Step 6: Evaluate support quality, not just product features

Support has become one of the most overlooked parts of tool evaluation. 

Many platforms rely heavily on AI chat and automated replies, which can be extremely frustrating when you are dealing with a time-sensitive issue or have to explain your problem multiple times.

Support quality can significantly affect your team’s efficiency, especially in small teams with limited resources. 

When evaluating tools, check:

  • How easy it is to reach a human.
  • What response times look like.
  • Whether the vendor offers onboarding or ongoing guidance. 

A great product with weak support can quickly become a bottleneck.

Once you have a shortlist, the quality of your vendor conversations will determine how quickly you can move forward. 

And this may be the hardest part – especially for the introverted SEO leads, myself included.

How to navigate vendor conversations

I’m practical, and I don’t like wasting anyone’s time. I have plenty of tasks waiting, so fluff conversations aren’t helpful. 

That’s why I start every vendor call by setting clear goals, limitations, a timeline, and next steps. 

Over time, I’ve learned that conversations run much more smoothly when I follow a few simple principles.

Be prepared for meetings

If you are evaluating a tool, come prepared to the demo. 

Ideally, you should have access to a free trial, tested the platform, and created a list of practical questions. 

Showing up unprepared is not a good sign, and that applies to both sides.

For example, I am always impressed when a vendor joins the conversation having already researched who we are, what we do, and who our competitors are. 

If you have spoken with the vendor before, directly ask what has changed since your last discussion.

Ask for competitor comparisons

When comparing a few tools, I always ask each vendor for a direct comparison. 

These comparisons will be biased, but collecting them from all sides can reveal insights I had not considered and give me ideas for specific things to test. 

Often, there is no reason to reinvent the wheel.

Ask how annual contracts influence pricing

Annual contracts reduce administrative work and give vendors room to negotiate, which can lead to better pricing. 

Many tools include this information on their pricing pages, and we have all seen it. 

Ask about any other nuances that might affect the final price – such as additional user seats or add-ons.

Don’t start from scratch with vendors you know

Often, the most effective approach is simply to say:

“This is our budget. This is what we need. Can you support this?”

This works especially well with vendors you have used before because both sides already know each other.

What to consider from a business perspective

Even if you select a tool, that does not mean you will receive the budget for it.

Proving ROI is especially difficult with SEO tools. But there are a few things you can do to increase your chances of getting a yes.

Present at least three alternatives in every request

This shows you have done your homework, not just picked the first thing you found. Present your leadership with:

  • The criteria you used in your evaluation.
  • Pros and cons of each tool.
  • The business case and why the capability is needed.
  • What happens if you do not buy the tool.

Providing this view builds trust in your ability to make decisions.

Avoid overselling

Tools improve efficiency, but they cannot guarantee outcomes – especially in SEO, GEO, or whatever you call it. 

Spend time explaining how quickly things are changing and how many factors are outside your control. Managing expectations will strengthen your team’s credibility.

But even with thorough evaluation and negotiation, we still face the same issue: the SEO tooling market has not caught up with what companies now expect. 

Let’s hope the future brings something closer to the clarity we see in Google Ads.

Dig deeper: How to master the enterprise SEO procurement process

The future of the SEO tool stack

The next generation of SEO tools must move beyond vanity metrics. 

Trained AI agents and custom GPTs can already automate much of the work.

In a landscape where companies want to reduce employee and operational costs, you need concrete business numbers to justify high tool prices. 

The platforms that can connect searches, traffic, and revenue will become the new premium category in SEO technology.

For now, most SEO teams will continue to hear “no” when requesting budgets because that connection does not yet exist. 

And the moment a tool finally solves this attribution problem, it will redefine the entire SEO technology market.

Read more at Read More

AI tools for PPC, AI search, and social campaigns: What’s worth using now


In 2026 and well beyond, a core part of the performance marketer’s charter is learning to leverage AI to drive growth and efficiency. 

Anyone who isn’t actively evaluating new AI tools to improve or streamline their PPC work is doing their brand or clients a disservice.

The challenge is that keeping up with these tools has become almost a full-time job, which is why my agency has made AI a priority in our structured knowledge-sharing. 

As a team, we’ve homed in on favorites across creative, campaign management, and AI search measurement.

This article breaks down key options in each category, with brief reviews and a callout of my current pick.

One overarching recommendation before we dive in: be cautious about signing long-term contracts for AI tools or platforms. 

At the pace things are moving, the tool that catches your eye in December could be an afterthought by April.

AI creative tools for paid social campaigns

There’s no shortage of tools that can generate creative assets, and each comes with benefits as well as the risks of producing AI slop. 

Regardless of the tool you choose, it must be thoroughly vetted and supported by a strong human-in-the-loop process to ensure quality, accuracy, and brand alignment.

Here’s a quick breakdown of the tools we’ve tested:

  • AdCreative.ai: Auto-generates images, video creatives, ad copy, and headlines in multiple sizes, with data-backed scoring for outputs.
  • Creatify: Particularly strong on video ads with multi-format support.
  • WASK: Combines AI creative generation with campaign optimization and competitor analysis.
  • Revid AI: Well-suited for story formats.
  • ChatGPT: Free and widely familiar, giving marketers an edge in effective prompting.

Our current tool of choice is AdCreative.ai. It’s easy to use and especially helpful for quickly brainstorming creative angles and variations to test. 

Like its competitors, it offers meaningful advantages, including:

  • Speed and scale that allow you to generate dozens or hundreds of variants in minutes to keep creative fresh and reduce ad fatigue.
  • Less reliance on external designers or editors for routine or templated outputs.
  • Rapid creative experimentation across images, copy, and layouts to find winning combinations faster.
  • Data-driven insights, such as creative scores or performance predictions, when available.

The usual caveats apply across all creative tools:

  • Build guardrails to avoid off-brand outputs by maintaining a strong voice guide, providing exemplar content, enforcing style rules and banned words, and ensuring human review at every step.
  • Watch for accuracy issues or hallucinations and include verification in your process, especially for technical claims, data, or legal copy. 

Dig deeper: How to get smarter with AI in PPC

AI campaign management and workflow tools for performance campaigns

There are plenty of workflow automation tools on the market, including long-standing options like Zapier, Workato, and Microsoft Power Automate.

Our preferred choice, though, is n8n. Its agentic workflows and built-in connections across ad platforms, CRMs, and reporting tools have been invaluable in automating redundant tasks.

Here are my agency’s primary use cases for n8n:

  • Lead management: Automatically enrich new leads from HubSpot or Salesforce with n8n’s Clearbit automation, then route them to the right rep or nurture sequence.
  • UTM cleanup: When a form fill or ad conversion comes in, automatically normalize UTM parameters before pushing them to your CRM (see the sketch after this list). Some systems, like HubSpot, store the raw landing URL in fields such as “first URL seen” without parsing it into UTM fields, so the UTMs stay associated with the user but aren’t stored cleanly and need reconciliation.
  • Data reporting: Pull metrics from APIs, structure the data, and use AI to summarize insights. Reports can then be shared via Slack and email, or dropped into collaborative tools like Google Docs.
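
For the UTM cleanup use case, the normalization step itself is simple. Here is a sketch of the kind of function you would run before pushing values to the CRM; n8n’s Code node is JavaScript-first, so treat this Python version as the logic to port, and note the example URL is illustrative.

```python
from urllib.parse import urlparse, parse_qs

UTM_KEYS = ("utm_source", "utm_medium", "utm_campaign", "utm_term", "utm_content")

def extract_utms(first_url_seen: str) -> dict:
    """Pull UTM parameters out of a raw landing URL and normalize casing and whitespace."""
    query = parse_qs(urlparse(first_url_seen).query)
    return {key: query[key][0].strip().lower() for key in UTM_KEYS if key in query}

# Example: the kind of value HubSpot stores in "first URL seen".
url = "https://www.example.com/pricing?utm_source=LinkedIn&utm_medium=Paid-Social&utm_campaign=Q1_Demo"
print(extract_utms(url))
# {'utm_source': 'linkedin', 'utm_medium': 'paid-social', 'utm_campaign': 'q1_demo'}
```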

As with any tool, n8n comes with caveats to keep in mind:

  • It requires some technical ability because it’s low-code, not no-code. You often need to understand APIs, JSON, and authentication, such as OAuth or API keys. Even basic automations may involve light logic or expressions. Integrations with less mainstream tools can require scripting.
  • You need a deliberate setup to maintain security. There’s no built-in role-based access control in all configurations unless you use n8n Cloud Enterprise. Misconfigured webhooks can expose data if not handled properly.
  • Its ad platform integrations aren’t as broad as those of some competitors. For example, it doesn’t include LinkedIn Ads, Reddit Ads, or TikTok Ads. These can be added via direct API calls, but that takes more manual work.

Dig deeper: Top AI tools and tactics you should be using in PPC

AI search visibility measurement tools

Most SEOs already have preferred platforms for measurement and insights – Semrush, Moz, SE Ranking, and others. 

While many now offer reports on brand visibility in AI search results from ChatGPT, Perplexity, Gemini, and similar tools, these features are add-ons to products built for traditional SEO.

To track how our brands show up in AI search results, we use Profound. 

While other purpose-built tools exist, we’ve found that it offers differentiated persona-level and competitor-level analysis and ties its reporting to strategic levers like content, PR, and sentiment, making it clear how to act on the data.

These platforms can provide near real-time insights such as:

  • Performance benchmarks that show AI visibility against competitors to highlight strengths and weaknesses.
  • Content and messaging intel, including the language AI uses to describe brands and their solutions, which can inform thought leadership and messaging refinement.
  • Signals that show whether your efforts are improving the consistency and favorability of brand mentions in AI answers.
  • Trends illustrating how generative AI is reshaping search results and user behavior.
  • Insights beyond linear keyword rankings that reveal the narratives AI models generate about your company, competitors, and industry.
  • Gaps and opportunities to address to influence how your brand appears in AI answers.

No matter which tool you choose, the key is to adopt one quickly. 

The more data you gather on rapidly evolving AI search trends, the more agile you can be in adjusting your strategy to capture the growing share of users turning to AI tools during their purchase journey.

Dig deeper: Scaling PPC with AI automation: Scripts, data, and custom tools

What remains true as the AI toolset keeps shifting

I like to think most of my content for this publication ages well, but I’m not expecting this one to follow suit. 

Anyone reading it a few months after it runs will likely see it as more of a time capsule than a set of current recommendations – and that’s fine.

What does feel evergreen is the need to:

  • Monitor the AI landscape.
  • Aggressively test new tools and features.
  • Build or maintain a strong knowledge-sharing function across your team. 

We’re well past head-in-the-sand territory with AI in performance marketing, yet there’s still room for differentiation among teams that move quickly, test strategically, and pivot together as needed.

Dig deeper: AI agents in PPC: What to know and build today


Think different: The Positionless Marketing manifesto by Optimove

In 1997, Apple launched a campaign that became cultural gospel. “Think Different” celebrated the rebels, the misfits, the troublemakers. The ones who saw things differently. The ones who changed the world. 

Apple understood something fundamental: the constraints that limited imagination weren’t real. They were inherited. Accepted. Assumed. And the people who broke through weren’t smarter or more talented. They simply refused to believe the constraints applied to them. 

Twenty-eight years later, marketing faces its own Think Different moment. 

The constraints are gone. Technology has removed them. AI can generate infinite variants. Data platforms deliver real-time insights. Orchestration tools coordinate across every channel instantly. The infrastructure that once required armies of specialists, weeks of coordination and endless approvals now exists in platforms accessible to any marketer willing to learn them. 

Yet most marketers still operate as if the box exists. 

They wait for the data team to run the analysis. They wait for creative to deliver the assets. They wait for engineering to build the integration. They operate within constraints that technology has already eliminated, not because they must, but because assembly-line marketing taught them that’s how it worked. 

Creative waits for data. Campaigns wait for creative. Launch waits for engineering. Move from station to station. Hand off to the next department. That was the assembly line. That was the box. 

And that box is gone. But the habits remain.  

Here’s to the marketers who refuse to wait for approval

The ones who see a customer signal at 3 p.m. and launch a personalized journey by 4 p.m., not because they asked permission but because the customer needed it now. 

The ones who don’t send briefs to three different teams. They access the data, generate the creative and orchestrate the campaign themselves. Not because they’re trying to eliminate specialists, but because waiting days for what they can deliver in hours wastes the moment. 

The ones who run experiments constantly, not occasionally. Who test 10 variants instead of two. Who measure lift instead of clicks. Who know that perfect insight arrives through iteration, not through analysis paralysis. 

Here’s to the ones who see campaigns where others see dependencies 

They don’t see a handoff to the analytics team. They see customer data they can access instantly to understand behavior, predict intent and target precisely. 

They don’t see a creative approval process. They see AI tools that generate channel-ready assets in minutes, allowing them to personalize at scale rather than compromise for efficiency. 

They don’t see an engineering backlog. They see orchestration platforms that automate journeys, test variations and optimize outcomes without a single ticket. 

They’re not reckless. They’re not cowboys  

They’re simply operating at the speed technology now enables, constrained only by strategy and judgment rather than structure and process.  

This is what Positionless Marketing means: Wielding Data Power, Creative Power and Optimization Power simultaneously. Not because you’ve eliminated everyone else, but because technology eliminated the dependencies that once made those handoffs necessary. 

And here’s what most people miss: This isn’t just about speed. It’s about potential 

When marketers were constrained by assembly-line marketing infrastructure, their job was to manage the line. Write the brief. Coordinate the teams. Navigate the approvals. Wait for each station to finish its work. The marketer’s skill was project management. Their value was orchestrating others. 

Now? Your job in marketing has changed entirely 

Your job is no longer to manage process. Your job is to enable potential. To help every person on your team (and yourself) realize what they’re capable of when the constraints disappear. To show them that the data they’ve been waiting for is accessible now. That the creative they’ve been briefing can be generated instantly. That the campaigns they’ve been coordinating can be orchestrated autonomously.  

Teach people to think outside the box by showing them there is no longer a box 

The data analyst who only ran reports can now build predictive models and operationalize them in real time. The campaign manager who only coordinated handoffs can now design, test and optimize end-to-end journeys independently. The creative strategist who only wrote briefs can now generate and deploy assets across every channel. 

This is the revolution: not that technology does the work, but that technology removes the barriers that prevented people from doing work they were always capable of. 

The misfits and rebels of 1997 saw possibilities where others saw limitations. They refused to accept that things had to be done the way they’d always been done. 

The Positionless Marketers of today are doing the same thing 

They’re refusing to wait when customers need action now. They’re refusing to accept that insight takes weeks when platforms deliver it in seconds. They’re refusing to operate within constraints that technology has already eliminated. 

They’re thinking differently. Not because they’re trying to be difficult. But because the old way of thinking no longer matches the new reality of what’s possible. 

In 1997, Apple told us: “The people who are crazy enough to think they can change the world are the ones who do.”  

In 2025, the people crazy enough to think they can deliver personalized experiences at scale, launch campaigns in hours instead of weeks, and operate without dependencies are the ones who will. 

The constraints are gone. 

The assembly-line marketing box can no longer exist. 


Google Search Console performance reports add weekly and monthly views


Google added weekly and monthly views to Search Console performance reports. These options give you clearer, longer-term insights instead of relying only on the 24-hour view.

What it looks like. Here are a few photos I took during the announcement at the Google Search Central event in Zurich this morning:

Why we care. This small update gives SEOs, publishers, and site owners access to more detailed data. It can help you pinpoint why your performance shifted in a specific month, week, or day.


Judge limits Google’s default search deals to one year

Google is being forced to cap all default search and AI app deals at one year. This will end the long-term agreements (think: Apple, Samsung) that helped secure its default status on billions of devices. Just don’t expect this to end Google’s search dynasty anytime soon.

Driving the news. Judge Amit Mehta on Friday called the one-year cap a “hard-and-fast termination requirement” needed to enforce antitrust remedies after his 2024 ruling that Google illegally monopolized search and search ads, Business Insider reported. In September, Mehta ruled on Google search deals:

  • “Google will be barred from entering or maintaining any exclusive contract relating to the distribution of Google Search, Chrome, Google Assistant, and the Gemini app. Google shall not enter or maintain any agreement that
    • (1) conditions the licensing of the Play Store or any other Google application on the distribution, preloading, or placement of Google Search, Chrome, Google Assistant, or the Gemini app anywhere on a device;
    • (2) conditions the receipt of revenue share payments for the placement of one Google application (e.g., Search, Chrome, Google Assistant, or the Gemini app) on the placement of another such application;
    • (3) conditions the receipt of revenue share payments on maintaining Google Search, Chrome, Google Assistant, or the Gemini app on any device, browser, or search access point for more than one year; or
    • (4) prohibits any partner from simultaneously distributing any other GSE, browser, or GenAI product.”

Why we care. A more fragmented search landscape means user queries could start anywhere. If AI-powered rivals like OpenAI, Perplexity, or Microsoft make even small gains in search, you’ll face a broader and more complicated world to compete in.

Reality check. This is a speed bump, not a shake-up. Google’s cash, brand power, and user habits still give it a big edge in yearly talks.


Google denies ads are coming to Gemini in 2026

Adweek reported that Google told clients it plans to add ads to its Gemini AI chatbot in 2026, but Google’s top ads executive is publicly denying it.

Driving the news. Google reps reportedly told major advertisers on recent calls that Gemini would get its own ad placements in 2026, according to Adweek. This is separate from the ads already running in AI Mode, the AI-powered search experience Google launched in March.

  • Buyers said they saw no prototypes, formats, or pricing.
  • They described the conversations as exploratory and light on technical detail.

Google says that’s wrong. Dan Taylor, Google’s VP of Global Ads, disputed the report directly on X, writing:

  • “This story is based on uninformed, anonymous sources who are making inaccurate claims. There are no ads in the Gemini app and there are no current plans to change that.”

Why we care. Advertisers are watching closely for monetization inside AI assistants, which many see as the next major ad frontier. Conflicting signals about ads in Gemini hint at where Google may take AI monetization, even as the company denies any immediate plans. Any move to add paid placements to a high-engagement chatbot could reshape budgets, shift user behavior, and create a new ad surface separate from search.

Between the lines. There is a great debate over whether AI chatbots should stay pure utility tools or evolve into new ad surfaces. Even early speculation about ads inside Gemini is already prompting agencies to start planning.

What’s next. For now, Google says Gemini is still ad-free. But rivals are already testing ways to make money from AI, and advertisers are eager for new places to run ads. The debate over ads in Gemini isn’t going away – only the timeline is shifting.

Adweek’s report. EXCLUSIVE: Google Tells Advertisers It’ll Bring Ads to Gemini in 2026


November 2025 Digital Marketing Roundup: What Changed and What You Should Do About It

November pushed the industry further into AI-shaped discovery. Search behaviors shifted. Platforms tightened control. Visibility started depending less on who publishes most and more on who earns trust across the ecosystem.

AI summaries reached Google Discover. ChatGPT released a browser. TikTok exposed true attribution paths. Meta refined placements. Google rolled out guardrails for AI-written ads. Social platforms changed how your data trains models. Streaming dominated households, and schema picked up a new strategic role.

Here’s what mattered most and how to stay ahead.

Key Takeaways

  • AI is rewriting the click path. Google Discover summaries and AI Overviews are reducing CTRs across categories.
  • Cross-channel influence is becoming measurable. TikTok attribution now shows how much value standard reporting misses.
  • Visibility depends on authority across ecosystems, not just your site. LLMs pull from places brands often ignore.
  • Platforms are tightening data controls and usage rules. Expect stricter compliance requirements across ads and content.
  • Structured data has moved from “SEO extra” to critical infrastructure for AI-driven search.

Search & AI Evolution

AI is now shaping what users see before they click and, in many cases, removing the need to click at all.

AI summaries hit Google Discover

Google added AI-generated recaps to Discover for news and sports stories. Users now get context from summaries instead of visiting publisher sites.

Our POV: Discover has been one of the few remaining high-intent traffic drivers untouched by AI. That buffer is gone. Zero-click consumption will rise.

What to do next: Track Discover CTR in Search Console. Refresh headline structure and imagery to compete with summaries. Expand content distribution beyond traditional articles, since Discover now surfaces YouTube, X, and other formats.
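
If you want that Discover CTR trend outside the UI, the Search Console API exposes Discover data through its Search Analytics query method. A minimal sketch, assuming a service-account key with access to the property and the google-api-python-client and google-auth packages installed; the property URL and dates are placeholders.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Assumes a service-account JSON key that has been granted access to the property.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("searchconsole", "v1", credentials=creds)

response = service.searchanalytics().query(
    siteUrl="sc-domain:example.com",  # placeholder property
    body={
        "startDate": "2025-10-01",
        "endDate": "2025-11-30",
        "type": "discover",           # pull Discover data instead of web search
        "dimensions": ["date"],
    },
).execute()

for row in response.get("rows", []):
    clicks, impressions = row["clicks"], row["impressions"]
    ctr = clicks / impressions if impressions else 0.0
    print(row["keys"][0], f"CTR={ctr:.2%}")
```

Pulling this daily into a sheet makes it obvious whether AI recaps are eroding your CTR or just reshuffling it.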

ChatGPT releases an AI-powered browser

ChatGPT Atlas launched with built-in summarization, product comparison, agent actions, and persistent memory settings.

ChatGPT Atlas's interface.

Our POV: The browser itself isn’t the threat. The shift in user behavior is. People will expect AI to interpret pages for them, not just display them.

What to do next: Strengthen structured data. Audit category and product pages for clarity. Start monitoring brand visibility inside AI-driven search using LLM-aware tools.

AI Overviews drive a drop in search CTRs

A new study shows that when AI Overviews appear, both organic and paid clicks fall sharply. They currently trigger for about 15% of queries, most of them high-volume informational searches.

Paid and organic CTR trends driven by AI Overviews.

Our POV: AI Overviews function like a competitor. If your content doesn’t get pulled into the summary, discovery becomes significantly harder.

What to do next: Optimize for inclusion. Use schema, succinct summaries, and expert signals. Track performance beyond rankings. Visibility inside AI answers must become a KPI you can track through tools like Profound.

Schema’s new role in AI-driven discovery

Schema moved from a snippet enhancer to a foundational layer for machine understanding. W3C’s NLWeb group is helping standardize how AI agents consume the web.

Our POV: Schema is now infrastructure. AI agents need structured context to interpret brands, products, and expertise.

What to do next: Expand schema sitewide. Prioritize entity definitions, not just rich result templates. Add relationships between key content pieces to help machines map authority.
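
Here is a sketch of what entity definitions and cross-content relationships look like in JSON-LD: an Organization node that an Article references by a stable @id. All names and URLs below are placeholders.

```python
import json

# Two nodes in one @graph: a brand entity and an article that points back to it.
# Stable "@id" URLs are what let machines stitch your content into one entity map.
graph = {
    "@context": "https://schema.org",
    "@graph": [
        {
            "@type": "Organization",
            "@id": "https://www.example.com/#org",
            "name": "ExampleCo",
            "url": "https://www.example.com/",
            "sameAs": [
                "https://en.wikipedia.org/wiki/ExampleCo",
                "https://www.linkedin.com/company/exampleco",
            ],
        },
        {
            "@type": "Article",
            "@id": "https://www.example.com/blog/ai-search-guide#article",
            "headline": "How AI Search Reads Your Site",
            "author": {"@type": "Person", "name": "Jane Doe"},
            "publisher": {"@id": "https://www.example.com/#org"},  # relationship by reference
        },
    ],
}

print(json.dumps(graph, indent=2))
```

The sameAs links pointing to third-party profiles are what tie your site’s entity to the external sources AI systems already lean on.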

Paid Media & Automation

Platforms are folding more automation into ad delivery. Control now comes from strategy, not settings.

Google adds Waze to PMax

PMax can now serve location-targeted ads inside Waze for store-focused campaigns.

Our POV: This extends real-world intent targeting. For multi-location brands, Waze becomes a measurable foot-traffic lever.

What to do next: Audit store listings and geo-extensions. Monitor budget shifts once Waze impressions begin flowing. Validate whether foot-traffic lifts justify expanded proximity targeting.

Asset-level display reporting rolls out

Google Ads added per-asset reporting for Display campaigns. Marketers can now evaluate individual images, headlines, and copy.

Our POV: Better visibility helps refine creative, but it’s only part of the truth. Placement, bid strategy, and audience still determine performance.

What to do next: Organize assets with naming conventions before rollout hits your account. Use data to retire low-impact creatives and test new variants.

Meta introduces limited-spend placements

Advertisers can allocate up to 5% of budget toward excluded placements when Meta predicts performance upside.

Our POV: This creates a middle ground between strict exclusions and Advantage+ automation. It reduces risk without cutting off potential high-efficiency wins.

What to do next: A/B test manual vs. limited-spend placement setups. Evaluate cost per result and incremental conversions instead of pure CPM efficiency.

Social & Content Trends

Brands are being pushed into new storytelling styles, shaped by identity, utility, and AI-assisted behaviors.

Lifestyle branding gains momentum

Consumers are gravitating toward brands tied to identity and aspiration. Affordable luxury and status signaling are driving engagement.

Our POV: Features alone don’t move people. Identity and belonging do. If your copy focuses only on product attributes, you’re leaving impact on the table.

What to do next: Rework product messaging to show how your offering fits into a buyer’s desired lifestyle. Update CTAs, social captions, and headlines to evoke identity.

LLM-briefed CTAs redefine engagement

CXL tested CTAs that include a ready-made prompt for ChatGPT. Engagement improved because users received higher-quality AI outputs.

An example of an LLM-informed CTA.

Our POV: As users ask AI to interpret brand content, shaping the question becomes part of conversion optimization.

What to do next: Experiment with prompt-style CTAs in guides, templates, and tools. Test which phrasing drives more accurate and useful AI interpretations.

Influencer partners expand beyond typical creators

Brands are leaning into unconventional creators; think niche experts, offbeat personalities, and micro-communities.

Our POV: As traditional influencer pools saturate, originality becomes a differentiator.

What to do next: Identify unexpected storytellers your competitors ignore. Prioritize people with unique voices and strong community trust over polished aesthetics.

PR, Reputation & Brand Risk

Data control, AI training, and brand representation became major flashpoints in November.

Reddit files legal action over AI scraping

Four companies allegedly scraped Reddit content through Google search results instead of its paid API. Reddit is suing.

Our POV: Reddit is a major training source for LLMs. Legal pressure will reshape how models access user-generated content.

What to do next: Monitor how your brand appears in Reddit threads. Insights from these conversations often influence AI outputs, even indirectly.

LinkedIn will use member data to train AI

LinkedIn updated its policy to allow profile content and posts to train in-house models unless users opt out.

Our POV: This raises transparency questions and could affect brand safety for professional voices.

What to do next: Review employee account settings. Update your governance policies to clarify how team-generated content may be reused.

ChatGPT reduces brand mentions

ChatGPT lowered brand references per response while elevating trusted entities like Wikipedia and Reddit.

A graphic showing reduced brand mentions by ChatGPT.

Our POV: Authority now comes from third-party validation, not just your site. If you’re missing from high-trust platforms, AI tools won’t surface you consistently.

What to do next: Strengthen your presence on Wikipedia, industry directories, and review platforms. Build citations that AI models depend on.

AI search tools mention different brands for the same queries

BrightEdge found almost zero overlap between brands recommended by Google’s AI Overview and ChatGPT.

Our POV: Each model prioritizes different signals based on its training data. Ranking in one environment doesn’t guarantee visibility in another.

What to do next: Expand Digital PR efforts beyond search. Build authority in the sources each LLM favors.

Streaming & Media Shifts

Streaming hits 91% of U.S. households

Homes now average six subscriptions and spend more than $100 per month on streaming.

Our POV: Streaming is now a core channel for shaping intent long before search happens.

What to do next: Add OTT to your awareness mix. Use it to influence demand before users reach paid search or social ads.

Conclusion

AI pushed every channel toward greater automation, heavier reliance on structure, and stricter expectations for authority. Success now depends on clarity, credibility, and presence across platforms that train and inform AI, not just traditional search engines.

Brands that adapt their data, content, and distribution strategies now will stay visible as user behavior shifts.

Need help applying these insights? Talk to the NP Digital team. We’re already working with brands to navigate these changes and rebuild visibility in an AI-first world.
