How to evaluate your SEO tools in 2026 – and avoid budget traps

Evaluating SEO tools has never been more complicated. 

Costs keep rising, and promises for new AI features are everywhere.

This combination is hardly convincing when you need leadership to approve a new tool or expand the budget for an existing one. 

Your boss still expects SEO to show business impact – not how many keywords or prompts you can track, how fast you can optimize content, or what your visibility score is. 

That is exactly where most tools still fail miserably.

The landscape adds even more friction. 

Features are bundled into confusing packages and add-on models, and the number of solutions has grown sharply in the last 12 months. 

Teams can spend weeks or even months comparing platforms only to discover they still cannot demonstrate clear ROI or the tools are simply out of budget.

If this sounds familiar, keep reading.

This article outlines a practical framework for evaluating your SEO tool stack in 2026, focusing on:

  • Must-have features.
  • A faster way to compare multiple tools.
  • How to approach vendor conversations.

The new realities of SEO tooling in 2026

Before evaluating vendors, it helps to understand the forces reshaping the SEO tooling landscape – and why many platforms are struggling to keep pace.

Leadership wants MQLs, not rankings

Both traditional and modern SEO tools still center on keyword and prompt tracking and visibility metrics. These are useful, but they are not enough to justify the rising prices.

In 2026, teams need a way to connect searches to traffic and then to MQLs and revenue. 

Almost no tool provides that link, which makes securing larger budgets nearly impossible. 

(I say “almost” because I have not tested every platform, so the unicorn may exist somewhere.)

AI agents raise expectations

With AI platforms like ChatGPT, Claude, and Perplexity – along with the ability to build custom GPTs, Gems, and Agents – teams can automate a wide range of tasks. 

That includes everything from simple content rewriting and keyword clustering to more complex competitor analysis and multi-step workflows.

Because of this, SEO tools now need to explain why they are better than a well-trained AI agent. 

Many can’t. This means that during evaluation, you inevitably end up asking a simple question: do you spend the time training your own agent, or do you buy a ready-made one?

Small teams need automation that truly saves time

If you want real impact, your automation shouldn’t be cosmetic. 

You can’t rely on generic checklists or basic AI recommendations, yet many tools still provide exactly that – fast checklists with no context.

Without context, automation becomes noise. It generates generic insights that are not tailored to your company, product, or market, and those insights will not save time or drive results.

Teams need automation that removes repetitive work and delivers better insights while genuinely giving time back.

Dig deeper: 11 of the best free tools every SEO should know about

A note on technical SEO tools

Technical SEO tools remain the most stable part of the SEO stack. 

The vendor landscape has not shifted dramatically, and most major platforms are innovating at a similar pace. 

Because of this, they do not require the same level of reevaluation as newer AI-driven categories.

That said, budgeting for them may still become challenging. 

Leadership often assumes AI can solve every problem, but we know that without strong technical performance, SEO, content, and AI efforts can easily fail.

I will also make one bold prediction – we should be prepared to expect the unexpected in this category. 

These platforms can crawl almost any site at scale and extract structured information, which could make them some of the most important and powerful tools in the stack.

Many already pull data from GA and GSC, and integrating with CRM or other data platforms may be only a matter of time. 

I see that as a likely 2026 development.

What must-have features actually look like in 2026

To evaluate tools effectively, it helps to focus on the capabilities that drive real impact. These are the ones worth prioritizing in 2026.

Advanced data analysis and blended data capabilities

Data analysis will play a much bigger role. 

Tools that let you blend data from GA, GSC, Salesforce, and similar sources will move you closer to the Holy Grail of SEO – understanding whether a prompt or search eventually leads to an MQL or a closed-won deal. 

This will never be a perfect science, but even a solid guesstimate is more useful than another visibility chart.

Integration maturity is becoming a competitive differentiator. 

Disconnected data remains the biggest barrier between SEO work and business attribution.
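
To make “blended data” less abstract, here is a minimal sketch of the kind of join these platforms promise, assuming you can export landing-page clicks from GSC and MQL outcomes by landing page from your CRM (the file names and columns below are hypothetical):

```python
import pandas as pd

# Hypothetical exports: adjust file names and columns to your own data.
gsc = pd.read_csv("gsc_landing_pages.csv")   # columns: page, query, clicks
crm = pd.read_csv("crm_mqls_by_page.csv")    # columns: page, mqls, closed_won_value

# Aggregate search clicks per landing page.
clicks = gsc.groupby("page", as_index=False)["clicks"].sum()

# Join search data to CRM outcomes on the landing page URL.
blended = clicks.merge(crm, on="page", how="left").fillna(0)

# Directional (not causal) efficiency metrics per page.
blended["mqls_per_100_clicks"] = 100 * blended["mqls"] / blended["clicks"].clip(lower=1)
blended["value_per_click"] = blended["closed_won_value"] / blended["clicks"].clip(lower=1)

print(blended.sort_values("value_per_click", ascending=False).head(10))
```

A landing-page join like this ignores multi-touch journeys, so treat it as a guesstimate rather than attribution – but it is still closer to a business conversation than another visibility chart.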

SERP intelligence for keywords and prompts

Traditional SERP intelligence remains essential. You still need:

  • Topic research and insights for top-ranking pages.
  • Competitor analysis.
  • Content gap insights.
  • Technical issues and ways to fix them.

You also need AI SERP intelligence, which analyzes:

  • How AI tools answer specific prompts.
  • Which sources they cite.
  • Whether your brand appears and whether your competitors are also mentioned.

In an ideal world, these two groups should appear side by side and provide you with a 360-degree view of your performance.
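
If you want to sanity-check what a vendor means by AI SERP intelligence before paying for it, the core of a prompt audit is small enough to prototype yourself. A rough sketch, assuming you already have the answer text and cited URLs from whichever AI platform you query (the brand and competitor names are placeholders):

```python
import re
from urllib.parse import urlparse

BRAND = "AcmeAnalytics"                      # placeholder brand
COMPETITORS = ["RivalSEO", "RankRobot"]      # placeholder competitors

def audit_answer(answer_text: str, cited_urls: list[str]) -> dict:
    """Summarize one AI answer: brand/competitor mentions and cited domains."""
    def mentions(name: str) -> bool:
        return bool(re.search(rf"\b{re.escape(name)}\b", answer_text, re.IGNORECASE))
    return {
        "brand_mentioned": mentions(BRAND),
        "competitors_mentioned": [c for c in COMPETITORS if mentions(c)],
        "cited_domains": sorted({urlparse(u).netloc for u in cited_urls}),
    }

# Example with a made-up answer and source list.
answer = "For mid-market teams, RivalSEO and AcmeAnalytics are common picks..."
sources = ["https://example.com/best-seo-tools", "https://docs.rivalseo.com/pricing"]
print(audit_answer(answer, sources))
```

A paid tool should do far more than this – running prompts at scale, across regions and personas – which is exactly the gap to probe during a demo.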

Automation with real time savings

Prioritize tools that:

  • Cluster automatically.
  • Detect anomalies.
  • Provide prioritized recommendations for improvements.
  • Turn data into easy-to-understand insights.

These are just a few examples of practical AI that can genuinely guide you and save you time.
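
To make “detect anomalies” concrete, here is one simple baseline a tool (or your own script) could use to flag unusual days in a clicks time series; the file name, 28-day window, and z-score threshold are assumptions you would tune:

```python
import pandas as pd

# Hypothetical daily GSC export with columns: date, clicks.
df = pd.read_csv("daily_clicks.csv", parse_dates=["date"]).sort_values("date")

# Rolling baseline over the previous 28 days (excluding the current day).
baseline = df["clicks"].shift(1).rolling(28)
df["z_score"] = (df["clicks"] - baseline.mean()) / baseline.std()

# Flag days that deviate strongly from the recent norm.
anomalies = df[df["z_score"].abs() > 3]
print(anomalies[["date", "clicks", "z_score"]])
```

Good tools layer context on top of this baseline – seasonality, known releases, segment-level detection – and that added context is what you should test during a trial.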

Strong multilingual support

This applies to SEO experts who work with websites in languages other than English. 

Many tools are still heavily English-centric. Before choosing a tool, make sure the databases, SERP tracking, and AI insights work across languages, not just English.

Transparent pricing and clear feature lists

Hidden pricing, confusing bundles, and multiple add-ons make evaluation frustrating. 

Tools should communicate clearly:

  • Which features they have.
  • All related limitations.
  • Whether a feature is part of the standard plan or an add-on.
  • When something from the standard plan moves to an add-on. 

Many vendors change these things quietly, which makes the required investment difficult to calculate and even harder to justify.

Dig deeper: How to choose the best AI visibility tool

Plus, some features that might be overhyped

AI writing

If you can’t input detailed information about your brand, product, and persona, the content you produce will be the same as everyone else’s. 

Many tools already offer this and can make your content sound as if it were written by one of your writers. 

So the question is whether you need a specialized tool or if a custom GPT can do the job.

Prompt tracking 

It’s positioned as the new rank tracking, but it is like looking at one pixel of your monitor. 

It gives you only a tiny clue of the whole picture. 

AI answers change based on personalization and small differences in prompts, and the variations are endless.

Still, this tactic is helpful in:

  • Providing directional signals.
  • Helping you benchmark brand presence.
  • Highlighting recurring themes AI platforms use.
  • Allowing competitive analysis within a controlled sample.

Large keyword databases

They still matter for directional research, but are not a true competitive differentiator. 

Most modern tools have enough coverage to guide your strategy. 

The value now stems from the practical insights derived from the data.

How to compare 10 tools without wasting your time

Understanding features is only half the equation. 

The real challenge is knowing how to evaluate specialized tools and all-in-one platforms without losing your sanity or blocking your team for weeks. 

After going through this process for the tenth time, I’ve found an approach that works for me.

Step 1: Start with the pricing page

I always begin my evaluation on the pricing page. 

With one page, you can get a clear sense of: 

  • All features.
  • Limitations.
  • Which ones fall under add-ons.
  • The general structure of the pricing tiers. 

Even if you need a demo to get the exact price, the framework should still be relatively transparent.

Step 2: Test using your normal weekly work

No checklist will show you more than running your regular BAU (business-as-usual) tasks through a couple of tools in parallel.

This reveals:

  • How long each task takes.
  • What insights appear or disappear.
  • What feels smoother or more clunky.
  • How difficult the setup is – including whether the learning curve is steep.

I work in a small team, and a tool that takes many hours just to set up likely will not make my final list.

Not all evaluations can rely on BAU tasks. 

For example, when we researched tools for prompt and AI visibility tracking, we tested more than ten platforms. 

This capability did not exist in our stack, and at first, we had no idea what to check. 

In those cases, you need to define a small set of test scenarios from scratch and compare how each tool performs. 

Continue refining your scenarios, because each new evaluation will teach you something new.

Dig deeper: Want to improve rankings and traffic? Stop blindly following SEO tool recommendations

Step 3: Always get a free trial

Demos are polished. Reality often is not. 

If there is no option for a free trial, either walk away or, if the tool is not too expensive, pay for a month.

Step 4: Involve only the people who will actually use the tool

Always ask yourself who truly needs to be involved in the evaluation. 

For example, we are currently assessing a platform used not only by the SEO team but also by two other teams. 

We asked those teams for a brief summary of their requirements, but until we have a shortlist, there is no reason to involve them further or slow the process. 

And if your company has a heavy procurement or security review, involving too many people too early will slow everything down even more.

At the same time, involve the whole SEO team, because each person will see different strengths and weaknesses and everyone will rely on the tool.

Step 5: Evaluate results, not features

Many features sound like magic wands. 

In reality, the magic works only some of the time, or it works but is very expensive. To understand what you truly need, always ask yourself:

  • Did the tool save time?
  • Did it surface insights that my current stack does not?
  • Could a custom GPT do this instead?
  • Does the price make sense for my team, and can I prove its ROI?

These questions turn the decision into a business conversation rather than a feature debate and help you prepare your “sales” pitch for your boss.
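
One lightweight way to keep that conversation grounded is to turn the questions into a weighted scorecard per tool. A sketch with purely illustrative weights and scores:

```python
# Illustrative weights: tune them to what your leadership actually cares about.
criteria = {
    "time_saved": 0.35,
    "new_insights": 0.25,
    "not_replaceable_by_custom_gpt": 0.20,
    "price_to_value": 0.20,
}

# 1-5 scores per tool taken from your trial notes (made-up numbers).
scores = {
    "Tool A": {"time_saved": 4, "new_insights": 3, "not_replaceable_by_custom_gpt": 2, "price_to_value": 3},
    "Tool B": {"time_saved": 3, "new_insights": 5, "not_replaceable_by_custom_gpt": 4, "price_to_value": 2},
}

for tool, tool_scores in scores.items():
    total = sum(criteria[c] * tool_scores[c] for c in criteria)
    print(f"{tool}: {total:.2f} / 5")
```

The output is not the decision itself, but it makes your reasoning explicit when you present the shortlist to leadership.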

Step 6: Evaluate support quality, not just product features

Support has become one of the most overlooked parts of tool evaluation. 

Many platforms rely heavily on AI chat and automated replies, which can be extremely frustrating when you are dealing with a time-sensitive issue or have to explain your problem multiple times.

Support quality can significantly affect your team’s efficiency, especially in small teams with limited resources. 

When evaluating tools, check:

  • How easy it is to reach a human.
  • What response times look like.
  • Whether the vendor offers onboarding or ongoing guidance. 

A great product with weak support can quickly become a bottleneck.

Once you have a shortlist, the quality of your vendor conversations will determine how quickly you can move forward. 

And this may be the hardest part – especially for introverted SEO leads, myself included.

How to navigate vendor conversations

I’m practical, and I don’t like wasting anyone’s time. I have plenty of tasks waiting, so fluff conversations aren’t helpful. 

That’s why I start every vendor call by setting clear goals, limitations, a timeline, and next steps. 

Over time, I’ve learned that conversations run much more smoothly when I follow a few simple principles.

Be prepared for meetings

If you are evaluating a tool, come prepared to the demo. 

Ideally, you should have secured a free trial, tested the platform, and created a list of practical questions.

Showing up unprepared is not a good sign, and that applies to both sides.

For example, I am always impressed when a vendor joins the conversation having already researched who we are, what we do, and who our competitors are. 

If you have spoken with the vendor before, directly ask what has changed since your last discussion.

Ask for competitor comparisons

When comparing a few tools, I always ask each vendor for a direct comparison. 

These comparisons will be biased, but collecting them from all sides can reveal insights I had not considered and give me ideas for specific things to test. 

Often, there is no reason to reinvent the wheel.

Ask how annual contracts influence pricing

Annual contracts reduce administrative work and give vendors room to negotiate, which can lead to better pricing. 

Many tools include this information on their pricing pages, and we have all seen it. 

Ask about any other nuances that might affect the final price – such as additional user seats or add-ons.

Don’t start from scratch with vendors you know

Often, the most effective approach is simply to say:

“This is our budget. This is what we need. Can you support this?”

This works especially well with vendors you have used before because both sides already know each other.

What to consider from a business perspective

Even if you select a tool, that does not mean you will receive the budget for it.

Proving ROI is especially difficult with SEO tools. But there are a few things you can do to increase your chances of getting a yes.

Present at least three alternatives in every request

This shows you have done your homework, not just picked the first thing you found. Present your leadership with:

  • The criteria you used in your evaluation.
  • Pros and cons of each tool.
  • The business case and why the capability is needed.
  • What happens if you do not buy the tool.

Providing this view builds trust in your ability to make decisions.

Avoid overselling

Tools improve efficiency, but they cannot guarantee outcomes – especially in SEO, GEO, or whatever you call it. 

Spend time explaining how quickly things are changing and how many factors are outside your control. Managing expectations will strengthen your team’s credibility.

But even with thorough evaluation and negotiation, we still face the same issue: the SEO tooling market has not caught up with what companies now expect. 

Let’s hope the future brings something closer to the clarity we see in Google Ads.

Dig deeper: How to master the enterprise SEO procurement process

The future of the SEO tool stack

The next generation of SEO tools must move beyond vanity metrics. 

Trained AI agents and custom GPTs can already automate much of the work.

In a landscape where companies want to reduce employee and operational costs, you need concrete business numbers to justify high tool prices. 

The platforms that can connect searches, traffic, and revenue will become the new premium category in SEO technology.

For now, most SEO teams will continue to hear “no” when requesting budgets because that connection does not yet exist. 

And the moment a tool finally solves this attribution problem, it will redefine the entire SEO technology market.

AI tools for PPC, AI search, and social campaigns: What’s worth using now

In 2026 and well beyond, a core part of the performance marketer’s charter is learning to leverage AI to drive growth and efficiency. 

Anyone who isn’t actively evaluating new AI tools to improve or streamline their PPC work is doing their brand or clients a disservice.

The challenge is that keeping up with these tools has become almost a full-time job, which is why my agency has made AI a priority in our structured knowledge-sharing. 

As a team, we’ve honed in on favorites across creative, campaign management, and AI search measurement. 

This article breaks down key options in each category, with brief reviews and a callout of my current pick.

One overarching recommendation before we dive in: be cautious about signing long-term contracts for AI tools or platforms. 

At the pace things are moving, the tool that catches your eye in December could be an afterthought by April.

AI creative tools for paid social campaigns

There’s no shortage of tools that can generate creative assets, and each comes with benefits as well as the risks of producing AI slop. 

Regardless of the tool you choose, it must be thoroughly vetted and supported by a strong human-in-the-loop process to ensure quality, accuracy, and brand alignment.

Here’s a quick breakdown of the tools we’ve tested:

  • AdCreative.ai: Auto-generates images, video creatives, ad copy, and headlines in multiple sizes, with data-backed scoring for outputs.
  • Creatify: Particularly strong on video ads with multi-format support.
  • WASK: Combines AI creative generation with campaign optimization and competitor analysis.
  • Revid AI: Well-suited for story formats.
  • ChatGPT: Free and widely familiar, giving marketers an edge in effective prompting.

Our current tool of choice is AdCreative.ai. It’s easy to use and especially helpful for quickly brainstorming creative angles and variations to test. 

Like its competitors, it offers meaningful advantages, including:

  • Speed and scale that allow you to generate dozens or hundreds of variants in minutes to keep creative fresh and reduce ad fatigue.
  • Less reliance on external designers or editors for routine or templated outputs.
  • Rapid creative experimentation across images, copy, and layouts to find winning combinations faster.
  • Data-driven insights, such as creative scores or performance predictions, when available.

The usual caveats apply across all creative tools:

  • Build guardrails to avoid off-brand outputs by maintaining a strong voice guide, providing exemplar content, enforcing style rules and banned words, and ensuring human review at every step.
  • Watch for accuracy issues or hallucinations and include verification in your process, especially for technical claims, data, or legal copy. 

Dig deeper: How to get smarter with AI in PPC

AI campaign management and workflow tools for performance campaigns

There are plenty of workflow automation tools on the market, including long-standing options like Zapier, Workato, and Microsoft Power Automate.

Our preferred choice, though, is n8n. Its agentic workflows and built-in connections across ad platforms, CRMs, and reporting tools have been invaluable in automating redundant tasks.

Here are my agency’s primary use cases for n8n:

  • Lead management: Automatically enrich new leads from HubSpot or Salesforce with n8n’s Clearbit automation, then route them to the right rep or nurture sequence.
  • UTM cleanup: When a form fill or ad conversion comes in, automatically normalize UTM parameters before pushing them to your CRM. Some systems, like HubSpot, store raw values in fields such as “first URL seen” that aren’t parsed into UTM fields, so the UTMs stay associated with the user but aren’t stored cleanly and require reconciliation (a normalization sketch follows this list).
  • Data reporting: Pull metrics from APIs, structure the data, and use AI to summarize insights. Reports can then be shared via Slack and email, or dropped into collaborative tools like Google Docs.
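
For the UTM cleanup use case, the core normalization logic is small enough to live in a code step of whatever workflow tool you use. This is a hedged, generic Python sketch, not n8n-specific code; the canonical value map and field names are assumptions:

```python
from urllib.parse import urlparse, parse_qs

# Map messy inbound values to the canonical values your CRM reports expect (illustrative).
CANONICAL_SOURCES = {"google": "google", "adwords": "google", "fb": "facebook", "facebook": "facebook"}

def extract_utms(first_url_seen: str) -> dict:
    """Parse UTM parameters out of a raw landing URL and normalize them."""
    params = parse_qs(urlparse(first_url_seen).query)
    utms = {k: v[0].strip().lower() for k, v in params.items() if k.startswith("utm_")}
    if "utm_source" in utms:
        utms["utm_source"] = CANONICAL_SOURCES.get(utms["utm_source"], utms["utm_source"])
    return utms

print(extract_utms("https://example.com/demo?utm_source=AdWords&utm_medium=CPC&utm_campaign=Q1"))
# -> {'utm_source': 'google', 'utm_medium': 'cpc', 'utm_campaign': 'q1'}
```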

As with any tool, n8n comes with caveats to keep in mind:

  • It requires some technical ability because it’s low-code, not no-code. You often need to understand APIs, JSON, and authentication, such as OAuth or API keys. Even basic automations may involve light logic or expressions. Integrations with less mainstream tools can require scripting.
  • You need a deliberate setup to maintain security. Built-in role-based access control isn’t available in every configuration unless you use n8n Cloud Enterprise, and misconfigured webhooks can expose data.
  • Its ad platform integrations aren’t as broad as those of some competitors. For example, it doesn’t include LinkedIn Ads, Reddit Ads, or TikTok Ads. These can be added via direct API calls, but that takes more manual work.

Dig deeper: Top AI tools and tactics you should be using in PPC

AI search visibility measurement tools

Most SEOs already have preferred platforms for measurement and insights – Semrush, Moz, SE Ranking, and others. 

While many now offer reports on brand visibility in AI search results from ChatGPT, Perplexity, Gemini, and similar tools, these features are add-ons to products built for traditional SEO.

To track how our brands show up in AI search results, we use Profound. 

While other purpose-built tools exist, we’ve found that it offers differentiated persona-level and competitor-level analysis and ties its reporting to strategic levers like content and PR or sentiment, making it clear how to act on the data.

These platforms can provide near real-time insights such as:

  • Performance benchmarks that show AI visibility against competitors to highlight strengths and weaknesses.
  • Content and messaging intel, including the language AI uses to describe brands and their solutions, which can inform thought leadership and messaging refinement.
  • Signals that show whether your efforts are improving the consistency and favorability of brand mentions in AI answers.
  • Trends illustrating how generative AI is reshaping search results and user behavior.
  • Insights beyond linear keyword rankings that reveal the narratives AI models generate about your company, competitors, and industry.
  • Gaps and opportunities to address to influence how your brand appears in AI answers.

No matter which tool you choose, the key is to adopt one quickly. 

The more data you gather on rapidly evolving AI search trends, the more agile you can be in adjusting your strategy to capture the growing share of users turning to AI tools during their purchase journey.

Dig deeper: Scaling PPC with AI automation: Scripts, data, and custom tools

What remains true as the AI toolset keeps shifting

I like to think most of my content for this publication ages well, but I’m not expecting this one to follow suit. 

Anyone reading it a few months after it runs will likely see it as more of a time capsule than a set of current recommendations – and that’s fine.

What does feel evergreen is the need to:

  • Monitor the AI landscape.
  • Aggressively test new tools and features.
  • Build or maintain a strong knowledge-sharing function across your team. 

We’re well past head-in-the-sand territory with AI in performance marketing, yet there’s still room for differentiation among teams that move quickly, test strategically, and pivot together as needed.

Dig deeper: AI agents in PPC: What to know and build today

Think different: The Positionless Marketing manifesto by Optimove

In 1997, Apple launched a campaign that became cultural gospel. “Think Different” celebrated the rebels, the misfits, the troublemakers. The ones who saw things differently. The ones who changed the world. 

Apple understood something fundamental: the constraints that limited imagination weren’t real. They were inherited. Accepted. Assumed. And the people who broke through weren’t smarter or more talented. They simply refused to believe the constraints applied to them. 

Twenty-eight years later, marketing faces its own Think Different moment. 

The constraints are gone. Technology has removed them. AI can generate infinite variants. Data platforms deliver real-time insights. Orchestration tools coordinate across every channel instantly. The infrastructure that once required armies of specialists, weeks of coordination and endless approvals now exists in platforms accessible to any marketer willing to learn them. 

Yet most marketers still operate as if the box exists. 

They wait for the data team to run the analysis. They wait for creative to deliver the assets. They wait for engineering to build the integration. They operate within constraints that technology has already eliminated, not because they must, but because assembly-line marketing taught them that’s how it worked. 

Creative waits for data. Campaigns wait for creative. Launch waits for engineering. Move from station to station. Hand off to the next department. That was the assembly line. That was the box. 

And that box is gone. But the habits remain.  

Here’s to the marketers who refuse to wait for approval

The ones who see a customer signal at 3 p.m. and launch a personalized journey by 4 p.m., not because they asked permission but because the customer needed it now. 

The ones who don’t send briefs to three different teams. They access the data, generate the creative and orchestrate the campaign themselves. Not because they’re trying to eliminate specialists, but because waiting days for what they can deliver in hours wastes the moment. 

The ones who run experiments constantly, not occasionally. Who test 10 variants instead of two. Who measure lift instead of clicks. Who know that perfect insight arrives through iteration, not through analysis paralysis. 

Here’s to the ones who see campaigns where others see dependencies 

They don’t see a handoff to the analytics team. They see customer data they can access instantly to understand behavior, predict intent and target precisely. 

They don’t see a creative approval process. They see AI tools that generate channel-ready assets in minutes, allowing them to personalize at scale rather than compromise for efficiency. 

They don’t see an engineering backlog. They see orchestration platforms that automate journeys, test variations and optimize outcomes without a single ticket. 

They’re not reckless. They’re not cowboys  

They’re simply operating at the speed technology now enables, constrained only by strategy and judgment rather than structure and process.  

This is what Positionless Marketing means: Wielding Data Power, Creative Power and Optimization Power simultaneously. Not because you’ve eliminated everyone else, but because technology eliminated the dependencies that once made those handoffs necessary. 

And here’s what most people miss: This isn’t just about speed. It’s about potential 

When marketers were constrained by assembly-line marketing infrastructure, their job was to manage the line. Write the brief. Coordinate the teams. Navigate the approvals. Wait for each station to finish its work. The marketer’s skill was project management. Their value was orchestrating others. 

Now? Your job in marketing has changed entirely 

Your job is no longer to manage process. Your job is to enable potential. To help every person on your team (and yourself) realize what they’re capable of when the constraints disappear. To show them that the data they’ve been waiting for is accessible now. That the creative they’ve been briefing can be generated instantly. That the campaigns they’ve been coordinating can be orchestrated autonomously.  

Teach people to think outside the box by showing them there is no longer a box 

The data analyst who only ran reports can now build predictive models and operationalize them in real time. The campaign manager who only coordinated handoffs can now design, test and optimize end-to-end journeys independently. The creative strategist who only wrote briefs can now generate and deploy assets across every channel. 

This is the revolution: not that technology does the work, but that technology removes the barriers that prevented people from doing work they were always capable of. 

The misfits and rebels of 1997 saw possibilities where others saw limitations. They refused to accept that things had to be done the way they’d always been done. 

The Positionless Marketers of today are doing the same thing 

They’re refusing to wait when customers need action now. They’re refusing to accept that insight takes weeks when platforms deliver it in seconds. They’re refusing to operate within constraints that technology has already eliminated. 

They’re thinking differently. Not because they’re trying to be difficult. But because the old way of thinking no longer matches the new reality of what’s possible. 

In 1997, Apple told us: “The people who are crazy enough to think they can change the world are the ones who do.”  

In 2025, the people crazy enough to think they can deliver personalized experiences at scale, launch campaigns in hours instead of weeks, and operate without dependencies are the ones who will. 

The constraints are gone. 

The assembly-line marketing box can no longer exist. 

Google Search Console performance reports adds weekly and monthly views

Screenshot of Google Search Console

Google added weekly and monthly views to Search Console performance reports. These options give you clearer, longer-term insights instead of relying only on the 24-hour view.

What it looks like. Here are a few photos I took during the announcement at the Google Search Central event in Zurich this morning:

Why we care. This small update gives SEOs, publishers, and site owners access to more detailed data. It can help you pinpoint why your performance shifted in a specific month, week, or day.

Judge limits Google’s default search deals to one year

Google is being forced to cap all default search and AI app deals at one year. This will end the long-term agreements (think: Apple, Samsung) that helped secure its default status on billions of devices. Just don’t expect this to end Google’s search dynasty anytime soon.

Driving the news. Judge Amit Mehta on Friday called the one-year cap a “hard-and-fast termination requirement” needed to enforce antitrust remedies after his 2024 ruling that Google illegally monopolized search and search ads, Business Insider reported. In September, Mehta ruled on Google search deals:

  • “Google will be barred from entering or maintaining any exclusive contract relating to the distribution of Google Search, Chrome, Google Assistant, and the Gemini app. Google shall not enter or maintain any agreement that
    • (1) conditions the licensing of the Play Store or any other Google application on the distribution, preloading, or placement of Google Search, Chrome, Google Assistant, or the Gemini app anywhere on a device;
    • (2) conditions the receipt of revenue share payments for the placement of one Google application (e.g., Search, Chrome, Google Assistant, or the Gemini app) on the placement of another such application;
    • (3) conditions the receipt of revenue share payments on maintaining Google Search, Chrome, Google Assistant, or the Gemini app on any device, browser, or search access point for more than one year; or
    • (4) prohibits any partner from simultaneously distributing any other GSE, browser, or GenAI product.”

Why we care. A more fragmented search landscape means user queries could start anywhere. If AI-powered rivals like OpenAI, Perplexity, or Microsoft make even small gains in search, you’ll face a broader and more complicated world to compete in.

Reality check. This is a speed bump, not a shake-up. Google’s cash, brand power, and user habits still give it a big edge in yearly talks.

Google denies ads are coming to Gemini in 2026

Adweek reported that Google told clients it plans to add ads to its Gemini AI chatbot in 2026, but Google’s top ads executive is publicly denying it.

Driving the news. Google reps reportedly told major advertisers on recent calls that Gemini would get its own ad placements in 2026, according to Adweek. This is separate from the ads already running in AI Mode, the AI-powered search experience Google launched in March.

  • Buyers said they saw no prototypes, formats, or pricing.
  • They described the conversations as exploratory and light on technical detail.

Google says that’s wrong. Dan Taylor, Google’s VP of Global Ads, disputed the report directly on X, writing:

  • “This story is based on uninformed, anonymous sources who are making inaccurate claims. There are no ads in the Gemini app and there are no current plans to change that.”

Why we care. Advertisers are watching closely for monetization inside AI assistants, which many see as the next major ad frontier. Conflicting signals about ads in Gemini hint at where Google may take AI monetization, even as the company denies any immediate plans. Any move to add paid placements to a high-engagement chatbot could reshape budgets, shift user behavior, and create a new ad surface separate from search.

Between the lines. There is a great debate over whether AI chatbots should stay pure utility tools or evolve into new ad surfaces. Even early speculation about ads inside Gemini is already prompting agencies to start planning.

What’s next. For now, Google says Gemini is still ad-free. But rivals are already testing ways to make money from AI, and advertisers are eager for new places to run ads. The debate over ads in Gemini isn’t going away – only the timeline is shifting.

Adweek’s report. EXCLUSIVE: Google Tells Advertisers It’ll Bring Ads to Gemini in 2026

Google Shopping Ads now show merchant location labels

Google is quietly testing a new way to make Shopping ads feel more local. Select ads using local inventory feeds now display the merchant’s city or town directly above the product title — think “London” or “Tonbridge” — giving shoppers a clearer sense of where the store is based.

Why we care. The new location labels make Shopping ads feel more local and trustworthy, helping nearby retailers stand out in crowded results. Clear city or town indicators can increase click-through rates and drive more in-store visits from shoppers who prefer buying close to home.

It also gives merchants using local inventory feeds a competitive edge by highlighting proximity without needing new ad formats or extra setup.

How it works. The label appears within Shopping ads that already use local inventory data. It joins existing formats like:

  • In-store
  • Pickup later
  • Curbside pickup

But unlike those, this label focuses purely on the store’s location, not fulfillment options.

The catch. Google hasn’t officially announced the feature. Details on rollout, eligibility, and technical requirements remain unknown.

Between the lines. Merchants using local inventory feeds may get a visibility boost if they operate in recognisable or high-trust locations. For users, it’s another nudge to choose nearby retailers over marketplace or long-distance sellers.

First seen. This update was spotted by PPC News Feed founder Hana Kobzová.

Google pushes deeper into lifecycle targeting with new GA audience templates

Google is expanding its customer lifecycle capabilities in Google Analytics, launching new audience templates and dynamic remarketing features designed to make high-value targeting and re-engagement easier for advertisers.

Driving the news. Google has introduced two new suggested audience templates in GA to help advertisers instantly build lifecycle segments:

  • High-Value Purchasers — powered by purchase count or lifetime value, with Google adding a new LTV percentile field so marketers can isolate their top-tier customers.
  • Disengaged Purchasers — defined by days since last purchase, giving Google a built-in way to help brands re-engage lapsed buyers.

Google designed these templates to sync directly with Google Ads customer lifecycle goals, including high-value new customer acquisition and re-engagement modes.
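
If you want to preview which customers a “top X% by LTV” audience would roughly capture before relying on the new percentile field, you can approximate it from your own order data. A sketch with made-up file and column names, not Google’s implementation:

```python
import pandas as pd

# Hypothetical order export: one row per order with customer_id and revenue.
orders = pd.read_csv("orders.csv")

# Lifetime value per customer, then its percentile rank across all customers.
ltv = orders.groupby("customer_id")["revenue"].sum().rename("ltv").reset_index()
ltv["ltv_percentile"] = ltv["ltv"].rank(pct=True) * 100

# Customers a "top 10% by LTV" audience would roughly include.
high_value = ltv[ltv["ltv_percentile"] >= 90]
print(f"{len(high_value)} customers at or above the 90th LTV percentile")
```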

Google’s next move: dynamic remarketing inside GA. Google is also bringing display dynamic remarketing directly into Analytics, letting brands show personalized product-based ads to past site visitors without needing to build remarketing setups externally.

Once advertisers implement Google’s recommended eCommerce event collection, Analytics will automatically share dynamic remarketing data with linked Google Ads accounts — as long as personalized advertising is enabled.

Why we care. Google is making it much easier to target the customers who matter — high-value buyers and lapsed purchasers — without building complex audiences from scratch. These new templates and dynamic remarketing tools create faster, smarter ways to drive acquisition, retention, and repeat purchases directly from Google Analytics.

Google is giving you more precise lifecycle targeting with less manual work, and that can translate directly into better performance and more profitable campaigns.

The big picture. Google is tightening its ecosystem, giving advertisers more automated ways to identify, activate, and re-engage customers — all fueled by audience intelligence built inside Google Analytics.

The bottom line. Google is doubling down on lifecycle marketing by turning Google Analytics into an even stronger audience engine for Google Ads.

Google adds Search Partners segment to PMax reporting

Google rolled out a long-awaited Performance Max (PMax) reporting upgrade, giving advertisers their first clear look at how Search Partners affect campaign results.

Driving the news. The update is now live in Google Ads and adds Search Partners to the PMax channel performance tables. Advertisers can now see:

  • How Search Partners contribute to PMax results.
  • Whether they add incremental value.
  • How their performance compares with other PMax channels.
  • Total spend going to Search Partners.

What’s changing. The added transparency shows how PMax spreads budget across channels – especially in search – and helps confirm whether Search Partners traffic is profitable or pulling down efficiency.

Why we care. Search Partners activity has long been hidden inside PMax, making it hard for advertisers to see where spend was going or gauge its impact, but the new reporting line finally brings visibility to this opaque slice of search inventory. With that clarity, teams can assess incremental value, compare performance against other PMax channels, and make smarter optimization and budgeting decisions. In short, you can now measure spend that was previously invisible, and that insight can directly influence performance and profitability.

The big picture. The update may look small, but it’s a meaningful step toward unpacking how PMax works. For accounts running PMax at scale or analyzing profitability by channel, isolating Search Partners data can shape optimization, budgeting, and broader strategy.

First seen. Google Ads specialist Aleksejus Podpruginas first spotted the update and shared it on LinkedIn.

Bottom line. PMax is finally revealing a missing piece of the puzzle, giving advertisers a clearer view of how Google’s automation spends their money.

OpenAI hits pause on ChatGPT ads as CEO declares a ‘code red’

OpenAI CEO Sam Altman issued an all-hands “code red” to improve ChatGPT – a move that could delay the company’s advertising plans – according to an internal memo obtained by The Wall Street Journal.

Driving the news. Altman told employees the company must urgently improve ChatGPT’s personalization, speed, reliability, and ability to handle a wider range of questions.

  • Daily calls, temporary team reassignments, and a companywide push now center on one priority: make ChatGPT better, fast.
  • Nick Turley, who leads ChatGPT, said the team is focused on growing the assistant and making it feel “more intuitive and personal.”

Why now? Competition is catching up. The memo signals rising pressure on several fronts:

  • Google: Its upgraded Gemini model topped OpenAI on key benchmarks last month.
  • User growth: Gemini’s ecosystem jumped from 450 million monthly active users in July to 650 million in October, helped by new tools like the Nano Banana image generator.
  • Anthropic: Gaining ground with enterprise customers as the “safer, more predictable” LLM provider.

OpenAI is also facing heavy financial strain, with planned data center investments in the hundreds of billions, while the company remains unprofitable and reliant on constant fundraising. Internal forecasts suggest that OpenAI must reach roughly $200 billion in revenue by 2030 to become profitable.

What’s getting delayed. To refocus on ChatGPT quality, Altman said OpenAI is pushing back work on:

  • Advertising initiatives.
  • AI agents for health and shopping.
  • A personal assistant called Pulse.

What’s next. Altman told staff a new reasoning model arriving next week is already outperforming Google’s latest Gemini release.

  • OpenAI previously declared a “code orange” over ChatGPT quality – part of an internal urgency scale (yellow → orange → red). GPT-5’s August launch drew criticism for feeling colder, being less helpful on simple tasks, and acting too cautiously. A November update made the model feel warmer and better at following instructions.

Why we care. It appears that OpenAI is pausing its rollout of ChatGPT ads to focus on product quality. That means advertisers hoping to use ChatGPT as an ad channel will have to wait longer.

Flashback. This isn’t the first code-red moment in the AI arms race. Google once issued its own “code red” because of OpenAI. In December 2022, after ChatGPT went viral, Google CEO Sundar Pichai declared a companywide code red, calling the chatbot an existential threat to Google Search. What followed:

  • Founders returned: Larry Page and Sergey Brin rejoined product meetings after years away.
  • Search overhauled: Google accelerated plans to add conversational features to Search.
  • Product surge: A leaked slide deck outlined 20+ new AI products and a demo of a chatbot-powered version of Search.

The report. OpenAI Declares ‘Code Red’ as Google Threatens AI Lead
