Posts

Google starts showing sponsored ads in the Images tab on mobile search

Google has begun placing sponsored ad units directly inside the Images tab of mobile search results — a new placement that eligible campaigns can access without any changes to existing keyword targeting.

What’s happening. When a user navigates to the Images tab within Google Search on mobile, they may now see sponsored units appearing within the image grid. Each unit shows a full image creative as the primary visual alongside text, and is clearly labelled “Sponsored” — consistent with how Google labels ads elsewhere in search results.

How it works. Eligible campaigns can serve into the Images tab without any changes to keyword targeting or campaign structure. The placement draws from existing image assets, meaning advertisers running Search or Performance Max campaigns with strong visual creative are best positioned to benefit. No separate image-only campaign setup is required.

Why we care. This is a meaningful expansion of Google’s paid search real estate. For product-led and catalog-heavy advertisers, the Images tab is where purchase-intent discovery often starts — and now ads can appear right in that moment. If your campaigns already use strong image assets, you may be picking up incremental impressions without lifting a finger.

The big picture. Early indications suggest this placement behaves more like a visual discovery surface than classic paid search. Expect high impression volume but lower click-through rates — more in line with display or Shopping than traditional text ads. That said, the assist value in multi-touch conversion paths could be significant, particularly for retail and direct-to-consumer brands. Treat it as upper-funnel reach, not a last-click channel.

What to watch. Google has not made a formal announcement, and there is no dedicated reporting breakdown for Images tab placements yet. Monitor your impression share and segment data closely to understand whether this placement is contributing — and whether it’s eating into organic image visibility for competitors.

First seen. The placement was spotted by Google Ads expert Matteo Braghetta, who shared the sighting on LinkedIn. No official documentation has been published by Google at the time of writing.

Web Design and Development San Diego

5 priorities for lead gen in AI-driven advertising

Many of today’s PPC tools were designed to be easily accessible for ecommerce advertisers. That doesn’t mean lead gen can’t take advantage of them, but it does mean more intentional application is required.

Lead gen with AI still requires a creative approach, and many conventional ecommerce tools still apply — but not always in the same way.

Here are the priorities that matter most for succeeding with lead gen using AI.

Disclosure: I’m a Microsoft employee. While this guidance is platform-agnostic, I’ll reference examples that lean into Microsoft Advertising tooling. The principles apply broadly across platforms.

1. Fix your conversion data first

This is the single most important thing you can do as AI becomes more embedded in media buying.

Between evolving attribution models, privacy changes, different platform connections, and shifts in how consumers engage with brands, it’s reasonable to ask whether your data is still telling an accurate story.

Start by auditing your CRM or lead management system. Make sure the data you pass back to advertising platforms is clean, consistent, and intentional.

In most cases, data issues stem from human choices rather than technical failures. Still, there are a few technical checks that matter:

  • Confirm conversions are firing consistently.
  • Regularly review conversion goal diagnostics.
  • Validate that lead status updates and downstream signals are actually flowing back.
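Those checks amount to reconciling what your CRM knows against what the ad platform was told. Here is a minimal sketch of that reconciliation, assuming a hypothetical CRM export and platform conversion log; every field name is illustrative, not a real API:

```python
# Hypothetical exports; all names are illustrative, not a real platform API.
crm_leads = [
    {"id": "L1", "status": "qualified"},
    {"id": "L2", "status": "disqualified"},
    {"id": "L3", "status": "qualified"},
]
platform_conversions = [{"lead_id": "L1"}]  # only one outcome made it back

def audit_feedback_loop(leads, conversions):
    """Flag qualified leads whose outcome never reached the ad platform."""
    reported = {c["lead_id"] for c in conversions}
    qualified = {l["id"] for l in leads if l["status"] == "qualified"}
    return {
        "qualified": len(qualified),
        "reported_back": len(qualified & reported),
        "missing": sorted(qualified - reported),
    }

result = audit_feedback_loop(crm_leads, platform_conversions)
# result["missing"] lists the leads to investigate before trusting the loop
```

If the "missing" list is consistently non-empty, the AI systems are optimizing toward an incomplete picture of your funnel.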

If AI systems are learning from your data, you want to be confident that the feedback loop reflects reality.

Dig deeper: How to make automation work for lead gen PPC


2. Make landing pages easy to ingest and easy to understand

Lead gen campaigns often have multiple conversion paths, which can be helpful for users. But from an AI perspective, ambiguity is a risk.

Your landing pages should make it clear:

  • What action you want the user to take.
  • What happens after the action is taken.
  • Which conversions matter most.

Redundant or unclear conversion paths can confuse both users and systems. If AI crawlers detect that anticipated outcomes are inconsistent, they may begin to question the accuracy of what your site claims to do. That can limit eligibility for certain placements.

Language clarity matters just as much. Avoid jargon, eccentric terminology, or internally focused phrasing when describing your services. Clear, plain language makes it easier for AI systems to understand who you are, what you offer, and how to match creative to the right audience.

A practical test: Put your website content into a Performance Max campaign builder and review how the system attempts to position your business. If you agree with the messaging, imagery, and framing, your site is likely easy to understand. If not, that feedback is valuable.

You can also paste your site content into AI assistants and ask them to describe your business and services. If the response aligns with reality, you’re in a good place. If it doesn’t, that’s a signal to refine your content.

Behavioral analytics tools like Microsoft Clarity can help you understand exactly how humans are engaging with your site and how often AI tools are crawling it.

Dig deeper: AI tools for PPC, AI search, and social campaigns: What’s worth using now

3. Budget across the entire funnel

Lead gen has always struggled with long conversion cycles. That challenge doesn’t go away, and in some ways, it becomes more pronounced.

AI-driven systems increasingly weigh sentiment, visibility, and contextual signals, not just last-click performance. If all of your budget and reporting focuses on immediate traffic, you may miss meaningful impact higher in the funnel.

That means:

  • Budgeting intentionally across awareness, consideration, and conversion.
  • Applying the right metrics at each stage.
  • Looking beyond traffic as the primary success indicator.

In many lead gen models, citations, qualified leads, and eventual revenue tell a more accurate story than clicks alone.

Dig deeper: Lead gen PPC: How to optimize for conversions and drive results

4. Clean up your feeds and map data

You may not think you have a “feed” in your lead gen setup, but that absence can put you at a disadvantage.

Feeds help AI systems understand your business structure, services, and site architecture. Even if you don’t have hundreds of pages, a simple, well-maintained feed in an Excel document can provide valuable context when uploaded to ad platforms.

Example of a feed for lead gen

Feed hygiene matters. Use clear, specific columns. Follow platform standards for text, images, and categorization. Make sure all relevant categories are represented.
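As a sketch, a minimal lead gen feed can be built as a simple CSV. The columns and services below are invented for illustration and are not a platform specification; check your ad platform's feed documentation for the exact fields it expects:

```python
import csv
import io

# Hypothetical service feed for a lead gen business.
# Column names and example rows are illustrative only.
rows = [
    {"id": "svc-01", "title": "Water Heater Repair",
     "category": "Plumbing > Repair",
     "url": "https://example.com/water-heater",
     "description": "Same-day water heater diagnosis and repair."},
    {"id": "svc-02", "title": "Drain Cleaning",
     "category": "Plumbing > Maintenance",
     "url": "https://example.com/drains",
     "description": "Hydro-jet drain cleaning for homes and offices."},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "title", "category", "url", "description"])
writer.writeheader()
writer.writerows(rows)
feed_csv = buf.getvalue()  # upload-ready CSV text
```

The point is less the format than the discipline: one row per service, one unambiguous category per row, and URLs that resolve to pages making the same claims.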

On the local side, claim and maintain all map profiles. Ensure information is accurate and consistent. If you use call tracking in map placements, review your labeling carefully. AI systems may pull data from map listings or your website, and mismatches can create attribution confusion, particularly for phone leads.

Account for potential AI-driven inflation in reporting, whether you’re looking at map pack data, direct reporting, or site-level performance. Any changes you make should also be reflected correctly in your conversion goals.

5. Pressure-test your creative for clarity

Creative assets may be mixed, matched, or shortened using AI. In some cases, you may only get one headline to explain who you are and why someone should contact you.

If your value proposition requires three headlines, or a headline plus a description, to make sense, that’s a risk.

Review your existing creative and identify assets that stand on their own. You should have at least some options where a single headline clearly communicates:

  • What you do
  • Who you help
  • Why it matters

If that clarity isn’t there, AI-driven placements can quickly become confusing.

Dig deeper: Why creative, not bidding, is limiting PPC performance

The fundamentals that still move the needle

Lead gen today doesn’t need to be complicated.

Most of the actions that matter today are things strong advertisers already do: clean data, clear messaging, intentional budgeting, and disciplined execution. What changes is how attribution may shift, and how much weight systems place on different signals.

The fundamentals still win. The difference is that AI makes weaknesses more visible and strengths more scalable.

If you focus on clarity, accuracy, and alignment across your funnel, you give both people and systems the best possible chance to understand your business — and that’s where sustainable performance comes from.


The Mad Men era of SEO: Why AI is shifting search to persuasion

For most people, “Mad Men” means the TV show. But the phrase points to something more specific: Madison Avenue in the 1950s and ‘60s, when agencies grew brands through persuasion, positioning, and earned trust in a world of scarce media channels and powerful gatekeepers. If you wanted attention, you bought your way in, then made your product the obvious choice.

When the internet arrived and Google made the chaos navigable, an entire industry was built on getting brands found. Search and SEO became one of the most commercially valuable disciplines in marketing.

That model isn’t disappearing. But something new is taking shape on top of it — and most of the industry is still using the wrong language to describe what’s happening.

AI is exposing everything SEO has neglected. Brands that win recommendations from AI systems won’t do so by publishing more content. They’ll win through positioning, persuasion, and corroborated proof.

In other words, they’ll win the way Madison Avenue always did.

SEO was never really about content

One of the strangest things about the current industry conversation is how many people talk as if the job of SEO is to create content. It isn’t. Not for most businesses.

If you’re a publisher, content is the product. Traffic is the commercial engine. But for most brands, content never did what people thought.

Early on, people wrote content for customers, and it worked. Then it changed. Content became a keyword vehicle. “Get people to our site” replaced good marketing comms.

Traffic became a proxy for exposure. It worked because search rewarded retrieval: type a query, get a page, get a click. All you needed to sell that model was the belief that any traffic was good traffic, and that the traffic somehow led to revenue your agency could keep delivering.

That model is now under serious pressure. 

Google and ChatGPT are increasingly taking the click. Every serious large language model is trying to satisfy informational intent before the user reaches the source. They aren’t trying to be better search engines. They’re trying to make search engines unnecessary — and that’s the entire point.

There’s too much information on the web. People don’t want to open 10 tabs and read five near-identical blog posts to find a basic answer. They want the answer. The AI systems exist precisely to give it to them.

So if informational retrieval gets absorbed into the interface, what remains? Marketing. That’s the part many SEOs are still not fully grappling with.

Dig deeper: The three AI research modes redefining search – and why brand wins


From place to preference

The cleanest way to understand this shift is through the “4 Ps” of marketing: product, price, place, and promotion.

Traditional SEO has been, almost entirely, a place discipline. It’s been about getting your products, services, or information onto the digital shelf when people go looking.

Keyword rankings are shelf position. Paid search is just a more expensive version of the same principle. In commercial search, you pay for premium placement in a digital aisle.

That still matters enormously.

Buyer-intent search remains valuable. Google hasn’t solved its commercial transition to a fully AI-led interface, and won’t overnight. Search is too important to Google’s revenue to disappear fast. But another layer is emerging above it, and this is the layer that most agencies aren’t yet equipped to compete on.

As AI systems become the first interaction point for more users, the game shifts from being present to being preferred.

Users don’t just search. They ask. They describe a problem. They want the best CRM for a mid-market SaaS company, the best estate agent in their area, the best sandwich shop near the office. And the system responds with recommendations.

If classic SEO was about rankings, the next phase is about recommendations. If classic SEO was about digital placement, the next phase is about shaping preference. And recommendation, in practice, is advertising.

Not a display banner. Not a 30-second TV spot. But advertising in the oldest and most commercially powerful sense: influencing the choice someone makes before they’ve even consciously made it.

An AI-generated recommendation is an invisible ad unit. It doesn’t bill by impression.

Why AI recommendations hit differently

When an LLM recommends a brand, it can’t know with certainty what will work best. So it infers. It weighs signals: past success, prominence, reviews, case studies, corroborating sources, and repeated associations between a brand and a specific type of problem.

Humans do something almost identical. 

Where performance is clearly bounded, we can identify a winner. We know who won the Oscar. We know which film topped the box office.

But when performance isn’t obvious in advance, we rely on proxies. We ask friends, read reviews, and scan for authority. We use familiarity, logic, and social proof to estimate what is likely to be right.

That’s exactly the territory AI recommendation is now entering — the consideration set problem. If I ask an LLM to find me a reliable accountant for a small business, I’m not asking it to retrieve a blog post. I’m asking it to build me a shortlist. 

Unlike traditional search, the recommendation layer is invisible to brands unless they test for it actively. You don’t see the prompt or the source chain. You don’t even know why one brand made the cut and another didn’t.

But the commercial effect is real, possibly stronger than anything traditional search produced. If you’re in the recommendation set, you’re in the running. If you’re absent, you’ve lost the sale before the conversation started.

Dig deeper: Rand Fishkin proved AI recommendations are inconsistent – here’s why and how to fix it

Your website is now an argument for preference

The first practical consequence: your website can no longer function like a polite digital brochure. Despite being optimized for search, many commercial web pages simply:

  • Introduce the company.
  • Gesture vaguely at services.
  • Bury differentiation under generic corporate language.
  • Treat the page as an endpoint for a ranking rather than a persuasive asset.

In short, they’re weak where it matters most: actual selling.

In the Mad Men era of SEO, your landing pages and service pages need to function like sales pages, not in a cheesy direct-response way, but in the strategic sense that they must clearly answer four things:

  • Who is this for?
  • What problem does it solve?
  • Why is it different?
  • Why choose it over the alternatives?

This comes down to positioning, which is key to GEO (generative engine optimization). If seven brands do broadly the same thing, the model needs distinctions. It needs enough clarity to say: this brand is best for X kind of buyer with Y kind of problem because it does Z better than everyone else.

Your website copy must surface real performance attributes: the specific things you genuinely do better or more distinctively than competitors. Your pages must become machine-readable arguments for preference.

Copywriting is back

Actual commercial copywriting — not fluffy brand storytelling or word count for its own sake — identifies a target customer, sharpens the problem, articulates the value, and makes the offer easy to recommend.

Good copy isn’t optional.

Take a local sandwich shop. The old SEO conversation runs to “best sandwich near me,” local pack, and review acquisition. It’s useful, but limited. 

The GEO version starts with the shop’s actual performance attributes. 

  • Is it the speed? 
  • The handmade bread? 
  • The office catering? 
  • The locally sourced produce?

Those claims must be clear on the website first. Then they need corroboration everywhere else:

  • Reviews that mention the sourdough specifically.
  • A local food blogger’s write-up.
  • Inclusion in “best lunch spots” roundups.

They’re specific, repeated, retrievable evidence of why this shop is the right recommendation for a particular type of customer.

Scale that logic to a B2B software company, and the principle holds. Pages that clearly explain who the product is for, which problems it solves, and why it outperforms rivals. Then build brand mentions, earn customer reviews, and gain trade-press coverage, assembling the body of evidence that supports recommending you to buyers, and let the AI find it.

That’s pretty much GEO in a nutshell.

Keywords don’t disappear, but they lose their throne

Keywords are a human workaround. Approximations of intent, built for a retrieval system that needed exact string matching. LLMs process fuller context, layered needs, and comparative requirements. They move from keyword matching toward problem understanding.

Keyword research still matters for classic search, paid search, and buyer-intent pages. But the center of gravity shifts.

Instead of asking only “what terms should we rank for?”, the better question is: what attributes make us the right recommendation for the buyer we actually want, and what evidence exists across the web to support that claim?

The future of SEO is starting to look like the old agency model, as the work is increasingly promotional. Once your website clearly expresses your positioning, the challenge becomes promoting that position across the wider web through credible, repeated, relevant signals.

  • Digital PR. 
  • Traditional PR. 
  • Expert commentary. 
  • Case studies. 
  • Reviews. 
  • Listicles.
  • Awards. 
  • Trade press.
  • Brand mentions. 
  • Conference speaking. 
  • Events. 
  • Creator coverage. 
  • Product comparisons. 
  • Original data studies that other people actually cite. 

These are the things you go after, create, and encourage. Sadly, many “AI visibility” conversations flatten this into nonsense.

The goal isn’t merely to have content cited by AI. It’s to gather enough market evidence that AI systems repeatedly encounter your brand in the right contexts, with the right associations.

The work stops being optimization and becomes maximization: building the largest possible volume of persuasive, corroborated, retrievable evidence that your brand is a sensible recommendation for a specific kind of buyer.

That’s a fundamentally different model from anything the SEO industry has been selling. It’s promotional and strategic brand marketing.

Dig deeper: How to design content that AI systems prefer and promote

Where SEO still fits

SEOs need to grow up. There’s still significant value in buyer-intent search, technical site architecture, entity clarity, internal linking, and structured data. SEOs are well placed to monitor recommendation environments, test prompts, and identify where visibility is being won or lost.

But the identity crisis is real. Many agencies were built for a world of rankings, informational blogs, and monthly traffic graphs. They aren’t equipped to lead a world defined by positioning, copy, PR, brand evidence, and recommendation science.

Tracking brand citations inside AI outputs isn’t a complete strategy. It’s a temporary metric. 


The new agency model

Winning agencies look like hybrid commercial strategy firms: part SEO, part copywriting, part PR, part brand strategy, part technical infrastructure. They know how to protect buyer-intent search revenue today while building the fame, clarity, and corroborated authority that earns recommendation tomorrow.

This is the Mad Men model of SEO. Persuasion, positioning, and clear claims backed by public proof matter again. And the job is to become recommended by AI.


Google, Meta, and the long history of misaligned incentives in paid media

I’m getting a mid-career executive MBA. Last week, in class, we discussed the interaction between automation and advertising. The lecture covered why A/B testing in Meta is less valuable now, since Facebook can auto-optimize faster and better than marketers can on their own.

A classmate took the logical leap and asked the professor, “If digital channels have more data and more processing power, why don’t advertisers just give them a URL and a credit card and let them go wild?”

The argument has real merit. Google, Meta, and LinkedIn have access to more data than any agency ever will. Their optimization engines are improving fast. Handing them a budget and a URL and walking away isn’t entirely crazy.

But that means we’d need to have faith in the channels to optimize media in a business’s best interests, and there’s a long, proud history of that not being the case.

1. The opt-in that wasn’t

About six years ago, we met with a Google rep who pitched a product that introduced broader, more aggressive targeting and bidding. We listened to the pitch and said no. We didn’t want to try it. The reps turned it on anyway.

What happened next was what we predicted. The campaigns spent significantly more money and didn’t generate any additional conversions.

We had to comp the client for the wasted spend, which was bad enough. But what made it worse was the principle of the thing: we hadn’t agreed to this. Google made unauthorized changes to our account.

When I tried to get the money back, Google’s position was that we’d set our campaign budgets at a certain level, and they were within their rights to spend up to that amount. That framing ignores that a budget cap is a ceiling, not an invitation. 

Our agency methodology is to never hit a budget cap. We set those numbers based on the strategy we’d approved, not the one they decided to test. I hounded them for weeks, but never got any resolution. It still makes me angry.

The reps were clearly incentivized to get adoption of the new feature. When it didn’t work, there was no accountability and no recourse. We were left covering the cost of a decision we explicitly declined.

What’s being misrepresented

Budget caps were treated as implicit consent to spend. A product we declined was activated without authorization, and when it failed, the platform pointed to our own settings as justification.

The incentive structure rewarded the reps for turning it on. There was no corresponding mechanism to make the advertiser whole when it didn’t work.

Dig deeper: Google rep’s unauthorized ad changes spark advertiser concerns


2. The profit maximization pitch

This happened years ago, on a successful retainer. A pair of senior Google reps sat across from us and asked what our client’s gross margin was. Around 50%, we said. They went to the whiteboard and wrote out: if overall revenue/2 – overall media cost >= 0, then we should keep spending money on ads.

On the surface, the math sounds right. In practice, it has two problems.

  • It assumes the reported conversions are incremental, meaning they wouldn’t have happened without the paid ad. A substantial portion of any Google campaign’s reported conversions, particularly in brand and retargeting, are users who were already going to convert.
  • The model assumes a flat cost curve, where the 500th conversion costs the same as the 50th. It does not. Marginal returns fall as you scale. The last dollars of spend are always the least efficient, but they’re exactly what this pitch is designed to help Google access. (They should have said marginal revenue/2 – marginal cost = 0 is profit maximization.)
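The gap between the two rules is easy to see with toy numbers. This sketch (all figures invented) compares the rep's blended test against a true marginal test on a diminishing-returns spend curve, where each successive $1,000 tranche of media buys fewer conversions:

```python
# Toy spend curve with diminishing returns; all numbers are illustrative.
GROSS_MARGIN = 0.5
REVENUE_PER_CONVERSION = 200
TRANCHE_COST = 1000

# Conversions bought by each successive $1,000 tranche of spend.
tranches = [40, 30, 20, 12, 6, 2]

def blended_profitable(n):
    """The rep's rule: keep spending while total revenue/2 - total cost >= 0."""
    revenue = sum(tranches[:n]) * REVENUE_PER_CONVERSION
    return revenue * GROSS_MARGIN - n * TRANCHE_COST >= 0

def marginal_profitable(i):
    """Correct rule: fund a tranche only if its own margin covers its own cost."""
    margin = tranches[i] * REVENUE_PER_CONVERSION * GROSS_MARGIN
    return margin - TRANCHE_COST >= 0

blended_ok = [blended_profitable(n) for n in range(1, len(tranches) + 1)]
marginal_ok = [marginal_profitable(i) for i in range(len(tranches))]
# The blended rule approves all six tranches; the marginal rule stops at four.
```

The blended test stays positive long after the last tranches have turned unprofitable, because the early, efficient spend subsidizes them in the average. That surplus is exactly what the pitch was designed to capture.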

What’s being misrepresented

The model treats all reported conversions as incremental and assumes cost per conversion is constant across spend levels. Both assumptions are wrong, and together they can justify significant overspend.

3. The ‘higher CPCs buy better clicks’ pitch

This one still happens all the time. The pitch is that if you raise your CPCs, you’ll get access to higher-quality traffic. The implied logic is that conversion rate is influenced by CPC, and that if your investment isn’t high enough, you’re missing the best clicks.

There’s a version of this that has some truth to it. Higher CPCs can mean higher ad positions, which can mean higher impression frequency against the same users. More frequency can drive higher aggregate conversion rates, because repeated exposure matters.

But the argument glosses over the other side of that equation. 

  • Higher frequency has diminishing marginal returns. 
  • The third impression is worth less than the first. The tenth is worth a lot less.
  • The cost curve isn’t flat. You’re paying more per click at every step.

In practice, raising CPCs to chase quality traffic is almost always correlated with substantially worse overall return on ad spend.
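A toy model shows why. In this sketch (all numbers invented), higher bids do buy slightly more clicks at a slightly better conversion rate, yet return on ad spend still falls, because cost per click rises faster than conversion rate improves:

```python
# Illustrative bid scenarios: (cpc, clicks, conversion_rate).
# Higher bids buy modest gains in volume and CVR at sharply rising cost.
scenarios = [
    (1.00, 1000, 0.030),
    (1.50, 1200, 0.033),
    (2.25, 1350, 0.034),
]
REVENUE_PER_CONVERSION = 120

def roas(cpc, clicks, cvr):
    """Revenue per dollar of media spend."""
    spend = cpc * clicks
    revenue = clicks * cvr * REVENUE_PER_CONVERSION
    return revenue / spend

results = [round(roas(*s), 2) for s in scenarios]
# ROAS falls at every step even though clicks and CVR both improve.
```

Unless the conversion rate gains are large enough to outrun the cost curve, which diminishing frequency returns make unlikely, the "better clicks" are the same clicks at a higher price.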

This is a variant of the marginal return problem seen across these cases. The pitch frames the upside without acknowledging the cost curve. More spend gets positioned as access to better outcomes, when it often delivers the same outcomes at a higher price.

What’s being misrepresented

CPC and conversion rate are presented as if higher bids unlock better traffic. In most cases, the incremental cost outpaces the incremental return. The pitch frames diminishing returns as an opportunity, rather than a constraint.

Dig deeper: Dealing with Google Ads frustrations: Poor support, suspensions, rising costs

4. The learning phase as a get-out-of-jail card

“If your Meta campaigns are underperforming, it’s because the algorithm just needs more time to learn.”

“Don’t make changes, and don’t reduce budget, just give the platform more data.” 

This is sometimes true. Machine learning systems need volume to optimize effectively, and premature intervention can reset progress.

But “it needs to learn” has become a catch-all explanation that’s almost impossible to disprove in the short run. It explains away poor CPAs, delays accountability, and keeps spend flowing when a reasonable advertiser might otherwise pull back and reassess.

There’s rarely a clear definition of when the learning phase ends, which makes it a moving target. The learning phase ends when performance improves. If performance doesn’t improve, more learning is prescribed.

What’s being misrepresented

A real technical concept is being used in ways that resist falsification. When there’s no defined endpoint and no stated criteria for success, “it needs to learn” serves as a blank check for budgetary continuity.

5. The metric pivot: When conversions fail, sell sentiment

In many cases, YouTube or display campaigns aren’t driving measurable conversions. The rep’s suggestion: let’s look at brand measurement. We can measure recall rates, positive sentiment, and intent to purchase. These are real signals of brand health, and they matter in the long run.

But the shift from conversion to sentiment metrics tends to occur when conversion metrics are poor, not as a principled measurement strategy. Brand lift surveys measure awareness under controlled conditions, but they rely on self-reported intent and don’t connect to downstream revenue.

Recall is almost never translated into a cost per point of lift that can be compared across the media plan. You end up with a number that’s positive and presented as evidence of success, with no agreed-upon framework for what sufficient lift would look like.

What’s being misrepresented

A softer metric is substituted for a harder one after the harder one fails. Brand lift is a legitimate measurement tool when defined upfront as a success criterion. Introduced afterward, it functions as a consolation prize.

Dig deeper: PPC mistakes that humble even experienced marketers

6. Upper funnel combined with lower funnel for a blended average

Upper-funnel and lower-funnel campaigns serve different purposes and perform differently on a cost-per-acquisition basis. When a channel reports blended CPA across all campaign types, an average that looks acceptable can hide the fact that some portion of the media plan is wildly inefficient at the margin.

The argument for blending is that upper-funnel spend creates the conditions for lower-funnel performance. That is plausible, but plausibility isn’t the same as demonstrated causality. 

Often, it’s assumed the upper funnel is directly contributing and that, in aggregate, the system is profitable and fully incremental. This is never the case.
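The masking effect is simple arithmetic. In this sketch (all figures invented), a blended CPA looks healthy against a hypothetical $60 target even though one segment is five times over it:

```python
# Illustrative campaign-level results; the $60 target and all figures are invented.
campaigns = {
    "brand_search":    {"spend": 5000,  "conversions": 250},
    "nonbrand_search": {"spend": 20000, "conversions": 500},
    "video_upper":     {"spend": 15000, "conversions": 50},
}

def cpa(spend, conversions):
    return spend / conversions

blended = cpa(
    sum(c["spend"] for c in campaigns.values()),
    sum(c["conversions"] for c in campaigns.values()),
)
by_campaign = {name: cpa(**c) for name, c in campaigns.items()}
# blended is $50, comfortably under a $60 target, while the upper-funnel
# segment sits at $300 per conversion and 37.5% of the budget.
```

Segment-level CPA is the only view that shows where the marginal dollar is going.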

What’s being misrepresented

Aggregate CPA can look fine while specific segments of spend have no measurable return. Blending is a reporting choice, and it can obscure where money is and isn’t working.

7. View-through conversions: The numbers that shouldn’t count

A view-through conversion is counted when a user sees an ad, doesn’t click it, and then converts within some attribution window, often 24 hours or more. Platforms report these alongside click-through conversions by default. 

For retargeting campaigns, which by definition serve ads to people who have already visited your site, view-through attribution is particularly problematic. These users were likely going to return and convert regardless. The ad may have had nothing to do with it.

The issue isn’t that view-throughs aren’t meaningful. For a cold audience, some brand-influenced conversions happen without clicks.

The issue is that those conversions are almost never broken out proactively (you have to ask). And when you remove view-throughs from retargeting campaigns, the ROAS numbers can change dramatically. 

We’ve seen cases where removing VTAs cuts reported conversions by more than half. I would note that by moving to incremental measurement options, Meta has become substantially more transparent.
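The effect of separating them out is easy to illustrate. This sketch (all numbers invented) compares reported ROAS for a retargeting campaign with and without view-through conversions:

```python
# Illustrative retargeting report; all figures are invented.
spend = 10_000
revenue_per_conversion = 80
click_conversions = 90
view_through_conversions = 160  # saw the ad, never clicked

def roas(conversions):
    return conversions * revenue_per_conversion / spend

reported_roas = roas(click_conversions + view_through_conversions)
click_only_roas = roas(click_conversions)
# Default reporting blends both and shows 2.0x; click-only shows 0.72x.
```

The same campaign reads as a winner or a money-loser depending on a reporting default most advertisers never see flagged.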

What’s being misrepresented

View-through conversions inflate reported performance, particularly in retargeting, where incrementality is already low. Default reporting includes them without flagging the methodological problem.

Dig deeper: Outsmarting Google Ads: Insider strategies to navigate changes like a pro

8. The competitor benchmark as a spending lever

This one is a pattern. A channel rep brings industry benchmark data to a meeting showing that your competitors are spending at a level above your current budget. The implication is clear: you’re being outspent, and you should close the gap.

Industry benchmarks are among the most valuable inputs a channel can provide. Knowing where you sit relative to the market is useful context for planning. The problem is how they get deployed. More often than not, benchmark data shows up as a tool to expand media spend, not as a neutral input into strategy.

And it works. CEOs and CMOs are particularly susceptible to this framing. Nobody wants to hear that a competitor is outspending them.

The emotional pull of “they’re investing more than you” is hard to counter with a measured conversation about marginal returns or strategic fit. The benchmark becomes the argument, and the argument is almost always “spend more.”

What gets lost is any discussion of whether:

  • The competitor’s spend is actually working for them.
  • Your business model and margins support the same level of investment.
  • The benchmark even reflects an apples-to-apples comparison.

Competitive spend data without context is just a number that makes your budget feel inadequate.

What’s being misrepresented

Benchmark data is real, but it’s selectively introduced to justify budget increases rather than treated as one input among many. The framing skips over whether the comparison is meaningful and relies on competitive anxiety to sell.

9. The default settings trap

This one is hard to frame as a single incident because it’s everywhere. I’ve talked to so many people trying to break into the industry, or launch their first campaigns, and the story is almost always the same. 

They follow the platform’s setup guide, accept the default settings, and end up opted into programs that have close to zero chance of being successful.

This is true across pretty much every major channel. 

  • LinkedIn defaults you into audience network inventory that runs outside the LinkedIn feed. 
  • Google opts you into display inventory when you’re trying to run search. Broad match is the default match type out of the box. Suggested CPCs are astronomical. 
  • Google’s geographic targeting defaults to “presence or interest” rather than actual location. 

Each of these defaults, taken individually, could be defended as a reasonable starting point. Taken together, they create a setup that maximizes the platform’s revenue from day one, before the advertiser knows what’s happening.

A new advertiser following the guided setup is accepting a configuration that the platform designed, and the platform’s incentives aren’t aligned with efficient spend.

This one is genuinely difficult to solve. Platforms need to provide default settings, and they can’t expect every new advertiser to understand every option. 

But there’s something predatory about the gap between what people think they’re signing up for and what they’re getting. The defaults are revenue-optimized for the channel, not performance-optimized for the advertiser.

What’s being misrepresented

Setup guides and default settings are presented as best practices when they’re actually configurations that favor the platform’s revenue. New advertisers trust the guided experience, and have no reason to suspect the defaults are working against them.

Dig deeper: Are you being manipulated by Google Ads?

10. The tracking gap as a faith exercise

Privacy regulations and platform changes have created real limitations in conversion tracking. GDPR and Apple’s App Tracking Transparency aren’t invented problems. 

We have less visibility than we used to, and the platforms have responded by layering probabilistic modeling and modeled conversions on top of deterministic tracking.

But the tracking gap has also become a convenient shelter for underperformance. The argument goes like this:

  • “The conversions are happening, we just can’t see them all yet. There’s latency in the data.”
  • “There are limits to what can be tracked. We need a longer attribution window.”
  • “We need more time for the modeled data to populate. And in the meantime, here are some proxy metrics that we think are directionally valid, so let’s keep pushing.”

Each of those can be true in isolation. Modeled conversions take time to appear. Attribution is harder than it was five years ago. Proxy metrics can be useful when direct measurement breaks down. 

The problem is when all of these caveats get stacked together and used to justify sustained spend in the absence of any measurable result. At some point, “the data will come in” stops being a reasonable expectation and becomes an article of faith.

The tracking gap is real, but it cuts both ways. If you can’t measure the result, you also can’t prove the spend is working. The platform’s default position is to assume it is, and keep going. The advertiser’s job is to ask what happens if the modeled conversions never materialize, and what the fallback plan looks like if they don’t.

What’s being misrepresented

Legitimate tracking limitations are used to defer accountability indefinitely. When measurement is hard, the platform’s recommendation is always to maintain or increase spend, never to reduce it. The uncertainty gets resolved in the channel’s favor by default.


What does this mean for AI-run campaigns?

None of this is an argument that agencies are irreplaceable in their current form. We used to question tCPA, and now it’s a preferred bidding strategy. Automation handles execution-level work that used to require skilled practitioners. In-house teams are viable for more companies than they used to be.

But the argument for fully autonomous, channel-run advertising assumes the channel will optimize for your outcomes rather than revenue. Even if we imagine new profit-sharing contracts, this assumption carries real risk.

And I’m not blaming reps or the channels. They believe in their products, but they’re also measured on metrics that create a predictable drift in how they frame data. I should note that agencies struggle with misaligned incentives as well.

The advertiser’s job, with or without an agency, is to keep asking the inconvenient questions.

  • What is the marginal return at this spend level?
  • What percentage of conversions are view-throughs?
  • What does performance look like if we exclude brand search?
  • Are we measuring incrementality, or measuring correlation and calling it causation?

Maybe the answer to everything is eventually full automation. But the entity building the machine shouldn’t be the one telling you when it’s ready.


Most Marketing Metrics Are Misleading. Here’s What Leaders Measure Instead

Key Takeaways

  1. Traditional marketing metrics like traffic, search rankings, and ROAS were designed for a more trackable internet. They still have uses, but they no longer tell the full story.
  2. Marketing attribution assigns credit to touchpoints but cannot prove that marketing caused the outcome. It typically rewards demand capture over demand creation.
  3. ROAS averages compress marginal return curves into a single number, hiding where spend becomes inefficient.
  4. Executives want to know whether marketing caused growth, not just whether activity occurred. Those are different questions with different answers.
  5. Modern measurement tracks incremental signals, branded demand growth, and customer value metrics to give a more complete picture of what is actually working.

Your marketing reports probably look fine. Traffic is up. Engagement is solid. Return on ad spend (ROAS) hits the benchmarks your team set last quarter. But here is the problem: the numbers that look best are often the ones least connected to actual business growth.

Marketing dashboards were built for a version of the internet that no longer exists. When clicks were cheap and user journeys were predictable, tracking activity was a reasonable proxy for impact. That is no longer the case. Discovery now happens in AI summaries, social feeds, and private conversations that never show up in analytics. Attribution systems reward the last touchpoint, not the one that created demand. And ROAS averages can hide the fact that the last dollar spent barely broke even.

The shift underway is significant. Measurement is moving from tracking activity to proving impact. Marketing leaders who recognize this will make better budget decisions and communicate more credibly with leadership.

This is the first part of a three-part series examining how modern organizations measure marketing performance in a way that actually connects to growth.

The Old Marketing Scoreboard Was Built for a Different Internet

For most of the last decade, marketing teams built their reporting around a stable set of marketing metrics: organic traffic, search rankings, click-through rates, and ROAS. These became the dominant performance indicators not because they were perfect, but because they were easy to track and easy to report.

The logic made sense at the time. More organic traffic meant more potential customers. Higher rankings meant greater visibility. Click-through rate measured whether ads were relevant.

ROAS connected spend to revenue in a single ratio. These gave teams something concrete to optimize and executives something simple to evaluate.

The problem was that teams began equating activity with impact. A spike in sessions became evidence of a successful campaign. A high ROAS figure became justification for more spend. 

But these metrics measured what happened on a screen, not what drove a purchase decision. Many of them are what marketers now call vanity metrics: numbers that look meaningful but don’t connect reliably to revenue.

Analytics dashboards were built to track what they could see, and teams made decisions based on what was visible. That created a structural bias toward channels that were easy to measure, even when harder-to-measure channels were doing more of the actual work.

Three-panel infographic from NP Digital showing why the old marketing playbook is breaking: declining traffic relevance, attribution noise, and growing executive demand for proof of business impact.

Why Many Marketing Metrics Are Becoming Misleading

The way people discover brands has changed substantially, and many standard marketing KPIs were not built to account for that shift. Three changes in particular are making traditional metrics less reliable.

Zero-Click Discovery Is Increasing

AI-generated answers, featured snippets, and knowledge panels now resolve many queries without requiring a click. According to Pew Research, when users encounter an AI summary in search results, they click through to websites at roughly half the rate they do with standard results. Around 26 percent end their session after viewing an AI summary, compared to 16 percent for standard search results.

For marketing teams, this creates an invisible influence problem. A brand can shape a buyer’s thinking through AI-cited content without that interaction ever appearing in a traffic report. Organic search may be doing more work than the data suggests, and session counts alone cannot tell you which.

Discovery Happens Inside Platforms

Buyers increasingly research and evaluate brands inside closed ecosystems: social platforms, marketplaces, YouTube, and AI-driven interfaces. These platforms have their own algorithms, their own ad systems, and limited data sharing with external analytics tools.

According to NP Digital research, 82 percent of marketing engagement now happens through video, while SERP and AI answers account for 79 percent of engagement (the categories overlap, since a single journey can touch several surfaces). Only 12 percent happens on-site. Website analytics captures a fraction of where influence actually occurs. 

Brands get evaluated across Google, YouTube, LinkedIn, review sites, and AI engines, often before a customer ever visits a website. NP Digital data also shows that the average customer journey has grown from 8.5 touchpoints in 2021 to 11.1 touchpoints in 2025. What looks like a direct visit or a branded search conversion often reflects influence that originated somewhere else entirely.

Traffic No Longer Reflects Influence

Even when traffic increases, the quality of that traffic has become harder to assess. NP Digital research tracking 602 websites found that 51 percent of traffic came from bots and 21 percent were short sessions, and only 16 percent could be classified as genuinely engaged sessions.

An NP Digital infographic with a traffic quality breakdown.

More sessions do not equal more intent. Traffic can grow while real engagement shrinks, particularly as bots, low-intent visits, and passive content consumption inflate session counts. Optimizing for traffic volume in this environment can mean more spend for fewer qualified outcomes.
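The bot / short / engaged split described above amounts to a session classifier. A toy sketch (the 10-second "short session" threshold and the session records are assumptions for illustration):

```python
# Toy session classifier mirroring the bot / short / engaged split above.
# The 10-second threshold is an assumption, not a standard.
from collections import Counter

def classify(session):
    if session["is_bot"]:
        return "bot"
    if session["duration_sec"] < 10:
        return "short"
    return "engaged"

sessions = [
    {"is_bot": True,  "duration_sec": 1},   # crawler hit
    {"is_bot": False, "duration_sec": 4},   # bounce
    {"is_bot": False, "duration_sec": 95},  # real engagement
]
counts = Counter(classify(s) for s in sessions)
share_engaged = counts["engaged"] / len(sessions)
print(counts, f"engaged share: {share_engaged:.0%}")
```

Reporting the engaged share rather than raw session counts is one concrete way to stop optimizing toward inflated volume.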

The Attribution Problem Most Teams Ignore

Marketing attribution became central to reporting because it appeared to solve a hard problem: connecting activity to conversions. For direct-response channels with short feedback loops, it worked reasonably well. But attribution has a structural limitation that deserves more attention. For a deeper look at where these systems break down, see this overview of marketing attribution blind spots.

Attribution models credit the touchpoints that preceded a conversion. They track what happened well. They are not built to determine whether marketing caused the outcome.

That distinction matters more than it might seem. Algorithmic platforms optimize toward users who are already likely to convert. 

Last-click models, and many of their more sophisticated variants, inherit this bias. They reward demand capture over demand creation, which means the channels that appear most efficient are often the ones intercepting customers who would have converted regardless.

The evidence from major advertisers is instructive. When Airbnb paused its performance marketing budget, there was no significant drop in bookings. When Uber reduced spend in certain channels, rider acquisition was largely unaffected. In both cases, attribution had been crediting spend for outcomes that would have occurred without it.

Privacy changes have made this harder to ignore. Third-party cookie deprecation, cross-device behavior, and private sharing channels all reduce the fidelity of attribution data. According to NP Digital research, nearly 47 percent of marketers lack confidence in their attribution model. Yet most teams still use attribution reports as the primary input for budget decisions. Data-driven attribution improves on last-click models in some respects, but it still cannot fully separate demand creation from demand capture.

Attribution remains useful for day-to-day campaign optimization. The problem is treating it as strategic truth, as proof that marketing caused growth.

Why ROAS Can Hide the Real Economics of Marketing

ROAS is the most widely used efficiency metric in paid marketing, and for good reason. It is simple, ties spend to revenue, and is easy to compare across campaigns and channels. The problem is that ROAS compresses a marginal return curve into a single number, and that compression hides where spending stops being productive.

Consider a channel where the first $100,000 spent generated 8x returns and the last $200,000 generated 0.5x returns. The blended ROAS is 3x, a number that looks strong on its own while concealing $200,000 of spend returning 50 cents on the dollar. Optimizing toward the average means continuing to invest in the tail of a diminishing curve.
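The tranche arithmetic can be sketched directly (illustrative numbers):

```python
# Blended vs. marginal ROAS by spend tranche (illustrative numbers).
tranches = [
    (100_000, 8.0),  # first $100k: strong marginal return
    (200_000, 0.5),  # last $200k: losing money at the margin
]
revenue = sum(spend * roas for spend, roas in tranches)
total_spend = sum(spend for spend, _ in tranches)
blended = revenue / total_spend

print(f"Blended ROAS: {blended:.1f}x")  # the average hides the curve
for spend, roas in tranches:
    print(f"  ${spend:,} tranche: {roas}x")
```

The blended figure comes out at 3.0x, yet two-thirds of the budget is returning less than it costs. The average is real; it just answers the wrong question.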

ROAS also ignores what created the demand being captured. Branded search conversions frequently get credited to paid search, but the intent behind that search often originated from a video campaign, a piece of organic content, or a recommendation that happened in a private channel. The channel capturing the intent gets the credit. The channel that generated it does not. This dynamic is especially relevant for ecommerce metrics, where brands often over-invest in bottom-funnel capture while underfunding the upper-funnel activity that makes conversion possible.

The question ROAS does not answer is: how much of this revenue was incremental?

Separating captured demand from created demand requires different tools, which is why leading organizations are increasingly pairing ROAS with incrementality testing and marketing mix modeling.

A chart comparing Organic Traffic Trends vs. Revenue Growth.

The Question Executives Actually Care About

The metrics most marketing teams optimize are not the ones most executives prioritize. According to NP Digital research, 92 percent of marketers say profit is a primary metric, and 87 percent prioritize pipeline. Search rankings rank near the bottom at 18 percent, and ROAS comes in at 16 percent.

That gap reflects a real tension. Marketing teams spend considerable time reporting on activity and efficiency. Leadership wants to know whether marketing is actually changing the economics of the business.

The core question executives ask is whether marketing caused growth, or whether it captured demand that already existed. These are different outcomes. A campaign can generate strong attribution numbers while producing no incremental growth. A brand investment can create lasting demand without generating a single directly trackable conversion.

The questions that matter most at the leadership level are:

  1. Did this campaign create new demand, or intercept demand that already existed?
  2. Would revenue have changed if this marketing activity had not occurred?
  3. Which investments change the underlying economics of the business?

These are questions about causality, not efficiency. They cannot be answered by ROAS or click-through rates. They require measurement methods designed to isolate actual marketing impact from demand that would have existed regardless. This is the gap that is pushing high-growth organizations toward a different approach.

What Modern Marketing Leaders Measure Instead

The most important marketing metrics for growth-focused organizations look different from the ones that dominate standard dashboards. The shift is away from activity-based signals and toward measures tied directly to business outcomes.

Rather than optimizing for total traffic, leading teams track branded demand growth, which captures whether the brand is generating more direct interest over time. Rather than reporting on attributed conversions, they measure incremental conversions: the outcomes that would not have happened without the marketing. Understanding the most important marketing metrics for your business means asking which numbers reflect whether marketing is creating demand, not just capturing it.

Customer value metrics have become more prominent as well. Lifetime value (LTV), customer acquisition cost (CAC) adjusted for margin, and payback periods give a more accurate picture of whether growth is sustainable. For teams managing ecommerce KPIs, this means looking past add-to-cart rates and conversion percentages toward cohort retention, repeat purchase rates, and revenue per customer over time.
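These customer value metrics reduce to simple arithmetic. A minimal sketch with hypothetical unit economics (every figure below is an assumption for illustration):

```python
# Hypothetical unit economics: margin-adjusted CAC payback and LTV.
cac = 120.0             # cost to acquire one customer
monthly_revenue = 40.0  # average revenue per customer per month
gross_margin = 0.60     # contribution margin

monthly_margin = monthly_revenue * gross_margin  # $24 of margin per month
payback_months = cac / monthly_margin            # months to recoup CAC
ltv_24mo = monthly_margin * 24                   # margin over two years

print(f"Payback: {payback_months:.1f} months")
print(f"LTV/CAC (24mo): {ltv_24mo / cac:.1f}x")
```

A five-month payback and a healthy LTV/CAC ratio can coexist with mediocre ROAS, and vice versa, which is why growth-focused teams track both.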

Revenue per session, lead-to-close rates by channel, and downstream conversion quality provide a fuller picture of marketing performance than surface metrics can. A channel that generates high traffic but low-quality leads may look better on a standard dashboard than one generating fewer, higher-value conversions.

The shift does not mean abandoning familiar metrics entirely. Traffic, rankings, and ROAS still provide useful context. The change is in treating them as diagnostics rather than goals. The next piece in this series examines how high-growth organizations build the measurement systems that track these signals, combining marketing mix modeling, incrementality testing, and attribution into a layered approach that answers different questions at different levels of the business.

A chart comparing new and old KPIs for marketing organizations.

FAQs

What Are KPIs in Marketing?

Marketing key performance indicators (KPIs) are the metrics teams use to evaluate performance against business goals. Common marketing KPIs include traffic, leads, conversion rates, ROAS, and customer acquisition cost. The most useful KPIs are ones tied directly to business outcomes rather than activity alone.

What Are Marketing Metrics?

Marketing metrics are the data points used to evaluate marketing performance. These range from top-of-funnel measures like impressions and traffic to bottom-of-funnel measures like conversion rate and revenue. Not all marketing metrics examples reflect real business impact equally, which is why understanding which metrics to prioritize matters as much as tracking them.

How Do You Make a Marketing Report?

A strong marketing report connects activity data to business outcomes. Start by identifying the decisions the report needs to support, then select metrics that reflect progress toward those outcomes. Include both leading indicators, such as branded search volume and engaged session rates, and lagging indicators like revenue and customer acquisition cost.

Conclusion

Marketing measurement has not failed. The environment around it changed, and the metrics that once served as reliable proxies for growth have become less accurate as discovery, attribution, and buyer behavior grew more complex.

The organizations gaining ground are the ones questioning which metrics actually reflect growth, rather than which ones look best in a dashboard. That means looking past traffic and attribution toward signals tied to incremental outcomes, customer value, and causal impact.

This is the foundation the rest of this series builds on. The next installment covers how high-growth companies structure their measurement systems, combining multiple methods to get directional confidence across different levels of the business. If you want to start reviewing your current approach, this guide to website performance metrics is a useful starting point, as is this breakdown of which marketing KPIs are worth keeping and which may be leading your team in the wrong direction.


YouTube adds AI creator matching and ad formats to its partnerships platform

YouTube used its NewFront presentation to unveil a significant upgrade to its Creator Partnerships platform, adding Gemini-powered creator matching, stronger measurement tools, and new ways to run creator content as paid ads.

Why we care. Influencer marketing has become a core part of many brands’ strategies, but it has two persistent friction points: finding the right creators at scale and proving ROI. This update tackles both.

Gemini-powered matching cuts through the noise of three million creators, while the ability to run creator content as paid Shorts and in-stream ads makes performance measurable like any standard campaign, backed by a reported 30% conversion lift.

How it works. The updated platform uses Gemini to recommend creators from a pool of more than three million YouTube Partner Program members, filtered by campaign goals. Advertisers get more control over who they work with and better visibility into how those partnerships perform.

The big new feature. A revamped Creator Partnerships boost lets brands run creator-made content directly as Shorts and in-stream ads — formats YouTube says deliver an average 30% lift in conversions.

The big picture. The announcement builds on BrandConnect, YouTube’s existing creator monetization infrastructure, showing that the platform is doubling down on the creator economy as a growth lever for advertisers — not just a content strategy.

What’s next. Brands interested in the updated tools can watch the full NewFront presentation on YouTube for more details.


Google expands Merchant Center loyalty features to 14 countries and AI surfaces


Google is giving retailers more firepower to promote loyalty program benefits directly within product listings — expanding the program internationally and into its newest AI-powered shopping experiences.

What’s new. Merchants can now highlight member pricing and exclusive shipping options directly on listings. Loyalty annotations have also expanded to local inventory ads and regional Shopping ads — making it easier to promote in-store or geography-specific perks.

Why we care. The more you can personalize an offer for a shopper, the better. Embedding member perks into the moment of purchase discovery — rather than requiring a separate loyalty app or webpage — makes programs more visible and more likely to drive sign-ups.

By the numbers. According to Google, some retailers have reported up to a 20% lift in click-through rates when showing tailored offers to existing loyalty members.

The big picture. Loyalty benefits will now appear on Google’s AI-first surfaces, including AI Mode and Gemini, putting member offers in front of shoppers at an entirely new layer of the search experience.

Where it’s available. The expansion covers 14 countries — Australia, Brazil, Canada, France, Germany, India, Italy, Japan, Mexico, Netherlands, South Korea, Spain, the UK, and the US.

How to get started. Merchants activate the loyalty add-on in Merchant Center, configure member tiers, and set up pricing and shipping attributes. Connecting Customer Match lists in Google Ads is required to display strikethrough pricing and shipping perks to known members.

Don’t miss. US merchants can apply to join a pilot that uses Customer Match as a relationship data source for free listings — potentially expanding loyalty reach without additional ad spend.


59% of SEO jobs are now senior-level roles: Study


SEO hiring is shifting toward senior, strategy-led roles as AI reshapes search and expands the scope of the job. A new Semrush analysis of 3,900 listings shows companies now prioritize leadership, experimentation, and cross-channel visibility over pure technical execution.

Why we care. SEO hiring, career paths, and required skills are changing. Entry roles focus on execution, while most demand sits at the leadership level — owning strategy across search, AI assistants, and paid channels, with clear revenue impact.

What changed. Senior roles dominated, accounting for 59% of listings. Mid-level roles, such as specialists (15%) and managers (10%), trailed far behind.

  • Companies are shifting budget toward strategy as AI tools absorb more execution work.

The skills shift. In-demand capabilities extend beyond traditional SEO into coordination, testing, and decision-making:

  • Project management appeared in more than 30% of listings.
  • Communication led non-senior roles at 39.4%.
  • Experimentation appeared in 23.9% of senior roles compared with 14% of other roles.
  • Technical SEO appeared in about 6% of listings.

Tools and channels. The SEO tech stack now spans analytics, paid media, and data.

  • Google Analytics appeared in 47.7% of listings.
  • Google Ads appeared in 29% of listings.
  • SQL demand grew at the senior level.
  • AI tools like ChatGPT were increasingly listed.

AI expectations: AI literacy is moving from optional to expected:

  • 31% of senior roles mentioned AI.
  • Nearly 10% referenced LLM familiarity.
  • Concepts like AI search and AEO appeared more often.

Pay and positioning: SEO is increasingly treated as a business function.

  • The median salary for senior roles reached $130,000, compared to $71,630 for others. Some listings were much higher.
  • Degree preferences skewed toward business and marketing.

Remote work is now standard. More than 40% of listings offered remote options, with little difference by seniority.

About the data: Semrush analyzed 3,900 U.S.-based SEO job listings from Indeed as of Nov. 25. Roles were deduplicated, segmented by seniority, and analyzed using semantic keyword extraction.

The study. What 3,900 SEO Job Listings Reveal for 2026: Experiments, AI, and Six-Figure Salaries


5-step Google Business Profile audit to improve local rankings


Google Business Profile (GBP) may be getting shoved down the SERPs by ads and AI Overviews more than ever, but it’s still a top source of inbound leads for local businesses — and one of the fastest ways to improve rankings with simple fixes.

Here’s a five-step audit to find and fix the gaps most businesses miss.

1. Evaluate Google review velocity and recency

It’s a common misconception that the business with the most Google reviews wins in Google Maps ranking. While a high review count provides social proof, Google’s algorithm has more of a “what have you done for me lately?” attitude.

The number of reviews you get a month, and how recent your last review was, often outweigh the total count in determining map pack positions. We call these metrics review velocity and review recency.

Think about it like this: If you have 500 reviews but haven’t received a new one since 2024, a competitor with 100 fresh reviews from the last month will likely blow past you.

So, how do you measure your review velocity and recency? Analyze competitors to see how top-ranking businesses perform on those metrics.

Follow these steps:

  • Run a geo-grid ranking scan: Identify which competitors are outranking you for your top keywords.
  • Analyze the last 30 days: Note how many reviews they received this month, and when their most recent one was posted.
  • Benchmark your data: Create a simple table comparing your monthly count and recency.
  • Recommended tools: Places Scout, Local Falcon, or Whitespark for automated grid scans and review data.

You don’t just need more reviews. You need to match or exceed the consistency of top-ranking listings.
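The velocity and recency benchmarks above are straightforward to compute once you have review timestamps from an export or a tool’s API. A minimal sketch with made-up dates:

```python
# Sketch: review velocity (count in the last 30 days) and recency
# (days since the newest review) from a list of review dates.
from datetime import date, timedelta

def review_metrics(review_dates, today):
    window_start = today - timedelta(days=30)
    velocity = sum(1 for d in review_dates if d >= window_start)
    recency_days = (today - max(review_dates)).days
    return velocity, recency_days

# Hypothetical review history for one listing.
reviews = [date(2026, 1, 3), date(2026, 1, 20), date(2025, 11, 2)]
velocity, recency = review_metrics(reviews, today=date(2026, 2, 1))
print(f"Velocity: {velocity} reviews/30d, last review {recency} days ago")
```

Run the same calculation against each competitor in your geo-grid scan and the benchmark table from the steps above builds itself.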

Lead Gen Reviews Performance

You can automate this with Places Scout API data. That’s what our agency does, tracking it consistently to keep clients ahead of competitors. Automated charts make it easier to see how you stack up.

Automating with Places Scout API data

Dig deeper: Local SEO sprints: A 90-day plan for service businesses in 2026

2. Add keywords to your business name

Including keywords in your business name is one of the most powerful local ranking signals. Sometimes a profile will rank in the map pack based solely on its name, beating out businesses with better reviews and higher recency.

Google’s algorithm hasn’t fully filtered out this type of keyword targeting, so it remains an opportunity. Take this business: only 21 reviews, yet it ranks first in the map pack for an extremely competitive term, thanks to the keywords in its business name.

AC repair dallas

You can’t simply keyword-stuff your name, though. Google can verify your legal name and take action to remove keywords from your profile — or worse, require reverification or suspend it. Your best option is a legal DBA (doing business as) certificate, also known as a trade name, or fictitious name certificate, in some areas.

For example, if your legal name is “Smith & Sons,” you’re missing out. Registering a DBA as “Smith & Sons HVAC Repair” allows you to update your GBP name while technically adhering to Google’s guidelines.

  • Competitor analysis: Are your competitors outranking you simply because their name contains the keyword? If yes, you need to take action to match those tactics.
  • Make it legal: Check your local Secretary of State website. Filing a DBA is an effective SEO tactic for moving from Position 4+ into the map pack for certain keywords.
  • Update business website: Update your website with the new name. Google uses website content to verify business details and may update your GBP accordingly. Make sure it only finds the new name, not outdated versions.
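
To check for leftover mentions of the old name, a quick audit helps. This is a hedged sketch with hypothetical URLs and inlined page text; in practice you'd fetch each page with a crawler:

```python
# Hypothetical crawl results: URL -> page text. In practice you'd fetch these
# pages with a crawler; they're inlined here for illustration.
pages = {
    "https://example.com/": "Smith & Sons HVAC Repair serves the Dallas area...",
    "https://example.com/about": "About Smith & Sons, founded in 1998...",
}

OLD_NAME, NEW_NAME = "Smith & Sons", "Smith & Sons HVAC Repair"

for url, text in pages.items():
    # A stale mention is an old-name occurrence that is not part of the new name.
    stale = text.replace(NEW_NAME, "").count(OLD_NAME)
    if stale:
        print(f"Outdated name found on {url}")
```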

3. Optimize categories (primary vs. secondary)

Choosing the wrong primary category for your GBP is a leading reason businesses fail to rank. If you’re a personal injury lawyer but your primary category is set to “trial attorney,” you’re fighting an uphill battle to rank for highly competitive terms like “personal injury lawyer.”

How to pick the best primary category:

  • Competitor analysis: Use Chrome extensions like Pleper or GMB Everywhere to see exactly which primary categories the top-ranking businesses are using.
  • Max out secondary categories: You have 10 total slots. Fill all of them with relevant subcategories.
  • Check off all relevant services: Under each category, Google lists specific services. Select the ones relevant to your business.

Personal injury attorney - Google Search

Dig deeper: How to pick the right Google Business Profile categories

4. Improve your GBP landing page

Many businesses link their GBP to their homepage and stop there. For multi-location businesses, this is a mistake. You should link to a dedicated local landing page optimized for your top keywords that mentions the city your GBP address is in.

Linking your GBP to a hyper-local city page (e.g., /tampa-plumbing/ instead of the homepage) reinforces “entity alignment.” When the information on your GBP matches a unique, highly relevant page on your site, Google’s confidence in your location increases, often leading to a jump in the local pack. Make sure your GBP landing page is optimized with all your services and links to dedicated service pages to boost your listing for service-specific searches.

Watch out for the diversity update. Sometimes a business ranks well in the map pack, but its website is nowhere to be found in organic results. This is often due to Google’s diversity update.

If you suspect you’re being filtered out organically, try linking your GBP to a different localized interior page. This is often a quick fix that helps your site reappear in organic search. Here’s an example of a client I recently helped beat the diversity update with a simple GBP landing page swap.

GBP landing page swap results

Dig deeper: Google’s Local Pack isn’t random – it’s rewarding ‘signal-fit’ brands

5. Understand proximity and city borders

Your business’s physical location within the city and its proximity to the city center are extremely strong ranking signals. They’re not signals you can easily manipulate, since relocating your office, store, or warehouse is rarely practical. However, you need to know your “ranking radius” and how much room there is to improve rankings for certain keywords within it.

Identify the ranking ceiling in your market. I use Local Falcon’s Share of Local Voice (SoLV) metric to do this. If your top competitors only have a 53% SoLV, as in this example, it’s unlikely you’ll be able to get more than that either. 

Competitor Report - Local Falcon

This shows when you’ve “maxed out” a keyword and need to target new keywords or open a new location outside that radius. It can also show there’s room to improve — and that you need to increase your SoLV score.

Keep in mind that certain keywords are harder to improve based on where your business is physically located. If your map pin sits anywhere outside the Google-defined border of your city, you will struggle to rank for explicit terms like “Plumber Tampa FL,” and for searches within the city borders in general. Always do this analysis on a keyword-by-keyword basis.

Tip: In the current local search landscape, expanding your physical footprint, and verifying more GBPs, is the most reliable way to grow visibility. Max out your current GBPs first, then look for your next location.

Dig deeper: The proximity paradox: Beating local SEO’s distance bias

Prioritize where you can win now

This is a strong starting point, but it’s just the beginning. From review strategy and category selection to city borders and the diversity update, every detail counts.

Between overreaching ads and ever-expanding AI Overviews, staying proactive with your GBP strategy is the only way to keep your leads flowing from the map pack. Build your GBP foundation, max out your current locations, and strategize new locations to keep your business in the top spot across your service area.


15 Key Marketing Automation Statistics

The marketing tech stack is always evolving. Marketing automation software enables improved efficiency with various features, from customer segmentation to campaign management.

What’s the market size of the marketing automation industry? What are the adoption rates of marketing automation, and what benefits does it bring to businesses? Read on for the answers, backed by recent marketing automation statistics.

Here’s what you’ll find on this page:

  • Marketing Automation Industry Revenue
  • Leading Players in the Marketing Automation Software Industry
  • Marketing Automation Budget Changes
  • Top Channels Where Marketing Automation is Used
  • Top Benefits of Marketing Automation
  • Marketing Automation and Customer Data Platform Integration

Marketing Automation Industry Statistics

In 2026, marketing automation is steadily growing, with spending reaching billions every year.

This section presents key statistics on marketing automation revenue, the top players in the industry, and marketers’ expected budget changes for automation.

  • Between 2026 and 2032, worldwide marketing automation industry revenue is forecast to grow by roughly 157%, from $8.44 billion to $21.7 billion – Statista

Year Marketing automation market revenue (worldwide)
2021 $4.79 billion
2022 $5.19 billion
2023 $5.86 billion
2024 $6.62 billion
2025 $7.47 billion
2026 $8.44 billion
2027 $9.53 billion
2028 $10.76 billion
2029 $12.14 billion
2030 $13.71 billion
2031 $17.2 billion
2032 $21.7 billion
  • Revenue of marketing automation solution vendors is forecast to reach $6.6 billion in 2026 (up from $2.9 billion in 2020) – Frost & Sullivan
  • Around 68% of surveyed marketers expect an increase in their budget for marketing automation for the upcoming year – Ascend2
Marketing Automation Budget Share of marketers
Increasing significantly 14%
Increasing moderately 54%
Staying the same 21%
Decreasing moderately 9%
Decreasing significantly 2%
  • HubSpot dominates the marketing automation software market, holding a market share of 29.58%. Other commonly used marketing automation tools include RD Station (9.25%), Welcome (7.38%), All In Marketing Cloud (6.87%), and Cheetah Digital (6.86%) – Datanyze

  • As of March 2026, at least 454 companies provide marketing automation software solutions – G2
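
The growth implied by the revenue table above is easy to sanity-check with a few lines of Python:

```python
# Forecast endpoints from the revenue table above (billions of USD)
revenue = {2026: 8.44, 2032: 21.7}

total_growth = (revenue[2032] - revenue[2026]) / revenue[2026]
years = 2032 - 2026
cagr = (revenue[2032] / revenue[2026]) ** (1 / years) - 1

print(f"Total growth 2026-2032: {total_growth:.1%}")  # roughly 157%
print(f"Implied annual growth (CAGR): {cagr:.1%}")    # roughly 17%
```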

Marketing Automation Usage and Performance Statistics

Marketing automation software is widely used as part of an effective marketing tech stack.

This section highlights key insights into current adoption and planned usage of marketing automation and its impact on helping achieve business objectives.

  • Email marketing (58%), social media management (49%), and content management (33%) are the areas where marketers most commonly use marketing automation – Ascend2

Area Marketing Automation Usage
Email marketing 58%
Social media management 49%
Content management 33%
Paid ads 32%
SMS marketing 30%
Campaign tracking 28%
Landing pages 27%
Live chat 24%
SEO efforts 22%
Workflows/visualization 20%
Account-based marketing 20%
Sales funnel communications 19%
Push notifications 18%
Dynamic web forms 18%
Lead scoring 17%
  • 29% of surveyed marketers are planning to implement marketing automation for social media management and paid ads. Another 28% claim they will be adding marketing automation to email marketing programs – Ascend2

Planned Marketing Automation Usage

Area Planned Marketing Automation Usage
Social media management 29%
Paid ads 29%
Email marketing 28%
Landing pages 21%
SMS marketing 21%
Content management 20%
Campaign tracking 18%
Live chat 18%
Push notifications 17%
Account-based marketing 16%
Workflows/visualization 16%
SEO efforts 15%
Dynamic web forms 14%
Sales funnel communications 13%
Lead scoring 9%
  • Optimizing overall strategy (43%) and improving data quality (37%) are the top goals for improving marketing automation among surveyed B2B and B2C marketers – Ascend2

Marketing Automation Primary Goals

Primary Goal for Improving Marketing Automation Share of Marketers
Optimize overall strategy 43%
Improve data quality 37%
Identify ideal customers/prospects 34%
Optimize messaging/campaigns 31%
Increase personalization 30%
Decrease costs/drive efficient growth 21%
Increase automation across the customer journey 19%
Integrate technologies/data 15%
Increase employee adoption/usage 13%
  • Around 41% of marketers report that their customer journeys are “mostly automated” or “fully automated” – Ascend2
Extent of Marketing Automation across the Customer Journey Share of Marketers
Fully automated 9%
Mostly automated 32%
Partially automated 59%
  • 30% of surveyed marketers strongly agree with the statement that their “marketing automation platform makes it easy to build effective customer journeys” – Ascend2
“Marketing Automation Platform Makes It Easy to Build Effective Customer Journeys” Share of Marketers
Strongly agree 30%
Somewhat agree 59%
Somewhat disagree 10%
Strongly disagree 1%
  • About 1 in 4 (26%) of marketers say their multi-channel marketing strategy is fully or mostly automated. Another 22% claim it’s not automated at all – Ascend2
Extent of Multi-Channel Marketing Strategy Automation Share of Marketers
Fully 5%
Mostly 21%
Partially 29%
Very little 23%
Not at all 22%
  • Price is considered a key factor by 58% of marketers when deciding on a marketing automation tool. Ease of use (54%) and customer service (27%) are the other top factors driving automation tool purchases – Ascend2

Marketing Automation Solution Purchase Factors

Factors Driving Marketing Automation Solution Purchase Share of Marketers
Price 58%
Ease of use 54%
Customer service 27%
Customization options 24%
Integration capabilities 22%
Breadth of features 21%
Depth of features 19%
Data visualization/analytics 13%
Streamlined onboarding/training 11%
Data consolidation capabilities 10%
  • Only 18% of B2B marketers say they use marketing automation that’s integrated with a customer data platform (CDP). Another 42% use B2B marketing automation but don’t have a CDP in their current tech stack, and the remaining 40% have both a B2B marketing automation platform and a CDP, but they aren’t integrated – Adobe

Marketing Automation Benefits Statistics

Marketing teams can greatly improve their effectiveness through the use of automation software, which offers a number of benefits, from improving customer experience to enabling better use of marketing budgets.

  • Improving customer experience (43%), enabling better use of staff time (38%), and better data and decision-making (35%) are the most commonly reported advantages of using marketing automation among surveyed marketers – Ascend2

Marketing Automation Benefits

Advantage of Marketing Automation Share of Marketers
Improves customer experience 43%
Enables better use of staff time 38%
Better data and decision-making 35%
Improves lead generation and nurturing 34%
Enables better use of the budget 33%
Increases personalization options 24%
Increased ability to measure important metrics/KPIs 23%
Aligning marketing efforts to adjacent departments 21%
  • Around 2 in 3 (66%) surveyed marketing professionals state that their current marketing automation is “somewhat successful” in helping to achieve marketing objectives. Another 25% of respondents say it’s “very successful”. Only 9% of marketers report no success from their marketing automation efforts – Ascend2

Conclusion

We hope you found this list of marketing automation statistics useful.

We update this list frequently, so feel free to check back later for new insights.

The post 15 Key Marketing Automation Statistics appeared first on Backlinko.
