Web Design and Development San Diego

Google simplifies Analytics and Ads consent rules


Google is changing how Google Analytics and Google Ads share consent signals — a shift that could have major implications for marketers’ tracking setups starting this summer.

What’s happening. Beginning June 15th, Google Ads data collection will rely solely on the ad_storage consent setting, removing a layer of complexity that previously came from linked Google Analytics configurations.

Until now, ad data flows between Analytics and Ads were influenced by both Consent Mode and Google Signals settings inside GA. That created confusion for marketers, especially because some of the controls were buried in Analytics settings rather than clearly surfaced in ad consent banners or tag implementations.

Starting in June, Google is simplifying that structure. Google Analytics data collection will still be governed by Google Signals, but Google Ads will look only at whether users have granted ad_storage consent.

That means a linked Google Analytics tag will no longer affect whether Google Ads can collect or use advertising identifiers.

What changes. For many advertisers, the update will effectively create a cleaner — but more rigid — consent framework.

If ad_storage is granted, Google Ads may use all available advertising signals, including linking activity to a user’s signed-in Google account when possible. If ad_storage is denied, Google will be limited to less persistent signals, such as URL parameters like gclid.

There appears to be little middle ground. Marketers will have less ambiguity about what drives ads data collection, but they will also have fewer ways to fine-tune what gets shared.

Why we care. This change makes consent settings much more consequential for measurement, attribution and audience targeting. From June, whether Google Ads can use identifiers will depend almost entirely on the ad_storage signal, so any gaps or errors in consent mode setup could directly affect campaign performance data.

It also removes some hidden complexity from linked Google Analytics settings, giving advertisers clearer rules — but less flexibility.

Between the lines. The move reflects Google’s broader push to make consent systems easier to understand for advertisers and regulators.

A single source of truth for ad consent could reduce implementation errors and make compliance easier to explain. But it also puts more pressure on brands to ensure their Consent Mode setup is working properly.

If consent updates are delayed, misconfigured or incomplete, marketers could see gaps in measurement, attribution and audience targeting.

What marketers should do now. Audit your consent implementation before the June deadline.

Teams should confirm that Consent Mode update calls are firing correctly and that ad_storage settings accurately reflect user choices. Brands with Google Signals turned off should pay particular attention: under the new setup, they could see more Ads-linked data than before if users grant ad consent.
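
A minimal sketch of what that audit is checking, using the standard Consent Mode (v2) gtag calls, where the default state is set before any Google tags load and the update call fires when the user responds to your banner (the banner logic itself is assumed):

```javascript
// On-page Consent Mode sketch. Runs before any Google tags load.
window.dataLayer = window.dataLayer || [];
function gtag() { dataLayer.push(arguments); }

// Default: deny storage until the user makes a choice.
gtag('consent', 'default', {
  ad_storage: 'denied',
  ad_user_data: 'denied',
  ad_personalization: 'denied',
  analytics_storage: 'denied'
});

// Called from your consent banner once the user accepts ads.
function onAdsConsentGranted() {
  gtag('consent', 'update', {
    ad_storage: 'granted',
    ad_user_data: 'granted',
    ad_personalization: 'granted'
  });
}
```

Under the June change, it is the ad_storage value in these calls, not any linked Analytics setting, that determines what Google Ads can collect.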

For marketers, the takeaway is simple: cleaner rules are coming, but getting consent right will matter more than ever.

Dig deeper. Updates to Google Analytics Data Controls



Google is bringing back a familiar name: Data Studio

In an AI-driven economy, companies have more data than ever but still struggle to turn it into useful daily decisions. Google is betting that a revamped Data Studio can become the place where users quickly explore, organize and act on data across its ecosystem.

Why the switch back. Google says the new Data Studio will serve as a central hub for a range of assets, from traditional reports and dashboards to data apps built in Colab and BigQuery conversational agents. The idea is to give users one place to work with the tools and information that shape their business each day.

Flashback. Three years ago, Google folded Data Studio into its broader analytics push by rebranding it as Looker Studio. Now, it is separating the products again as customer needs evolve.

Two versions. Google is launching two versions of the product.

  • Data Studio will remain free for individuals and small teams that need quick analysis and visualization.
  • Data Studio Pro, meanwhile, is aimed at larger organizations that need stronger security, compliance, management controls and AI capabilities, with licenses sold through the Google Cloud and Workspace admin consoles.

Why we care. The (kind of) new Data Studio could make it much easier to pull together campaign, audience and performance data from across Google’s ecosystem in one place. That means faster reporting, easier ad hoc analysis and quicker answers without relying as heavily on analysts or engineering teams. For brands already using Google Ads, BigQuery or Sheets, it could streamline how teams track performance and make day-to-day budget and creative decisions.

Where Looker fits in. Under the new structure, Looker will remain Google Cloud’s enterprise business intelligence platform, focused on governed data, semantic modeling and large-scale analytics. Data Studio, by contrast, is being positioned as the faster, more flexible option for personal exploration, ad hoc reporting and lightweight dashboards across services like BigQuery, Google Sheets and Ads.

What’s next. For existing users, Google says the transition should be seamless. Current reports, data sources and assets will carry over automatically, with no action required.

Google plans to share more about the relaunch and its broader analytics strategy at Google Cloud Next ’26 later this month.

Dig deeper. Data Studio returns as new home for Data Cloud assets



Introducing a new spam policy for “back button hijacking”

Today, we are expanding our spam policies to address a deceptive practice known as “back button hijacking.” It will become an explicit violation of the “malicious practices” section of our spam policies, and sites that use it may be subject to spam actions.


How to Do Keyword Research for SEO

Key Takeaways

  • Keyword research is the process of finding and analyzing the search terms your audience uses to determine which ones are worth targeting and why.
  • Search intent, keyword difficulty, search volume, and topical authority are the core variables that determine whether a keyword is a viable target for your site.
  • AI Overviews now appear in a significant share of searches and measurably reduce click-through rates. 
  • Long-tail keywords carry more weight than ever. They convey highly specific intent and mirror the natural language patterns behind voice and LLM queries.
  • Prompt research is a discipline that sits alongside traditional keyword research. It accounts for how people interact with AI tools, where query structure and user intent differ meaningfully from traditional search.

Have you been tracking your target keywords, only to watch rankings hold steady while organic traffic falls? 

You’re not imagining it. 

According to SEOClarity, AI Overviews (AIOs) appear for 30 percent of U.S. desktop searches, and according to Ahrefs, that presence alone reduces organic click-through rate (CTR) for position-one results by 58 percent.

You might think that makes keyword research for SEO less important now, but that couldn’t be further from the truth. 

Your research still matters. What’s changed is the goal. High-volume terms alone won’t cut it anymore. 

You need to identify which keywords still drive clicks and understand how large language model (LLM) prompts are reshaping the demand signals you rely on.

This guide covers the full research process, updated for how search works today.

What Is Keyword Research?

Keyword research is the process of identifying and analyzing the search terms your target audience types into search engines and LLMs. The goal is to determine which terms are worth targeting based on factors like the intent behind a user’s query.

Intent is the why behind what people search, and it’s an area many teams underinvest in.

Finding a high-volume keyword is easy enough. The harder part is understanding the true intent behind the keyword. That’s the key to making sure your content satisfies that intent better than what’s already ranking.

Why Is Keyword Research Important for SEO?

Creating content without keyword research is a gamble. 

Sure, you might produce something useful. However, without confirming what people are actually searching for and that you have a realistic shot at ranking, you’re spending resources on content that may never be found.

Keyword research solves for three variables that determine whether a keyword is worth pursuing:

  • Search volume tells you how many people are looking for a term each month. A keyword with zero volume isn’t worth a dedicated page. Search volume alone doesn’t close the case, though. The vast majority (94.74 percent) of keywords receive 10 or fewer monthly searches, so low-volume, high-relevance terms make up most of the search landscape, and they can still drive traffic that converts.
  • Keyword difficulty tells you how competitive a keyword is based on the authority of the pages currently ranking for it. This is where many teams misjudge their opportunities. A keyword with a high difficulty score might be within reach for a high-authority domain but completely out of scope for a site with limited backlink equity. Targeting beyond your domain’s current authority just adds to your backlog.
  • Topical authority has become increasingly important over the past two years. Google has gotten a lot better at evaluating whether a domain demonstrates depth and consistency within a topic area. Keyword research should inform a content strategy that builds clusters of related content rather than targeting disconnected terms.

There’s also the AI layer. 

AIOs now appear in a significant share of searches and reshape the value of a keyword depending on whether one shows up. 

Research from Seer Interactive tracking 3,119 informational queries finds that organic CTR dropped 61 percent for queries with AIOs compared to queries without them.

Notice how a more conversational long-tail keyword on the same subject triggers a Google AIO, while a product-based search does not:

Google AI Overview for how to do keyword research

Source: Google.com

Google results for keyword research tools query

Source: Google.com

See how small differences in keywords can drastically change your results? This is why doing proper keyword research is important.

Long-tail keywords are more likely to trigger AIOs, which means users get their answer without clicking through. 

That’s worth knowing, but it’s not a reason to abandon those keywords. Flag them during analysis and see where they fit in your broader strategy.

Why Search Intent Is Important for Keyword Research

Search intent is the underlying goal behind a query. 

Google organizes intent into four broad categories: 

  • Informational (users want to learn something)
  • Navigational (users are looking for a specific site or brand)
  • Commercial (users are comparing options before a purchase)
  • Transactional (users are ready to buy or act)

Four keyword intent types chart by NP Digital

Intent type is a big deal because Google matches results to intent. 

An e-commerce product page won’t rank for a query that Google interprets as informational. A how-to article won’t win for a transactional query where users want a product listing. 

No amount of optimization compensates for a content-to-intent mismatch.

Use keyword research for SEO to verify intent before you commit to a content format. The fastest way to do this is to run the keyword in Google and see what’s ranking. 

If listicles dominate page one, that’s what Google thinks the searcher wants. If product pages own the top positions, a blog post isn’t going to break through.

“What sort of things do they search for during the awareness, research, and transaction phases of their buying journey? Target each of these clearly in different areas of the website by bucketing groups of terms into these different intent groups,” explains William Kammer, Vice President of SEO at NP Accel.

Bucketing your keyword list by intent before mapping keywords to pages is one of the most practical things you can do to make sure your SEO efforts match how your audience actually moves through the funnel.

Prompt Research and AI Visibility

Traditional keyword research focuses on what people type into Google. 

Prompt research focuses on how people interact with AI tools like ChatGPT, Perplexity, and Gemini. The patterns across them are quite different.

When someone searches Google for “email marketing tools,” they enter that short phrase (or a close variant) and scan a list of results. 

When someone asks ChatGPT the same question, the query looks more like this: “I run a small e-commerce business, and I’m looking for an email marketing tool that integrates with Shopify and has automation features. What would you recommend?”

The intent might be the same, but the structure and the specificity are completely different.

LLMs take these longer queries and break them down into three key components:

  • Persona: Defines who the user is and helps the LLM tailor the response to them
  • Context: Identifies the user’s specific needs and narrows the scope of the answer
  • Question: The actual “ask” contained within the query defines the LLM’s output

Anatomy of an AI prompt persona context question

Source: Claude.ai

This structural difference affects your content strategy. 

LLMs synthesize information from multiple sources to generate a response. They evaluate content for credibility and depth. 

A page optimized around a head keyword might rank well in Google but never appear in an LLM response if it doesn’t fully answer the underlying question a user would actually ask.

Prompt research is the practice of identifying the underlying questions within the full, natural-language queries people use when interacting with AI tools and the keyword-related topic clusters those queries reveal.

Think of it as keyword research for a different interface. LLMs use a process called query fan-out, breaking out a single user prompt into multiple sub-queries to retrieve information. That means your content needs to answer not just the surface question but the related ones surrounding it.
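
Query fan-out happens inside the model, but you can approximate the exercise when planning coverage. A purely illustrative sketch; the templates below are brainstorming assumptions, not how any LLM actually decomposes a prompt:

```javascript
// Brainstorm the kinds of sub-queries a model might fan a topic into.
// Templates are illustrative; swap in patterns from your own niche.
function fanOut(topic) {
  return [
    `what is ${topic}`,
    `best ${topic} for small business`,
    `${topic} comparison`,
    `how to choose ${topic}`
  ];
}

const subQueries = fanOut('email marketing tools');
```

If your page answers most of the sub-queries on such a list, it has a better chance of surviving the fan-out step.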

Gartner predicts that a quarter of search volume will shift toward AI-driven chatbots and answer engines.

That shift is gradual, but it’s not stopping. Get ahead of it now by building prompt research into your workflow alongside traditional keyword research.

How to Do Keyword Research

Good keyword research starts with the same core process regardless of where you’re starting. Here’s how to work through it, whether you’re building a content strategy from scratch or auditing an existing one.

Six-step keyword research process by NP Digital

1. Revisit Your SEO Goals

Before you open a keyword tool, get clear on what you’re trying to accomplish. Your keyword strategy should follow from your business goals, not the other way around.

A site prioritizing revenue will have a different keyword mix than one focused on growing organic traffic volume. A brand building topical authority in a new vertical needs different content targets than one trying to hang on to existing rankings. 

Your objectives will dictate the metrics you optimize for and which parts of the keyword funnel you invest in first.

Three common goal types shape keyword priorities:

  • Conversion-focused goals call for commercial and transactional keywords. These terms sit at the bottom of the funnel and carry strong purchase or sign-up intent. They also tend to have higher keyword difficulty. That means traffic volumes are often lower, but the quality is high.
  • Traffic-growth goals point toward informational keywords with higher search volumes. These terms attract users earlier in the funnel and are generally easier to rank for, though they convert at lower rates.
  • Topical authority goals are where keyword clusters shine. These are groups of semantically related terms that together signal depth of expertise to Google. The cluster approach is a longer-term play, but it’s often the only sustainable way to rank for the high-difficulty terms in competitive verticals.

Keep your competition in mind as you match keywords to goals, too. 

If a transactional keyword is out of reach for your domain right now, targeting it could hurt your conversion goals and waste resources. A smarter move is finding long-tail keywords around the same seed and intent as a backdoor into that topic.

2. Keyword Discovery

Keyword discovery is where you build a broad list of potential targets before narrowing it down during analysis. A lot of teams spend too much time here without a clear method. Here’s one that works.

Start by mapping your core topic areas from your audience’s perspective. Consider their pain points and the industry terminology they naturally use. These become your seed keywords: the starting points you’ll expand through tools.

From there, enter your seed keywords into a keyword tool. 

My SEO tool, Ubersuggest, has a Keyword Ideas feature that gives you dozens of variations to shape the focal point of your content. 

Here’s what it delivers for the seed keyword “hiking boots”:

Ubersuggest Keyword Ideas results for hiking boots

Source: https://app.neilpatel.com/en/ubersuggest/keyword_ideas/

Run enough seed keywords through the tool to build a list of hundreds of candidates before you start cutting.

Your competitors are a valuable third-party source, too. Pull competitor domains into Ubersuggest’s Keywords by Traffic feature to see which keywords are driving traffic to their pages. This surfaces real gaps in your strategy rather than theoretical ones.

Here’s what you get when you search my domain, neilpatel.com.

Ubersuggest Keywords by Traffic for neilpatel.com

One caveat to note is that tools may not yet have reliable volume data for trending or emerging topics. 

Jonathan Hoffer, SEO Manager at NP Digital, notes that “in the case of new trends, they might not appear in a tool, so you’ll have to check social media or forums to see if something is trending.”

Long-Tail Keywords

Long-tail keywords are search phrases of three or more words. They carry lower search volumes than head terms, but they’re more specific. That means they face less competition and tend to attract users with clearer intent, which often translates to higher conversion rates.
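
The three-or-more-words rule of thumb is easy to apply across a whole keyword export. A minimal sketch (the word-count threshold is the heuristic described above, not a hard standard):

```javascript
// Split a keyword list into head terms and long-tail terms
// using the three-word rule of thumb.
function splitByTail(keywords, minWords = 3) {
  const head = [];
  const longTail = [];
  for (const kw of keywords) {
    const wordCount = kw.trim().split(/\s+/).length;
    (wordCount >= minWords ? longTail : head).push(kw);
  }
  return { head, longTail };
}

const { head, longTail } = splitByTail([
  'hiking boots',
  'hiking boots skechers',
  'best waterproof hiking boots for wide feet'
]);
// 'hiking boots' lands in head; the other two are long-tail.
```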

“Hiking boots skechers” illustrates the point well. Its difficulty score is lower than the seed keyword’s, meaning it’s easier to rank for.

As you can see below, Ubersuggest rates “hiking boots” 39 in SEO difficulty vs. 27 for “hiking boots skechers.”

Ubersuggest SEO difficulty hiking boots
Ubersuggest SEO difficulty hiking boots skechers

That keyword is still valuable, though, because someone typing “hiking boots skechers” probably knows exactly what they want to buy. That means the odds are good that they’re close to a purchasing decision. 

A page that directly addresses that particular brand is far more likely to rank and convert than a generic “hiking boots” page ever would for that searcher.

The value of long-tail keywords goes beyond traditional SEO.

For starters, voice search queries are naturally long-tail. They’re phrased the way people speak in real life rather than in typed shorthand.

Someone typing might enter “hiking boots waterproof.” The same person using voice search asks, “What are the best waterproof hiking boots for wide feet?”

LLM prompts follow the same conversational pattern. A user asking an AI assistant a question phrases it the way they’d phrase it to a knowledgeable colleague. 

Targeting long-tail keywords in these cases gives you the best shot at matching how your audience searches.

Local Keywords

Local keyword research follows the same core process as broader keyword research. There’s one important distinction, though: Potential competitors and search intent are filtered through geography. 

Someone searching “pizza delivery” in Santa Monica isn’t looking for the same results as someone searching the same term in Chicago. Both are looking to get pizza delivered, yes, but the keyword effectively becomes a different target once location comes into play.

Don’t limit yourself to a single location modifier. 

A pizzeria in Santa Monica can target “pizza delivery Santa Monica” and neighborhood-level variants like “pizza near the pier.” Service-specific combinations like “late night pizza delivery Santa Monica” work, too.

Each geographic variation is a keyword opportunity in its own right.

Local keywords tend to have lower difficulty than non-local ones, but that doesn’t make them uniformly easy. 

Local rankings don’t run on content alone. Your Google Business Profile and the consistency of your name, address, and phone number (NAP) across the web factor in, too.

3. Keyword Analysis

Keyword target criteria checklist by NP Digital

By the end of discovery, you’ll have a long list of potential keywords. Keyword analysis is how you cut it down to a working set.

The primary metrics to evaluate are search volume, keyword difficulty, and search intent alignment.

A tool like Ubersuggest lets you organize all your candidates in a Keywords List and sort by these variables simultaneously, which is faster than evaluating them one at a time.

Ubersuggest Keyword Lists for activewear research

The right search volume floor depends on your goals. Don’t automatically filter out low-volume keywords. A term with 50 monthly searches and clear commercial intent can be worth more than a 5,000-volume informational keyword with no realistic conversion path.

For keyword difficulty, calibrate your threshold to your domain authority. 

Sites with limited backlink equity are usually better off focusing on terms with difficulty scores under 40. Higher-authority domains have more room to compete for scores of 50 and above. What counts as realistic is site-specific.

After sorting by the numbers, run a Google search on each shortlisted keyword and analyze the search engine results page (SERP) directly. Your goal is to answer two questions:

  • Does the content format match what you can produce? If every top-ranking result is a detailed comparison guide and you’re planning a product page, that’s an intent mismatch.
  • Does your domain belong in this conversation? Look at who’s ranking. If the top results are all major publications with significantly more backlink equity than your site has, be realistic about your timeline and consider adjusting your target keyword.

You should also consider whether your target keyword generates an AIO. An AIO’s presence doesn’t make a keyword a bad target, but it does change how you measure success. For those terms, landing an AIO citation matters as much as ranking position.

Nikki Brandemarte, Sr. SEO Strategist and Local SEO Team Lead at NP Digital, offers this guidance: “Pay attention to content coverage for specific topic areas. For example, are your SERP competitors publishing multiple blogs that explain the basics of a topic, or a single comprehensive guide? This can help pinpoint gaps in topical authority.”

By the end of analysis, every keyword on your shortlist should clear these bars:

  • Measurable search volume
  • Relevant to your brand or industry
  • A difficulty score your domain can realistically compete for
  • Clear search intent alignment
  • A content format your site can actually produce
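
The numeric parts of that checklist can be screened programmatically before the manual SERP review. A hedged sketch, assuming each candidate carries volume and difficulty fields from your tool export (the field names and thresholds are illustrative):

```javascript
// Keep candidates that clear a volume floor and a difficulty cap.
// Calibrate maxDifficulty to your domain's authority.
function shortlist(candidates, { minVolume = 10, maxDifficulty = 40 } = {}) {
  return candidates.filter(
    (kw) => kw.volume >= minVolume && kw.difficulty <= maxDifficulty
  );
}

const picks = shortlist([
  { term: 'hiking boots', volume: 40500, difficulty: 39 },
  { term: 'hiking boots skechers', volume: 880, difficulty: 27 },
  { term: 'boots', volume: 165000, difficulty: 74 }
]);
// 'boots' is dropped by the difficulty cap despite its volume.
```

Intent alignment and content format still require eyeballing the SERP; this only narrows the list you review.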

4. Keyword Targeting

Once you have a refined keyword list, you need to decide which keywords to pursue first and which URLs to target them with. 

For prioritization, start with keywords that combine low difficulty with reasonable volume. These are your highest-probability wins. They won’t always be the most valuable keywords on your list, but early traction validates the strategy and gives you ranking data to learn from.

From there, move to high-intent commercial keywords. These carry more difficulty but have the most direct line to revenue. A few hundred visitors from a well-targeted commercial keyword can generate more return than thousands of visits from an informational term.

Finally, layer in top-of-funnel, high-volume informational terms. These are the awareness plays. They’re hard to rank for and have longer time horizons, but they’re important for building topical authority over time.

When assigning keywords to pages, be deliberate about avoiding keyword cannibalization.

Cannibalization happens when two or more pages on your site target the same or nearly identical keywords. This splits ranking signals, creating competition between your own content. 

It’s one of the more common structural problems in mature content programs. Audit for it before you start mapping new keywords to existing pages. If you find two pages competing for the same term, consolidate, redirect, or clearly differentiate the content before adding more.

5. Keyword Optimization

With your keyword targets set, optimization is how you signal relevance to search engines without sacrificing content quality. Here’s a rundown of what current best practices look like.

  • Title tag and H1: Your primary keyword belongs in both. This remains one of the most consistent on-page ranking signals. According to Rankability, 93.5 percent of page-one results use their target keyword in the title or H1.
  • URL slug: Use a clean, keyword-inclusive URL. Research shows that URLs that include the target keyword see up to 45 percent higher click-through rates than those without.
  • Meta description: Your meta descriptions don’t directly influence rankings, but they do influence clicks. The goal is to include the keyword naturally and give searchers a clear reason to click.
  • Body copy: Use your keyword and related semantic terms throughout, but write it for the reader first. Resist the urge to stuff keywords. Density has declined as a ranking factor. Pages in the top 10 today have significantly lower keyword density than those that ranked well even a few years ago. 
  • Image alt text: Include your keyword in at least one image’s alt attribute on the page. Alt text serves accessibility and SEO purposes.
  • Structured data: Schema markup helps search engines and AI systems understand the content type and context of your page. For competitive keywords, structured data improves your eligibility for featured snippets and AIO citations.
  • Content completeness: For any keyword you’re seriously targeting, your content needs to address the topic more thoroughly than what’s currently ranking. That doesn’t mean longer for its own sake. Your piece can be shorter and still outrank what’s currently there if yours is more helpful.
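
The clean-URL point above is easy to standardize across a site. A minimal slug helper (one reasonable approach, not the only one):

```javascript
// Turn a working title into a lowercase, hyphen-separated URL slug.
function slugify(title) {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9\s-]/g, '') // drop punctuation
    .trim()
    .replace(/\s+/g, '-')         // spaces -> hyphens
    .replace(/-+/g, '-');         // collapse repeated hyphens
}

// slugify('How to Do Keyword Research for SEO')
//   -> 'how-to-do-keyword-research-for-seo'
```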

For highly competitive keywords, link building to the specific page will almost certainly be part of the equation. Rankings alone won’t hold in a tough vertical without external authority pointing at the page.

6. Keyword Tracking

Systematically tracking your keyword research is what separates good SEO results from great SEO. 

Rankings change, and competitor or algorithm adjustments can swiftly change the playing field. A tracking system catches those changes before they become problems.

Typically, keyword research tools include a rank-tracking feature that monitors your keyword positions daily and displays ranking distribution or visibility trends across your tracked keyword set. 

Here’s what Ubersuggest’s Rank Tracking feature looks like:

Ubersuggest Rank Tracking dashboard keyword SEO

You can track performance separately by desktop and mobile, which is a big plus given how differently Google’s SERPs behave across devices.

The core metrics to monitor are:

  • Ranking position
  • Organic impressions via Google Search Console
  • CTR

CTR is especially worth watching for any keywords where AIOs are present. 

A stable ranking alongside a declining CTR is a signal that an AIO has entered the picture, but don’t panic. This is less a traffic problem and more an opportunity for content optimization. You may be able to go back and refresh that page with long-tail keywords that better align with AI search.

For broader keyword programs, tracking AI citation frequency is increasingly worth adding to your reporting stack. Brands cited in AIOs earn 35 percent more organic clicks and 91 percent more paid clicks than brands that aren’t cited on the same queries, according to Seer Interactive. 

Citation is now a meaningful key performance indicator (KPI) alongside position.

The Prompt Research Process: Is It Any Different?

The short answer is yes. Prompt research differs somewhat from traditional keyword research, but the fundamentals overlap.

Prompt and keyword research share the same goal, though: to understand what your audience is looking for and create content that satisfies that need. 

The difference is the interface.

LLM users don’t type compressed keyword strings. They ask full questions and often include specific constraints. 

The prompt below breaks down how each component works together. Notice how far it goes beyond a simple keyword search:

Structured AI prompt example with labeled components

Source: https://www.thevccorner.com/p/guide-writing-powerful-ai-prompts

These added layers change what a good target keyword looks like.

Here’s a practical approach to building prompt research into your workflow:

  • Start with your existing keyword list. Take your top commercial and informational keywords and expand them into full-sentence questions. “Email marketing tools” becomes “What’s the best email marketing tool for a small business that already uses Shopify?” 
  • Mine community forums and Q&A platforms. Reddit threads and Quora discussions show you the actual language your audience uses when asking for help. These tend to be longer and more detailed than keyword tool data, and that specificity is precisely what LLM prompts look like.
  • Use your keywords in LLMs directly. Type your target topics into ChatGPT or Perplexity and observe their results and how they phrase follow-up questions. Those follow-up questions represent the sub-queries the model identified as relevant, which are also the content gaps your pages can fill.
  • Monitor brand mention prompts. Tools like Profound track which prompts lead AI engines to mention your brand or your competitors, and how those mentions change over time. This is the closest thing to rank tracking for LLM visibility.
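
The first step above, expanding keywords into full-sentence questions, can be systematized. A sketch with illustrative persona and context templates:

```javascript
// Wrap a head keyword in the persona + context + question structure
// described earlier. The templates are illustrative.
function toPrompt(keyword, persona, context) {
  return `I'm ${persona}, and ${context}. ` +
         `What's the best ${keyword} for my situation?`;
}

const prompt = toPrompt(
  'email marketing tool',
  'a small e-commerce business owner',
  'I need Shopify integration and automation'
);
```

Generating a handful of these per head keyword gives you a prompt set to run through ChatGPT or Perplexity directly.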

The content strategy implication is to prioritize completeness. 

Content scoring highly on semantic completeness appears in AI-generated answers at a rate 340 percent higher than content that scores lower, according to recent AIO research data. 

LLMs reward content that fully addresses a topic, which is the same thing Google has been rewarding since the Helpful Content updates. The convergence is not coincidental.

Bonus: More Ways to Find Keywords

As your skills grow or you take on more competitive keywords, the tools below are worth adding to your stack to spot opportunities you might otherwise miss. You’ve already seen a little of what Ubersuggest can do, so let’s start there.

Ubersuggest

One sometimes-overlooked part of Ubersuggest is the Keyword Ideas feature’s ability to filter keyword results by suggestions, related terms, questions, prepositions, and comparisons. 

Each filter uncovers a different angle on how people search for your topic (as shown in our hiking boots example).

Ubersuggest keyword filter tabs for hiking boots

The Questions modifier is particularly useful for content planning.

Ubersuggest keyword questions filter hiking boots

The Questions filter alone gives you 120 variations for “hiking boots.” They range from informational queries like “how long do hiking boots last” to commercial ones like “where to buy hiking boots near me.” 

Each has a potential content angle with its own intent and difficulty profile.

The Questions filter shows you exactly what people are asking about a keyword, giving you ready-made content angles and FAQ targets.

Ahrefs and Semrush

Ahrefs’ Keywords Explorer provides full SERP analysis in one dashboard. 

One feature worth highlighting is the AI visibility filter in Ahrefs’ Site Explorer, which shows exactly which of your ranking keywords are currently triggering AIOs. That filter turns AIO exposure into a specific, actionable list of keywords you can monitor more closely.

Semrush has integrated AI-specific research tools into its platform, too. 

Its tracking functionality enables you to monitor your brand’s performance across ChatGPT, Perplexity, and Google’s search generative experience (SGE) simultaneously. Plus, its AI sentiment feature tells you whether AI-generated responses mention your brand positively or negatively.

For teams building out an AEO strategy alongside traditional SEO, that cross-platform visibility is difficult to replicate manually.

Many experienced SEOs use multiple tools in parallel, cross-referencing data from Ubersuggest, Ahrefs, and Semrush to build a more complete picture. Because volume figures are estimates and can vary by platform, using multiple tools reduces the risk of making targeting decisions based solely on a single platform’s data.
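To make that cross-referencing concrete, here is a minimal Python sketch that merges per-tool volume estimates into one view and flags keywords where the tools disagree enough to warrant a manual SERP check. The tool names, figures, and the 2x disagreement threshold are all illustrative, not real exports.

```python
from statistics import mean

def reconcile_volumes(estimates: dict[str, dict[str, int]]) -> dict[str, dict]:
    """Combine per-tool monthly volume estimates into a single view.

    `estimates` maps keyword -> {tool_name: estimated_volume}.
    Tool names and figures here are illustrative, not real exports.
    """
    out = {}
    for kw, by_tool in estimates.items():
        vols = list(by_tool.values())
        # Flag keywords where tools disagree widely (max > 2x min):
        # those deserve a manual SERP check before you commit.
        spread = max(vols) / max(min(vols), 1)
        out[kw] = {"avg_volume": round(mean(vols)), "needs_review": spread > 2}
    return out

data = {
    "hiking boots": {"ubersuggest": 74000, "ahrefs": 68000, "semrush": 90500},
    "how long do hiking boots last": {"ubersuggest": 480, "ahrefs": 150, "semrush": 390},
}
print(reconcile_volumes(data))
```

The head term averages out with tools in rough agreement, while the long-tail question gets flagged for review because the estimates vary more than twofold.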

AnswerThePublic

AnswerThePublic generates question-based keyword ideas from a seed keyword. Enter a topic, and the tool maps the questions people are asking about it, organized by preposition and question type.

The output is useful for building FAQ sections and identifying informational content angles that pure volume-based tools can’t see. 

For example, if you search for “social media marketing,” AnswerThePublic returns questions like “what are the best social media marketing strategies?” and “how to measure ROI in social media marketing?”

AnswerThePublic keyword map social media marketing

Both are strong long-tail targets with real search demand.

LLMs and AI Tools

AI tools have become genuinely useful for scaling keyword research, particularly in the brainstorming and clustering phases.

Take Claude or ChatGPT. You can rapidly expand a seed keyword into related angles and intent clusters. Use the persona component of your prompt to make them think like your target audience.

For example, you might ask an LLM to generate the questions a small business owner would ask before buying a product. Or you might dig into the objections they’d have at each stage of the purchase process. 

LLM output isn’t a replacement for tool-based volume data, but it’s a fast way to surface angles you wouldn’t have thought to search for.

Here’s a sample query I ran in Claude: “What questions would someone ask before buying email marketing software?”

Claude AI keyword brainstorm for email marketing

Source: Claude.ai

This is just a small snippet of the full output. The LLM returned questions across a variety of categories, covering the entire buying journey someone might go through when purchasing email marketing software.

Doing the same could provide you with long-tail keyword opportunities to reach every segment of your target audience exactly where they are. 

Semrush’s AI-powered keyword clustering tools take this further by grouping related keywords by semantic meaning and search intent. Running your keyword list through clustering before mapping keywords to pages can reveal topical gaps and consolidation opportunities that spreadsheet-based sorting misses.
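As a rough illustration of what clustering does, the sketch below greedily groups keywords by shared-token overlap. This is a crude stand-in for what commercial tools do (they typically compare SERP overlap or embeddings, not surface words), so treat it as a starting point, not a substitute.

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Token overlap between two keyword phrases."""
    return len(a & b) / len(a | b)

def cluster_keywords(keywords: list[str], threshold: float = 0.3) -> list[list[str]]:
    """Greedy single-pass clustering by token overlap.

    A crude stand-in for semantic clustering: real tools compare
    SERP overlap or embeddings, not just shared words.
    """
    clusters: list[tuple[set[str], list[str]]] = []
    for kw in keywords:
        tokens = set(kw.lower().split())
        for seed_tokens, members in clusters:
            if jaccard(tokens, seed_tokens) >= threshold:
                members.append(kw)
                break
        else:
            # No existing cluster is close enough: start a new one.
            clusters.append((tokens, [kw]))
    return [members for _, members in clusters]

kws = [
    "hiking boots for women",
    "best hiking boots for women",
    "waterproof hiking boots",
    "email marketing software",
    "best email marketing software",
]
print(cluster_keywords(kws))
```

Even this simple pass separates the hiking-boot cluster from the email-marketing cluster, which is the kind of grouping you want before mapping keywords to pages.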

Of course, you need to keep these tools’ limitations in mind. They’re strong at synthesis and pattern recognition but weaker at providing reliable volume and difficulty data. Use them alongside your keyword tools, not instead of them.

Search Suggestions

Search engines themselves are a free, always-up-to-date resource for keyword research. Google autocomplete, the People Also Ask box, and the related searches section at the bottom of the SERP all surface real query patterns from real users.

Google autocomplete is particularly useful for long-tail discovery. Enter your seed keyword and add a letter:

Google autocomplete suggestions for hiking boots

Source: Google.com

Google will suggest several popular phrases, each of which is a data point about what people search with that keyword as a root. 

People Also Ask (and the related “People also search for” block) displays related questions that Google considers topically connected to your query, often revealing adjacent content opportunities worth targeting independently.

Google People Also Search For hiking boots results

Source: Google.com

FAQs

What is keyword research?

Keyword research is the practice of finding and analyzing search queries to identify which ones are worth targeting with your content. It involves evaluating search volume, keyword difficulty, and the intent behind each query to build a targeted list of terms that align with your site’s goals and domain authority.

How do I do keyword research?

Start by defining your goals, then build a list of seed keywords based on your audience’s pain points and your core topic areas. Use a tool like Ubersuggest to expand that list and analyze candidates by search volume, difficulty, and intent. Audit the SERP directly for your top candidates before finalizing your targets. Then map keywords to specific pages, create or optimize content, and track performance over time.

Can I do keyword research for free?

Yes. Ubersuggest and AnswerThePublic both offer free keyword data. Google Search Console is also free. If you’re not ready to pay for a tool yet, you can use Google’s built-in search features like autocomplete and People Also Ask. Free tools may have volume and feature limitations, but they’re more than sufficient for early-stage research or smaller sites. Paid plans unlock more comprehensive data as your needs grow.

What do I do after keyword research?

After completing keyword research, map your keywords to specific URLs, either existing pages you’ll optimize or new content you’ll create. Prioritize by intent and difficulty, then write or update content to match the search intent behind each keyword. Publish, build links where needed, and track performance in a rank tracker. Keyword research isn’t a one-time task. Revisit it regularly as your domain authority grows and as search behavior evolves.

Conclusion

Keyword research has always been the foundation of SEO. 

What’s changed is the complexity of the environment you’re researching. AIOs have changed how clicks are distributed. LLMs have introduced a layer of search behavior that operates under different rules entirely. And topical authority now matters as much as optimizing individual keywords.

The teams navigating this well aren’t researching keywords in isolation anymore. 

They’re combining traditional keyword analysis with prompt research and monitoring AI citation alongside ranking position. They then use that research to build content strategies around topic clusters rather than individual terms.

The process I’ve outlined here covers all that. If you want to go deeper on implementation, my complete SEO checklist walks through how keyword research connects to the rest of your optimization program. 

If you’d rather have an expert team handle the execution, NP Digital’s SEO consulting services are built for exactly this kind of work and dive into keyword research for your site using the process above.


Are AI Overviews Stealing Your Clicks? How Paid Search Teams Are Adapting to the Answer Engine Era

Key Takeaways

  1. AI Overviews can reduce paid search click-through rates by more than 50 percent for affected queries, making impression share a critical visibility metric.
  2. Informational queries are most vulnerable. AI answers resolve research intent directly in the SERP, reducing the number of users who scroll to ads.
  3. Transactional and brand queries hold up better. Teams reallocating budget toward high-intent searches see more consistent engagement.
  4. Measurement frameworks need to expand. Click-through rate alone no longer tells the full story when impressions rise but clicks fall.
  5. Search is no longer a single channel. Brands that extend paid strategy to YouTube, PMax, Demand Gen, Reddit, TikTok, and AI platforms capture demand earlier and across more touchpoints.

Your impression numbers look healthy. Your click-through rate tells a different story.

For many paid search teams, this is the new reality. AI Overviews now appear at the top of Google search results for millions of queries, answering user questions before they ever reach the ads. Impressions hold steady or climb. Clicks get harder to come by.

Research from Seer Interactive found that when AI Overviews appeared in search results, paid click-through rate dropped to 9.87 percent compared to 21.27 percent on the same queries without an overview. That translates to a 53.6 percent reduction in traffic.
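The two ways of stating that drop (percentage points versus relative loss) are easy to conflate. This quick Python check reproduces both figures from the CTRs quoted above:

```python
def ctr_impact(ctr_without_aio: float, ctr_with_aio: float) -> dict[str, float]:
    """Express a CTR change as absolute points and as relative traffic loss."""
    pp_drop = ctr_without_aio - ctr_with_aio           # percentage points
    relative = pp_drop / ctr_without_aio * 100         # share of original clicks lost
    return {"pp_drop": round(pp_drop, 2), "relative_loss_pct": round(relative, 1)}

# CTR figures quoted from the Seer Interactive research above
print(ctr_impact(21.27, 9.87))  # an 11.4-point drop is a 53.6% relative traffic loss
```

The distinction matters when reporting upward: an 11.4-point CTR drop sounds modest, but it means more than half the clicks those queries used to deliver are gone.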

Let’s look into why certain query types are more exposed than others and what paid search teams are doing right now to adapt their strategy, targeting, and measurement.

AI Overviews Are Reshaping the Search Results Page

When Google introduced AI Overviews, it fundamentally changed the architecture of the SERP. The AI-generated summary now occupies the most visible real estate at the top of many search results, answering the user’s question before they interact with anything else on the page.

For paid search, the implications are significant. Ads that once appeared near the top of the page now often appear below the AI summary. Users scroll past a detailed AI-generated answer before they encounter a paid result.

Google SERP showing an AI Overview summary occupying the top of the page with paid search ads appearing below the overview section

This is not just a visual shift. Seer Interactive’s research found that the presence of an AI Overview correlates with an 11.4 percentage point drop in paid click-through rate, from 21.27 percent to 9.87 percent. Across the full dataset, that translated to a 53.6 percent reduction in traffic compared to searches where no AI Overview was shown.

The core issue: paid search visibility is no longer the same as paid search attention. An impression in a SERP dominated by an AI Overview does not carry the same weight as an impression on a traditional results page.

If teams assume all impressions carry equal value, their performance data will remain difficult to interpret. Impressions go up. Clicks stay flat. Revenue and ad costs become harder to predict.

Understanding this requires analyzing which query types most frequently trigger AI Overviews and the resulting implications for budget allocation.

Why Informational Queries Are Becoming Less Valuable for Paid Search

Not all queries are equally at risk. AI Overviews appear far more often on informational queries than on high-intent queries, and that distinction matters for budget allocation. This is closely tied to the broader trend of zero-click searches, where users get what they need from the SERP itself and never click through to a website.

Now the AI summary answers the question on the spot. The research phase that once sent users scrolling through several pages of results has been compressed into a single AI-generated box. Users read the answer, get what they need, and move on without clicking.

Transactional queries tell a different story. Searches with clear purchase intent, such as pricing inquiries, product comparisons, and demo requests, are less likely to trigger an AI Overview. When they do, ads still perform reasonably well. According to the same Seer research, brand queries with AI Overviews present still generated a 16.36 percent click-through rate, well above the average for informational query types.

The practical implication: budget allocated to queries that consistently trigger AI Overviews is at higher risk of generating impressions without clicks. Identifying which queries in your account fall into that category is a practical first step toward protecting performance.

10 Paid Search Pivots Teams Are Making Right Now

Paid search teams are not waiting for Google to solve this. The following pivots reflect what practitioners are already doing to protect performance and adapt to a more competitive SERP.

Shift Budget Toward Transactional Queries

Informational searches increasingly resolve in the SERP. Queries like “what is a CRM” or “how does ROAS work” are prime territory for AI Overviews, which means fewer users scroll to ads.

Transactional searches behave differently. “Best CRM for small business,” “Salesforce pricing,” and “schedule a demo” queries still generate strong ad engagement. Auditing your campaigns for intent and moving spend away from informational keywords toward conversion-ready queries is one of the most direct ways to protect revenue.

Structure Campaigns Around Intent, Not Just Keywords

Traditional keyword groupings by topic are giving way to segmentation by intent stage. Organizing campaigns into informational, commercial, and transactional buckets allows teams to allocate budget with more precision and adjust quickly as AI Overview coverage expands.

When informational campaigns are isolated from high-intent traffic, reducing or pausing them becomes a cleaner decision. You can act without disrupting the campaigns that are still driving results.
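A lightweight way to start that intent audit is a pattern-based classifier over your search terms report. The modifier lists below are illustrative starting points, not an exhaustive taxonomy; extend them with the patterns you actually see in your account.

```python
# Modifier lists are illustrative; extend them with patterns
# from your own search terms report.
INTENT_MARKERS = {
    "transactional": ("buy", "pricing", "price", "demo", "discount", "near me"),
    "commercial": ("best", "top", "vs", "alternative", "review", "comparison"),
    "informational": ("what is", "how to", "how does", "why", "guide"),
}

def classify_intent(query: str) -> str:
    q = query.lower()
    # Check high-intent markers first so a query like
    # "best crm pricing" lands in the transactional bucket.
    for intent in ("transactional", "commercial", "informational"):
        if any(marker in q for marker in INTENT_MARKERS[intent]):
            return intent
    return "unclassified"

for q in ["what is a crm", "salesforce pricing", "best crm for small business"]:
    print(q, "->", classify_intent(q))
```

Once queries are bucketed, comparing AIO exposure and CTR by bucket shows you exactly which campaigns to reduce, pause, or protect.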

Defend and Expand Brand Search

Brand queries are among the most resilient in an AI-driven search environment. Users searching for your company by name carry strong purchase intent, and brand ads still convert at high rates even when AI Overviews appear.

Without an active brand campaign, competitors can bid on your brand terms and capture that traffic directly. Protecting brand terms is a baseline priority that pays off consistently.

Make Ads More Visually Competitive

Ads appearing below an AI summary need to work harder to earn attention. Every available asset matters. Sitelinks add navigation options. Callouts reinforce value propositions. Structured snippets give product category detail. Pricing extensions answer a buyer’s primary question before they click.

A fully extended ad sitting below an AI Overview will consistently outperform a bare-bones text ad in the same position.

Write Ad Copy That Moves the Decision Forward

The user who sees your ad has likely already read an AI-generated summary of the topic. Ad copy should not repeat what the AI already covered. It should move the decision forward.

“Get a free audit” does more work than “Learn more about SEO.” Specificity converts when users are already past the information-gathering stage. Copy focused on differentiation, pricing clarity, or a clear next action earns the click that a generic brand message will not.

Expand Competitor Conquesting

AI Overviews frequently name specific products and brands when summarizing a category. After reading a summary that lists top CRM tools, a user often searches immediately for a specific brand’s alternatives or pricing. That is a conquesting opportunity.

Bidding on “[Competitor] alternative” and “[Competitor] vs [Your Brand]” queries reaches users at the moment they are actively comparing options. These searches happen right after the AI Overview has done the initial filtering for them.

Invest More in Remarketing and Audience Targeting

AI Overviews compress the research phase, but they rarely close the decision entirely. Many users read the summary, step away, and return to search again before converting. Remarketing lets you reconnect with those users in that return window.

First-party data becomes more valuable here. Building audience segments from site visitors, email lists, and CRM data gives teams the targeting precision that broad keyword bidding alone cannot provide.

Use Broader Match to Capture Conversational Queries

AI-influenced searches tend to be longer and more natural in phrasing. Users accustomed to conversational AI tools bring that style to their search queries. Exact match lists built for shorter, traditional keyword patterns will miss a growing share of that traffic. Revisiting your paid search bidding strategies with this in mind is worth the time.

Performance Max campaigns and broader match types help capture the longer, less predictable queries that are becoming more common. The trade-off is less control, which makes ongoing performance monitoring more important.

Rethink How You Measure Search Performance

Click-through rate dropping while impressions hold is not necessarily a failure. In an AI Overview environment, it is often an expected outcome. The mistake is treating CTR as the primary health indicator when the SERP environment has fundamentally changed.

Teams shifting their measurement frameworks are tracking impression share, top-of-page visibility rate, branded search volume growth, and assisted conversions alongside traditional metrics. Together, those signals give a fuller picture of what search is actually contributing to business outcomes.

Measurement of search performance.

Source: The Media Captain

Diversify Beyond Search Ads

Zero-click trends reduce the available inventory of high-quality search clicks. As explored in the zero-click future of search, search still matters, but it cannot carry the same weight alone that it once did.

Demand Gen campaigns, YouTube, Display, and paid social all help reach users earlier in the funnel before they arrive at Google ready to buy. Search then becomes the capture mechanism for demand built elsewhere. The full paid media mix has to work together more tightly than before.

Paid Search Measurement Is Changing

The instinct to look at click-through rate when paid performance dips is understandable. It is one of the most visible metrics in any search account. In an AI Overview environment, though, it is an incomplete signal.

Rising impression counts with declining click-through rate is not always a campaign failure. It often reflects a change in SERP composition. Search Engine Land’s analysis of paid search teams confirms that AI Overviews are lowering CTR and raising CPCs simultaneously, compressing the buyer journey and requiring a measurement evolution rather than just a performance fix.

The Adthena interface.

Source: Adthena

Impression share tracks how often ads appear for eligible queries. A high impression share with low CTR confirms visibility is strong but engagement is soft. That is an engagement problem, not a visibility problem, and it calls for a different solution.
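One way to operationalize that distinction is a simple triage rule that separates visibility problems from engagement problems. The thresholds below are placeholders; calibrate them against your own account history and vertical benchmarks.

```python
def diagnose(impression_share: float, ctr: float,
             is_floor: float = 0.8, ctr_floor: float = 0.1) -> str:
    """Separate visibility problems from engagement problems.

    Thresholds here are placeholders; calibrate them against your
    own account history and vertical benchmarks.
    """
    if impression_share < is_floor:
        return "visibility problem: raise bids/budget or fix eligibility"
    if ctr < ctr_floor:
        return "engagement problem: ads are showing but not earning clicks"
    return "healthy: visibility and engagement both above floor"

print(diagnose(impression_share=0.92, ctr=0.06))
```

Running a rule like this across campaigns each week surfaces the accounts where AI Overviews are eroding engagement even though visibility metrics look fine.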

Branded search volume is a proxy for overall demand. If awareness campaigns and upper-funnel efforts are working, brand search volume should rise over time. It is one of the cleaner ways to confirm whether broader marketing spend is translating into search intent.

Assisted conversions show how search contributes to outcomes that close on a different channel or in a later session. Search often does awareness and consideration work that surfaces in the last-click data of another touchpoint entirely.

Top-of-page rate tracks the share of impressions appearing in the highest-visibility positions above organic results. In an AI Overview environment, that position matters more than it ever has. Semrush’s AI Overviews study found that AI Overview prevalence varies significantly by industry, which means teams with niche-specific data will have an advantage in calibrating how aggressively to adjust their measurement benchmarks.

A Semrush graphic about industries impacted by AI Overviews.

Source: Semrush

The Bigger Shift: Search Is Becoming an Ecosystem

Google is still the dominant search platform. But as AI SEO continues to reshape how content gets discovered, search as a behavior now happens across a much wider set of surfaces.

Users looking for product reviews turn to Reddit. Short-form how-to content lives on YouTube and TikTok. AI tools like Perplexity and ChatGPT answer research queries directly. Younger audiences often bypass Google for discovery entirely, using social platforms as their primary search interface.

A graphic showing many different marketing channels.

Source: Yewx

For paid teams, search advertising strategy has expanded to match. Visibility on Google still matters. So does presence on the platforms where users form opinions and compare options before they ever open a search bar.

Paid search budgets are increasingly being redistributed to reflect this. Teams that once concentrated the majority of digital spend in Google search are now testing YouTube, PMax, Demand Gen, Reddit Ads, and TikTok in parallel. The goal is not to abandon search but to meet demand at every point it forms.

FAQs

What Is the Impact of Generative AI on Paid Search and PPC?

Generative AI has compressed the buyer research journey and pushed ads lower on the page. Seer Interactive’s research found paid click-through rate drops by more than 53 percent on queries where an AI Overview appears. The effect is most pronounced on informational and question-based searches. Transactional queries with clear purchase intent remain more resilient.

How Will AI Mode Redefine Paid Search Advertising?

Google’s AI Mode delivers deeper, more conversational answers than standard AI Overviews, which may further compress informational search traffic. For paid teams, this reinforces the shift toward transactional keywords, stronger ad creative, and multi-channel investment. Teams monitoring how AI-powered search is evolving will be better positioned to adapt their bidding and targeting structures before the impact hits performance.

What Solutions Help Improve AI-Driven Search Visibility in Paid Search?

Focus on transactional keyword targeting, expand ad extensions to maximize SERP real estate, and invest in brand defense campaigns. Pairing paid strategy with SEO content that earns AI Overview citations also improves overall search presence. Impression share reporting and top-of-page rate data in Google Ads are the most direct indicators of where visibility is slipping.

What Tools Help Analyze Paid Search Ads in AI-First Search Environments?

Google Ads provides impression share, top-of-page rate, and CTR data needed to diagnose AI Overview impact. Platforms like Adthena track how AI search changes are affecting competitive ad positioning in real time. Tools like Semrush and Ahrefs are also useful for AI Overview keyword tracking, helping your team understand what keywords are triggering AIOs.

Conclusion

The teams that adapt their targeting, measurement, and channel strategy will find that paid search still delivers. The playbook that worked in 2022, or even in 2024, just needs a serious audit.

AI Overviews have compressed the research phase, shifted where attention falls on the SERP, and exposed the limitations of click-through rate as a standalone KPI. Marketers who recognize those shifts early and adjust accordingly will stay competitive as Google’s search experience continues to evolve.

Search is not disappearing, but the way people use it is. The paid media strategies built for that evolution will outperform those still built for a world where clicking through to a website was the default outcome of every query.

For a deeper look at how paid and organic strategies work together in this environment, explore the complete guide to Google Ads and the SEO strategy guide to see how these channels can reinforce each other. Our Google Ads Grader will also help make sure the ads you do make are best positioned to succeed.


How High-Growth Companies Actually Measure Marketing

Key Takeaways

  1. No single measurement method can answer all the questions modern marketing leaders face. A layered stack combining multiple tools is necessary.
  2. The challenge of marketing attribution is structural: it assigns credit to touchpoints but cannot prove causality. It works best for tactical optimization, not strategic decisions.
  3. Marketing mix modeling identifies marginal returns and channel saturation, helping guide long-term budget allocation.
  4. Incrementality testing is the most reliable way to determine whether marketing activity actually created outcomes, rather than captured demand that already existed.
  5. Organizing measurement teams into pioneers, settlers, and planners ensures each type of work gets the right standards and decision-making speed.

Most marketing leaders know the challenge of marketing attribution well: you have dashboards full of data, but the numbers don’t reliably answer which investments are actually driving growth. The instinct is to search for a better tool, a smarter model, or a more accurate attribution system. But the organizations getting measurement right have moved past that instinct.

They have stopped looking for a single source of truth. The challenge of marketing attribution is part of a broader problem: modern marketing environments are too complex for one method to cover everything. Discovery happens across too many platforms, buyer journeys are too fragmented, and privacy changes have eroded too much signal for any single tool to give a complete picture.

What works instead is a layered approach. Different measurement methods answer different questions, and high-growth organizations combine them deliberately. Marketing mix modeling guides strategic budget allocation. Incrementality testing validates whether a specific activity caused a result. Platform data handles day-to-day campaign optimization. Each plays a defined role. None of them works as a standalone strategy.

This is the second piece in a three-part series on modern marketing measurement. The first part examined why traditional metrics like traffic, rankings, and ROAS are becoming less reliable. This piece covers how to build a measurement system that actually supports growth decisions.

Why No Single Measurement Method Works Anymore

The digital marketing attribution tools most teams rely on were built for a different environment. They worked well when user journeys were relatively linear, cookies tracked reliably across sessions, and most discovery happened through channels that were easy to log. That environment is gone.

Today, a buyer might encounter a brand through an AI-generated answer, research it on YouTube, discuss it in a private message thread, and convert through a branded search three weeks later. The attribution system credits the last touchpoint. The channels that actually shaped the decision get little or nothing.

This is the core structural problem. Marketing attribution models are designed to assign credit, not establish cause. Even sophisticated multi-touch attribution approaches still operate within the same fundamental constraint: they can show which touchpoints preceded a conversion, but they cannot prove that removing any of them would have changed the outcome.

What high-growth organizations have recognized is that different measurement tools answer different questions. Attribution modeling answers: which touchpoints were present before a conversion? Marketing mix modeling answers: where are marginal returns strongest across channels over time? Incrementality testing answers: did this specific activity actually change outcomes? 

A graphic talking about how strong measurement incorporates more than one method.

Each question matters. Each requires a different approach. According to NP Digital research, 90 percent of high-growth marketers prioritize incrementality testing, 61 percent use attribution modeling, and 42 percent use marketing mix modeling. The most effective teams use all three, weighted by the decision at hand.

Marketing Mix Modeling as Strategic Guidance

Marketing mix modeling, or MMM, takes a different approach to measurement than attribution. Rather than tracking individual user journeys, it uses aggregated historical data to model the relationship between marketing spend and business outcomes across channels over time. The result is a view of marginal returns that attribution systems cannot provide.

A graphic talking about when timing matters more than touchpoints.

MMM is most useful for identifying where each additional dollar of spend in a channel produces diminishing returns. A channel running at a strong blended ROAS may look efficient in a dashboard while the last 30 percent of its budget is generating negligible incremental revenue. MMM surfaces that inefficiency. It also helps identify cross-channel effects, such as how video or brand investment upstream affects conversion rates in paid search downstream.

For strategic budget allocation, this makes MMM the most reliable tool available. It does not require user-level tracking, which means privacy changes and cookie deprecation do not erode its accuracy the way they do for attribution. Quarterly MMM runs can consistently improve long-term budget decisions even when day-to-day attribution signals are noisy.
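To see why marginal returns matter more than blended ROAS, consider a toy saturating response curve. The coefficients below are invented for illustration; a real MMM estimates them from historical spend and outcome data.

```python
import math

def response(spend: float, a: float = 50_000, b: float = 20_000) -> float:
    """Toy saturating revenue curve: a * log(1 + spend / b).

    Coefficients are made up; a real MMM fits them from historical data.
    """
    return a * math.log(1 + spend / b)

def marginal_return(spend: float, step: float = 1.0) -> float:
    """Approximate revenue from the next dollar at a given spend level."""
    return response(spend + step) - response(spend)

for s in (10_000, 50_000, 100_000):
    print(f"at ${s:,}: next dollar returns ${marginal_return(s):.2f}")
```

On this curve the first dollars are comfortably profitable, but past a certain spend level the next dollar returns less than a dollar: the blended ROAS still looks fine while the marginal dollar is wasted, which is exactly the inefficiency MMM is built to surface.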

MMM does have real limits. It struggles to quantify upper-funnel brand building accurately, because the lag between a brand impression and a downstream conversion is too long and too indirect for historical correlations to capture cleanly. Organizations using MMM for strategic guidance while supplementing it with brand tracking and perception studies get the most complete picture.

Incrementality Testing as the Causal Engine

If MMM provides strategic direction, incrementality testing provides causal proof. The question it answers is specific: would this outcome have happened if this marketing activity had not occurred? That is a fundamentally different question from what attribution models ask, and the answer is far more useful for deciding where to invest.

The most common incrementality approaches include geo experiments, holdout tests, and campaign pauses. In a geo experiment, matched geographic markets are identified and spend is withheld in one group while maintained in another. The difference in outcomes between the two groups isolates the causal lift from the marketing activity. Holdout tests apply the same logic at the audience level. Campaign pauses, while cruder, can also reveal whether results drop when spend stops. 

For teams running Amazon Attribution or other marketplace-based measurement, incrementality testing is especially valuable because platform-reported conversions often reflect demand that already existed rather than demand the campaign created.

NP Digital research tracking incremental versus attributed conversions across channels found meaningful gaps in almost every case. Organic social showed 13 percent incremental lift against 3 percent attributed lift. Paid social showed 17 percent incremental lift against 24 percent attributed, suggesting attribution was over-crediting that channel. These gaps directly affect where budget should go, and they are invisible without incrementality testing.

A graphic talking about incremental lift by channel.

Incrementality testing requires planning and clean data, but it does not require a large budget. Even a single well-designed geo holdout on a major channel provides more reliable insight into causal impact than months of attribution reporting.
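The lift calculation behind a geo holdout is simple once the markets are matched. In this hypothetical sketch the conversion counts are invented, and a real test would also need baseline scaling and a significance check before acting on the number.

```python
def incremental_lift(test_conversions: float, control_conversions: float) -> float:
    """Lift in the exposed markets relative to the matched holdout.

    Assumes markets are already scaled to comparable baselines;
    real geo tests also need significance checks before acting.
    """
    return (test_conversions - control_conversions) / control_conversions

# Hypothetical matched markets: spend maintained vs withheld
lift = incremental_lift(test_conversions=1130, control_conversions=1000)
print(f"incremental lift: {lift:.1%}")
```

Comparing that lift figure against what attribution credits to the same channel is what exposes the over- and under-crediting gaps described above.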

Platform Data Still Matters, But Only for Optimization

Platform dashboards from Google, Meta, and other ad platforms remain useful, but their role is narrower than most teams treat it. The attribution blind spots built into platform reporting are structural, not accidental. Platforms are designed to optimize campaign performance within their own ecosystems. They are not designed to tell you whether that performance changed your business.

For day-to-day decisions, platform data is the right tool. Pacing spend against budget, adjusting bids based on performance signals, identifying creative fatigue, and diagnosing delivery issues all rely on platform metrics. These are operational decisions, and platform data handles them well.

Where platform data becomes unreliable is in strategic decisions. Algorithms optimize toward users most likely to convert, which means they systematically favor demand capture over demand creation. A high ROAS figure in a platform dashboard may reflect an efficient algorithm, not effective marketing. 

According to NP Digital research, poor attribution costs small businesses an average of 19.4 percent of ad spend, mid-market companies 11.5 percent, and enterprise brands 7.7 percent. That wasted spend is largely invisible in platform reporting because the platforms have no incentive to surface it.

A graphic showing ad spend wasted due to poor attribution.

The practical guidance is to use platform metrics for what they are: tactical steering, not strategic truth.

The Pioneer–Settler–Planner Measurement Model

Building a layered measurement system is not just a technical challenge. It is an organizational one. There are three distinct roles that every effective measurement organization needs: pioneers, settlers, and planners.

  • Pioneers work at the edges of what is currently measurable. They run incrementality experiments, build initial marketing mix models, test geo holdouts, and pressure-test assumptions that may no longer hold. Their work is uncertain by design. Pioneers do not deliver certainty; they deliver direction. Holding them to the same standards of statistical confidence as operational reporting will stop this work before it produces value.
  • Settlers take what emerges from experimentation and turn it into repeatable processes. They refine models, tighten assumptions, and connect insights back to planning decisions. This is where early MMM runs mature into playbooks, and where incrementality test results become frameworks teams can apply consistently. Settlers build trust by translating directional insight into systems that can actually be run.
  • Planners keep daily operations running. They rely on platform data, attribution signals, and conversion mechanics to manage spend in real time. This layer is necessary; without it, execution falls apart. But planners should not be asked to explain long-term growth or diagnose structural shifts in performance. Their focus is optimizing efficiency within channel constraints.

The failure mode most organizations fall into is applying planner-level standards of certainty to pioneer-level work. Requiring 95 percent statistical confidence from experiments that need time to develop guarantees that nothing new gets built. A model with 60 percent directional confidence, paired with fast iteration, consistently outperforms a perfect answer that arrives a quarter too late.

How High-Growth Companies Allocate Measurement Resources

NP Digital research tracking measurement practices across Canadian brands found a clear divide between average organizations and high-growth ones. Average teams allocate roughly 65 percent of their measurement influence to platform dashboards and 25 percent to attribution tools, leaving little room for more strategic methods.

High-growth brands with over $750,000 in annual media investment look meaningfully different. Platform dashboard reliance drops to around 45 percent. Attribution tool usage decreases to 15 percent. MMM grows from 5 percent to 20 percent. Incrementality testing reaches 10 percent, and early generative search optimization work accounts for another 10 percent.

These organizations are not abandoning attribution or platform data. They are reweighting them. The logic is straightforward: in markets that keep changing, you build measurement capability where change is happening, not where familiarity feels safe. The goal across all of these methods is directional confidence, meaning enough signal to make better budget decisions faster, not perfect certainty that arrives after the opportunity has closed.

Three-tier pyramid diagram from NP Digital showing the outcomes-first measurement stack, with business outcomes at the top, demand signals in the middle, and visibility and influence metrics forming the base.

Seven Steps to Evolve Your Measurement System

Rebuilding a measurement system does not require replacing everything at once. The organizations that do this well evolve gradually, adding capability in the right order rather than attempting a full overhaul.

  1. Map your current measurement inputs. List every tool and data source your team uses and identify where each one sits: operational platform data, attribution modeling, MMM, or incrementality. Most teams discover they are heavily concentrated in the first two.
  2. Identify the decision gaps. Be explicit about which strategic questions your current stack cannot answer. The challenge of marketing attribution is most visible here: where are you making budget decisions based on blended ROAS without visibility into marginal returns? Where are you crediting channels that may just be capturing existing demand?
  3. Introduce basic modeling. Even a simple quarterly MMM run provides more strategic direction than attribution alone. Start with your highest-spend channels and the business outcomes most directly tied to revenue.
  4. Run your first incrementality test. Pick one major channel and design a geo holdout or holdout audience test. The goal is not perfection; it is building the organizational capability and comfort with this type of measurement.
  5. Adapt governance expectations. Attribution reports will not disappear from leadership reviews overnight. Running a parallel track that shows incrementality and MMM findings alongside attribution data builds confidence in the new approach without requiring a full transition.
  6. Build processes gradually. Settlers turn pioneer experiments into repeatable workflows. Each incrementality test should produce a documented methodology that makes the next one faster and cheaper.
  7. Increase decision cadence. One of the advantages of directional confidence over perfect certainty is speed. Weekly budget adjustments based on incrementality signals and MMM outputs outperform quarterly reallocations based on attribution reports.
Four-panel action plan from NP Digital showing the first week of a 30-day measurement reset, covering reporting audits, profit-aware KPIs, definition standardization, and data hygiene improvements.
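The "simple quarterly MMM run" in step 3 can start as an ordinary regression of revenue on channel spend, with an intercept capturing baseline (non-marketing) revenue. A minimal sketch with numpy — the channel names, weekly spend, and revenue figures are all hypothetical, and the revenue series is deliberately constructed as 20 + 2×search + 3×social so the fit is exact:

```python
import numpy as np

# Hypothetical weekly spend (search, social) in $000s over 8 weeks.
spend = np.array([
    [10.0, 5.0],
    [12.0, 6.0],
    [ 8.0, 7.0],
    [15.0, 4.0],
    [11.0, 8.0],
    [ 9.0, 9.0],
    [14.0, 5.0],
    [13.0, 7.0],
])
# Constructed as 20 + 2*search + 3*social so the example is exact.
revenue = np.array([55.0, 62.0, 57.0, 62.0, 66.0, 65.0, 63.0, 67.0])

# Add an intercept column so the model separates baseline revenue
# from revenue driven by each channel's spend.
X = np.column_stack([np.ones(len(spend)), spend])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
baseline, beta_search, beta_social = coef
print(f"baseline={baseline:.1f}, search={beta_search:.2f}, social={beta_social:.2f}")
```

A real MMM adds adstock (carryover) and saturation transforms on top of this, but even this bare regression answers a question attribution cannot: what each channel's marginal dollar returns.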

FAQs

What Is Marketing Attribution?

Marketing attribution is the process of assigning credit to the marketing touchpoints that contributed to a conversion. Common marketing attribution models include last-click, first-click, linear, and data-driven attribution. Each assigns credit differently across the customer journey. Attribution is most useful for optimizing campaign performance within channels, but it cannot establish whether marketing caused a business outcome.
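The credit-splitting difference between these models is easy to see in code. A minimal sketch — the customer journey below is hypothetical:

```python
# Assign conversion credit across one customer journey under three
# common attribution models. Assumes each touchpoint appears once.

def attribute(touchpoints, model):
    credit = {tp: 0.0 for tp in touchpoints}
    if model == "last_click":
        credit[touchpoints[-1]] = 1.0
    elif model == "first_click":
        credit[touchpoints[0]] = 1.0
    elif model == "linear":
        share = 1.0 / len(touchpoints)
        for tp in touchpoints:
            credit[tp] += share
    return credit

journey = ["organic_search", "paid_social", "email", "paid_search"]
print(attribute(journey, "last_click"))  # all credit to paid_search
print(attribute(journey, "linear"))      # 0.25 to each touchpoint
```

Same journey, same conversion, three different answers about which channel "worked" — which is exactly why attribution alone cannot settle budget debates.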

How Do You Measure Marketing Attribution?

Attribution is measured by connecting conversion data to the touchpoints that preceded it, using tracking pixels, UTM parameters, and CRM data to map the path. Marketing attribution software platforms automate this process and offer different attribution models to choose from. The key limitation to understand is that all attribution approaches assign credit based on correlation, not causality.
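In practice, connecting conversions to the touchpoints that preceded them often starts with parsing UTM parameters from landing-page URLs. A minimal sketch using Python's standard library — the URLs and parameter values are hypothetical:

```python
from urllib.parse import urlparse, parse_qs

def utm_touchpoint(url):
    """Extract the (source, medium, campaign) triple from a landing URL."""
    params = parse_qs(urlparse(url).query)
    return tuple(params.get(k, ["(none)"])[0]
                 for k in ("utm_source", "utm_medium", "utm_campaign"))

# Hypothetical session URLs preceding one conversion, oldest first.
path = [
    "https://example.com/?utm_source=google&utm_medium=cpc&utm_campaign=brand",
    "https://example.com/blog?utm_source=newsletter&utm_medium=email&utm_campaign=feb",
]

# Last-click attribution credits the final touchpoint in the path.
last_touch = utm_touchpoint(path[-1])
print(last_touch)  # ('newsletter', 'email', 'feb')
```

Attribution platforms automate this mapping at scale and join it with pixel and CRM data, but the underlying mechanic is the same: reconstructing a path, then applying a credit rule to it.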

Which Is the Best Software for Tracking Marketing Attribution?

The best marketing attribution software depends on your business model and measurement goals. Google Analytics 4 and platform-native dashboards handle basic attribution well. Tools like Northbeam, Triple Whale, and Rockerbox are built for direct-response and e-commerce contexts. For strategic decisions, attribution software works best when paired with MMM and incrementality testing rather than used in isolation.

Conclusion

The challenge of marketing attribution is not a problem that better software alone solves. It is a structural limitation of what attribution can do. Credit assignment and causal proof are different things, and conflating them leads to budget decisions that favor demand capture over demand creation.

High-growth organizations have addressed this by building layered measurement systems where each tool plays a defined role: platform data for operational steering, attribution for tactical signals, MMM for strategic allocation, and incrementality testing for causal validation. The next piece in this series examines how marketing leaders use these signals together to decide where the next dollar of investment should go.

If you want to go deeper on where attribution breaks down before moving to that piece, this breakdown of marketing attribution blind spots covers the specific failure modes in detail. For a broader view of how to connect measurement to revenue decisions, this guide to digital marketing attribution is a useful reference.


Sundar Pichai sees Google Search evolving into an ‘agent manager’

Google Search agent manager

Google Search is evolving beyond links and answers into a system that completes tasks, potentially fundamentally changing how users interact with the web. That’s according to Alphabet CEO Sundar Pichai, speaking on the Cheeky Pint podcast.

Why we care. Google is signaling a move from information retrieval to task execution.

Search becoming agentic. Traditional search behavior is already changing and will continue to, Pichai said.

  • “If I fast-forward, a lot of what are just information-seeking queries will be agentic in Search. You’ll be completing tasks. You’ll have many threads running.”

Pichai also described a future where Google Search acts less like a list of results and more like a system that coordinates actions:

  • “Search would be an agent manager in which you’re doing a lot of things. I think in some ways, I use Antigravity today, and you have a bunch of agents doing stuff. I can see search doing versions of those things, and you’re getting a bunch of stuff done.”

AI Mode is already changing queries. Users are already adapting their behavior in Google’s AI-powered search experiences, Pichai said:

  • “But today in AI Mode in Search, people do deep research queries. That doesn’t quite fit the definition of what you’re saying. But people adapted to that. I think people will do long-running tasks.”

Search vs. Gemini overlap. Despite the rise of Gemini, Pichai said Google isn’t replacing Search with a chatbot. Instead, the two will coexist — and diverge (echoing what Liz Reid said last month):

  • “We are doing both Search and Gemini. They will overlap in certain ways. They will profoundly diverge in certain ways. I think it’s good to have both and embrace it.”

The interview. The history and future of AI at Google, with Sundar Pichai


Google AI Overviews: 90% accurate, yet millions of errors remain: Analysis

Google AI Overviews accuracy

Google’s AI Overviews answered a standard factual benchmark correctly 91% of the time in February, up from 85% in October, according to a New York Times analysis with AI startup Oumi.

However, Google handles more than 5 trillion searches per year, so that means tens of millions of answers every hour may be wrong.

Why we care. We’ve watched Google shift from linking to sources to summarizing them for more than two years. This report suggests AI Overviews are improving, but still mix correct answers, weak sourcing, and clear errors in ways that can mislead searchers and reshape which publishers get visibility and clicks.

The details. Oumi tested 4,326 Google searches using SimpleQA, a widely used benchmark for measuring factual accuracy in AI systems, the Times reported. It found AI Overviews were accurate 85% of the time with Gemini 2 and 91% after an upgrade to Gemini 3.

  • The bigger problem may be sourcing. Oumi found that more than half of the correct February responses were “ungrounded,” meaning the linked sources didn’t fully support the answer.
  • That makes verification harder. The answer may be right, but the cited pages may not clearly show why.

What changed. Accuracy improved between October and February, but grounding worsened. In October, 37% of correct answers were ungrounded; in February, that rose to 56%.

Examples. The Times highlighted several misses:

  • For a query about when Bob Marley’s home became a museum, Google answered 1987; the correct year was 1986, according to the Times, and the cited sources didn’t support the claim or conflicted.
  • For a query about Yo-Yo Ma and the Classical Music Hall of Fame, Google linked to the organization’s site but still said there was no record of his induction.
  • In another case, Google gave the correct age at Dick Drago’s death but misstated his date of death.

Google’s response. Google disputed the Times analysis, saying the study used a flawed benchmark and didn’t reflect what people actually search. Google spokesperson Ned Adriance told the Times the study had “serious holes.”

  • Google also said AI Overviews use search ranking and safety systems to reduce spam and has long warned that AI responses can contain mistakes.

The report. How Accurate Are Google’s A.I. Overviews? (subscription required)


Google starts showing sponsored ads in the Images tab on mobile search

In Google Ads automation, everything is a signal in 2026

Google has begun placing sponsored ad units directly inside the Images tab of mobile search results — a new placement that eligible campaigns can access without any changes to existing keyword targeting.

What’s happening. When a user navigates to the Images tab within Google Search on mobile, they may now see sponsored units appearing within the image grid. Each unit shows a full image creative as the primary visual alongside text, and is clearly labelled “Sponsored” — consistent with how Google labels ads elsewhere in search results.

How it works. Eligible campaigns can serve into the Images tab without any changes to keyword targeting or campaign structure. The placement draws from existing image assets, meaning advertisers running Search or Performance Max campaigns with strong visual creative are best positioned to benefit. No separate image-only campaign setup is required.

Why we care. This is a meaningful expansion of Google’s paid search real estate. For product-led and catalog-heavy advertisers, the Images tab is where purchase-intent discovery often starts — and now ads can appear right in that moment. If your campaigns already use strong image assets, you may be picking up incremental impressions without lifting a finger.

The big picture. Early indications suggest this placement behaves more like a visual discovery surface than classic paid search. Expect high impression volume but lower click-through rates — more in line with display or Shopping than traditional text ads. That said, the assist value in multi-touch conversion paths could be significant, particularly for retail and direct-to-consumer brands. Treat it as upper-funnel reach, not a last-click channel.

What to watch. Google has not made a formal announcement, and there is no dedicated reporting breakdown for Images tab placements yet. Monitor your impression share and segment data closely to understand whether this placement is contributing — and whether it’s eating into organic image visibility for competitors.

First seen. The placement was spotted by Google Ads expert Matteo Braghetta, who shared the update on LinkedIn. No official documentation has been published by Google at the time of writing.


One in five ChatGPT clicks go to Google: Study

Traffic funnel few winners

Over 30% of outbound clicks go to just 10 domains, with Google alone taking more than 20%, according to a new Semrush study published today.

ChatGPT also relies less on the live web, triggering search on 34.5% of queries, down from 46% in late 2024.

The big picture. ChatGPT’s growth has plateaued, and its role in how users navigate the web is evolving unevenly.

  • Referral traffic from ChatGPT grew 206% from January 2025 to January 2026.

The details. Most ChatGPT referral traffic still goes to a small set of sites, even as more sites receive some traffic.

  • Google accounts for 21.6% of all ChatGPT referral traffic.
  • The next nine domains bring the top 10 to just over 30% of referrals.
  • Most other sites get a long tail of minimal traffic.
  • The number of domains receiving referrals expanded, peaking at around 260,000 in 2025 before settling near 170,000.

Why we care. Visibility in ChatGPT doesn’t translate evenly into traffic, and you’ll likely see marginal referral impact. The decline in search-triggered queries also limits your chances to earn citations and traffic.

When ChatGPT searches. It defaults to pre-trained knowledge and uses web search in specific cases, including:

  • User requests for sources.
  • Questions about recent events.
  • Situations where the model lacks confidence.

Behavior shift. Most ChatGPT prompts still don’t resemble traditional search queries.

  • Between 65% and 85% of prompts don’t match standard keywords, reflecting more complex, conversational inputs.
  • Meanwhile, engagement is deepening. Queries per session jumped 50% in late 2025.

About the data. Semrush analyzed more than 1 billion lines of U.S. clickstream data from October 2024 to February 2026 across a 200 million-user panel, tracking prompts, referral destinations, and search usage.

The study. ChatGPT traffic analysis: Insights from 17 months of clickstream data
