Winning the platform shift by Braze

Grappling with innovation and changing consumer attitudes is second nature to marketers, who have already lived through many technological shifts over the past two decades. But forecasting where things are going is especially hard when it comes to modern AI, which has such unusual, non-deterministic properties. You can’t just extrapolate from the state of AI today to understand where AI will be in five years (or one…); during this sort of platform shift, you need to take a deeper, first-principles look.

Some things won’t change. Consumers will always want products, services and experiences that resonate and meet their needs. Marketers will always want easier, faster and more effective ways to connect with consumers. But the technologies that mediate that relationship are primed to shift in the coming years in major, unprecedented ways — impacting how marketers do their work, and the customer experiences they’re able to deliver.

How the marketer experience will evolve: Less rote work, more creativity

The history of marketing is built around constant evolution. But the scale and complexity of the change triggered by the rise of modern AI may test even seasoned customer engagement teams. To thrive, marketers need to open themselves up to new skills, perspectives and capabilities that will allow them to do more with less.

This change is already underway. As marketers take advantage of AI, they’re spending less time on rote tasks (like manual message creation) and more on strategy and creative work — from brainstorming innovative campaigns to deepening their testing and optimization strategy. These efficiency gains will grow as AI becomes a more prominent part of the customer engagement process, allowing brands to set goals and guardrails, then empowering their AI solutions to independently consume context, make decisions, and act on marketers’ behalf. 

Today, that might look like training basic agents on your brand’s voice to ensure that message content is consistently on brand. But as we gain trust in AI’s ability to operate unsupervised over longer time horizons and to handle complex projects, more marketers will be able to shift their focus to strategy and to managing the AI resources at their disposal, enabling AI decisioning and other essential optimizations.

How team experiences will evolve: Humans and AI agents working side by side

Marketing is a collaborative art, where building a successful customer engagement program often depends as much or more on marketers’ ability to work together effectively as it does on their individual skills. But while AI may help marketers to work with internal stakeholders more effectively, its biggest unlock is the ability to be a direct “teammate” to marketers themselves. And by leveraging AI’s ability to create countless agents that can support customer engagement, even entry-level marketers will likely find themselves essentially operating as a “manager” of a team of autonomous subordinates. 

Imagine creating a whole team of agents, with one tasked with personalizing product recommendations, one that QAs messages to ensure they’re formatted and built correctly, one that handles translations and another that reports back at the first sign of campaign underperformance. By supplementing your existing capabilities with agents, you aren’t just reducing the burden on yourself and your human colleagues; you’re also building a digital institutional memory, training these “teammates” with context and goals and reward functions to be able to keep supporting your efforts and driving value even as human coworkers come and go and your team’s goals shift and evolve with time.

AI and customer engagement: How brands can win the future

For years, marketers have sought the ability to truly personalize communication on a 1:1 basis across an audience of millions, and to do it swiftly, efficiently and at scale. This was the Holy Grail of marketing, but due to the limitations of technology it simply wasn’t achievable for even the most advanced teams. Now it’s being made possible by AI decisioning, a powerful new type of functionality that can force-multiply brands’ marketing performance and creative impact while delivering what their customers want and need.

Previously, a brand trying to win back lapsing customers had a long journey ahead of it. It might start by leveraging a churn propensity model to identify which customers are most likely to churn, then use a product prediction model to figure out what products to highlight in order to tempt them to return. From there, it would need to run a series of A/B tests to figure out which offers and channels work best. But while that approach is a traditional best practice, it only got brands so far: They could target micro-segments on the right channel with the right offer, but truly 1:1 engagement was still out of reach.

AI decisioning represents a new way forward for personalization. This approach leverages reinforcement learning: AI agents observe consumer behavior and learn over time how to maximize rewards (such as conversions or purchases), using ongoing, autonomous experimentation to optimize the KPIs with the biggest impact. That means AI decisioning can seamlessly determine not only the next best product offer for those lapsing users, but also the best channel, the optimal time of day or day of week, the frequency that makes the most sense, the message most likely to drive ideal outcomes, and any other dimension that could impact whether a recipient takes a given action.

Even better, because AI agents are constantly experimenting in the background, the model can continuously adapt to shifting consumer preferences and behavior. And because these models use first-party data about every available customer characteristic, AI decisioning makes it possible to engage with individuals in a true 1:1 way, rather than relying on segments. The result is exceptional relevance and responsive experiences for individual consumers, something that’s only possible because of AI.
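As a rough illustration of that experimentation loop (a sketch, not Braze’s actual implementation), an epsilon-greedy bandit choosing among message variants captures the idea; the three variants and their conversion rates below are made up:

```python
import random

def choose_variant(estimates, epsilon=0.1):
    """Explore a random variant with probability epsilon; else exploit the best."""
    if random.random() < epsilon:
        return random.randrange(len(estimates))
    return max(range(len(estimates)), key=lambda i: estimates[i])

random.seed(42)
true_rates = [0.02, 0.05, 0.03]   # hidden per-variant conversion rates (illustrative)
sends = [0, 0, 0]                 # messages sent per variant
conversions = [0, 0, 0]           # conversions observed per variant

for _ in range(5000):
    # Current conversion-rate estimate for each variant
    estimates = [conversions[i] / sends[i] if sends[i] else 0.0 for i in range(3)]
    v = choose_variant(estimates)
    sends[v] += 1
    if random.random() < true_rates[v]:   # simulated consumer response
        conversions[v] += 1
```

Over many iterations, traffic concentrates on whichever variant actually converts best, while the exploration fraction keeps testing the alternatives so the system can adapt if preferences shift.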

Final thoughts

With any major technology shift, it isn’t enough to just plan for the obvious outcomes — you must ensure you can react effectively to the changes that no one knows are coming. To succeed, brands need to pay careful attention to the arc of this new technology. Responding to a platform shift can’t be a one-and-done thing, and brands that create a five-year plan without building in regular pulse points and adjustments are going to quickly find themselves falling behind their more agile, flexible peers. 

To see the full benefit of AI in their customer engagement efforts, brands also need to look beyond AI. After all, AI isn’t a shortcut, it’s an amplifier — and the AI you use for customer engagement is only ever going to be as good as the infrastructure supporting it. An exceptional AI feature isn’t going to feel exceptional to consumers if it’s built on architecture that can’t take action in real time or can only deliver experiences in a single, prescribed way. Make sure your AI tools are built on a strong foundation and have the infrastructure they need to shine; otherwise, you may never fully achieve what’s possible.

Curious to learn more about how Braze is thinking about AI and customer engagement? Check out our BrazeAIᵀᴹ page.


Google adds Chrome Web Store user agent

Google has added a new user agent to its help documentation named Google-CWS. This is the Chrome Web Store user agent, which is classified as a user-triggered fetcher.

More details. Google posted about the new user agent, writing: “The Chrome Web Store fetcher requests URLs that developers provide in the metadata of their Chrome extensions and themes.”

What are user-triggered fetchers. User-triggered fetchers are initiated by users to perform a fetching function within a Google product.

The example provided by Google was “Google Site Verifier acts on a user’s request, or a site hosted on Google Cloud (GCP) has a feature that allows the site’s users to retrieve an external RSS feed. Because the fetch was requested by a user, these fetchers generally ignore robots.txt rules. The general technical properties of Google’s crawlers also apply to the user-triggered fetchers.”

Why we care. If you see this user agent in your crawl logs, you now know where it comes from: the Chrome Web Store fetching URLs that developers provide in the metadata of their Chrome extensions and themes.
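A quick way to check for these hits is to filter your access log for the new token. The sample log lines below are made up, and the exact user agent string Google sends may differ from this bare token; check Google’s crawler documentation:

```python
def cws_hits(log_lines):
    """Return only the log lines whose user agent field mentions Google-CWS."""
    return [line for line in log_lines if "Google-CWS" in line]

sample_log = [
    '203.0.113.7 - - "GET /icon.png HTTP/1.1" 200 "Mozilla/5.0 (compatible; Google-CWS)"',
    '198.51.100.2 - - "GET /index.html HTTP/1.1" 200 "Mozilla/5.0 Chrome/120"',
]
print(len(cws_hits(sample_log)))  # → 1
```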


7 focus areas as AI transforms search and the customer journey in 2026


Search is changing faster than ever – and 2026 may be the year it fully breaks from the past. 

Over the last year, AI has reshaped how people discover, decide, and convert, collapsing the traditional customer journey and cutting touchpoints in half.

AI-powered assistants and large language models (LLMs) will handle roughly 25% of global search queries by 2026, per Gartner, replacing many traditional search interactions.

[Image: Customer journey evolution]

We’re already seeing the effects. Traffic from LLMs is climbing at a hockey-stick pace, signaling a massive shift in how users find information. 

To stay competitive, marketers need to build strong content and experience flywheels, as answer engine optimization (AEO) and generative engine optimization (GEO) become critical priorities.

Bruce Clay, founder and president of Bruce Clay Inc., predicted:

  • “AI-powered search is expected to cause traffic to continue to drop for many sites, creating a disturbance in the force.”

Adopting AI isn’t optional – it’s foundational. 

Yet most marketing systems weren’t designed to operate in an AI-first world. 

Disconnected tools and data silos make orchestration difficult and create inconsistencies that derail performance. 

To succeed in 2026, brands will need integrated, cross-functional, omnichannel systems that connect data, content, and customer experience.

Building a resilient digital presence for 2026

Preparing a brand’s digital presence for an AI-driven world means rethinking data, tools, and customer experiences while presenting a clear, consistent brand story. 

The goal is to deliver personalized content and be ready for agentic experiences, where AI assistants act on behalf of users.

This shift begins with the evolution of search itself. 

The biggest change is moving away from a simple query-and-response model to a more dynamic, reasoning-driven conversation.

[Image: Traditional vs. AI search]

Traditional search was like a game of chess – discrete and predictable. AI search, on the other hand, is more like a jazz concert – continuous and fluid. 

The experience has shifted from browsing lists and visiting websites to receiving direct, synthesized answers.

Instead of matching keywords to an index, AI uses query fan-out, which involves:

  • Breaking queries into components.
  • Analyzing multiple sources.
  • Delivering a single, comprehensive answer based on consistent patterns.

With AI, the traditional marketing funnel is shrinking. AI search can move directly from intent to conversion in minutes, dramatically accelerating the process. 

We’re already seeing three- to eight-times higher conversion rates from traffic originating in AI search.

According to Crystal Carter, head of AI search and SEO communications at Wix:

  • “Traffic from LLMs (like ChatGPT and Perplexity) is becoming increasingly distinct from Google search traffic, requiring separate optimization and analysis strategies.”

[Image: Traditional vs. AI search funnel]

New types of intents, like “generative” (e.g., “create an image”) and “no intent” (e.g., “thanks”), now make up almost half of all LLM interactions and don’t require a website visit. 

[Image: Traditional vs. AI search intent]

Search is becoming action-oriented. 

As AI systems start booking tables, making appointments, and completing purchases, even transactional journeys may no longer end on your website.

Search ‘everywhere’ optimization: The new SEO

For brands, the goal is no longer to be a single destination. It’s to be present wherever your audience is. 

That means becoming a trusted data source that powers the new, agentic ecosystem. 

AI systems prioritize clarity, consistency, and patterns, so channel silos must give way to a well-integrated, omnichannel approach.

Ideally, AI agents should be able to access all your brand data and deliver complete, contextually accurate results based on user intent. 

As Bill Hunt, president of Back Azimuth Consulting, explained:

  • “AI agents like ChatGPT will shift from answering questions to completing transactions. Both the Shopify connectors and feeds, as well as Walmart and Amazon saying they are Google killers. Being ‘callable’ through APIs and integrations will be as critical in 2026 as being crawlable was in 2010.”

In this new paradigm, websites are evolving from sales destinations to data and information repositories – built not just for human visitors, but for AI systems that retrieve, interpret, and act on that data.

Dig deeper. Search everywhere optimization: 7 platforms SEOs need to optimize for beyond Google

7 key focus areas shaping marketing and search in 2026

To compete in 2026 and beyond, brands must optimize for visibility across every relevant platform.

Here are seven key priorities and emerging trends shaping the future of search and martech.

[Image: 2026 focus areas]

1. Strengthen technical SEO foundations for AI retrievability

The foundation of search is shifting from traditional crawlability to GEO. 

The core principle of GEO is retrievability – ensuring that high-quality content is not only discoverable but also easily accessible and understood by AI models.

To prepare for this shift, your website should serve as a centralized data hub for your content and digital assets, enhancing the experience for both humans and AI systems.

[Image: Website data hub]

Make sure to grant access to AI crawlers in your robots.txt file, use server-side rendering (SSR) for core content, and adopt progressive indexing protocols like IndexNow, used by Bing.
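As a starting point, a robots.txt along these lines grants common AI crawlers access. The user agent tokens shown (OpenAI’s GPTBot, Anthropic’s ClaudeBot, PerplexityBot) are the documented ones at the time of writing, but verify each vendor’s current token before relying on this:

```txt
# Illustrative robots.txt excerpt allowing common AI crawlers
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /
```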

2. Build localized visibility in AI-driven environments

Local SEO has evolved – from data accuracy in its 1.0 phase, to profile completeness and engagement in 2.0, to personalized experiences in what’s now emerging as Local 3.0. 

AI models, particularly Google’s AI Mode, increasingly cite local business information from sources like Google Maps and online directories. 

That makes core local SEO practices – NAP consistency and Google Business Profile optimization – critical for maintaining AI visibility.

Pages with robust schema markup also tend to earn higher citation rates in AI Overviews, reinforcing the importance of structured data for local relevance.
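For instance, a minimal LocalBusiness JSON-LD block gives AI systems and search engines structured NAP data to cite. All values below are placeholders:

```json
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Plumbing Co.",
  "telephone": "+1-555-0100",
  "url": "https://www.example.com",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Main St",
    "addressLocality": "Springfield",
    "addressRegion": "IL",
    "postalCode": "62701"
  },
  "openingHours": "Mo-Fr 08:00-17:00"
}
```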

Dig deeper: AI and local search: The new rules of visibility and ROI in 2025


3. Develop an AI-assisted content flywheel

The biggest challenge today isn’t just creating content – it’s creating a connected experience. 

As companies integrate AI into their digital experience platforms (DXPs), the focus must shift from producing siloed assets to building a connected content flywheel. 

That begins with a deep understanding of who your customers are and what they need, allowing you to fill content gaps in real time and stay present at every critical touchpoint.

DXPs are no longer static repositories. They’re evolving into intelligent, AI-native engines that proactively shape user experiences. 

The ideal platform uses AI to create quality content at scale, powering a flywheel that delivers personalized, efficient, and well-governed customer journeys. 

This is especially important for large brands and multilocation businesses, where updating hundreds of pages still requires manual, repetitive effort.

Here are the key steps to creating quality content and building a content flywheel.

[Image: Content flywheel]

Insights: Identify customer intent and content gaps

Your content strategy should be guided by real-time customer needs. 

Use AI-powered tools to uncover the questions and challenges your audience is trying to solve. 

Then analyze your existing content to identify gaps where your brand isn’t providing the right answers.

Creation: Develop deep, AI-structured content

To create content that performs well in AI search, start by assessing AI visibility and user sentiment. 

Use AI to scale the development of deep, comprehensive content – always with a human in the loop.

Since AI engines draw from text, images, videos, and charts, your content must be equally diverse. 

Just as important, it must be machine-readable so AI systems can synthesize and reason with it. 

Prioritize an entity-based SEO strategy to build topical authority, and use comprehensive schema markup to help search engines understand your brand and content context.

Clearly structuring your data also prepares your site for advanced conversational search.

It ensures visibility in the next generation of AI-powered answer engines and readiness for NLWeb, the open protocol spearheaded by Microsoft to make websites conversational.

Dig deeper: Chunk, cite, clarify, build: A content framework for AI search

Distribute

Establish a human-in-the-loop workflow to review, update, and refresh content regularly, keeping it accurate, relevant, and effective in answering user queries. 

Publish from a centralized source to maintain consistency across owned channels, and adopt rapid indexing protocols like IndexNow to accelerate discovery and visibility.
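An IndexNow submission boils down to a small JSON POST to the shared endpoint. The sketch below only builds the request body; the host, key, and URL are placeholders, and per the IndexNow protocol the key must match a verification file you host at the site root:

```python
import json

def build_indexnow_payload(host, key, urls):
    """Build the JSON body for a POST to https://api.indexnow.org/indexnow."""
    return json.dumps({
        "host": host,
        "key": key,                 # must match https://<host>/<key>.txt
        "urlList": list(urls),
    })

payload = build_indexnow_payload(
    "www.example.com",
    "0123456789abcdef",
    ["https://www.example.com/new-page/"],
)
print(payload)
```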

Monitor and iterate

Continuously track visibility and performance within AI models by testing target prompts. 

Deploy an agile strategy – as you distribute content, monitor results, experiment with new approaches, and refine continuously, the flywheel becomes self-sustaining. 

Each cycle feeds fresh insights back into the system, helping your content strategy stay adaptable and build momentum over time.

  • “AI search engines synthesize across ecosystems, not just pages. Marketing leaders must ensure their digital footprint works as a unified system, not isolated campaigns,” Hunt said.

Businesses must maintain consistent, clear information across every channel. 

Traditional SEO is giving way to relevance engineering – a discipline centered on systematically creating and structuring content for semantic relevance. 

This approach helps brands navigate today’s increasingly complex query landscape.

4. Create a consistent, data-driven experience flywheel

[Image: Experience flywheel]

While the content flywheel attracts visitors, the experience flywheel converts them – a critical function in an era of zero-click searches. It operates on a continuous feedback loop.

  • Strategy: Building an experience strategy starts with unified data from every customer touchpoint and channel. AI can segment this data to reveal audience expectations and friction points, helping shape a strategy grounded in real behavior.
  • Experience: AI can then put this data to work – connecting audience intent, personas, desired outcomes, and business goals to generate predictive insights that drive personalized and agentic experiences dynamically.
  • Conversion: AI also helps track the customer journey through the funnel across channels and touchpoints. Dynamic A/B testing and conversion rate optimization (CRO) can then be done at scale, tailored to audience segments and intent.
  • Iteration: The goal isn’t perfection but agility. Monitoring performance alone isn’t enough – iteration matters. Use data to make real-time pivots, refining your strategy with every new learning.

The experience flywheel becomes a self-reinforcing engine that continuously drives engagement, builds loyalty, and accelerates growth.

5. Use AI agents to orchestrate journeys and workflows

As AI-driven search becomes increasingly agentic, it establishes a new standard for the seamless digital experiences customers expect. 

To meet this demand, brands must use journey orchestration and workflow automation powered by AI agents that guide users through connected, intuitive experiences.

The key is to deploy specialized vertical AI agents trained on your business data. 

By orchestrating these agents across the customer journey, you can deliver hyper-personalized, omnichannel experiences. 

This is only possible if your website and systems are ready to interact with AI agents.

For internal teams, AI agents also offer major opportunities to automate manual workflows across the entire marketing landscape.

Dig deeper: How AI agents are revolutionizing digital marketing

6. Redefine KPIs for an AI-first performance model

As AI satisfies user intent more directly within search results, traditional metrics like rankings and traffic are losing relevance. 

This shift means citation is the new rank, pushing teams to optimize content for retrievability rather than rankability.

As metrics like click-through rate decline in importance, new success indicators are emerging – including LLM visibility score, AI citation count, share of voice, and sentiment. 

Success now depends on query diversity, or the ability to answer multiple related long-tail queries effectively.

According to Ray Grieselhuber, CEO of Demandsphere: 

  • “Traditional metrics like impressions, clicks, and click-through rates are becoming much more difficult to rely on as KPIs. They are still useful to look at, but marketers should renew their focus on human behavior. Share of Voice is one of the best KPIs to measure this new behavior. Companies that ignore visibility in AI-driven responses risk ‘feeding that territory’ to their competitors.”

7. Integrate systems and data to power a unified marketing infrastructure

A fragmented marketing tech stack with siloed tools creates inefficiencies and hidden costs.

Data fragmentation and manual processes increase operational expenses and derail integration efforts. 

Shifting focus to an integrated marketing platform – and evaluating total cost of ownership – helps overcome these challenges.

An integrated solution provides the consistency, clarity, and unified data needed to keep your digital presence adaptive and competitive.

Dig deeper: Integrating SEO into omnichannel marketing for seamless engagement

The next phase of search and the customer journey

As we move into 2026, AI is not just another tool – it’s rebuilding the customer journey from the ground up. 

With AI assistants expected to handle a quarter of all search queries, the traditional marketing funnel is shrinking. 

The new landscape is defined by agentic, action-oriented interactions that can bypass websites entirely, demanding a fundamental strategic shift from every brand.

To stay visible and relevant, businesses must evolve from being destinations to being trusted data sources for AI. 

That begins by fueling a content flywheel with deep, structured content accessible across every channel. 

Once this flywheel attracts an audience, an experience flywheel – powered by unified customer data and an integrated, AI-native platform – takes over to drive conversion through deep personalization.

Ultimately, the brands that succeed will be those that embrace this new ecosystem. 

They’ll replace outdated metrics, such as traffic, with new KPIs focused on AI visibility, tear down silos through integration, and prioritize delivering seamless, omnichannel experiences.

Thank you to Bill Hunt, Ray Grieselhuber, Bruce Clay, Crystal Carter, David Banahan, and Tushar Prabhu for their insights and contributions.


Google says verify your cloud hosting provider with Search Console

John Mueller from Google posted an SEO tip and reminder for those who use cloud services, such as AWS, Azure, Google Cloud or others, to host images, videos or other content. John explained that you should probably verify those within Google Search Console. This will give you the ability to track the performance of those files in Google Search, including any debugging information when necessary.

Of course, to do this you need to control the DNS, and most cloud providers let you do that through a DNS CNAME. You can set up your DNS so those files live in the cloud environment under your own hostname – for example, images.domain.com or videos.domain.com.

The advice. Here is John’s post on this on Bluesky:

If you’re using a cloud provider to host images / videos / other content, you can and should verify the host in Search Console, so that you’re aware of potential issues that affect Google’s crawling & indexing, & Safe Browsing. Use a DNS CNAME to the bucket, then verify with DNS.

Using your own hostname (something like content.your-site.com) means you can verify it in Google Search Console to get crawl errors and malware alerts. You can verify using DNS verification… or just verifying your main domain.

To do this, set up a CNAME entry for your domain name and point at your cloud provider’s bucket, eg “content.your-site.com” uses a CNAME for “your-bucket.clodstorage.com” (or “buckets.clodstorage.com”). Also, you will have to update all links in your site (ugh, I know).

You need to update all the links within your site so that users only find your content with your new hostname. For bigger sites, this is a hassle, I know. Search & replace, then double-check by crawling the main sections of your site (all templates, all important URLs).

Caveat: if you need to do this for images, and you care about Image search traffic, know that this will cause fluctuations in Google Images (images are often recrawled slower than web pages and need to be “re-processed” with the new URLs). It’ll settle down though.

Bonus: if you use something like “content.your-site.com”, you can just verify the main domain with DNS in Search Console, and get all data for your website + the content hosted there in a single property in Search Console.

AND THAT’S NOT ALL. IF YOU ORDER NOW, YOU ALSO RECEIVE the ability to migrate to another cloud storage provider without breaking a sweat. Map the CNAME to the new bucket (if the file URLs remain the same), use redirects (it’s your hostname). It’s not really your site unless it’s on your domain name.
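In zone-file terms, the setup John describes looks roughly like this, reusing his placeholder hostnames:

```txt
; Point your own hostname at the cloud provider's bucket, then verify
; content.your-site.com (or the whole domain) in Search Console
content.your-site.com.  3600  IN  CNAME  your-bucket.clodstorage.com.
```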

Why we care. It is super common these days for websites to use numerous cloud hosting services and products. So it is totally possible that you are missing out on data, analytics and useful debugging details within Google Search Console for those services.

Verifying them in Search Console should not be a big deal for your site’s administrator, and it should unlock a lot of useful information for you and your SEO team.


Google Local Services Ads vs. Search Ads: Which drives better local leads?


Google gives local businesses two main ways to generate PPC leads online: Local Services Ads (LSAs) and Search campaigns.

LSAs are pay-per-lead campaigns – for actions such as calls, messages, or booked appointments – with a quick setup process that involves verifying your business. After that, Google automates most of the ad and keyword setup.

Search campaigns are more complex but offer far greater control over ad copy, keywords, and optimization.

Understanding how each format works – and when to use them – can help you get more qualified leads and make smarter use of your ad budget.

Most advertisers use both and shift budgets based on which delivers better long-term results.

Getting started with Google Local Services Ads

LSAs work for businesses of all sizes, not just those with small budgets.

For small business owners, LSAs offer an easy way to set up and run ads quickly. 

This is one of the few ad formats where following Google’s setup instructions can actually work well. 

That’s not the case for Google Search campaigns, which are far more complex and often waste spend when relying on Google’s automated suggestions.

Small businesses can prepay a few hundred dollars to test results. 

While LSAs offer fewer options for control, customization, or optimization, they can work well for very small budgets. 

They don’t require as much active management as Search campaigns – though they aren’t completely “set it and forget it” either.

Larger companies can also benefit from testing LSAs alongside other ad formats to compare results. 

However, not all industries are eligible, so always confirm availability before allocating budget.

During setup, review all details carefully – including company information, service areas, and specific services – rather than assuming Google configured them correctly. 

You have limited control over ad copy and keywords, since Google automatically determines relevant terms. 

As Google’s documentation notes, “there is no need to do keyword research as relevant keywords are automatically determined by Google.” 

This can work in your favor – or lead to irrelevant traffic – because you can’t define your own keywords.

Reviews are especially important in this format, as they appear prominently and heavily influence results. Collecting legitimate, high-quality reviews is critical for success.

To evaluate performance, connect third-party tools to track and qualify leads. 

A basic CRM can help you measure how many leads convert into customers. 

Platforms like HouseCall Pro and ServiceTitan can also integrate booking features, letting customers schedule appointments directly through your LSAs.

Dig deeper: Advanced Google Ads tracking for local service companies

Getting more from your Google Search ads

Google Search campaigns are more complex but offer a wider range of features for setup and optimization. 

On top of setting business hours, target areas, and other details, Search campaigns give you greater control over ad testing, assets, keywords, match types, bidding strategies, and more.

Testing with just a few hundred dollars is not recommended. These campaigns require active monitoring and frequent optimization to perform well over time. 

Unlike LSAs, you can add negative keywords and test a wide range of terms to identify which are most effective and profitable. 

A/B testing ad copy and landing pages is also possible, giving Search campaigns much more scalability.

When starting, test a small budget using phrase and exact match keywords only, even with manual CPC bidding to set your maximum bid per click. 

This offers tight control for new accounts, though it’s typically a temporary setup before switching to automated bidding and broader match types. 

With larger budgets, you can immediately use automated bidding and broad match keywords.

Begin with broad match keywords using a Maximize Conversions bid strategy, then add a target CPA (tCPA) once performance data builds.

In industries with high CPCs, set up portfolio bidding to include both a tCPA and a maximum CPC bid. 

Microsoft Ads includes this option natively in its tCPA setting, so portfolio bidding isn’t required there.

After running a Search campaign for two to three months, begin expanding and refining based on performance. 

Add new campaigns and ad groups to test additional keyword and ad combinations, aligning each with specific landing pages to maximize lead generation – something not possible with LSAs.


Combining LSAs and Search campaigns for stronger results

As with any advertising channel, it’s essential to regularly evaluate lead quality using a CRM and call tracking tools, such as CallTrackingMetrics or CallRail. 

When running both LSAs and Search ads, compare leads from each to assess performance. 

LSAs often face lead quality issues, despite being pay-per-lead campaigns. 

Google continues improving spam filtering and invalid lead detection for LSAs, but the system still isn’t perfect. Invalid leads can be disputed.

Ad positioning also differs between the two formats. LSAs typically appear at the top of the page, though fewer of them are shown compared to Search ads. 

Showing in multiple placements isn’t a problem, but you should continually evaluate cost per lead, lead quality, and lead-to-customer conversion rates for both formats.

Dig deeper: How to expand your reach with reverse location targeting in Google Ads

Expanding beyond LSAs and Search campaigns

For larger budgets, several other Google Ads campaign types are worth testing. These can support lead generation directly or help build local brand awareness.

Display, Video and Demand Gen campaigns can generate leads on their own or build brand awareness for top-of-funnel audiences. 

They work well for higher-priced products or services with longer sales cycles, and for lower-priced services that rely on staying top-of-mind – such as plumbing or AC repair.

Performance Max campaigns can also deliver strong lead volume.

However, because they extend beyond Search, it’s essential to monitor lead quality through your CRM and compare it against Search and LSA performance.

With Google Analytics and Google Ads tracking multiple touchpoints before a conversion, you may see fractional conversions.

For example, 0.5 for a Video campaign and 0.5 for a Search campaign – indicating that both contributed to a single lead. 

While not a perfect system, this data provides useful context for how different campaigns interact across the customer journey.
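Aggregating those fractional credits is straightforward. The sketch below uses made-up campaign names and credit values to show how partial credits roll up into whole leads:

```python
# Hypothetical export rows from ad reporting:
# (campaign, fractional conversion credit for a lead).
rows = [
    ("Video - Brand", 0.5),
    ("Search - Brand", 0.5),
    ("Search - Brand", 1.0),
    ("Search - Nonbrand", 0.75),
    ("Performance Max", 0.25),
]

def credit_by_campaign(rows):
    """Sum fractional conversion credit per campaign."""
    totals = {}
    for campaign, credit in rows:
        totals[campaign] = totals.get(campaign, 0.0) + credit
    return totals

totals = credit_by_campaign(rows)
# Fractional credits across campaigns add up to whole leads:
# here 3.0 total credit corresponds to 3 leads.
```

Comparing these per-campaign totals against CRM-verified leads is one way to sanity-check how attribution is splitting credit across the journey.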

Test and compare

Both small and large businesses can benefit from testing LSAs, and all should consider running them alongside Search campaigns to compare results. 

There’s no one-size-fits-all approach – both formats can be profitable when properly tracked and optimized.

Dig deeper: Google Ads for SMBs: How to maximize paid search success


Google Business Profiles What’s happening feature expands

Google has expanded the What’s happening feature within Google Business Profiles to restaurants and bars in the United Kingdom, Canada, Australia, and New Zealand. It is now available for multi-location restaurants, not just single-location restaurants.

The What’s happening feature launched back in May as a way for some businesses to highlight events, deals, and specials prominently at the top of their Google Business Profile. Now, Google is bringing it to more countries.

What Google said. Google’s Lisa Landsman wrote on LinkedIn:

How do you promote your “Taco Tuesday” in Toledo and your “Happy Hour” in Houston… right when locals are searching for a place to go?

I’m excited to share that the Google Business Profile feature highlighting what’s happening at your business, such as timely events, specials and deals, has now rolled out for multi-location restaurants & bars across the US, UK, CA, AU & NZ! (It was previously only available for single-location restaurants)

This is a great option for driving real-time foot traffic. It automatically surfaces the unique specials, live music, or events you’re already promoting at a specific location, catching customers at the exact moment they’re deciding where to eat or grab a cocktail.

What it looks like. Here is a screenshot of this feature:

More details. Google’s Lisa Landsman added, “We’ve already seen excellent results from testing and look forward to hearing how this works for you!”

Availability. This feature is only available for restaurants & bars. Google said it hopes to expand to more categories soon. It is also only available in the United States, United Kingdom, Canada, Australia, and New Zealand.

The initial launch was limited to single-location Food and Drink businesses in the U.S., UK, Australia, Canada, and New Zealand.

Why we care. If you manage restaurants or bars – now including multi-location businesses – this may be a new way to get more attention and visitors from Google Search.


LLM optimization in 2026: Tracking, visibility, and what’s next for AI discovery


Marketing, technology, and business leaders today are asking an important question: how do you optimize for large language models (LLMs) like ChatGPT, Gemini, and Claude? 

LLM optimization is taking shape as a new discipline focused on how brands surface in AI-generated results and what can be measured today. 

For decision makers, the challenge is separating signal from noise – identifying the technologies worth tracking and the efforts that lead to tangible outcomes.

The discussion comes down to two core areas – and the timeline and work required to act on them:

  • Tracking and monitoring your brand’s presence in LLMs.
  • Improving visibility and performance within them.

Tracking: The foundation of LLM optimization

Just as SEO evolved through better tracking and measurement, LLM optimization will only mature once visibility becomes measurable. 

We’re still in a pre-Semrush/Moz/Ahrefs era for LLMs. 

Tracking is the foundation of identifying what truly works and building strategies that drive brand growth. 

Without it, everyone is shooting in the dark, hoping great content alone will deliver results.

The core challenges are threefold:

  • LLMs don’t publish query frequency or “search volume” equivalents.
  • Their responses vary subtly (or not so subtly) even for identical queries, due to probabilistic decoding and prompt context.
  • They depend on hidden contextual features (user history, session state, embeddings) that are opaque to external observers.

Why LLM queries are different

Traditional search behavior is repetitive – millions of identical phrases drive stable volume metrics. LLM interactions are conversational and variable. 

People rephrase questions in different ways, often within a single session. That makes pattern recognition harder with small datasets but feasible at scale. 

These structural differences explain why LLM visibility demands a different measurement model – and a different tracking approach – than traditional SEO or marketing analytics.

The leading method uses a polling-based model inspired by election forecasting.

The polling-based model for measuring visibility

A representative sample of 250–500 high-intent queries is defined for your brand or category, functioning as your population proxy. 

These queries are run daily or weekly to capture repeated samples from the underlying distribution of LLM responses.

Competitive mentions and citations metrics

Tracking tools record when your brand and competitors appear as citations (linked sources) or mentions (text references), enabling share of voice calculations across all competitors. 

Over time, aggregate sampling produces statistically stable estimates of your brand visibility within LLM-generated content.
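As a rough sketch of this polling math, the snippet below computes share of voice from sampled responses; all brand and query names are invented for illustration.

```python
from collections import Counter

# Hypothetical polling samples: each entry is one LLM response to a
# tracked query, with the brands mentioned or cited in that response.
samples = [
    {"query": "best crm for smb", "brands": ["Acme", "Globex"]},
    {"query": "best crm for smb", "brands": ["Acme"]},
    {"query": "crm pricing comparison", "brands": ["Globex", "Initech"]},
    {"query": "crm pricing comparison", "brands": ["Acme", "Globex"]},
]

def share_of_voice(samples):
    """Fraction of sampled responses in which each brand appears."""
    counts = Counter()
    for sample in samples:
        for brand in set(sample["brands"]):  # count once per response
            counts[brand] += 1
    return {brand: n / len(samples) for brand, n in counts.items()}

sov = share_of_voice(samples)  # e.g., Acme appears in 3 of 4 responses
```

Run daily or weekly over a few hundred queries, this kind of aggregation is what turns noisy individual responses into a trackable benchmark.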

Early tools providing this capability include:

  • Profound.
  • Conductor.
  • OpenForge.

Consistent sampling at scale transforms apparent randomness into interpretable signals – much like political polls deliver reliable forecasts despite individual variation.

Building a multi-faceted tracking framework

While share of voice paints a picture of your presence in the LLM landscape, it doesn’t tell the complete story. 

Just as keyword rankings show visibility but not clicks, LLM presence doesn’t automatically translate to user engagement. 

Brands need to understand how people interact with their content to build a compelling business case.

Because no single tool captures the entire picture, the best current approach layers multiple tracking signals:

  • Share of voice (SOV) tracking: Measure how often your brand appears as mentions and citations across a consistent set of high-value queries. This provides a benchmark to track over time and compare against competitors.
  • Referral tracking in GA4: Set up custom dimensions to identify traffic originating from LLMs. While attribution remains limited today, this data helps detect when direct referrals are increasing and signals growing LLM influence.
  • Branded homepage traffic in Google Search Console: Many users discover brands through LLM responses, then search directly in Google to validate or learn more. This two-step discovery pattern is critical to monitor. When branded homepage traffic increases alongside rising LLM presence, it signals a strong causal connection between LLM visibility and user behavior. This metric captures the downstream impact of your LLM optimization efforts.
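For the GA4 referral piece, a simple classifier over referrer hostnames is one way to flag LLM traffic. The domain list below is an assumption and will need upkeep as products change:

```python
# Referrer hostnames commonly associated with LLM surfaces; this set is
# an assumption, not an official list, and should be reviewed regularly.
LLM_REFERRERS = {
    "chatgpt.com",
    "chat.openai.com",
    "perplexity.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def is_llm_referral(hostname: str) -> bool:
    """Classify a session's referrer hostname as LLM traffic or not."""
    host = hostname.lower().removeprefix("www.")
    return any(host == d or host.endswith("." + d) for d in LLM_REFERRERS)
```

The same logic can back a GA4 custom dimension or a channel-grouping rule, so LLM referrals stop disappearing into generic referral traffic.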

Nobody has complete visibility into LLM impact on their business today, but these methods cover all the bases you can currently measure.

Be wary of any vendor or consultant promising complete visibility. That simply isn’t possible yet.

Understanding these limitations is just as important as implementing the tracking itself.

Because no perfect models exist yet, treat current tracking data as directional – useful for decisions, but not definitive.

Why mentions matter more than citations

Dig deeper: In GEO, brand mentions do what links alone can’t

Estimating LLM ‘search volume’

Measuring LLM impact is one thing. Identifying which queries and topics matter most is another.

Compared to SEO or PPC, marketers have far less visibility. While no direct search volume exists, new tools and methods are beginning to close the gap.

The key shift is moving from tracking individual queries – which vary widely – to analyzing broader themes and topics. 

The real question becomes: which areas is your site missing, and where should your content strategy focus?

To approximate relative volume, consider three approaches:

Correlate with SEO search volume

Start with your top-performing SEO keywords. 

If a keyword drives organic traffic and has commercial intent, similar questions are likely being asked within LLMs. Use this as your baseline.

Layer in industry adoption of AI

Estimate what percentage of your target audience uses LLMs for research or purchasing decisions:

  • High AI-adoption industries: Assume 20-25% of users leverage LLMs for decision-making.
  • Slower-moving industries: Start with 5-10%.

Apply these percentages to your existing SEO keyword volume. For example, a keyword with 25,000 monthly searches could translate to 1,250-6,250 LLM-based queries in your category.
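That arithmetic can be captured in a tiny helper, using the article's own numbers:

```python
def estimate_llm_queries(monthly_searches: int, adoption: float) -> int:
    """Approximate monthly LLM queries on a topic from SEO search volume
    and an assumed share of the audience using LLMs for research."""
    return round(monthly_searches * adoption)

# The example above: 25,000 monthly searches at 5-25% AI adoption.
low = estimate_llm_queries(25_000, 0.05)   # 1250
high = estimate_llm_queries(25_000, 0.25)  # 6250
```

The adoption rate is the soft assumption here; treat the output as a relative prioritization signal, not a real volume figure.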

Using emerging inferential tools

New platforms are beginning to track query data through API-level monitoring and machine learning models. 

Accuracy isn’t perfect yet, but these tools are improving quickly. Expect major advancements in inferential LLM query modeling within the next year or two.



Optimizing for LLM visibility

The technologies that help companies identify what to improve are evolving quickly. 

While still imperfect, they’re beginning to form a framework that parallels early SEO development, where better tracking and data gradually turned intuition into science.

Optimization breaks down into two main questions:

  • What content should you create or update, and should you focus on quality content, entities, schema, FAQs, or something else?
  • How should you align these insights with broader brand and SEO strategies?

Identify what content to create or update

One of the most effective ways to assess your current position is to take a representative sample of high-intent queries that people might ask an LLM and see how your brand shows up relative to competitors. This is where the share of voice tracking tools we discussed earlier become invaluable.

These same tools can help answer your optimization questions:

  • Track who is being cited or mentioned for each query, revealing competitive positioning.
  • Identify which queries your competitors appear for that you don’t, highlighting content gaps.
  • Show which of your own queries you appear for and which specific assets are being cited, pinpointing what’s working.

From this data, several key insights emerge:

  • Thematic visibility gaps: By analyzing trends across many queries, you can identify where your brand underperforms in LLM responses. This paints a clear picture of areas needing attention. For example, you’re strong in SEO but not in PPC content. 
  • Third-party resource mapping: These tools also reveal which external resources LLMs reference most frequently. This helps you build a list of high-value third-party sites that contribute to visibility, guiding outreach or brand mention strategies. 
  • Blind spot identification: When cross-referenced with SEO performance, these insights highlight blind spots – topics or sources where your brand’s credibility and representation could improve.

Understand the overlap between SEO and LLM optimization

LLMs may be reshaping discovery, but SEO remains the foundation of digital visibility.

Across five competitive categories, brands ranking on Google’s first page appeared in ChatGPT answers 62% of the time – a clear but incomplete overlap between search and AI results.

That correlation isn’t accidental. 

Many retrieval-augmented generation (RAG) systems pull data from search results and expand it with additional context. 

The more often your content appears in those results, the more likely it is to be cited by LLMs.

Brands with the strongest share of voice in LLM responses are typically those that invested in SEO first. 

Strong technical health, structured data, and authority signals remain the bedrock for AI visibility.

What this means for marketers:

  • Don’t over-focus on LLMs at the expense of SEO. AI systems still rely on clean, crawlable content and strong E-E-A-T signals.
  • Keep growing organic visibility through high-authority backlinks and consistent, high-quality content.
  • Use LLM tracking as a complementary lens to understand new research behaviors, not a replacement for SEO fundamentals.

Redefine on-page and off-page strategies for LLMs

Just as SEO has both on-page and off-page elements, LLM optimization follows the same logic – but with different tactics and priorities.

Off-page: The new link building

Most industries show a consistent pattern in the types of resources LLMs cite:

  • Wikipedia is a frequent reference point, making a verified presence there valuable.
  • Reddit often appears as a trusted source of user discussion.
  • Review websites and “best-of” guides are commonly used to inform LLM outputs.

Citation patterns across ChatGPT, Gemini, Perplexity, and Google’s AI Overviews show consistent trends, though each engine favors different sources.

This means that traditional link acquisition strategies – guest posts, PR placements, or brand mentions in review content – will likely evolve. 

Instead of chasing links anywhere, brands should increasingly target:

  • Pages already being cited by LLMs in their category.
  • Reviews or guides that evaluate their product category.
  • Articles where branded mentions reinforce entity associations.

The core principle holds: brands gain the most visibility by appearing in sources LLMs already trust – and identifying those sources requires consistent tracking.

On-page: What your own content reveals

The same technologies that analyze third-party mentions can also reveal which first-party assets – content on your own website – are being cited by LLMs. 

This provides valuable insight into what type of content performs well in your space.

For example, these tools can identify:

  • What types of competitor content are being cited (case studies, FAQs, research articles, etc.).
  • Where your competitors show up but you don’t.
  • Which of your own pages exist but are not being cited.

From there, three key opportunities emerge:

  • Missing content: Competitors are cited because they cover topics you haven’t addressed. This represents a content gap to fill.
  • Underperforming content: You have relevant content, but it isn’t being referenced. Optimization – improving structure, clarity, or authority – may be needed.
  • Content enhancement opportunities: Some pages only require inserting specific Q&A sections or adding better-formatted information rather than full rewrites.

Leverage emerging technologies to turn insights into action

The next major evolution in LLM optimization will likely come from tools that connect insight to action.

Early solutions already use vector embeddings of your website content to compare it against LLM queries and responses. This allows you to:

  • Detect where your coverage is weak.
  • See how well your content semantically aligns with real LLM answers.
  • Identify where small adjustments could yield large visibility gains.
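Under the hood, such comparisons typically reduce to cosine similarity between embedding vectors. A toy sketch, with made-up three-dimensional vectors standing in for real model embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy embeddings; real ones come from an embedding model and have
# hundreds of dimensions.
page_vectors = {
    "/blog/seo-basics": [0.9, 0.1, 0.0],
    "/blog/ppc-guide": [0.1, 0.9, 0.2],
}
query_vector = [0.85, 0.15, 0.05]  # e.g., "how do I improve organic rankings?"

best_page = max(page_vectors, key=lambda p: cosine(page_vectors[p], query_vector))
```

Pages whose best similarity to high-value queries is low are candidates for the "weak coverage" bucket; pages that score well but never get cited point to authority or formatting problems instead.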

Current tools mostly generate outlines or recommendations.

The next frontier is automation – systems that turn data into actionable content aligned with business goals.

Timeline and expected results

While comprehensive LLM visibility typically builds over 6-12 months, early results can emerge faster than traditional SEO. 

The advantage: LLMs can incorporate new content within days rather than waiting months for Google’s crawl and ranking cycles. 

However, the fundamentals remain unchanged.

Quality content creation, securing third-party mentions, and building authority still require sustained effort and resources. 

Think of LLM optimization as having a faster feedback loop than SEO, but requiring the same strategic commitment to content excellence and relationship building that has always driven digital visibility.

From SEO foundations to LLM visibility

LLM traffic remains small compared to traditional search, but it’s growing fast.

A major shift in resources would be premature, but ignoring LLMs would be shortsighted. 

The smartest path is balance: maintain focus on SEO while layering in LLM strategies that address new ranking mechanisms.

Like early SEO, LLM optimization is still imperfect and experimental – but full of opportunity. 

Brands that begin tracking citations, analyzing third-party mentions, and aligning SEO with LLM visibility now will gain a measurable advantage as these systems mature.

In short:

  • Identify the third-party sources most often cited in your niche and analyze patterns across AI engines.
  • Map competitor visibility for key LLM queries using tracking tools.
  • Audit which of your own pages are cited (or not) – high Google rankings don’t guarantee LLM inclusion.
  • Continue strong SEO practices while expanding into LLM tracking – the two work best as complementary layers.

Approach LLM optimization as both research and brand-building.

Don’t abandon proven SEO fundamentals. Rather, extend them to how AI systems discover, interpret, and cite information.


How to balance speed and credibility in AI-assisted content creation


AI tools can help teams move faster than ever – but speed alone isn’t a strategy.

As more marketers rely on LLMs to help create and optimize content, credibility becomes the true differentiator. 

And as AI systems decide which information to trust, quality signals like accuracy, expertise, and authority matter more than ever.

It’s not just what you write but how you structure it. AI-driven search rewards clear answers, strong organization, and content it can easily interpret.

This article highlights key strategies for smarter AI workflows – from governance and training to editorial oversight – so your content remains accurate, authoritative, and unmistakably human.

Create an AI usage policy

More than half of marketers use AI for creative endeavors like content creation, the IAB reports.

Still, AI policies are not always the norm. 

Your organization will benefit from clear boundaries and expectations. Creating policies for AI use ensures consistency and accountability.

Only 7% of companies using genAI in marketing have a full-blown governance framework, according to SAS.

However, 63% invest in creating policies that govern how generative AI is used across the organization. 

Source: “Marketers and GenAI: Diving Into the Shallow End,” SAS

Even a simple, one-page policy can prevent major mistakes and unify efforts across teams that may be doing things differently.

As Cathy McPhillips, chief growth officer at the Marketing Artificial Intelligence Institute, puts it:

  • “If one team uses ChatGPT while others work with Jasper or Writer, for instance, governance decisions can become very fragmented and challenging to manage. You’d need to keep track of who’s using which tools, what data they’re inputting, and what guidance they’ll need to follow to protect your brand’s intellectual property.” 

Drafting an internal policy sets expectations for AI use across the organization (or at least within creative teams).

When creating a policy, consider the following guidelines: 

  • What the review process for AI-created content looks like. 
  • When and how to disclose AI involvement in content creation. 
  • How to protect proprietary information (not uploading confidential or client information into AI tools).
  • Which AI tools are approved for use, and how to request access to new ones.
  • How to log or report problems.

Logically, the policy will evolve as the technology and regulations change. 

Keep content anchored in people-first principles

It can be easy to fall into the trap of believing AI-generated content is good because it reads well. 

LLMs are great at predicting the next best sentence and making it sound convincing. 

But reviewing each sentence, paragraph, and the overall structure with a critical eye is absolutely necessary.

Think: Would an expert say it like that? Would you normally write like that? Does it offer the depth of human experience that it should?

“People-first content,” as Google puts it, is really just thinking about the end user and whether what you are putting into the world is adding value. 

Any LLM can create mediocre content, and any marketer can publish it. And that’s the problem. 

People-first content aligns with Google’s E-E-A-T framework, which outlines the characteristics of high-quality, trustworthy content.

E-E-A-T isn’t a novel idea, but it’s increasingly relevant in a world where AI systems need to determine if your content is good enough to be included in search.

Evidence presented in U.S. v. Google LLC suggests quality remains central to ranking:

  • “RankEmbed and its later iteration RankEmbedBERT are ranking models that rely on two main sources of data: [redacted]% of 70 days of search logs plus scores generated by human raters and used by Google to measure the quality of organic search results.” 
Source: U.S. v. Google LLC court documentation

It suggests that the same quality factors reflected in E-E-A-T likely influence how AI systems assess which pages are trustworthy enough to ground their answers.

So what does E-E-A-T look like practically when working with AI content? You can:

  • Review Google’s list of questions related to quality content: Keep these in mind before and after content creation.
  • Demonstrate firsthand experience through personal insights, examples, and practical guidance: Weave these insights into AI output to add a human touch.
  • Use reliable sources and data to substantiate claims: If you’re using LLMs for research, fact-check in real time to ensure the best sources. 
  • Insert authoritative quotes either from internal stakeholders or external subject matter experts: Quoting internal folks builds brand credibility while external sources lend authority to the piece.
  • Create detailed author bios: Include:
    • Relevant qualifications, certifications, awards, and experience.
    • Links to social media, academic papers (if relevant), or other authoritative works.
  • Add schema markup to articles to clarify the content further: Schema can clarify content in a way that AI-powered search can better understand.
  • Become the go-to resource on the topic: Create a depth and breadth of material on the website that’s organized in a search-friendly, user-friendly manner. You can learn more in my article on organizing content for AI search.
Source: “Creating helpful, reliable, people-first content,” Google Search Central
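As a sketch of the schema-markup step, the snippet below assembles minimal Article JSON-LD with author details; every name, date, and URL is a placeholder.

```python
import json

# Minimal Article JSON-LD; all values below are placeholders, not
# recommendations beyond basic schema.org Article properties.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How to balance speed and credibility in AI-assisted content",
    "datePublished": "2025-01-15",
    "author": {
        "@type": "Person",
        "name": "Jane Doe",
        "jobTitle": "Senior Content Strategist",
        "url": "https://example.com/authors/jane-doe",
    },
}

# Serialized form ready to drop into a page's <head>.
snippet = ('<script type="application/ld+json">'
           + json.dumps(article, indent=2)
           + "</script>")
```

Tying the `author` object to a real bio page reinforces the E-E-A-T signals described above, since it gives crawlers a machine-readable link between content and credentials.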

Dig deeper: Writing people-first content: A process and template

Train the LLM 

LLMs are trained on vast amounts of data – but they’re not trained on your data. 

Put in the work to train the LLM, and you can get better results and more efficient workflows. 

Here are some ideas.

Maintain a living style guide

If you already have a corporate style guide, great – you can use that to train the model. If not, create a simple one-pager that covers things like:

  • Audience personas.
  • Voice traits that matter.
  • Reading level, if applicable.
  • The do’s and don’ts of phrases and language to use. 
  • Formatting rules such as SEO-friendly headers, sentence length, paragraph length, bulleted list guidelines, etc. 

You can refresh this as needed and use it to further train the model over time. 

Build a prompt kit  

Put together a packet of instructions that prompts the LLM. Here are some ideas to start with: 

  • The style guide
    • This covers everything from the audience personas to the voice style and formatting.
    • If you’re training a custom GPT, you don’t need to do this every time, but it may need tweaking over time. 
  • A content brief template
    • This can be an editable document that’s filled in for each content project and includes things like:
      • The goal of the content.
      • The specific audience.
      • The style of the content (news, listicle, feature article, how-to).
      • The role (who the LLM is writing as).
      • The desired action or outcome.
  • Content examples
    • Upload a handful of the best content examples you have to train the LLM. This can be past articles, marketing materials, transcripts from videos, and more. 
    • If you create a custom GPT, you’ll do this at the outset, but additional examples of content may be uploaded, depending on the topic. 
  • Sources
    • Train the model on the preferred third-party sources of information you want it to pull from, in addition to its own research. 
    • For example, if you want it to source certain publications in your industry, compile a list and upload it to the prompt.  
    • As an additional layer, prompt the model to automatically include any third-party sources after every paragraph to make fact-checking easier on the fly.
  • SEO prompts
    • Consider building SEO into the structure of the content from the outset.  
    • Early observations of Google’s AI Mode suggest that clearly structured, well-sourced content is more likely to be referenced in AI-generated results.

With that in mind, you can put together a prompt checklist that includes:

  • Crafting a direct answer in the first one to two sentences, then expanding with context.
  • Covering the main question, but also potential subquestions (“fan-out” queries) that the system may generate (for example, questions related to comparisons, pros/cons, alternatives, etc.).
  • Chunking content into many subsections, with each subsection answering a potential fan-out query to completion.
  • Being an expert source of information in each individual section of the page, meaning it’s a passage that can stand on its own.
  • Providing clear citations and semantic richness (synonyms, related entities) throughout. 

Dig deeper: Advanced AI prompt engineering strategies for SEO

Create custom GPTs or explore RAG 

A custom GPT is a personalized version of ChatGPT that’s trained on your materials so it can better create in your brand voice and follow brand rules. 

It mostly remembers tone and format, but that doesn’t guarantee the accuracy of output beyond what’s uploaded.

Some companies are exploring RAG (retrieval-augmented generation) to ground LLM outputs in the company’s own knowledge base. 

RAG connects an LLM to a private knowledge base, retrieving relevant documents at query time so the model can ground its responses in approved information.

While custom GPTs are easy, no-code setups, RAG implementation is more technical – but there are companies/technologies out there that can make it easier to implement. 

That’s why GPTs tend to work best for small or medium-scale projects or for non-technical teams focused on maintaining brand consistency.

Create a custom GPT in ChatGPT

RAG, on the other hand, is an option for enterprise-level content generation in industries where accuracy is critical and information changes frequently.

Run an automated self-review

Create parameters so the model can self-assess the content before further editorial review. You can create a checklist of things to prompt it.

For example:

  • “Is the advice helpful, original, people-first?” (Perhaps using Google’s list of questions from its helpful content guidance.) 
  • “Is the tone and voice completely aligned with the style guide?” 

Have an established editing process 

Even the best AI workflow still depends on trained editors and fact-checkers. This human layer of quality assurance protects accuracy, tone, and credibility.

Editorial training

About 33% of content writers and 24% of marketing managers added AI skills to their LinkedIn profiles in 2024.

Writers and editors need to continue to upskill in the coming year, and, according to the Microsoft 2025 annual Work Trend Index, AI skilling is the top priority.  

Source: 2025 Microsoft Work Trend Index Annual Report

Professional training creates baseline knowledge, so your team gets up to speed faster and handles outputs confidently and consistently.

This includes training on how to effectively use LLMs and how to best create and edit AI content.

In addition, training content teams on SEO helps them build best practices into prompts and drafts.

Editorial procedures

Ground your AI-assisted content creation in editorial best practices to ensure the highest quality. 

This might include:

  • Identifying the parts of the content creation workflow that are best suited for LLM assistance.
  • Conducting an editorial meeting to sign off on topics and outlines. 
  • Drafting the content.
  • Performing the structural edit for clarity and flow, then copyediting for grammar and punctuation.
  • Getting sign-off from stakeholders.  
AI editorial process

The AI editing checklist

Build a checklist to use during the review process for quality assurance. Here are some ideas to get you started:

  • Every claim, statistic, quote, or date is accompanied by a citation for fact-checking accuracy.
  • All facts are traceable to credible, approved sources.
  • Outdated statistics (more than two years) are replaced with fresh insights. 
  • Draft meets the style guide’s voice guidelines and tone definitions. 
  • Content adds valuable, expert insights rather than being vague or generic.
  • For thought leadership, ensure the author’s perspective is woven throughout.
  • Draft is run through an AI detector, aiming for a conservative score of 5% or less flagged as AI-generated. 
  • Draft aligns with brand values and meets internal publication standards.
  • Final draft includes explicit disclosure of AI involvement when required (client-facing/regulatory).
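The checklist above can also be enforced programmatically as a pre-publication gate. The sketch below is a hypothetical illustration (the `DraftReview` fields and `failed_checks` helper are invented for this example); only the 5% AI-detection threshold comes from the checklist itself.

```python
from dataclasses import dataclass

# Hypothetical pre-publication gate; field names mirror the checklist above,
# and the 5% threshold is the article's suggested AI-detection target.
@dataclass
class DraftReview:
    all_claims_cited: bool
    sources_approved: bool
    stats_fresh: bool          # no statistics older than two years
    matches_style_guide: bool
    ai_detector_score: float   # fraction of text flagged as AI, 0.0-1.0
    ai_disclosure_added: bool

def failed_checks(review: DraftReview, max_ai_score: float = 0.05) -> list:
    """Return the checklist items a draft still fails."""
    failures = []
    if not review.all_claims_cited:
        failures.append("uncited claims")
    if not review.sources_approved:
        failures.append("unapproved sources")
    if not review.stats_fresh:
        failures.append("outdated statistics")
    if not review.matches_style_guide:
        failures.append("style guide mismatch")
    if review.ai_detector_score > max_ai_score:
        failures.append("AI detector score above threshold")
    if not review.ai_disclosure_added:
        failures.append("missing AI disclosure")
    return failures
```

A draft that passes every editorial check but scores 12% on the AI detector would come back with a single failure ("AI detector score above threshold"), so the editor knows exactly which item on the checklist still needs work.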

Grounding AI content in trust and intent

AI is transforming how we create, but it doesn’t change why we create.

Every policy, workflow, and prompt should ultimately support one mission: to deliver accurate, helpful, and human-centered content that strengthens your brand’s authority and improves your visibility in search. 

Dig deeper: An AI-assisted content process that outperforms human-only copy


The future of SEO teams is human-led and agent-powered

The conversation around artificial intelligence (AI) has been dominated by “replacement theory” headlines. From front-line service roles to white-collar knowledge work, there’s a growing narrative that human capital is under threat.

Economic anxiety has fueled research and debate, but many of the arguments remain narrow in scope.

  • Stanford’s Digital Economy Lab found that since generative AI became widespread, early-career workers in the most exposed jobs have seen a 13% decline in employment.
  • This fear has spread into higher-paid sectors as well, with hedge fund managers and CEOs predicting large-scale restructuring of white-collar roles over the next decade.

However, much of this narrative is steeped in speculation rather than the fundamental, evolving dynamics of skilled work.

Yes, we’ve seen layoffs, hiring slowdowns, and stories of AI automating tasks. But this is happening against the backdrop of high interest rates, shifts in global trade, and post-pandemic over-hiring.

As the global talent thought-leader Josh Bersin argues, claims of mass job destruction are “vastly over-hyped.” Many roles will transform, not vanish. 

What this means for SEO

For the SEO discipline, the familiar refrain “SEO is dead” is just as overstated.

Yes, the nature of the SEO specialist is changing. We’ve seen fewer leadership roles, a contraction in content and technical positions, and cautious hiring. But the function itself is far from disappearing.

In fact, SEO job listings remain resilient in 2025 and mid-level roles still comprise nearly 60% of open positions. Rather than declining, the field is being reshaped by new skill demands.

Don’t ask, “Will AI replace me?” Ask instead, “How can I use AI to multiply my impact?”

Think of AI not as the jackhammer replacing the hammer but as the jackhammer amplifying its effect. SEOs who can harness AI through agents, automation, and intelligent systems will deliver faster, more impactful results than ever before.

As Bersin said: “AI is a tool. We can make it or teach it to do whatever we want… Life will go on, economies will continue to be driven by emotion, and our businesses will continue to be fueled by human ideas, emotion, grit, and hard work.”

Rewriting the SEO narrative

As an industry, it’s time to change the language we use to describe SEO’s evolution.

Too much of our conversation still revolves around loss. We focus on lost clicks, lost visibility, lost control, and loss of num=100.

That narrative doesn’t serve us anymore.

We should be speaking the language of amplification and revenue generation. SEO has evolved from “optimizing for rankings” to driving measurable business growth through organic discovery, whether that happens through traditional search, AI Overviews, or the emerging layer of Generative Engine Optimization (GEO).

AI isn’t the villain of SEO; it’s the force multiplier.

When harnessed effectively, AI scales insight, accelerates experimentation, and ties our work more directly to outcomes that matter:

  • Pipeline.
  • Conversions.
  • Revenue.

We don’t need to fight the dystopian idea that AI will replace us. We need to prove that AI-empowered SEOs can help businesses grow faster than ever before.

The new language of SEO isn’t about survival, it’s about impact.

The team landscape has already shifted

For years, marketing and SEO teams grew headcount to scale output.

Today, the opposite is true. Hiring freezes, leaner budgets, and uncertainty around the role of SEO in an AI-driven world have forced leaders to rethink team design.

A recent Search Engine Land report noted that remote SEO roles dropped to 34% of listings in early 2025, while content-focused SEO positions declined by 28%. A separate LinkedIn survey found a 37% drop in SEO job postings in Q1 compared to the previous year.

This signals two key shifts:

  • Specialized roles are disappearing. “SEO writers” and “link builders” are being replaced by versatile strategists who blend technical, analytical, and creative skill sets.
  • Leadership is demanding higher ROI per role. Headcount is no longer the metric of success – capability is.

What it means for SEO leadership

If your org chart still looks like a pyramid, you’re behind. 

The new landscape demands flexibility, speed, and cross-functional integration with analytics, UX, paid media, and content.

It’s time to design teams around capabilities, not titles.

Rethinking SEO talent

The best SEO leaders aren’t hiring specialists, they’re hiring aptitude. Modern SEO organizations value people who can think across disciplines, not just operate within one.

The strongest hires we’re seeing aren’t traditional technical SEOs focused on crawl analysis or schema. They’re problem solvers – marketers who understand how search connects to the broader growth engine and who have experience scaling impact across content, data, and product.

Progressive leaders are also rethinking resourcing. The old model of a technical SEO paired with engineering support is giving way to tech SEOs working alongside AI product managers and, in many cases, vibe coding solutions. This model moves faster, tests bolder, and builds systems that drive real results.

For SEO leaders, rethinking team architecture is critical. The right question isn’t “Who should I hire next?” It’s “What critical capability must we master to stay competitive?”

Once that’s clear, structure your people and your agents around that need. The companies that get this right during the AI transition will be the ones writing the playbook for the next generation of search leadership.

The new human-led, agent-empowered team

The future of SEO teams will be defined by collaboration between humans and agents.

  • These agents are AI-enabled systems like automated content refreshers, site-health bots, or citation-validation agents that work alongside human experts.
  • The human role? To define, train, monitor, and QA their output.

Why this matters

  • Agents handle high-volume, repeatable tasks (e.g., content generation, basic auditing, link-score filtering) so humans can focus on strategy, insight, and business impact.
  • The cost of building AI agents can range from $20,000 to $150,000, depending on the complexity of the system, integrations, and the specialized work required across data science, engineering, and human QA teams, according to RTS Labs.
  • A single human manager might oversee 10-20 agents, shifting the traditional pyramid and echoing the “short pyramid” or “rocket ship” structure explored by Tomasz Tunguz.

The future: teams built around agents and empowered humans.

Real-world archetypes

  • SaaS companies: Develop a bespoke “onboarding agent” that reads product data, builds landing pages, and runs first-pass SEO audits, while a human strategist refines the output.
  • Marketplace brands (e.g., ahead of an upcoming seasonal trend): Use an “Audience Discovery Agent” that taps customer and marketplace data, while the human team writes the narrative and guides the vertical direction.
  • Enterprise content hubs: Deploy “Content Refresh Agents” that identify high-value pages, suggest optimizations, and push drafts that editors review and finalize.

Integration is key

These new teams succeed when they don’t live in silos. The SEO/GEO squad must partner with paid search, analytics, revenue ops, and UX – not just serve them.

Agents create capacity; humans create alignment and amplification.

A call to SEO practitioners

Building the SEO community of the future will require change.

The pace of transformation has never been faster, and it’s created a dangerous dependence on third-party “AI tools” as the answer to the unknown.

But the true AI story doesn’t begin with a subscription. It begins inside your team.

If the only AI in your workflow is someone else’s product, you’re giving up your competitive edge. The future belongs to teams that build, not just buy.

Here’s how to start:

  • Build your own agent frameworks, designed with human-in-the-loop oversight to ensure accuracy, adaptability, and brand alignment.
  • Partner with experts who co-create, not just deliver. The most successful collaborations help your team learn how to manage and scale agents themselves.
  • Evolve your team structure, move beyond the pyramid mentality, and embrace a “rocket ship” model where humans and agents work in tandem to multiply output, insights, and results.
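The human-in-the-loop agent framework described above can be reduced to a simple control loop: the agent drafts, automated QA filters cheap failures, and a human makes the final call. This is a minimal sketch under stated assumptions; `run_with_oversight`, the callables it takes, and the retry policy are all hypothetical design choices, not a prescribed implementation.

```python
from typing import Callable, Optional

# Minimal human-in-the-loop sketch: an "agent" is any callable that drafts
# work for a task; automated QA runs first, and a human gate has the final say.
def run_with_oversight(
    agent: Callable[[str], str],
    task: str,
    qa_check: Callable[[str], bool],
    human_approve: Callable[[str], bool],
    max_attempts: int = 3,
) -> Optional[str]:
    """Run an agent on a task with automated QA and a human approval gate."""
    for _ in range(max_attempts):
        draft = agent(task)
        if not qa_check(draft):        # automated QA: cheap, runs every time
            continue                   # fail fast and let the agent retry
        if human_approve(draft):       # human gate: final accountability
            return draft
    return None                        # escalate: no acceptable draft produced

# Toy usage: in practice, qa_check might validate citations and
# human_approve would be a real editor reviewing in a queue.
draft = run_with_oversight(
    agent=lambda t: f"Refreshed copy for: {t}",
    task="pricing page",
    qa_check=lambda d: len(d) > 10,
    human_approve=lambda d: True,
)
```

The design point is that capacity scales on the agent side (more drafts, more retries) while accountability stays on the human side, which is what lets one manager oversee many agents without giving up brand alignment.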

The future of SEO starts with building smarter teams. It’s humans working with agents. It’s capability uplift. And if you lead that charge, you’ll not only adapt to the next generation of search, you’ll be the ones designing it.


Google Search Console adds Query groups

Screenshot of Google Search Console

Google added Query groups to the Search Console Insights report. The feature clusters similar search queries together so you can quickly see the main topics your audience searches for.

What Google said. Google wrote, “We are excited to announce Query groups, a powerful Search Console Insights feature that groups similar search queries.”

“Query groups solve this problem by grouping similar queries. Instead of a long, cluttered list of individual queries, you will now see lists of queries representing the main groups that interest your audience. The groups are computed using AI; they may evolve and change over time. They are designed for providing a better high level perspective of your queries and don’t affect ranking,” Google added.

What it looks like. Here is a sample screenshot of this new Query groups report:

You can see that Google is lumping together “search engine optimization, seo optimization, seo website, seo optimierung, search engine optimization (seo), search …” into the “seo” query group in the second line. This shows the site overall is getting 9% fewer clicks on SEO related queries than it did previously.

Availability. Google said query groups will be rolling out gradually over the coming weeks. It is a new card in the Search Console Insights report. Plus, query groups are available only to properties that have a large volume of queries, as the need to group queries is less relevant for sites with fewer queries.

Why we care. Many SEOs have been grouping these queries into clusters manually or through their own tools. Now, Google will do it for you, making it easier for novice and beginner SEOs to understand.
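Google says its groups are computed with AI and hasn't published the method, but the manual clustering SEOs have long done can be approximated with plain string similarity. The sketch below is a rough, hypothetical stand-in using the standard library's `difflib`; real query grouping (and Google's) would use semantic embeddings and behave differently.

```python
from difflib import SequenceMatcher

def group_queries(queries: list, threshold: float = 0.6) -> list:
    """Greedy single-pass clustering of queries by string similarity.

    A rough stand-in for manual query grouping; the threshold and the
    choice to compare against each group's first query are arbitrary.
    """
    groups = []
    for q in queries:
        for group in groups:
            # Compare against the group's first (representative) query.
            if SequenceMatcher(None, q, group[0]).ratio() >= threshold:
                group.append(q)
                break
        else:
            groups.append([q])   # no group matched: start a new one
    return groups

# Near-duplicate spelling variants cluster together; unrelated queries don't.
groups = group_queries(["seo optimization", "seo optimisation", "query groups report"])
```

String similarity catches spelling variants like “optimization/optimisation” but misses purely semantic matches (“seo” vs. “search engine optimization”), which is exactly the gap an AI-computed grouping like Google's is meant to close.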

More details will be posted in this help document soon.
