AI Max in action: What early case studies and a new analysis script reveal

Google’s AI Max for Search campaigns is changing how we run search ads. 

Launched in private beta as Search Max, the feature began rolling out globally in late May, with full availability expected by early Q3 2025. 

But will AI Max actually drive incremental growth or simply take credit for conversions your existing setup would have captured anyway? 

This article:

  • Breaks down the key metrics to track in AI Max.
  • Shares early results from travel, fashion, and B2B accounts.
  • Includes a Google Ads script to make analysis faster and easier.

Understanding AI Max

Think of AI Max as Google combining the best parts of Dynamic Search Ads and Performance Max into regular search campaigns. 

It does not replace your keywords. Instead, it works alongside them to find more people who want what you’re selling.

AI Max does three main things:

  • Finds new search terms your keywords might miss, using search term matching.
  • Writes new ad headlines and descriptions that match what people are actually searching for.
  • Sends people to the best page on your website instead of just the one you picked.

The real game changer came in July 2025, when Google began showing AI Max as its own match type in reports. 

Before this, figuring out what AI Max was doing felt like looking into a black box. 

Now, we can finally see the data and make smarter decisions.

Evaluating AI Max: Metrics that matter

When you’re looking at AI Max performance, start with the basics. 

Open your search term tab and look for the Search terms and landing pages from AI Max option. 

This lets you see AI Max results separately from your exact match, phrase match, and broad match keywords.

Compare conversion share and budget share

The first thing to check is how many conversions come from AI Max versus your regular keywords. 

If AI Max is bringing in 30% of your conversions but eating up 60% of your budget, you know something needs attention. 

Look at the cost per conversion, too. 

AI Max might cost more at first, but that’s normal while Google learns what works for your business.
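The comparison boils down to three ratios per traffic segment. A quick sketch with hypothetical numbers mirroring the 30%/60% scenario above:

```python
def share_report(segments):
    """Compare each segment's budget share, conversion share, and CPA.

    segments: mapping of label (e.g., "AI Max") to (cost, conversions).
    """
    total_cost = sum(cost for cost, _ in segments.values())
    total_conv = sum(conv for _, conv in segments.values())
    report = {}
    for label, (cost, conv) in segments.items():
        report[label] = (
            cost / total_cost,              # budget share
            conv / total_conv,              # conversion share
            cost / conv if conv else None,  # cost per conversion
        )
    return report

# Hypothetical split: AI Max takes 60% of spend but delivers 30% of conversions.
report = share_report({"AI Max": (600.0, 30), "Keywords": (400.0, 70)})
budget_share, conversion_share, cpa = report["AI Max"]
print(budget_share, conversion_share, cpa)  # 0.6 0.3 20.0
```

When budget share runs well ahead of conversion share, as in this made-up example, that gap is the signal to investigate.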

Look beyond cost – focus on conversion rates

Don’t just focus on the cost. Pay attention to conversion rates. 

AI Max often finds people who are ready to buy but use different words than you expected. These new search terms can be gold mines if you spot the patterns.

One of AI Max’s biggest benefits is finding search terms you never thought to target. 

Look at your search terms report and filter for AI Max queries. You’ll probably see some surprises.

Let’s say you sell running shoes and target “best running shoes.” AI Max might show your ads for “comfortable jogging sneakers” or “shoes for morning runs.” 

These are people who want the same thing but use different words. The smart move is to add these high-performing terms to your regular keyword lists.

Identify irrelevant traffic 

If you’re getting clicks from people searching for “cheap shoes” when you sell premium products, add “cheap” as a negative keyword. 

AI Max respects negative keywords, so use them to guide the system toward higher-quality queries.

Dig deeper: Google’s AI Max for Search: What 30 days of testing reveal

Tracking the learning process

Every AI system needs time to learn, and AI Max is no different. 

Plan for a learning phase

The first few weeks often look expensive because Google is still figuring out what works. Don’t panic if your costs jump initially.

Track your performance daily during the first month

As AI Max learns your patterns, you should see costs stabilize and conversion rates improve. If things keep getting worse after three weeks, then it’s time to make changes.

Keep an eye on the types of search terms AI Max finds over time

Early on, you might see lots of random queries. As the system learns, the terms should become more relevant to your business goals.

When things go wrong

The biggest mistake people make is changing too much, too fast. 

AI Max needs data to work properly, and constantly adjusting things prevents the system from learning.

That said, some problems need quick fixes. 

  • If AI Max is spending money on completely irrelevant searches, add negative keywords immediately. 
  • If the AI creates ads that violate your brand guidelines, remove those assets right away.

Watch your overall account performance, not just AI Max numbers. 

Sometimes, AI Max might look expensive on its own, but it actually helps your other campaigns perform better by capturing different types of traffic.

Planning for the future

AI Max is still evolving, and Google keeps adding new features. 

To adapt:

  • Build reporting systems that can grow with these changes. 
  • Set up automated reports for the metrics that matter most to your business.
  • Don’t try to control everything. 

The businesses seeing the best results from AI Max are the ones that:

  • Set clear goals.
  • Provide good data.
  • Let the system do its job. 

Your role shifts from managing keywords to managing strategy.

Start testing AI Max on a small scale if you’re nervous about it. For example:

  • Create one campaign with AI Max enabled and compare it to your existing campaigns.
  • Run an AI Max for Search campaign experiment and let Google evaluate if the experiment is statistically valid. 

Once you see how it works for your specific business, you can decide whether to expand.

Case studies: AI Max in action

These are early results from a limited data set and shouldn’t be viewed as statistically significant. 

I’m sharing them to illustrate what I’ve seen so far – but the sample size is small and the timeframe short. Take these numbers with caution.

Case 1: Tourism and travel

This advertiser already had a solid search setup and was seeing good results. 

Growth, however, was difficult because of heavy competition and the fact that strong keywords were already in play within a modern search structure.

Match type Avg. CPC (€) CVR (%)
AI Max €0.11 1.47%
Broad match €0.09 3.79%
Exact match €0.53 9.00%
Exact match (close variant) €0.22 7.11%
Phrase match €0.16 6.25%
Phrase match (close variant) €0.11 3.27%

AI Max generated additional conversions, but relative to the existing setup the impact was limited. 

The conversion rate was much lower than other match types. 

Because the average CPC was low, there was no cost spike, but performance still lagged.

Broad match – also known for surfacing broader, newer queries – had an even lower CPC (€0.09) and a conversion rate more than twice that of AI Max. 

In this account, AI Max’s contribution was minor.

Search term overlap analysis showed AI Max had a 22.5% overlap rate, meaning 77.5% of queries were new to the campaign. 

That’s a fairly good sign in terms of query discovery.
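The overlap rate is simple set math – the share of AI Max queries that also appear under your other match types. A sketch with hypothetical query lists:

```python
def overlap_rate(ai_max_terms, other_terms):
    """Share of AI Max search terms that also triggered under another match type."""
    ai_max = {t.lower() for t in ai_max_terms}
    others = {t.lower() for t in other_terms}
    if not ai_max:
        return 0.0
    return len(ai_max & others) / len(ai_max)

# Hypothetical query lists: 1 of 4 AI Max terms overlaps -> 25% overlap, 75% new.
rate = overlap_rate(
    ["city tour", "harbor cruise deals", "weekend trip ideas", "boat tickets"],
    ["city tour", "guided city tour", "bus tour"],
)
print(rate)  # 0.25
```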

Case 2: Fashion ecommerce

This account focused on women’s clothing and already had a well-optimized campaign. 

The goal was to expand reach during the competitive holiday season, when exact match keywords became increasingly expensive.

Match type Avg. CPC (€) CVR (%)
AI Max €0.08 2.15%
Broad match €0.12 2.89%
Exact match €0.67 8.45%
Exact match (close variant) €0.28 6.78%
Phrase match €0.19 5.92%
Phrase match (close variant) €0.14 4.11%

AI Max performed well here, delivering the lowest CPC at €0.08 and a respectable 2.15% conversion rate. 

Although conversion rates were lower than exact and phrase match, the cheaper clicks kept cost per acquisition competitive. 

AI Max also captured fashion-related long-tail searches and seasonal queries that the existing keyword set had missed.

Notably, AI Max outperformed broad match with both lower costs and better conversion rates. 

This suggests its ability to better understand product context and user intent – especially important in fashion, where search terminology is diverse.

Search term overlap analysis showed only an 18.7% overlap rate, meaning 81.3% of queries were completely new. 

That level of query discovery was valuable for extending reach in a highly competitive market.

Case 3: B2B SaaS

This account promoted project management software and had a mature strategy focused on high-intent keywords. 

Conversion tracking was strong, measuring both MQLs and SQLs. 

The client wanted to test AI Max for additional lead generation opportunities.

Match type Avg. CPC (€) CVR (%)
AI Max €0.89 0.76%
Broad match €0.72 1.23%
Exact match €1.84 4.67%
Exact match (close variant) €1.22 3.91%
Phrase match €1.05 3.44%
Phrase match (close variant) €0.94 2.88%

In this case, AI Max struggled. Despite a reasonable CPC of €0.89, the conversion rate was just 0.76%. 

That pushed CPA well above the client’s target, making AI Max the worst-performing match type in the account. 

It tended to capture too many informational searches from users not yet ready to convert.

Even broad match, typically associated with lower-intent traffic, outperformed AI Max with a 1.23% conversion rate at a lower CPC. 

The complexity of the B2B buying cycle favored exact and phrase match keywords over AI Max’s broader interpretation.

Search term overlap analysis showed a 31.4% overlap rate, leaving 68.6% of queries as new. 

However, these were mostly low-intent informational searches that didn’t align with SQL goals – underscoring the importance of high-quality conversion tracking when evaluating AI Max.

Wider industry sentiment

Advertiser feedback so far mirrors these mixed results. 

In a recent poll by Adriaan Dekker, more than 50% of respondents reported neutral outcomes from AI Max, while 16% saw good results and 28% reported poor performance.

Tips to analyze AI Max search terms

You can analyze AI Max queries in Google Sheets using a few simple formulas. If your search term report has the term in column A and match type in column B:

  • To check whether a search term appears in both AI Max and another match type:

=IF(AND(COUNTIFS($A:$A;A2;$B:$B;"AI Max")>0;COUNTIFS($A:$A;A2;$B:$B;"<>AI Max")>0);"Overlap";"No Overlap")

  • To count how many match types trigger a given term:

=COUNTIFS($A:$A;A2)

  • To measure query length (word count) for short-tail vs. long-tail analysis:

=(LEN(A2)-LEN(SUBSTITUTE(A2;" ";"")))+1

These checks show whether AI Max is surfacing unique queries, overlapping with existing match types, or favoring short-tail vs. long-tail terms.
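For larger exports, the same checks are easy to reproduce outside of Sheets. A sketch in plain Python, using hypothetical report rows:

```python
from collections import defaultdict

# Hypothetical (search term, match type) rows, as exported from the
# search terms report with the term in column A and match type in column B.
rows = [
    ("comfortable jogging sneakers", "AI Max"),
    ("best running shoes", "Exact match"),
    ("best running shoes", "AI Max"),
    ("shoes for morning runs", "AI Max"),
]

match_types = defaultdict(set)
for term, match_type in rows:
    match_types[term].add(match_type)

for term, types in sorted(match_types.items()):
    # A term "overlaps" when it triggers under AI Max and at least one other type.
    overlap = "AI Max" in types and len(types) > 1
    word_count = len(term.split())  # same result as the LEN/SUBSTITUTE formula
    print(term, "| Overlap" if overlap else "| No Overlap", "| words:", word_count)
```

The overlap check mirrors the COUNTIFS logic: a term counts as overlapping when it appears under AI Max and at least one other match type.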

Because AI Max is still in early stages, it’s hard to draw firm conclusions. 

Performance may improve as the system learns from more data, or remain flat if your setup already covers most transactional queries. 

That’s the question advertisers will answer in the coming months as more tests and learnings emerge.

So far, results can be positive, neutral, or negative. 

In my experience, neutral to negative outcomes are more common – especially in accounts with strong existing setups, where AI Max has fewer opportunities to add value.

A Google Ads script to uncover AI Max insights

To make analyzing AI Max performance easier, I created a Google Ads script that automatically pulls data into Google Sheets for deeper analysis. 

It saves hours of manual work and includes the exact formulas mentioned earlier in this article, so you can immediately spot overlap rates and query patterns without manual setup.

The script creates two tabs in your Sheet:

  • AI Max: AI Max search term data with headlines, landing pages, and performance metrics.
  • Search term analysis: A full comparison of all match types, including AI Max, with automated formulas.

The analysis covers:

  • Overlap detection between AI Max and other match types.
  • Query length analysis (short-tail vs. long-tail).
  • Match type frequency counts to identify competitive terms.
  • Automatic cost conversion from Google’s micro format into readable currency.
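One detail worth knowing about that last item: Google’s reporting layer returns cost in micros, where 1,000,000 micros equal one unit of the account currency, so the conversion is a single division:

```python
def micros_to_currency(cost_micros: int) -> float:
    """Convert Google Ads cost_micros to currency units (1,000,000 micros = 1 unit)."""
    return cost_micros / 1_000_000

print(micros_to_currency(110_000))    # 0.11 -> e.g., a €0.11 average CPC
print(micros_to_currency(1_000_000))  # 1.0
```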

How to use it:

  • Create a new Google Sheet and copy the URL.
  • In Google Ads, go to Tools > Scripts.
  • Paste the script code and update the SHEET_URL variable.
  • Run the script to automatically populate your analysis.

With this setup, you can quickly calculate the same metrics I used in the case studies – like the 22.5% overlap rate in the tourism account or the 81.3% new query discovery in fashion. 

The automated workflow makes it easier to see whether AI Max is surfacing genuine new opportunities or simply redistributing existing traffic.

Google AI Mode gets more visual, including inspirational shopping responses

Google AI Mode is getting more visual, providing more graphical responses to some queries, including shopping searches. Google can do this using its new visual version of the query fan-out technique it has used with AI Overviews and AI Mode.

Google AI Mode, for certain queries, particularly those related to shopping, will respond with images and graphics. These are aimed at sparking inspiration, Robby Stein, VP of Product Management at Google Search, told Search Engine Land.

AI Mode will be able to not just understand your query in a text-based manner, but also understand your query visually and respond with both textual and visual responses. It is a new and updated fluid, ongoing conversation in AI Mode that “sparks inspiration,” Google said.

Visual search fan-out technique. Google’s fan-out technique breaks your question down into subtopics and issues a multitude of queries simultaneously on your behalf. Now, Google can also do this visually: it analyzes both the image and text inputs, examines various image regions, and factors in metadata and context around the image. Google AI Mode can then render a visual grid of responses to your query.

“This means AI Mode can perform a more comprehensive analysis of an image, recognizing subtle details and secondary objects in addition to the primary subjects – and then runs multiple queries in the background. This helps it understand the full visual context and the nuance of your natural language question to deliver highly relevant visual results,” Google wrote.

AI Mode is more visual. Now, when you search for some queries in AI Mode, the responses are much more graphical and visual, right up front. AI Mode may have responded with images before, but now the images appear higher and more prominently in some responses.

Plus, you can ask follow-up questions about the visual responses.

“You’ll see rich visuals that match the vibe you’re looking for, and can follow up in whatever way is most natural for you, like asking for more options with dark tones and bold prints. Each image has a link, so you can click out and learn more when something catches your eye. And because the experience is multimodal, you can also start your search by uploading an image or snapping a photo,” Google added.

Shopping in AI Mode. Lilian Rincon, VP of Product Management for Google Shopping, told us one of the best places for a visual experience is shopping in Google Search. A more visual AI Mode helps you shop conversationally with fresh shopping data and can lead to a better shopping experience.

With Google’s Shopping Graph of 50 billion product listings, 2 billion of which are refreshed every hour, the responses are not just inspirational but detailed and helpful.

AI Mode for Shopping can not only give you ideas on what to put in your living room, but also help you find the perfect article of clothing in your color, style, and fit.

More details. This is launching today in Google AI Mode in the US in English. These are free listings, not shopping ads, and currently have no paid model, including no affiliate model. While Google has ads in AI Overviews, Google is only experimenting with ads in AI Mode, and there are no more details on ads right now for this experience.

Agentic experiences, like helping you buy and find what you’re looking for, are now available in some areas via Search Labs. But Google said it wants the final purchase to happen directly on the retailer’s site.

Why we care. A new, more visual and graphical experience in AI Mode may be a better search experience for some searchers and some queries. Google is experimenting with many changes to Google Search and is rapidly trying new interfaces and technologies.

A checklist for effective SEO QA

Engineering teams usually have a quality assurance (QA) process.

Without it, they risk releasing work that hurts the user experience and creates unforeseen technical issues – including major SEO problems.

That’s where SEO QA comes in. Adding SEO-specific checks to existing QA protocols helps teams catch and fix issues before they go live.

But this step is less common than you’d think. Too often, it’s overlooked.

This article outlines what it takes to build an effective SEO QA discipline and provides a checklist SEOs and QA engineers can use to cover their bases.

Why SEO QA gets overlooked

Unless SEO is fully integrated with engineering, SEO-specific QA often gets overlooked.

As a result, SEOs may not flag problems until a tech audit – or worse, when they show up as organic KPI declines. 

This is especially common when SEO teams sit under marketing instead of product or engineering, since they’re excluded from regular milestones and lifecycle meetings. 

That makes it harder to communicate SEO’s importance, win buy-in, and establish it as part of everyday development.

Having a QA team within engineering is also no longer a given.

In agile environments, some teams prioritize speed over fully clean rollouts.

Others rely on AI tools to automate QA or monitor for technical issues, instead of employing dedicated QA engineers.

In short, there are plenty of reasons many teams lack a well-developed SEO QA practice.

What are the benefits of SEO QA?

For SEOs to be able to proactively find and resolve issues before they go out into the world, they need two things on a regular basis:

  • Opportunities to view upcoming engineering tickets and flag any that may have potential SEO impact. (A great reason for an SEO representative to be a part of sprint planning meetings.)
  • A chance to QA any of the flagged tickets before they hit production.

This has a few key benefits for the business:

  • Minimize the chances of deploying code that hurts SEO.
  • Catch and correct errors that hit production before they register with search engines.
  • Capitalize on SEO opportunities related to engineering work that’s already slotted for development.

The last bullet is just as much of a reason to implement SEO QA as the first two. 

It’s not just about catching bugs, it’s about maximizing value while minimizing resources. 

When SEOs have a chance to see what’s coming up, it allows them to connect the dots between SEO roadmap items and upcoming engineering initiatives to find potential areas of overlap. 

In turn, the business reaps the SEO benefits of work that’s already in motion, rather than spending additional resources to achieve the same goal later. 

Best practices: The 4 Ws of SEO QA

Alright, now that we’ve established why brands need SEO QA, let’s get into the logistics.

Who should perform SEO QA?

Ensure QA is performed by:

  • A technical SEO.
  • Or an engineer equipped with clear criteria shared by an SEO.

What should they check?

Define a checklist of core, critical SEO items that should be a part of QA for any ticket flagged as having potential SEO impact.

  • Refine this checklist on an ongoing basis, tailoring it to the nuances of your web stack, so no one makes the same mistake twice.
  • Automate “always-on” checklist items as much as possible to ease the resource burden over time. 
  • Supplement your checklist with any additional, project-specific SEO considerations outlined in product requirements documentation. 
  • Always check tracking, so no data is lost if GA4 or GTM issues arise.

When should SEO QA happen?

The cadence for SEO QA should mimic the site’s development release cycle and existing engineering QA processes. 

For instance:

  • If your site deploys code on two-week sprints, SEO QA should follow the same cadence.
  • After each release, run a crawl with JavaScript enabled.
  • Sites on platforms like Shopify or WordPress may release – and QA – less often.

Where should QA happen?

Test in staging before anything goes to production. 

Some elements might need to be tested in production if they affect indexing or crawlability of content. 

  • Example: The staging site might have the robots.txt set to disallow all URLs, since you don’t want staging to get indexed.

Implement monitoring tools as a safeguard that helps catch any errors that somehow make it to production.

  • Google Search Console: Make sure your account is set up, notifications are coming through, and check for issues weekly. 
  • Third-party crawlers: Set up a weekly crawl in any SEO tool, such as Semrush, Ahrefs, or Sitebulb.
  • Dedicated SEO monitoring toolsets: If you have the budget, certain third-party tools provide real-time auditing and monitoring. 

Building an SEO QA checklist

When SEOs request development work, they write the acceptance criteria in the product requirements and review the work before release.

But not all tickets that affect SEO go through that process, which makes an SEO QA checklist essential.

The checklist can be used by any SEO or QA engineer on any release flagged for SEO impact.

It’s a comprehensive list of core items, organized by category, to ensure issues don’t reach production.

Issue categories for SEO QA

Crawling

For pages to get indexed, search engines need to access URLs, crawl the content, and use it as context. 

That’s pretty fundamental to SEO, and a big reason we start here.

Note: Crawl issues often impact large swaths of the site because changes can occur across an entire page template or subfolder.

Crawling and indexing
  • Robots.txt: New or removed disallows that might impact URLs you do or don’t want crawled.
    • Are crawlers blocked from the site?
    • Are there any subfolders or parameters blocked that shouldn’t be? 
    • Are images or resources like JavaScript blocked?
  • Meta robots tags: Unintended changes from index to noindex, standard to nofollow, or vice versa.
  • Canonical tags: Were canonical URLs added, removed, or changed in ways that will cause issues?
    • For example:
      • Did Page 2+ of paginated listing pages canonicalize back to Page 1? 
      • Are filtered URLs properly canonicalized based on whether you want them indexed?
  • HTTP status codes: 3xx (redirect), 4xx (client error), or 5xx (server error) responses resulting from changes.
  • URL path: Changes to existing URLs that were not previously discussed with the SEO team.
  • Redirects: Are new redirects working properly, or did something break existing redirects?
  • Internal links: Are they coded using an <a href> tag, so crawlers can identify them?
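A few of these robots.txt checks can be automated with Python’s standard library. A minimal sketch, assuming hypothetical disallow rules and URLs (note that urllib’s parser doesn’t support Google’s wildcard syntax):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules, e.g., pulled from the staging build.
robots_txt = """User-agent: *
Disallow: /checkout/
Disallow: /internal-search
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Spot-check URLs that should (or shouldn't) be crawlable.
for url in [
    "https://example.com/shoes/",
    "https://example.com/checkout/cart",
    "https://example.com/internal-search?q=boots",
]:
    allowed = parser.can_fetch("Googlebot", url)
    print(url, "allowed" if allowed else "BLOCKED")
```

Running a check like this on every release makes it harder for a stray disallow to slip into production unnoticed.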

Content changes

Are all of the following still available and correct?

Content changes
  • Navigation and footer.
  • Breadcrumbs.
  • SEO titles.
  • Meta descriptions.
  • Headings and other on-page copy.
  • Internal and external links.
  • Images, videos, and other media.
  • Related and recommended items widget.
  • User-generated content (especially reviews).
  • E-E-A-T signals, including author bylines and bios.
  • Hreflang and internationalization features.
  • Structured data: Is it crawlable, parsable, accurate, and reflecting visible information on the page? (Note: Google’s Schema Markup Testing Tool won’t work on staging URLs since crawlers are (hopefully!) blocked.)

JavaScript and CSS

You can see CSS issues because they visually impact the page. 

For JavaScript issues, you’ll need tools to understand whether crawlers can access critical content. 

Unless you’re already running a sitewide crawl with JavaScript enabled, test one or two pages from the affected template (i.e., blog, listing, product detail) using a tool like Rendering Difference Engine.

JavaScript and CSS
  • Page elements applicable to the template are available and functioning as intended, including pop-outs, filtering, sort function, and pagination.
  • Any page content that loads after a user interaction is available in HTML that search engines can crawl.
  • If the site serves source HTML, are key elements of the page different in the rendered HTML, such as:
    • Meta robots.
    • Canonicals.
    • Titles.
    • Meta descriptions.
    • Page copy.
    • Internal links.
    • External links.
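To compare those key elements, you first need both versions of the page’s HTML (e.g., from a crawl with and without JavaScript rendering). A minimal sketch of the comparison step, using hypothetical HTML snippets and Python’s standard library:

```python
from html.parser import HTMLParser

class KeyElements(HTMLParser):
    """Collect a few SEO-critical elements: title, meta robots, canonical, links."""
    def __init__(self):
        super().__init__()
        self.elements = {"title": None, "meta_robots": None, "canonical": None, "links": []}
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.elements["meta_robots"] = attrs.get("content")
        elif tag == "link" and attrs.get("rel") == "canonical":
            self.elements["canonical"] = attrs.get("href")
        elif tag == "a" and "href" in attrs:
            self.elements["links"].append(attrs["href"])

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.elements["title"] = data.strip()

def extract(html):
    p = KeyElements()
    p.feed(html)
    return p.elements

# Hypothetical source vs. rendered HTML for the same page.
source = '<title>Shoes</title><meta name="robots" content="index"><a href="/sale">Sale</a>'
rendered = '<title>Shoes</title><meta name="robots" content="noindex"><a href="/sale">Sale</a>'

src, ren = extract(source), extract(rendered)
diffs = {k: (src[k], ren[k]) for k in src if src[k] != ren[k]}
print(diffs)  # {'meta_robots': ('index', 'noindex')}
```

In this made-up example, JavaScript flips the meta robots tag to noindex after render – exactly the kind of discrepancy this check is meant to surface.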

Mobile

Google crawls mobile-first. So if you’re only checking on desktop, you’re skipping SEO QA.

Mobile
  • Does it look and function as it should? 
  • Are there any accessibility issues on a smaller screen? 
  • Is there consistency between the desktop and mobile versions of the site?

Tracking

If it’s not part of QA, broken tracking is a recipe for panic. 

The team won’t find the issue until they see KPIs like organic traffic decline.

Even worse, until it’s fixed, that’s historical data that you won’t get back. 

Tracking

Before launch on staging, check if:

  • All pages and templates have tracking code available.

The day after launch, verify that:

  • Internal analytics platform doesn’t show significant declines in KPIs or discrepancies with external reporting tools (e.g., GSC).

Optional: A/B testing

Not all A/B testing tools distinguish the control and variant for crawlers. 

Crawlers are usually served one version of the page or the other at random, which means your variant could impact SEO.

A/B testing
  • Aside from the variable, the pages should be identical to a crawler.

Refine over time

With every round of QA, engineers and SEOs will learn nuances and find new connections. 

You’ll discover that certain types of updates are more likely to cause certain types of SEO issues, certain plugins are linked to certain types of problems, etc.

Your SEO QA checklist is a living, breathing document and a place to document all of this to make SEO QA more effective – and avoid repeating mistakes – no matter who’s carrying it out. 

Start with the list below and make it your own over time.

Pinterest tests Top of Search ads to capture high-intent shoppers

Pinterest is rolling out Top of Search ads, a new ad format that places brands directly in the first 10 slots of search results and Related Pins, where nearly half of user clicks occur.

Shopping on Pinterest is inherently visual, and most searches on the platform (96%) are unbranded, according to Pinterest. That makes the top of search results a prime spot for discovery – and for brands to reach consumers who are open to new products.

Why we care. Pinterest’s Top of Search ads put your products in front of shoppers at the most valuable moment: when they’re actively browsing but not yet brand-committed. With nearly all searches unbranded, the format offers a powerful way to win new customers and outperform competitors. Early testing shows significant performance lifts, making it a high-return opportunity for brands looking to drive both visibility and conversions.

How it works. Top of Search ads ensure a brand’s products are featured where shopping journeys often begin. The new format also includes a brand-exclusive ad unit that highlights an advertiser’s catalog, giving products prominent placement over competitors.

Early results. According to Pinterest:

  • Top of Search ads have driven a 29% higher average CTR compared to standard campaigns.
  • Top of Search ads are 32% more likely to attract new clickers.
  • Wayfair, an early tester, reported a 237% CTR lift in just two weeks compared to its typical campaigns.

Between the lines. For advertisers, Top of Search ads offer a way to intercept undecided shoppers earlier in their journey, effectively turning Pinterest’s visual-first search into a new performance channel.

OpenAI turns ChatGPT into a shopping tool with Instant Checkout

OpenAI is launching Instant Checkout inside ChatGPT for Plus, Pro, and Free users in the U.S.

  • Users will be able to buy products from Etsy sellers.
  • Purchases are powered by the new Agentic Commerce Protocol (ACP), co-developed with Stripe.

How it works. Users search in plain language (e.g., “gifts for a ceramics lover”). Then:

  • ChatGPT returns product recommendations ranked by relevance, not payment.
  • If an item supports Instant Checkout, users tap “Buy,” confirm shipping and payment details, and complete the order without leaving chat.
  • Orders, payments, and fulfillment run through the merchant’s existing systems; ChatGPT passes information securely.
  • Merchants pay a small transaction fee; shoppers pay no extra cost.

Between the lines. OpenAI said products are ranked only by relevance – not sponsorship or whether Instant Checkout is enabled. Merchants remain the merchant of record, keeping control over fulfillment and customer relationships.

What’s next. Coming soon, according to OpenAI:

  • Multi-item carts.
  • Expansion to Shopify’s million-plus merchants (e.g., Glossier, SKIMS, and Spanx).
  • More regions beyond the U.S.

Why we care. If AI chat becomes a mainstream way for people to discover products, OpenAI is now at the start of that purchase journey. For brands or businesses selling products, this could mean a new channel to optimize for – one that bypasses traditional search and funnels discovery straight into checkout.

How to sign up. Merchants can apply to have their products included in ChatGPT search results and enable Instant Checkout via ACP.

  • Etsy and Shopify sellers are already eligible and don’t need to apply.
  • OpenAI is onboarding merchants on a rolling basis through an online application form.

Dig deeper. How ChatGPT search ranks products and merchants

Different from Google. OpenAI is taking a different approach than Google’s agentic search capabilities. Whereas OpenAI is going all the way to completing purchases on behalf of users, Google (for now at least) is letting users take the final conversion action.

OpenAI’s announcement. Buy it in ChatGPT: Instant Checkout and the Agentic Commerce Protocol

TikTok launches Travel Ads to capture trip planning moments

TikTok is introducing a new ad solution, Travel Ads powered by Smart+, designed to help brands connect with travelers during the discovery and booking phases.

Why now. Travel is one of TikTok’s fastest-growing verticals. According to internal TikTok data:

  • 66% of users say the app is their most helpful source of travel inspiration.
  • Users are 2.6x more likely to book after searching on TikTok.

How it works. Travel Ads leverage TikTok’s travel intent model and catalog integration to automatically serve personalized creatives at scale. Smart+ AI powers campaign setup, creative generation, and delivery optimization, aiming to convert discovery into bookings seamlessly.

Why we care. TikTok Travel Ads put hotels, flights, and destinations in front of a massive audience already using the app for trip planning, with two-thirds calling it their top source of travel inspiration. Smart+ AI tools then streamline setup and optimization, helping brands scale personalized ads that influence bookings at the earliest, most impactful stage.

The details:

  • Ads can showcase hotels, flights, and destinations using dynamic, visually rich formats.
  • Advertisers can choose among:
    • Single video ads: A hero video paired with personalized travel cards (hotel name, flight route, price).
    • Catalog video ads: Auto-built from product catalogs with tailored calls to action.
    • Catalog carousel ads: Scrollable, interactive ads pulling directly from catalog images.

What they’re saying. David Hoctor, TikTok’s head of US verticals for travel and gaming, said:

  • “Travel on TikTok goes beyond the For You feed, unlocking real-life travel experiences. Every swipe can be a step toward conversion.”

Between the lines. By integrating catalog feeds with intent signals, TikTok is pitching Travel Ads as a direct competitor to Google and Meta in the lucrative travel advertising market, where inspiration and conversion increasingly overlap.

‘Mistakes make you stronger’: PPC lessons from Inderpaul Rai

In episode 325 of PPC Live The Podcast, I sat down with Inderpaul “Indi” Rai, group account director at WeDiscover, to explore the lessons learned from mistakes, team dynamics, and the evolving role of automation and AI in paid search.

Indi, a veteran with over a decade of experience in AdTech, MarTech, SEO, analytics, and multilingual paid search, shared candid insights on how errors can shape careers and client relationships.

Embracing mistakes to grow

Indi opened up about one of the most significant mistakes in his career: an automated budget feature in Search Ads 360 went unchecked during his holiday, resulting in the U.S. account overspending by a substantial amount.

Despite the magnitude of the error, the client’s response was surprisingly supportive – they recognized it as an honest mistake and collaborated to mitigate the impact.

Key takeaway. Mistakes are inevitable, but transparency, a calm response, and collaborative problem-solving can turn potential disasters into learning opportunities. Indi emphasizes that experiencing and managing errors is essential for personal and professional growth.

The importance of team communication and handover

Indi reflected on how the overspend could have been avoided with better team preparation:

  • Ensuring critical information isn’t solely in the manager’s head.
  • Documenting handovers thoroughly and leaving room for junior team members to step in effectively.
  • Establishing multiple layers of oversight to prevent single points of failure.

He stressed that team members should feel empowered to act and communicate when issues arise, even if the manager is unavailable. In his experience, the junior team caught the overspend themselves and waited for Indi’s return rather than panicking – a testament to clear communication and a supportive team culture.

Lesson for managers: Maintain composure during crises, focus on solutions over blame, and ensure your team knows their role in resolving issues.

Automation and AI: Tools, not crutches

The discussion turned to automation and AI, where Indi shared practical advice:

  • Treat AI as an assistant, not a replacement. Blind reliance on AI can lead to errors, especially if users don’t understand the subject matter.
  • Always validate outputs and run rigorous testing before implementing AI-driven changes. He cited an example from his past work rewriting product descriptions: AI produced repetitive, generic content that wasn’t an improvement over the original, highlighting the need for careful oversight.
  • Automation can enhance efficiency but requires clear rules, regular checks, and accountability to prevent mistakes like unmonitored budget overspend.

Insight. Automation amplifies efficiency but cannot replace thoughtful human oversight and strategic decision-making.

Client relationships matter

A recurring theme in Indi’s story was the importance of cultivating strong client relationships. The supportive response of a previously “difficult” client revealed that mutual respect, trust, and proven value can turn challenging situations into opportunities for stronger partnerships.

Lessons in leadership and mindset

Indi also shared broader reflections applicable beyond PPC:

  • Resilience matters: he likened his career journey to Rocky, emphasizing that success isn’t about avoiding hits but about how you respond and keep moving forward.
  • Learning through experience: setbacks are essential; they teach you to handle pressure, improve processes, and grow professionally.
  • Balanced guidance: leaders should manage crises calmly, focus on facts, and support their teams without panicking.

From mistakes to momentum

The conversation underscores that mistakes are not the end – they are a catalyst for learning, collaboration, and improvement.

From automation missteps to client communication, Indi’s insights provide a roadmap for PPC professionals aiming to thrive in a fast-evolving industry.

Whether refining handovers, managing automation, or responsibly leveraging AI, the core lesson remains the same: anticipate errors, respond calmly, communicate clearly, and use each experience to build stronger teams and smarter processes.

Google pushes Demand Gen deeper into performance marketing

Google Ads’ Demand Gen campaigns – once thought of as mid-funnel discovery tools – are evolving into full-funnel, conversion-focused campaigns, with YouTube at the core.

Why we care. Marketers are under pressure to prove ROI across channels. Demand Gen now blends social-style ad formats with Google’s AI-driven targeting, giving advertisers new ways to drive sales, leads, and app installs from audiences they can’t reach elsewhere.

What’s new:

  • Target CPC bidding: Advertisers can now align Demand Gen with social campaigns for apples-to-apples budget comparisons.
  • Channel controls. Run ads only on YouTube, or expand to Display, Discover, Gmail, and even Maps.
  • Creative tools. Features like trimming and flipping help repurpose social assets for Shorts and other placements, lowering barriers to entry.
  • Feeds + app support. Product feeds in Merchant Center show a 33% conversion lift; Web-to-App Connect now extends to iOS for smoother in-app conversions.

By the numbers. According to Google, advertisers using Demand Gen have seen, on average:

  • 26% YoY increase in conversions per dollar
  • 33% uplift when attaching product feeds

Between the lines. Google says Demand Gen’s shift from contextual matching to predicting user intent and purchase propensity has made it a contender for bottom-funnel performance. In short: YouTube is no longer just discovery – it’s decision-making.

What’s next. Expect more AI-driven creative tools, expanded shoppable formats, and deeper integrations across channels.

The takeaway. Don’t wait for “perfect” YouTube creative. Lift, adapt, and test now — Demand Gen is no longer a mid-funnel experiment, it’s a performance channel.

Your GEO content audit template

My SEO mantra in the age of GEO is from the great Lil Wayne, “Real G’s move in silence like lasagna.”

Translation for SEO marketers: the most effective GEO moves aren’t loud growth hacks. They’re the subtle edits and formatting that make AI cite you without fanfare.

To help with your GEO audit, here’s an inside peek into my secret menu.

Take a look at my GEO content audit template.

It’s an evolution of my SEO content audit.

As Google’s Danny Sullivan has been telling rooms full of marketers, “Good SEO is good GEO.”

That’s why I like to think of GEO as SEO’s MTV Unplugged version. It’s the same band, same lyrics, just stripped down, reimagined, and way more personal.

Alright, enough philosophy. You came here for the secret recipe. Let’s crack open the GEO content audit template and see how it works in practice.

Use this GEO content audit template

Cool. You’ve made a copy of the GEO content audit template. Now what?

Here are the key sheets you’ll work through:

  • Summary: High-level snapshot once you’ve scored everything.
  • Action list: Quick recap of next steps that summarizes all your findings from the other tabs.
  • Content inventory: This is the backbone. Filters include:
    • URL.
    • Action.
    • Strategy.
    • Page title.
    • Last updated date.
    • Author.
    • Word count.
    • Readability.
    • Average words per sentence.
    • Keywords.
    • Canonical.
    • Internal links.
    • And more.
  • Indexability/architecture/URL design/on-page: Your technical health is still important.
  • Structured data: Markup needed and where.
  • International: Hreflang, language, local cues (currency, units, spelling, trust marks).
  • Speed: Yes, page speed is still important.
  • Content and gaps: Quality scoring and what’s missing.
  • Linking: Internal link plans and external targets.
  • Refresh: Cadence schedules by asset type.

You’ll bounce between content inventory, structured data, content, content gaps, and linking the most.

Tools you’ll want on hand

  • Crawling and gaps: Screaming Frog, Ahrefs, or Semrush for keywords and links.
  • Search Console: Build a regex brand view (brand + misspellings + product names) to watch demand move as AI answers spread.
  • Prompt testing: Manually test core buyer prompts in ChatGPT, Google AI Overviews/AI Mode, Gemini, Copilot, Perplexity. Log inclusions and citations. BrightEdge’s dataset shows that you’ll see a lot of disagreement across platforms. Expect that.
  • Attribution: Roadway AI (or similar) to connect topics/pages to pipeline and revenue for your QBR.
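The brand regex view mentioned above is just one pattern covering your brand, common misspellings, and product names. Here's a minimal Python sketch of what that pattern looks like; the brand and product names are invented for illustration, and the same pattern can be pasted into Search Console's custom regex query filter (which uses RE2, so keep it simple):

```python
import re

# Hypothetical brand view: brand + common misspellings + product names.
# Paste the same pattern into GSC's "Query > Custom (regex)" filter.
BRAND_REGEX = r"(?i)acme( ?corp)?|ackme|acmee|widget ?pro"

queries = ["acme pricing", "ackme reviews", "best crm software", "WidgetPro login"]
branded = [q for q in queries if re.search(BRAND_REGEX, q)]
print(branded)  # the three branded queries
```

Watch that regex view weekly as AI answers spread; a branded-impressions uptick with flat non-brand traffic is often the first visible sign of AI-assisted discovery.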

How to do a GEO content audit (with the template)

1. Set goals

Pick outcomes that map to how people actually find you now:

  • Inclusion rate: Percentage of target prompts where your brand is mentioned inside ChatGPT/AI Overviews/AI Mode/Perplexity/Copilot.
  • Homepage and direct lift: Buyers often go to AI → Google → your homepage. This is why you’ll want to watch branded impressions and homepage sessions.
  • Revenue by topic/page: Wire this to your attribution tool.

Why this mix?

Because AI boxes change, and engines disagree.

A blended scoreboard helps you avoid chasing one fragile metric.

Pro move: Add “ChatGPT/AI Search” to “How did you hear about us?” in forms and sales notes, and review the responses weekly. Many teams report this is where the hint of AI-assisted discovery shows up first.
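Inclusion rate is simply mentions divided by prompts tested. A quick sketch of the math, assuming a manual log of per-engine prompt checks (all data here is invented for illustration):

```python
# Hypothetical weekly prompt log: one row per (engine, prompt) check.
log = [
    {"engine": "ChatGPT", "prompt": "best crm for startups", "included": True},
    {"engine": "AI Overviews", "prompt": "best crm for startups", "included": False},
    {"engine": "Perplexity", "prompt": "crm pricing comparison", "included": True},
    {"engine": "Copilot", "prompt": "crm pricing comparison", "included": False},
]

def inclusion_rate(rows):
    # Share of tested prompts where the brand was mentioned in the answer.
    return sum(r["included"] for r in rows) / len(rows)

print(f"{inclusion_rate(log):.0%}")  # prints 50%
```

You can also group the same log by engine to see where you're strong and where you're invisible; engines disagree often enough that the per-engine split is usually more actionable than the blended number.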

2. Build your content inventory

Using Screaming Frog, export every URL with: title/H1, last updated, author, canonical, word count, readability metrics, internal links, and a target query/theme.

Add a few custom fields:

  • Direct-answer present (Y/N): Is there a <120-word summary up top that answers the main question?
  • FAQ present (Y/N): Does it mirror prompt fragments and include FAQ schema?

Why?

If your page is tidy, answer-first, and properly marked up, it’s far easier to reference.
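The Direct-answer Y/N flag can be pre-filled with a rough heuristic before a human pass. A minimal sketch, assuming page copy with blank-line paragraph breaks; the 120-word threshold matches the checklist above:

```python
def direct_answer_present(page_text: str, max_words: int = 120) -> bool:
    """Rough Y/N check: is the first paragraph short enough to be a direct answer?"""
    paragraphs = [p.strip() for p in page_text.split("\n\n") if p.strip()]
    if not paragraphs:
        return False
    # A direct answer should sit up top and stay under the word budget.
    return len(paragraphs[0].split()) <= max_words
```

Run it over exported page copy and paste the results into the inventory. It's a first pass, not a substitute for reading the page: a short first paragraph that dodges the question still fails the spirit of the check.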

3. Segment by site, market, and language 

Break your inventory into:

  • Domain/subdomain/folder (e.g., .co.uk, /fr/).
  • Market language variations (U.S. vs. U.K. English, Spain vs. Mexico Spanish).
  • Indexability quirks (hidden duplicates, parameters, session IDs).

For international pages, score:

  • Hreflang implementation (pointing to the right alternates; reciprocal).
  • Local cues (currency, units, spelling, trust marks like local badges, VAT specifics).
  • CTAs (country-specific copy, phone numbers, store links).

A shaky international setup is a fast way to look sloppy to users and AI models.

4. Pull the numbers

Look at more than organic:

  • Organic sessions and conversions.
  • Direct sessions and homepage trend.
  • GSC clicks/impressions/queries and brand regex trendline.
  • Manual AI inclusion log (engine, prompt, did we show, who else got named?).

Google says the AI experiences drive “higher-quality clicks,” while many SEO marketers report general traffic decline. 

Read both, and measure your own reality.

5. Judge the substance

Score every high-value page for “citable signal”:

  • Direct answer up top (<120 words).
  • Evidence: Proprietary data, SME quotes, methods, and links out to credible sources.
  • Trade-offs: Where your product is not the best fit.
  • FAQ block that mirrors prompt syntax (e.g., “best X for Y,” “X vs Y,” “pricing,” “implementation time”).
  • Schema: FAQ, HowTo, Product, Organization/Author with published/updated dates.

6. Map gaps and conflicts

Create a hit list:

  • Duplicates and cannibalization: Merge or redirect. If two pages answer the same thing, decide which one lives.
  • Missing BOFU pages:
    • “[Competitor] alternatives.”
    • “X vs Y.”
    • “Pricing.”
    • “Industry-specific use cases.”
  • Offsite holes: Are you absent from “Best of” lists, comparison hubs, review sites, and relevant forum threads? That’s where AI models shop for context. The more you appear on those domains, the likelier you are to get named in answers.

7. Establish next steps

Turn findings into a real plan:

  • Fixes
    • Hreflang clean-up.
    • Canonicals.
    • FAQ/HowTo/Product/Organization/Author schema.
    • Direct-answer summaries added to target pages.
  • Net-new assets
    • “Alternatives,” “X vs Y,” pricing explainer, implementation guide.
    • Video explainers (YouTube) with clear chapters.
    • Region-specific FAQs and CTAs.
  • Earned presence
    • Shortlist the publishers and communities your buyers read.
    • Pitch data-led pieces. Offer SME quotes and screenshots.
    • For review sites (G2/Capterra), set up a gentle ask after X days live.
  • Attribution
    • Connect the page/topic to the pipeline and revenue so that GEO progress is reflected in QBRs (e.g., Roadway or similar).


Worked example: Filling the template

Let’s see what this looks like in practice. Here’s a sample workflow that uses the GEO content audit template step by step. 

  • Create your goals:
    • Hit 40% inclusion across 50 priority prompts in AI Overviews/ChatGPT.
    • +15% homepage sessions QoQ.
    • +25% topic revenue for X cluster.
  • Load all URLs. Pick the top 100 URLs to tackle first. Manually update the action column for each with: keep, update, merge, or redirect.
  • Plan to add FAQ where you mirror prompt fragments. Think about adding Organization/Author (with bios and dates).
  • Check hreflang and copy cues (currency, units, etc.). Flag any market where your “local” page reads like a machine translation or uses the wrong signals.
  • List missing BOFU pages and industry variants. Prioritize by buyer impact.
  • Add internal links from top-traffic pages to the pages you want cited. Short, descriptive anchors that mirror the question asked.
  • Turn your action list into tickets. Dates. Owners. Status. Nothing lingers.
  • For each of your 50 prompts, record: engine, date, question, inclusion (Y/N), snippet, and other brands named. Check weekly for movement. Why weekly? Because Google keeps tinkering with AI Mode, links, carousels, and new UI, so your presence can shift with those tweaks.
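Those per-prompt tracking fields map cleanly onto a small CSV log. A sketch with illustrative field names, to adapt to whatever sheet or tool you already use:

```python
import csv
from datetime import date

# Fields mirror the tracking log described above; all names are illustrative.
FIELDS = ["engine", "date", "question", "included", "snippet", "other_brands"]

def make_log_row(engine, question, included, snippet="", other_brands=""):
    # One row per prompt check; "included" records whether your brand was named.
    return {
        "engine": engine,
        "date": date.today().isoformat(),
        "question": question,
        "included": "Y" if included else "N",
        "snippet": snippet,
        "other_brands": other_brands,
    }

def append_rows(path, rows):
    # Append to the CSV, writing the header only when the file is new or empty.
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:
            writer.writeheader()
        writer.writerows(rows)
```

Appending week over week gives you a trendline per prompt, which is what you need to tell a real inclusion gain from Google's UI tinkering.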

Keep these moves in mind to keep your audit on beat

Refresh cadence

Fast vertical (finance, travel, fast-moving SaaS)? Aim for quarterly.

Other verticals can run biannual or annual content refreshes.

Fresh, cited, and updated content tends to fare better for AI Overviews and Perplexity. 

Both are leaning hard on recency and clarity, and Google is actively testing more visible links in those AI blocks.

Local content beats translation

U.K. ≠ U.S.

Spain ≠ Mexico. 

Adjust spelling, units, currency, trust marks, and examples.

Tune the FAQ to local search habits and buyer objections. 

If your Canadian page says “sales tax” instead of GST/HST, people notice – so do models.

Track what matters, not just what’s easy

  • Inclusion rate across engines
  • Brand regex impressions in GSC
  • Homepage/direct lift
  • Revenue by topic/page

Chasing “LLM rankings” is sketchy. Use trackers as signals, not gospel.

What’s really the difference between a GEO content audit and an SEO content audit?

A GEO audit is a review of your content and your offsite footprint through an AI lens: Is your stuff scannable, citable, up-to-date, and backed by authority that LLMs trust?

SEO audit focuses on ranking and traffic on your own site. You crawl, fix indexing, resolve cannibalization, etc. It’s the classic playbook.

GEO audit focuses on representation and citability. You still care about structure and technicals, but you also score whether your brand appears in AI answers, even when the cited page isn’t yours.

You check if your content opens with a direct answer, mirrors prompt questions (and has FAQ schema), and is referenced by publishers, YouTube videos, Reddit threads, and review sites.

You need both

Skipping either is like training your upper body only. You’ll look fine in a tank top, but probably should avoid shorts.

Rankings still matter for discovery and for the content AI scrapes.

GEO pushes you to become answer-worthy across the broader web.

Or in Sullivan’s phrasing, good SEO already points you toward good GEO.

Pour one out for your old SEO friends – GEO is part of the scene

GEO is here to stay. Call it bad news delivered with a good whiskey. 

Visibility is shifting to AI answers. If you’re referenced in AI answers, you’ll feel it in your top-funnel numbers.

Competitors can “win” even when their site isn’t ranking because third-party pages that mention them get cited.

This is your GEO content audit curtain call 

ChatGPT doesn’t even have a SERP. It has an answer. If Google leans further into AI Mode, that answer becomes the main act.

Your job: be the source cited.

You want a content plan that doesn’t involve sipping vodka Red Bulls like it’s still 2015 and blacking out the second AI changes the playlist.

So run the audit, tighten structure, add proof, win some offsite mentions, and track inclusion, not just rankings.

Tie this to revenue so nobody calls this a science project.

Google Ads streamlines scripts documentation

Google has refreshed its Ads scripts documentation to make it easier for advertisers and developers to build, test, and customize automations.

Why we care. Scripts help advertisers save time and scale campaigns, but the old documentation was clunky and fragmented. The overhaul puts guides, references, and examples into a more intuitive flow that reduces friction for both beginners and power users.

What’s new:

  • Guides are now grouped by experience level and campaign type.
  • A dedicated reference tab makes it easier to browse available script objects.
  • Solutions and examples have been merged, centralizing sample code in one place.

Bottom line: Advertisers and agencies relying on automation should be able to work faster and with less guesswork, while new users have a smoother entry point into scripts.
