Google to sunset ads developer support forums in 2026


Google will shut down three long-running Google Groups forums for advertising developers early next year as it moves all technical support into official channels.

Driving the news. Google will stop responding to new posts on Jan. 28. The forums will stay online as read-only archives until later in 2026, when Google plans to disable posting entirely.

After Jan. 28:

  • Support agents will no longer reply in Google Groups.
  • Replies to existing threads will create a new email ticket with Google support.
  • Existing content will remain available for reference, including past discussions and fixes.

The shift. Google said it’s consolidating support to “streamline technical support channels” and move developers toward official tools with better tracking and response workflows.

Where developers should go now. Google’s updated documentation points developers to its official support channels for each product.

Why we care. These forums have long served as open Q&A hubs for developers, helping teams troubleshoot issues across the Google Ads API, Ads Scripts, and the Campaign Manager 360 API. With the forums going away, all troubleshooting will shift to official support, forcing developers to adjust workflows, share more detailed logs, and rely less on community-driven fixes. The way advertisers solve problems is changing, and preparation will help prevent downtime and lost performance.

What Google wants from developers. To speed up resolutions, Google urges developers to include complete diagnostic details when filing tickets, such as:

  • Google Ads API: request ID, full request + response logs
  • Ads Scripts: script name, customer ID, execution logs, UI error messages
  • CM360 API: profile/account IDs, API method, request + response logs
  • All products: clear issue description, expected behavior, repro steps, code snippets, and error messages
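For the Google Ads API, the request ID and full request/response logs that Google asks for can be captured by enabling the client library’s built-in logger. A minimal sketch, assuming the google-ads Python client’s logger namespace (`google.ads.googleads.client`); the format string is illustrative:

```python
import logging
import sys

# The google-ads Python client emits request/response details (including
# request IDs) through the standard logging module under this namespace.
ADS_LOGGER_NAME = "google.ads.googleads.client"

def enable_ads_api_logging(level: int = logging.DEBUG) -> logging.Logger:
    """Attach a stderr handler so full request + response logs are captured."""
    logger = logging.getLogger(ADS_LOGGER_NAME)
    logger.setLevel(level)
    handler = logging.StreamHandler(sys.stderr)
    handler.setFormatter(
        logging.Formatter("[%(asctime)s - %(levelname)s] %(message)s")
    )
    logger.addHandler(handler)
    return logger

logger = enable_ads_api_logging()
```

With DEBUG-level logs on, the request ID printed alongside each failed call can be pasted straight into a support ticket.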

Community still has a home. Google points developers who want updates, events, or general discussion to its “Google Advertising and Measurement Community” Discord server, which is not tied to official support.

Bottom line. Google is shuttering its public troubleshooting forums in favor of standardized, direct support – a move that may streamline issue handling but could shrink the pool of community-shared knowledge over time.

Google’s announcement. Sunsetting Google Ads API, Google Ads Scripts, and Campaign Manager 360 API Developer Support Forums on Google Groups


ChatGPT, Perplexity push deeper into AI shopping


In the last 24 hours, ChatGPT and Perplexity have introduced new AI-driven shopping experiences that aim to deliver more personalized product discovery and guidance. Both experiences are meant to help users find, compare, and purchase products through conversational queries informed by preferences and past behavior.

ChatGPT

Shopping research. OpenAI introduced shopping research, a guided buying experience that turns ChatGPT into a personalized product researcher.

  • Users describe what they need (e.g., “quiet cordless vacuum,” “compare these strollers,” “gift for my art-obsessed niece”).
  • ChatGPT asks clarifying questions, pulls price/spec/review data from the open web, and produces a tailored buyer’s guide in minutes.
  • It adapts based on your preferences and ChatGPT memory, and can refine picks in real time as users mark items “More like this” or “Not interested.”

How it works. The feature runs on a specialized GPT-5 mini model optimized for shopping tasks, designed to pull reliable information from trusted sites and cite its sources.

Rollout. Available now on free and paid ChatGPT plans on web and mobile, with “nearly unlimited” usage through the holidays.

What’s next. Instant Checkout integrations will allow purchases directly inside ChatGPT for participating merchants.

Perplexity

New shopping experience. Perplexity launched a free U.S. shopping experience built around its core philosophy: AI assistants should scale shoppers, not replace them.

  • Users search conversationally (e.g., “best winter jacket for San Francisco ferry commute”) and Perplexity keeps context as you pivot to related needs.
  • It remembers preferences (e.g., mid-century modern style, minimalist running gear) and tailors future product cards accordingly.
  • Instead of infinite scroll, it generates streamlined product cards with only the details tied to the user’s stated intent.

Integrated checkout. A partnership with PayPal brings fast, in-flow purchases with retailers remaining merchant of record. That means merchants still get customer visibility, handle returns, and maintain the relationship.

Why retailers may care. Perplexity said shoppers who go through a conversational funnel have higher purchase intent, and instant checkout reduces abandonment.

Availability. The new shopping experience is live on the web now, with iOS and Android apps rolling out in the next few weeks.

Why we care. AI assistants are an emerging channel for ecommerce. ChatGPT’s focus is deep research, while Perplexity’s is smooth discovery and built-in checkout. Both aim to become the starting point for shoppers’ buying journeys by making brand/product recommendations that appear personal and tailored to their preferences.


Google Ads MCC takeover attacks are rising – here’s how the phishing scams work

A surge of sophisticated phishing attacks is letting scammers take over full Google Ads Manager accounts (MCCs), giving them instant access to hundreds of client accounts and the power to burn through tens of thousands of dollars in hours without being noticed.

Driving the news. Agencies across LinkedIn, Reddit, and Google’s own forums are reporting a rise in MCC takeovers, even among teams using two-factor authentication. The attackers’ preferred weapon is a near-perfect phishing email that mimics Google’s account-access invitations.

  • Victims say hijackers add fake admin users, link their own MCCs, and begin launching fraudulent, high-budget campaigns.
  • In some cases, support tickets take days to escalate while money continues to drain.
  • One agency reported “tens of thousands” in ad spend racked up within 24 hours.

How it works. The scams look like standard client-access invites – same branding, format, and copy – but the link sends users to a Google Sites page posing as a Google login screen. Once credentials are entered, the attackers get full MCC access.

Why it’s getting worse. Advertisers say the phishing attempts are now almost indistinguishable from real Google messages. Several agencies admitted they would have clicked if not for small discrepancies in the sender domain or login URL.

The impact:

  • Budgets drained: fraudulent ads run immediately.
  • Malware exposure: ads often lead to harmful sites.
  • Account damage: invalid activity flags, disapprovals, and trust issues ripple for months.
  • Operational chaos: agencies lose access to every client account under the MCC.

What Google says. The Google Ads Community team posted a “What to do if your account is compromised” help doc, warning advertisers about rising credential theft during the holiday season, but hasn’t acknowledged the scale of the MCC takeover surge.

Why we care. These MCC hijacks aren’t just isolated security issues – they’re direct financial and operational threats that can wipe out budgets, compromise every client account, and take days for Google to contain. With attackers now bypassing 2FA through near-perfect phishing, even well-secured teams are suddenly vulnerable. If just one team member slips, an entire portfolio of accounts – spend, performance, and client trust – is instantly at risk.

What experts recommend. Marc Walker, founder and managing director of Low Digital Ltd, shared these recommendations to keep your accounts from being hijacked:

  • Always verify the URL: Google never uses Google Sites for login.
  • Confirm invites inside the MCC, not just via email.
  • Purge dormant users and inactive accounts to reduce attack surfaces.
  • Educate teams on phishing red flags, especially during high-volume holiday outreach.
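The first recommendation lends itself to a simple habit check: before entering credentials, confirm the login page’s host is an official Google sign-in domain rather than a Google Sites page. An illustrative sketch (the allowlist below is an assumption for demonstration, not an exhaustive list of Google’s legitimate login domains):

```python
from urllib.parse import urlparse

# Illustrative allowlist -- real Google sign-in flows use accounts.google.com.
TRUSTED_LOGIN_HOSTS = {"accounts.google.com"}

def looks_like_phishing_login(url: str) -> bool:
    """Flag login URLs whose host is not a trusted Google sign-in domain."""
    host = (urlparse(url).hostname or "").lower()
    if host in TRUSTED_LOGIN_HOSTS:
        return False
    # Google Sites pages (sites.google.com) are never legitimate login screens.
    return True

print(looks_like_phishing_login("https://sites.google.com/view/ads-login"))  # True
print(looks_like_phishing_login("https://accounts.google.com/signin"))       # False
```

The same host check can be applied mentally: if the address bar doesn’t read accounts.google.com, don’t type a password.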

Between the lines. If even one user in a large MCC falls for the scam, the attacker effectively acquires keys to an entire portfolio – and can drain budgets faster than Google’s support system can respond.

Bottom line. Google Ads hijacks are a serious operational threat for agencies and in-house teams. Until Google ships stronger MCC-level protections, vigilance remains the only real defense.


How to make products machine-readable for multimodal AI search


As shopping becomes more visually driven, imagery plays a central role in how people evaluate products.

Images and videos can unfurl complex stories in an instant, making them powerful tools for communication. 

In ecommerce, they function as decision tools. 

Generative search systems extract objects, embedded text, composition, and style to infer use cases and brand fit, then LLMs surface the assets that best answer a shopper’s question.

Each visual becomes structured data that removes a purchase objection, increasing discoverability in multimodal search contexts where customers take a photo or upload a screenshot to ask about it.

Visual search is a shopping behavior

Shoppers use visual search to make decisions: snapping a photo, scanning a label, or comparing products to answer “Will this work for me?” in seconds. 

For online stores, that means every photo must do that job: in‑hand scale shots, on‑body size cues, real‑light color, micro‑demos, and side‑by‑sides that make trade‑offs obvious without reading a word.

Multimodal search is reshaping user behaviors

Visual search adoption is accelerating.

Google Lens now handles 20 billion visual queries per month, driven heavily by younger users in the 18-24 cohort. 

These evolving behaviors map to specific intent categories.​

General context

Multimodal search aligns with intuitive information-finding. 

Users no longer rely on text-only fields. They combine images, spoken queries, and context to direct requests.​​

Quick capture and identify

By snapping a photo and asking for identification (e.g., “What plant is this?” or querying an error screen), users instantly solve recognition and troubleshooting tasks, speeding up resolution and product authentication.​

Visual comparison

Showing a product and requesting “find a dupe” or asking about “room style” eliminates complex textual descriptions and enables rapid cross-category shopping and fit checking.

This shortens discovery time and supports quicker alternative product searches.​

Information processing

Presenting ingredient lists (“make recipe”), manuals, or foreign text triggers on-the-fly data conversion. 

Systems extract, translate, and operationalize information, eliminating the need for manual reentry or searching elsewhere for instructions.​

Modification search

Displaying a product and asking for variations (“like this but in blue”) enables precise attribute searching, such as finding parts or compatible accessories, without needing to hunt down model or part numbers.​

These user behaviors highlight the shift away from purely language-based navigation. 

Multimodal AI now enables instant identification, decision support, and creative exploration, reducing friction across both ecommerce and information journeys. 

You can view a comprehensive table of multimodal visual search types here.

Dig deeper: How multimodal discovery is redefining SEO in the AI era

Prioritize content and quality for purchase decisions

Your product images must highlight the specific details customers look for, such as pockets, patterns, or special stitching. 

This goes further, because certain abstract ideas are conveyed more authentically through visuals. 

To answer “Can a 40-year-old woman wear Doc Martens?” you should show, not tell, that they belong.

Original images are essential because they reflect high effort, uniqueness, and skill, making the content more engaging and credible.

Source: Mark Williams-Cook on LinkedIn

Making products machine-readable for image vision

To make products machine-readable, every visual element must be clearly interpreted by AI systems. 

This starts with how images and packaging are designed.

Products and packaging as landing pages

Ecommerce packaging must be engineered like a digital asset to thrive in the era of multimodal AI search. 

When AI or search engines can’t read the packaging, the product becomes invisible at the moment of highest consumer intent. 

Design for OCR-friendliness and authenticity

Both Google Lens and leading LLMs use optical character recognition (OCR) to extract, interpret, and index data from physical goods.

To support this, text and visuals on packaging must be easy for OCR to convert into data.

Prioritize high-contrast color schemes. Black text on white backgrounds is the gold standard. 

Critical details (e.g., ingredients, instructions, warnings) should be presented in clean, sans-serif fonts (e.g., Helvetica, Arial, Lato, Open Sans) and set against solid backgrounds, free from distracting patterns. 

This means treating physical product labeling like a landing page, as Cetaphil does.

Cetaphil product packaging
Source: AdAge

Avoid common failure points such as:

  • Low contrast.
  • Decorative or script fonts.
  • Busy patterns.
  • Curved or creased surfaces.
  • Glossy materials that reflect light and break up text.

To pressure-test packaging for machine readability:

  • Document where OCR fails and analyze why.
  • Run a grayscale test to confirm that text remains distinguishable without color.
  • For every product, include a QR code that links directly to a web page with structured, machine-readable information in HTML.
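The grayscale test has a quantitative companion: the WCAG contrast ratio between label text and its background. A minimal, stdlib-only sketch of the standard WCAG 2.x formula (sampling actual pixel colors from packaging artwork is out of scope here):

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance for an sRGB color (0-255 per channel)."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between foreground and background colors (1:1 to 21:1)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background -- the "gold standard" -- scores 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

Anything well below the 4.5:1 threshold WCAG uses for body text is a candidate for the OCR failure log.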

High-resolution, multi-angle product images work best, especially for items that require authenticity verification. 

Authentic photos, where accuracy and credibility are essential, consistently outperform artificial or AI-generated images.

Dig deeper: How to make ecommerce product pages work in an AI-first world

Managing your brand’s visual knowledge graph

Ecommerce product images on ChatGPT

AI does not isolate your product. It scans every adjacent object in an image to build a contextual database. 

Props, backgrounds, and other elements help AI infer price point, lifestyle relevance, and target customers. 

Each object placed alongside a product sends a signal – luxury cues, sport gear, utilitarian tools – all recalibrating the brand’s digital persona for machines. 

A distinctive logo within each visual scene ensures rapid recognition, making products easier to identify in visual and multimodal AI search “in the wild.” 

Tight control of these adjacency signals is now part of brand architecture. 

Deliberate curation ensures AI models correctly map a brand’s value, context, and ideal customer, increasing the likelihood of appearing in relevant, high-value conversational queries.

Run a co-occurrence audit for brand context

Establish a workflow that assesses, corrects, and operationalizes brand context for multimodal AI search. 

Run this audit in AI Mode, ChatGPT search, ChatGPT, and another LLM of your choice.

Gather the top five lifestyle or product photos and input them into a multimodal LLM, such as Gemini, or an object detection API, like the Google Vision API. 

Use the prompt: 

  • “List every single object you can identify in this image. Based on these objects, describe the person who owns them.” 

This generates a machine-produced inventory and persona analysis.

Identify narrative disconnects, such as a budget product mispositioned as luxury, or an aspirational item undermined by mismatched background cues. 

From these results, develop explicit guidelines that include props, context elements, and on-brand and off-brand objects for marketing, photography, and creative teams. 

Enforce these standards to ensure every asset analyzed by AI – and subsequently ranked or recommended – consistently reinforces product context, brand value, and the desired customer profile. 

This keeps machine perception consistent with strategic goals and strengthens presence in next-generation search and recommendation environments.
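Once the LLM or Vision API returns an object inventory for an image, the guideline check itself can be a simple set comparison against the brand’s approved and prohibited object lists. A sketch with hypothetical on-brand/off-brand lists (the object names are invented examples, not a real “Object Bible”):

```python
# Hypothetical "Object Bible": objects that reinforce vs. undermine the brand.
ON_BRAND = {"yoga mat", "water bottle", "linen towel", "plant"}
OFF_BRAND = {"ashtray", "energy drink", "clutter", "plastic bag"}

def audit_image_objects(detected: list[str]) -> dict:
    """Split a machine-produced object inventory into aligned/misaligned signals."""
    found = {obj.lower() for obj in detected}
    return {
        "aligned": sorted(found & ON_BRAND),
        "misaligned": sorted(found & OFF_BRAND),
        "unclassified": sorted(found - ON_BRAND - OFF_BRAND),
    }

report = audit_image_objects(["Yoga mat", "ashtray", "window"])
print(report["misaligned"])  # ['ashtray']
```

Any image with a non-empty “misaligned” list goes back to the creative team; “unclassified” objects are candidates for adding to the guidelines.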

Brand control across the four visual layers

The brand control quadrant provides a practical framework for managing brand visibility through the lens of machine interpretation. 

It covers four layers, some owned by the brand and others influenced by it.

Known brand

This includes owned visuals, such as official logos, branded imagery, and design guides, which brands assume are controlled and understood by both human audiences and AI.

Loreal product on AI search

Image strategy

  • Curate a visual knowledge graph. 
  • List and assess adjacent objects in brand-connected images. 
  • Build and reinforce an “Object Bible” to reduce narrative drift and ensure lifestyle signals consistently support the intended brand persona and value.

Latent brand 

These are images and contexts AI captures “in the wild,” including:

  • User photos.
  • Social sightings.
  • Street-style shots. 

These third-party visuals can generate unintended inferences about price, persona, or positioning. 

An extreme example is Helly Hansen, whose “HH” logo was co-opted by far-right and neo-Nazi groups, creating unintended associations through user-posted images.

Helly Hansen on Google Search

Shadow brand

This quadrant consists of outdated brand assets and materials presumed private that can be indexed and learned by LLMs if made public, even unintentionally. 

  • Audit all public and semi-public digital archives for outdated or conflicting imagery. 
  • Remove or update diagrams, screenshots, or historic visuals. 
  • Funnel only current, strategy-aligned visual data to guide AI inferences and search representations.

AI-narrated brand

AI builds composite narratives about a brand by synthesizing visual and emotional cues from all layers. 

This outcome can include competitor contamination or tone mismatches.

Image strategy

  • Test the image’s meaning and emotional tone using tools like Google Cloud Vision to confirm that its inherent aesthetics and mood align with the intended product messaging. 
  • When mismatches appear, correct them at the asset level to recalibrate the narrative.

Factoring for sentiment: Aligning visual tone and emotional context

Images do more than provide information. 

They command attention and evoke emotion in split seconds, shaping perceptions and influencing behavior. 

In AI-driven multimodal search, this emotional resonance becomes a direct, machine-readable signal. 

Emotional context is interpreted and sentiment scored.

The affective quality of each image is evaluated by LLMs, which synthesize sentiment, tone, and contextual nuance alongside textual descriptions to match content to user emotion and intent.

To capitalize on this, brands must intentionally design and rigorously audit the emotional tone of their imagery. 

Tools like Microsoft Azure Computer Vision or Google Cloud Vision’s API allow teams to:

  • Score images for emotional cues at scale. 
  • Assess facial expressions and assign probabilities to emotions, enabling precise calibration of imagery to intended product feelings such as “calm” for a yoga mat line, “joy” for a party dress, or “confidence” for business shoes.
  • Align emotional content with marketing goals. 
  • Ensure that imagery sets the right expectations and appeals to the target audience.
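As a sketch of what scoring at scale can look like: Google Cloud Vision’s face detection reports per-emotion likelihoods (e.g., `joyLikelihood`) as enum strings, which can be mapped to numbers and compared against a product line’s intended feeling. The threshold and the response dict below are illustrative assumptions; only the likelihood enum names follow the Vision API’s convention:

```python
# Cloud Vision expresses emotion as likelihood enums; map them to scores.
LIKELIHOOD_SCORE = {
    "VERY_UNLIKELY": 0, "UNLIKELY": 1, "POSSIBLE": 2,
    "LIKELY": 3, "VERY_LIKELY": 4, "UNKNOWN": 0,
}

def matches_intended_emotion(face: dict, emotion: str, threshold: int = 3) -> bool:
    """True if the detected likelihood for `emotion` meets the target threshold."""
    key = f"{emotion}Likelihood"  # e.g. "joyLikelihood" in the REST response
    return LIKELIHOOD_SCORE.get(face.get(key, "UNKNOWN"), 0) >= threshold

# Example: a party-dress shoot should read as "joy".
face = {"joyLikelihood": "VERY_LIKELY", "angerLikelihood": "VERY_UNLIKELY"}
print(matches_intended_emotion(face, "joy"))    # True
print(matches_intended_emotion(face, "anger"))  # False
```

Run over a whole asset library, a check like this flags imagery whose machine-read mood drifts from the campaign brief.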

Start by identifying the baseline emotion in your brand imagery, then actively test for consistency using AI tools.

Ensuring your brand narrative matches AI perception

Prioritize authentic, high-quality product images, ensure every asset is machine-readable, and rigorously curate visual context and sentiment.

Treat packaging and on-site visuals as digital landing pages. Run regular audits for object adjacency, emotional tone, and technical discoverability. 

AI systems will shape your brand narrative whether you guide them or not, so make sure every visual aligns with the story you intend to tell.


Google Business Profiles adds scheduling and multi-location publishing to Google Posts

Google Posts now supports scheduling and multi-location publishing within Google Business Profiles. This should make it easier to manage Google Posts for your businesses and clients.

Scheduling. When you add a new Google Post within Google Business Profiles, there is a new option to “schedule this post.” You can then select a date and time for when you want the post to be scheduled.

Lisa Landsman from Google said on LinkedIn, “plan your entire week or month in advance! You can now schedule your Google Posts to go live automatically at the perfect time.”

Multi-location publishing. Also, if you manage multiple locations for a business and want to quickly copy Google Posts to some or all of those locations, you now can. Lisa Landsman explained, “Easily create a single post and apply it instantly to multiple business locations in one click.”

What it looks like. Here is a GIF of this in action:

Why we care. Businesses are busy and you don’t always have time to drop what you are doing to create a Google Post about a new event or message. But now, when you have time, you can pre-schedule these Google Posts at your convenience. Also, you can quickly copy them to other locations you manage.

As Google’s Lisa Landsman wrote, “We know the upcoming holiday season is a crucial, and hectic, time for your business. It’s also your biggest opportunity to get your events, offers, and updates in front of potential customers who are actively searching.”


Google Ads quietly rolls out a new conversion metric


A new column called “Original Conversion Value” has started appearing inside Google Ads, giving advertisers a long-requested way to see the true, unadjusted value of their conversions.

How it works. Google’s new formula strips everything back:

Conversion Value
– Rule Adjustments (value rules)
– Lifecycle Goal Adjustments (e.g., NCA bonuses)
= Original Conversion Value
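In spreadsheet or script terms, the formula is plain subtraction. A quick sketch with hypothetical numbers (a $120 reported value that includes a $15 value-rule adjustment and a $10 new-customer bonus):

```python
def original_conversion_value(conversion_value: float,
                              rule_adjustments: float,
                              lifecycle_adjustments: float) -> float:
    """Strip value-rule and lifecycle-goal adjustments from the reported value."""
    return conversion_value - rule_adjustments - lifecycle_adjustments

# Reported $120 includes $15 of value rules and a $10 NCA bonus.
print(original_conversion_value(120.0, 15.0, 10.0))  # 95.0
```

The $95 is what the conversions were actually worth before Smart Bidding incentives were layered on.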

Why we care. For years, marketers have struggled to isolate real conversion value from Google’s layers of adjustments — including Conversion Value Rules and Lifecycle Goals (like New Customer Acquisition goals). Original Conversion Value makes it easier to diagnose performance, compare data across campaigns, and spot when automated bidding is boosting value rather than actual conversions.

In short: clearer insights, cleaner ROAS, and more confident decision-making.

Between the lines:

  • Value adjustments are useful for steering Smart Bidding.
  • But they also inflate numbers, complicating reporting and performance analysis.
  • Agencies and in-house teams have long asked Google for a cleaner view.

What’s next. “Original Conversion Value” could quickly become a go-to column for:

  • Revenue reporting
  • Post-campaign analysis
  • Troubleshooting inflated ROAS
  • Auditing automated bid strategies

First seen. This update was first spotted by Google Ads specialist Thomas Eccel, who shared the new column on LinkedIn.

The bottom line. It’s a small update with big clarity. Google Ads is giving marketers something rare: a simpler, more transparent look at the value their ads actually drive.


Google releases Gemini 3 – it already powers AI Mode

Google announced the release of its latest AI model update, Gemini 3. “And now we’re introducing Gemini 3, our most intelligent model, that combines all of Gemini’s capabilities together so you can bring any idea to life,” Google’s CEO, Sundar Pichai wrote.

Gemini 3 is now being used in AI Mode in Search with more complex reasoning and new dynamic experiences. “This is the first time we are shipping Gemini in Search on day one,” Sundar Pichai said.

AI Mode with Gemini 3. Google shared how AI Mode in Search is now using Gemini 3 to enable new generative UI experiences like immersive visual layouts and interactive tools and simulations, all generated completely on the fly based on your query.

Here is a video showing how RNA polymerase works with generative UI in AI Mode in Search.

Robby Stein, VP of Product at Google Search said:

“In Search, Gemini 3 with generative layouts will make it easy to get a rich understanding of anything on your mind. It has state-of-the-art reasoning, deep multimodal understanding and advanced agentic capabilities. That allows the model to shine when you ask it to explain advanced concepts or ideas – it reasons and can code interactive visuals in real-time. It can tackle your toughest questions like advanced science.”

More Gemini 3. Google added that Gemini 3 has:

  • State-of-the-art reasoning
  • Deep multimodal understanding
  • Powerful vibe coding so you can go from prompt to app in one shot
  • Improved agentic capabilities, so it can get things done on your behalf, at your direction

Availability. Gemini 3 is now rolling out in AI Mode and beyond:

  • For everyone in the Gemini app and for Google AI Pro and Ultra subscribers in AI Mode in Search
  • For developers in the Gemini API in AI Studio, Google’s new agentic development platform Google Antigravity, and the Gemini CLI
  • For enterprises in Vertex AI and Gemini Enterprise

Why we care. Gemini 3 is currently powering AI Mode, the future of Google Search. It will continue to power more and more search features within Google, as well as other areas within Google’s platforms.

Staying on top of these changes, and how they impact search, your site, and possibly your Google Ads accounts, is important.


The three AI research modes redefining search – and why brand wins


The AI resume has become a C-suite-level asset that reflects your entire digital strategy. 

To use it effectively, we first need to understand where AI is deploying it across the user journey.

How AI has rewritten the user journey

For years, our strategies were shaped by the inbound methodology.

We built content around a user-driven path through awareness, consideration, and decision, with traditional SEO acting as the engine behind those moments.

That journey has now been fundamentally reshaped. 

AI assistive engines – conversational systems like Gemini, ChatGPT, and Perplexity – are collapsing the funnel. 

They move users from discovery to decision within walled-garden environments. 

It’s what I call the BigTech walled garden AI conversational acquisition funnel.

For marketers, that shift can feel like a loss of control. 

We no longer own the click, the landing page, or the carefully engineered funnel. 

But from the consumer perspective, the change is positive. 

People want one thing: a direct, trusted answer.

This isn’t a contradiction. It’s the new reality. 

Our job is to align with this best-service model by proving to the AI that our brand is the most credible answer.

That requires updating the ultimate goal. 

For commercial queries, the win is no longer visibility. 

It’s earning the perfect click – the moment when an AI system acts as a trusted advisor and chooses your brand as the best solution.

To get there, we have to broaden our focus from explicit branded searches to the three modes of research AI uses today: 

  • Explicit.
  • Implicit.
  • Ambient. 

Together, they define the new strategic landscape and lead to one truth.

In an AI-driven ecosystem, brand is what matters most.

3 types of research redefining what search is

These three behaviors reveal how users now discover, assess, and choose brands through AI.

Explicit research (brand): The final perfect click

Explicit research is any query that includes your brand name, such as:

  • Searches for your name.
  • “Brand name reviews.”
  • “Brand vs. competitor.”

They represent deliberate, high-stakes moments when a potential client, partner, or investor is actively researching your brand. 

It’s the decision stage of the funnel, where they look for specific information about you or your services, or conduct a final AI-driven due diligence check before committing.

What they see here is your digital business card.

A strong AI assistive engine optimization (AIEO) strategy secures these bottom-of-funnel moments first. 

You must engineer an AI resume – the AI equivalent of a brand SERP – that is positive, accurate, and convincing so the prospect who is actively looking for you converts.

Branded terms are the lowest-hanging fruit, the most critical conversion point in the new conversational funnel, and the foundation of AIEO.

Implicit research (industry/topic/comparison): Being top of algorithmic mind

Implicit research includes any topical query that does not contain a brand name. 

These are the “best of” comparisons and problem-focused questions that happen at the top and middle of the funnel.

To win this part of the journey, your brand must be top of algorithmic mind, the state where an AI instinctively selects you as the most credible, relevant, and authoritative answer to a user’s query.

  • Consideration: When a user asks, “Who are the best personal injury law firms in Los Angeles?”, the AI builds a shortlist, and you cannot afford to be missing from it.
  • Awareness: When a user asks, “Give me advice about personal injury legal options after a car accident,” your chance to be included depends on whether the AI already understands and trusts your brand.

Implicit research is not about keywords. It is about being understood by the algorithms, demonstrating credibility, and building topical authority.

Here’s how it works:

  • The algorithms understand who you are.
  • They can effectively apply credibility signals. (An expanded version of Google’s E-E-A-T framework, N-E-E-A-T-T, incorporates notability and transparency.)
  • You have provided the content that demonstrates topical authority.

If you meet these three prerequisites, you can become top of algorithmic mind for user-AI interactions at the top and middle of the funnel, where implicit research happens.

Ambient research (push by software): Where the algorithms advocate for you

Ambient research is the ultimate form of push discovery, where an AI proactively suggests your brand to a user who isn’t even in research mode. 

It represents the most profound shift yet. Ambient research sits beyond the funnel – it is pre-awareness.

Simple examples include:

  • Gemini suggesting your name in Google Sheets while a prospect models ROI.
  • Your profile surfacing as a suggested consultant in Gmail or Outlook.
  • A meeting summary in Google Meet or Teams recommending your brand as the expert who can solve a key challenge.

In these day-to-day situations, the user is no longer pulling information. 

The AI is pushing a solution it trusts so completely that the engine becomes your advocate.

This is the ultimate goal, signaling that a brand has reached true dominant status as top of algorithmic mind within a niche. 

This level of trust comes from building a deep and consistent digital presence that teaches the AI your brand is a helpful default in a given context. 

It’s the same dynamic Seth Godin describes as “permission marketing,” except here the permission is granted by the algorithms.

It may feel like an edge case in 2025, but ambient research will become a major opportunity for those who prepare now. 

The walls are rising in the AI walled garden 2.0 – the new, more restrictive AI ecosystems. 

The next evolution will be AI assistive agents. 

These agents will not just recommend a solution. They will execute it. 

When an agent books a flight, orders a product, or hires a consultant on a user’s behalf, there is no second place. 

This creates a true zero-sum moment in AI. 

If you are not the trusted default choice, you are not an option at all.

Rethink your funnel: Brand is the unifying strategy

The awareness, consideration, and decision funnel still exists, but the journey has been hijacked by AI.

A strategy focused only on explicit research is a losing game. 

It secures the bottom of the funnel but leaves the entire middle and top wide open for competitors to be discovered and recommended.

Expanding into implicit research is better, yet it remains a reactive posture. You are waiting to be chosen from a list. 

That approach will fail as ambient research grows, because ambient moments are where the AI makes the first introduction.

This landscape demands a brand-first strategy.

Brand is the one constant across all three research modes. AI:

  • Recommends you in explicit research because it understands your brand’s facts. 
  • Recommends you in implicit research because it trusts your credibility on a topic. 
  • Advocates for you in ambient research because it has learned your brand is the most helpful default solution.

By building understandability, credibility, and deliverability, you are not optimizing for one type of search. 

You are systematically teaching the AI to trust your brand at every possible interaction.

The brands that become the best teachers will be the ones an AI recommends across all three research modes. 

It’s time to update your strategy or risk being left out of the conversation entirely.

Your final step: The strategic roadmap 

You now understand the what – the AI resume – and the where – the three research modes. 

Finally, we’ll cover the how: the complete strategic roadmap for mastering the algorithmic trinity with a multi-speed approach that systematically builds your brand’s authority.


Google AI Overviews: How to remove or suppress negative content

By now, we’re all familiar with Google AI Overviews. Many queries you search on Google now surface responses through this quick and prominent search feature.

But AI Overview results aren’t always reliable or accurate. 

Google’s algorithms can promote negative or misleading content, making online reputation management (ORM) difficult. 

Here’s how to stay on top of AI Overviews and your ORM – by removing, mitigating, or addressing negative content.

How AI Overviews source information

AI Overviews relies on a mix of data sources across Google and the open web, including:

  • Google’s Knowledge Graph: The Knowledge Graph is Google’s structured database of facts about people, places, and things. It’s built from a range of licensed data sources and publicly available information.
  • Google’s tools and databases: Google also draws on structured data from its own systems. This includes information from:
    • Business Profiles.
    • The Merchant Center.
    • Other Google-managed datasets that commonly appear in search results.
  • Websites: AI Overviews frequently cites content from websites across the open web. The links that appear beside answers point to a wide variety of sources, ranging from authoritative publishers to lower-quality sites.
  • User-generated content (UGC): UGC can also surface in AI Overviews. This may include posts, reviews, photos, or publicly available content from community-driven platforms like Reddit.

Several other factors influence how this data is organized into answers, including topical relevance, freshness, and the authority of the source.

However, even with relevance and authority taken into consideration, harmful or false content can still appear in results.

This can happen for a variety of reasons, including:

  • Where the information is sourced.
  • How Google’s AI fills in gaps.
  • Instances where it may misunderstand the context of a user’s query.

Removing or suppressing harmful content

There are several options for removing or suppressing negative information on the web, including content that surfaces in AI Overviews. Let’s look at two.

Legal and platform-based removal

From time to time, you are left with no other option but to take legal action.

In certain instances, a Digital Millennium Copyright Act (DMCA) claim or defamation lawsuit might be applicable. 

A DMCA claim can be initiated at the request of the content owner. A defamation lawsuit, meanwhile, aims to establish libel by showing four things:

  • A false statement purporting to be fact. 
  • Publication or communication of that statement to a third person.
  • Fault amounting to at least negligence.
  • Damages, or some harm caused to the reputation of the person or entity who is the subject of the statement.

Defamation standards vary by jurisdiction, and public figures may face a higher legal standard. 

Because of this, proper documentation and professionalism are essential when filing a lawsuit, and working with a legal professional is likely in your best interest.

Dig deeper: Generative AI and defamation: What the new reputation threats look like

Working with an ORM specialist

The other (and perhaps easier) route to take is working with an online reputation management specialist. 

These teams are well versed in handling the multi-layered process of removals.

In an online crisis, they have the tools to respond and mitigate damage. They’re also trained to balance ethical considerations you might not always account for.

How to deliver positive signals to AI systems

Clearer signals make it easier for AI Overviews to present your brand correctly. Focus on the following areas.

Strengthening signals through publishing 

One effective method is strategic publishing.

This means building a strong, positive presence around your company, business, or personal brand so AI Overviews have authoritative information to draw from.

A few approaches support this:

  • Publishing on credible domains: ORM firms often publish content on platforms like Medium, LinkedIn, and reputable industry sites. This strengthens your presence in trusted environments.
  • Employing consistent branding and factual accuracy: Content must also be factual and consistently branded. This reinforces authority and signals reliability.
  • Leveraging press releases and thought leadership: Press releases, thought leadership pieces, and expert commentary help create credible backlinks and citations across the web.
  • Supporting pages that build the narrative: ORM specialists also create supporting pages that reinforce key narratives. With the right linking and content clusters, AI Overviews is more likely to surface this material.

Leveraging structured data and E-E-A-T

Another effective method to establish credibility on AI Overviews is to focus on technical enhancements and experience, expertise, authoritativeness, and trustworthiness (E-E-A-T). 

ORM specialists typically focus on two areas:

  • Structured data and schema markup: This involves adding more context about your brand online by:
    • Enhancing author bios.
    • Highlighting positive reviews.
    • Reinforcing signals that reflect credibility.
  • Establishing E-E-A-T signals: This includes building a trusted online presence by:
    • Referencing work published in reputable outlets.
    • Highlighting real client examples.
    • Showcasing customer relationships.
    • Outlining accolades and expertise through your bio.
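The structured-data work above can be sketched in miniature. Below is an illustrative JSON-LD `Person` block assembled with Python; every name, URL, and property value is a placeholder invented for this example, not a recommendation from the article:

```python
import json

# Illustrative JSON-LD "Person" markup of the kind ORM specialists
# embed in author bio pages. All names and URLs are placeholders.
author_schema = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Example",
    "jobTitle": "Chief Analyst",
    "worksFor": {"@type": "Organization", "name": "Example Co"},
    "sameAs": [
        "https://www.linkedin.com/in/jane-example",
        "https://medium.com/@jane-example",
    ],
    "award": "Industry Analyst of the Year",
}

# Render as a <script> block ready to paste into a page's <head>.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(author_schema, indent=2)
    + "\n</script>"
)
print(snippet)
```

The `sameAs` links are what tie the bio page to the "credible domains" mentioned earlier, so crawlers can connect the profiles into one entity.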

Monitoring AI Overviews and detecting issues early

A final key aspect of staying on top of AI Overviews is to monitor the algorithm and detect issues early. 

Tools that track AI Overviews make this far more efficient, helping business owners monitor keywords and detect potential damage.

For instance, you might use these tools to track your brand name, executive names, or even relevant products.
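As a rough illustration of the kind of check a monitoring workflow might run, here is a minimal Python sketch. The sample text and the negative-term list are invented, and actually capturing AI Overview text is assumed to be handled by whatever tracking tool you use:

```python
# Sketch of an early-warning check on captured AI Overview text.
# The negative-term list is illustrative, not exhaustive.
NEGATIVE_TERMS = {"lawsuit", "scam", "fraud", "recall", "complaint"}

def flag_negative_mentions(overview_text: str, tracked_terms: list[str]) -> list[str]:
    """Return tracked terms (brand, executives, products) that appear
    in an overview snippet that also contains negative language."""
    text = overview_text.lower()
    flagged = []
    for term in tracked_terms:
        if term.lower() in text and any(neg in text for neg in NEGATIVE_TERMS):
            flagged.append(term)
    return flagged

# Hypothetical captured snippet for demonstration.
sample = "Example Co faces a new lawsuit over its flagship product."
print(flag_negative_mentions(sample, ["Example Co", "Jane Example"]))
# → ['Example Co']
```

A real workflow would add sentiment scoring and alerting, but even a keyword-level flag like this surfaces problems early enough to act.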

As discussed, it’s also crucial to have a plan in place in case a crisis ever hits.

This means establishing press outreach contact points and a legal department, and knowing how to suppress content via the suppression methods already mentioned.

Ethical considerations

Online reputation management isn’t just generating think pieces. It’s a layered process grounded in ethical integrity and factual accuracy.

To maintain a truthful and durable strategy, keep the following in mind:

  • Facts matter: Don’t aim to manipulate or deceive. Focus on promoting factual, positive content to AI Overviews.
  • Avoid aggression: Aggressive tactics rarely work in ORM. There’s a balance between over-optimization and under-optimization, and an ORM firm can help you find it.
  • Think long-term: You may want negative or false content removed immediately, but lasting suppression requires a long-term plan to promote positive content year after year.

Managing how AI Overviews presents your brand

AI Overviews is already a dominant part of the search experience.

But its design means negative or false content can still rise to the top.

As AI Overviews becomes more prominent, business owners need to monitor their online reputation and strengthen the positive signals that surface in these results.

Over time, that requires strategic publishing, long-term planning, the right technical signals, and a commitment to factual, honest content.

By following these principles, AI Overviews can become an asset for growth instead of a source of harm.


82% of marketers fail AI adoption (Positionless Marketing can fix it) by Optimove

Picture a chocolate company with an elaborate recipe, generations old. They ask an AI system to identify which ingredients they could remove to cut costs. The AI suggests one. They remove it. Sales hold steady. They ask again. The AI suggests another. This continues through four or five iterations until they’ve created the cheapest possible version of their product. Fantastic margins, terrible sales. When someone finally tastes it, the verdict is immediate: “This isn’t even chocolate anymore.”

Aly Blawat, senior director of customer strategy at Blain’s Farm & Fleet, shared this story during a recent MarTech webinar to illustrate why 82% of marketing teams are failing at AI adoption: automation without human judgment doesn’t just fail. It compounds failure faster than ever before. And that failure has nothing to do with the technology itself.

The numbers tell the story. In a Forrester study commissioned by Optimove, only 18% of marketers consider themselves at the leading edge of AI adoption, even though nearly 80% expect AI to improve targeting, personalization and optimization. Forrester’s Rusty Warner, VP and principal analyst, puts this in context: only about 25% of marketers worldwide are in production with any AI use cases. Another third are experimenting but haven’t moved to production. That leaves more than 40% still learning about what AI might do for them.

“This particular statistic didn’t really surprise me,” Warner said. “We find that a lot of people that are able to use AI tools at work might be experimenting with them at home, but at work, they’re really waiting for their software vendors to make tools available that have been deemed safe to use and responsible.”

The caution is widespread. IT teams have controls in place for third-party AI tools. Even tech-savvy marketers who experiment at home often can’t access those tools at work until vendors embed responsible AI, data protections and auditability directly into their platforms.

The problem isn’t the AI tools available today. It’s that marketing work is still structured the same way it was before AI existed.

The individual vs. the organization

Individual marketers are thirsty for AI tools. They see the potential immediately. But organizations are fundamentally built for something different: control over brand voice, short-term optimization and manual processes where work passes from insights teams to creative teams to activation teams, each handoff adding days or weeks to cycle time.

Most marketing organizations still operate like an assembly line. Insights come from one door, creative from another, activation from a third. Warner called this out plainly: “Marketing still runs like an assembly line. AI and automation break that model, letting marketers go beyond their position to do more and be more agile.”

The assembly line model is excellent at governance and terrible at speed. By the time results return, they inform the past more than the present. And in a world where customer behavior shifts weekly, that lag becomes fatal.

The solution is “Positionless Marketing,” a model where a single marketer can access data, generate brand-safe creative and launch campaigns with built-in optimization, all without filing tickets or waiting for handoffs. It doesn’t mean eliminating collaboration. It means reserving human collaboration for major launches, holiday campaigns and sensitive topics while enabling marketers to go end-to-end quickly and safely for everything else.

Starting small, building confidence

Blain’s Farm & Fleet, a 120-year-old retail chain, began its AI journey with a specific problem: launching a new brand campaign and needing to adapt tone consistently across channels. They implemented Jasper, a closed system where they could feed their brand tone and messaging without risk.

“We were teaching it a little bit more about us,” Blawat said. “We wanted to show up cohesively across the whole entire ecosystem.”

Warner recommends this approach. “Start small and pick something that you think is going to be a nice quick win to build confidence,” he said. “Audit your data, make sure it’s cleaned up. Your AI is only going to be as good as the data that you’re feeding it.”

The pattern repeats: start with a closed-loop copy tool, then add scripts to clean product data, then layer in segmentation. Each step frees time, shortens cycles, and builds confidence.

Where data meets speed

Marketers aren’t drowning in too little data. They’re drowning in too much data with too little access. The 20% of marketing organizations that move fast centralize definitions of what “active customer,” “at risk,” and “incremental lift” actually mean. And they put those signals where marketers work, not in a separate BI maze.
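A minimal sketch of what centralized definitions can look like in practice; the segment names come from the paragraph above, while the 90- and 180-day thresholds and the reference date are invented for illustration:

```python
from datetime import date

# One shared place where "active" and "at risk" are defined, so every
# team applies the same rules. Thresholds here are illustrative.
TODAY = date(2025, 6, 1)

SEGMENT_RULES = {
    "active_customer": lambda c: (TODAY - c["last_purchase"]).days <= 90,
    "at_risk": lambda c: 90 < (TODAY - c["last_purchase"]).days <= 180,
}

def segments_for(customer: dict) -> list[str]:
    """Evaluate every shared rule against one customer record."""
    return [name for name, rule in SEGMENT_RULES.items() if rule(customer)]

print(segments_for({"last_purchase": date(2025, 3, 15)}))
# → ['active_customer']
```

The point is not the thresholds themselves but that they live in one definition, not in five teams’ spreadsheets.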

“There’s massive potential for AI, but success hinges on embracing the change required,” Warner said. “And change is hard because it involves people and their mindset, not just the technology.”

The adoption lag isn’t about technology readiness. It’s about organizational readiness.

Balancing automation and authenticity

Generative AI took off first in low-risk applications: creative support, meeting notes, copy cleanup. Customer-facing decisions remain slower to adopt because brands pay the price for mistakes. The answer is to deploy AI with guardrails in the highest-leverage decisions, prove lift with holdouts and expand methodically.

Blawat emphasized this balance. “We need that human touch on a lot of this stuff to make sure we’re still showing up as genuine and authentic,” she said. “We’re staying true to who our brand is.”

For Blain’s Farm & Fleet, that means maintaining the personal connection customers expect. The AI handles the mechanics of targeting and timing. But humans ensure every message reflects the values and voice customers trust.

The future of marketing work

AI is moving from analysis to execution. When predictive models, generative AI and decisioning engines converge, marketers stop drawing hypothetical journeys and start letting the system assemble unique paths per person.

What changes? Less canvas drawing, more outcome setting. Less reporting theater, more lift by cohort. Fewer meetings, faster iterations.

Warner points to a future that’s closer than most organizations realize. “Imagine a world where I don’t come to your commerce site and browse. Instead, I can just type to a bot what it is I’m looking for. And I expect your brand to be responsive to that.”

That kind of conversational commerce will require everyone in the organization to become a customer experience expert. “It doesn’t matter what channel the customer uses,” Warner explained. “They’re talking to your brand.”

The path forward

There is no AI strategy without an operating model that can use it. The fix requires three fundamental changes: restructure how marketing work flows, measure lift instead of activity and enable marketers to move from idea to execution without handoffs.

The path forward requires discipline. Pick one customer-facing use case with clear financial upside. Define the minimum signals, audiences and KPIs needed. Enforce holdouts by default. Enable direct access to data, creative generation and activation in one place. Publish weekly lift by cohort. Expand only when lift is proven.
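The holdout-and-lift discipline described above can be sketched in a few lines; all cohort names and conversion numbers are made up for illustration:

```python
# Weekly lift by cohort against a holdout baseline, per the playbook
# above. Cohorts and conversion rates are illustrative.
def lift(treated_rate: float, holdout_rate: float) -> float:
    """Relative lift of the treated group over the holdout baseline."""
    if holdout_rate == 0:
        raise ValueError("holdout conversion rate must be non-zero")
    return (treated_rate - holdout_rate) / holdout_rate

cohorts = {
    "new_customers": {"treated": 0.055, "holdout": 0.050},
    "lapsed": {"treated": 0.024, "holdout": 0.020},
}

for name, c in cohorts.items():
    print(f"{name}: {lift(c['treated'], c['holdout']):+.0%}")
```

Publishing this table weekly, and expanding a use case only when the lift holds, is the "measure lift instead of activity" change in its simplest form.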

Warner expects adoption to accelerate significantly in 2026 as more vendors embed AI capabilities with proper guardrails. For brands like Blain’s Farm & Fleet, that future is already taking shape. They started with copywriting, proved value and are now expanding. The key was finding specific problems where AI could help and measuring whether it actually did.

AI will not fix a slow system. It will amplify it. Teams that modernize how work gets done and make lift the language of their decisions will see the promise translate into performance.

As Blawat’s chocolate story reminds us, automation without judgment optimizes for the wrong outcome. The goal isn’t the cheapest product or the fastest campaign. It’s the one that serves customers while building the brand. That requires humans in the loop to point AI in the right direction.
