Why PPC measurement works differently in a privacy-first world

If you’ve been managing PPC accounts for any length of time, you don’t need a research report to tell you something has changed. 

You see it in the day-to-day work: 

  • GCLIDs missing from URLs.
  • Conversions arriving later than expected.
  • Reports that take longer to explain while still feeling less definitive than they used to.

When that happens, the reflex is to assume something broke – a tracking update, a platform change, or a misconfiguration buried somewhere in the stack.

But the reality is usually simpler. Many measurement setups still assume identifiers will reliably persist from click to conversion, and that assumption no longer holds consistently.

Measurement hasn’t stopped working. The conditions it depends on have been shifting for years, and what once felt like edge cases now show up often enough to feel like a systemic change.

Why this shift feels so disorienting

I’ve been close to this problem for most of my career. 

Before Google Ads had native conversion tracking, I built my own tracking pixels and URL parameters to optimize affiliate campaigns. 

Later, while working at Google, I was involved in the acquisition of Urchin as the industry moved toward standardized, comprehensive measurement.

That era set expectations that nearly everything could be tracked, joined, and attributed at the click level. Google made advertising feel measurable, controllable, and predictable. 

As the ecosystem now shifts toward more automation, less control, and less data, that contrast can be jarring.

It has been for me. Much of what I once relied on to interpret PPC data no longer applies in the same way. 

Making sense of today’s measurement environment requires rethinking those assumptions, not trying to restore the old ones. This is how I think about it now.

Dig deeper: How to evolve your PPC measurement strategy for a privacy-first future

The old world: click IDs and deterministic matching

For many years, Google Ads measurement followed a predictable pattern. 

  • A user clicked an ad. 
  • A click ID, or gclid, was appended to the URL. 
  • The site stored it in a cookie. 
  • When a conversion fired, that identifier was sent back and matched to the click.

This produced deterministic matches, supported offline conversion imports, and made attribution relatively easy to explain to stakeholders. 

As long as the identifier survived the journey, the system behaved in ways most advertisers could reason about. 

We could literally see what happened with each click and which ones led to individual conversions.

That reliability depended on a specific set of conditions.

  • Browsers needed to allow parameters through. 
  • Cookies had to persist long enough to cover the conversion window. 
  • Users had to accept tracking by default. 

Luckily, those conditions were common enough that the model worked really well.

Why that model breaks more often now

Browsers now impose tighter limits on how identifiers are stored and passed.

Apple’s Intelligent Tracking Prevention, enhanced tracking protection, private browsing modes, and consent requirements all reduce how long tracking data persists, or whether it’s stored at all.

URL parameters may be stripped before a page loads. Cookies set via JavaScript may expire quickly. Consent banners may block storage entirely.

Click IDs sometimes never reach the site, or they disappear before a conversion occurs.

This is expected behavior in modern browser environments, not an edge case, so we have to account for it.

Trying to restore deterministic click-level tracking usually means working against the constant push toward more privacy and the resulting browser behaviors.

This is another of the many evolutions of online advertising we simply have to get on board with, and I’ve found that designing systems to function with partial data beats fighting the tide.

The adjustment isn’t just technical

On my own team, GA4 is a frequent source of frustration. Not because it’s incapable, but because it’s built for a world where some data will always be missing. 

We hear the same from other advertisers: the data isn’t necessarily wrong, but it’s harder to reason about.

This is the bigger challenge. Moving from a world where nearly everything was observable to one where some things are inferred requires accepting that measurement now operates under different conditions. 

That mindset shift has been uneven across the industry because measurement lives at the periphery of where many advertisers spend most of their time, working in ad platforms.

A lot of effort goes into optimizing ad platform settings when the better use of that time is often fixing broken data so better decisions can be made.

Dig deeper: Advanced analytics techniques to measure PPC

What still works: Client-side and server-side approaches

So what approaches hold up under current constraints? The answer involves both client-side and server-side measurement.

Pixels still matter, but they have limits

Client-side pixels, like the Google tag, continue to collect useful data.

They fire immediately, capture on-site actions, and provide fast feedback to ad platforms, whose automated bidding systems rely on this data.

But these pixels are constrained by the browser. Scripts can be blocked, execution can fail, and consent settings can prevent storage. A portion of traffic will never be observable at the individual level.

When pixel tracking is the only measurement input, these gaps affect both reporting and optimization. Pixels haven’t stopped working. They just no longer cover every case.

Changing how pixels are delivered

Some responses to declining pixel data focus on the mechanics of how pixels are served rather than measurement logic.

Google Tag Gateway changes where tag requests are routed, sending them through a first-party, same-origin setup instead of directly to third-party domains.

This can reduce failures caused by blocked scripts and simplify deployment for teams using Google Cloud.

What it doesn’t do is define events, decide what data is collected, or correct poor tagging choices. It improves delivery reliability, not measurement logic.

This distinction matters when comparing Tag Gateway and server-side GTM.

  • Tag Gateway focuses on routing and ease of setup.
  • Server-side GTM enables event processing, enrichment, and governance. It requires more maintenance and technical oversight, but it provides more control.

The two address different problems.

Here’s the key point: better infrastructure affects how data moves, not what it means.

Event definitions, conversion logic, and consistency across systems still determine data quality.

A reliable pipeline delivers whatever it’s given: put garbage in, and it will faithfully deliver that same garbage out.

Offline conversion imports: Moving measurement off the browser

Offline conversion imports take a different approach, moving measurement away from the browser entirely. Conversions are recorded in backend systems and sent to Google Ads after the fact.

Because this process is server to server, it’s less affected by browser privacy restrictions. It works for longer sales cycles, delayed purchases, and conversions that happen outside the site. 

This is why Google commonly recommends running offline imports alongside pixel-based tracking. The two cover different parts of the journey. One is immediate, the other persists.

Offline imports also align with current privacy constraints. They rely on data users provide directly, such as email addresses during a transaction or signup.

The data is processed server-side and aggregated, reducing reliance on browser identifiers and short-lived cookies.

Offline imports don’t replace pixels. They reduce dependence on them.
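As a rough illustration of what an offline import feed looks like, here is a sketch that formats backend CRM conversions as rows for a click-conversion upload. The column names mirror the general shape of Google's import template, but treat both the columns and the helper itself as illustrative assumptions; check the current documentation before building a real feed:

```python
import csv
import io

def build_offline_import_csv(conversions: list[dict]) -> str:
    """Format backend conversion records as CSV text for an offline import.
    Each record carries the click ID captured at lead time plus the
    conversion details recorded later, server-side, in the CRM."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Google Click ID", "Conversion Name",
                     "Conversion Time", "Conversion Value"])
    for conv in conversions:
        writer.writerow([conv["gclid"], conv["name"],
                         conv["time"], conv["value"]])
    return buf.getvalue()
```

The point of the sketch: the browser's only job is capturing the click ID at the start; everything else happens server to server, after the fact, outside the reach of browser privacy restrictions.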

Dig deeper: Offline conversion tracking: 7 best practices and testing strategies

How Google fills the gaps

Even with pixels and offline imports working together, some conversions can’t be directly observed.

Matching when click IDs are missing

When click IDs are unavailable, Google Ads can still match conversions using other inputs.

This often begins with deterministic matching through hashed first-party identifiers such as email addresses, when those identifiers can be associated with signed-in Google users.

This is what Enhanced Conversions help achieve.
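The deterministic half of this is conceptually simple: both sides hash the same normalized identifier and compare hashes, so neither party exchanges the raw email. A minimal sketch of the pattern (simplified; real implementations must follow the platform's exact normalization rules before hashing):

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    """Trim whitespace, lowercase, then SHA-256 hash the email.
    (Illustrative only: actual Enhanced Conversions normalization
    has additional rules beyond trim-and-lowercase.)"""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()
```

Because normalization makes "  User@Example.com " and "user@example.com" hash identically, the same person can be matched across systems without the plaintext address ever leaving either one.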

When deterministic matching isn’t possible, the system relies on aggregated and validated signals rather than reconstructing individual click paths.

These can include session-level attributes and limited, privacy-safe IP information, combined with timing and contextual constraints.

This doesn’t recreate the old click-level model, but it allows conversions to be associated with prior ad interactions at an aggregate level.

One thing I’ve noticed: adding these inputs typically improves matching before it affects bidding.

Bidding systems account for conversion lag and validate new signals over time, which means imported or modeled conversions may appear in reporting before they’re fully weighted in optimization.

Matching, attribution, and bidding are related but separate processes. Improvements in one don’t immediately change the others.

Modeled conversions as a standard input

Modeled conversions are now a standard part of Google Ads and GA4 reporting.

They’re used when direct observation isn’t possible, such as when consent is denied or identifiers are unavailable.

These models are constrained by available data and validated through consistency checks and holdback experiments.

When confidence is low, modeling may be limited or not applied. Modeled data should be treated as an expected component of measurement rather than an exception.

Dig deeper: Google Ads pushes richer conversion imports

Boundaries still matter

Tools like Google Tag Gateway or Enhanced Conversions for Leads help recover measurement signal, but they don’t override user intent. 

Routing data through a first-party domain doesn’t imply consent. Ad blockers and restrictive browser settings are explicit signals. 

Overriding them may slightly increase the measured volume, but it doesn’t align with users’ expectations regarding how your organization uses their data.

Legal compliance and user intent aren’t the same thing. Measurement systems can respect both, but doing so requires deliberate choices.

Designing for partial data

Missing signals are normal. Measurement systems that assume full visibility will continue to break under current conditions.

Redundancy helps: pixels paired with hardened delivery, offline imports paired with enhanced identifiers, and multiple incomplete signals instead of a single complete one.

But here’s where things get interesting. Different systems will see different things, and this creates a tension many advertisers now face daily.

Some clients tell us their CRM data points clearly in one direction, while Google Ads automation, operating on less complete inputs, nudges campaigns another way.

In most cases, neither system is wrong. They’re answering different questions with different data, on different timelines. Operating in a world of partial observability means accounting for that tension rather than trying to eliminate it.

Dig deeper: Auditing and optimizing Google Ads in an age of limited data

Making peace with partial observability

The shift toward privacy-first measurement changes how much of the user journey can be directly observed. That changes our jobs.

The goal is no longer perfect reconstruction of every click, but building measurement systems that remain useful when signals are missing, delayed, or inferred.

Different systems will continue to operate with different views of reality, and alignment comes from understanding those differences rather than trying to eliminate them.

In this environment, durable measurement depends less on recovering lost identifiers and more on thoughtful data design, redundancy, and human judgment.

Measurement is becoming more strategic than ever.


OpenAI starts testing ChatGPT ads

OpenAI confirmed today that it’s rolling out its first live test of ads in ChatGPT, showing sponsored messages directly inside the app for select users.

The details. The ads will appear in a clearly labeled section beneath the chat interface, not inside responses, keeping them visually separate from ChatGPT’s answers.

  • OpenAI will show ads to logged-in users on the free tier and its lower-cost Go subscription.
  • Advertisers won’t see user conversations or influence ChatGPT’s responses, even though ads will be tailored based on what OpenAI believes will be helpful to each user, the company said.

How ads are selected. During the test, OpenAI matches ads to conversation topics, past chats, and prior ad interactions.

  • For example: A user researching recipes might see ads for meal kits or grocery delivery. If multiple advertisers qualify, OpenAI shows the most relevant option first.

User controls. Users get granular controls over the experience. They can dismiss ads, view and delete separate ad history and interest data, and toggle personalization on or off.

  • Turning personalization off limits ads to the current chat.
  • Free users can also opt out of ads in exchange for fewer daily messages or upgrade to a paid plan.

Why we care. ChatGPT is one of the world’s largest consumer AI platforms. Even a limited ad rollout could mark a major shift in how conversational AI gets monetized — and how brands reach users.

Bottom line. OpenAI is officially moving into ads inside ChatGPT, testing how sponsored content can coexist with conversational AI at massive scale.

OpenAI’s announcement: Testing ads in ChatGPT (OpenAI)

B2B Social Media Marketing: Build a Winning Strategy

While most direct-to-consumer brands are maximizing their social media presence with polished content and paid ads, many business-to-business companies (B2Bs) are still stuck in broadcast mode. They treat social like a checkbox or, worse, avoid it altogether. That’s a miss.

Your buyers are on these platforms every day, scrolling LinkedIn between meetings, watching YouTube explainers, and even picking up insights on TikTok.

The good news is that most of your competitors aren’t doing this well. And B2B social follows different rules. It’s less about selling, more about showing up with value and building trust over time.

This guide breaks down the platforms, strategy, and mistakes to avoid so you can stop blending in and start building something that drives real results.

Key Takeaways

  • Most B2B brands underperform on social because they focus on broadcasting, not solving problems or creating value.
  • LinkedIn leads for B2B, but platforms like YouTube, X, and even TikTok can work if you match the content to your audience.
  • B2B social content should educate, not sell. Use it to build trust and stay relevant throughout long sales cycles.
  • Build a strategy around real personas, funnel stages, and platform-specific content—not random posting or vanity metrics.
  • Avoid common mistakes like generic messaging and chasing impressions over actions like clicks or demo signups.

Why B2B Social Media Is (Still) Underrated

Many B2B companies still treat social media as an afterthought. They post a few updates, maybe recycle some blog content, and call it a day. But here’s the truth: social media isn’t just about brand awareness anymore. It plays a fundamental role in demand gen, and even sales.

Your buyers are on these platforms every day. LinkedIn? Still essential. YouTube? Massive for education. Twitter (X)? Great for thought leadership. Even TikTok is becoming a serious B2B player in some niches.

If you’re only thinking top-of-funnel, you’re missing the bigger picture. Social gives you direct access to influence buying decisions, build relationships, and stay top of mind during long sales cycles. It’s also a powerful signal for search. That’s why smart B2B brands treat social like a core channel, right alongside their email, paid, and B2B SEO strategies.

So yes, B2B social still flies under the radar but that’s your opportunity. While your competitors play it safe, you can build a strategy that actually drives the pipeline.

Top B2B Social Platforms

Not all platforms are worth your time, but these are a good starting point. Here’s a breakdown of the top B2B social channels and how to use each one to actually move the needle.

LinkedIn

B2B marketers love LinkedIn, with 97% of them using it for their content marketing strategy.

There’s a reason for this: LinkedIn is effective at securing leads.

The social goal of most B2Bs isn’t just traffic. It’s the right kind of traffic. More specifically, it’s leads from that traffic. That’s why LinkedIn has been the social media sweet spot of most B2Bs.

Social platforms compared in a graphic.

LinkedIn does for B2Bs what Facebook, X, and Pinterest have all failed to do. It forms professional connections based on a single goal.

It’s not that Facebook, X, and the rest are inherently more personal and less professional; it’s that LinkedIn brands itself as a professional networking site. On LinkedIn, you see fewer baby pictures, fewer cat videos, and nothing about “Dave just checked in at Downtown Bar.”

LinkedIn, devoid as it is of issues like “relationship status” and “favorite TV shows,” is much more appealing to the world of B2B exchanges.

A HubSpot Linkedin Post.

X

X still punches above its weight for B2B if you use it right. It’s built for real-time conversations. That makes it great for PR moments and quick interactions with your audience or peers.

If you’re in tech or SaaS, this is where your buyers and early adopters are already talking. Threads and hot takes can build credibility fast, as long as you’re consistent and actually say something worth engaging with.

Just don’t expect conversions. X is a conversation starter, not a closer. Use it to build visibility, shape perception, and stay in the mix.

An Adobe X post.

YouTube

YouTube is a goldmine for B2B content that keeps working long after you hit publish. Think product demos, how-to explainers, or customer stories, anything that helps prospects see your value in action.

It’s perfect for long-form content with high evergreen potential. A solid video can rank in search, appear in recommended feeds, and continue to drive traffic for months (or even years). And because Google owns YouTube, it plays nice with your overall SEO strategy.

Use it to educate, build trust, and answer the questions your audience is already Googling. Just keep the production clean and the content useful.

TikTok + Instagram (Yes, Really)

These aren’t just playgrounds for influencers anymore. TikTok and Instagram can actually work for B2B if you play to their strengths. Short-form video is perfect for showing off your brand personality, simplifying complex ideas, or giving a behind-the-scenes look at your team.

They’re especially useful for building an audience that sees your brand as more than just a logo. Quick explainers and team moments go a long way here.

The key is to be intentional. You don’t need to chase every trend, but you do need to show up as a genuine person, not a corporate account.

A Zapier TikTok post.

A Shopify Instagram post.

How B2B Social Media Needs To Work Differently

Most B2B social strategies fall flat because they treat platforms like a digital brochure. Too much product pushing. Not enough problem-solving.

Your buyers don’t scroll through LinkedIn or YouTube looking for a sales pitch; they’re looking for answers. That’s your opportunity. When you lead with value, you earn attention. And in B2B, attention is the first step toward trust.

This isn’t about trying to “go viral.” It’s about consistently showing up with content that solves real problems. That might look like a short video explaining a common pain point or a post breaking down industry trends.

Educational content works because it positions you as a guide, not just a vendor. It says, “We get your world. Here’s how to navigate it better.” That’s way more powerful than just listing your features.

You also need to show up like a human. Buyers are smart. They can sniff out polished sales copy in seconds. What they actually want is an honest perspective, clear thinking, and content that feels like it came from someone who’s done the work. That’s how you build an audience that actually wants to hear from you, and buyers who remember your name when it’s time to act.

Building a B2B Social Media Strategy That Works

A solid B2B social strategy doesn’t mean posting constantly. It means making smarter posts. Here’s how to build a plan that actually drives results across the funnel.

Know Your Customer Profiles

Before planning content, you need to be clear about who you’re actually talking to. Who’s following you now, and who do you want to attract?

An ideal customer profile.

Source: The Smarketers

B2B audiences aren’t one-size-fits-all. A CMO wants high-level insights and strategic trends. A sales manager cares more about tactics and results. Founders might look for big-picture thinking or lessons from the trenches. If you post the same content to all of them, you’ll miss the mark every time.

Start by segmenting your audience. Review your analytics and consult with your sales team, then map out which personas matter most for your business and what they care about.

Also, know where they hang out. Your audience might be active on LinkedIn and totally absent on Instagram. Or maybe they’re watching explainers on YouTube but ignoring X. Match your platform and content format to what your ideal customer actually uses and engages with. That’s how you create content that lands.

Set The Right Goals/KPIs

If you don’t know what you’re aiming for, it’s easy to waste time chasing the wrong metrics. Start by defining what success actually looks like for your brand.

Is your focus on awareness? Then you’re tracking reach, impressions, and follower growth. Want to drive engagement? Look at comments, shares, and saves, not just likes. If lead gen is the goal, prioritize CTRs or traffic to high-intent landing pages.

You might also be building community or educating users on your product. In those cases, qualitative feedback can be a stronger signal than raw numbers.

The key is to tie your content back to goals that matter for the business—and track the right KPIs for each. Don’t get distracted by vanity metrics that look good but don’t move the needle. Set benchmarks, track consistently, and optimize based on what’s actually working.

Build A Content Marketing Calendar

An effective content calendar maps content to each stage of the funnel, so you’re guiding prospects from awareness to action and making the most of your b2b content strategy.

At the top of the funnel (TOFU), focus on educational content. Think industry stats and quick tips that stop the scroll and add value fast. For the middle (MOFU), shift to case studies and testimonials that build trust and show proof. Bottom-of-funnel (BOFU) content should drive action—think offers and clear calls to action (CTAs).

A Linkedin Post from Neil Patel with a graphic.

A well-planned calendar also helps you stay consistent without burning out your team. You can batch content and avoid that last-minute “what do we post today?” panic.

Turn Employees and Executives Into Advocates

People trust people, not brands. That’s why employee advocacy is one of the most powerful (and underused) tools in B2B social.

When your team shares content, adds their take, or shows up in the comments, it expands your reach and adds credibility. Their networks are often full of the exact decision-makers you’re trying to reach. And posts from real people perform better than anything coming from a company page.

The same goes for your leadership team. Help your CEO or founder post in their own voice, not just polished PR copy. A short LinkedIn post sharing a real insight or lesson learned often lands better than a glossy video.

A Linkedin Post from an NP Digital employee.

Make it easy for your team to participate. Share post templates, content ideas, or just ask them to weigh in on relevant threads. The goal isn’t to turn everyone into a creator; it’s to activate your people as trusted voices for your brand. The image above shows what this looks like in practice.

Measure, Learn, Optimize

If you’re not measuring, you’re just guessing. The best B2B social strategies are built on real data, not hunches.

Start with the basics: engagement rate, impressions, and click-throughs. Track how often people interact with your content and where they go next. Are they hitting your demo page? Signing up for a webinar? Those are signals your content is working.

Use tools like GA4 and each social platform’s native analytics to connect the dots. Don’t just track what performs best. Look at why. Was it the topic? The format? The tone?

Speaking of format, test everything. Short videos. Carousels. Polls. First-person posts. What works on LinkedIn might fall flat on X. What drives DMs might not drive clicks. The only way to know is to try.

Then optimize. Double down on what works. Cut what doesn’t. Keep tweaking until your content not only earns attention but drives action.

Additional Strategies For B2B Social Media

Once your core strategy’s in place, these advanced plays can help you scale faster, get more mileage from your content, and squeeze more value out of every post.

Figure Out a Non-boring Angle

A lot of B2Bs feel like they’re boring, and this perception of being a boring company becomes a self-fulfilling prophecy. Because they think they are boring, they write boring articles and make boring social media posts.

Let’s look at a company that sells project management software. On the surface, nothing is exciting about that product or industry, but when you start to look at how the product can help your customer, things become unboring very quickly.

A new project management platform can include cool features for collaboration. It could also increase productivity or help business teams achieve goals that previously seemed out of reach.

Your job is to “sell the sizzle.” Put yourself in your customer’s shoes and brainstorm the solutions your product or service can provide their business that will get them excited!

Each B2B with an unintelligible product or service needs to develop an angle that is both understandable and appealing to a broader audience. This will allow them to create an initiative or idea that can gain traction on social media.

You can find an unboring angle. Once you do, you’re ready to roll forward with your social media efforts.

Feature A Human Aspect

One of the major shortcomings of many B2Bs is the lack of a genuine human backing their efforts.

The lack of real people makes the B2B company seem so distant and unreal. It’s like talking to a robot. It just doesn’t feel right.

Every B2B needs to make an intense effort to humanize their brand tone and voice on social media and content marketing. Here’s what this looks like in practice:

  • Using first-person voice when writing updates and articles
  • Using a brand front person to tweet, post updates, and write articles
  • Using real people with their names in customer service
  • Initiating engagement and outreach from a real person
A Hubspot Linkedin Post.

Hire The Right Person

B2Bs are often challenged in social media because they don’t hire the right person to manage their social media efforts.

Here are a few tips to help a B2B hire the right person for social media:

  • Hire an expert in social media. Look for someone who has social media success in a similar niche, but not necessarily in your own niche.
  • Hire a social media consulting company or agency, not just an individual. Companies often have more resources at their disposal. For a lower price, they can help you engage on multiple levels, such as creating social media graphics and writing content.

Anyone leading a social media initiative must have familiarity with the industry. But B2Bs also need someone who is a social media ninja. Why? Because B2B social media is a hard nut to crack. It’s not inherently sexy or awesome. It doesn’t automatically generate buzz. It takes a social media expert to really unleash the hidden power in B2B social media.

Brands need someone who can develop a social media movement, shaping the brand’s voice and expanding its reach. It’s not just status updates. It’s an entire identity creation.

If the first objective of social media is leads, then things have gotten off on the wrong foot. Leads don’t come first. Engagement and presence come first. Leads are a byproduct. This goes back to the “unboring angle” I mentioned above.

Back Your Social Media With Content Marketing

There is no such thing as a successful social media campaign without a successful content marketing campaign. They’re like two links in an indestructible chain.

Fortunately, about half of B2B companies understand the importance of content marketing, according to Statista. They realize it’s essential for customers to trust their brand, and they know how far content marketing can go in solidifying that trust. 

I’m convinced that the better a B2B company is at content marketing, the more effective they will be at social media.

This article is not the place to discuss the ins and outs of B2B content marketing. Instead, I’ll point out that the company should find the most engaging form of content and share it on social media.

Common B2B Social Mistakes

Most B2B social feeds feel like a wall of noise. Why? Because too many brands treat social like a megaphone instead of a conversation. Here are some of the biggest mistakes I see with B2B social accounts:

  • Constantly pushing products and making salesy updates, treating your account like a billboard. If your posts aren’t solving a problem or offering insight, don’t expect engagement.
  • Posting just to stay “active.” If your content calendar is driven by days of the week instead of strategy, your audience will feel it. Every post should aim to educate, engage, or move someone closer to buying.
  • The platform dilemma. What works on LinkedIn won’t work on TikTok. You need to adapt your message, tone, and format based on where you’re showing up and who you’re trying to reach.
  • Tracking the wrong metrics. Chasing impressions or vanity metrics won’t tell you what’s driving value. Prioritize metrics like click-throughs and demo page visits—things that tie back to real business outcomes.

Avoid these traps, and you’ll be in much better shape than most of your competition.

FAQs

Which social media platform is best for B2B marketing?

LinkedIn is the go-to platform for most B2B brands. It’s built for professional networking and decision-maker engagement, making it ideal for thought leadership and brand awareness. But depending on your audience, YouTube, X (Twitter), and even TikTok can play a role too.

How to use social media for B2B marketing?

Start by sharing content that solves real problems—think educational posts, customer stories, and product demos. Focus on building trust and staying visible across the buyer journey, not just selling. Then measure what works and keep improving.

What is B2B social media marketing?

B2B social media marketing uses platforms like LinkedIn, YouTube, and X to connect with business buyers. It’s about building relationships and sharing valuable insights as you guide potential customers through the sales funnel.

Conclusion

In the next few years, I predict that we’ll see more and more B2B marketers focus their time and energy on their social media skills. Already, there are a few bright spots on the B2B social horizon. 

Using these tips is a great way to optimize your cross-channel marketing efforts. Becoming a platform ninja who understands social media trends, and can incorporate them into the B2B marketing sales funnel, is the clear path forward for today’s marketers. 


Digital Marketing Trends & Predictions 2026

If 2025 taught us anything, it’s that AI is no longer just a side tool. It’s the engine running campaigns and reshaping how people discover brands.  

At the same time, platforms have declared war on the “click.” We’re seeing an aggressive push for native conversions, where the goal isn’t to drive traffic to the website but to close the deal right in the feed. 

That shift toward “frictionless” experiences, combined with the saturation of AI-generated noise, has forced another major change. Content with deep educational value is starting to outperform the high-volume, “101-level” content that simply fills space. 

As we get deeper into the new year, those shifts are accelerating. 

The top digital marketing trends for 2026 reflect this reality: Automation handles execution, while human elements like strategy and storytelling set the winners apart.  

If you want to stay relevant, abandon the old metrics of “rankings” and “reach.” They no longer guarantee relevance. Here’s what’s actually moving the needle in 2026 (and how the best digital marketers are keeping up). 

Key Takeaways

  • With the rise of agentic AI, machines can now handle full campaign lifecycles, but human oversight is essential. 
  • User discovery spans platforms like TikTok, Reddit, YouTube, and Meta. Each one requires unique formats, signals, and intent-based optimization. 
  • Funnels are no longer static. AI personalizes journeys in real time based on user behavior, replacing manual segmentation and drip campaigns. 
  • Chat assistants recommend brands based on trust and content relevance. Consistency and large language model optimization (LLMO) are key to inclusion. 
  • Google’s traditional and AI systems (PMax, AI Overviews, Demand Gen, and Search) now operate as one. Aligning creative and goals across all touchpoints boosts results. 

AI Agents Take Over Execution

We’re already seeing AI streamline much of a marketing team’s content production. But the new flex is agentic AI. We’re talking about autonomous “team members” that can now handle your entire campaign workflow.  

According to PwC, nearly 80 percent of organizations have already adopted AI agents to some degree. And most plan to expand use as these systems move from experimentation into day-to-day operations. 

 AI agent adoption levels across organizations, with most reporting broad or limited adoption. 

This goes far beyond production and publishing. Large language models (LLMs) have advanced to the point that they can manage the full lifecycle. We’re talking about agents embedded into tools that can help: 

  • Manage your customer relationship management (CRM) data 
  • Analyze data performance 
  • Provide campaign insights 
  • Adjust ad bids for paid campaigns in real time 

This year, AI is going from writing your content to autonomous operations. It handles the execution while you focus on strategy and oversight. 

Search Everywhere Optimization Becomes Mandatory

For the last few years, “search everywhere” has been a catchy conference buzzword. In 2026, it’s a baseline for survival. 

The era of the “Google-default” mindset is over. Discovery now happens across platforms, feeds, and AI systems. Today’s SEO is drifting toward search everywhere optimization and away from pure search engine optimization. 

Your audience isn’t just “Googling it” anymore. They’re asking questions and validating purchases on the platforms they trust most. And each has its own algorithm, formats, and user behavior.  

For example: 

  • TikTok viewers want quick, visual tips.  
  • Reddit users want deep, authentic discussion.  
  • Pinterest users want eye-catching visuals with keyword-rich descriptions.  
  • YouTube viewers demand longer, high-value content with tight intros and strong engagement. 

The most disruptive shift, however, is happening outside traditional feeds. Voice assistants like Alexa and Siri, and generative chat tools like ChatGPT, Gemini, and Claude, are increasingly acting as answer engines.  

The numbers show where we’re headed. Nearly 1 in 5 people use voice search, and Statista predicts 36 percent of the global population will be searching via AI by 2028.  

Example of an AI chat assistant returning a summarized product recommendation list, showing how search increasingly happens inside answer engines.

Prompt-Driven Campaigns and Product Development

Digital marketers no longer need full engineering cycles to test new ideas.  

Prompt-driven tools now make it possible to prototype calculators, quizzes, internal tools, and campaign utilities in hours instead of weeks. 

Tools like Cursor and Replit let marketers translate plain-language instructions into working interfaces, lowering the barrier to experimentation. You still need engineering for production-scale products, but prompts now handle much of the early build and validation work. 

Base44 is another example of a “vibe coding” platform that can turn your detailed descriptions into functional tools, reinforcing the same idea: Prompts are becoming a new control layer.  

Everyone’s an engineer now. Look out, Silicon Valley!  

The game has changed. You can now test fast, learn faster, and skip the bottlenecks that used to slow everything down. 

Funnels Become Dynamic and Self-Optimizing

Static funnels are out. In 2026, customer journeys are becoming shorter and increasingly influenced in real time by AI systems. 

It may seem shocking at first, but it makes sense when you zoom out and think about it. We are no longer pushing users through a pre-set funnel. We’re letting AI agents build the funnel around the user in real time. 

In the early days of Google (and online shopping), a customer would have to visit several sites to research and read reviews—and, eventually, make a purchase. This is the classic marketing funnel we’re all familiar with. There’s a clearly defined top-of-funnel, mid-funnel, and bottom-of-funnel. 

With generative AI tools now offering in-platform purchases, that funnel shrinks significantly. Your typical user can now research, build trust, and make a purchase all within an LLM like ChatGPT.  

We’ve even begun to see major retailers like Walmart and Amazon move toward this model.  

Walmart Sparky can answer user queries and pull in product recommendations for deeper questions. It even leads you to checkout when you’re ready to purchase.  

Walmart interface showing its AI shopping assistant answering product questions, comparing options, summarizing reviews, and guiding users toward checkout within a single on-platform experience. 

(Image Source) 

The same setup applies to Amazon Rufus, enabling customers to get details, get suggestions, get help, and get inspiration (and ultimately get stuff) all within one platform.  

Amazon’s Rufus AI assistant helping users research products, get recommendations, and shop without leaving Amazon

(Image Source) 

The result is higher engagement and faster conversions with far less manual work. These tools provide a hyper-personalized shopping experience faster than ever before. Platforms like Shopify and Etsy have also partnered with ChatGPT so shoppers can purchase products directly in the LLM. 

AI Attribution Connects Content to Revenue

Attribution isn’t new, but it’s getting more accurate. AI-powered attribution now connects every touchpoint—from the first video view to the final click—with real revenue outcomes. 

Platforms like Wicked Reports let marketers tie initial ad clicks to lifetime purchases, with “first click” and “time decay” models that pinpoint the most successful starting points in customers’ buying journeys. The platform also provides revenue forecasting to help B2C and e-commerce businesses reliably predict and scale their growth. 

Marketing analytics dashboard showing AI-driven measurement, signal correction, and performance insights used to connect campaigns to real revenue outcomes. 

(Image Source) 

Your latest blog post may not have converted immediately, but it made the visitor trust you enough to subscribe for email updates. That email is the next stop in their journey, pushing them to check out your pricing page. AI sees it all and assigns value accordingly. 
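
The “time decay” idea is easy to sketch: touchpoints closer to the conversion earn exponentially more credit. Here’s a minimal illustration of that logic — not Wicked Reports’ actual model; the journey, half-life, and revenue figures are all hypothetical.

```python
from datetime import datetime

HALF_LIFE_DAYS = 7.0  # assumption: a touchpoint's credit halves every 7 days before conversion

def time_decay_credit(touchpoints, conversion_time, revenue):
    """Split revenue across touchpoints, weighting recent ones more heavily."""
    weights = []
    for name, ts in touchpoints:
        age_days = (conversion_time - ts).total_seconds() / 86400
        weights.append((name, 0.5 ** (age_days / HALF_LIFE_DAYS)))
    total = sum(w for _, w in weights)
    # Normalize so the credited amounts sum to the full conversion value.
    return {name: revenue * w / total for name, w in weights}

# Hypothetical journey: blog post → email signup → pricing page → purchase.
journey = [
    ("blog_post", datetime(2026, 1, 1)),
    ("email_signup", datetime(2026, 1, 5)),
    ("pricing_page", datetime(2026, 1, 8)),
]
print(time_decay_credit(journey, datetime(2026, 1, 9), revenue=100.0))
```

The pricing-page visit, being closest to the purchase, receives the largest slice of credit, while the blog post still gets partial recognition for starting the journey.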

With these new insights, you finally know which content moves the needle.  

And it’s having a real financial impact. Teams using AI-driven marketing analytics report return on investment (ROI) improvements of roughly 300 percent and customer acquisition costs dropping by more than 30 percent. 

Chat Assistants Reshape Discovery

We mentioned earlier how search has evolved into people asking AI chat tools like ChatGPT, Gemini, and Perplexity their product questions. These platforms now build brand recommendations right into the response, along with the ability to shop for Shopify and Etsy products. 

This is the same dynamic powering tools like Walmart Sparky and Amazon Rufus, where research and recommendations happen within a single AI experience.  

These assistants don’t list 10 “sponsored” links, a la Google. They summarize what they trust. If they don’t mention your brand, you’re invisible in this new layer of discovery. 

AI answer engine Perplexity showing summarized recommendations for ‘best email marketing tools for SaaS,’ with brands cited directly in the response instead of traditional search links. 

It takes more than gaming keywords to show up on these platforms. It’s all about relevance and consistency.  

The more helpful, high-quality content you create around a topic, the more citations you’ll receive from users sharing it across the internet. Signals like structured content, schema markup, and consistent third-party validation help AI systems interpret your authority and decide when your brand is worth referencing. 

This shift has given rise to large language model optimization (LLMO), a new branch of SEO focused on training AI to recognize and recommend your brand. If you’re not already thinking about LLMO, it’s time to get caught up. 

The big takeaway here is that usefulness matters more than volume as discovery moves into AI systems. Provide enough high-quality answers to your audience’s questions, and the bots will start to bring your name up first. 
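
As a concrete illustration of the schema markup signal mentioned above, here’s a minimal JSON-LD Article object. The brand, author, date, and URL are hypothetical placeholders, and real pages often carry richer properties.

```python
import json

# A minimal JSON-LD Article sketch; all values below are hypothetical.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How to Choose Email Marketing Tools for SaaS",
    "author": {"@type": "Person", "name": "Jane Example"},
    "publisher": {"@type": "Organization", "name": "Example Brand"},
    "datePublished": "2026-01-15",
    "mainEntityOfPage": "https://example.com/blog/email-marketing-tools",
}

# The output is what you'd embed in the page head inside a
# <script type="application/ld+json"> tag.
print(json.dumps(article_schema, indent=2))
```

Markup like this doesn’t replace quality content; it just makes your expertise machine-readable, so AI systems can attribute the content to a named author and brand.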

Content Structure Becomes Even More Important

Old-school SEO was all about keywords. In 2026, performance increasingly comes from covering topics in depth and structuring content so both people and machines can understand it. 

As we mentioned in the last section, search engines and AI assistants care more about how well you answer a question than how many times you use a keyword. That means your content needs to be thorough and easy to interpret at a glance, no matter who (or what) is doing the glancing. 

NerdWallet does this well by organizing credit card content into a clear hub, then breaking it into tightly related subtopics that cover a ton of topical ground. It’s no longer a game of relying on individual keyword pages. Notably, NerdWallet is one of the most frequently cited websites in LLMs. 

NerdWallet credit cards hub showing a structured topic cluster with subcategories like travel, cash back, balance transfer, and student cards organized under a single pillar. 

So, switch your strategy mindset from pages to topic clusters. Cover a topic from every angle across multiple assets. Use headers, FAQs, schema markup, and internal links to connect the dots.  

The better you structure your content, the easier it is for AI to find and promote it. 

Your target audience is searching across multiple channels in today’s environment. Focusing on individual keywords leaves a lot of opportunity on the table.  

Today’s rising search platforms, like social media apps and LLMs, revolve around semantic queries. 

People talk to these tools naturally and conversationally (some even use ChatGPT’s voice functionality). This means you can’t home in on a single keyword. A keyword cluster that covers the most popular phrasings customers may use is a much better way to make sure you’re covering what people are asking, increasing your probability of being found.  

This query within Perplexity demonstrates how people interact with search tools. They’re not always typing keywords. They’re asking full, conversational questions and expecting a clear answer. 

AI answer engine responding to a conversational question, ‘Which is better for a headache, Tylenol or ibuprofen?,’ with a summarized comparison pulled from multiple medical sources. 

You also have to consider that many users never click through to your site. Zero-click searches are growing fast, which means your content needs to deliver value right in the SERP—or immediately on platforms like social, LLMs, and voice. 

If you’re still chasing individual keywords, you’re missing the bigger opportunity: becoming the trusted source on your topic. 

Brand Trust Is Measured in Citations and Sentiment

AI doesn’t care how loud you are. It cares how often others talk about you, and what they say when they do. 

Large language models prioritize brands with consistent, credible citations across the web. That includes mentions in blog posts, news articles, podcasts, reviews, and Reddit threads. The more quality signals you earn, the more likely AI is to recommend you.  

But the mentions are just the beginning. Your performance in 2026 really boils down to your audience’s perception of you. Sentiment analysis now plays a big role in ranking. Positive discussions boost your chances of surfacing in AI results, while negativity can drag you down. 

Until recently, this layer of discovery was almost impossible to measure. Traditional analytics don’t show when your brand is cited inside AI-generated answers. But a new class of AI visibility tools now tracks where and how often brands appear across platforms like ChatGPT, Perplexity, Claude, and Google’s AI Overviews, along with the surrounding context. So which brands are succeeding with this strategy? 

Brands like Patagonia and TOMS are shining examples of this. These companies leverage philanthropy to increase their goodwill and, in turn, their customers’ positive sentiment toward them.  

Leveraged the right way, elements like philanthropy turn these brands’ audiences from customers into loyal supporters. 

Patagonia webpage outlining causes the company funds and does not fund, illustrating clear brand values and consistent public positioning. 

This shift rewards brands that build goodwill rather than just backlinks. If your strategy still centers on shouting the loudest, you’ll get buried by brands that are being talked about, and for the right reasons. 

A ChatGPT result talking about TOMS philanthropy efforts.

Trust is now your most important ranking factor. Earn it or fade out. 

Blogs Influence AI Models, Not Just Traffic

If you think blogs don’t “work” like they used to, you’re missing the bigger picture. They still do heavy lifting behind the scenes to shape AI output and position your brand as a go-to source. 

In modern search, everything you publish helps shape how AI models understand your brand. When you consistently cover a topic with depth and clarity, models start to associate your name with that subject.  

This new reality turns your blogs from content assets into signals of authority. 

Even if search traffic dips due to zero-click results or AI summaries, the long-term payoff is still there. The more high-quality content you create, the more likely your brand is to be cited by the higher-profile AI channels and included in trusted content roundups. 

Social Platforms Function as Search Engines 

As the search everywhere trend shows us, search behavior is spreading. And, according to Statista, nearly a quarter of U.S. adults treat social media as their starting point for search. 

People are searching TikTok to see how something works or whether a restaurant’s worth trying.  

TikTok search results for ‘best places to eat in Las Vegas,’ showing short videos answering a local restaurant query instead of traditional search links. 

They’re using YouTube to learn how to install software or compare skincare brands. Considering that this is the largest search engine after Google, it’s a great platform to focus efforts on. 

This matters because social search runs on a different logic than traditional SEO or AI answer engines. These platforms reward relevance through engagement. 

Each platform has its own discovery logic. TikTok rewards watch time and velocity. YouTube favors relevance and retention. Instagram leans on recency and interaction. 

Without optimizing for these platforms, you’re missing a huge part of the search pie. You should be treating social platforms like search engines, because your audience already does. 

This is where more traditional on-page SEO comes into play. That means digging into the types of questions your audience is asking and focusing on tried-and-true tactics like using clear, searchable titles and engaging hooks to “stop the scroll” and get your viewers’ attention in the first three seconds. 

Content Quality Outperforms Quantity Across Channels

Publishing more content won’t save you in 2026. 

Social platforms are flooded, and search is competitive. On top of that, AI is getting better every day at filtering out thin, repetitive, or regurgitated content.  

Consequently, original insights and pieces that actually teach something are rising to the top. 

We see this in emerging trends. For starters, the average number of posts per day among brands has decreased to 9.5. Engagement is moving in the opposite direction, with inbound interactions increasing by roughly 20 percent year over year.  

Instead of posting five times a day, focus on publishing things worth reading and sharing, even if it’s only one well-structured piece of content per week.  

A thoughtful video or long-form LinkedIn breakdown that sparks conversation will do much better than 100 AI-generated blog posts that barely scratch the surface of a topic. 

Take National Geographic, for example. Rather than posting constantly, it focuses on educational storytelling. Check out its TikTok grid. 

National Geographic’s TikTok profile showcasing educational, documentary-style videos that prioritize learning and storytelling over high-volume posting. 

Content creators are experiencing the benefits of this strategy in real time.  

A recent survey finds that 35 percent of creators say they’re seeing higher potential ROI from longer-form content formats, with 39 percent reporting better engagement. And almost half (49 percent) say that producing longer-form content is helping them reach a wider audience.  

If your strategy is still built around churning out content to “stay active,” it’s time to shift. Fewer pieces. Bigger impact. Better outcomes. 

That’s what wins in 2026. 

Conversion Happens On-Platform, Not On-Site 

The platforms people use every day are getting very good at keeping them there.  

Think about it: Nearly every social platform has lead forms and lets you shop inside the app. The goal of these features is to help you convert without ever leaving their platform. 

Instagram and TikTok, for example, have fully integrated shopping experiences. And it’s working. Sales through social media channels are forecast to make up nearly 21 percent of e-commerce in 2026. 

Google is even testing AI-generated product recommendations with built-in checkout links, much like the Etsy and Shopify integrations inside ChatGPT. The whole point is to remove friction and keep the experience seamless. 

That shift changes what a “landing page” even means. In many cases, it’s a native form, a product card, or an in-app checkout flow that closes the deal on the spot. 

Your website still matters, but forcing every conversion to happen there can introduce unnecessary drop-off. When users are ready to act, the simplest path usually wins. 

This shift is giving rise to what some teams now call checkout optimization, and it’s getting some pretty serious results. E-commerce brands with 1,000 to 2,000 orders per month are implementing checkout optimization and seeing measurable gains in shipping revenue and order total.  

Comparison of e-commerce checkout flows before and after optimization, showing fewer steps, clearer shipping options, and reduced friction at checkout. 

(Image Source) 

When you meet users where they are, you lower the barrier to action. No load times. No messy redirects. Just a quick tap or swipe to buy, book, or sign up. 

Video Becomes a Primary Search and AI Input 

Video is now more than just a distribution format. It’s a primary way people search—and a growing input for AI systems. 

Search engines and AI platforms now index video much like they do written content, pulling from structural signals to generate results. If those signals aren’t there, the video might as well not exist. 

ChatGPT interface responding to the prompt ‘Hit me with some funny cat videos’ by embedding a YouTube video thumbnail of a cat sitting in a plastic container in water. 

What do those signals look like in practice? 

Well, because search engines and AI platforms can’t watch your videos, they instead rely on clean transcripts, keyword-rich titles and descriptions, and clear segmentation. Think chapters, not rambles. Structure is what makes video searchable. 
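
The chapter-based segmentation described above can be sketched as a simple description generator. The chapter titles and timings here are hypothetical; the formatting follows the M:SS / H:MM:SS timestamp convention YouTube chapter lists use.

```python
def format_timestamp(seconds: int) -> str:
    """Render seconds as M:SS or H:MM:SS, the format chapter lists use."""
    hours, rem = divmod(seconds, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{hours}:{minutes:02d}:{secs:02d}" if hours else f"{minutes}:{secs:02d}"

def build_chapter_description(chapters: list[tuple[int, str]]) -> str:
    # YouTube expects the first chapter to start at 0:00.
    assert chapters and chapters[0][0] == 0, "first chapter must start at 0:00"
    return "\n".join(f"{format_timestamp(start)} {title}" for start, title in chapters)

# Hypothetical segmentation for a tutorial video.
chapters = [
    (0, "Intro"),
    (45, "Why structure matters"),
    (130, "Adding transcripts and chapters"),
    (310, "Optimizing titles and descriptions"),
]
print(build_chapter_description(chapters))
```

Pasting a timestamp list like this into a video description gives both viewers and machine systems a clean map of what each segment covers.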

This video from Neil Patel uses chapters, summaries, and clear topic segmentation, making it easier for search engines and AI systems to interpret and reference specific sections. 

The more structured and searchable your video content, the more likely it is to be cited by AI assistants. 

Text still matters. But if video isn’t part of your SEO and discovery strategy, you’re leaving serious visibility on the table. 

Paid Media Shifts to AI-Led Campaigns

We’ve seen AI-driven paid media campaigns for some time now, but platforms like Google’s Performance Max and Meta’s Advantage+ are refining how it’s done. These platforms automatically test creative and placements to hit performance goals, and they’re increasingly handling segmentation and bidding with AI as well. 

The result is less manual control and more system-led optimization, which is a benefit for many marketers. Retail marketers, for example, have seen a 10 percent to 25 percent lift in their return on ad spend (ROAS) by implementing AI-powered campaign elements.  

But “hands-off” doesn’t mean “set it and forget it.” 

In this model, your role shifts from managing campaigns to training the system. The better your inputs—creative variety, first-party data, and clear conversion signals—the better your results.  

Lazy targeting and generic ads just get ignored. 

Want to lower customer acquisition cost (CAC) or increase ROAS? Focus on refining your creative and uploading strong first-party data. AI will handle testing and optimization, but it can’t fix bad inputs. 

Savvy marketers are shifting their roles from campaign operators to strategy leads. They’re spending less time on dashboards and more time building assets that actually convert, such as a robust content library or unique, impactful insights from proprietary data. 

It all comes down to this: AI runs the ads, but you train it. If you’re not giving the algorithm something great to work with, you’re not going to like what it gives back. 

FAQs

What are the digital marketing trends for 2026?

In 2026, AI is running full campaigns, dynamic funnels are replacing traditional static ones, and users are increasingly discovering brands across platforms. Chat assistants like ChatGPT now also recommend brands, and SEO is more about structured topics than keywords. Quality content outperforms quantity, and conversion often happens off your site. 

How can businesses stay updated on marketing trends?

Follow trusted industry blogs (like NeilPatel.com), subscribe to marketing newsletters, and keep an eye on platform updates from the big players (Google, Meta, and TikTok). Tools like Ubersuggest can also help spot shifts in search behavior. But more than anything, continue testing and tracking, and stay close to what your audience responds to. 

Conclusion

Many experts say that marketing is changing, but the fact is that it’s already changed.  

AI now drives the full spectrum of content marketing. Platforms prioritize native conversion. Content shapes how machines and people see your brand. If you’re still playing by old rules—keyword-centric strategy, manual funnels, or high-volume posting—you’re going to get left behind. 

Winning in 2026 means adapting quickly to emerging digital marketing trends by thinking strategically and building trust across every touchpoint. 

If you’re not sure where to start, check out my guide on search engine trends to see how modern discovery actually works today. 

The marketers who move first always get the advantage. So, make your move. 


January 2026 Digital Marketing Roundup: What Changed and What You Should Do About It

January didn’t bring flashy product launches. It brought something more valuable: clarity.

Platforms spent the month explaining how their systems actually work. Google detailed JavaScript indexing rules that matter for modern sites. Reddit opened up automation insights most platforms keep hidden. Amazon positioned itself as a legitimate cross-screen player with first-party data advantages traditional TV can’t match.

Automation kept expanding, but with firmer guardrails. AI continued to compress discovery. Zero-click experiences grew. Brands without clear expertise signals or off-site authority started disappearing from AI-generated answers.

For digital marketers, January reinforced one reality: performance in 2026 depends less on clever tactics and more on getting fundamentals right across channels.

Key Takeaways

  • Indexing logic must live in base HTML, not JavaScript. Google may skip rendering pages with noindex directives in initial HTML, leaving valuable content invisible even if JavaScript removes the tag later.
  • Performance Max channel reporting is now essential, not optional. Budget pressure is currently your sharpest lever for managing underperforming surfaces like Display or Discover.
  • Share of search is becoming a better demand signal than traffic alone. As AI reduces click-through rates, measuring how often people search for your brand versus competitors reveals momentum better than vanishing clicks.
  • Digital PR now directly impacts AI visibility. Authoritative mentions and credible coverage determine whether AI systems recognize and recommend your brand in zero-click answers.
  • Influencer marketing reached enterprise maturity in January. Unilever’s 20x creator expansion and 50% social budget shift prove influence at scale is baseline strategy, not experimentation.
  • Review monitoring must track losses, not just gains. Google’s AI is deleting legitimate reviews without notice, affecting rankings and trust faster than new reviews can rebuild them.

Search, SEO, and Indexing Reality Checks

Search teams started 2026 with clearer rules, not more flexibility. Google spent January confirming how it treats indexing signals on JavaScript-heavy sites.

Google Clarifies Noindex and JavaScript Behavior

Google confirmed that pages with a noindex directive in their initial HTML may not get rendered at all. Any JavaScript meant to remove or modify that directive might never execute.

Indexing intent belongs in base HTML. JavaScript should enhance experiences, not define crawl behavior. For headless stacks and dynamic frameworks, search engines respond to what they see first, not what you hope they’ll see after rendering.

If your site uses React, Next.js, Angular, or Vue with client-side rendering, audit how noindex tags are implemented. Server-side rendering or static generation solves most of these issues.
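
To see why this matters, here’s a minimal sketch of the audit described above, using only Python’s standard library: checking whether a noindex directive already exists in a page’s initial, pre-rendering HTML. The sample markup is hypothetical, and a real audit would run this against the raw server response for each template.

```python
from html.parser import HTMLParser

class NoindexFinder(HTMLParser):
    """Scans raw (pre-rendering) HTML for a robots noindex directive."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        name = (a.get("name") or "").lower()
        content = (a.get("content") or "").lower()
        if name == "robots" and "noindex" in content:
            self.noindex = True

def has_noindex_in_base_html(raw_html: str) -> bool:
    # If this returns True, Google may skip rendering entirely, so JavaScript
    # that later removes the tag never gets a chance to run.
    finder = NoindexFinder()
    finder.feed(raw_html)
    return finder.noindex

page = '<html><head><meta name="robots" content="noindex, nofollow"></head><body></body></html>'
print(has_noindex_in_base_html(page))  # True: this page may never be rendered or indexed
```

Running a check like this across your key templates surfaces pages whose indexing intent depends on client-side JavaScript before those pages quietly drop out of the index.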

Google Clarifies JavaScript Canonical Rules

Google detailed how canonical tags work on JavaScript-driven pages. Canonicals can be evaluated twice: once in raw HTML and again after rendering. Conflicts between the two create real indexing problems.

Server-rendered HTML pointing to one canonical while client-side JavaScript points to another forces Google to pick. That choice often hurts rankings quietly, without throwing obvious errors in Search Console.

Teams need to decide where canonicals live and enforce consistency. One canonical after rendering. No ambiguity between server and client.
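
A quick way to catch the server/client mismatch described above is to extract the canonical from both versions of the markup and compare them. This is an illustrative sketch with hypothetical URLs, not a production crawler; in practice you’d feed it the raw server response and the rendered DOM serialization.

```python
from html.parser import HTMLParser
from typing import Optional

class CanonicalFinder(HTMLParser):
    """Pulls the rel=canonical href out of an HTML document."""
    def __init__(self):
        super().__init__()
        self.canonical: Optional[str] = None

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        a = dict(attrs)
        if (a.get("rel") or "").lower() == "canonical":
            self.canonical = a.get("href")

def get_canonical(html: str) -> Optional[str]:
    finder = CanonicalFinder()
    finder.feed(html)
    return finder.canonical

def canonical_conflict(server_html: str, rendered_html: str) -> bool:
    # A mismatch here forces Google to pick one canonical on its own.
    return get_canonical(server_html) != get_canonical(rendered_html)

server = '<link rel="canonical" href="https://example.com/a">'
rendered = '<link rel="canonical" href="https://example.com/b">'
print(canonical_conflict(server, rendered))  # True: server and client disagree
```

If the two versions ever diverge, that’s the silent indexing problem the section describes — no Search Console error, just Google choosing a canonical for you.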

December Core Algorithm Update Wraps

Google’s December 2025 core update finished after roughly 18 days of volatility. Sites with stale content, weak expertise signals, or unclear intent lost ground. Others gained visibility by being more useful and better aligned with user needs.

Core updates no longer feel disruptive because they’re frequent. Three broad core updates rolled out in 2025 alone. The advantage now comes from consistent execution, not post-update recovery tactics.

Paid Search, Automation, and Audience Control

Paid media keeps moving toward automation. January showed where control still exists and where it doesn’t.

Using Google’s PMax Channel Report More Strategically

The Performance Max Channel Performance Report keeps evolving. You can now see performance broken down across Search, YouTube, Display, Discover, Gmail, and Maps.

The PMax Channel Performance Report.

You still can’t control bids or exclusions at a granular level. What you can control is budget pressure. One surface consistently underperforming? Budget becomes your corrective lever. Pull back overall spend and PMax reallocates to better-performing channels automatically.

Teams that review this report monthly make better creative and investment decisions. Track this data over time. Patterns emerge. You start understanding which channels deliver at which funnel stages, even inside automation.

Google Drops Audience Size Minimums

Google lowered minimum audience size thresholds to 100 users across Search, Display, and YouTube. Previous minimums ranged from 1,000 users down to a few hundred depending on network and list type.

This opens doors for smaller advertisers and niche segments. Remarketing lists, CRM uploads, and custom audiences that previously failed minimums now become usable.

Smart teams will use this to test tighter segmentation strategies. But don’t chase volume that isn’t there. A 100-user audience won’t scale into a growth channel overnight.

Bing Tests Google-Style Ad Grouping

A Bing Ad Example.

Bing briefly tested a sponsored results format similar to Google’s recent changes. Multiple ads grouped under a single label, with only the first result carrying an ad marker.

The test ended quickly, but the signal matters. Search platforms are converging on similar layouts. How ads appear now affects click quality and intent, not just click-through rate.

Social Platforms and Performance Content

Social platforms spent January rewarding clarity while punishing shortcuts.

Reddit Launches Max Campaigns

Reddit introduced Max Campaigns, an automated ad product that handles targeting, placements, creative, and budget allocation in real time.

What stands out is visibility. Reddit surfaces audience personas and engagement insights that most automated systems hide. Early testers report 27% more conversions and 17% lower cost per acquisition (CPA) on average.

Testing works best when anchored to existing campaigns. Replicate your best-performing Reddit campaign as a Max Campaign. Let automation prove efficiency gains with known benchmarks.

Instagram Caps Hashtags

Instagram rolled out a five-hashtag limit across posts and reels. This confirms discovery on Instagram is driven by AI-based content understanding, not hashtag volume.

Hashtags now function like keywords. They clarify intent and help Instagram’s systems categorize content. They don’t manufacture reach.

Captions, on-screen text, subtitles, and visuals do the heavy lifting. Choose five hashtags that directly describe your content. Mix specificity levels: one broad category tag, two niche topic tags, one community hashtag, one branded hashtag.

LinkedIn Shares Performance Guidance for 2026

LinkedIn reiterated that human perspective drives performance. Video continues outperforming other formats. Hashtags do not impact distribution. Automated engagement and content pods face increased scrutiny.

Posting two to five times per week remains effective. AI can support thinking, but content still needs lived experience and clear points of view.

Brand Visibility, Authority, and Demand Measurement in an AI Era

AI-driven discovery is reshaping how brands get surfaced and evaluated.

What AI Search Means for Your Business

AI-generated summaries and zero-click experiences shape early discovery now. Users often form opinions before visiting a site. Google’s AI Overviews, ChatGPT’s SearchGPT, and Perplexity answer questions directly, compressing or eliminating the need to click through.

AI favors brands with clear expertise, structured content, and external validation. Generic explanations get compressed into summaries that strip away brand identity. Thin content disappears entirely.

Optimization now includes being understandable and credible to machines, not just persuasive to human readers. That means structured data markup, clear content hierarchy, author credentials, and topical authority signals.
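As one concrete illustration of the structured data point above, here's a minimal JSON-LD sketch for an article with author credentials, assembled in Python for readability. Every name and value below is a hypothetical placeholder, not a prescribed schema for any particular site.

```python
# Build a minimal schema.org Article object with author credentials,
# then serialize it as JSON-LD for embedding in a page.
import json

article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How Share of Search Predicts Growth",  # hypothetical
    "author": {
        "@type": "Person",
        "name": "Jane Example",       # hypothetical author
        "jobTitle": "Head of SEO",    # credential signal for machines
    },
    "datePublished": "2026-01-15",
}

# This string would sit inside <script type="application/ld+json"> on the page.
json_ld = json.dumps(article_schema, indent=2)
```

Structured markup like this doesn't manufacture authority, but it makes expertise signals explicit to the machines doing the summarizing.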

Share of Search Becomes a Core KPI

As AI reduces click-through rates, traffic becomes a weaker signal of demand. Share of search fills that gap.

It measures how often people look for your brand compared to competitors. That correlates strongly with market share and future growth. Brands with rising share of search typically see revenue growth follow within quarters, even if organic traffic stays flat.

Calculate share of search by tracking branded search volume for your brand and key competitors over time. Tools like Google Trends, Semrush, or Ahrefs make this accessible.
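The calculation itself is simple division: your branded search volume over the total branded volume across the competitive set, per period. A minimal sketch, with hypothetical brands and monthly volumes standing in for a real Google Trends or Semrush export:

```python
# Share of search: each brand's branded search volume divided by the
# total branded volume across the competitive set, per period.
monthly_volumes = {
    "BrandA": [12000, 12500, 13100],  # hypothetical monthly volumes
    "BrandB": [9000, 8800, 8600],
    "BrandC": [4000, 4100, 4300],
}

def share_of_search(volumes: dict[str, list[int]]) -> dict[str, list[float]]:
    """Return each brand's share of total branded searches per period."""
    periods = len(next(iter(volumes.values())))
    totals = [sum(v[p] for v in volumes.values()) for p in range(periods)]
    return {
        brand: [round(v[p] / totals[p], 4) for p in range(periods)]
        for brand, v in volumes.items()
    }

shares = share_of_search(monthly_volumes)
# BrandA starts at 0.48 of total branded searches and trends upward
```

Tracked month over month, the trend line matters more than any single reading: a rising share against flat traffic is the momentum signal the metric exists to catch.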

Digital PR Matters More Than Ever

AI systems recommend brands they recognize and trust. That trust is built off-site, not through on-page optimization.

Authoritative mentions, expert commentary, and credible coverage now influence visibility across AI-driven experiences. Links still matter, but reputation matters more.

PR, SEO, and content strategy can no longer operate independently. Authority compounds when they align. If you’re not investing in Digital PR alongside traditional SEO, you’re optimizing for a search ecosystem that’s rapidly shrinking.

Video, CTV, and Cross-Screen Media Strategy

Video buying is consolidating across screens.

Amazon Emerges as a Cross-Screen Advertising Player

Amazon is positioning itself as a unified advertising ecosystem across Prime Video, live sports, audio, and programmatic inventory. Layered with first-party shopper data, this creates a powerful performance and measurement advantage traditional TV buyers can’t match.

Amazon now competes higher in the funnel through premium video and live sports while retaining lower-funnel accountability through its commerce data. Interactive features let you add “add to cart” overlays directly in OTT video ads.

CTV Breaks the 30-Second Format

Streaming dominates TV consumption. Ad formats are finally catching up. Interactive and nontraditional CTV units are gaining traction, supported by early standardization efforts from IAB Tech Lab.

Traditional 15- and 30-second spots still work, but they blend into an increasingly crowded environment. Emerging formats offer differentiation in lower-clutter streaming contexts.

Brands that test early build creative and performance advantages before these formats normalize and competition increases.

Pinterest Acquires tvScientific

Pinterest’s acquisition of tvScientific connects intent-driven discovery with CTV buying. This closes a long-standing measurement gap between inspiration and awareness channels.

For brands rooted in discovery—home decor, fashion, food, travel, DIY, beauty—this creates a clearer path from interest to action.

Brand-Led Attention and Influence at Scale

Attention increasingly flows through people, communities, and culture-driven media.

Unilever’s Influencer Expansion

Unilever announced plans to work with 20 times more influencers and shift half its ad budget to social. This isn’t a test. It’s a structural reallocation signaling influencer marketing has reached enterprise maturity.

Unilever’s SASSY framework now activates nearly 300,000 creators. The company reported category-wide outperformance, attributing significant gains to influencer-driven campaigns.

Brands still treating creators as side projects will struggle to compete against organizations running influencer programs with the same rigor and budget as paid search or programmatic display.

Google’s AI Is Deleting Reviews

Google’s AI moderation is removing reviews at scale, including legitimate ones, often without notice. Business owners report hundreds of reviews disappearing overnight.

That affects rankings, conversion rates, and consumer trust. Reputation strategy now includes monitoring review loss, not just tracking new reviews.

Check your Google Business Profile weekly. Document total review count and average rating. When drops occur, investigate patterns. Better yet, diversify review platforms beyond Google.
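The weekly habit above reduces to a simple comparison: log each snapshot and flag unexplained drops. A minimal sketch with hypothetical counts and an arbitrary alert threshold; in practice the numbers would come from your Google Business Profile dashboard:

```python
# Compare consecutive weekly review-count snapshots and flag drops
# larger than a chosen threshold for investigation.
def flag_review_loss(previous: int, current: int, threshold: int = 5) -> bool:
    """Return True when the review count fell by more than `threshold`."""
    return (previous - current) > threshold

weekly_counts = [482, 480, 431]  # hypothetical weekly snapshots

alerts = [
    flag_review_loss(prev, curr)
    for prev, curr in zip(weekly_counts, weekly_counts[1:])
]
# alerts -> [False, True]: the drop from 480 to 431 warrants investigation
```

The threshold is a judgment call; the point is that review loss only becomes visible if you record the baseline before it happens.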

Experimentation and Growth Discipline

Sustainable growth depends on knowing why a test exists before judging its outcome.

Growth vs Optimization: Drawing the Line

Growth experiments explore new opportunities. Optimization improves what already works. Blurring the two creates misaligned expectations and poor decision-making.

Clear intent leads to clearer measurement and stronger buy-in. Teams that label tests correctly scale with more confidence.

What Digital Marketers Should Take Forward

Platforms are clarifying rules. AI rewards authority and consistency. Measurement is shifting away from clicks alone.

The advantage in 2026 comes from alignment across teams and channels. Durable signals outperform clever workarounds.

Indexing logic must live in base HTML. Performance Max channel reporting is essential. Share of search reveals momentum. Digital PR impacts AI visibility. Influencer marketing reached enterprise maturity. Review monitoring must track losses.

This is the work we focus on every day at NP Digital.

If you want help aligning fundamentals across SEO, paid media, content, and PR in a way that compounds over time, let’s talk.


AI Hallucinations, Errors, and Accuracy: What the Data Shows

AI hallucinations became a headline story when Google’s AI Overviews told people that cats can teleport and suggested eating rocks for health.

Those bizarre moments spread fast because they’re easy to point at and laugh about.

But that’s not the kind of AI hallucination most marketers deal with. The tools you probably use, like ChatGPT or Claude, likely won’t produce anything that bizarre. Their misses are sneakier, like outdated numbers or confident explanations that fall apart once you start looking under the hood.

In a fast-moving industry like digital marketing, it’s easy to miss those subtle errors. 

This made us curious: How often is AI actually getting it wrong? What types of questions trip it up? And how are marketers handling the fallout?

To find out, we tested 600 prompts across major large language model (LLM) platforms and surveyed 565 marketers to understand how often AI gets things wrong. You’ll see how these mistakes show up in real workflows and what you can do to catch hallucinations before they hurt your work.

Key Takeaways

  • Nearly half of marketers (47.1 percent) encounter AI inaccuracies several times a week, and over 70 percent spend hours fact-checking each week.
  • More than a third (36.5 percent) say hallucinated or incorrect AI content has gone live publicly, most often due to false facts, broken source links, or inappropriate language.
  • In our LLM test, ChatGPT had the highest accuracy (59.7 percent), but even the best models made errors, especially on multi-part reasoning, niche topics, or real-time questions.
  • The most common hallucination types were fabrication, omission, outdated info, and misclassification—often delivered with confident language.
  • Despite awareness of hallucinations, 23 percent of marketers feel confident using AI outputs without review. Most teams add extra approval layers or assign dedicated fact-checkers to their processes.

What Do We Know About AI Hallucinations and Errors?

An AI hallucination happens when a model gives you an answer that sounds correct but isn’t. We’re talking about made-up facts or claims that don’t stand up to fact-checking or a quick Google search.

And they’re not rare.

In our research, more than a third of marketers (36.5 percent) say hallucinated or false information has slipped past review and gone public. These errors come in a few common forms:

  • Fabrication: The AI simply makes something up.
  • Omission: It skips critical context or details.
  • Outdated info: It shares data that’s no longer accurate.
  • Misclassification: It answers the wrong question, or only part of it.
A graphic showing common AI Hallucination Types

Hallucinations tend to happen when prompts are too vague or require multi-step reasoning. Sometimes the AI model tries to fill the gaps with whatever seems plausible.

AI hallucinations aren’t new, but our dependence on these tools is. As they become part of everyday workflows, the cost of a single incorrect answer increases.

Once you recognize the patterns behind these mistakes, you can catch them early and keep them out of your content.

AI Hallucination Examples

AI hallucinations can be ridiculous or dangerously subtle. These real AI hallucination examples give you a sense of the range:

  • Fabricated legal citations: Recent reporting shows a growing number of lawyers relying on AI-generated filings, only to learn that the cases or citations don’t exist. Courts are now flagging these hallucinations at an alarming rate.
  • Health misinformation: Revisiting our example from earlier, Google’s AI Overviews once claimed eating rocks had health benefits in an error that briefly went viral.
  • Fake academic references: Some LLMs will list fake studies or broken source links if asked for citations. A peer-reviewed Nature study found that ChatGPT frequently produced academic citations that look legitimate but reference papers that don’t exist.
  • Factual contradictions: Some tools have answered simple yes/no questions with completely contradictory statements in the same paragraph.
  • Outdated or misattributed data: Models can pull statistics from the wrong year or tie them to the wrong sources. And that creates problems once those numbers sneak into presentations or content.

Our Surveys/Methodology

To get a clear picture of how AI hallucinations show up in real-world marketing work, we pulled data from two original sources:

  1. Marketers survey: We surveyed 565 U.S.-based digital marketers using AI in their workflows. The questions covered how often they spot errors, what kinds of mistakes they see, and how their teams are adjusting to AI-assisted content. We also asked about public slip-ups, trust in AI, and whether they want clearer industry standards.
  2. LLM accuracy test: We built a set of 600 prompts across five categories: SEO/marketing, general business, industry-specific verticals, consumer queries, and control questions with a known correct answer. We then tested them across six major AI platforms: ChatGPT, Gemini, Claude, Perplexity, Grok, and Copilot. Humans graded each output, classifying them as fully correct, partially correct, or incorrect. For partially correct or incorrect outputs, we also logged the error type (omission, outdated info, fabrication, or misclassification).

For this report, we focused only on text-based hallucinations and content errors, not visual or video generation. The insights that follow combine both data sets to show how hallucinations happen and what marketers should watch for across tools and task types.

How AI Hallucinations and Errors Impact Digital Marketers

A graphic that shows how often Marketers Encounter AI Errors.

We asked marketers how AI errors show up in their work, and the results were clear: Hallucinations are far from a rarity.

Nearly half of marketers (47.1 percent) encounter AI inaccuracies multiple times a week. And more than 70 percent say they spend one to five hours each week just fact-checking AI-generated output. That’s a lot of time spent fixing “helpful” content.

Those misses don’t always stay hidden. 

More than a third (36.5 percent) say hallucinated content has made it all the way to the public. Another 39.8 percent have had close calls where bad AI info almost went live. 

And it’s not just teams spotting the problems. More than half of marketers (57.7 percent) say clients or stakeholders have questioned the quality of AI-assisted outputs.

These aren’t minor formatting issues, either. When mistakes make it through, the most common offenders are:

  • Inappropriate or brand-unsafe content (53.9 percent)
  • Completely false or hallucinated information (43.5 percent)
  • Formatting glitches that break the user experience (42.5 percent)

So where does it break down?

AI errors are most common in tasks that require structure or precision. Here are the daily error rates by task:

  • HTML or schema creation: 46.2 percent
  • Full content writing: 42.7 percent
  • Reporting and analytics: 34.2 percent

Brainstorming and idea generation had far fewer issues, each landing at roughly 25 percent.

A graphic showing where marketers encounter AI errors most often.

When we looked at confidence levels, only 23 percent of marketers felt fully comfortable using AI output without review. The rest? They were either cautious or not confident at all.

Teams hit hardest by public-facing AI mistakes include:

  • Digital PR (33.3 percent)
  • Content marketing (20.8 percent)
  • Paid media (17.8 percent)
A graphic showing teams most affected by public AI mistakes.

These are the same departments most likely to face direct brand damage when AI gets it wrong.

AI can save you time, but it also creates a lot of cleanup without checks in place. And most marketers feel the pressure to catch hallucinations before clients or customers do.

AI Hallucinations and Errors: How Do the Top LLMs Stack Up?

To figure out how often leading AI platforms hallucinate, we tested 600 prompts across six major models: ChatGPT, Claude, Gemini, Perplexity, Grok, and Copilot.

Each model received the same set of queries across five categories: marketing/SEO, general business, industry-specific use cases, consumer questions, and fact-checkable control prompts. Human reviewers graded each response for accuracy and completeness.

Here’s how they performed:

  • ChatGPT delivered the highest percentage of fully correct answers at 59.7 percent, with the lowest rate of serious hallucinations. Most of its mistakes were subtle, like misinterpreting the question rather than fabricating facts.
  • Claude was the most consistent. While it scored slightly lower on fully correct responses (55.1 percent), it had the lowest overall error rate at just 6.2 percent. When it missed, it usually left something out rather than getting it wrong.
  • Gemini performed well on simple prompts (51.3 percent fully correct) but tended to skip over complex or multi-step answers. Its most common error was omission.
  • Perplexity showed strength in fast-moving fields like crypto and AI, thanks to its strong real-time retrieval features. But that speed came with risk: 12.2 percent of responses were incorrect, often due to misclassifications or minor fabrications.
  • Copilot sat in the middle of the pack. It gave safe, brief answers. While that’s good for overviews, it often missed deeper context.
  • Grok struggled across the board. It had the highest error rate at 21.8 percent and the lowest percentage of fully correct answers (39.6 percent). Hallucinations, contradictions, and vague outputs were common.
A graphic showing how major LLMs performed in our 600-prompt accuracy test.
A graphic showing most common error types across models.

So, what does this mean for marketers?

Well, most teams aren’t expecting perfection. According to our survey, 77.7 percent of marketers will accept some level of AI inaccuracy, likely because the speed and efficiency gains still outweigh the cleanup.

The takeaway isn’t that one model is flawless. It’s that every tool has its strengths and weaknesses. Knowing each platform’s tendencies helps you know when (and how) to pull a human into the loop and what to be on guard against.

What Question Types Gave LLMs the Most Trouble?

Some questions are harder for AI to handle than others. In our testing, three prompt types consistently tripped up all the models, regardless of how accurate they were overall:

  • Multi-part prompts: When asked to explain a concept and give an example, many tools did only half the job. They either defined the term or gave an example, but not both. This was a common source of partial answers and context gaps.
  • Recently updated or real-time topics: If the ask was about something that changed in the last few months (like a Google algorithm update or an AI model release), responses were often inaccurate or completely fabricated. Some tools made confident claims using outdated info that sounded fresh.
  • Niche or domain-specific questions: Verticals like crypto, legal, SaaS, or even SEO created problems for most LLMs. In these cases, tools either made up terminology or gave vague responses that missed key industry context.

Even models like Claude and ChatGPT, which scored relatively high for accuracy, showed cracks when asked to handle layered prompts that required nuance or specialized knowledge.

Knowing which types of prompts increase the risk of hallucination is the first step in writing better ones and catching issues before they cost you.

AI Hallucination Tells to Look Out For

AI hallucinations don’t always scream “wrong.” In fact, the most dangerous ones sound reasonable (at least until you check the details). Still, there are patterns worth watching for.

Here are the red flags that showed up most often across the models we tested:

  • No source, or a broken one: If an AI gives you a link, check it. A lot of hallucinated answers include made-up or outdated citations that don’t exist when you click.
  • Answers to the wrong questions: Some models misinterpret the prompt and go off in a related (but incorrect) direction. If the response feels slightly off topic, dig deeper.
  • Big claims with no specifics: Watch for sweeping statements without specific stats or dates. That’s often a sign it’s filling in blanks with plausible-sounding fluff.
  • Stats with no attribution: Hallucinated numbers are a common issue. If the stat sounds surprising or overly convenient, verify it with a trusted source.
  • Contradictions inside the same answer: We experienced cases where an AI said one thing in the first paragraph and contradicted itself by the end. That’s a major warning sign.
  • “Real” examples that don’t exist: Some hallucinations involve fake product names, companies, case studies, or legal precedents. These details feel legit, but a quick search reveals no facts to verify these claims.

The more complex your prompt, the more important it is to sanity-check the output. If something feels even slightly off, assume it’s worth a second look. After all, subtle hallucinations are the ones most likely to slip through the cracks.

Best Practices for Avoiding AI Hallucinations and Errors

You can’t eliminate AI hallucinations completely, but you can make it a lot less likely they slip through. Here’s how to stay ahead of the risk:

  • Always request and verify sources: Some models will confidently provide links that look legit but don’t exist. Others reference real studies or stats, but take them out of context. Before you copy/paste, click through. This matters even more for AI SEO work, where accuracy and citation quality directly affect rankings and trust.
  • Fine-tune your prompts: Vague prompts are hallucination magnets, so be clear about what you want the model to reference or avoid. That might mean building prompt template libraries or using follow-up prompts to guide models more effectively. That’s exactly what LLM optimization (LLMO) focuses on.
  • Assign a dedicated fact-checker: Our survey results showed this to be one of the most effective internal safeguards. Human review might take more time, but it’s how you keep hallucinated claims from damaging trust or a brand’s credibility.
  • Set clear internal guidelines: Many teams now treat AI like a junior content assistant: It can draft, synthesize, and suggest, but humans own the final version. That means reviewing and fact-checking outputs and correcting anything that doesn’t hold up. This approach lines up with the data. Nearly half (48.3 percent) of marketers support industry-wide standards for responsible AI use.
  • Add a final review layer every time: Even fast-moving brands are building in one more layer of review for AI-assisted work. In fact, the most common adjustment marketers reported making was adding a new round of content review to catch AI errors. That said, 23 percent of respondents reported skipping human review if they trust the tool enough. That’s a risky move.
  • Don’t blindly trust brand-safe output: AI can sound polished even when it’s wrong. In our LLM testing, some of the most confidently written outputs were factually incorrect or missing key context. If it feels too clean, double-check it.

FAQs

What are AI hallucinations?

AI hallucinations occur when an AI tool gives you an answer that sounds accurate, but it’s not. These mistakes can include made-up facts, fake citations, or outdated info packaged in confident language.

Why does AI hallucinate?

AI models don’t “know” facts. They generate responses based on patterns in the data they were trained on. When there’s a gap or ambiguity, the model fills it in with what sounds most likely (even if it’s completely wrong).

What causes AI hallucinations?

Hallucinations usually happen when prompts are vague, complex, or involve topics the model hasn’t seen enough data on. They’re also more common in fast-changing fields like SEO and crypto.

Can you stop AI from hallucinating?

Not entirely. Even the best models make things up sometimes. That’s because LLMs are built to generate language, not verify facts. Occasional hallucinations are baked into how they work.

How can you reduce AI hallucinations?

Use more specific prompts, request citation sources, and always double-check the output for accuracy. Add a human review step before anything goes live. The more structure and context you give the AI, the fewer hallucinations you’ll run into.

Conclusion

AI is powerful, but it’s not perfect. 

Our research shows that hallucinations happen regularly, even with the best tools. From made-up stats to misinterpreted prompts, the risks are real. That’s especially the case for fast-moving marketers.

If you’re using AI to create content or guide strategy, knowing where these tools fall short is like a cheat code. 

The best defense? Smarter prompts, tighter reviews, and clear internal guidelines that treat AI as a co-pilot (not the driver).

Want help building a more reliable AI workflow? Talk to our team at NP Digital if you’re ready to scale content without compromising accuracy. Also, you can check out the full report here on the NP Digital website.


Microsoft launches Publisher Content Marketplace for AI licensing

The future of remarketing? Microsoft bets on impressions, not clicks

Microsoft Advertising today launched the Publisher Content Marketplace (PCM), a system that lets publishers license premium content to AI products and get paid based on how that content is used.

How it works. PCM creates a direct value exchange. Publishers set licensing and usage terms, while AI builders discover and license content for specific grounding scenarios. The marketplace also includes usage-based reporting, giving publishers visibility into how their content performs and where it creates the most value.

Designed to scale. PCM is designed to avoid one-off licensing deals between individual publishers and AI providers. Participation is voluntary, ownership remains with publishers, and editorial independence stays intact. The marketplace supports everyone from global publishers to smaller, specialized outlets.

Why we care. As AI systems shift from answering questions to making decisions, content quality matters more than ever. As agents increasingly guide purchases, finance, and healthcare choices, ads and sponsored messages will sit alongside — or draw from — premium content rather than generic web signals. That raises the bar for credibility and points to a future where brand alignment with trusted publishers and AI ecosystems directly impacts performance.

Early traction. Microsoft Advertising co-designed PCM with major U.S. publishers, including Business Insider, Condé Nast, Hearst, The Associated Press, USA TODAY, and Vox Media. Early pilots grounded Microsoft Copilot responses in licensed content, with Yahoo among the first demand partners now onboarding.

What’s next. Microsoft plans to expand the pilot to more publishers and AI builders that share a core belief: as the AI web evolves, high-quality content should be respected, governed, and paid for.

The big picture. In an agentic web, AI tools increasingly summarize, reason, and recommend through conversation. Whether the topic is medical safety, financial eligibility, or a major purchase, outcomes depend on access to trusted, authoritative sources — many of which sit behind paywalls or in proprietary archives.

The tension. The traditional web bargain was simple: publishers shared content, and platforms sent traffic back. That model breaks down when AI delivers answers directly, cutting clicks while still depending on premium content to perform well.

Bottom line. If AI is going to make better decisions, it needs better inputs — and PCM is Microsoft’s bet that a sustainable content economy can power the next phase of the agentic web.

Microsoft’s announcement. Building Toward a Sustainable Content Economy for the Agentic Web



Inspiring examples of responsible and realistic vibe coding for SEO

Vibe coding is a new way to create software using AI tools such as ChatGPT, Cursor, Replit, and Gemini. It works by describing what you want in plain language and receiving working code in return. You can then paste the code into an environment (such as Google Colab), run it, and test the results, without writing a single line of code yourself.

Collins Dictionary named “vibe coding” word of the year in 2025, defining it as “the use of artificial intelligence prompted by natural language to write computer code.”

In this guide, you’ll understand how to start vibe coding, learn its limitations and risks, and see examples of great tools created by SEOs to inspire you to vibe code your own projects.

Vibe coding variations

While “vibe coding” is used as an umbrella term, there are subsets of coding with support or AI, including the following:

  • AI-assisted coding: AI helps write, refactor, explain, or debug code. Used by actual developers or engineers to support their complex work. Tools: GitHub Copilot, Cursor, Claude, Google AI Studio.
  • Vibe coding: Platforms that handle everything except the prompt/idea. AI does most of the work. Tools: ChatGPT, Replit, Gemini, Google AI Studio.
  • No-code platforms: Platforms that handle everything you ask (“drag and drop” visual updates while the code happens in the background). They tend to use AI but existed long before AI became mainstream. Tools: Notion, Zapier, Wix.

We’ll focus exclusively on vibe coding in this guide. 

With vibe coding, while there’s a bit of manual work to be done, the barrier is still low — you basically need a ChatGPT account (free or paid) and access to a Google account (free). Depending on your use case, you might also need access to APIs or SEO tool subscriptions such as Semrush or Screaming Frog.

To set expectations: by the end of this guide, you’ll know how to run a small program in the cloud. If you expect to build a SaaS product or software to sell, AI-assisted coding is the more reasonable option, though it involves costs and deeper coding knowledge.

Vibe coding use cases

Vibe coding is great when you’re trying to find outcomes for specific buckets of data, such as finding related links, adding pre-selected tags to articles, or doing something fun where the outcome doesn’t need to be exact.

For example, I’ve built an app to create a daily drawing for my daughter. I type a phrase about something that she told me about her day (e.g., “I had carrot cake at daycare”). The app has some examples of drawing styles I like and some pictures of her. The outputs (drawings) are the final work as they come from AI.

When I ask for specific changes, however, the program tends to make things worse, redrawing elements I didn’t ask it to change. I once asked it to remove a mustache and it recolored the image instead.

If my daughter were a client who’d scrutinize the output and require very specific changes, I’d need someone who knows Photoshop or similar tools to make specific improvements. In this case, though, the results are good enough. 

Building commercial applications solely on vibe coding may require a company to hire vibe coding cleaners. However, for a demo, MVP (minimum viable product), or internal applications, vibe coding can be a useful, effective shortcut. 

How to create your SEO tools with vibe coding

Using vibe coding to create your own SEO tools requires three steps:

  1. Write a prompt describing your code
  2. Paste the code into a tool such as Google Colab
  3. Run the code and analyze the results

Here’s a prompt example for a tool I built to map related links at scale. After crawling a website using Screaming Frog and extracting vector embeddings (using the crawler’s integration with OpenAI), I vibe coded a tool that would compare the topical distance between the vectors in each URL.

This is exactly what I wrote on ChatGPT:

I need a Google Colab code that will use OpenAI to:

Check the vector embeddings existing in column C. Use cosine similarity to match with two suggestions from each locale (locale identified in Column A). 

The goal is to find which pages from each locale are the most similar to each other, so we can add hreflang between these pages.

I’ll upload a CSV with these columns and expect a CSV in return with the answers.

Then I pasted the code that ChatGPT created on Google Colab, a free Jupyter Notebook environment that allows users to write and execute Python code in a web browser. It’s important to run your program by clicking on “Run all” in Google Colab to test if the output does what you expected.
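To make the workflow concrete, here’s a minimal sketch of the cross-locale similarity matching described above. The URLs, locales, and tiny three-dimensional vectors are hypothetical stand-ins; a real run would load the embeddings exported from Screaming Frog’s OpenAI integration.

```python
# Match each page to its most topically similar page in another locale
# using cosine similarity over embedding vectors.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical rows from a crawl export: locale, URL, embedding vector.
pages = [
    {"locale": "en", "url": "/en/pricing", "vec": np.array([0.9, 0.1, 0.0])},
    {"locale": "fr", "url": "/fr/tarifs",  "vec": np.array([0.8, 0.2, 0.1])},
    {"locale": "fr", "url": "/fr/blog",    "vec": np.array([0.1, 0.9, 0.3])},
]

def best_match(page: dict, candidates: list[dict]) -> dict:
    """Return the most similar page from any other locale."""
    others = [c for c in candidates if c["locale"] != page["locale"]]
    return max(others, key=lambda c: cosine_similarity(page["vec"], c["vec"]))

match = best_match(pages[0], pages)
# /en/pricing pairs most closely with /fr/tarifs, an hreflang candidate
```

Real embeddings have hundreds or thousands of dimensions, but the comparison logic stays the same, which is why this kind of task suits vibe coding so well.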

This is how the process works on paper. Like everything in AI, it may look perfect but not always function exactly how you want. 

You’ll likely encounter issues along the way — luckily, they’re simple to troubleshoot.

First, be explicit about the platform you’re using in your prompt. If it’s Google Colab, say the code is for Google Colab. 

You might still end up with code that requires packages that aren’t installed. In this case, just paste the error into ChatGPT and it’ll likely regenerate the code or find an alternative. You don’t even need to know what the package is, just show the error and use the new code. Alternatively, you can ask Gemini directly in your Google Colab to fix the issue and update your code directly.

AI tends to be very confident about anything and could return completely made-up outputs. One time I forgot to say the source data would come from a CSV file, so it simply created fake URLs, traffic, and graphs. Always check and recheck the output because “it looks good” can sometimes be wrong.

If you’re connecting to an API, especially a paid API (e.g., from Semrush, OpenAI, Google Cloud, or other tools), you’ll need to request your own API key and keep in mind usage costs. 

Should you want an even lower execution barrier than Google Colab, you can try using Replit. 

Simply prompt your request and the software will create the code, design, and allow testing all on the same screen. This means a lower chance of coding errors, no copy and paste, and a URL you can share right away with anyone to see your project built with a nice design. (You should still check for poor outputs and iterate with prompts until your final app is built.)

Keep in mind that while Google Colab is free (you’ll only spend if you use API keys), Replit charges a monthly subscription and per-usage fee on APIs. So the more you use an app, the more expensive it gets.

Inspiring examples of SEO vibe-coded tools

While Google Colab is the most basic (and easy) way to vibe code a small program, some SEOs are taking vibe coding even further by creating programs that are turned into Chrome extensions, Google Sheets automation, and even browser games.

The goal behind highlighting these tools is not only to showcase great work by the community, but also to inspire, build, and adapt to your specific needs. Do you wish any of these tools had different features? Perhaps you can build them for yourself — or for the world.

GBP Reviews Sentiment Analyzer (Celeste Gonzalez)

After vibe coding some SEO tools on Google Colab, Celeste Gonzalez, Director of SEO Testing at RicketyRoo Inc, took her vibing skills a step further and created a Chrome extension. “I realized that I don’t need to build something big, just something useful,” she explained.

Her browser extension, the GBP Reviews Sentiment Analyzer, summarizes sentiment analysis for reviews over the last 30 days and review velocity. It also allows the information to be exported into a CSV. The extension works on Google Maps and Google Business Profile pages.

Instead of ChatGPT, Celeste used a combination of Claude (to create high-quality prompts) and Cursor (to paste the created prompts and generate the code).

AI tools used: Claude (Sonnet 4.5 model) and Cursor 

APIs used: Google Business Profile API (free)

Platform hosting: Chrome Extension

Knowledge Panel Tracker (Gus Pelogia)

I became obsessed with the Knowledge Graph in 2022, when I learned how to create and manage my own knowledge panel. Since then, I’ve found out that Google has a Knowledge Graph Search API that allows you to check the confidence score for any entity.

This vibe-coded tool checks the score for your entities daily (or at any frequency you want) and returns it in a sheet. You can track multiple entities at once and just add new ones to the list at any time.

The Knowledge Panel Tracker runs completely on Google Sheets, and the Knowledge Graph Search API is free to use. This guide shows how to create and run it in your own Google account, or you can see the spreadsheet here and just update the API key under Extensions > App Scripts. 
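The tracker’s core logic is small: build a request to the API’s entities:search endpoint and read the resultScore field from the JSON response. Here’s a minimal Python sketch using the request and response shapes from Google’s public API documentation (the key and sample values are placeholders):

```python
import json
from urllib.parse import urlencode

KG_ENDPOINT = "https://kgsearch.googleapis.com/v1/entities:search"

def build_request_url(query, api_key, limit=1):
    # Build the Knowledge Graph Search API URL for an entity query.
    return f"{KG_ENDPOINT}?{urlencode({'query': query, 'key': api_key, 'limit': limit})}"

def extract_confidence(response_json):
    """Pull (entity name, resultScore) from a Knowledge Graph Search response."""
    items = response_json.get("itemListElement", [])
    if not items:
        return None
    top = items[0]
    return top["result"].get("name"), top.get("resultScore")

# Sample response in the documented shape (values are illustrative).
sample = json.loads("""
{
  "itemListElement": [
    {"result": {"name": "Gus Pelogia", "@type": ["Person", "Thing"]},
     "resultScore": 13.7}
  ]
}
""")
print(build_request_url("Gus Pelogia", "YOUR_API_KEY"))
print(extract_confidence(sample))
```

In the Google Sheets version, an Apps Script function does the same fetch-and-parse on a schedule and appends each score to a new row.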

AI models used: ChatGPT 5.1

APIs used: Google Knowledge Graph API (free)

Platform hosting: Google Sheets

Inbox Hero Game (Vince Nero)

How about vibe coding a link building asset? That’s what Vince Nero from BuzzStream did when creating the Inbox Hero Game. It requires you to use your keyboard to accept or reject a pitch within seconds. The game is over if you accept too many bad pitches.

Inbox Hero Game is certainly more complex than running a piece of code on Google Colab, and it took Vince about 20 hours to build it all from scratch. “I learned you have to build things in pieces. Design the guy first, then the backgrounds, then one aspect of the game mechanics, etc.,” he said.

The game was coded in HTML, CSS, and JavaScript. “I uploaded the files to GitHub to make it work. ChatGPT walked me through everything,” Vince explained.

According to him, the longer the prompt continued, the less effective ChatGPT became, “to the point where [he’d] have to restart in a new chat.” 

This issue was one of the hardest and most frustrating parts of creating the game. Vince would add a new feature (e.g., score), and ChatGPT would “guarantee” it had found the error and update the file, yet the same error would come back. 


In the end, the Inbox Hero Game demonstrates that it’s possible to create a simple, fun game without coding knowledge, though polishing it further would be more feasible with a developer.

AI models used: ChatGPT

APIs used: None

Platform hosting: Webpage

Vibe coding with intent

Vibe coding won’t replace developers, and it shouldn’t. But as these examples show, it can responsibly unlock new ways for SEOs to prototype ideas, automate repetitive tasks, and explore creative experiments without heavy technical lift. 

The key is realism: Use vibe coding where precision isn’t mission-critical, validate outputs carefully, and understand when a project has outgrown “good enough” and needs additional resources and human intervention.

When approached thoughtfully, vibe coding becomes less about shipping perfect software and more about expanding what’s possible — faster testing, sharper insights, and more room for experimentation. Whether you’re building an internal tool, a proof of concept, or a fun SEO side project, the best results come from pairing curiosity with restraint.


LinkedIn: AI-powered search cut traffic by up to 60%

AEO playbook

AI-powered search gutted LinkedIn’s B2B awareness traffic. Across a subset of topics, non-brand organic visits fell by as much as 60% even while rankings stayed stable, the company said.

  • LinkedIn is moving past the old “search, click, website” model and adopting a new framework: “Be seen, be mentioned, be considered, be chosen.”

By the numbers. In a new article, LinkedIn said its B2B organic growth team started researching Google’s Search Generative Experience (SGE) in early 2024. By early 2025, when SGE evolved into AI Overviews, the impact became significant.

  • Non-brand, awareness-driven traffic declined by up to 60% across a subset of B2B topics.
  • Rankings stayed stable, but click-through rates fell (by an undisclosed amount).

Yes, but. LinkedIn’s “new learnings” are more like a rehash of established SEO/AEO best practices. Here’s what LinkedIn’s content-level guidance consists of:

  • Use strong headings and a clear information hierarchy.
  • Improve semantic structure and content accessibility.
  • Publish authoritative, fresh content written by experts.
  • Move fast, because early movers get an edge.

Why we care. These tactics should all sound familiar. These are technical SEO and content-quality fundamentals. LinkedIn’s article offers little new in terms of tactics. It’s just updated packaging for modern SEO/AEO and AI visibility.

Dig deeper. How to optimize for AI search: 12 proven LLM visibility tactics

Measurement is broken. LinkedIn said its big challenge is the “dark” funnel. It can’t quantify how visibility in LLM answers impacts the bottom line, especially when discovery happens without a click.

  • LinkedIn’s B2B marketing websites saw triple-digit growth in LLM-driven traffic, and the company said it can track conversions from those visits.
    • Yes, but: Many websites are seeing triple-digit (or greater) growth in LLM-driven traffic because it’s an emerging channel. That said, this is still a tiny share of overall traffic right now (1% or less for most sites).

What LinkedIn is doing. LinkedIn created an AI Search Taskforce spanning SEO, PR, editorial, product marketing, product, paid media, social, and brand. Key actions included:

  • Correcting misinformation that showed up in AI responses.
  • Publishing new owned content optimized for generative visibility.
  • Testing LinkedIn (social) content to validate its strength in AI discovery.

Is it working? LinkedIn said early tests produced a meaningful lift in visibility and citations, especially from owned content. At least one external datapoint (Semrush, Nov. 10, 2025) suggested that LinkedIn has a structural advantage in AI search:

  • Google AI Mode cited LinkedIn in roughly 15% of responses.
  • LinkedIn was the #2 most-cited domain in that dataset, behind YouTube.

Incomplete story. LinkedIn’s article is an interesting read, but it’s light on specifics. Missing details include:

  • The exact topic set behind the “up to 60%” decline.
  • Exactly how much click-through rates “softened.”
  • Sample size and timeframe.
  • How “industry-wide” comparisons were calculated.
  • What tests were run, what moved citation share, and by how much.

Bottom line. LinkedIn is right that visibility is the new currency. However, it hasn’t shown enough detail to prove its new playbook is meaningfully different from doing some SEO (yes, SEO) fundamentals.

LinkedIn’s article. How LinkedIn Marketing Is Adapting to AI-Led Discovery


Are we ready for the agentic web?

Innovations are coming at marketers and consumers faster than before, raising the question: Are we actually ready for the agentic web?

To answer that question, it’s important to unpack a few supporting ones:

  • What’s the agentic web?
  • How can the agentic web be used?
  • What are the pros and cons of the agentic web?

It’s important to note that this article isn’t a mandate for AI skeptics to abandon the rational questions they have about the agentic web. 

Nor is it intended to place any judgment on how you, as a consumer or professional, engage with the agentic web.

LinkedIn poll on Copilot Checkout

With thoughts and feelings so divided on the agentic web, this article aims to provide clear insight into how to think about it in earnest, without the branding or marketing fluff.

Disclosure: I am a Microsoft employee and believe in the path Microsoft’s taking with the agentic web. However, this article will attempt to be as platform-agnostic as possible.

What’s the agentic web? 

The agentic web refers to sophisticated tools, or agents, trained on our preferences that act with our consent to accomplish time-consuming tasks.

In simple terms, when I use one-click checkout, I allow my saved payment information to be passed to the merchant’s accounts receivable systems. 

Neither the merchant nor I must write down all the details or be involved beyond consenting to send and receive payment.

For fun, I put this question to four different AI models, and the responses are telling: 

  • Copilot: “The agentic web is an emerging layer of the internet where AI agents can reason and take actions on behalf of people or organizations, turning expressed intent into outcomes while preserving user choice, transparency, and control.” 
  • Gemini: “The agentic web is an evolution of the internet where autonomous AI agents – rather than just human users – interact with websites, APIs, and each other to perform complex tasks and make decisions on our behalf.”
  • Perplexity: “The agentic web is an emerging phase of the internet where autonomous AI agents act as the primary interface, navigating, interpreting, and taking actions across websites and services on behalf of users with minimal human involvement.” 
  • Claude: “The agentic web is an evolution of the internet where AI agents autonomously navigate, interact with, and complete tasks across websites and services on behalf of users.” 

I begin with how different models answer the question because it’s important to understand that each one is trained on different information, and outcomes will inevitably vary.

It’s worth noting that with the same prompt, defining the agentic web in one sentence, three out of four models focus on diminishing the human role in navigating the web, while one makes a point to emphasize the significance of human involvement, preserving user choice, transparency, and control.

Two out of four refer to the agentic web as a layer or phase rather than an outright evolution of the web. 

This is likely where the sentiment divide on the agentic web stems from.

Some see it as a consent-driven layer designed to make life easier, while others see it as a behemoth that consumes content, critical thinking, and choice.

It’s noteworthy that one model, Gemini, calls out APIs as a means of communication in the agentic web. APIs are structured interfaces that let one system request data or actions from another, depending on the task you are attempting to accomplish. 

This matters because APIs will become increasingly relevant in the agentic web, as saved preferences must be organized in ways that are easily understood and acted upon.

Defining the agentic web requires spending some time digging into two important protocols – ACP and UCP.

Dig deeper: AI agents in SEO: What you need to know

Agentic Commerce Protocol: Optimized for action inside conversational AI 

The Agentic Commerce Protocol, or ACP, is designed around a specific moment: when a user has already expressed intent and wants the AI to act.

The core idea behind ACP is simple. If a user tells an AI assistant to buy something, the assistant should be able to do so safely, transparently, and without forcing the user to leave the conversation to complete the transaction.

ACP enables this by standardizing how an AI agent can:

  • Access merchant product data.
  • Confirm availability and price.
  • Initiate checkout using delegated, revocable payment authorization.

The experience is intentionally streamlined. The user stays in the conversation. The AI handles the mechanics. The merchant still fulfills the order.
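The protocol itself is a formal specification, but the shape of that exchange is easy to sketch. In the Python sketch below, every name and field (`PaymentAuthorization`, `max_amount`, the confirmation callback) is illustrative rather than the actual ACP schema; it only shows the three checks the flow depends on: stock, delegated authorization, and explicit consent:

```python
from dataclasses import dataclass

@dataclass
class PaymentAuthorization:
    token: str          # delegated, scoped payment token
    max_amount: float   # spending cap set by the user
    revoked: bool = False

@dataclass
class Offer:
    merchant: str
    item: str
    price: float
    in_stock: bool

def agent_checkout(offer, auth, user_confirms):
    """Complete a purchase only when stock, authorization, and consent all hold."""
    if not offer.in_stock:
        return "unavailable"
    if auth.revoked or offer.price > auth.max_amount:
        return "authorization_failed"
    if not user_confirms(offer):  # explicit confirmation keeps the user in the loop
        return "declined"
    return f"order_placed:{offer.merchant}:{offer.item}"

offer = Offer("RunFast Shoes", "Trail Runner 3", 129.99, in_stock=True)
auth = PaymentAuthorization(token="tok_abc", max_amount=150.00)
print(agent_checkout(offer, auth, user_confirms=lambda o: True))
```

The key design point is that the authorization is both capped and revocable: the user can withdraw consent at any point, and the agent can never spend beyond the delegated limit.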

This approach is tightly aligned with conversational AI platforms, particularly environments where users are already asking questions, refining preferences, and making decisions in real time. It prioritizes speed, clarity, and minimal friction.

Universal Commerce Protocol: Built for discovery, comparison, and lifecycle commerce 

The Universal Commerce Protocol, or UCP, takes a broader view of agentic commerce.

Rather than focusing solely on checkout, UCP is designed to support the entire shopping journey on the agentic web, from discovery through post-purchase interactions. It provides a common language that allows AI agents to interact with commerce systems across different platforms, surfaces, and payment providers. 

That includes: 

  • Product discovery and comparison.
  • Cart creation and updates.
  • Checkout and payment handling.
  • Order tracking and support workflows.

UCP is designed with scale and interoperability in mind. It assumes users will encounter agentic shopping experiences in many places, not just within a single assistant, and that merchants will want to participate without locking themselves into a single AI platform.

It’s tempting to frame ACP and UCP as competing solutions. In practice, they address different moments of the same user journey.

ACP is typically strongest when intent is explicit and the user wants something done now. UCP is generally strongest when intent is still forming and discovery, comparison, and context matter.

So what’s the agentic web? Is it an army of autonomous bots acting on past preferences to shape future needs? Is it the web as we know it, with fewer steps driven by consent-based signals? Or is it something else entirely?

The frustrating answer is that the agentic web is still being defined by human behavior, so there’s no clear answer yet. However, we have the power to determine what form the agentic web takes. To better understand how to participate, we now move to how the agentic web can be used, along with the pros and cons.

Dig deeper: The Great Decoupling of search and the birth of the agentic web

How can the agentic web be used? 

Working from the common theme across all definitions, autonomous action, we can move to applications.

Elmer Boutin has written a thoughtful technical view on how schema will impact agentic web compatibility. Benjamin Wenner has explored how PPC management might evolve in a fully agentic web. Both are worth reading.

Here, I want to focus on consumer-facing applications of the agentic web and how to think about them in relation to the tasks you already perform today.

Here are five applications of the agentic web that are live today or in active development.

1. Intent-driven commerce  

A user states a goal, such as “Find me the best running shoes under $150,” and an agent handles discovery, comparison, and checkout without requiring the user to manually browse multiple sites. 

How it works 

Rather than returning a list of links, the agent interprets user intent, including budget, category, and preferences. 

It pulls structured product information from participating merchants, applies reasoning logic to compare options, and moves toward checkout only after explicit user confirmation. 

The agent operates on approved product data and defined rules, with clear handoffs that keep the user in control. 
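As a toy illustration of that reasoning step, here is a sketch of budget filtering plus preference ranking; the catalog, tags, and scoring rule are all hypothetical stand-ins for real merchant data:

```python
def rank_products(products, budget, preferences):
    """Filter to budget, then rank by how many stated preferences each product matches."""
    in_budget = [p for p in products if p["price"] <= budget]
    def match_score(p):
        return sum(1 for pref in preferences if pref in p["tags"])
    # Best preference match first; break ties with the lower price.
    return sorted(in_budget, key=lambda p: (-match_score(p), p["price"]))

catalog = [
    {"name": "Trail Runner 3", "price": 129.99, "tags": {"running", "trail"}},
    {"name": "Road Racer X",  "price": 149.00, "tags": {"running", "road"}},
    {"name": "Pro Elite",     "price": 189.00, "tags": {"running", "racing"}},
]

# "Find me the best running shoes under $150" parsed into budget + preferences.
shortlist = rank_products(catalog, budget=150, preferences={"running", "trail"})
print([p["name"] for p in shortlist])
```

A real agent replaces the hand-written scoring rule with learned preferences and live merchant data, but the structure (interpret intent, filter, rank, then hand off for confirmation) is the same.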

Implications for consumers and professionals 

Reducing decision fatigue without removing choice is a clear benefit for consumers. For brands, this turns discovery into high-intent engagement rather than anonymous clicks with unclear attribution. 

Strategically, it shifts competition away from who shouts the loudest toward who provides the clearest and most trusted product signals to agents. These agents can act as trusted guides, offering consumers third-party verification that a merchant is as reliable as it claims to be.

2. Brand-owned AI assistants 

A brand deploys its own AI agent to answer questions, recommend products, and support customers using the brand’s data, tone, and business rules.

How it works 

The agent uses first-party information, such as product catalogs, policies, and FAQs. 

Guardrails define what it can say or do, preventing inferences that could lead to hallucinations. 

Responses are generated by retrieving and reasoning over approved context within the prompt.

Implications for consumers and professionals 

Customers get faster and more consistent responses. Brands retain voice, accountability, and ownership of the experience. 

Strategically, this allows companies to participate in the agentic web without ceding their identity to a platform or intermediary. It also enables participation in global commerce without relying on native speakers to verify language.

3. Autonomous task completion 

Users delegate outcomes rather than steps, such as “Prepare a weekly performance summary” or “Reorder inventory when stock is low.” 

How it works 

The agent breaks the goal into subtasks, determines which systems or tools are needed, and executes actions sequentially. It pauses when permissions or human approvals are required. 

These can be provided in bulk upfront or step by step. How this works ultimately depends on how the agent is built. 
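A sketch of that loop, with a hypothetical approval gate (the task names and pause-on-denial behavior are illustrative, not drawn from any specific agent framework):

```python
def run_agent(subtasks, needs_approval, approve):
    """Execute subtasks in order, pausing where human approval is required but missing."""
    log = []
    for task in subtasks:
        if needs_approval(task) and not approve(task):
            log.append(f"paused:{task}")
            break  # stop here until a human signs off
        log.append(f"done:{task}")
    return log

# "Reorder inventory when stock is low" broken into steps; purchases need sign-off.
steps = ["check_stock", "draft_purchase_order", "submit_purchase_order"]
print(run_agent(
    steps,
    needs_approval=lambda t: t == "submit_purchase_order",
    approve=lambda t: False,  # human has not approved yet
))
```

Granting approvals in bulk upfront corresponds to an `approve` callback that always returns True; step-by-step delegation corresponds to one that actually waits on a human.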

Implications for consumers and marketers 

We’re used to treating AI like interns, relying on micromanaged task lists and detailed prompts. As agents become more sophisticated, it becomes possible to treat them more like senior employees, oriented around outcomes and process improvement. 

That makes it reasonable to ask an agent to identify action items in email or send templates in your voice when active engagement isn’t required. Human choice comes down to how much you delegate to agents versus how much you ask them to assist.

Dig deeper: The future of search visibility: What 6 SEO leaders predict for 2026


4. Agent-to-agent coordination and negotiation 

Agents communicate with other agents on behalf of people or organizations, such as a buyer agent comparing offers with multiple seller agents. 

How it works 

Agents exchange structured information, including pricing, availability, and constraints. 

They apply predefined rules, such as budgets or policies, and surface recommended outcomes for human approval. 

Implications for consumers and marketers 

Consumers may see faster and more transparent comparisons without needing to manually negotiate or cross-check options. 

For professionals, this introduces new efficiencies in areas like procurement, media buying, or logistics, where structured negotiation can occur at scale while humans retain oversight.

5. Continuous optimization over time 

Agents don’t just act once. They improve as they observe outcomes.

How it works 

After each action, the agent evaluates what happened, such as engagement, conversion, or satisfaction. It updates its internal weighting and applies those learnings to future decisions.
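One simple version of that weighting update is an exponential moving average: each observed outcome nudges the stored preference weight toward it. This is an illustrative stand-in, not how any particular agent platform implements learning:

```python
def update_weight(current, outcome, learning_rate=0.2):
    """Nudge a preference weight toward an observed outcome (1.0 = success, 0.0 = failure)."""
    return current + learning_rate * (outcome - current)

weight = 0.5  # starting belief that a recommendation style works
for outcome in [1.0, 1.0, 0.0, 1.0]:  # observed engagement signals over time
    weight = update_weight(weight, outcome)
print(round(weight, 3))
```

The learning rate controls how quickly the agent forgets old behavior: a high rate chases recent outcomes, a low rate keeps long-term preferences stable.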

Why people should care 

Consumers experience increasingly relevant interactions over time without repeatedly restating preferences. 

Professionals gain systems that improve continuously, shifting optimization from one-off efforts to long-term, adaptive performance. 

What are the pros and cons of the agentic web? 

Life is a series of choices, and leaning into or away from the agentic web comes with clear pros and cons.

Pros of leaning into the agentic web 

The strongest argument for leaning into the agentic web is behavioral. People have already been trained to prioritize convenience over process. 

Saved payment methods, password managers, autofill, and one-click checkout normalized the idea that software can complete tasks on your behalf once trust is established.

Agentic experiences follow the same trajectory. Rather than requiring users to manually navigate systems, they interpret intent and reduce the number of steps needed to reach an outcome. 

Cons of leaning into the agentic web 

Many brands will need to rethink how their content, data, and experiences are structured so they can be interpreted by automated systems and humans. What works for visual scanning or brand storytelling doesn’t always map cleanly to machine-readable signals.

There’s also a legitimate risk of overoptimization. Designing primarily for AI ingestion can unintentionally degrade human usability or accessibility if not handled carefully. 

Dig deeper: The enterprise blueprint for winning visibility in AI search

Pros of leaning away from the agentic web 

Choosing to lean away from the agentic web can offer clarity of stance. There’s a visible segment of users skeptical of AI-mediated experiences, whether due to privacy concerns, automation fatigue, or a loss of human control. 

Aligning with that perspective can strengthen trust with audiences who value deliberate, hands-on interaction.

Cons of leaning away from the agentic web 

If agentic interfaces become a primary way people discover information, compare options, or complete tasks, opting out entirely may limit visibility or participation. 

The longer an organization waits to adapt, the more expensive and disruptive that transition can become.

What’s notable across the ecosystem is that agentic systems are increasingly designed to sit on top of existing infrastructure rather than replace it outright. 

Avoiding engagement with these patterns may not be sustainable over time. If interaction norms shift and systems aren’t prepared, the combination of technical debt and lost opportunity may be harder to overcome later.

Where the agentic web stands today

The agentic web is still taking form, shaped largely by how people choose to use it. Some organizations are already applying agentic systems to reduce friction and improve outcomes. Others are waiting for stronger trust signals and clearer consent models.

Either approach is valid. What matters is understanding how agentic systems work, where they add value, and how emerging protocols are shaping participation. That understanding is the foundation for deciding when, where, and how to engage with the agentic web.
