The Future of Content Marketing: A 2026 Guide

Following last year’s content marketing playbook won’t cut it in 2026.

AI is evolving how we create, but human connection still drives what performs. Search behavior is splintering across platforms, and brands are being judged not just on what they publish but also on how it shows up.

To win this year, marketing pros need to be smarter about what they’re doing. That means your content must be rooted in real insights about how people actually buy from your brand.

This complete guide to content marketing breaks down what’s changing, what’s working, and where to focus your time and budget. If you’re serious about growing through content in 2026, you need to understand the shifts shaping the space.

Key Takeaways

  • AI should speed up execution, not replace strategy. Use it to draft and repurpose content, but rely on human perspective and editing to make that content perform.
  • Content that feels human outperforms content that feels polished. Audiences respond to authentic opinions and genuine usefulness, not brand-safe, committee-written copy.
  • Thought leadership now requires original insight. Repackaging what already ranks won’t build authority; bold takes, experienced authors, and first-party data will. Proprietary data also makes your content harder to replicate.
  • Distribution is half the strategy. Content needs a plan for where and how it gets discovered across platforms, not just published on a blog.
  • Measure influence, not output. The content that matters most is what changes thinking and earns trust, not what fills a calendar.

AI Can Help You Scale but It Can’t Think for You

Your competitors may already be creating AI content. Your job is to create better AI content than they do.

AI is great for maximizing your efficiency. It can drastically cut the time you spend on mundane content marketing tasks like researching and building outlines. It can even save you time by helping you repurpose old content into new formats. That kind of scale used to take teams. Now, all it takes is prompts.

But don’t mistake speed for strategy.

AI doesn’t know your customer. It doesn’t understand your brand’s voice or point of view—the elements of your brand or product that actually matter to people. Introducing those elements and ensuring they stay intact is on you.

Use AI to take the grunt work off your plate (e.g., building drafts or summarizing competitor content). But when it comes to telling your story, positioning your offer, or crafting something people want to read and share, human judgment still wins.

You can feel the difference between templated, AI-written content and something with a real perspective. So can your audience. If your content feels robotic or generic, they’re gone. No one shares or converts from content that reads like it came off an assembly line.

So yes, use AI. Just don’t hand it the keys to your content strategy. If you’re not putting in the human effort to edit and elevate what comes out, you’re just publishing noise.

Content Must Feel More Human, Not More Polished

People are tired of content that sounds like it was written by a committee.

You know the type: no strong opinions and so sanitized it could’ve come from any brand in your space. It’s forgettable. This year, forgettable doesn’t work.

Your audience doesn’t want another corporate how-to. They want to hear from someone who gets it. Someone who’s been in their shoes and isn’t afraid to say what actually works—and what doesn’t.

That doesn’t mean being sloppy. It means being real.

Ditch the fluff. Cut the clichés. Talk like a smart peer, not a brand trying to tick SEO boxes. Share what you’ve learned and what surprised you. That’s what builds trust. That’s what gets people to read and come back.

An article from Slite.

Slite’s piece on “people first” only going so far for parents is a great example. The post comes from a parent on their team, which immediately establishes credibility on the subject. There’s no promotional content anywhere in the post, and the focus is on entertaining and educating readers.

Let me pause a second here and make something clear: authentic doesn’t mean unedited. It means intentional. Sure, edit your pieces for grammar and structure, but don’t sand down the voice. Let the human fingerprints show.

In a world of AI-generated everything, human content stands out, not by being perfect, but by being authentically helpful and worth someone’s time.

Thought Leadership Requires Saying Something New

You don’t become a thought leader by echoing what everyone else is already saying.

Too many blogs and LinkedIn posts are just rewrites of what’s already ranking. They quote the same stats and land on the same safe conclusions. That’s not thought leadership; that’s content recycling.

You need to bring something new to the table if you want people to see you as an authority. That could mean sharing a bold opinion others won’t say out loud. It could be a unique framework you’ve developed through real experience. Or it might be calling out what isn’t working anymore, even if it used to.

Google’s E-E-A-T principles (experience, expertise, authoritativeness, and trustworthiness) favor exactly this kind of content. You’re not just writing for algorithms anymore. You’re writing to earn trust from real people and search engines. Anonymous blog posts thrown into the ether won’t cut it either: adding named authors with credentials to your posts is essential for E-E-A-T.

Brand Voice Is a Strategic Asset

AI can write. So can your competitors. Sticking to your unique voice is what’s going to set you apart.

In a sea of content that all sounds the same, brand voice is what makes people recognize you, even without seeing your logo. It’s more than tone or personality. It’s how you show up. And in 2026, it’s one of your biggest strategic advantages.

The best brands sound human. Clear and consistent across platforms, whether it’s a blog post, a LinkedIn comment, or a product page. That consistency creates familiarity, and familiarity, over time, builds trust and loyalty: two essential elements of marketing to customers on the modern playing field.

Innocent Drinks is a British brand that’s a great example of a unique tone and voice. They keep customer interactions fun with cheeky British humor and self-deprecating jokes. The laid-back, conversational tone presents their smoothie drinks amidst daily jokes, weather updates, and more that keep their customers coming back.

Innocent Drinks.

Source: https://iconicfox.com.au/brand-voice-examples/

But the catch is: you have to guard your brand voice with your life.

Well, maybe it’s not that drastic. But you at least have to define it, teach your writers how to use it, and maybe most importantly, defend it—especially when AI starts diluting it with generic phrasing or over-polished outputs.

Think of brand voice like a design system. It should guide every piece of content you publish. The goal isn’t to sound perfect. The goal is to sound unmistakably like you across formats, channels, and teams.

When everything else feels copy-pasted, your voice is what makes people stop scrolling and actually listen.

Video Isn’t Optional in a Content Strategy

If video still feels like “bonus content” to your team, you’re already behind.

Video—short and long-form—is now a core part of how people discover and share information. It’s not just for YouTube or TikTok anymore. It belongs in your blog strategy, your LinkedIn posts, your email sequences, and even your whitepapers. YouTube is now the world’s second-largest search engine and a top source for Google Gemini.

The smartest content teams don’t treat video as a separate effort. They treat it as an extension of what they’re already creating.

Wrote a blog post that’s performing well? Turn it into a 60-second explainer for Instagram. Got a data-packed whitepaper? Break it into a mini-series of clips or animated infographics. Publishing a thought leadership piece? Record a quick POV video that puts a face (and voice) to the ideas.

Here’s an example from my Instagram:

An Instagram post from Neil Patel.

This isn’t about adding more work. It’s about getting more mileage from what you’ve already built.

People scroll past walls of text. But they’ll pause for a story or a strong hook in motion. Video improves retention and makes your message stick.

In 2026, you should be including video in your strategy from the start.

Distribution Is Just as Important as Creation

If you’re not planning for distribution, you may just be publishing into the void.

Too many marketers hit “publish” and hope for traffic. But in 2026, the real game happens after the content goes live. Distribution is half the strategy.

Every piece you create should have a plan for where it lives and how it spreads. That could mean breaking your blog post into an X thread or syndicating it as a native article on LinkedIn or Medium.

And don’t underestimate the power of partnerships. Influencers, creators, and subject matter experts can extend your reach with the right angle and format.

This is where Search Everywhere Optimization becomes essential. People aren’t just searching on Google anymore. They’re searching across all the major social media platforms, LLMs like ChatGPT and Perplexity, and e-commerce sites like Amazon. You need to meet them where they are, in the format they prefer.

Good content doesn’t go viral by luck. It travels because someone planned the route.

So before you write your next post, ask yourself: how will people actually find this? If you don’t have a solid answer, you’re not done yet.

Content Must Map to the Buyer Journey, Not Just the Funnel

A customer’s actual buying journey is moving away from the classic funnel we all know and love. The stages themselves aren’t changing, but where people move through them is. People can now shop inside ChatGPT, which means they can follow the entire funnel without ever leaving an LLM.

The marketing funnel.

Real decision-making is messy. People bounce between tabs, skim reviews, watch videos, compare products, and ask peers for input—all before ever booking a demo or hitting “buy.” If your content only speaks to top-of-funnel traffic, you’re leaving serious revenue on the table.

Modern content strategy needs to follow the buyer wherever they go, not just the funnel stages.

That means going beyond how-to posts and SEO guides. You need content that helps buyers decide. Product comparisons. Honest breakdowns of pricing and features. Content that tackles objections head-on. Even onboarding previews and post-purchase FAQs count. They reduce friction and increase trust. This method is useful for appearing in LLMs as well.

Not only does this kind of content help convert, but it also ranks. Buyers search for “[Product A] vs [Product B],” “Is [Brand] worth it?”, and “How hard is it to implement [Tool]?” If you’re not showing up there, your competitor will.

So build for real behavior, not static funnels. Meet your buyer where they are—digging deep into research and looking for clarity before they buy.

Refreshing Existing Content Beats Churning Out New Posts

More content doesn’t always mean more results.

If you’re constantly creating from scratch but ignoring your old posts, you’re missing one of the easiest wins in content marketing: updates.

Refreshing content isn’t just minor changes; it’s making your best-performing or best-potential content even stronger. That could mean improving the structure, adding internal links, expanding thin sections, or aligning it with new search intent.

And it works. Updated content often ranks faster and converts better because it already has history and authority signals, like backlinks. Google rewards freshness, but it also loves authority.

Instead of publishing five new blog posts next month, what if you refreshed five that are slipping in rankings? Or turned an old listicle into a detailed comparison guide? That’s not less work; it’s smarter work.

Start by running a quick content audit. Identify top traffic drivers and declining or outdated post topics. From there, prioritize updates that align with current search demand and business goals.

New content still matters, but refreshing what you already built often delivers a faster, more predictable ROI. Don’t start from zero when there’s gold in your archives.

User-Generated Content Builds Credibility and Community

Positive product recommendations, reviews, and stories from your customer base are gold for social proof. 

That’s the power of user-generated content (UGC). Testimonials, stories, and customer spotlights can build trust faster than anything you write yourself. They’re authentic social proof and one of the most underused levers in content strategy.

Coca-Cola’s Share a Coke campaign is a great example of this. The company rolled out cans and small bottles printed with some of the most popular first names. Store displays encouraged shoppers to find a can with their name on it, take a picture, and post it to social media with the hashtag #shareacoke. The result was social media feeds flooded with posts just like this:

Share a Coke examples.

UGC is such a powerful strategy because it reduces your content lift while increasing credibility. Instead of creating everything from scratch, you’re curating voices from your community. A five-minute video from a happy customer or a LinkedIn post from an employee can do more than a polished landing page.

So how do you get it? Ask. Prompt your audience to share their experiences. Feature real users in your blog posts or newsletters. Turn customer feedback into quote graphics or build case studies around standout use cases.

This strategy becomes really powerful when you turn it into a system. Create UGC submission forms. Add review prompts to your post-purchase emails. Encourage your team to share behind-the-scenes stories on social.

The more people see themselves in your brand, the more they want to be part of it. UGC turns customers into advocates, and that’s content you can’t fake.

Measurement Is Moving to Influence, Not Just Output

Content volume used to be the metric. How many blogs did we publish? How many posts went live?

Now, it’s about impact.

Smart teams aren’t asking, “How much did we ship?” They’re asking, “What moved the needle?” That means tracking content that supports real business goals, not just filling up a calendar.

Engagement quality matters more than vanity metrics. Are people sharing or talking about it? Did it convert to revenue or reduce friction in the sales process? That’s the kind of content worth doubling down on.

Brand lift and even SEO performance are shifting, too. With AI Overviews reshaping how content appears in search, your content marketing performance increasingly hinges on earning the citations in those answers that strengthen brand trust.

An AI Overview.

Writers play a huge role here. When your content solves a real problem or answers a specific question better than anyone else, it sticks. That influence compounds.

So shift your mindset. Don’t just create content that gets clicks. Create content that changes people’s thinking or provides new insights. That’s the metric that matters now.

FAQs

What is the future of content marketing?

Content in 2026 is shifting toward authenticity. Brands are focusing on aspects like voice and distribution over volume. Repurposing across channels, creating content with real perspective, and measuring influence over output are now core strategies. The new goal is content that actually earns trust and drives action.

Conclusion

Creating a winning content marketing strategy in 2026 will certainly look different. The smart use of AI to scale instead of substitute, while still leaning on human editing and content elevation, will be a huge brand separator. You and your team can do that effectively by building a voice people recognize, creating content that feels human, and aligning it all to authentic buyer journeys and behaviors.

You don’t need to chase every trend. Focus on strategy, quality, and distribution that drives results.

Because in a world full of content, the only stuff that stands out is the kind that actually matters.

January 2026 Digital Marketing Roundup: What Changed and What You Should Do About It

January didn’t bring flashy product launches. It brought something more valuable: clarity.

Platforms spent the month explaining how their systems actually work. Google detailed JavaScript indexing rules that matter for modern sites. Reddit opened up automation insights most platforms keep hidden. Amazon positioned itself as a legitimate cross-screen player with first-party data advantages traditional TV can’t match.

Automation kept expanding, but with firmer guardrails. AI continued to compress discovery. Zero-click experiences grew. Brands without clear expertise signals or off-site authority started disappearing from AI-generated answers.

For digital marketers, January reinforced one reality: performance in 2026 depends less on clever tactics and more on getting fundamentals right across channels.

Key Takeaways

  • Indexing logic must live in base HTML, not JavaScript. Google may skip rendering pages with noindex directives in initial HTML, leaving valuable content invisible even if JavaScript removes the tag later.
  • Performance Max channel reporting is now essential, not optional. Budget pressure is currently your sharpest lever for managing underperforming surfaces like Display or Discover.
  • Share of search is becoming a better demand signal than traffic alone. As AI reduces click-through rates, measuring how often people search for your brand versus competitors reveals momentum better than vanishing clicks.
  • Digital PR now directly impacts AI visibility. Authoritative mentions and credible coverage determine whether AI systems recognize and recommend your brand in zero-click answers.
  • Influencer marketing reached enterprise maturity in January. Unilever’s 20x creator expansion and 50% social budget shift prove influence at scale is baseline strategy, not experimentation.
  • Review monitoring must track losses, not just gains. Google’s AI is deleting legitimate reviews without notice, affecting rankings and trust faster than new reviews can rebuild them.

Search, SEO, and Indexing Reality Checks

Search teams started 2026 with clearer rules, not more flexibility. Google spent January confirming how it treats indexing signals on JavaScript-heavy sites.

Google Clarifies Noindex and JavaScript Behavior

Google confirmed that pages with a noindex directive in their initial HTML may not get rendered at all. Any JavaScript meant to remove or modify that directive might never execute.

Indexing intent belongs in base HTML. JavaScript should enhance experiences, not define crawl behavior. For headless stacks and dynamic frameworks, search engines respond to what they see first, not what you hope they’ll see after rendering.

If your site uses React, Next.js, Angular, or Vue with client-side rendering, audit how noindex tags are implemented. Server-side rendering or static generation solves most of these issues.
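As a simplified sketch (the markup is illustrative, not a drop-in fix), here is the difference between a noindex that ships in the initial HTML, which Google may honor before rendering, and a client-side removal that may never run:

```html
<!-- Risky: noindex is in the initial server HTML.
     Google may honor it and skip rendering, so the script below never runs. -->
<head>
  <meta name="robots" content="noindex">
  <script>
    // Client-side removal that crawlers may never execute
    document.querySelector('meta[name="robots"]')?.remove();
  </script>
</head>

<!-- Safer: decide indexability on the server and ship the final directive in base HTML -->
<head>
  <meta name="robots" content="index, follow">
</head>
```

The takeaway: the directive search engines see in the raw HTML is the one that counts.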

Google Clarifies JavaScript Canonical Rules

Google detailed how canonical tags work on JavaScript-driven pages. Canonicals can be evaluated twice: once in raw HTML and again after rendering. Conflicts between the two create real indexing problems.

Server-rendered HTML pointing to one canonical while client-side JavaScript points to another forces Google to pick. That choice often hurts rankings quietly, without throwing obvious errors in Search Console.

Teams need to decide where canonicals live and enforce consistency. One canonical after rendering. No ambiguity between server and client.
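A minimal sketch of the conflict, using a hypothetical example.com URL: the server declares one canonical while client-side JavaScript swaps in another after hydration, forcing Google to choose:

```html
<!-- Server-rendered HTML declares one canonical... -->
<link rel="canonical" href="https://example.com/products/blue-widget">

<!-- ...while client-side JavaScript swaps in another after rendering -->
<script>
  // Conflict: Google evaluates both versions and picks one, often silently
  document.querySelector('link[rel="canonical"]').href =
    'https://example.com/blue-widget';
</script>
```

The fix is to emit the canonical once, server-side, and leave it untouched on the client.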

December Core Algorithm Update Wraps

Google’s December 2025 core update finished after roughly 18 days of volatility. Sites with stale content, weak expertise signals, or unclear intent lost ground. Others gained visibility by being more useful and better aligned with user needs.

Core updates no longer feel disruptive because they’re frequent. Three broad core updates rolled out in 2025 alone. The advantage now comes from consistent execution, not post-update recovery tactics.

Paid Search, Automation, and Audience Control

Paid media keeps moving toward automation. January showed where control still exists and where it doesn’t.

Using Google’s PMax Channel Report More Strategically

The Performance Max Channel Performance Report keeps evolving. You can now see performance broken down across Search, YouTube, Display, Discover, Gmail, and Maps.

The PMAX Channel Performance Report.

You still can’t control bids or exclusions at a granular level. What you can control is budget pressure. One surface consistently underperforming? Budget becomes your corrective lever. Pull back overall spend and PMax reallocates to better-performing channels automatically.

Teams that review this report monthly make better creative and investment decisions. Track this data over time. Patterns emerge. You start understanding which channels deliver at which funnel stages, even inside automation.

Google Drops Audience Size Minimums

Google lowered minimum audience size thresholds to 100 users across Search, Display, and YouTube. Previous minimums ranged from 1,000 users down to a few hundred depending on network and list type.

This opens doors for smaller advertisers and niche segments. Remarketing lists, CRM uploads, and custom audiences that previously failed minimums now become usable.

Smart teams will use this to test tighter segmentation strategies. But don’t chase volume that isn’t there. A 100-user audience won’t scale into a growth channel overnight.

Bing Tests Google-Style Ad Grouping

A Bing Ad Example.

Bing briefly tested a sponsored results format similar to Google’s recent changes. Multiple ads grouped under a single label, with only the first result carrying an ad marker.

The test ended quickly, but the signal matters. Search platforms are converging on similar layouts. How ads appear now affects click quality and intent, not just click-through rate.

Social Platforms and Performance Content

Social platforms spent January rewarding clarity while punishing shortcuts.

Reddit Launches Max Campaigns

Reddit introduced Max Campaigns, an automated ad product handling targeting, placements, creative, and budget allocation in real-time.

What stands out is visibility. Reddit surfaces audience personas and engagement insights that most automated systems hide. Early testers report 27% more conversions and 17% lower CPA on average.

Testing works best when anchored to existing campaigns. Replicate your best-performing Reddit campaign as a Max Campaign. Let automation prove efficiency gains with known benchmarks.

Instagram Caps Hashtags

Instagram rolled out a five-hashtag limit across posts and reels. This confirms discovery on Instagram is driven by AI-based content understanding, not hashtag volume.

Hashtags now function like keywords. They clarify intent and help Instagram’s systems categorize content. They don’t manufacture reach.

Captions, on-screen text, subtitles, and visuals do the heavy lifting. Choose five hashtags that directly describe your content. Mix specificity levels: one broad category tag, two niche topic tags, one community hashtag, one branded hashtag.

LinkedIn Shares Performance Guidance for 2026

LinkedIn reiterated that human perspective drives performance. Video continues outperforming other formats. Hashtags do not impact distribution. Automated engagement and content pods face increased scrutiny.

Posting two to five times per week remains effective. AI can support thinking, but content still needs lived experience and clear points of view.

Brand Visibility, Authority, and Demand Measurement in an AI Era

AI-driven discovery is reshaping how brands get surfaced and evaluated.

What AI Search Means for Your Business

AI-generated summaries and zero-click experiences shape early discovery now. Users often form opinions before visiting a site. Google’s AI Overviews, ChatGPT’s SearchGPT, and Perplexity answer questions directly, compressing or eliminating the need to click through.

AI favors brands with clear expertise, structured content, and external validation. Generic explanations get compressed into summaries that strip away brand identity. Thin content disappears entirely.

Optimization now includes being understandable and credible to machines, not just persuasive to human readers. That means structured data markup, clear content hierarchy, author credentials, and topical authority signals.
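As a minimal sketch of what “understandable to machines” can look like in practice, here is Article structured data with a named, credentialed author (every name, date, and URL below is a placeholder):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "The Future of Content Marketing: A 2026 Guide",
  "datePublished": "2026-01-15",
  "author": {
    "@type": "Person",
    "name": "Jane Smith",
    "jobTitle": "Head of Content",
    "url": "https://example.com/authors/jane-smith"
  }
}
</script>
```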

Share of Search Becomes a Core KPI

As AI reduces click-through rates, traffic becomes a weaker signal of demand. Share of search fills that gap.

It measures how often people look for your brand compared to competitors. That correlates strongly with market share and future growth. Brands with rising share of search typically see revenue growth follow within quarters, even if organic traffic stays flat.

Calculate share of search by tracking branded search volume for your brand and key competitors over time. Tools like Google Trends, Semrush, or Ahrefs make this accessible.
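The calculation itself is simple. As a sketch with made-up monthly branded search volumes (the function name and numbers are ours, not from any tool):

```python
def share_of_search(brand_volumes: dict[str, int]) -> dict[str, float]:
    """Return each brand's share of total branded search volume, in percent."""
    total = sum(brand_volumes.values())
    return {brand: round(100 * vol / total, 1) for brand, vol in brand_volumes.items()}

# Hypothetical monthly branded search volumes pulled from a tool like Google Trends
volumes = {"YourBrand": 40_000, "CompetitorA": 35_000, "CompetitorB": 25_000}
print(share_of_search(volumes))
# {'YourBrand': 40.0, 'CompetitorA': 35.0, 'CompetitorB': 25.0}
```

Track these shares monthly; the trend line matters more than any single month’s number.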

Digital PR Matters More Than Ever

AI systems recommend brands they recognize and trust. That trust is built off-site, not through on-page optimization.

Authoritative mentions, expert commentary, and credible coverage now influence visibility across AI-driven experiences. Links still matter, but reputation matters more.

PR, SEO, and content strategy can no longer operate independently. Authority compounds when they align. If you’re not investing in Digital PR alongside traditional SEO, you’re optimizing for a search ecosystem that’s rapidly shrinking.

Video, CTV, and Cross-Screen Media Strategy

Video buying is consolidating across screens.

Amazon Emerges as a Cross-Screen Advertising Player

Amazon is positioning itself as a unified advertising ecosystem across Prime Video, live sports, audio, and programmatic inventory. Layered with first-party shopper data, this creates a powerful performance and measurement advantage traditional TV buyers can’t match.

Amazon now competes higher in the funnel through premium video and live sports while retaining lower-funnel accountability through its commerce data. Interactive features let you add “add to cart” overlays directly in OTT video ads.

CTV Breaks the 30-Second Format

Streaming dominates TV consumption. Ad formats are finally catching up. Interactive and nontraditional CTV units are gaining traction, supported by early standardization efforts from IAB Tech Lab.

Traditional 15- and 30-second spots still work, but they blend into an increasingly crowded environment. Emerging formats offer differentiation in lower-clutter streaming contexts.

Brands that test early build creative and performance advantages before these formats normalize and competition increases.

Pinterest Acquires tvScientific

Pinterest’s acquisition of tvScientific connects intent-driven discovery with CTV buying. This closes a long-standing measurement gap between inspiration and awareness channels.

For brands rooted in discovery—home decor, fashion, food, travel, DIY, beauty—this creates a clearer path from interest to action.

Brand-Led Attention and Influence at Scale

Attention increasingly flows through people, communities, and culture-driven media.

Unilever’s Influencer Expansion

Unilever announced plans to work with 20 times more influencers and shift half its ad budget to social. This isn’t a test. It’s a structural reallocation signaling influencer marketing has reached enterprise maturity.

Unilever’s SASSY framework now activates nearly 300,000 creators. The company reported category-wide outperformance, attributing significant gains to influencer-driven campaigns.

Brands still treating creators as side projects will struggle to compete against organizations running influencer programs with the same rigor and budget as paid search or programmatic display.

Google’s AI Is Deleting Reviews

Google’s AI moderation is removing reviews at scale, including legitimate ones, often without notice. Business owners report hundreds of reviews disappearing overnight.

That affects rankings, conversion rates, and consumer trust. Reputation strategy now includes monitoring review loss, not just tracking new reviews.

Check your Google Business Profile weekly. Document total review count and average rating. When drops occur, investigate patterns. Better yet, diversify review platforms beyond Google.

Experimentation and Growth Discipline

Sustainable growth depends on knowing why a test exists before judging its outcome.

Growth vs Optimization: Drawing the Line

Growth experiments explore new opportunities. Optimization improves what already works. Blurring the two creates misaligned expectations and poor decision-making.

Clear intent leads to clearer measurement and stronger buy-in. Teams that label tests correctly scale with more confidence.

What Digital Marketers Should Take Forward

Platforms are clarifying rules. AI rewards authority and consistency. Measurement is shifting away from clicks alone.

The advantage in 2026 comes from alignment across teams and channels. Durable signals outperform clever workarounds.

Indexing logic must live in base HTML. Performance Max channel reporting is essential. Share of search reveals momentum. Digital PR impacts AI visibility. Influencer marketing reached enterprise maturity. Review monitoring must track losses.

This is the work we focus on every day at NP Digital.

If you want help aligning fundamentals across SEO, paid media, content, and PR in a way that compounds over time, let’s talk.


AI Hallucinations, Errors, and Accuracy: What the Data Shows

AI hallucinations became a headline story when Google’s AI Overviews told people that cats can teleport and suggested eating rocks for health.

Those bizarre moments spread fast because they’re easy to point at and laugh about.

But that’s not the kind of AI hallucination most marketers deal with. The tools you probably use, like ChatGPT or Claude, likely won’t produce anything that bizarre. Their misses are sneakier, like outdated numbers or confident explanations that fall apart once you start looking under the hood.

In a fast-moving industry like digital marketing, it’s easy to miss those subtle errors. 

This made us curious: How often is AI actually getting it wrong? What types of questions trip it up? And how are marketers handling the fallout?

To find out, we tested 600 prompts across major large language model (LLM) platforms and surveyed 565 marketers to understand how often AI gets things wrong. You’ll see how these mistakes show up in real workflows and what you can do to catch hallucinations before they hurt your work.

Key Takeaways

  • Nearly half of marketers (47.1 percent) encounter AI inaccuracies several times a week, and over 70 percent spend hours fact-checking each week.
  • More than a third (36.5 percent) say hallucinated or incorrect AI content has gone live publicly, most often due to false facts, broken source links, or inappropriate language.
  • In our LLM test, ChatGPT had the highest accuracy (59.7 percent), but even the best models made errors, especially on multi-part reasoning, niche topics, or real-time questions.
  • The most common hallucination types were fabrication, omission, outdated info, and misclassification—often delivered with confident language.
  • Despite knowing about hallucinations, 23 percent of marketers feel confident using AI outputs without review. Most teams add extra approval layers or assign dedicated fact-checkers to their processes.

What Do We Know About AI Hallucinations and Errors?

An AI hallucination happens when a model gives you an answer that sounds correct but isn’t. We’re talking about made-up facts or claims that don’t stand up to fact-checking or a quick Google search.

And they’re not rare.

In our research, more than a third of marketers (36.5 percent) say hallucinated or false information has slipped past review and gone public. These errors come in a few common forms:

  • Fabrication: The AI simply makes something up.
  • Omission: It skips critical context or details.
  • Outdated info: It shares data that’s no longer accurate.
  • Misclassification: It answers the wrong question, or only part of it.
A graphic showing common AI Hallucination Types

Hallucinations tend to happen when prompts are too vague or require multi-step reasoning. Sometimes the AI model tries to fill the gaps with whatever seems plausible.

AI hallucinations aren’t new, but our dependence on these tools is. As they become part of everyday workflows, the cost of a single incorrect answer increases.

Once you recognize the patterns behind these mistakes, you can catch them early and keep them out of your content.

AI Hallucination Examples

AI hallucinations can be ridiculous or dangerously subtle. These real AI hallucination examples give you a sense of the range:

  • Fabricated legal citations: Recent reporting shows a growing number of lawyers relying on AI-generated filings, only to learn that the cases or citations don’t exist. Courts are now flagging these hallucinations at an alarming rate.
  • Health misinformation: Revisiting our example from earlier, Google’s AI Overviews once claimed eating rocks had health benefits in an error that briefly went viral.
  • Fake academic references: Some LLMs will list fake studies or broken source links if asked for citations. A peer-reviewed Nature study found that ChatGPT frequently produced academic citations that look legitimate but reference papers that don’t exist.
  • Factual contradictions: Some tools have answered simple yes/no questions with completely contradictory statements in the same paragraph.
  • Outdated or misattributed data: Models can pull statistics from the wrong year or tie them to the wrong sources. And that creates problems once those numbers sneak into presentations or content.

Our Surveys/Methodology

To get a clear picture of how AI hallucinations show up in real-world marketing work, we pulled data from two original sources:

  1. Marketers survey: We surveyed 565 U.S.-based digital marketers using AI in their workflows. The questions covered how often they spot errors, what kinds of mistakes they see, and how their teams are adjusting to AI-assisted content. We also asked about public slip-ups, trust in AI, and whether they want clearer industry standards.
  2. LLM accuracy test: We built a set of 600 prompts across five categories: SEO/marketing, general business, industry-specific verticals, consumer queries, and control questions with a known correct answer. We then tested them across six major AI platforms: ChatGPT, Gemini, Claude, Perplexity, Grok, and Copilot. Humans graded each output, classifying them as fully correct, partially correct, or incorrect. For partially correct or incorrect outputs, we also logged the error type (omission, outdated info, fabrication, or misclassification).

For this report, we focused only on text-based hallucinations and content errors, not visual or video generation. The insights that follow combine both data sets to show how hallucinations happen and what marketers should watch for across tools and task types.
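
As a rough illustration of how graded outputs like these can be tallied into per-model accuracy numbers, here is a minimal Python sketch. The field names and grade labels are illustrative, not the study's actual schema:

```python
from collections import Counter, defaultdict

def accuracy_report(graded):
    """Summarize human grades into per-model percentages.

    graded: list of dicts like
    {"model": "ChatGPT", "grade": "correct" | "partial" | "incorrect"}
    """
    by_model = defaultdict(Counter)
    for row in graded:
        by_model[row["model"]][row["grade"]] += 1

    report = {}
    for model, counts in by_model.items():
        total = sum(counts.values())
        report[model] = {
            "fully_correct_pct": round(100 * counts["correct"] / total, 1),
            "error_pct": round(100 * counts["incorrect"] / total, 1),
        }
    return report
```

With enough graded rows, a table like the one in the findings below falls out of a summary like this.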

How AI Hallucinations and Errors Impact Digital Marketers

A graphic that shows how often Marketers Encounter AI Errors.

We asked marketers how AI errors show up in their work, and the results were clear: Hallucinations are far from a rarity.

Nearly half of marketers (47.1 percent) encounter AI inaccuracies multiple times a week. And more than 70 percent say they spend one to five hours each week just fact-checking AI-generated output. That’s a lot of time spent fixing “helpful” content.

Those misses don’t always stay hidden. 

More than a third (36.5 percent) say hallucinated content has made it all the way to the public. Another 39.8 percent have had close calls where bad AI info almost went live. 

And it’s not just teams spotting the problems. More than half of marketers (57.7 percent) say clients or stakeholders have questioned the quality of AI-assisted outputs.

These aren’t minor formatting issues, either. When mistakes make it through, the most common offenders are:

  • Inappropriate or brand-unsafe content (53.9 percent)
  • Completely false or hallucinated information (43.5 percent)
  • Formatting glitches that break the user experience (42.5 percent)

So where does it break down?

AI errors are most common in tasks that require structure or precision. Here are the daily error rates by task:

  • HTML or schema creation: 46.2 percent
  • Full content writing: 42.7 percent
  • Reporting and analytics: 34.2 percent

Brainstorming and idea generation had far fewer issues, each landing at roughly 25 percent.

A graphic showing where marketers encounter AI errors most often.

When we looked at confidence levels, only 23 percent of marketers felt fully comfortable using AI output without review. The rest? They were either cautious or not confident at all.

Teams hit hardest by public-facing AI mistakes include:

  • Digital PR (33.3 percent)
  • Content marketing (20.8 percent)
  • Paid media (17.8 percent)
A graphic showing teams most affected by public AI mistakes.

These are the same departments most likely to face direct brand damage when AI gets it wrong.

AI can save you time, but it also creates a lot of cleanup without checks in place. And most marketers feel the pressure to catch hallucinations before clients or customers do.

AI Hallucinations and Errors: How Do the Top LLMs Stack Up?

To figure out how often leading AI platforms hallucinate, we tested 600 prompts across six major models: ChatGPT, Claude, Gemini, Perplexity, Grok, and Copilot.

Each model received the same set of queries across five categories: marketing/SEO, general business, industry-specific use cases, consumer questions, and fact-checkable control prompts. Human reviewers graded each response for accuracy and completeness.

Here’s how they performed:

  • ChatGPT delivered the highest percentage of fully correct answers at 59.7 percent, with the lowest rate of serious hallucinations. Most of its mistakes were subtle, like misinterpreting the question rather than fabricating facts.
  • Claude was the most consistent. While it scored slightly lower on fully correct responses (55.1 percent), it had the lowest overall error rate at just 6.2 percent. When it missed, it usually left something out rather than getting it wrong.
  • Gemini performed well on simple prompts (51.3 percent fully correct) but tended to skip over complex or multi-step answers. Its most common error was omission.
  • Perplexity showed strength in fast-moving fields like crypto and AI, thanks to its strong real-time retrieval features. But that speed came with risk: 12.2 percent of responses were incorrect, often due to misclassifications or minor fabrications.
  • Copilot sat in the middle of the pack. It gave safe, brief answers. While that’s good for overviews, it often misses the deeper context.
  • Grok struggled across the board. It had the highest error rate at 21.8 percent and the lowest percentage of fully correct answers (39.6 percent). Hallucinations, contradictions, and vague outputs were common.
A graphic showing how major LLMs performed in our 600-prompt accuracy test.
A graphic showing most common error types across models.

So, what does this mean for marketers?

Well, most teams aren’t expecting perfection. According to our survey, 77.7 percent of marketers will accept some level of AI inaccuracy, likely because the speed and efficiency gains still outweigh the cleanup.

The takeaway isn’t that one model is flawless. It’s that every tool has its strengths and weaknesses. Knowing each platform’s tendencies helps you know when (and how) to pull a human into the loop and what to be on guard against.

What Question Types Gave LLMs the Most Trouble?

Some questions are harder for AI to handle than others. In our testing, three prompt types consistently tripped up all the models, regardless of how accurate they were overall:

  • Multi-part prompts: When asked to explain a concept and give an example, many tools did only half the job. They either defined the term or gave an example, but not both. This was a common source of partial answers and context gaps.
  • Recently updated or real-time topics: If the ask was about something that changed in the last few months (like a Google algorithm update or an AI model release), responses were often inaccurate or completely fabricated. Some tools made confident claims using outdated info that sounded fresh.
  • Niche or domain-specific questions: Verticals like crypto, legal, SaaS, or even SEO created problems for most LLMs. In these cases, tools either made up terminology or gave vague responses that missed key industry context.

Even models like Claude and ChatGPT, which scored relatively high for accuracy, showed cracks when asked to handle layered prompts that required nuance or specialized knowledge.

Knowing which types of prompts increase the risk of hallucination is the first step in writing better ones and catching issues before they cost you.

AI Hallucination Tells to Look Out For

AI hallucinations don’t always scream “wrong.” In fact, the most dangerous ones sound reasonable (at least until you check the details). Still, there are patterns worth watching for.

Here are the red flags that showed up most often across the models we tested:

  • No source, or a broken one: If an AI gives you a link, check it. A lot of hallucinated answers include made-up or outdated citations that don’t exist when you click.
  • Answers to the wrong questions: Some models misinterpret the prompt and go off in a related (but incorrect) direction. If the response feels slightly off topic, dig deeper.
  • Big claims with no specifics: Watch for sweeping statements without specific stats or dates. That’s often a sign it’s filling in blanks with plausible-sounding fluff.
  • Stats with no attribution: Hallucinated numbers are a common issue. If the stat sounds surprising or overly convenient, verify it with a trusted source.
  • Contradictions inside the same answer: We experienced cases where an AI said one thing in the first paragraph and contradicted itself by the end. That’s a major warning sign.
  • “Real” examples that don’t exist: Some hallucinations involve fake product names, companies, case studies, or legal precedents. These details feel legit, but a quick search reveals no facts to verify these claims.
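
To make this checklist concrete, here is a minimal, hypothetical Python scanner built around a few of these tells. The heuristics are illustrative only: a hit means "look closer," not "this is a hallucination."

```python
import re

def red_flags(text):
    """Flag text for human review based on a few hallucination tells."""
    flags = []

    # Statistics with no nearby attribution ("according to", a study, a link)
    has_stat = re.search(r"\d+(?:\.\d+)?\s*(?:%|percent)", text)
    has_attribution = re.search(r"according to|source|study|https?://", text, re.I)
    if has_stat and not has_attribution:
        flags.append("unattributed statistic")

    # Sweeping, hedge-free claims
    if re.search(r"\b(?:always|never|guaranteed|every single)\b", text, re.I):
        flags.append("sweeping claim")

    # Any link should be click-checked by a human before publishing
    for url in re.findall(r"https?://\S+", text):
        flags.append(f"verify link: {url}")

    return flags
```

A scanner like this won't catch a confident contradiction or a plausible fake case study; those still need a human reader.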

The more complex your prompt, the more important it is to sanity-check the output. If something feels even slightly off, assume it’s worth a second look. After all, subtle hallucinations are the ones most likely to slip through the cracks.

Best Practices for Avoiding AI Hallucinations and Errors

You can’t eliminate AI hallucinations completely, but you can make it a lot less likely they slip through. Here’s how to stay ahead of the risk:

  • Always request and verify sources: Some models will confidently provide links that look legit but don’t exist. Others reference real studies or stats, but take them out of context. Before you copy/paste, click through. This matters even more for AI SEO work, where accuracy and citation quality directly affect rankings and trust.
  • Fine-tune your prompts: Vague prompts are hallucination magnets, so be clear about what you want the model to reference or avoid. That might mean building prompt template libraries or using follow-up prompts to guide models more effectively. That’s exactly what LLM optimization (LLMO) focuses on.
  • Assign a dedicated fact-checker: Our survey results showed this to be one of the most effective internal safeguards. Human review might take more time, but it’s how you keep hallucinated claims from damaging trust or a brand’s credibility.
  • Set clear internal guidelines: Many teams now treat AI like a junior content assistant: It can draft, synthesize, and suggest, but humans own the final version. That means reviewing and fact-checking outputs and correcting anything that doesn’t hold up. This approach lines up with the data. Nearly half (48.3 percent) of marketers support industry-wide standards for responsible AI use.
  • Add a final review layer every time: Even fast-moving brands are building in one more layer of review for AI-assisted work. In fact, the most common adjustment marketers reported making was adding a new round of content review to catch AI errors. That said, 23 percent of respondents reported skipping human review if they trust the tool enough. That’s a risky move.
  • Don’t blindly trust brand-safe output: AI can sound polished even when it’s wrong. In our LLM testing, some of the most confidently written outputs were factually incorrect or missing key context. If it feels too clean, double-check it.
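
For the prompt fine-tuning point above, a prompt template library can be as simple as a few parameterized strings the team has reviewed once and reuses everywhere. A minimal sketch (the template wording and names are our own, not from the survey):

```python
# A tiny, hypothetical prompt-template library. Reviewed templates reduce the
# vague prompts that act as hallucination magnets.
FACT_CHECKED_DRAFT = """\
You are drafting marketing copy about {topic}.
Rules:
- Cite a named, linkable source for every statistic you include.
- If you are unsure a fact is current as of {as_of}, say so instead of guessing.
- Do not invent product names, studies, quotes, or URLs.
"""

def build_prompt(topic, as_of):
    """Fill a reusable template instead of ad-libbing each prompt."""
    return FACT_CHECKED_DRAFT.format(topic=topic, as_of=as_of)
```

The value is less in the code than in the process: templates get fact-checking rules into every prompt by default.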

FAQs

What are AI hallucinations?

AI hallucinations occur when an AI tool gives you an answer that sounds accurate, but it’s not. These mistakes can include made-up facts, fake citations, or outdated info packaged in confident language.

Why does AI hallucinate?

AI models don’t “know” facts. They generate responses based on patterns in the data they were trained on. When there’s a gap or ambiguity, the model fills it in with what sounds most likely (even if it’s completely wrong).

What causes AI hallucinations?

Hallucinations usually happen when prompts are vague, complex, or involve topics the model hasn’t seen enough data on. They’re also more common in fast-changing fields like SEO and crypto.

Can you stop AI from hallucinating?

Not entirely. Even the best models make things up sometimes. That’s because LLMs are built to generate language, not verify facts. Occasional hallucinations are baked into how they work.

How can you reduce AI hallucinations?

Use more specific prompts, request citation sources, and always double-check the output for accuracy. Add a human review step before anything goes live. The more structure and context you give the AI, the fewer hallucinations you’ll run into.

Conclusion

AI is powerful, but it’s not perfect. 

Our research shows that hallucinations happen regularly, even with the best tools. From made-up stats to misinterpreted prompts, the risks are real. That’s especially the case for fast-moving marketers.

If you’re using AI to create content or guide strategy, knowing where these tools fall short is like a cheat code. 

The best defense? Smarter prompts, tighter reviews, and clear internal guidelines that treat AI as a co-pilot (not the driver).

Want help building a more reliable AI workflow? Talk to our team at NP Digital if you’re ready to scale content without compromising accuracy. Also, you can check out the full report here on the NP Digital website.

Microsoft launches Publisher Content Marketplace for AI licensing

Microsoft Advertising today launched the Publisher Content Marketplace (PCM), a system that lets publishers license premium content to AI products and get paid based on how that content is used.

How it works. PCM creates a direct value exchange. Publishers set licensing and usage terms, while AI builders discover and license content for specific grounding scenarios. The marketplace also includes usage-based reporting, giving publishers visibility into how their content performs and where it creates the most value.

Designed to scale. PCM is designed to avoid one-off licensing deals between individual publishers and AI providers. Participation is voluntary, ownership remains with publishers, and editorial independence stays intact. The marketplace supports everyone from global publishers to smaller, specialized outlets.

Why we care. As AI systems shift from answering questions to making decisions, content quality matters more than ever. As agents increasingly guide purchases, finance, and healthcare choices, ads and sponsored messages will sit alongside — or draw from — premium content rather than generic web signals. That raises the bar for credibility and points to a future where brand alignment with trusted publishers and AI ecosystems directly impacts performance.

Early traction. Microsoft Advertising co-designed PCM with major U.S. publishers, including Business Insider, Condé Nast, Hearst, The Associated Press, USA TODAY, and Vox Media. Early pilots grounded Microsoft Copilot responses in licensed content, with Yahoo among the first demand partners now onboarding.

What’s next. Microsoft plans to expand the pilot to more publishers and AI builders that share a core belief: as the AI web evolves, high-quality content should be respected, governed, and paid for.

The big picture. In an agentic web, AI tools increasingly summarize, reason, and recommend through conversation. Whether the topic is medical safety, financial eligibility, or a major purchase, outcomes depend on access to trusted, authoritative sources — many of which sit behind paywalls or in proprietary archives.

The tension. The traditional web bargain was simple: publishers shared content, and platforms sent traffic back. That model breaks down when AI delivers answers directly, cutting clicks while still depending on premium content to perform well.

Bottom line. If AI is going to make better decisions, it needs better inputs — and PCM is Microsoft’s bet that a sustainable content economy can power the next phase of the agentic web.

Microsoft’s announcement. Building Toward a Sustainable Content Economy for the Agentic Web

Inspiring examples of responsible and realistic vibe coding for SEO

Vibe coding is a new way to create software using AI tools such as ChatGPT, Cursor, Replit, and Gemini. It works by describing to the tool what you want in plain language and receiving written code in return. You can then simply paste the code into an environment (such as Google Colab), run it, and test the results, all without ever actually programming a single line of code.

Collins Dictionary named “vibe coding” word of the year in 2025, defining it as “the use of artificial intelligence prompted by natural language to write computer code.”

In this guide, you’ll understand how to start vibe coding, learn its limitations and risks, and see examples of great tools created by SEOs to inspire you to vibe code your own projects.

Vibe coding variations

While “vibe coding” is used as an umbrella term, there are subsets of coding with AI support, including the following:

  • AI-assisted coding: AI helps write, refactor, explain, or debug code. Used by actual developers or engineers to support their complex work. Tools: GitHub Copilot, Cursor, Claude, Google AI Studio.
  • Vibe coding: Platforms that handle everything except the prompt/idea; AI does most of the work. Tools: ChatGPT, Replit, Gemini, Google AI Studio.
  • No-code platforms: Platforms that handle everything you ask (“drag and drop” visual updates while the code happens in the background). They tend to use AI but existed long before AI became mainstream. Tools: Notion, Zapier, Wix.

We’ll focus exclusively on vibe coding in this guide. 

With vibe coding, while there’s a bit of manual work to be done, the barrier is still low — you basically need a ChatGPT account (free or paid) and a Google account (free). Depending on your use case, you might also need API access or SEO tool subscriptions such as Semrush or Screaming Frog.

To set expectations: by the end of this guide, you’ll know how to run a small program in the cloud. If you expect to build a SaaS product or software to sell, AI-assisted coding is the more reasonable route, though it involves costs and deeper coding knowledge.

Vibe coding use cases

Vibe coding is great when you’re trying to find outcomes for specific buckets of data, such as finding related links, adding pre-selected tags to articles, or doing something fun where the outcome doesn’t need to be exact.

For example, I’ve built an app to create a daily drawing for my daughter. I type a phrase about something that she told me about her day (e.g., “I had carrot cake at daycare”). The app has some examples of drawing styles I like and some pictures of her. The outputs (drawings) are the final work as they come from AI.

When I ask for specific changes, however, the program tends to worsen and redraw things I didn’t ask for. I once asked to remove a mustache and it recolored the image instead. 

If my daughter were a client who’d scrutinize the output and require very specific changes, I’d need someone who knows Photoshop or similar tools to make specific improvements. In this case, though, the results are good enough. 

Building commercial applications solely on vibe coding may require a company to hire vibe coding cleaners. However, for a demo, MVP (minimum viable product), or internal applications, vibe coding can be a useful, effective shortcut. 

How to create your SEO tools with vibe coding

Using vibe coding to create your own SEO tools requires three steps:

  1. Write a prompt describing your code
  2. Paste the code into a tool such as Google Colab
  3. Run the code and analyze the results

Here’s a prompt example for a tool I built to map related links at scale. After crawling a website using Screaming Frog and extracting vector embeddings (using the crawler’s integration with OpenAI), I vibe coded a tool that would compare the topical distance between the vectors in each URL.

This is exactly what I wrote on ChatGPT:

I need a Google Colab code that will use OpenAI to:

Check the vector embeddings existing in column C. Use cosine similarity to match with two suggestions from each locale (locale identified in Column A). 

The goal is to find which pages from each locale are the most similar to each other, so we can add hreflang between these pages.

I’ll upload a CSV with these columns and expect a CSV in return with the answers.

Then I pasted the code ChatGPT created into Google Colab, a free Jupyter Notebook environment that lets you write and execute Python code in a web browser. It’s important to run your program by clicking “Run all” in Google Colab to test whether the output does what you expected.
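
For illustration, the generated Colab code for a task like this might look roughly like the sketch below. This is a hypothetical reconstruction (with column handling simplified to locale/url/embedding fields), not the exact code ChatGPT produced:

```python
import numpy as np
import pandas as pd

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def cross_locale_matches(df, top_k=2):
    """For each page, find the top_k most similar pages from every other locale.

    Expects columns: locale, url, embedding (a list of floats).
    Returns a DataFrame of hreflang candidates with similarity scores.
    """
    rows = []
    for _, page in df.iterrows():
        others = df[df["locale"] != page["locale"]]
        for locale, group in others.groupby("locale"):
            scored = sorted(
                ((cosine_sim(page["embedding"], row["embedding"]), row["url"])
                 for _, row in group.iterrows()),
                reverse=True,
            )
            for score, url in scored[:top_k]:
                rows.append({"url": page["url"], "match_locale": locale,
                             "match_url": url, "similarity": round(score, 3)})
    return pd.DataFrame(rows)
```

In the real workflow, the input DataFrame would come from the Screaming Frog CSV upload and the output would be written back to CSV for download.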

This is how the process works on paper. Like everything in AI, it may look perfect, but it’s not always functioning exactly how you want it. 

You’ll likely encounter issues along the way — luckily, they’re simple to troubleshoot.

First, be explicit about the platform you’re using in your prompt. If it’s Google Colab, say the code is for Google Colab. 

You might still end up with code that requires packages that aren’t installed. In this case, just paste the error into ChatGPT and it’ll likely regenerate the code or find an alternative. You don’t even need to know what the package is, just show the error and use the new code. Alternatively, you can ask Gemini directly in your Google Colab to fix the issue and update your code directly.

AI tends to be very confident about anything and could return completely made-up outputs. One time I forgot to say the source data would come from a CSV file, so it simply created fake URLs, traffic, and graphs. Always check and recheck the output because “it looks good” can sometimes be wrong.

If you’re connecting to an API, especially a paid API (e.g., from Semrush, OpenAI, Google Cloud, or other tools), you’ll need to request your own API key and keep in mind usage costs. 

Should you want an even lower execution barrier than Google Colab, you can try using Replit. 

Simply prompt your request and the software will create the code, design, and allow testing all on the same screen. This means a lower chance of coding errors, no copy and paste, and a URL you can share right away with anyone to see your project built with a nice design. (You should still check for poor outputs and iterate with prompts until your final app is built.)

Keep in mind that while Google Colab is free (you’ll only spend if you use API keys), Replit charges a monthly subscription and per-usage fee on APIs. So the more you use an app, the more expensive it gets.

Inspiring examples of SEO vibe-coded tools

While Google Colab is the most basic (and easy) way to vibe code a small program, some SEOs are taking vibe coding even further by creating programs that are turned into Chrome extensions, Google Sheets automation, and even browser games.

The goal behind highlighting these tools is not only to showcase great work by the community, but also to inspire, build, and adapt to your specific needs. Do you wish any of these tools had different features? Perhaps you can build them for yourself — or for the world.

GBP Reviews Sentiment Analyzer (Celeste Gonzalez)

After vibe coding some SEO tools on Google Colab, Celeste Gonzalez, Director of SEO Testing at RicketyRoo Inc, took her vibing skills a step further and created a Chrome extension. “I realized that I don’t need to build something big, just something useful,” she explained.

Her browser extension, the GBP Reviews Sentiment Analyzer, summarizes sentiment analysis for reviews over the last 30 days and review velocity. It also allows the information to be exported into a CSV. The extension works on Google Maps and Google Business Profile pages.

Instead of ChatGPT, Celeste used a combination of Claude (to create high-quality prompts) and Cursor (to paste the created prompts and generate the code).

AI tools used: Claude (Sonnet 4.5 model) and Cursor 

APIs used: Google Business Profile API (free)

Platform hosting: Chrome Extension

Knowledge Panel Tracker (Gus Pelogia)

I became obsessed with the Knowledge Graph in 2022, when I learned how to create and manage my own knowledge panel. Since then, I found out that Google has a Knowledge Graph Search API that allows you to check the confidence score for any entity.

This vibe-coded tool checks the score for your entities daily (or at any frequency you want) and returns it in a sheet. You can track multiple entities at once and just add new ones to the list at any time.

The Knowledge Panel Tracker runs completely on Google Sheets, and the Knowledge Graph Search API is free to use. This guide shows how to create and run it in your own Google account, or you can see the spreadsheet here and just update the API key under Extensions > App Scripts. 
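
The tracker itself runs as Apps Script inside Google Sheets, but the underlying API call is easy to sketch in Python. The Knowledge Graph Search API endpoint and its resultScore field are real; the helper names below are ours, and a real API key is required for the live call:

```python
import json
import urllib.parse
import urllib.request

KG_ENDPOINT = "https://kgsearch.googleapis.com/v1/entities:search"

def parse_score(payload):
    """Pull the top result's confidence score from a KG Search API response."""
    items = payload.get("itemListElement", [])
    return items[0].get("resultScore") if items else None

def kg_confidence(query, api_key):
    """Fetch the confidence score for an entity via the KG Search API."""
    url = KG_ENDPOINT + "?" + urllib.parse.urlencode(
        {"query": query, "key": api_key, "limit": 1})
    with urllib.request.urlopen(url) as resp:
        return parse_score(json.load(resp))
```

Scheduling that call daily and appending the score to a sheet row is all the tracker really does.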

AI models used: ChatGPT 5.1

APIs used: Google Knowledge Graph API (free)

Platform hosting: Google Sheets

Inbox Hero Game (Vince Nero)

How about vibe coding a link building asset? That’s what Vince Nero from BuzzStream did when creating the Inbox Hero Game. It requires you to use your keyboard to accept or reject a pitch within seconds. The game is over if you accept too many bad pitches.

Inbox Hero Game is certainly more complex than running a piece of code on Google Colab, and it took Vince about 20 hours to build it all from scratch. “I learned you have to build things in pieces. Design the guy first, then the backgrounds, then one aspect of the game mechanics, etc.,” he said.

The game was coded in HTML, CSS, and JavaScript. “I uploaded the files to GitHub to make it work. ChatGPT walked me through everything,” Vince explained.

According to him, the longer the prompt continued, the less effective ChatGPT became, “to the point where [he’d] have to restart in a new chat.” 

This issue was one of the hardest and most frustrating parts of creating the game. Vince would add a new feature (e.g., score), and ChatGPT would “guarantee” it found the error, update the file, but still return with the same error. 

In the end, Inbox Hero Game is a fun game that demonstrates it’s possible to create a simple game without coding knowledge, yet taking steps to perfect it would be more feasible with a developer.

AI models used: ChatGPT

APIs used: None

Platform hosting: Webpage

Vibe coding with intent

Vibe coding won’t replace developers, and it shouldn’t. But as these examples show, it can responsibly unlock new ways for SEOs to prototype ideas, automate repetitive tasks, and explore creative experiments without heavy technical lift. 

The key is realism: Use vibe coding where precision isn’t mission-critical, validate outputs carefully, and understand when a project has outgrown “good enough” and needs additional resources and human intervention.

When approached thoughtfully, vibe coding becomes less about shipping perfect software and more about expanding what’s possible — faster testing, sharper insights, and more room for experimentation. Whether you’re building an internal tool, a proof of concept, or a fun SEO side project, the best results come from pairing curiosity with restraint.

LinkedIn: AI-powered search cut traffic by up to 60%

AI-powered search gutted LinkedIn’s B2B awareness traffic. Across a subset of topics, non-brand organic visits fell by as much as 60% even while rankings stayed stable, the company said.

  • LinkedIn is moving past the old “search, click, website” model and adopting a new framework: “Be seen, be mentioned, be considered, be chosen.”

By the numbers. In a new article, LinkedIn said its B2B organic growth team started researching Google’s Search Generative Experience (SGE) in early 2024. By early 2025, when SGE evolved into AI Overviews, the impact became significant.

  • Non-brand, awareness-driven traffic declined by up to 60% across a subset of B2B topics.
  • Rankings stayed stable, but click-through rates fell (by an undisclosed amount).

Yes, but. LinkedIn’s “new learnings” are more like a rehash of established SEO/AEO best practices. Here’s what LinkedIn’s content-level guidance consists of:

  • Use strong headings and a clear information hierarchy.
  • Improve semantic structure and content accessibility.
  • Publish authoritative, fresh content written by experts.
  • Move fast, because early movers get an edge.

Why we care. These tactics should all sound familiar. These are technical SEO and content-quality fundamentals. LinkedIn’s article offers little new in terms of tactics. It’s just updated packaging for modern SEO/AEO and AI visibility.

Dig deeper. How to optimize for AI search: 12 proven LLM visibility tactics

Measurement is broken. LinkedIn said its big challenge is the “dark” funnel. It can’t quantify how visibility in LLM answers impacts the bottom line, especially when discovery happens without a click.

  • LinkedIn’s B2B marketing websites saw triple-digit growth in LLM-driven traffic, and the company says it can track conversions from those visits.
    • Yes, but: Many websites are seeing triple-digit (or more) growth in LLM-driven traffic because it’s an emerging channel. That said, it’s still a tiny share of overall traffic right now (1 percent or less for most sites).

What LinkedIn is doing. LinkedIn created an AI Search Taskforce spanning SEO, PR, editorial, product marketing, product, paid media, social, and brand. Key actions included:

  • Correcting misinformation that showed up in AI responses.
  • Publishing new owned content optimized for generative visibility.
  • Testing LinkedIn (social) content to validate its strength in AI discovery.

Is it working? LinkedIn said early tests produced a meaningful lift in visibility and citations, especially from owned content. At least one external datapoint (Semrush, Nov. 10, 2025) suggested that LinkedIn has a structural advantage in AI search:

  • Google AI Mode cited LinkedIn in roughly 15% of responses.
  • LinkedIn was the #2 most-cited domain in that dataset, behind YouTube.

Incomplete story. LinkedIn’s article is an interesting read, but it’s light on specifics. Missing details include:

  • The exact topic set behind the “up to 60%” decline.
  • Exactly how much click-through rates “softened.”
  • Sample size and timeframe.
  • How “industry-wide” comparisons were calculated.
  • What tests were run, what moved citation share, and by how much.

Bottom line. LinkedIn is right that visibility is the new currency. However, it hasn’t shown enough detail to prove its new playbook is meaningfully different from doing some SEO (yes, SEO) fundamentals.

LinkedIn’s article. How LinkedIn Marketing Is Adapting to AI-Led Discovery


Are we ready for the agentic web?

Innovations are coming at marketers and consumers faster than before, raising the question: Are we actually ready for the agentic web?

To answer that question, it’s important to unpack a few supporting ones:

  • What’s the agentic web?
  • How can the agentic web be used?
  • What are the pros and cons of the agentic web?

It’s important to note that this article isn’t a mandate for AI skeptics to abandon the rational questions they have about the agentic web. 

Nor is it intended to place any judgment on how you, as a consumer or professional, engage with the agentic web.

LinkedIn poll on Copilot Checkout

With thoughts and feelings so divided on the agentic web, this article aims to provide clear insight into how to think about it in earnest, without the branding or marketing fluff.

Disclosure: I am a Microsoft employee and believe in the path Microsoft’s taking with the agentic web. However, this article will attempt to be as platform-agnostic as possible.

What’s the agentic web? 

The agentic web refers to sophisticated tools, or agents, trained on our preferences that act with our consent to accomplish time-consuming tasks.

In simple terms, when I use one-click checkout, I allow my saved payment information to be passed to the merchant’s accounts receivable systems. 

Neither the merchant nor I must write down all the details or be involved beyond consenting to send and receive payment.

For fun, I put this question to four different AI models, and the responses are telling: 

  • Copilot: “The agentic web is an emerging layer of the internet where AI agents can reason and take actions on behalf of people or organizations, turning expressed intent into outcomes while preserving user choice, transparency, and control.” 
  • Gemini: “The agentic web is an evolution of the internet where autonomous AI agents – rather than just human users – interact with websites, APIs, and each other to perform complex tasks and make decisions on our behalf.”
  • Perplexity: “The agentic web is an emerging phase of the internet where autonomous AI agents act as the primary interface, navigating, interpreting, and taking actions across websites and services on behalf of users with minimal human involvement.” 
  • Claude: “The agentic web is an evolution of the internet where AI agents autonomously navigate, interact with, and complete tasks across websites and services on behalf of users.” 

I begin with how different models answer the question because it’s important to understand that each one is trained on different information, and outcomes will inevitably vary.

It’s worth noting that, given the same prompt (define the agentic web in one sentence), three of the four models emphasize a diminished human role in navigating the web, while one makes a point of emphasizing human involvement: preserving user choice, transparency, and control.

Two out of four refer to the agentic web as a layer or phase rather than an outright evolution of the web. 

This is likely where the sentiment divide on the agentic web stems from.

Some see it as a consent-driven layer designed to make life easier, while others see it as a behemoth that consumes content, critical thinking, and choice.

It’s noteworthy that one model, Gemini, calls out APIs as a means of communication in the agentic web. APIs are structured interfaces that one system can reference, or call, to request data or actions from another, based on the task you are attempting to accomplish.

This matters because APIs will become increasingly relevant in the agentic web, as saved preferences must be organized in ways that are easily understood and acted upon.

Defining the agentic web requires spending some time digging into two important protocols – ACP and UCP.

Dig deeper: AI agents in SEO: What you need to know

Agentic Commerce Protocol: Optimized for action inside conversational AI 

The Agentic Commerce Protocol, or ACP, is designed around a specific moment: when a user has already expressed intent and wants the AI to act.

The core idea behind ACP is simple. If a user tells an AI assistant to buy something, the assistant should be able to do so safely, transparently, and without forcing the user to leave the conversation to complete the transaction.

ACP enables this by standardizing how an AI agent can:

  • Access merchant product data.
  • Confirm availability and price.
  • Initiate checkout using delegated, revocable payment authorization.

The experience is intentionally streamlined. The user stays in the conversation. The AI handles the mechanics. The merchant still fulfills the order.

This approach is tightly aligned with conversational AI platforms, particularly environments where users are already asking questions, refining preferences, and making decisions in real time. It prioritizes speed, clarity, and minimal friction.
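The flow described above can be sketched in code. This is a hypothetical, simplified illustration only, not the actual ACP schema or API: all the class names, fields, and catalog data below are invented to show the three safeguards the protocol is built around (act only on expressed intent, confirm price before buying, and use delegated authorization the user can revoke).

```python
from dataclasses import dataclass

# Hypothetical sketch of an ACP-style checkout: names and fields are
# illustrative, not the real protocol schema.

@dataclass
class PaymentToken:
    token_id: str
    revoked: bool = False  # the user can revoke delegated authorization at any time

@dataclass
class Product:
    sku: str
    name: str
    price: float
    in_stock: bool

CATALOG = {"shoe-42": Product("shoe-42", "Trail Runner", 129.0, True)}

def agent_checkout(sku: str, user_confirmed: bool, token: PaymentToken) -> str:
    product = CATALOG.get(sku)
    if product is None or not product.in_stock:
        return "unavailable"
    if not user_confirmed:
        # The agent surfaces the price and waits; it never buys on its own.
        return f"awaiting confirmation at ${product.price:.2f}"
    if token.revoked:
        # Delegated authorization is revocable: a revoked token halts the purchase.
        return "authorization revoked"
    return f"order placed for {product.name}"

token = PaymentToken("tok_123")
print(agent_checkout("shoe-42", user_confirmed=False, token=token))
print(agent_checkout("shoe-42", user_confirmed=True, token=token))
```

The key design point is that the purchase step is gated twice: once by explicit user confirmation and once by a still-valid payment token, which keeps the human in control even though the agent handles the mechanics.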

Universal Commerce Protocol: Built for discovery, comparison, and lifecycle commerce 

The Universal Commerce Protocol, or UCP, takes a broader view of agentic commerce.

Rather than focusing solely on checkout, UCP is designed to support the entire shopping journey on the agentic web, from discovery through post-purchase interactions. It provides a common language that allows AI agents to interact with commerce systems across different platforms, surfaces, and payment providers. 

That includes: 

  • Product discovery and comparison.
  • Cart creation and updates.
  • Checkout and payment handling.
  • Order tracking and support workflows.

UCP is designed with scale and interoperability in mind. It assumes users will encounter agentic shopping experiences in many places, not just within a single assistant, and that merchants will want to participate without locking themselves into a single AI platform.

It’s tempting to frame ACP and UCP as competing solutions. In practice, they address different moments of the same user journey.

ACP is typically strongest when intent is explicit and the user wants something done now. UCP is generally strongest when intent is still forming and discovery, comparison, and context matter.

So what’s the agentic web? Is it an army of autonomous bots acting on past preferences to shape future needs? Is it the web as we know it, with fewer steps driven by consent-based signals? Or is it something else entirely?

The frustrating answer is that the agentic web is still being defined by human behavior, so there’s no clear answer yet. However, we have the power to determine what form the agentic web takes. To better understand how to participate, we now move to how the agentic web can be used, along with the pros and cons.

Dig deeper: The Great Decoupling of search and the birth of the agentic web

How can the agentic web be used? 

Working from the common theme across all definitions, autonomous action, we can move to applications.

Elmer Boutin has written a thoughtful technical view on how schema will impact agentic web compatibility. Benjamin Wenner has explored how PPC management might evolve in a fully agentic web. Both are worth reading.

Here, I want to focus on consumer-facing applications of the agentic web and how to think about them in relation to the tasks you already perform today.

Here are five applications of the agentic web that are live today or in active development.

1. Intent-driven commerce  

A user states a goal, such as “Find me the best running shoes under $150,” and an agent handles discovery, comparison, and checkout without requiring the user to manually browse multiple sites. 

How it works 

Rather than returning a list of links, the agent interprets user intent, including budget, category, and preferences. 

It pulls structured product information from participating merchants, applies reasoning logic to compare options, and moves toward checkout only after explicit user confirmation. 

The agent operates on approved product data and defined rules, with clear handoffs that keep the user in control. 

Implications for consumers and professionals 

Reducing decision fatigue without removing choice is a clear benefit for consumers. For brands, this turns discovery into high-intent engagement rather than anonymous clicks with unclear attribution. 

Strategically, it shifts competition away from who shouts the loudest toward who provides the clearest and most trusted product signals to agents. These agents can act as trusted guides, offering consumers third-party verification that a merchant is as reliable as it claims to be.

2. Brand-owned AI assistants 

A brand deploys its own AI agent to answer questions, recommend products, and support customers using the brand’s data, tone, and business rules.

How it works 

The agent uses first-party information, such as product catalogs, policies, and FAQs. 

Guardrails define what it can say or do, preventing inferences that could lead to hallucinations. 

Responses are generated by retrieving and reasoning over approved context within the prompt.
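A toy sketch of that guardrail, with invented data: the assistant answers only from approved first-party context and declines rather than inferring when nothing relevant is approved. (A production system would retrieve passages for an LLM to reason over; here the retrieved passage is returned directly to keep the example small.)

```python
# Invented first-party knowledge base: catalogs, policies, and FAQs the
# brand has explicitly approved.
APPROVED_CONTEXT = {
    "returns": "Items can be returned within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def answer(question: str) -> str:
    q = question.lower()
    for topic, passage in APPROVED_CONTEXT.items():
        if topic in q:
            # Respond only from retrieved, approved context.
            return passage
    # No approved context matched: refuse instead of guessing (hallucinating).
    return "I don't have approved information on that. Let me connect you with support."

print(answer("What is your returns policy?"))
```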

Implications for consumers and professionals 

Customers get faster and more consistent responses. Brands retain voice, accountability, and ownership of the experience. 

Strategically, this allows companies to participate in the agentic web without ceding their identity to a platform or intermediary. It also enables participation in global commerce without relying on native speakers to verify language.

3. Autonomous task completion 

Users delegate outcomes rather than steps, such as “Prepare a weekly performance summary” or “Reorder inventory when stock is low.” 

How it works 

The agent breaks the goal into subtasks, determines which systems or tools are needed, and executes actions sequentially. It pauses when permissions or human approvals are required. 

These can be provided in bulk upfront or step by step. How this works ultimately depends on how the agent is built. 
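One way to picture that pattern in code, with invented subtasks: the agent executes steps in order but pauses any step that needs a human approval it hasn't received, whether approvals come in bulk upfront or one at a time.

```python
# Invented subtask plan for a delegated outcome like
# "prepare and send a weekly performance summary".
SUBTASKS = [
    {"step": "pull last week's metrics", "needs_approval": False},
    {"step": "draft performance summary", "needs_approval": False},
    {"step": "email summary to the team", "needs_approval": True},
]

def run(subtasks, approvals: set):
    """Execute subtasks in order; pause any step awaiting human approval."""
    completed, paused = [], []
    for task in subtasks:
        if task["needs_approval"] and task["step"] not in approvals:
            paused.append(task["step"])  # wait for a human before acting
            continue
        completed.append(task["step"])
    return completed, paused

done, waiting = run(SUBTASKS, approvals=set())
print(waiting)  # the send step pauses until a human approves it
```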

Implications for consumers and marketers 

We’re used to treating AI like interns, relying on micromanaged task lists and detailed prompts. As agents become more sophisticated, it becomes possible to treat them more like senior employees, oriented around outcomes and process improvement. 

That makes it reasonable to ask an agent to identify action items in email or send templates in your voice when active engagement isn’t required. Human choice comes down to how much you delegate to agents versus how much you ask them to assist.

Dig deeper: The future of search visibility: What 6 SEO leaders predict for 2026


4. Agent-to-agent coordination and negotiation 

Agents communicate with other agents on behalf of people or organizations, such as a buyer agent comparing offers with multiple seller agents. 

How it works 

Agents exchange structured information, including pricing, availability, and constraints. 

They apply predefined rules, such as budgets or policies, and surface recommended outcomes for human approval. 
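As a rough sketch with made-up offers: seller agents share structured terms, the buyer agent applies predefined rules (here, a budget ceiling and a delivery deadline), and the best compliant offer is surfaced as a recommendation for human approval.

```python
# Invented structured offers exchanged by seller agents.
SELLER_OFFERS = [
    {"seller": "A", "price": 950, "delivery_days": 7},
    {"seller": "B", "price": 880, "delivery_days": 10},
    {"seller": "C", "price": 1200, "delivery_days": 3},
]

def negotiate(offers, max_price: int, max_delivery_days: int):
    """Apply predefined buyer rules and surface the best compliant offer."""
    compliant = [
        o for o in offers
        if o["price"] <= max_price and o["delivery_days"] <= max_delivery_days
    ]
    if not compliant:
        return None  # escalate to a human: no offer meets the constraints
    # Recommend the cheapest compliant offer; a human still approves the deal.
    return min(compliant, key=lambda o: o["price"])

best = negotiate(SELLER_OFFERS, max_price=1000, max_delivery_days=10)
print(best["seller"])
```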

Implications for consumers and marketers 

Consumers may see faster and more transparent comparisons without needing to manually negotiate or cross-check options. 

For professionals, this introduces new efficiencies in areas like procurement, media buying, or logistics, where structured negotiation can occur at scale while humans retain oversight.

5. Continuous optimization over time 

Agents don’t just act once. They improve as they observe outcomes.

How it works 

After each action, the agent evaluates what happened, such as engagement, conversion, or satisfaction. It updates its internal weighting and applies those learnings to future decisions.
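That "update its internal weighting" step can be illustrated with a toy feedback rule: after each action the agent observes an outcome signal (say, conversion = 1 or 0) and nudges its preference weight toward it, so better-performing options are favored over time. The update rule and numbers are invented for illustration.

```python
LEARNING_RATE = 0.1  # how strongly each new outcome shifts the weight

def update_weight(weight: float, outcome: int) -> float:
    """Move the weight a small step toward the observed outcome."""
    return weight + LEARNING_RATE * (outcome - weight)

weight = 0.5  # initial belief that this option performs well
for outcome in [1, 1, 0, 1]:  # observed results of four actions
    weight = update_weight(weight, outcome)
print(round(weight, 3))  # the weight drifts upward as positive outcomes accumulate
```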

Why people should care 

Consumers experience increasingly relevant interactions over time without repeatedly restating preferences. 

Professionals gain systems that improve continuously, shifting optimization from one-off efforts to long-term, adaptive performance. 

What are the pros and cons of the agentic web? 

Life is a series of choices, and leaning into or away from the agentic web comes with clear pros and cons.

Pros of leaning into the agentic web 

The strongest argument for leaning into the agentic web is behavioral. People have already been trained to prioritize convenience over process. 

Saved payment methods, password managers, autofill, and one-click checkout normalized the idea that software can complete tasks on your behalf once trust is established.

Agentic experiences follow the same trajectory. Rather than requiring users to manually navigate systems, they interpret intent and reduce the number of steps needed to reach an outcome. 

Cons of leaning into the agentic web 

Many brands will need to rethink how their content, data, and experiences are structured so they can be interpreted by automated systems and humans. What works for visual scanning or brand storytelling doesn’t always map cleanly to machine-readable signals.

There’s also a legitimate risk of overoptimization. Designing primarily for AI ingestion can unintentionally degrade human usability or accessibility if not handled carefully. 

Dig deeper: The enterprise blueprint for winning visibility in AI search

Pros of leaning away from the agentic web 

Choosing to lean away from the agentic web can offer clarity of stance. There’s a visible segment of users skeptical of AI-mediated experiences, whether due to privacy concerns, automation fatigue, or a loss of human control. 

Aligning with that perspective can strengthen trust with audiences who value deliberate, hands-on interaction.

Cons of leaning away from the agentic web 

If agentic interfaces become a primary way people discover information, compare options, or complete tasks, opting out entirely may limit visibility or participation. 

The longer an organization waits to adapt, the more expensive and disruptive that transition can become.

What’s notable across the ecosystem is that agentic systems are increasingly designed to sit on top of existing infrastructure rather than replace it outright. 

Avoiding engagement with these patterns may not be sustainable over time. If interaction norms shift and systems aren’t prepared, the combination of technical debt and lost opportunity may be harder to overcome later.

Where the agentic web stands today

The agentic web is still taking form, shaped largely by how people choose to use it. Some organizations are already applying agentic systems to reduce friction and improve outcomes. Others are waiting for stronger trust signals and clearer consent models.

Either approach is valid. What matters is understanding how agentic systems work, where they add value, and how emerging protocols are shaping participation. That understanding is the foundation for deciding when, where, and how to engage with the agentic web.


7 digital PR secrets behind strong SEO performance

Digital PR is about to matter more than ever. Not because it’s fashionable, or because agencies have rebranded link building with a shinier label, but because the mechanics of search and discovery are changing. 

Brand mentions, earned media, and the wider PR ecosystem are now shaping how both search engines and large language models understand brands. That shift has serious implications for how SEO professionals should think about visibility, authority, and revenue.

At the same time, informational search traffic is shrinking. Fewer people are clicking through long blog posts written to target top-of-funnel keywords. 

The commercial value in search is consolidating around high-intent queries and the pages that serve them: product pages, category pages, and service pages. Digital PR sits right at the intersection of these changes.

What follows are seven practical, experience-led secrets that explain how digital PR actually works when it’s done well, and why it’s becoming one of the most important tools in SEOs’ toolkit.

Secret 1: Digital PR can be a direct sales activation channel

Digital PR is usually described as a link tactic, a brand play, or, more recently, a way to influence generative search and AI outputs.

All of that’s true. What’s often overlooked is that digital PR can also drive revenue directly.

When a brand appears in a relevant media publication, it’s effectively placing itself in front of buyers while they are already consuming related information.

This is not passive awareness. It’s targeted exposure during a moment of consideration.

Platforms like Google are exceptionally good at understanding user intent, interests and recency. Anyone who has looked at their Discover feed after researching a product category has seen this in action. 

Digital PR taps into the same behavioral reality. You are not broadcasting randomly. You are appearing where buyers already are.

Two things tend to happen when this is executed well.

  • If your site already ranks for a range of relevant queries, your brand gains additional recognition in nontransactional contexts. Readers see your name attached to a credible story or insight. That familiarity matters.
  • More importantly, that exposure drives brand search and direct clicks. Some readers click straight through from the article. Others search for your brand shortly after. In both cases, they enter your marketing funnel with a level of trust that generic search traffic rarely has.

This effect is driven by basic behavioral principles such as recency and familiarity. While it’s difficult to attribute cleanly in analytics, the commercial impact is very real. 

We see this most clearly in direct-to-consumer, finance, and health markets, where purchase cycles are active and intent is high.

Digital PR is not just about supporting sales. In the right conditions, it’s part of the sales engine.

Dig deeper: Discoverability in 2026: How digital PR and social search work together

Secret 2: The mere exposure effect is one of digital PR’s biggest advantages

One of the most consistent patterns in successful digital PR campaigns is repetition.

When a brand appears again and again in relevant media coverage, tied to the same themes, categories, or areas of expertise, it builds familiarity. 

That familiarity turns into trust, and trust turns into preference. This is known as the mere exposure effect, and it’s fundamental to how brands grow.

In practice, this often happens through syndicated coverage. A strong story picked up by regional or vertical publications can lead to dozens of mentions across different outlets. 

Historically, many SEOs undervalued this type of coverage because the links were not always unique or powerful on their own.

That misses the point.

What this repetition creates is a dense web of co-occurrences. Your brand name repeatedly appears alongside specific topics, products, or problems. This influences how people perceive you, but it also influences how machines understand you.

For search engines and large language models alike, frequency and consistency of association matter. 

An always-on digital PR approach, rather than sporadic big hits, is one of the fastest ways to increase both human and algorithmic familiarity with a brand.
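The co-occurrence idea above can be made concrete with a toy tally: counting how often a brand name appears alongside topic terms across coverage. Real systems model associations far more richly; the brand, topics, and snippets below are invented for illustration.

```python
from collections import Counter

# Invented coverage snippets for a hypothetical brand.
COVERAGE = [
    "Acme Mortgages shares fixed-rate remortgage tips for 2026.",
    "Experts at Acme Mortgages explain how remortgage deals work.",
    "Acme Mortgages comments on first-time buyer affordability.",
]

TOPICS = ["remortgage", "affordability", "insurance"]

def cooccurrence(brand: str, topics, snippets) -> Counter:
    """Count how often each topic term appears in snippets that mention the brand."""
    counts = Counter()
    for snippet in snippets:
        text = snippet.lower()
        if brand.lower() in text:
            for topic in topics:
                if topic in text:
                    counts[topic] += 1
    return counts

print(cooccurrence("Acme Mortgages", TOPICS, COVERAGE))
```

The point of the sketch is the shape of the signal: repeated coverage tying the brand to "remortgage" builds a denser association with that topic than a single big hit ever could.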

Secret 3: Big campaigns come with big risk, so diversification matters

Large, creative digital PR campaigns are attractive. They are impressive, they generate internal excitement, and they often win industry praise. The problem is that they also concentrate risk.

A single large campaign can succeed spectacularly, or it can fail quietly. From an SEO perspective, many widely celebrated campaigns underperform because they do not generate the links or mentions that actually move rankings.

This happens for a simple reason. What marketers like is not always what journalists need.

Journalists are under pressure to publish quickly, attract attention, and stay relevant to their audience. 

If a campaign is clever but difficult to translate into a story, it will struggle. If all your budget’s tied up in one idea, you have no fallback.

A diversified digital PR strategy spreads investment across multiple smaller campaigns, reactive opportunities, and steady background activity. 

This increases the likelihood of consistent coverage and reduces dependence on any single idea working perfectly.

In digital PR, reliability often beats brilliance.

Dig deeper: How to build search visibility before demand exists


Secret 4: The journalist’s the customer

One of the most common mistakes in digital PR is forgetting who the gatekeeper is.

From a brand’s perspective, the goal might be links, mentions, or authority. 

From a journalist’s perspective, the goal is to write a story that interests readers and performs well. These goals overlap, but they are not the same.

The journalist decides whether your pitch lives or dies. In that sense, they are the customer.

Effective digital PR starts by understanding what makes a journalist’s job easier. 

That means providing clear angles, credible data, timely insights, and fast responses. Think about relevance before thinking about links.

When you help journalists do their job well, they reward you with exposure. 

That exposure carries weight in search engines and in the training data that informs AI systems. The exchange is simple: value for value.

Treat journalists as partners, not as distribution channels.

Secret 5: Product and category page links are where SEO value is created

Not all links are equal.

From an SEO standpoint, links to product, category, and core service pages are often far more valuable than links to blog content. Unfortunately, they are also the hardest links to acquire through traditional outreach.

This is where digital PR excels.

Because PR coverage is contextual and editorial, it allows links to be placed naturally within discussions of products, services, or markets. When done correctly, this directs authority to the pages that actually generate revenue.

As informational content becomes less central to organic traffic growth, this matters even more.

Ranking improvements on high-intent pages can have a disproportionate commercial impact.

A relatively small number of high-quality, relevant links can outperform a much larger volume of generic links pointed at top-of-funnel content.

Digital PR should be planned with these target pages in mind from the outset.

Dig deeper: How to make ecommerce product pages work in an AI-first world

Secret 6: Entity lifting is now a core outcome of digital PR

Search engines have long made it clear that context matters. The text surrounding a link, and the way a brand is described, help define what that brand represents.

This has become even more important with the rise of large language models. These systems process information in chunks, extracting meaning from surrounding text rather than relying solely on links.

When your brand is mentioned repeatedly in connection with specific topics, products, or expertise, it strengthens your position as an entity in that space. This is what’s often referred to as entity lifting.

The effect goes beyond individual pages. Brands see ranking improvements for terms and categories that were not directly targeted, simply because their overall authority has increased. 

At the same time, AI systems are more likely to reference and summarize brands that are consistently described as relevant sources.

Digital PR is one of the most scalable ways to build this kind of contextual understanding around a brand.

Secret 7: Authority comes from relevant sources and relevant sections

Former Google engineer Jun Wu discusses this in his book “The Beauty of Mathematics in Computer Science,” explaining that authority emerges from being recognized as a source within specific informational hubs. 

In practical terms, this means that where you are mentioned matters as much as how big the site is.

A link or mention from a highly relevant section of a large publication can be more valuable than a generic mention on the homepage. For example, a targeted subfolder on a major media site can carry strong authority, even if the domain as a whole covers many subjects.

Effective digital PR focuses on two things: 

  • Publications that are closely aligned with your industry and sections.
  • Subfolders that are tightly connected to the topic you want to be known for.

This is how authority is built in a way that search engines and AI systems both recognize.

Dig deeper: The new SEO imperative: Building your brand

Where digital PR now fits in SEO

Digital PR is no longer a supporting act to SEO. It’s becoming central to how brands are discovered, understood, and trusted.

As informational traffic declines and high-intent competition intensifies, the brands that win will be those that combine relevance, repetition, and authority across earned media. 

Digital PR, done properly, delivers all three.


Microsoft rolls out multi-turn search in Bing

Microsoft today rolled out multi-turn search globally in Bing. As you scroll down the search results page, a Copilot search box now dynamically appears at the bottom.

About multi-turn search. This type of search experience lets a user continue the conversation from the Bing search results page. Instead of starting over, the searcher types a follow-up question into the Copilot search box at the bottom of the results, allowing the search to build on the previous query. Here’s a screenshot of this feature:

Here’s a video of it in action:

What Microsoft said. Jordi Ribas, CVP, Head of Search at Microsoft, posted this news on X:

  • “After shipping in the US last year, multi-turn search in Bing is now available worldwide.
  • “Bing users don’t need to scroll up to do the next query, and the next turn will keep context when appropriate. We have seen gains in engagement and sessions per user in our online metrics, which reflect the positive user value of this approach.”

Why we care. Search engines like Google and Bing are pushing harder to move users into their AI experiences. Google is blending AI Overviews more deeply into AI Mode, even as many publishers object to how it handles their content. Bing has now followed suit, fully rolling out the Copilot search box at the bottom of search results after several months of testing.


Why most SEO failures are organizational, not technical

I’ve spent over 20 years in companies where SEO sat in different corners of the organization – sometimes as a full-time role, other times as a consultant called in to “find what’s wrong.” Across those roles, the same pattern kept showing up.

The technical fix was rarely what unlocked performance. It revealed symptoms, but it almost never explained why progress stalled.

No governance

The real constraints showed up earlier, long before anyone read my weekly SEO reports. They lived in reporting lines, decision rights, hiring choices, and in what teams were allowed to change without asking permission. 

When SEO struggled, it was usually because nobody clearly owned the CMS templates, priorities conflicted across departments, or changes were made without anyone considering how they affected discoverability.

I did not have a word for the core problem at the time, but now I do – it’s governance, usually manifested by its absence.

Two workplaces in my career had the conditions that allowed SEO to work as intended. Ownership was clear.

Release pathways were predictable. Leaders understood that visibility was something you managed deliberately, not something you reacted to when traffic dipped.

Everywhere else, metadata and schema were not the limiting factor. Organizational behavior was.

Dig deeper: How to build an SEO-forward culture in enterprise organizations

Beware of drift

Once sales pressures dominate each quarter, even technically strong sites undergo small, reasonable changes:

  • Navigation renamed by a new UX hire.
  • Wording adjusted by a new hire on the content team.
  • Templates adjusted for a marketing campaign.
  • Titles “cleaned up” by someone outside the SEO loop.

None of these changes looks dangerous in isolation, at least if you know about them before they occur.

Over time, they add up. Performance slides, and nobody can point to a single release or decision where things went wrong.

This is the part of SEO most industry commentary skips. Technical fixes are tangible and teachable. Organizational friction is not. Yet that friction is where SEO outcomes are decided, usually months before any visible decline.

SEO loses power when it lives in the wrong place

I’ve seen this drift hurt rankings, with SEO taking the blame. In one workplace, leadership brought in an agency to “fix” the problem, only for it to confirm what I’d already found: a lack of governance caused the decline.

Where SEO sits on the org chart determines whether you see decisions early or discover them after launch. It dictates whether changes ship in weeks or sit in the backlog for quarters.

I have worked with SEO embedded under marketing, product, IT, and broader omnichannel teams. Each placement created a different set of constraints.

When SEO sits too low, decisions that reshape visibility ship first and get reviewed later — if they are reviewed at all.

  • Engineering adjusted components to support a new security feature. In one workplace, a new firewall meant to stop scraping also blocked our own SEO crawling tools.
  • Product reorganized navigation to “simplify” the user journey. No one asked SEO how it would affect internal PageRank.
  • Marketing “refreshed” content to match a campaign. Each change shifted page purpose, internal linking, and consistency — the exact signals search engines and AI systems use to understand what a site is about.

Dig deeper: SEO stakeholders: Align teams and prove ROI like a pro

Positioning the SEO function

Without a seat at the right table, SEO becomes a cleanup function.

When one operational unit owns SEO, the work starts to reflect that unit’s incentives.

  • Under marketing, it becomes campaign-driven and short-term.
  • Under IT, it competes with infrastructure work and release stability.
  • Under product, it gets squeezed into roadmaps that prioritize features over discoverability.

The healthiest performance I’ve seen came from environments where SEO sat close enough to leadership to see decisions early, yet broad enough to coordinate with content, engineering, analytics, UX, and legal.

In one case, I was a high-priced consultant, and every recommendation was implemented. I haven’t repeated that experience since, but it made one thing clear: VP-level endorsement was critical. That client doubled organic traffic in eight months and tripled it over three years.

Unfortunately, the in-house SEO team is often just another team that never gets the chance to excel. Placement is not everything, but it is the difference between influencing the decision and fixing the outcome.

Hiring mistakes

The second pattern that keeps showing up is hiring – and it surfaces long before any technical review.

Many SEO programs fail because organizations staff strategically important roles for execution, when what they really need is judgment and influence. This isn't a talent shortage. It's a screening problem.

The SEO manager often wears multiple hats, with SEO as a minor one. When they don’t understand SEO requirements, they become a liability, and the C-suite rarely sees it.

Across many engagements, I watched seasoned professionals passed over for younger candidates who interviewed well, knew the tool names, and sounded confident.

HR teams defaulted to “team fit” because it was easier to assess than a candidate’s ability to handle ambiguity, challenge bad decisions, or influence work across departments.

SEO excellence depends on lived experience. Not years on a résumé, but having seen the failure modes up close:

  • Migrations that wiped out templates.
  • Restructures that deleted category pages.
  • “Small” navigation changes that collapsed internal linking.

Those experiences build judgment, and judgment is what prevents repeat mistakes. That kind of expertise rarely fits on a résumé.

Without SEO domain literacy, hiring becomes theater. But we can’t blame HR, which has to hire people for all parts of the business. Its only expertise is HR.

Governance needs to step in.

One of the most reliable ways to improve recruitment outcomes is simple: let the SEO leader control the shortlist.

Fit still matters, but competence matters first. When the person accountable for results shapes the hiring funnel, stronger candidates make it through.

SEO roles require the ability to change decisions, not just diagnose problems. That skill does not show up in a résumé keyword scan.

Dig deeper: The top 5 strategic SEO mistakes enterprises make (and how to avoid them)

When priorities pull in different directions

Every department in a large organization has legitimate goals.

  • Product wants momentum.
  • Engineering wants predictable releases.
  • Marketing wants campaign impact.
  • Legal wants risk reduction.

Each team can justify its decisions – and SEO still absorbs the cost.

I have seen simple structural improvements delayed because engineering was focused on a different initiative.

At one workplace, I was asked how much sales would increase if my changes were implemented.

I have seen content refreshed for branding reasons that weakened high-converting pages. Each decision made sense locally. Collectively, they reshaped the site in ways nobody fully anticipated.

Today, we face an added risk: AI systems now evaluate content for synthesis. When content changes materially, an LLM may stop citing us as an authority on that topic.

Strong visibility governance can prevent that.

The organizations that struggled most weren’t the ones with conflict. They were the ones that failed to make trade-offs explicit.

What are we giving up in visibility to gain speed, consistency, or safety? When that question is never asked, SEO degrades quietly.

What improved outcomes was not a tool. It was governance: shared expectations and decision rights.

When teams understood how their work affected discoverability, alignment followed naturally. SEO stopped being the team that said “no” and became the function that clarified consequences.

International SEO improves when teams stop shipping locally good changes that are globally damaging. Local SEO improves when there is a single source of location truth.

Ownership gaps

Many SEO problems trace back to ownership gaps that only become visible once performance declines.

  • Who owns the CMS templates?
  • Who defines metadata standards?
  • Who maintains structured data? Who approves content changes?

When these questions have no clear answer, decisions stall or happen inconsistently. The site evolves through convenience rather than intent.

In contrast, the healthiest organizations I worked with shared one trait: clarity.

People knew which decisions they owned and which ones required coordination. They did not rely on committees or heavy documentation because escalation paths were already understood.

When ownership is clear, decisions move. When ownership is fragmented, even straightforward SEO work becomes difficult.

Dig deeper: How to win SEO allies and influence the brand guardians

Healthy environments for SEO to succeed

Across my career, the strongest results came from environments where SEO had:

  • Early involvement in upcoming changes.
  • Predictable collaboration with engineering.
  • Visibility into product goals.
  • Clear authority over content standards.
  • Stable templates and definitions.
  • A reliable escalation path when priorities conflicted.
  • Leaders who understood visibility as a long-term asset.

These organizations were not perfect. They were coherent.

People understood why consistency mattered. SEO was not a reactive service. It was part of the infrastructure.

What leaders can do now

If you lead SEO inside a complex organization, the most effective improvements come from small, deliberate shifts in how decisions get made:

  • Place SEO where it can see and influence decisions early.
  • Let SEO leaders – not HR – shape candidate shortlists.
  • Hire for judgment and influence, not presentation.
  • Create predictable access to product, engineering, content, analytics, and legal.
  • Stabilize page purpose and structural definitions.
  • Make the impact of changes visible before they ship.

These shifts do not require new software. They require decision clarity, discipline, and follow-through.

Visibility is an organizational outcome

SEO succeeds when an organization can make and enforce consistent decisions about how it presents itself. Technical work matters, but it can’t offset structures pulling in different directions.

The strongest SEO results I’ve seen came from teams that focused less on isolated optimizations and more on creating conditions where good decisions could survive change. That’s visibility governance.

When SEO performance falters, the most durable fixes usually start inside the organization.

Dig deeper: What 15 years in enterprise SEO taught me about people, power, and progress
