Google’s AI Mode isn’t more likely to cite content that appears “above the fold,” according to a study from SALT.agency, a technical SEO and content agency.
After analyzing more than 2,000 URLs cited in AI Mode responses, researchers found no correlation between how high text appears on a page and whether Google’s AI selects it for citation.
Pixel depth doesn’t matter. AI Mode cited text from across entire pages, including content buried thousands of pixels down.
Citation depth showed no meaningful relationship to visibility.
Average depth varied by vertical, from about 2,400 pixels in travel to 4,600 pixels in SaaS, with many citations far below the traditional “above the fold” area.
Page layout affects depth, not visibility. Templates and design choices influenced how far down the cited text appeared, but not whether it was cited.
Pages with large hero images or narrative layouts pushed cited text deeper, while simpler blog or FAQ-style pages surfaced citations earlier.
No layout type showed a visibility advantage in AI Mode.
Descriptive subheadings matter. One consistent pattern emerged: AI Mode frequently highlighted a subheading and the sentence that followed it.
This suggests Google uses heading structures to navigate content, then samples the opening lines to assess relevance, a behavior consistent with long-standing search practices, according to SALT.
What Google is likely doing. SALT believes AI Mode relies on the same fragment indexing technology Google has used for years. Pages are broken into sections, and the most relevant fragment is retrieved regardless of where it appears on the page.
What they’re saying. While the study examined only one structural factor and one AI model, the takeaway is clear: there’s no magic formula for AI Mode visibility. Dan Taylor, partner and head of innovation (organic and AI) at SALT.agency, said:
“Our study confirms that there is no magic template or formula for increased visibility in AI Mode responses – and that AI Mode is not more likely to cite text from ‘above the fold.’ Instead, the best approach mirrors what’s worked in search for years: create well-structured, authoritative content that genuinely addresses the needs of your ideal customers.
“…the data clearly debunks the idea that where the information sits within a page has an impact on whether it will be cited.”
Why we care. The findings challenge the idea that AI-specific templates or rigid page structures drive better AI Mode visibility. Chasing “AI-optimized” layouts may distract from work that actually matters.
About the research. SALT analyzed 2,318 unique URLs cited in AI Mode responses for high-value queries across travel, ecommerce, and SaaS. Using a Chrome bookmarklet and a 1920×1080 viewport, researchers recorded the vertical pixel position of the first highlighted character in each AI-cited fragment. They also cataloged layouts and elements, such as hero sections, FAQs, accordions, and tables of contents.
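The study describes the measurement tool only as a Chrome bookmarklet, so the following is a minimal TypeScript sketch of how such a depth measurement could work; the function name, the tree-walk approach, and the example snippet are assumptions, not SALT’s actual code:

```typescript
// Hypothetical sketch: find a cited text fragment on the page and report the
// vertical pixel position of its first character (document coordinates).
function pixelDepthOf(snippet: string): number | null {
  const walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT);
  for (let node = walker.nextNode(); node; node = walker.nextNode()) {
    const idx = node.textContent?.indexOf(snippet) ?? -1;
    if (idx >= 0) {
      // Wrap just the matched characters in a Range to measure them.
      const range = document.createRange();
      range.setStart(node, idx);
      range.setEnd(node, idx + snippet.length);
      // Viewport-relative top plus current scroll = depth from page top.
      return range.getBoundingClientRect().top + window.scrollY;
    }
  }
  return null; // Fragment not found (or split across multiple text nodes).
}

// Example: depth of the first character of an AI-cited fragment.
console.log(pixelDepthOf("descriptive subheadings matter"));
```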
Google AI Mode doesn’t favor above-the-fold content: Study
AI content generation for SEO can be a game-changer if you use it the right way.
AI tools help increase the speed of your content production, from brainstorming to drafting. And yes, we’ve built our own AI writer into Ubersuggest to make that process easier.
But here’s the thing: AI isn’t a shortcut to rankings. Without the right prompts and a human touch, AI content can actually hurt your traffic. Google’s recent updates and the rise of AI Overviews in search show just how important quality and clarity are.
So no, AI-generated content isn’t bad, but you need a strategy. Otherwise, it’s just more noise.
Key Takeaways
LLMs won’t cite your content unless it’s structured, trustworthy, and answers real user questions.
AI content generation for SEO works, but only with the right strategy and human oversight.
AI can speed up all stages of content production, but publishing without reviewing will tank your results.
Prompts matter. Clear direction on content structure, audience, and keyword targeting separates ranking content from noise.
Human elements like originality, firsthand insights, and strong E-E-A-T signals are still non-negotiable.
AI vs. Humans: Pros & Cons
With AI, we found that you can’t just publish the content it generates and be off to the races.
It still takes time to use AI.
From modifying the content to putting it in your CMS to adjusting the format, creating content takes time whether you use AI or not.
Here’s how long it takes to create content using AI versus a human.
When using AI, we found that you can write content, post it into a CMS, and publish it all within 16 minutes.
Humans, on the other hand, took an average of 69 minutes.
But there are some issues that most people don’t talk about.
The first is that AI takes what’s on the web and “regurgitates” the same old info.
People want to read something new.
The second is that human-written content outranked AI-created content 94.12% of the time in our tests.
With that said, there is still a role for AI-generated content in an SEO strategy.
Does AI-Generated Content Support SEO?
Our findings aren’t all “doom and gloom” for AI, especially as platforms and LLMs evolve. It can absolutely support your SEO strategy, especially when it comes to scaling content or repurposing existing assets, but AI needs direction. If you feed it a vague prompt like “write a blog post about SEO,” you’ll get generic, surface-level content that won’t rank or convert.
Your prompt is essential in making AI-generated content SEO-friendly. You need to tell the tool exactly what keywords to target, what questions to answer, what structure to follow, and who the audience is. Doing that requires real marketing experience.
This is where human input and oversight still matter. You need to choose the right keywords and guide the AI to meet quality standards. AI is just guessing without that input, and that rarely ends well for SEO.
It’s also worth noting that while AI can help draft content, it won’t replace human editing. You still need someone to review for tone, voice, accuracy, and depth.
Does AI-Generated Content Help with LLM Presence?
AI content won’t magically get picked up by LLMs. But with smart prompting and a clear optimization strategy, it can absolutely improve your chances.
Large language models (LLMs) like ChatGPT and Gemini pull from indexed content to generate answers. This process is known as retrieval-augmented generation (RAG).
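As a rough illustration of the RAG loop (a conceptual sketch, not any specific product’s pipeline), the core idea fits in a few lines of TypeScript; the toy `overlap` relevance score and the injected `llm` callback are placeholders:

```typescript
// Conceptual RAG sketch: retrieve the most relevant indexed fragments,
// then hand them to the model as grounding context.
type Doc = { url: string; text: string };

// Toy relevance score: count query words that appear in the document.
function overlap(query: string, text: string): number {
  const words = new Set(query.toLowerCase().split(/\s+/));
  return text.toLowerCase().split(/\s+/).filter((w) => words.has(w)).length;
}

function answerWithRag(
  query: string,
  index: Doc[],
  llm: (prompt: string) => string
): string {
  // Rank the index and keep the top three fragments.
  const top = [...index]
    .sort((a, b) => overlap(query, b.text) - overlap(query, a.text))
    .slice(0, 3);
  const context = top.map((d) => `Source (${d.url}): ${d.text}`).join("\n");
  // The model answers from the retrieved context, citing its sources.
  return llm(`Answer using only these sources:\n${context}\n\nQuestion: ${query}`);
}
```

The retrieval step is why structure matters: clear headings and self-contained passages give the index clean fragments to rank.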
If your content is well-structured and authoritative, it has a better shot of getting cited or referenced in those answers, but generic content won’t cut it. These models are picky.
To actually earn LLM visibility, you need to create content that matches how LLMs surface information. That means answering specific questions, using structured data where it makes sense, and writing in a way that’s clear, concise, and trustworthy.
AI tools can help here, but again, prompting is key. If your AI-generated content isn’t shaped around real user questions or lacks structure that aligns with LLM output patterns, it’s unlikely to perform.
Digging deeper and learning more about LLM SEO and LLM optimization is a great way to improve your skills in this area. By understanding these concepts, you’ll learn exactly what to include in your content and how to use AI to get there.
Integrating AI Into Your Content Approach (The Right Way)
Used well, AI can help you move faster, but it’s the human touches that drive results. Think of AI as a starting point, not the whole process.
We ran an experiment across 68 sites, publishing 744 articles—half written by humans, half by AI. Five months in, the average AI article brought in 52 visitors a month. Human-written articles? 283.
Now, sure, you could scale faster with AI, but pumping out a ton of mediocre content does more harm than good. In fact, when we pruned low-quality posts, we saw an 11 to 12 percent traffic lift.
If you’re going to use a GenAI tool to do your writing, do it with intention:
Start with smart prompts. Include keyword targets and content goals.
Feed the tool solid references like existing content, credible sources, or structured outlines.
Don’t just hit publish. Run a full human review: fact-check, rewrite weak sections, fix tone issues, and make sure it aligns with your brand.
And here’s the secret sauce: add manual value. Include firsthand insights via screenshots or updated data. Layer in trust-building elements like personal experience or expert sourcing. That’s how you build E-E-A-T—Google’s framework for judging helpful, credible content.
FAQs
Is AI-generated content good for SEO?
It can be, if you do it right. AI can help you scale content creation, but you still need a human touch to make sure it’s high-quality and helpful. Google rewards useful content, not mass-produced fluff.
Does AI-generated content affect SEO?
Yes, but how it affects your SEO depends on what you publish. If your AI content adds value and matches search intent, it can help you rank. If it’s generic or purely written for keywords, it’ll likely hurt you.
Will Google penalize SEO content generated by AI?
Google will not penalize you for using AI alone. Google doesn’t care how content is made as long as it’s useful and trustworthy. But if the content is spammy or misleading, that’s where penalties come in.
Case Study: How We Use AI
AI’s biggest impact on our content writing process isn’t even the writing part.
It’s the research part.
For example, at NP Digital, we used AI to help UTI boost its traffic.
Instead of relying on AI to write extensive content, we leveraged it to create select drafts (which then undergo our human editing process) and assist us in conducting research for all the cities in which UTI has campuses.
This allowed us to scale the creation of their local pages and ensure high quality by leveraging our human content staff to incorporate other elements that would be useful for someone performing a local search.
AI Content Generation for SEO: Pros, Cons & How to Use It
If 2025 taught us anything, it’s that AI is no longer just a side tool. It’s the engine running campaigns and reshaping how people discover brands.
At the same time, platforms have declared war on the “click.” We’re seeing an aggressive push for native conversions, where the goal isn’t to drive traffic to the website but to close the deal right in the feed.
That shift toward “frictionless” experiences, combined with the saturation of AI-generated noise, has forced another major change. Content with deep educational value is starting to outperform the high-volume, “101-level” content that simply fills space.
As we get deeper into the new year, those shifts are accelerating.
The top digital marketing trends for 2026 reflect this reality: Automation handles execution, while human elements like strategy and storytelling set the winners apart.
If you want to stay relevant, abandon the old metrics of “rankings” and “reach.” They no longer guarantee relevance. Here’s what’s actually moving the needle in 2026 (and how the best digital marketers are keeping up).
Key Takeaways
With the rise of agentic AI, machines can now handle the full campaign lifecycle, but human oversight is essential.
User discovery spans platforms like TikTok, Reddit, YouTube, and Meta. Each one requires unique formats, signals, and intent-based optimization.
Funnels are no longer static. AI personalizes journeys in real time based on user behavior, replacing manual segmentation and drip campaigns.
Chat assistants recommend brands based on trust and content relevance. Consistency and large language model optimization (LLMO) are key to inclusion.
Google’s traditional and AI systems (PMax, AI Overviews, Demand Gen, and Search) now operate as one. Aligning creative and goals across all touchpoints boosts results.
AI Agents Take Over Execution
We’re already seeing AI streamline much of a marketing team’s content production. But the new flex is agentic AI. We’re talking about autonomous “team members” that can now handle your entire campaign workflow.
According to PwC, nearly 80 percent of organizations have already adopted AI agents to some degree. And most plan to expand use as these systems move from experimentation into day-to-day operations.
This goes far beyond production and publishing. Large language models (LLMs) have advanced to the point that they can manage the full lifecycle. We’re talking about agents embedded into tools that can help:
Manage your customer relationship management (CRM) data
Analyze data performance
Provide campaign insights
Adjust ad bids for paid campaigns in real time
This year, AI is going from writing your content to autonomous operations. It handles the execution while you focus on strategy and oversight.
Search Everywhere Optimization Becomes Mandatory
For the last few years, “search everywhere” has been a catchy conference buzzword. In 2026, it’s a baseline for survival.
The era of the “Google-default” mindset is over. Discovery now happens across platforms, feeds, and AI systems. Today’s SEO is drifting toward search everywhere optimization and away from pure search engine optimization.
Your audience isn’t just “Googling it” anymore. They’re asking questions and validating purchases on the platforms they trust most. And each has its own algorithm, formats, and user behavior.
Pinterest needs eye-catching visuals with keyword-rich descriptions.
YouTube demands longer, high-value content with tight intros and strong engagement.
The most disruptive shift, however, is happening outside traditional feeds. Voice assistants like Alexa and Siri, and generative chat tools like ChatGPT, Gemini, or Claude are increasingly acting as answer engines.
Meanwhile, prompt-driven “vibe coding” is removing another barrier: digital marketers no longer need full engineering cycles to test new ideas.
Prompt-driven tools now make it possible to prototype calculators, quizzes, internal tools, and campaign utilities in hours instead of weeks.
Tools like Cursor and Replit let marketers translate plain-language instructions into working interfaces, lowering the barrier to experimentation. You still need engineering for production-scale products, but prompts now handle much of the early build and validation work.
Base44 is another example of a “vibe coding” platform that can turn your detailed descriptions into functional tools, reinforcing the same idea: Prompts are becoming a new control layer.
Everyone’s an engineer now. Look out, Silicon Valley!
The game has changed. You can now test fast, learn faster, and skip the bottlenecks that used to slow everything down.
Funnels Become Dynamic and Self-Optimizing
Static funnels are out. In 2026, customer journeys are becoming shorter and increasingly influenced in real time by AI systems.
It may seem shocking at first, but it makes sense when you zoom out and think about it. We are no longer pushing users through a pre-set funnel. We’re letting AI agents build the funnel around the user in real time.
In the early days of Google (and online shopping), a customer would have to visit several sites to research and read reviews—and, eventually, make a purchase. This is the classic marketing funnel we’re all familiar with. There’s a clearly defined top-of-funnel, mid-funnel, and bottom-of-funnel.
With generative AI tools now offering in-platform purchases, that funnel shrinks significantly. Your typical user can now research, build trust, and make a purchase all within an LLM like ChatGPT.
We’ve even begun to see major retailers like Walmart and Amazon move toward this model.
Walmart Sparky can answer user queries and pull in product recommendations for deeper questions. It even leads you to check out when you’re ready to purchase.
The same setup applies to Amazon Rufus, enabling customers to get details, get suggestions, get help, and get inspiration (and ultimately get stuff) all within one platform.
The result is higher engagement and faster conversions with way less manual work. These tools provide a hyper-personalized shopping experience faster than ever before. Platforms like Shopify and Etsy have also partnered with ChatGPT so customers can purchase products directly in the LLM.
AI Attribution Connects Content to Revenue
Attribution isn’t new, but it’s getting more accurate. AI-powered attribution now connects every touchpoint—from the first video view to the final click—with real revenue outcomes.
Platforms like Wicked Reports are enabling marketers to tie initial ad clicks to lifetime purchases and provide “first click” and “time decay” tools to help you pinpoint the most successful starting point for your customers’ buying journeys. This app also provides revenue forecasting to help B2C and e-commerce businesses reliably predict and scale their growth.
Your latest blog post may not have converted immediately, but it made the visitor trust you enough to subscribe for email updates. That email is the next stop in their journey, pushing them to check out your pricing page. AI sees it all and assigns value accordingly.
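To make the “time decay” model mentioned above concrete, here is a minimal sketch of how such a model typically splits credit; the half-life parameter and the numbers are illustrative assumptions, not Wicked Reports’ actual implementation:

```typescript
// Time-decay attribution: touchpoints closer to conversion earn
// exponentially more credit, halving every `halfLifeDays` back in time.
function timeDecayCredits(
  touchDays: number[],
  conversionDay: number,
  halfLifeDays = 7
): number[] {
  const weights = touchDays.map(
    (day) => Math.pow(2, -(conversionDay - day) / halfLifeDays)
  );
  const total = weights.reduce((a, b) => a + b, 0);
  return weights.map((w) => w / total); // fraction of revenue credit per touch
}

// Example: blog visit (day 0), email click (day 10), pricing page (day 13),
// conversion on day 14 → roughly [0.14, 0.37, 0.50].
console.log(timeDecayCredits([0, 10, 13], 14));
```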
With these new insights, you finally know which content moves the needle.
And it’s having a real financial impact. Teams using AI-driven marketing analytics report return on investment (ROI) improvements of roughly 300 percent and customer acquisition costs dropping by more than 30 percent.
Chat Assistants Reshape Discovery
We mentioned earlier how people’s search has evolved into asking AI chat tools like ChatGPT, Gemini, and Perplexity to answer their product questions. These platforms now include brand recommendations built right into the response, as well as the ability to shop for Shopify and Etsy products.
This is the same dynamic powering tools like Walmart Sparky and Amazon Rufus, where research and recommendations happen within a single AI experience.
These assistants don’t list 10 “sponsored” links, a la Google. They summarize what they trust. If they don’t mention your brand, you’re invisible in this new layer of discovery.
It takes more than gaming keywords to show up on these platforms. It’s all about relevance and consistency.
The more helpful, high-quality content you create around a topic, the more citations you’ll receive from users sharing it across the internet. Signals like structured content, schema markup, and consistent third-party validation help AI systems interpret your authority and decide when your brand is worth referencing.
This shift has given rise to large language model optimization (LLMO), a new branch of SEO focused on training AI to recognize and recommend your brand. If you’re not already thinking about LLMO, it’s time to get caught up.
The big takeaway here is that usefulness matters more than volume as discovery moves into AI systems. Provide enough high-quality answers to your audience’s questions, and the bots will start to bring your name up first.
Content Structure Becomes Even More Important
Old-school SEO was all about keywords. In 2026, performance increasingly comes from covering topics in depth and structuring content so both people and machines can understand it.
As we mentioned in the last section, search engines and AI assistants care more about how well you answer a question than how many times you use a keyword. That means your content needs to be thorough and easy to interpret at a glance, no matter who (or what) is doing the glancing.
NerdWallet does this well by organizing credit card content into a clear hub, then breaking it into tightly related subtopics that cover a ton of topical ground. It’s no longer a game of relying on individual keyword pages. Notably, NerdWallet is one of the most frequently cited websites in LLMs.
So, switch your strategy mindset from pages to topic clusters. Cover a topic from every angle across multiple assets. Use headers, FAQs, schema markup, and internal links to connect the dots.
The better you structure your content, the easier it is for AI to find and promote it.
Your target audience is searching across multiple channels in today’s environment. Focusing on individual keywords leaves a lot of opportunity on the table.
Today’s rising search platforms, like social media apps and LLMs, revolve around semantic queries.
People talk to these tools naturally and conversationally (some even use ChatGPT’s voice functionality). This means you can’t home in on a single keyword. Using a keyword cluster that covers the most popular phrasings customers may use is a much better way to ensure you’re covering what people are asking, increasing your probability of being found.
Queries within Perplexity demonstrate how people interact with search tools. They’re not always typing keywords. They’re asking full, conversational questions and expecting a clear answer.
You also have to consider that many users never click through to your site. Zero-click searches are growing fast, which means your content needs to deliver value right in the SERP—or immediately on platforms like social, LLMs, and voice.
If you’re still chasing individual keywords, you’re missing the bigger opportunity: becoming the trusted source on your topic.
Brand Trust Is Measured in Citations and Sentiment
AI doesn’t care how loud you are. It cares how often others talk about you, and what they say when they do.
Large language models prioritize brands with consistent, credible citations across the web. That includes mentions in blog posts, news articles, podcasts, reviews, and Reddit threads. The more quality signals you earn, the more likely AI is to recommend you.
But the mentions are just the beginning. Your performance in 2026 really boils down to your audience’s perception of you. Sentiment analysis now plays a big role in ranking. Positive discussions boost your chances of surfacing in AI results, while negativity can drag you down.
Until recently, this layer of discovery was almost impossible to measure. Traditional analytics don’t show when your brand is cited inside AI-generated answers. But a new class of AI visibility tools now tracks where and how often brands appear across platforms like ChatGPT, Perplexity, Claude, and Google’s AI Overviews, along with the surrounding context. So which brands are succeeding with this approach?
Brands like Patagonia and TOMS are shining examples of this. These companies leverage philanthropy to increase their goodwill and, in turn, their customers’ positive sentiment toward them.
Leveraging elements like philanthropy the right way switches these brands’ audiences from customers to loyal supporters.
This shift rewards brands that build goodwill rather than just backlinks. If your strategy still centers on shouting the loudest, you’ll get buried by brands that are being talked about, and for the right reasons.
Trust is now your most important ranking factor. Earn it or fade out.
Blogs Influence AI Models, Not Just Traffic
If you think blogs don’t “work” like they used to, you’re missing the bigger picture. They still do heavy lifting behind the scenes to shape AI output and position your brand as a go-to source.
In modern search, everything you publish helps shape how AI models understand your brand. When you consistently cover a topic with depth and clarity, models start to associate your name with that subject.
This new reality turns your blogs from content assets into signals of authority.
Even if search traffic dips due to zero-click results or AI summaries, the long-term payoff is still there. The more high-quality content you create, the more likely your brand is to be cited by the higher-profile AI channels and included in trusted content roundups.
Social Platforms Function as Search Engines
As the search everywhere trend shows us, search behavior is spreading. And, according to Statista, nearly a quarter of U.S. adults treat social media as their starting point for search.
People are searching TikTok to see how something works or whether a restaurant’s worth trying.
They’re using YouTube to learn how to install software or compare skincare brands. Considering that this is the largest search engine after Google, it’s a great platform to focus efforts on.
This matters because social search runs on a different logic than traditional SEO or AI answer engines. These platforms reward relevance through engagement.
Each platform has its own discovery logic. TikTok rewards watch time and velocity. YouTube favors relevance and retention. Instagram leans on recency and interaction.
Without optimizing for these platforms, you’re missing a huge part of the search pie. You should be treating social platforms like search engines, because your audience already does.
This is where more traditional on-page SEO thinking comes into play: dig into the questions your audience is asking, use clear, searchable titles, and open with hooks that “stop the scroll” and grab your viewers’ attention in the first three seconds.
Content Quality Outperforms Quantity Across Channels
Publishing more content won’t save you in 2026.
Social platforms are flooded, and search is competitive. On top of that, AI is getting better every day at filtering out thin, repetitive, or regurgitated content.
Consequently, original insights and pieces that actually teach something are rising to the top.
We see this in emerging trends. For starters, the average number of posts per day among brands has decreased to 9.5. Engagement is moving in the opposite direction, with inbound interactions increasing by roughly 20 percent year over year.
Instead of posting five times a day, focus on publishing things worth reading and sharing, even if it’s only one well-structured piece of content per week.
A thoughtful video or long-form LinkedIn breakdown that sparks conversation will do much better than 100 AI-generated blog posts that barely scratch the surface of a topic.
Take National Geographic, for example. Rather than posting constantly, it focuses on educational storytelling. Check out its TikTok grid.
Content creators are experiencing the benefits of this strategy in real time.
A recent survey finds that 35 percent of creators say they’re seeing higher potential ROI from longer-form content formats, with 39 percent saying they’re seeing better engagement. And almost half (49 percent) say that the choice to produce longer-form content is helping them reach a wider audience.
If your strategy is still built around churning out content to “stay active,” it’s time to shift. Fewer pieces. Bigger impact. Better outcomes.
That’s what wins in 2026.
Conversion Happens On-Platform, Not On-Site
The platforms people use every day are getting very good at keeping them there.
Think about it: Nearly every social platform has lead forms and lets you shop inside the app. The goal of these features is to help you convert without ever leaving their platform.
Instagram and TikTok, for example, have fully integrated shopping experiences. And it’s working. Sales through social media channels are forecasted to reach nearly 21 percent in 2026.
Google’s even testing AI-generated product recommendations with built-in checkout links, much like the Etsy and ChatGPT integration. The whole point is to remove friction and keep the experience seamless.
That shift changes what a “landing page” even means. In many cases, it’s a native form, a product card, or an in-app checkout flow that closes the deal on the spot.
Your website still matters, but forcing every conversion to happen there can introduce unnecessary drop-off. When users are ready to act, the simplest path usually wins.
This shift is giving rise to what some teams now call checkout optimization, and it’s getting some pretty serious results. E-commerce brands with 1,000 to 2,000 orders per month are implementing checkout optimization and seeing measurable gains in shipping revenue and order total.
When you meet users where they are, you lower the barrier to action. No load times. No messy redirects. Just a quick tap or swipe to buy, book, or sign up.
Video Becomes a Primary Search and AI Input
Video is increasingly becoming more than just a distribution format. It’s now a primary way people search—and a growing input for AI systems.
Search engines and AI platforms now index video much like they do written content, pulling from structural signals to generate results. If those signals aren’t there, the video might as well not exist.
What do those signals look like in practice?
Well, because search engines and AI platforms can’t watch your videos, they instead rely on clean transcripts, keyword-rich titles and descriptions, and clear segmentation. Think chapters, not rambles. Structure is what makes video searchable.
The more structured and searchable your video content, the more likely it is to be cited by AI assistants.
Text still matters. But if video isn’t part of your SEO and discovery strategy, you’re leaving serious visibility on the table.
Paid Media Shifts to AI-Led Campaigns
We’ve seen AI-driven paid media campaigns for some time now, but platforms like Google’s Performance Max and Meta’s Advantage+ are refining how it’s done. These platforms automatically test creative and placements to hit performance goals, and they’re experimenting with AI-powered segmentation and ad bidding.
The result is less manual control and more system-led optimization, which is a benefit for many marketers. Retail marketers, for example, have seen a 10 percent to 25 percent lift in their return on ad spend (ROAS) by implementing AI-powered campaign elements.
But “hands-off” doesn’t mean “set it and forget it.”
In this model, your role shifts from managing campaigns to training the system. The better your inputs—creative variety, first-party data, and clear conversion signals—the better your results.
Lazy targeting and generic ads just get ignored.
Want to lower customer acquisition cost (CAC) or increase ROAS? Focus on refining your creative and uploading strong first-party data. AI will handle testing and optimization, but it can’t fix bad inputs.
Savvy marketers are shifting their roles from campaign operators to strategy leads. They’re spending less time on dashboards and more time building assets that actually convert, such as a robust content library or unique, impactful insights from proprietary data.
It all comes down to this: AI runs the ads, but you train it. If you’re not giving the algorithm something great to work with, you’re not going to like what it gives back.
FAQs
What are the digital marketing trends for 2026?
In 2026, AI is running full campaigns, dynamic funnels are replacing traditional static ones, and users are increasingly discovering brands across platforms. Chat assistants like ChatGPT now also recommend brands, and SEO is more about structured topics than keywords. Quality content outperforms quantity, and conversion often happens off your site.
How can businesses stay updated on marketing trends?
Follow trusted industry blogs (like NeilPatel.com), subscribe to marketing newsletters, and keep an eye on platform updates from the big players (Google, Meta, and TikTok). Tools like Ubersuggest can also help spot shifts in search behavior. But more than anything, continue testing and tracking, and stay close to what your audience responds to.
Conclusion
Many experts say that marketing is changing, but the fact is that it’s already changed.
AI now drives the full spectrum of content marketing. Platforms prioritize native conversion. Content shapes how machines and people see your brand. If you’re still playing by old rules—keyword-centric strategy, manual funnels, or high-volume posting—you’re going to get left behind.
Winning in 2026 means adapting quickly to emerging digital marketing trends by thinking strategically and building trust across every touchpoint.
If you’re not sure where to start, check out my guide on search engine trends to see how modern discovery actually works today.
The marketers who move first always get the advantage. So, make your move.
Following last year’s content marketing playbook won’t cut it in 2026.
AI is evolving how we create, but human connection still drives what performs. Search behavior is splintering across platforms, and brands are being judged not just on what they publish but also on how it shows up.
To win this year, marketing pros need to be smarter about what they’re doing. That means your content must be rooted in real insights about how people actually buy from your brand.
This complete guide to content marketing breaks down what’s changing, what’s working, and where to focus your time and budget. If you’re serious about growing through content in 2026, you need to understand the shifts shaping the space.
Key Takeaways
AI should speed up execution, not replace strategy. Use it to draft and repurpose content, but rely on human perspective and editing to determine performance.
Content that feels human outperforms content that feels polished. Audiences respond to authentic opinions and usefulness versus brand-safe, committee-written copy.
Thought leadership now requires original insight. Repackaging what already ranks won’t build authority; bold takes, experienced authors, and first-party or proprietary data will, and that data also makes your content more original and harder to copy.
Distribution is half the strategy. Content needs a plan for where and how it gets discovered across platforms, not just published on a blog.
Measure influence, not output. The content that matters most is what changes thinking and earns trust, not what fills a calendar.
AI is great for maximizing your efficiency. It can drastically cut the time you spend on mundane content marketing tasks like researching and building outlines. It can even save you time by helping you repurpose old content into new formats. That kind of scale used to take teams. Now, all it takes is prompts.
But don’t mistake speed for strategy.
AI doesn’t know your customer. It doesn’t understand your brand’s voice or point of view—the elements of your brand or product that actually matter to people. Introducing those elements and ensuring they stay intact is on you.
Use AI to take the grunt work off your plate (i.e., building drafts or summarizing competitor content). But when it comes to telling your story, positioning your offer, or crafting something people want to read and share, human judgment still wins.
You can feel the difference between templated, AI-written content and something with a real perspective. So can your audience. If your content feels robotic or generic, they’re gone. No one shares or converts from content that reads like it came off an assembly line.
So yes, use AI. Just don’t hand it the keys to your content strategy. If you’re not putting in the human effort to edit and elevate what comes out, you’re just publishing noise.
Content Must Feel More Human, Not More Polished
People are tired of content that sounds like it was written by a committee.
You know the type: no strong opinions and so sanitized it could’ve come from any brand in your space. It’s forgettable. This year, forgettable doesn’t work.
Your audience doesn’t want another corporate how-to. They want to hear from someone who gets it. Someone who’s been in their shoes and isn’t afraid to say what actually works—and what doesn’t.
That doesn’t mean being sloppy. It means being real.
Ditch the fluff. Cut the clichés. Talk like a smart peer, not a brand trying to tick SEO boxes. Share what you’ve learned and what surprised you. That’s what builds trust. That’s what gets people to read and come back.
Slite’s piece on “people first” only going so far for parents is a great example. The post comes from a parent on their team, which immediately signals firsthand experience on the subject. There’s no promotional content anywhere in the blog, and the focus is on entertaining and educating readers.
Let me pause a second here and make something clear: authentic doesn’t mean unedited. It means intentional. Sure, edit your pieces for grammar and structure, but don’t sand down the voice. Let the human fingerprints show.
In a world of AI-generated everything, human content stands out, not by being perfect, but by being authentically helpful and worth someone’s time.
Thought Leadership Requires Saying Something New
You don’t become a thought leader by echoing what everyone else is already saying.
Too many blogs and LinkedIn posts are just rewrites of what’s already ranking. They quote the same stats and land on the same safe conclusions. That’s not thought leadership, that’s content recycling.
You need to bring something new to the table if you want people to see you as an authority. That could mean sharing a bold opinion others won’t say out loud. It could be a unique framework you’ve developed through real experience. Or it might be calling out what isn’t working anymore, even if it used to.
Google’s E-E-A-T principles (experience, expertise, authoritativeness, and trustworthiness) favor exactly this kind of content. You’re not just writing for algorithms anymore. You’re writing to earn trust from real people and search engines. Along the same lines, throwing anonymous blog posts out into the ether won’t cut it either. Adding named authors with credentials to your blog posts is essential for E-E-A-T.
Brand Voice Is a Strategic Asset
AI can write. So can your competitors. Sticking to your unique voice is what’s going to set you apart.
In a sea of content that all sounds the same, brand voice is what makes people recognize you, even without seeing your logo. It’s more than tone or personality. It’s how you show up. And in 2026, it’s one of your biggest strategic advantages.
The best brands sound human. Clear and consistent across platforms, whether it’s a blog post, a LinkedIn comment, or a product page. That consistency creates familiarity, which over time builds trust and loyalty, two essential elements of marketing on the modern playing field.
Innocent Drinks is a British brand that’s a great example of a unique tone and voice. They keep customer interactions fun with cheeky British humor and self-deprecating jokes. The laid-back, conversational tone presents their smoothie drinks amidst daily jokes, weather updates, and more that keep their customers coming back.
But the catch is: you have to guard your brand voice with your life.
Well, maybe it’s not that drastic. But you at least have to define it, teach your writers how to use it, and maybe most importantly, defend it—especially when AI starts diluting it with generic phrasing or over-polished outputs.
Think of brand voice like a design system. It should guide every piece of content you publish. The goal isn’t to sound perfect. The goal is to sound unmistakably like you across formats, channels, and teams.
When everything else feels copy-pasted, your voice is what makes people stop scrolling and actually listen.
Video Isn’t Optional in a Content Strategy
If video still feels like “bonus content” to your team, you’re already behind.
Video, short and long-form, is now a core part of how people discover and share information. It’s not just for YouTube or TikTok anymore. It belongs in your blog strategy, your LinkedIn posts, your email sequences, and even your whitepapers. YouTube has risen to become the second-largest search engine, as well as a top source for Google Gemini.
The smartest content teams don’t treat video as a separate effort. They treat it as an extension of what they’re already creating.
Wrote a blog post that’s performing well? Turn it into a 60-second explainer for Instagram. Got a data-packed whitepaper? Break it into a mini-series of clips or animated infographics. Publishing a thought leadership piece? Record a quick POV video that puts a face (and voice) to the ideas.
This isn’t about adding more work. It’s about getting more mileage from what you’ve already built.
People scroll past walls of text. But they’ll pause for a story or a strong hook in motion. Video improves retention and makes your message stick.
In 2026, you should be including video in your strategy from the start.
Distribution Is Just as Important as Creation
If you’re not planning for distribution, you may just be publishing into the void.
Too many marketers hit “publish” and hope for traffic. But in 2026, the real game happens after the content goes live. Distribution is half the strategy.
Every piece you create should have a plan for where it lives and how it spreads. That could mean breaking your blog post into an X thread or syndicating it as a native article on LinkedIn or Medium.
And don’t underestimate the power of partnerships. Influencers, creators, and subject matter experts can extend your reach with the right angle and format.
This is where search everywhere optimization becomes essential. People aren’t just searching on Google anymore. They’re searching across all the major social media platforms, LLMs like ChatGPT and Perplexity, and e-commerce sites like Amazon. You need to meet them where they are, in the format they prefer.
Good content doesn’t go viral by luck. It travels because someone planned the route.
So before you write your next post, ask yourself: how will people actually find this? If you don’t have a solid answer, you’re not done yet.
Content Must Map to the Buyer Journey, Not Just the Funnel
A customer’s actual buying journey is moving away from the classic funnel we all know and love. The stages themselves aren’t changing, but where people move through them is. People can now shop inside ChatGPT, which means they can follow the entire funnel without ever leaving an LLM.
Real decision-making is messy. People bounce between tabs, skim reviews, watch videos, compare products, and ask peers for input—all before ever booking a demo or hitting “buy.” If your content only speaks to top-of-funnel traffic, you’re leaving serious revenue on the table.
Modern content strategy needs to follow the buyer wherever they go, not just the funnel stages.
That means going beyond how-to posts and SEO guides. You need content that helps buyers decide. Product comparisons. Honest breakdowns of pricing and features. Content that tackles objections head-on. Even onboarding previews and post-purchase FAQs count. They reduce friction and increase trust. This kind of content is useful for appearing in LLMs as well.
Not only does this kind of content help convert, but it also ranks. Buyers search for “[Product A] vs [Product B],” “Is [Brand] worth it?”, and “How hard is it to implement [Tool]?” If you’re not showing up there, your competitor will.
So build for real behavior, not static funnels. Meet your buyer where they are—digging deep into research and looking for clarity before they buy.
Refreshing Existing Content Beats Churning Out New Posts
More content doesn’t always mean more results.
If you’re constantly creating from scratch but ignoring your old posts, you’re missing one of the easiest wins in content marketing: updates.
Refreshing content isn’t just minor changes; it’s making your best-performing or best-potential content even stronger. That could mean improving the structure, adding internal links, expanding thin sections, or aligning it with new search intent.
And it works. Updated content often ranks faster and converts better because it already has history and accumulated signals, like backlinks. Google rewards freshness, but it also loves authority.
Instead of publishing five new blog posts next month, what if you refreshed five that are slipping in rankings? Or turned an old listicle into a detailed comparison guide? That’s not less work, it’s smarter work.
Start by running a quick content audit. Identify top traffic drivers and declining or outdated post topics. From there, prioritize updates that align with current search demand and business goals.
New content still matters, but refreshing what you already built often delivers a faster, more predictable ROI. Don’t start from zero when there’s gold in your archives.
User-Generated Content Builds Credibility and Community
Positive product recommendations, reviews, and stories from your customer base are gold for social proof.
That’s the power of user-generated content (UGC). Testimonials and stories or spotlights can build trust faster than anything you write yourself. They’re authentic social proof, and one of the most underused levers in content strategy.
Coca-Cola’s Share a Coke campaign is a great example of this. The company rolled out products with some of the most popular first names printed on each can or small bottle. Store displays encouraged shoppers to find a can with their name on it, take a picture, and post it to social media with the hashtag #ShareACoke. The result: social media feeds flooded with user posts.
UGC is such a powerful strategy because it reduces your content lift while increasing credibility. Instead of creating everything from scratch, you’re curating voices from your community. A five-minute video from a happy customer or a LinkedIn post from an employee can do more than a polished landing page.
So how do you get it? Ask. Prompt your audience to share their experiences. Feature real users in your blog posts or newsletters. Turn customer feedback into quote graphics or build case studies around standout use cases.
This strategy becomes really powerful when you turn it into a system. Create UGC submission forms. Add review prompts to your post-purchase emails. Encourage your team to share behind-the-scenes stories on social.
The more people see themselves in your brand, the more they want to be part of it. UGC turns customers into advocates, and that’s content you can’t fake.
Measurement Is Moving to Influence, Not Just Output
Content volume used to be the metric. How many blogs did we publish? How many posts went live?
Now, it’s about impact.
Smart teams aren’t asking, “How much did we ship?” They’re asking, “What moved the needle?” That means tracking content that supports real business goals, not just filling up a calendar.
Engagement quality matters more than vanity metrics. Are people sharing or talking about it? Did it convert to revenue or reduce friction in the sales process? That’s the kind of content worth doubling down on.
Brand lift and even SEO performance are shifting, too. With AI Overviews reshaping how content appears in search, your content marketing performance hinges on owning the conversation through citations that strengthen brand trust.
Writers play a huge role here. When your content solves a real problem or answers a specific question better than anyone else, it sticks. That influence compounds.
So shift your mindset. Don’t just create content that gets clicks. Create content that changes people’s thinking or provides new insights. That’s the metric that matters now.
FAQs
What is the future of content marketing?
Content in 2026 is shifting toward authenticity. Brands are focusing on aspects like voice and distribution over volume. Repurposing across channels, creating content with real perspective, and measuring influence over output are now core strategies. The new goal is content that actually earns trust and drives action.
Conclusion
Creating a winning content marketing strategy in 2026 will certainly look different. The smart use of AI to scale instead of substitute, while still leaning on human editing and content elevation, will be a huge brand separator. You and your team can do that effectively by building a voice people recognize, creating content that feels human, and aligning it all to authentic buyer journeys and behaviors.
You don’t need to chase every trend. Focus on strategy, quality, and distribution that drives results.
Because in a world full of content, the only stuff that stands out is the kind that actually matters.
The Future of Content Marketing: A 2026 Guide
Today we’ve released the February 2026 Discover core update. This is a broad update to our systems that surface articles in Discover. Our testing shows that people find the Discover experience more useful and worthwhile with this update.
January didn’t bring flashy product launches. It brought something more valuable: clarity.
Platforms spent the month explaining how their systems actually work. Google detailed JavaScript indexing rules that matter for modern sites. Reddit opened up automation insights most platforms keep hidden. Amazon positioned itself as a legitimate cross-screen player with first-party data advantages traditional TV can’t match.
Automation kept expanding, but with firmer guardrails. AI continued to compress discovery. Zero-click experiences grew. Brands without clear expertise signals or off-site authority started disappearing from AI-generated answers.
For digital marketers, January reinforced one reality: performance in 2026 depends less on clever tactics and more on getting fundamentals right across channels.
Key Takeaways
Indexing logic must live in base HTML, not JavaScript. Google may skip rendering pages with noindex directives in initial HTML, leaving valuable content invisible even if JavaScript removes the tag later.
Performance Max channel reporting is now essential, not optional. Budget pressure is currently your sharpest lever for managing underperforming surfaces like Display or Discover.
Share of search is becoming a better demand signal than traffic alone. As AI reduces click-through rates, measuring how often people search for your brand versus competitors reveals momentum better than vanishing clicks.
Digital PR now directly impacts AI visibility. Authoritative mentions and credible coverage determine whether AI systems recognize and recommend your brand in zero-click answers.
Influencer marketing reached enterprise maturity in January. Unilever’s 20x creator expansion and 50% social budget shift prove influence at scale is baseline strategy, not experimentation.
Review monitoring must track losses, not just gains. Google’s AI is deleting legitimate reviews without notice, affecting rankings and trust faster than new reviews can rebuild them.
Search, SEO, and Indexing Reality Checks
Search teams started 2026 with clearer rules, not more flexibility. Google spent January confirming how it treats indexing signals on JavaScript-heavy sites.
Google Clarifies Noindex and JavaScript Behavior
Google confirmed that pages with a noindex directive in their initial HTML may not get rendered at all. Any JavaScript meant to remove or modify that directive might never execute.
Indexing intent belongs in base HTML. JavaScript should enhance experiences, not define crawl behavior. For headless stacks and dynamic frameworks, search engines respond to what they see first, not what you hope they’ll see after rendering.
If your site uses React, Next.js, Angular, or Vue with client-side rendering, audit how noindex tags are implemented. Server-side rendering or static generation solves most of these issues.
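For example, in a Next.js App Router project, one way to keep the directive server-side is the `metadata` export, which writes the robots tag into the initial HTML. A minimal sketch; the route and page component are hypothetical:

```typescript
// app/internal-drafts/page.tsx — hypothetical route for illustration.
import type { Metadata } from "next";

// Emitted in the server-rendered HTML as
// <meta name="robots" content="noindex, nofollow">,
// so crawlers see the directive without executing client-side JavaScript.
export const metadata: Metadata = {
  robots: { index: false, follow: false },
};

export default function InternalDraftsPage() {
  return <main>Internal draft content</main>;
}
```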
Google Clarifies JavaScript Canonical Rules
Google detailed how canonical tags work on JavaScript-driven pages. Canonicals can be evaluated twice: once in raw HTML and again after rendering. Conflicts between the two create real indexing problems.
Server-rendered HTML pointing to one canonical while client-side JavaScript points to another forces Google to pick. That choice often hurts rankings quietly, without throwing obvious errors in Search Console.
Teams need to decide where canonicals live and enforce consistency. One canonical after rendering. No ambiguity between server and client.
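Continuing the same hypothetical Next.js setup, the canonical can be declared once on the server so raw HTML and rendered HTML never disagree (the URL is a placeholder):

```typescript
// Same hypothetical setup: one server-declared canonical, no client override.
import type { Metadata } from "next";

export const metadata: Metadata = {
  alternates: { canonical: "https://example.com/pricing" },
};
```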
December Core Algorithm Update Wraps
Google’s December 2025 core update finished after roughly 18 days of volatility. Sites with stale content, weak expertise signals, or unclear intent lost ground. Others gained visibility by being more useful and better aligned with user needs.
Core updates no longer feel disruptive because they’re frequent. Three broad core updates rolled out in 2025 alone. The advantage now comes from consistent execution, not post-update recovery tactics.
Paid Search, Automation, and Audience Control
Paid media keeps moving toward automation. January showed where control still exists and where it doesn’t.
Using Google’s PMax Channel Report More Strategically
The channel report shows where Performance Max spends, but you still can’t control bids or exclusions at a granular level. What you can control is budget pressure. Is one surface consistently underperforming? Budget becomes your corrective lever. Pull back overall spend, and PMax reallocates to better-performing channels automatically.
Teams that review this report monthly make better creative and investment decisions. Track this data over time. Patterns emerge. You start understanding which channels deliver at which funnel stages, even inside automation.
Google Drops Audience Size Minimums
Google lowered minimum audience size thresholds to 100 users across Search, Display, and YouTube. Previous minimums ranged from 1,000 users down to a few hundred depending on network and list type.
This opens doors for smaller advertisers and niche segments. Remarketing lists, CRM uploads, and custom audiences that previously failed minimums now become usable.
Smart teams will use this to test tighter segmentation strategies. But don’t chase volume that isn’t there. A 100-user audience won’t scale into a growth channel overnight.
Bing Tests Google-Style Ad Grouping
Bing briefly tested a sponsored results format similar to Google’s recent changes. Multiple ads grouped under a single label, with only the first result carrying an ad marker.
The test ended quickly, but the signal matters. Search platforms are converging on similar layouts. How ads appear now affects click quality and intent, not just click-through rate.
Social Platforms and Performance Content
Social platforms spent January rewarding clarity while punishing shortcuts.
Reddit Launches Max Campaigns
Reddit introduced Max Campaigns, an automated ad product handling targeting, placements, creative, and budget allocation in real time.
What stands out is visibility. Reddit surfaces audience personas and engagement insights that most automated systems hide. Early testers report 27% more conversions and 17% lower CPA on average.
Testing works best when anchored to existing campaigns. Replicate your best-performing Reddit campaign as a Max Campaign. Let automation prove efficiency gains with known benchmarks.
Instagram Caps Hashtags
Instagram rolled out a five-hashtag limit across posts and reels. This confirms discovery on Instagram is driven by AI-based content understanding, not hashtag volume.
Hashtags now function like keywords. They clarify intent and help Instagram’s systems categorize content. They don’t manufacture reach.
Captions, on-screen text, subtitles, and visuals do the heavy lifting. Choose five hashtags that directly describe your content. Mix specificity levels: one broad category tag, two niche topic tags, one community hashtag, one branded hashtag.
LinkedIn Shares Performance Guidance for 2026
LinkedIn reiterated that human perspective drives performance. Video continues outperforming other formats. Hashtags do not impact distribution. Automated engagement and content pods face increased scrutiny.
Posting two to five times per week remains effective. AI can support thinking, but content still needs lived experience and clear points of view.
Brand Visibility, Authority, and Demand Measurement in an AI Era
AI-driven discovery is reshaping how brands get surfaced and evaluated.
What AI Search Means for Your Business
AI-generated summaries and zero-click experiences shape early discovery now. Users often form opinions before visiting a site. Google’s AI Overviews, ChatGPT’s SearchGPT, and Perplexity answer questions directly, compressing or eliminating the need to click through.
AI favors brands with clear expertise, structured content, and external validation. Generic explanations get compressed into summaries that strip away brand identity. Thin content disappears entirely.
Optimization now includes being understandable and credible to machines, not just persuasive to human readers. That means structured data markup, clear content hierarchy, author credentials, and topical authority signals.
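As one concrete example of being “understandable to machines,” a page can declare its author credentials with schema.org markup. The sketch below uses hypothetical names throughout and shows the shape of such an object before it is embedded in the page head as a script tag of type application/ld+json:

```typescript
// Hypothetical schema.org Article markup with author credentials.
const articleSchema = {
  "@context": "https://schema.org",
  "@type": "Article",
  headline: "What AI Search Means for Your Business",
  author: {
    "@type": "Person",
    name: "Jane Doe", // placeholder author
    jobTitle: "Head of Search",
    url: "https://example.com/about/jane-doe",
  },
  datePublished: "2026-01-15",
};

// Serialize for embedding in the page head.
const jsonLd = JSON.stringify(articleSchema);
```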
Share of Search Becomes a Core KPI
As AI reduces click-through rates, traffic becomes a weaker signal of demand. Share of search fills that gap.
It measures how often people look for your brand compared to competitors. That correlates strongly with market share and future growth. Brands with rising share of search typically see revenue growth follow within quarters, even if organic traffic stays flat.
Calculate share of search by tracking branded search volume for your brand and key competitors over time. Tools like Google Trends, Semrush, or Ahrefs make this accessible.
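As a worked example, here's a small Python sketch of the share-of-search calculation. The brand names and monthly volumes are hypothetical; in practice you'd pull them from a tool like Google Trends, Semrush, or Ahrefs.

```python
# Share of search = your branded search volume divided by the total
# branded volume for you plus your key competitors.
# Hypothetical monthly branded search volumes:
monthly_branded_volume = {
    "your-brand": 42_000,
    "competitor-a": 31_000,
    "competitor-b": 18_500,
}

total = sum(monthly_branded_volume.values())
for brand, volume in monthly_branded_volume.items():
    share = volume / total * 100
    print(f"{brand}: {share:.1f}% share of search")

# Track this monthly; a rising share often precedes revenue growth
# even when organic traffic stays flat.
```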
Digital PR Matters More Than Ever
AI systems recommend brands they recognize and trust. That trust is built off-site, not through on-page optimization.
Authoritative mentions, expert commentary, and credible coverage now influence visibility across AI-driven experiences. Links still matter, but reputation matters more.
PR, SEO, and content strategy can no longer operate independently. Authority compounds when they align. If you’re not investing in Digital PR alongside traditional SEO, you’re optimizing for a search ecosystem that’s rapidly shrinking.
Video, CTV, and Cross-Screen Media Strategy
Video buying is consolidating across screens.
Amazon Emerges as a Cross-Screen Advertising Player
Amazon is positioning itself as a unified advertising ecosystem across Prime Video, live sports, audio, and programmatic inventory. Layered with first-party shopper data, this creates a powerful performance and measurement advantage traditional TV buyers can’t match.
Amazon now competes higher in the funnel through premium video and live sports while retaining lower-funnel accountability through its commerce data. Interactive features let you add “add to cart” overlays directly in OTT video ads.
CTV Breaks the 30-Second Format
Streaming dominates TV consumption. Ad formats are finally catching up. Interactive and nontraditional CTV units are gaining traction, supported by early standardization efforts from IAB Tech Lab.
Traditional :15 and :30 spots still work, but they blend into an increasingly crowded environment. Emerging formats offer differentiation in lower-clutter streaming contexts.
Brands that test early build creative and performance advantages before these formats normalize and competition increases.
Pinterest Acquires tvScientific
Pinterest’s acquisition of tvScientific connects intent-driven discovery with CTV buying. This closes a long-standing measurement gap between inspiration and awareness channels.
For brands rooted in discovery—home decor, fashion, food, travel, DIY, beauty—this creates a clearer path from interest to action.
Brand-Led Attention and Influence at Scale
Attention increasingly flows through people, communities, and culture-driven media.
Unilever’s Influencer Expansion
Unilever announced plans to work with 20 times more influencers and shift half its ad budget to social. This isn’t a test. It’s a structural reallocation signaling influencer marketing has reached enterprise maturity.
Unilever’s SASSY framework now activates nearly 300,000 creators. The company reported category-wide outperformance, attributing significant gains to influencer-driven campaigns.
Brands still treating creators as side projects will struggle to compete against organizations running influencer programs with the same rigor and budget as paid search or programmatic display.
Google’s AI Is Deleting Reviews
Google’s AI moderation is removing reviews at scale, including legitimate ones, often without notice. Business owners report hundreds of reviews disappearing overnight.
That affects rankings, conversion rates, and consumer trust. Reputation strategy now includes monitoring review loss, not just tracking new reviews.
Check your Google Business Profile weekly. Document total review count and average rating. When drops occur, investigate patterns. Better yet, diversify review platforms beyond Google.
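One lightweight way to run that weekly check is a simple log. The sketch below appends each snapshot to a CSV and flags any drop in total review count; the file name and numbers are hypothetical, with the counts entered by hand from your Google Business Profile dashboard.

```python
import csv
from datetime import date
from pathlib import Path

LOG = Path("gbp_review_log.csv")  # hypothetical log file

def log_snapshot(total_reviews: int, avg_rating: float) -> None:
    """Append this week's numbers (read manually from your GBP
    dashboard) and flag any drop in total review count."""
    prev_total = None
    if LOG.exists():
        with LOG.open(newline="") as f:
            rows = list(csv.reader(f))
        if rows:
            prev_total = int(rows[-1][1])  # column 1 = total reviews
    with LOG.open("a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), total_reviews, avg_rating])
    if prev_total is not None and total_reviews < prev_total:
        print(f"Review count dropped: {prev_total} -> {total_reviews}. Investigate.")

log_snapshot(total_reviews=212, avg_rating=4.6)  # hypothetical numbers
```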
Experimentation and Growth Discipline
Sustainable growth depends on knowing why a test exists before judging its outcome.
Growth vs Optimization: Drawing the Line
Growth experiments explore new opportunities. Optimization improves what already works. Blurring the two creates misaligned expectations and poor decision-making.
Clear intent leads to clearer measurement and stronger buy-in. Teams that label tests correctly scale with more confidence.
What Digital Marketers Should Take Forward
Platforms are clarifying rules. AI rewards authority and consistency. Measurement is shifting away from clicks alone.
The advantage in 2026 comes from alignment across teams and channels. Durable signals outperform clever workarounds.
Indexing logic must live in base HTML. Performance Max channel reporting is essential. Share of search reveals momentum. Digital PR impacts AI visibility. Influencer marketing reached enterprise maturity. Review monitoring must track losses.
This is the work we focus on every day at NP Digital.
If you want help aligning fundamentals across SEO, paid media, content, and PR in a way that compounds over time, let’s talk.
AI hallucinations became a headline story when Google’s AI Overviews told people that cats can teleport and suggested eating rocks for health.
Those bizarre moments spread fast because they’re easy to point at and laugh about.
But that’s not the kind of AI hallucination most marketers deal with. The tools you probably use, like ChatGPT or Claude, likely won’t produce anything that bizarre. Their misses are sneakier, like outdated numbers or confident explanations that fall apart once you start looking under the hood.
In a fast-moving industry like digital marketing, it’s easy to miss those subtle errors.
This made us curious: How often is AI actually getting it wrong? What types of questions trip it up? And how are marketers handling the fallout?
To find out, we tested 600 prompts across major large language model (LLM) platforms and surveyed 565 marketers to understand how often AI gets things wrong. You’ll see how these mistakes show up in real workflows and what you can do to catch hallucinations before they hurt your work.
Key Takeaways
Nearly half of marketers (47.1 percent) encounter AI inaccuracies several times a week, and over 70 percent spend hours fact-checking each week.
More than a third (36.5 percent) say hallucinated or incorrect AI content has gone live publicly, most often due to false facts, broken source links, or inappropriate language.
In our LLM test, ChatGPT had the highest accuracy (59.7 percent), but even the best models made errors, especially on multi-part reasoning, niche topics, or real-time questions.
The most common hallucination types were fabrication, omission, outdated info, and misclassification—often delivered with confident language.
Despite widespread awareness of hallucinations, 23 percent of marketers feel confident using AI outputs without review. Most teams add extra approval layers or assign dedicated fact-checkers to their processes.
What Do We Know About AI Hallucinations and Errors?
An AI hallucination happens when a model gives you an answer that sounds correct but isn’t. We’re talking about made-up facts or claims that don’t stand up to fact-checking or a quick Google search.
And they’re not rare.
In our research, more than a third of marketers (36.5 percent) say hallucinated or false information has slipped past review and gone public. These errors come in a few common forms:
Fabrication: The AI simply makes something up.
Omission: It skips critical context or details.
Outdated info: It shares data that’s no longer accurate.
Misclassification: It answers the wrong question, or only part of it.
Hallucinations tend to happen when prompts are too vague or require multi-step reasoning. Sometimes the AI model tries to fill the gaps with whatever seems plausible.
AI hallucinations aren’t new, but our dependence on these tools is. As they become part of everyday workflows, the cost of a single incorrect answer increases.
Once you recognize the patterns behind these mistakes, you can catch them early and keep them out of your content.
AI Hallucination Examples
AI hallucinations can be ridiculous or dangerously subtle. These real AI hallucination examples give you a sense of the range:
Fabricated legal citations: Recent reporting shows a growing number of lawyers relying on AI-generated filings, only to learn that the cases or citations don’t exist. Courts are now flagging these hallucinations at an alarming rate.
Health misinformation: Revisiting our example from earlier, Google’s AI Overviews once claimed eating rocks had health benefits in an error that briefly went viral.
Fake academic references: Some LLMs will list fake studies or broken source links if asked for citations. A peer-reviewed Nature study found that ChatGPT frequently produced academic citations that look legitimate but reference papers that don’t exist.
Factual contradictions: Some tools have answered simple yes/no questions with completely contradictory statements in the same paragraph.
Outdated or misattributed data: Models can pull statistics from the wrong year or tie them to the wrong sources. And that creates problems once those numbers sneak into presentations or content.
Our Surveys/Methodology
To get a clear picture of how AI hallucinations show up in real-world marketing work, we pulled data from two original sources:
Marketers survey: We surveyed 565 U.S.-based digital marketers using AI in their workflows. The questions covered how often they spot errors, what kinds of mistakes they see, and how their teams are adjusting to AI-assisted content. We also asked about public slip-ups, trust in AI, and whether they want clearer industry standards.
LLM accuracy test: We built a set of 600 prompts across five categories: SEO/marketing, general business, industry-specific verticals, consumer queries, and control questions with a known correct answer. We then tested them across six major AI platforms: ChatGPT, Gemini, Claude, Perplexity, Grok, and Copilot. Humans graded each output, classifying it as fully correct, partially correct, or incorrect. For partially correct or incorrect outputs, we also logged the error type (omission, outdated info, fabrication, or misclassification).
For this report, we focused only on text-based hallucinations and content errors, not visual or video generation. The insights that follow combine both data sets to show how hallucinations happen and what marketers should watch for across tools and task types.
How AI Hallucinations and Errors Impact Digital Marketers
We asked marketers how AI errors show up in their work, and the results were clear: Hallucinations are far from a rarity.
Nearly half of marketers (47.1 percent) encounter AI inaccuracies multiple times a week. And more than 70 percent say they spend one to five hours each week just fact-checking AI-generated output. That’s a lot of time spent fixing “helpful” content.
Those misses don’t always stay hidden.
More than a third (36.5 percent) say hallucinated content has made it all the way to the public. Another 39.8 percent have had close calls where bad AI info almost went live.
And it’s not just teams spotting the problems. More than half of marketers (57.7 percent) say clients or stakeholders have questioned the quality of AI-assisted outputs.
These aren’t minor formatting issues, either. When mistakes make it through, the most common offenders are:
Inappropriate or brand-unsafe content (53.9 percent)
Completely false or hallucinated information (43.5 percent)
Formatting glitches that break the user experience (42.5 percent)
So where does it break down?
AI errors are most common in tasks that require structure or precision. Here are the daily error rates by task:
HTML or schema creation: 46.2 percent
Full content writing: 42.7 percent
Reporting and analytics: 34.2 percent
Brainstorming and idea-generation tasks had far fewer issues, landing at roughly 25 percent.
When we looked at confidence levels, only 23 percent of marketers felt fully comfortable using AI output without review. The rest? They were either cautious or not confident at all.
Teams hit hardest by public-facing AI mistakes include:
Digital PR (33.3 percent)
Content marketing (20.8 percent)
Paid media (17.8 percent)
These are the same departments most likely to face direct brand damage when AI gets it wrong.
AI can save you time, but it also creates a lot of cleanup without checks in place. And most marketers feel the pressure to catch hallucinations before clients or customers do.
AI Hallucinations and Errors: How Do the Top LLMs Stack Up?
To figure out how often leading AI platforms hallucinate, we tested 600 prompts across six major models: ChatGPT, Claude, Gemini, Perplexity, Grok, and Copilot.
Each model received the same set of queries across five categories: marketing/SEO, general business, industry-specific use cases, consumer questions, and fact-checkable control prompts. Human reviewers graded each response for accuracy and completeness.
Here’s how they performed:
ChatGPT delivered the highest percentage of fully correct answers at 59.7 percent, with the lowest rate of serious hallucinations. Most of its mistakes were subtle, like misinterpreting the question rather than fabricating facts.
Claude was the most consistent. While it scored slightly lower on fully correct responses (55.1 percent), it had the lowest overall error rate at just 6.2 percent. When it missed, it usually left something out rather than getting it wrong.
Gemini performed well on simple prompts (51.3 percent fully correct) but tended to skip over complex or multi-step answers. Its most common error was omission.
Perplexity showed strength in fast-moving fields like crypto and AI, thanks to its strong real-time retrieval features. But that speed came with risk: 12.2 percent of responses were incorrect, often due to misclassifications or minor fabrications.
Copilot sat in the middle of the pack. It gave safe, brief answers that work for overviews but often missed deeper context.
Grok struggled across the board. It had the highest error rate at 21.8 percent and the lowest percentage of fully correct answers (39.6 percent). Hallucinations, contradictions, and vague outputs were common.
So, what does this mean for marketers?
Well, most teams aren’t expecting perfection. According to our survey, 77.7 percent of marketers will accept some level of AI inaccuracy, likely because the speed and efficiency gains still outweigh the cleanup.
The takeaway isn’t that one model is flawless. It’s that every tool has its strengths and weaknesses. Knowing each platform’s tendencies helps you know when (and how) to pull a human into the loop and what to be on guard against.
What Question Types Gave LLMs The Most Trouble
Some questions are harder for AI to handle than others. In our testing, three prompt types consistently tripped up all the models, regardless of how accurate they were overall:
Multi-part prompts: When asked to explain a concept and give an example, many tools did only half the job. They either defined the term or gave an example, but not both. This was a common source of partial answers and context gaps.
Recently updated or real-time topics: If the ask was about something that changed in the last few months (like a Google algorithm update or an AI model release), responses were often inaccurate or completely fabricated. Some tools made confident claims using outdated info that sounded fresh.
Niche or domain-specific questions: Verticals like crypto, legal, SaaS, or even SEO created problems for most LLMs. In these cases, tools either made up terminology or gave vague responses that missed key industry context.
Even models like Claude and ChatGPT, which scored relatively high for accuracy, showed cracks when asked to handle layered prompts that required nuance or specialized knowledge.
Knowing which types of prompts increase the risk of hallucination is the first step in writing better ones and catching issues before they cost you.
AI Hallucination Tells to Look Out For
AI hallucinations don’t always scream “wrong.” In fact, the most dangerous ones sound reasonable (at least until you check the details). Here are the red flags that showed up most often across the models we tested:
No source, or a broken one: If an AI gives you a link, check it. A lot of hallucinated answers include made-up or outdated citations that don’t exist when you click.
Answers to the wrong questions: Some models misinterpret the prompt and go off in a related (but incorrect) direction. If the response feels slightly off topic, dig deeper.
Big claims with no specifics: Watch for sweeping statements without specific stats or dates. That’s often a sign it’s filling in blanks with plausible-sounding fluff.
Stats with no attribution: Hallucinated numbers are a common issue. If the stat sounds surprising or overly convenient, verify it with a trusted source.
Contradictions inside the same answer: We experienced cases where an AI said one thing in the first paragraph and contradicted itself by the end. That’s a major warning sign.
“Real” examples that don’t exist: Some hallucinations involve fake product names, companies, case studies, or legal precedents. These details feel legit, but a quick search turns up nothing to verify them.
The more complex your prompt, the more important it is to sanity-check the output. If something feels even slightly off, assume it’s worth a second look. After all, subtle hallucinations are the ones most likely to slip through the cracks.
Best Practices for Avoiding AI Hallucinations and Errors
You can’t eliminate AI hallucinations completely, but you can make it a lot less likely they slip through. Here’s how to stay ahead of the risk:
Always request and verify sources: Some models will confidently provide links that look legit but don’t exist. Others reference real studies or stats but take them out of context. Before you copy and paste, click through (see the link-check sketch after this list). This matters even more for AI SEO work, where accuracy and citation quality directly affect rankings and trust.
Fine-tune your prompts: Vague prompts are hallucination magnets, so be clear about what you want the model to reference or avoid. That might mean building prompt template libraries or using follow-up prompts to guide models more effectively. That’s exactly what LLM optimization (LLMO) focuses on.
Assign a dedicated fact-checker: Our survey results showed this to be one of the most effective internal safeguards. Human review might take more time, but it’s how you keep hallucinated claims from damaging trust or a brand’s credibility.
Set clear internal guidelines: Many teams now treat AI like a junior content assistant: It can draft, synthesize, and suggest, but humans own the final version. That means reviewing and fact-checking outputs and correcting anything that doesn’t hold up. This approach lines up with the data. Nearly half (48.3 percent) of marketers support industry-wide standards for responsible AI use.
Add a final review layer every time: Even fast-moving brands are building in one more layer of review for AI-assisted work. In fact, the most common adjustment marketers reported making was adding a new round of content review to catch AI errors. That said, 23 percent of respondents reported skipping human review if they trust the tool enough. That’s a risky move.
Don’t blindly trust brand-safe output: AI can sound polished even when it’s wrong. In our LLM testing, some of the most confidently written outputs were factually incorrect or missing key context. If it feels too clean, double-check it.
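To make the first practice above concrete, here's a minimal Python sketch of a citation link check. The URL list is hypothetical, and a working link is necessary but not sufficient, so treat this as a first-pass filter before human review.

```python
import requests

# Hypothetical list of citation URLs pulled from an AI draft.
cited_urls = [
    "https://example.com/real-study",
    "https://example.com/made-up-citation",
]

for url in cited_urls:
    try:
        # HEAD keeps the request light; some servers reject HEAD,
        # so fall back to GET on an error status.
        resp = requests.head(url, allow_redirects=True, timeout=10)
        if resp.status_code >= 400:
            resp = requests.get(url, timeout=10)
        status = "OK" if resp.status_code < 400 else f"BROKEN ({resp.status_code})"
    except requests.RequestException as exc:
        status = f"UNREACHABLE ({type(exc).__name__})"
    print(f"{status}: {url}")

# A 200 response only proves the page exists; a human still has to
# confirm it actually supports the claim.
```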
FAQs
What are AI hallucinations?
AI hallucinations occur when an AI tool gives you an answer that sounds accurate, but it’s not. These mistakes can include made-up facts, fake citations, or outdated info packaged in confident language.
Why does AI hallucinate?
AI models don’t “know” facts. They generate responses based on patterns in the data they were trained on. When there’s a gap or ambiguity, the model fills it in with what sounds most likely (even if it’s completely wrong).
What causes AI hallucinations?
Hallucinations usually happen when prompts are vague, complex, or involve topics the model hasn’t seen enough data on. They’re also more common in fast-changing fields like SEO and crypto.
Can you stop AI from hallucinating?
Not entirely. Even the best models make things up sometimes. That’s because LLMs are built to generate language, not verify facts. Occasional hallucinations are baked into how they work.
How can you reduce AI hallucinations?
Use more specific prompts, request citation sources, and always double-check the output for accuracy. Add a human review step before anything goes live. The more structure and context you give the AI, the fewer hallucinations you’ll run into.
Conclusion
AI is powerful, but it’s not perfect.
Our research shows that hallucinations happen regularly, even with the best tools. From made-up stats to misinterpreted prompts, the risks are real. That’s especially the case for fast-moving marketers.
If you’re using AI to create content or guide strategy, knowing where these tools fall short is like a cheat code.
The best defense? Smarter prompts, tighter reviews, and clear internal guidelines that treat AI as a co-pilot (not the driver).
Want help building a more reliable AI workflow? Talk to our team at NP Digital if you’re ready to scale content without compromising accuracy. Also, you can check out the full report here on the NP Digital website.
Microsoft Advertising today launched the Publisher Content Marketplace (PCM), a system that lets publishers license premium content to AI products and get paid based on how that content is used.
How it works. PCM creates a direct value exchange. Publishers set licensing and usage terms, while AI builders discover and license content for specific grounding scenarios. The marketplace also includes usage-based reporting, giving publishers visibility into how their content performs and where it creates the most value.
Designed to scale. PCM is designed to avoid one-off licensing deals between individual publishers and AI providers. Participation is voluntary, ownership remains with publishers, and editorial independence stays intact. The marketplace supports everyone from global publishers to smaller, specialized outlets.
Why we care. As AI systems shift from answering questions to making decisions, content quality matters more than ever. As agents increasingly guide purchases, finance, and healthcare choices, ads and sponsored messages will sit alongside — or draw from — premium content rather than generic web signals. That raises the bar for credibility and points to a future where brand alignment with trusted publishers and AI ecosystems directly impacts performance.
Early traction. Microsoft Advertising co-designed PCM with major U.S. publishers, including Business Insider, Condé Nast, Hearst, The Associated Press, USA TODAY, and Vox Media. Early pilots grounded Microsoft Copilot responses in licensed content, with Yahoo among the first demand partners now onboarding.
What’s next. Microsoft plans to expand the pilot to more publishers and AI builders that share a core belief: as the AI web evolves, high-quality content should be respected, governed, and paid for.
The big picture. In an agentic web, AI tools increasingly summarize, reason, and recommend through conversation. Whether the topic is medical safety, financial eligibility, or a major purchase, outcomes depend on access to trusted, authoritative sources — many of which sit behind paywalls or in proprietary archives.
The tension. The traditional web bargain was simple: publishers shared content, and platforms sent traffic back. That model breaks down when AI delivers answers directly, cutting clicks while still depending on premium content to perform well.
Bottom line. If AI is going to make better decisions, it needs better inputs — and PCM is Microsoft’s bet that a sustainable content economy can power the next phase of the agentic web.
Vibe coding is a new way to create software using AI tools such as ChatGPT, Cursor, Replit, and Gemini. It works by describing what you want to the tool in plain language and receiving code in return. You can then simply paste the code into an environment (such as Google Colab), run it, and test the results, all without ever writing a single line of code yourself.
In this guide, you’ll understand how to start vibe coding, learn its limitations and risks, and see examples of great tools created by SEOs to inspire you to vibe code your own projects.
Vibe coding variations
While “vibe coding” is used as an umbrella term, there are several subsets of AI-supported coding, including the following:

AI-assisted coding: AI helps write, refactor, explain, or debug code. Used by actual developers or engineers to support their complex work. Tools: GitHub Copilot, Cursor, Claude, Google AI Studio.

Vibe coding: Platforms that handle everything except the prompt/idea; AI does most of the work. Tools: ChatGPT, Replit, Gemini, Google AI Studio.

No-code platforms: Platforms that handle everything you ask (“drag and drop” visual updates while the code happens in the background). They tend to use AI but existed long before AI became mainstream. Tools: Notion, Zapier, Wix.
We’ll focus exclusively on vibe coding in this guide.
With vibe coding, while there’s a bit of manual work to be done, the barrier is still low — you basically need a ChatGPT account (free or paid) and access to a Google account (free). Depending on your use case, you might also need access to APIs or SEO tools subscriptions such as Semrush or Screaming Frog.
To set expectations: by the end of this guide, you’ll know how to run a small program in the cloud. If you plan to build a SaaS product or software to sell, AI-assisted coding is the more suitable route, though it involves costs and deeper coding knowledge.
Vibe coding use cases
Vibe coding is great when you’re trying to find outcomes for specific buckets of data, such as finding related links, adding pre-selected tags to articles, or doing something fun where the outcome doesn’t need to be exact.
For example, I’ve built an app to create a daily drawing for my daughter. I type a phrase about something that she told me about her day (e.g., “I had carrot cake at daycare”). The app has some examples of drawing styles I like and some pictures of her. The outputs (drawings) are the final work as they come from AI.
When I ask for specific changes, however, the program tends to worsen and redraw things I didn’t ask for. I once asked to remove a mustache and it recolored the image instead.
If my daughter were a client who’d scrutinize the output and require very specific changes, I’d need someone who knows Photoshop or similar tools to make specific improvements. In this case, though, the results are good enough.
Building commercial applications solely on vibe coding may require a company to hire vibe coding cleaners. However, for a demo, MVP (minimum viable product), or internal applications, vibe coding can be a useful, effective shortcut.
How to create your SEO tools with vibe coding
Using vibe coding to create your own SEO tools requires three steps:
Write a prompt describing your code
Paste the code into a tool such as Google Colab
Run the code and analyze the results
Here’s a prompt example for a tool I built to map related links at scale. After crawling a website using Screaming Frog and extracting vector embeddings (using the crawler’s integration with OpenAI), I vibe coded a tool that would compare the topical distance between the vectors in each URL.
This is exactly what I wrote on ChatGPT:
I need a Google Colab code that will use OpenAI to:
Check the vector embeddings existing in column C. Use cosine similarity to match with two suggestions from each locale (locale identified in Column A).
The goal is to find which pages from each locale are the most similar to each other, so we can add hreflang between these pages.
I’ll upload a CSV with these columns and expect a CSV in return with the answers.
Then I pasted the code ChatGPT generated into Google Colab, a free Jupyter Notebook environment that lets users write and execute Python code in a web browser. Be sure to run your program by clicking “Run all” in Google Colab to test whether the output does what you expected.
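For context, here's a minimal sketch of the kind of script ChatGPT returns for a prompt like this. The file name and column headers (locale, url, embedding) are assumptions, and it expects each embedding cell to be a JSON-style list of floats; adjust the parsing to match your Screaming Frog export.

```python
import json

import numpy as np
import pandas as pd

# Assumed CSV layout: columns named locale, url, embedding, where each
# embedding cell is a JSON-style list like "[0.01, -0.02, ...]".
df = pd.read_csv("crawl_with_embeddings.csv")
df["vector"] = df["embedding"].apply(lambda s: np.array(json.loads(s)))

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rows = []
for _, page in df.iterrows():
    # Compare this page against every page in each *other* locale.
    for locale, group in df[df["locale"] != page["locale"]].groupby("locale"):
        scored = sorted(
            ((cosine(page["vector"], cand["vector"]), cand["url"])
             for _, cand in group.iterrows()),
            reverse=True,
        )[:2]  # keep the two closest candidates per locale
        for score, url in scored:
            rows.append({"source": page["url"], "target_locale": locale,
                         "suggested_match": url, "similarity": round(score, 3)})

pd.DataFrame(rows).to_csv("hreflang_suggestions.csv", index=False)
```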
This is how the process works on paper. Like everything in AI, it may look perfect, but it’s not always functioning exactly how you want it.
You’ll likely encounter issues along the way — luckily, they’re simple to troubleshoot.
First, be explicit about the platform you’re using in your prompt. If it’s Google Colab, say the code is for Google Colab.
You might still end up with code that requires packages that aren’t installed. In this case, just paste the error into ChatGPT and it’ll likely regenerate the code or find an alternative. You don’t even need to know what the package is, just show the error and use the new code. Alternatively, you can ask Gemini directly in your Google Colab to fix the issue and update your code directly.
AI tends to be very confident about anything and could return completely made-up outputs. One time I forgot to say the source data would come from a CSV file, so it simply created fake URLs, traffic, and graphs. Always check and recheck the output because “it looks good” can sometimes be wrong.
If you’re connecting to an API, especially a paid API (e.g., from Semrush, OpenAI, Google Cloud, or other tools), you’ll need to request your own API key and keep in mind usage costs.
Should you want an even lower execution barrier than Google Colab, you can try using Replit.
Simply prompt your request, and the software will create the code and design and let you test everything on the same screen. This means a lower chance of coding errors, no copy and paste, and a URL you can share right away with anyone to see your project built with a nice design. (You should still check for poor outputs and iterate with prompts until your final app is built.)
Keep in mind that while Google Colab is free (you’ll only spend if you use API keys), Replit charges a monthly subscription and per-usage fee on APIs. So the more you use an app, the more expensive it gets.
Inspiring examples of SEO vibe-coded tools
While Google Colab is the most basic (and easy) way to vibe code a small program, some SEOs are taking vibe coding even further by creating programs that are turned into Chrome extensions, Google Sheets automation, and even browser games.
The goal behind highlighting these tools is not only to showcase great work by the community, but also to inspire, build, and adapt to your specific needs. Do you wish any of these tools had different features? Perhaps you can build them for yourself — or for the world.
GBP Reviews Sentiment Analyzer (Celeste Gonzalez)
After vibe coding some SEO tools on Google Colab, Celeste Gonzalez, Director of SEO Testing at RicketyRoo Inc, took her vibing skills a step further and created a Chrome extension. “I realized that I don’t need to build something big, just something useful,” she explained.
Her browser extension, the GBP Reviews Sentiment Analyzer, summarizes sentiment analysis for reviews over the last 30 days and review velocity. It also allows the information to be exported into a CSV. The extension works on Google Maps and Google Business Profile pages.
Instead of ChatGPT, Celeste used a combination of Claude (to create high-quality prompts) and Cursor (to paste the created prompts and generate the code).
AI tools used: Claude (Sonnet 4.5 model) and Cursor
APIs used: Google Business Profile API (free)
Platform hosting: Chrome Extension
Knowledge Panel Tracker (Gus Pelogia)
I became obsessed with the Knowledge Graph in 2022, when I learned how to create and manage my own knowledge panel. Since then, I found out that Google has a Knowledge Graph Search API that allows you to check the confidence score for any entity.
This vibe-coded tool checks the score for your entities daily (or at any frequency you want) and returns it in a sheet. You can track multiple entities at once and just add new ones to the list at any time.
The Knowledge Panel Tracker runs completely on Google Sheets, and the Knowledge Graph Search API is free to use. This guide shows how to create and run it in your own Google account, or you can see the spreadsheet here and just update the API key under Extensions > App Scripts.
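The tracker itself runs in Apps Script inside Google Sheets, but the underlying call is simple. Here's a minimal Python sketch against the Knowledge Graph Search API, assuming you've generated a free API key in the Google Cloud console; the entity names are just examples.

```python
import requests

API_KEY = "YOUR_KG_API_KEY"  # free key from the Google Cloud console

def kg_confidence(entity: str) -> float | None:
    """Return the top resultScore for an entity, or None if not found."""
    resp = requests.get(
        "https://kgsearch.googleapis.com/v1/entities:search",
        params={"query": entity, "key": API_KEY, "limit": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json().get("itemListElement", [])
    return items[0]["resultScore"] if items else None

# Example entities; swap in the ones you track.
for entity in ["Semrush", "Screaming Frog"]:
    print(entity, kg_confidence(entity))
```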
AI models used: ChatGPT 5.1
APIs used: Google Knowledge Graph API (free)
Platform hosting: Google Sheets
Inbox Hero Game (Vince Nero)
How about vibe coding a link building asset? That’s what Vince Nero from BuzzStream did when creating the Inbox Hero Game. It requires you to use your keyboard to accept or reject a pitch within seconds. The game is over if you accept too many bad pitches.
Inbox Hero Game is certainly more complex than running a piece of code on Google Colab, and it took Vince about 20 hours to build it all from scratch. “I learned you have to build things in pieces. Design the guy first, then the backgrounds, then one aspect of the game mechanics, etc.,” he said.
The game was coded in HTML, CSS, and JavaScript. “I uploaded the files to GitHub to make it work. ChatGPT walked me through everything,” Vince explained.
According to him, the longer a chat continued, the less effective ChatGPT became, “to the point where [he’d] have to restart in a new chat.”
This was one of the hardest and most frustrating parts of creating the game. Vince would add a new feature (e.g., a score), and ChatGPT would “guarantee” it had found the error and update the file, only to return the same error.
In the end, Inbox Hero Game is a fun game that proves it’s possible to create a simple game without coding knowledge, though perfecting it would be more feasible with a developer’s help.
AI models used: ChatGPT
APIs used: None
Platform hosting: Webpage
Vibe coding with intent
Vibe coding won’t replace developers, and it shouldn’t. But as these examples show, it can responsibly unlock new ways for SEOs to prototype ideas, automate repetitive tasks, and explore creative experiments without heavy technical lift.
The key is realism: Use vibe coding where precision isn’t mission-critical, validate outputs carefully, and understand when a project has outgrown “good enough” and needs additional resources and human intervention.
When approached thoughtfully, vibe coding becomes less about shipping perfect software and more about expanding what’s possible — faster testing, sharper insights, and more room for experimentation. Whether you’re building an internal tool, a proof of concept, or a fun SEO side project, the best results come from pairing curiosity with restraint.
AI-powered search gutted LinkedIn’s B2B awareness traffic. Across a subset of topics, non-brand organic visits fell by as much as 60% even while rankings stayed stable, the company said.
LinkedIn is moving past the old “search, click, website” model and adopting a new framework: “Be seen, be mentioned, be considered, be chosen.”
By the numbers. In a new article, LinkedIn said its B2B organic growth team started researching Google’s Search Generative Experience (SGE) in early 2024. By early 2025, when SGE evolved into AI Overviews, the impact became significant.
Non-brand, awareness-driven traffic declined by up to 60% across a subset of B2B topics.
Rankings stayed stable, but click-through rates fell (by an undisclosed amount).
Yes, but. LinkedIn’s “new learnings” are more like a rehash of established SEO/AEO best practices. Here’s what LinkedIn’s content-level guidance consists of:
Use strong headings and a clear information hierarchy.
Improve semantic structure and content accessibility.
Publish authoritative, fresh content written by experts.
Move fast, because early movers get an edge.
Why we care. These tactics should all sound familiar. These are technical SEO and content-quality fundamentals. LinkedIn’s article offers little new in terms of tactics. It’s just updated packaging for modern SEO/AEO and AI visibility.
Measurement is broken. LinkedIn said its big challenge is the “dark” funnel. It can’t quantify how visibility in LLM answers impacts the bottom line, especially when discovery happens without a click.
LinkedIn also said its B2B marketing websites saw triple-digit growth in LLM-driven traffic and that it can track conversions from those visits.
Yes, but: Many websites are also seeing triple-digit (or more) growth in LLM-driven traffic because it’s an emerging channel. That said, this is still a tiny share of overall traffic right now (1% or less for most sites).
What LinkedIn is doing. LinkedIn created an AI Search Taskforce spanning SEO, PR, editorial, product marketing, product, paid media, social, and brand. Key actions included:
Correcting misinformation that showed up in AI responses.
Publishing new owned content optimized for generative visibility.
Testing LinkedIn (social) content to validate its strength in AI discovery.
Is it working? LinkedIn said early tests produced a meaningful lift in visibility and citations, especially from owned content. At least one external datapoint (Semrush, Nov. 10, 2025) suggested that LinkedIn has a structural advantage in AI search:
Google AI Mode cited LinkedIn in roughly 15% of responses.
LinkedIn was the #2 most-cited domain in that dataset, behind YouTube.
Incomplete story. LinkedIn’s article is an interesting read, but it’s light on specifics. Missing details include:
The exact topic set behind the “up to 60%” decline.
Exactly how much click-through rates “softened.”
Sample size and timeframe.
How “industry-wide” comparisons were calculated.
What tests were run, what moved citation share, and by how much.
Bottom line. LinkedIn is right that visibility is the new currency. However, it hasn’t shown enough detail to prove its new playbook is meaningfully different from doing some SEO (yes, SEO) fundamentals.