Reddit introduces collection ads, deal overlays, Shopify integration

Reddit is rolling out new Dynamic Product Ad features, including a shoppable Collection Ads format and Shopify integration, the company announced today.

What’s new.

  • Collection Ads: A new Dynamic Product Ad format that pairs a lifestyle hero image with shoppable product tiles in one carousel, bridging discovery and purchase. Early adopters following best practices are seeing an 8% ROAS lift.
  • Community and Deal overlays: Reddit-native labels like “Redditors’ Top Pick” and automatic discount callouts surface social proof and pricing signals without extra work from you.
  • Shopify integration: Now in alpha, this simplifies catalog and pixel setup for new DPA advertisers, automatically matching products to the right users and context.

The numbers. Reddit DPA delivered an average 91% higher ROAS year over year in Q4 2025. Liquid I.V. reports DPA already accounts for 33% of its total platform revenue and outperforms its other conversion campaigns by 40%.

Why now. Reddit has seen a 40% year-over-year increase in shopping conversations. Also, 84% of shoppers say they feel more confident in purchases after researching products on Reddit.

Why we care. The new tools, especially the Shopify integration, lower the barrier to getting started with Dynamic Product Ads. Reddit is still an undervalued paid media channel in many accounts, and that's exactly the opportunity: get in before competition and costs rise.

Bottom line. Reddit is increasingly a serious performance channel for ecommerce, and these tools make it easier to get started. If you’re not yet running DPA on Reddit, the combination of undervalued inventory and improving ad formats makes this a good time to test.

Reddit’s announcement. Introducing More Ways to Tap into Shopping on Reddit

AI citations favor listicles, articles, product pages: Study

AI search citations favor a small set of formats. Listicles, articles, and product pages drive over half of all mentions across major LLMs, according to new Wix Studio AI Search Lab research analyzing 75,000 AI answers and more than 1 million citations across ChatGPT, Google AI Mode, and Perplexity.

The findings. Listicles led at 21.9% of citations, followed by articles (16.7%) and product pages (13.7%). Together, these three formats made up 52% of all AI citations.

  • Articles dominated informational queries, cited 2.7x more than other formats.
  • Listicles captured 40% of commercial-intent citations, nearly double any other type.

Why intent wins. Query intent — not industry or model — most strongly predicts which content gets cited. This pattern held across industries, from SaaS to health.

  • Informational queries skewed heavily toward articles (45.5%) and listicles (21.7%).
  • Commercial queries were led by listicles (40.9%).
  • Transactional and navigational queries favored product and category pages (around 40% combined).

Why we care. This research suggests mapping content types to user goals rather than simply producing more content. Articles educate, listicles drive comparison, and product pages convert. Aligning content format with user intent could help you capture more AI citations and increase visibility.

Not all listicles perform equally. Third-party listicles accounted for 80.9% of citations in professional services, versus 19.1% for self-promotional lists, suggesting LLMs prefer neutral, editorial comparisons over brand-led rankings.

Model differences. All models favored listicles, but diverged after that.

  • ChatGPT leaned heavily into articles and informational content.
  • Google AI Mode showed the most balanced distribution.
  • Perplexity stood out, with 17% of citations coming from discussions like Reddit and forums.

Industry patterns. Content preferences shifted slightly by vertical:

  • SaaS and professional services over-indexed on listicles.
  • Health favored authoritative articles.
  • Ecommerce spread citations across listicles, articles, and category pages.
  • Home repair showed the most even distribution across formats.

The research. The content types most cited by LLMs

Google is tightening political content rules for Shopping ads starting April 16

A quiet but important policy update is coming to Google Shopping ads next month, requiring some merchants to verify their accounts before running ads featuring political content.

What’s changing. From April 16, merchants running Shopping ads with certain political content in nine countries will need to verify their Google Ads account as an election advertiser. Google will also outright prohibit some political Shopping ads in India.

The countries affected. Argentina, Australia, Chile, Israel, Mexico, New Zealand, South Africa, the United Kingdom, and the United States.

Why we care. Shopping ads aren’t typically associated with political advertising — this update signals that Google is broadening its election integrity efforts beyond search and display into commerce formats. Merchants selling politically themed merchandise, campaign materials, or other related products in the affected countries need to act before the April 16 deadline.

What to do now.

  • Review the updated policy language to determine if your Shopping ads feature content that falls under the new restrictions
  • If affected, apply for election advertiser verification through Google Ads before April 16 to avoid disruption to your campaigns

The bottom line. This affects a narrow but specific set of merchants — but the consequences of missing the deadline could mean ads being disapproved or accounts being flagged. If you sell anything with a political angle in the listed countries, check your eligibility now.

ChatGPT citations favor a small group of domains: Study

AI citations in ChatGPT are far more concentrated than citation distributions in traditional search. Roughly 30 domains capture 67% of citations within a topic.

  • That’s according to Kevin Indig’s latest study, which also found that broad topical coverage, long-form pages, and cluster-based models outperform the old “one keyword, one page” approach.

The details. Citation visibility wasn’t evenly distributed. In product comparison topics, the top 10 domains accounted for 46% of citations; the top 30, 67%.

  • AI visibility was slightly less concentrated than classic organic search, but still highly centralized.
  • Indig’s conclusion: you’re effectively shut out unless you build enough authority to win one of a limited number of citation “seats.”

What changed. Ranking No. 1 in Google still matters, but it’s not enough. Of pages ranking No. 1, 43.2% were cited by ChatGPT — 3.5x more often than pages beyond the top 20.

  • ChatGPT retrieved far more pages than it cited. AirOps found that it retrieved ~6x as many pages as it cited, and 85% of the retrieved pages were never cited.
  • A third of the cited pages came from fan-out queries, and 95% of those had zero search volume.

Why we care. Publishing the “best answer” for one keyword isn’t enough. ChatGPT rewards domains that cover a topic from multiple angles, not pages optimized for isolated terms. And discovery often happens outside the keyword universe you track.

The patterns. Longer pages generally earned more citations, with variation by vertical. The biggest lift appeared between 5,000 and 10,000 characters. Pages above 20,000 characters averaged 10.18 citations vs. 2.39 for pages under 500.

  • This pattern broke in Finance, where shorter, denser pages often outperformed long guides. In Education, Crypto, and Product Analytics, longer pages continued to gain citation value with little drop-off.
  • 58% of cited URLs were cited only once. Pages that recurred across prompts were usually category roundups, comparison pages, or broad guides answering multiple related questions.

On-page behavior. ChatGPT cited heavily from the upper part of a page. The section 10% to 20% of the way down performed best across all industries.

  • The bottom 10% earned just 2.4% to 4.4% of citations. Conclusions were largely ignored.
  • Finance had the steepest ramp, with 43.7% of citations in the first 30%.
  • Healthcare and HR Tech were flatter.
  • Education peaked later, around 30% to 40%.

About the data. Indig analyzed ~98,000 citation rows from ~1.2 million ChatGPT responses (Gauge), isolating seven verticals. The study used structural page parsing, positional mapping, and entity and sentiment analysis to identify which pages earned citations and where on those pages the cited passages came from.

The study. The science of how AI picks its sources

Google is testing AI-generated animated video clips inside PMax

A new creative feature has been spotted inside Google Ads Performance Max campaigns — and it could change how advertisers without video budgets approach animated display advertising.

What was found. Nikki Kuhlman, vice president of search at JumpFly, spotted an option to generate animated video clips directly within PMax asset groups, using AI to enhance and animate a single source image.

How it works.

  • Upload a source image — a logo, a product shot, a property photo
  • AI generates several “enhanced” versions of that image
  • Each enhanced image produces two animated clips
  • Select up to five animated clips per asset group
  • Note: faces cannot be used in source images, though AI may generate people in enhanced versions

Early results from testing. A logo generated a spinning animation of the image element. A house with a sold sign produced a slow cinematic pan. Simple inputs, but the output quality appears usable for display advertising without any video production required.

Where the ads appear. Google hasn’t provided in-product documentation on placement, but early testing shows animated clips surfacing in Display ad previews when added to an asset group.

Why we care. Video assets continue to be a strong creative option in paid media — but producing video has always required time, budget, and resources many advertisers don't have. This feature effectively removes that barrier, turning a single product photo or logo into animated display creative in seconds, at no additional production cost.

For advertisers who’ve been running PMax on static images alone, this could be a meaningful and easy win.

The bottom line. This feature is still unconfirmed by Google, but advertisers running PMax should check their asset groups now. If it’s available in your account, it’s worth testing — especially for campaigns that have been running on static images alone.

First seen. Kuhlman shared spotting this new feature on LinkedIn.

SEO’s biggest threat in 2026? Your own organization

AI tools and visibility have dominated the SEO conversation in the past two years. But while discussions focus on these new technologies, most of the biggest SEO risks in 2026 will come from somewhere else: within your own organization.

Fragmented data, unclear ownership, outdated KPIs, and weak collaboration can quietly destroy even the best strategies. As SEO expands beyond the website and into AI-driven discovery, the role of the SEO team is becoming broader, more influential, and, paradoxically, harder to define.

Here are some of the risks your team should start thinking about now.

Relying too much on AI for everything

Many SEO teams now rely on AI for everything, from generating briefs to analyzing data. That’s often necessary. You can’t spend hours creating a brief when AI can produce something usable in minutes. But that’s also where the risk starts.

AI can generate content quickly, but “acceptable” won’t differentiate you. You still need a clear point of view — what story you’re telling and what unique angle you bring. Without that, your content becomes generic, predictable, and indistinguishable from competitors using the same tools.

The issue is simple: if you ask similar tools similar questions, you’ll get similar answers. And your competitors have access to the same tools.

Some companies try to stand out by training models on proprietary data. In reality, few teams do this at scale. Most prioritize speed over quality.

There’s also risk in using AI for analysis without understanding the data behind it. AI is fast, but it can misinterpret or hallucinate results.

I’ve seen this firsthand. An AI tool hallucinated part of a calculation during an urgent analysis, making every insight that followed incorrect. It only acknowledged the mistake after it was explicitly pointed out.

More broadly, AI excels at identifying patterns. But in SEO, competitive advantage rarely comes from following patterns. The most effective strategies don’t just mirror what everyone else is doing. Sometimes the best opportunity isn’t the obvious one.

AI is reshaping how SEO work gets done, how impact is measured, and whether it can be measured at all.

Dig deeper: Why most SEO failures are organizational, not technical

Fragmented data and limited visibility

For years, SEO professionals have worked with incomplete datasets. We’ve never had a full view of the user journey. That’s one reason organic impact has often been underestimated. In the past, though, we could still piece together a reasonably clear picture — from ranking to click to conversion.

Today, that picture is far more fragmented. AI tools have changed how people research and discover products. Users now start in AI assistants – asking questions, comparing options, and building shortlists before ever visiting a website. By the time they land on your page, part of the decision-making process is already done.

The problem is we have zero visibility into that journey. If a user discovers your brand through an AI-generated answer, adds you to a shortlist, then later searches for you directly, the signals that influenced that decision are invisible. We only see the final step.

Microsoft Bing has introduced basic reporting for AI searches, but it’s limited. We still can’t see the prompts behind specific page visibility.

At the same time, SEO teams are still expected to prove impact. Some companies are adding questions to lead forms to understand how users discovered them. In theory, this adds signal. In practice, it depends on accurate self-reporting. I know how I fill out forms, so I question how reliable that data really is. Still, it’s a start.

Setting the wrong KPIs

Fragmented data creates another risk: focusing on the wrong KPIs. Stakeholders still ask about traffic. No matter how often SEO teams explain that its role has changed, traffic remains a default measure of success. For years, organic growth meant more sessions, users, and visits. That mindset hasn’t fully shifted.

At the same time, stakeholders are drawn to newer metrics — AI visibility, citations, and mentions. These aren’t inherently wrong, but they need to be used carefully.

Most tools measure AI visibility using a predefined set of queries. That’s where risk creeps in. Teams can become too focused on improving visibility scores, even if it means optimizing for prompts that look good in reports rather than those that matter to the business.

For example, appearing for “What is XYZ software?” isn’t the same as showing up for “Which XYZ software is best?” The first may drive visibility, but the second is much closer to a purchase decision.

To avoid this, visibility metrics need to be tied to business outcomes — a real challenge given the fragmented data problem.

Tracking AI visibility also opens another rabbit hole: debates over which prompts to track, how many to include, and why. This can quickly overcomplicate measurement, especially if teams lose sight of the goal. The objective isn’t to track every phrasing, but to understand the intent behind it. Trying to capture every variation is impossible.

Dig deeper: Why governance maturity is a competitive advantage for SEO

Owning more than you can actually own

SEO teams are expected to own AI visibility strategy much like they owned SEO strategy. But owning the strategy is often conflated with owning the execution.

Even in the past, SEO was never fully independent. It relied on other teams — engineering to implement changes and content to create pages. The difference is that most of this work used to happen on the company’s own website.

That’s no longer true. Visibility in AI answers requires presence beyond your domain — Reddit threads, YouTube videos, and media mentions all play a role.

This significantly expands the scope of work. At the same time, many of these surfaces don’t have clear owners inside organizations. Even when they do, there’s a tendency to assume that if SEO owns the strategy, it should also own execution or at least be accountable for outcomes.

The opposite happens, too. If other teams own execution, they may take ownership of the entire strategy. In reality, neither model works well.

SEO teams can’t manage every platform that influences AI visibility. They don’t have the expertise to produce YouTube content or run PR campaigns. Their strength is knowing what works and helping optimize it. For example, advising on how a video should be structured to perform on YouTube.

Owning strategy also doesn’t mean deciding who owns execution. That’s a leadership responsibility. It requires visibility across teams and the authority to assign ownership. Otherwise, one team is left deciding how its peers should operate.

Lack of cross-team collaboration

Even when companies recognize the importance of AI visibility, cross-team collaboration remains a challenge.

Roles and processes are often unclear. SEO teams may expect others to execute, while those teams assume it’s SEO’s responsibility. In other cases, teams don’t prioritize AI visibility because their KPIs focus elsewhere.

This is where leadership alignment becomes critical. If AI visibility is truly a strategic priority, it needs to be reflected in goals and KPIs across all relevant teams. When AI-related KPIs sit only with SEO, it creates an imbalance: one team is accountable for outcomes, while execution depends on many others.

Many teams are also unsure how to work with SEO. Some don’t involve SEO early enough. Others choose not to follow recommendations because they don’t agree with them.

SEO teams share responsibility here, too. They need to actively onboard other teams and clearly connect SEO efforts to broader business goals. It’s our job to show that lack of visibility means lost revenue.

I’ve seen cases where teams critical to AI visibility hadn’t even read the strategy document. In these situations, the issue isn’t one-sided. Teams need to understand what’s expected of them, and SEO needs to push for alignment and involve stakeholders early. Simply moving forward without that alignment doesn’t work.

SEO teams also don’t always explain the “why.” AI visibility can end up treated as a standalone SEO metric rather than a business driver. Even when there’s agreement on its importance, a lack of clear processes, shared goals, and training keeps collaboration inconsistent.

Dig deeper: Why 2026 is the year the SEO silo breaks and cross-channel execution starts

Too much strategy, not enough doing

With rapid changes in search, SEO teams often spend more time on theory — reading, analyzing, building frameworks, and refining strategies — instead of making changes to the website.

That doesn’t mean teams should stop learning. Quite the opposite. But strategy without execution quickly loses value. In many organizations, SEO teams are expected to produce in-depth strategy documents meant to align teams and define priorities. In reality, many go unread outside the SEO team. They require significant effort but deliver little impact.

Part of the problem is that strategies are often too theoretical. They explain the why but miss the what. The value of a strategy isn’t the document, but the actions that follow. Other teams need to understand what to do and how to contribute.

AI is also accelerating how quickly search evolves. Waiting months to test ideas no longer works. A more practical approach is to understand the direction, implement changes, observe results, and iterate. Smaller experiments often lead to faster learning.

When SEO succeeds, SEO disappears

SEO has always been a consulting function. Success depends on collaboration with teams like engineering, content, and product. Today, that dynamic is more visible than ever. In many cases, SEO teams don’t execute directly. Their role is to enable others.

In mature organizations, this works well. Collaboration is strong, and credit is shared. SEO’s consulting role is recognized without forcing the team to own areas outside its expertise. In less mature environments, it can lead to SEO being undervalued or seen as unnecessary.

AI adds another layer. It can generate keyword ideas, outlines, and optimization suggestions, making SEO look deceptively simple, much like writing content. AI lowers the barrier to entry, but it doesn’t replace expertise. Without that expertise, teams produce work that’s technically correct but average.

It’s a familiar pattern: copy-pasting a Screaming Frog SEO Spider error list into a task doesn’t demonstrate real understanding. This creates a paradox. The more SEO becomes a company-wide capability, the more the SEO team risks becoming invisible.

Dig deeper: SEO execution: Understanding goals, strategy, and planning

SEO is evolving, but are companies ready?

SEO teams won’t fail in 2026 because of a lack of knowledge. They’ll fail if they can’t turn that knowledge into action, influence, and business impact.

The challenge is no longer just optimizing pages. It’s building processes, partnerships, and measurement models that reflect how visibility works today.

Success also depends on leadership support. Many of the biggest risks are structural — fragmented data, unclear ownership, weak collaboration, outdated KPIs, and the gap between strategy and execution.

AI visibility expands beyond the website and into the broader organization. That doesn’t make SEO less important, but it does make it harder to define, measure, and defend.

The companies that succeed will stop treating SEO as a traffic function and start treating it as a business capability that drives visibility, discovery, and growth.

Apple is bringing ads to Apple Maps this summer

Apple is preparing to introduce sponsored listings in Apple Maps, marking a significant expansion of its advertising business beyond the App Store.

How it will work. According to Bloomberg’s Mark Gurman, the system will function similarly to Google Maps — allowing retailers and brands to bid for ad slots against search queries. Sponsored businesses will appear in Maps search results, much like sponsored apps already appear in App Store searches.

The timeline. An announcement could come as early as this month, with ads beginning to appear inside Maps this summer across iPhone, other Apple devices, and the web version.

Why Apple is doing this. Advertising is a growing and high-margin revenue stream for Apple’s services business. Maps — with its massive built-in user base across Apple devices — is a natural next step, particularly as location-based advertising continues to grow.

Why we care. Apple Maps has a massive built-in user base across iPhone and Apple devices, and users searching within Maps are expressing clear, high-intent signals — they’re actively looking for somewhere to go or something to buy. This opens up a brand new location-based advertising channel that previously didn’t exist on Apple’s platform, giving local businesses and retailers a way to reach those users at exactly the right moment.

Advertisers already running Google Maps or local search campaigns should pay close attention, as this could quickly become a significant complementary channel.

The privacy angle. True to Apple’s form, a user’s location and the ads they see and interact with in Maps are not associated with their Apple Account. Personal data stays on the user’s device, is not collected or stored by Apple, and is not shared with third parties.

How to access it. Businesses will be able to access a fully automated experience for creating ads through Apple Business in a few simple steps. Current Apple Ads advertisers and agencies will also have the option to book ads through their existing Apple Ads experience, which will offer additional customization options.

What you need to do now. When Apple Business becomes available in April, businesses will need to first claim their location on Apple Maps before ads become available this summer — so the time to get set up is now, not when the auction opens.

The bottom line. Apple Maps ads should open up a high-intent, location-based channel that hasn’t existed before on Apple’s platform. Advertisers running local or retail campaigns should claim their Maps listing now and start planning budgets for a summer launch. Early entrants in a new ad auction typically benefit from lower competition before the market matures.

Update 10:45 ET: Apple has officially confirmed that ads are coming to Apple Maps this summer, as part of a broader new platform called Apple Business launching April 14.

Bing Webmaster Tools now links AI queries to cited pages

Microsoft added query-to-page mapping to its AI Performance report in Bing Webmaster Tools, letting you connect AI grounding queries directly to cited URLs.

Why we care. The original dashboard showed queries and pages separately, limiting optimization. Now you can tie specific AI-triggering queries to the exact cited pages, so you can prioritize updates based on real AI-driven demand — not guesses.

The details. The new Grounding Query–Page Mapping feature links two existing views in the AI Performance dashboard:

  • Click a grounding query to see which pages are cited
  • Click a page to see which grounding queries drive its citations
  • Mapping is many-to-many: one query can map to multiple pages, and vice versa
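The many-to-many relationship is easiest to picture as a pair of inverted indexes built from (query, cited URL) pairs. A minimal sketch in Python — the query and URL data below are made up for illustration and are not Bing's actual export format:

```python
from collections import defaultdict

# Hypothetical grounding data: each (query, cited URL) pair, roughly as it
# might appear in an AI Performance export. Values are illustrative only.
citations = [
    ("best running shoes", "/guides/running-shoes"),
    ("best running shoes", "/reviews/top-10-shoes"),
    ("marathon training plan", "/guides/running-shoes"),
    ("marathon training plan", "/blog/marathon-plan"),
]

query_to_pages = defaultdict(set)   # grounding query -> pages it cites
page_to_queries = defaultdict(set)  # page -> grounding queries citing it

for query, page in citations:
    query_to_pages[query].add(page)
    page_to_queries[page].add(query)

# One query maps to multiple pages, and one page is cited by multiple
# queries -- the two views are the same data read from opposite directions.
print(sorted(query_to_pages["best running shoes"]))
print(sorted(page_to_queries["/guides/running-shoes"]))
```

Having both directions is what makes the report actionable: the first view answers "which of my pages satisfy this query," and the second answers "which AI-driven demand is this page actually serving."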

Catch up quick. Microsoft launched the AI Performance report in Bing Webmaster Tools in February as its first GEO-focused dashboard. It:

  • Tracks where and how often your content is cited in AI answers across Bing, Copilot, and partners.
  • Shows grounding queries, cited URLs, and visibility trends over time.
  • Focuses on citation visibility — not clicks, rankings, or traffic.

What they’re saying. Microsoft said the update responds to “strong positive customer feedback and numerous requests.”

The announcement. The addition of query-to-page mapping to Bing Webmaster Tools appeared in a Microsoft Advertising blog post: The AI Performance dashboard: Your view into where your brand appears across the AI web

The entity home: The page that shapes how search, AI, and users see your brand

The entity home is the single page that anchors how algorithms, bots, and people understand your brand. It’s usually your About page, and it does far more than most teams realize.

It’s where algorithms resolve your identity, where bots map your footprint, and where humans verify trust before they convert. In one test, improving that page alone lifted conversions by 6% for visitors who reached it. The reason is simple: the human and the algorithm are doing the same job — checking claims, validating evidence, and deciding whether to trust you.

For years, this was overlooked. Most SEOs focused on rankings and traffic while underinvesting in the page that defines what their brand actually is. That’s no longer sustainable. The entity home is the foundation of how your brand is interpreted across search, AI, and what comes next.

What the entity home isn’t

Before going further, here are four misreadings worth pre-empting.

Not a ranking trick

Getting the entity home right doesn’t produce a traffic spike next Tuesday. It builds the confidence prior that compounds through every gate of the pipeline over time.

Not just schema

Schema markup helps the algorithm read what is already there. It isn’t a substitute for the claims, the evidence links, and the consistent positioning that schema describes. Schema without substance is a well-formatted, empty declaration.
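To make "schema that describes substance" concrete: on an entity home, this usually means an `Organization` block whose every field restates a claim the page already makes in visible prose, with `sameAs` pointing at independent profiles that corroborate it. A minimal, hypothetical example — the company name and URLs are placeholders, not a recommendation from the article:

```python
import json

# Hypothetical entity-home markup. Each field should mirror a claim the
# About page already makes in prose; sameAs lists independent profiles
# that corroborate it. Names and URLs below are placeholders.
entity_home_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "@id": "https://example.com/about#organization",
    "name": "Example Co",
    "url": "https://example.com/",
    "description": "Example Co builds widgets for industrial customers.",
    "sameAs": [
        "https://www.linkedin.com/company/example-co",
        "https://en.wikipedia.org/wiki/Example_Co",
    ],
}

# Serialized, this JSON-LD would sit in a <script type="application/ld+json">
# tag on the entity home.
print(json.dumps(entity_home_schema, indent=2))
```

The point of the section stands either way: if the prose and the corroborating sources aren't there, this block is exactly the "well-formatted, empty declaration" described above.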

Not always the About page

For most companies, it is. For most individuals, though, the entity home is a page on someone else's website. The right URL is the one that carries the clearest identity statement, the strongest internal link prominence from the rest of the site, and the most stable long-term address (something people often don't think about).

Not enough without corroboration

The entity home is where you declare your claims. Independent third-party sources confirm and corroborate your claims. The algorithm will only cross the confidence threshold when what you say matches what the weight of evidence supports.

Three audiences, one anchor — and most brands are ignoring two of them

The entity home serves three audiences simultaneously, through three completely different mechanisms. Most brands haven’t yet given them enough thought.

The three audiences your entity home serves
  • Bots use the entity home when mapping the digital footprint. They use it to establish what entity they are dealing with and how to interpret every corroborative source they find. 
  • Algorithms anchor their identity resolution against it, checking confidence at every relevant gate against whatever baseline this page set. 
  • Humans reach for it when they want to see a resource that feels authoritative precisely because it is structured to inform rather than to sell.

So, the entity home webpage is vital to all three audiences — bots, algorithms, and humans: it sets the tone for the bot in DSCRI, for the algorithms in ARGDW, and for the person who converts.

The entity home is just one page, and that isn’t enough

The entity home anchors everything: the canonical URL where the algorithm initializes its model of the brand, where bots orient themselves, and where humans arrive to verify their instinct. One page, doing one critical job. But one page declares. It doesn’t educate.

The entity home website educates. It structures every facet of the brand across pages that give the algorithm a complete picture of:

  • Who this entity is.
  • What it does.
  • Who it works alongside.
  • What it has produced.
  • Where independent sources confirm what the brand claims about itself. 

The difference between the two is the difference between introducing yourself and making your case.

Search built the web around a single assumption — the human acts. The engine organized, the website presented, and the human chose. That model shaped 30 years of architecture decisions because the website’s job was to win the human’s attention and trust once the engine had delivered them to you.

But assistive engines broke that assumption. They took on the evaluation work the human used to do: reading, comparing, synthesizing, and recommending. The human still makes the final call, but the website needs to have made its case to the algorithm before the human ever arrives. 

The audience that matters first has shifted, and a website that speaks only to humans is already losing the conversation that determines whether those humans show up at all.

Agents go one step further. The agent researches, decides, and acts. The human receives the outcome. The website that wins in an agentic environment isn’t the one with the most compelling hero section — it’s the one the agent can read, trust, and act on without inferring anything.

All three modes co-exist, and all three always will. 

  • Search serves the window shopper. 
  • Assistive engines serve the human who wants a recommendation without doing the research. 
  • Agents serve the task that can be delegated entirely. 

What shifts over the next three years isn’t which mode exists — it’s which mode does the most work, and what your website needs to do to win each one.

This is where I’ll plant a flag, and you can disagree. All three jobs need attention right now — the percentages below describe where the main focus of your effort sits, not permission to ignore the others. 

The work on assistive and agential is already overdue. The speed of change will probably make these figures look dated in a few months.

Focus weighting by year: Search, assistive, agential
  • 2026: Search 60%, Assistive 35%, Agential 5%
    • Search still drives most conversions. But the 35% on assistive isn’t optional; it’s late. The brands that started two years ago are already compounding.
  • 2027: Search 35%, Assistive 50%, Agential 15%
    • Assistive engines will be handling enough upstream evaluation that discovery and correct interpretation become the primary battle. Search remains significant. Agential execution is arriving.
  • 2028: Search 20%, Assistive 45%, Agential 35%
    • Agents execute. The algorithm’s confidence in your brand determines whether you’re in the consideration set before any human is involved. Search and assistive don’t disappear — they become the infrastructure the agential layer runs on.

The entity home website anchors all three eras. What changes is who it speaks to first, and what that conversation needs to contain.

Entity home (one page) vs entity home website (full education hub)

Each cluster in that diagram declares something: these satellite pages, grouped this way, belong to this entity and describe one specific dimension of what it is. 

  • /social names the platforms the brand controls. 
  • /peers places the entity in its professional network. 
  • /companies closes the relationship loop between person and organization. 

The grouping carries meaning — an algorithm that reads the structure learns something the individual pages couldn’t tell it separately.

The entity home website has three jobs

Search, assistive, and agential engines co-exist, which means the entity home website runs three distinct jobs simultaneously. 

  • The search job is the one 30 years of practice has refined, and it doesn’t change: get the bots through the DSCRI infrastructure gates cleanly, so the ranking engine delivers the right humans to you, and your content draws them through the funnel with clarity, credibility, and a path to conversion.
  • The assistive job is the one most brands are ignoring, and where the competitive gap is opening fastest: educate the algorithms. Your entity home website structures your brand’s story so algorithms understand it without guessing, and your content wins the competitive phase (ARGDW) with the highest possible confidence intact. Every explicit link from your entity home website to a satellite property declares a graph edge, carrying higher confidence through the pipeline than any connection the algorithm has to infer for itself.
  • The agential job is the hardest to prepare for, and it’s already arriving: brief the agents. Agentic engines don’t read your website the way a human reads a marketing page — they read it the way an instructed system reads a briefing document, scanning for structured, unambiguous, machine-interpretable facts. Don’t make the machine use imagination it doesn’t have.


Entity pillar pages solve the identity problem keyword cornerstones were never built for

SEO has always known what to do with a topic: build an authoritative page around it, link it well, and earn rankings. That architecture works because the ranking engine evaluates content.

What it can’t do is tell the algorithm who the entity behind that content is, what relationships it has built, what it has demonstrated over time, or why it should be trusted to recommend rather than merely rank.

An entity has facets, and facets aren’t the same thing as topics. A person isn’t “SEO consultant” plus “technical SEO” plus “keynote speaker”: those are keyword clusters, useful for ranking, useless for identity.

What the algorithm actually resolves identity against is the network of dimensions that define what this entity is — the companies it belongs to, the peers it works alongside, the publications it has appeared in, the expertise it has demonstrated over years, the events it speaks at, and the work it has produced.

An entity pillar page is the authoritative page on your own property for one of those dimensions.

  • The /expertise page establishes demonstrated knowledge in a specific domain, not as a content topic, but as an identity declaration.
  • The /peers page places the entity in a professional network the algorithm already trusts.
  • The /companies page closes the loop between person and organization.
  • The /press page links to independent coverage that corroborates the entity’s claims, giving the algorithm something to cross-reference rather than take on faith.

These pages aren’t traffic pages in the traditional sense, and that framing matters: SEOs who measure them against keyword rankings will consistently underinvest in them because the return doesn’t show up in rank tracking. The return shows up in what AI assistive engines say about your brand when your prospects ask.

Keyword cornerstones vs entity pillar pages

Keyword cornerstone pages and entity pillar pages serve different audiences, and your website needs both

The keyword cornerstone page and the entity pillar page aren’t competing strategies: they’re parallel architectures serving different audiences, which means your website needs both, and the question is how to build them so they compound each other’s value rather than compete for the same resource.

The overlap between them is real and worth engineering deliberately. The expertise page that ranks for “technical SEO audit” can also function as the entity pillar page that declares this entity’s demonstrated knowledge in that domain, if it’s built with that second function in mind:

  • Explicit entity statements.
  • Schema that names the relationships rather than just the topic.
  • Links to corroborating third-party sources stable enough to persist across years.
  • A URL structure that commits to the identity dimension rather than the keyword cluster.

When the two functions align, one page does both jobs, which is the ideal outcome.

When they diverge, and the page that captures search traffic can’t carry the identity declaration without sacrificing one function for the other, you face an architectural choice. Making that choice consciously, rather than defaulting to the keyword model, is the skill this transition requires.

The percentages already told you the weighting: Both layers are required starting today

Earlier in this article, the 2026/2027/2028 split put search at 60%, then 35%, then 20% of focus. What those numbers don’t say, but what the logic demands, is that the other percentage — the assistive and agential share — needs your website to feed them right now. Don’t wait until the balance shifts.

Keyword cornerstone pages feed the search share. Entity pillar pages feed the assistive and agential share.

If you build entity pillar pages in 2027, when assistive engines truly dominate, you’ll be building into a window that has already closed for the brands that started in 2025, because the algorithm’s model of your entity solidifies around whatever you gave it during the period it was actively learning.

The percentages describe where the demonstrable value sits at each stage. Your investment needs to precede the moment your boss sees the results, not follow it.

Both architectures are required today; the balance shifts, but the requirement for both never goes away.

Building for machines and humans simultaneously is cheaper than building for each separately

The risk brands fear when they encounter the machine-optimization argument is a false trade-off: build for machines at the expense of humans, strip the warmth from the copy, replace narrative with structured data fields, and turn the About page into a schema exercise. In practice, you can avoid the trade-off entirely, because the two sets of best practices are more complementary than they appear.

Clear entity statements that help the algorithm resolve your identity also help the human visitor understand immediately who they’re dealing with. Explicit links to corroborating third-party sources that build algorithmic confidence also give the human prospect the independent validation they’re quietly looking for. Schema markup that declares relationships for machine consumption gives structured clarity that human scanners doing final due diligence actually appreciate.

For me, this is the reframe that makes the whole project manageable: my approach to the entity home website is your current marketing, restructured to serve three audiences simultaneously, not a technical infrastructure project running alongside it. One investment that has three returns, and (when done right), the requirements pull in the same direction more often than they pull apart.

The funnel is moving inside the assistant.

When an assistive engine names your brand, summarizes it, and links to it in response to a user query, a conversion event has happened that you don’t see in your Analytics dashboard, and the human who arrives at your website has already been half-sold by the algorithm before they clicked. Traffic will decline as more of that evaluation work moves upstream, and the brands that measure only what arrives at the site will systematically underestimate both the value they’re generating and the gaps in their strategy.

Start measuring where your brand appears in assistive engine responses, how consistently it appears, and what the algorithm says about you when it does.


Getting the entity home right requires definition, proof, and a sustained corroboration campaign

Start with the entity home page itself: choose the single URL that functions as the canonical anchor for your brand’s identity and commit to it. Don’t discover it by asking an AI engine what it thinks your entity home is, because the engine will tell you what it has already learned, and that might be your website homepage, Wikipedia, a press profile, or a LinkedIn page you half-filled in five years ago. You choose it, then you verify the algorithm has learned the lesson you are giving it. You are the adult in the room.

Five criteria determine that choice, in order of weight:

  • The most explicit identity statement on the property.
  • The strongest internal link prominence from the rest of the site.
  • The best-structured schema markup with a stable @id.
  • The clearest outbound links to corroborating third-party sources.
  • The most stable long-term URL.

If your About page doesn’t hit all five, it isn’t doing the job the algorithm requires.

Invest in your About page. Strengthen it with a clear entity statement, schema with a proper @id, verified links to Wikipedia and Wikidata where they exist, every accurate sameAs declaration you can support, and the claims that define your brand’s positioning.
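To make the About-page checklist concrete, here is a minimal sketch of the kind of JSON-LD entity declaration the article describes: a clear entity statement, a stable `@id`, and `sameAs` links to corroborating profiles. Every name, URL, and identifier below is a placeholder, not a prescription.

```python
import json

# Minimal schema.org Organization declaration for an entity home page.
# All names, URLs, and identifiers are illustrative placeholders.
entity_home_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    # A stable @id the rest of the site (and satellite pages) can reference.
    "@id": "https://www.example.com/about#organization",
    "name": "Example Brand",
    "url": "https://www.example.com/",
    # The explicit entity statement: who this is and what it does.
    "description": "Example Brand is a fictional company used here to illustrate a clear, one-sentence entity statement.",
    # sameAs declarations pointing at profiles the brand can actually support.
    "sameAs": [
        "https://www.wikidata.org/wiki/Q0000000",
        "https://en.wikipedia.org/wiki/Example_Brand",
        "https://www.linkedin.com/company/example-brand",
    ],
}

# Serialize for embedding in a <script type="application/ld+json"> tag.
print(json.dumps(entity_home_schema, indent=2))
```

The point of the stable `@id` is that every other page on the site can name this exact node when declaring relationships, rather than leaving the algorithm to infer that two mentions refer to the same entity.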

Declaration vs corroboration - claim vs evidence

That single page is the anchor.

The entity home website is the education hub built around it: every entity pillar page you build — /expertise, /peers, /companies, /press — extends the identity declaration outward, giving the algorithm more dimensions to resolve against and more facets to cross-reference with independent sources. Each of those pages does for one identity dimension what the About page does for the whole: declares something specific, verifiable, and machine-readable about who this entity is.

The practical work on the entity home website side is the same audit applied at scale: for each entity pillar page, ask whether it declares a clear facet, links to corroborating evidence, and carries schema that names the relationship rather than just the topic. The pages that answer yes to all three are doing both jobs simultaneously — identity infrastructure and keyword architecture. The ones that don’t need a decision: extend them, or build the pillar function its own dedicated page.

If you’re unsure how much influence you actually have over what AI communicates about you, the answer is more than most people assume — and the channels that give you the most leverage are exactly the ones entity pillar pages are built to activate.

Then force the corroboration loop across the whole footprint: drive independent third-party sources to reference, link to, and echo the claims the entity home makes and the facets the pillar pages declare across enough independent contexts that the algorithm’s confidence crosses from hedged claim to corroborated fact. 

That crossing doesn’t happen on a deadline and can’t be engineered in a sprint. The corroboration loop is the curriculum, slow by design, compounding with every cycle, never truly finished. It is the work, and it rewards the brands that start it today over the ones that plan to start it when the percentages shift.

This is the sixth piece in my AI authority series. 


Why better signals drive paid search performance

In an increasingly automated environment, paid search performance is constrained by a simple reality: Algorithms can only optimize toward the signals they’re given. Improving those signals remains the most reliable way to improve results.

That sounds straightforward, but in practice, many people are still optimizing around signals that don’t reflect real business outcomes.

Let’s dive into how these algorithms function, how you can influence them, and where many advertisers go wrong.

How bidding algorithms actually work

Modern bidding systems are often described as “black boxes,” suggesting they operate mysteriously. But that description isn’t helpful.

At a high level, bidding algorithms are large-scale pattern recognition systems.

Early automated bidding used simple statistical methods, including rules-based logic and regression models. Over time, these evolved into more advanced machine learning approaches using decision trees and ensemble models.

Eventually, these became large-scale learning systems capable of processing thousands of contextual and historical inputs. The technology has developed significantly, but the goal has stayed remarkably consistent.

Today’s systems evaluate signals such as query intent, device, location, time, historical performance, and user behavior, updating predictions continuously and adjusting bids in near-real time.

Despite this complexity, the underlying mechanisms haven’t changed:

Bidding algorithms identify patterns tied to a desired outcome, estimate that outcome’s probability and expected value for each auction, and adjust bids accordingly. They don’t understand business context or strategy — they infer success from feedback. This distinction matters.

When the feedback loop is weak, noisy, or misaligned with real business value, even advanced algorithms will efficiently optimize toward the wrong objective. Better technology doesn’t compensate for poor inputs.
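The mechanism described above can be reduced to a toy expected-value calculation. Real bidding systems weigh thousands of signals, but the core arithmetic of “probability times value” is this simple; all numbers below are illustrative.

```python
def expected_value_bid(p_conversion: float, conversion_value: float,
                       target_roas: float) -> float:
    """Toy bid calculation: bid up to the point where expected value
    per click still meets the target return on ad spend."""
    expected_value = p_conversion * conversion_value
    return expected_value / target_roas

# Two auctions with identical conversion values but different
# predicted conversion rates.
high_intent = expected_value_bid(p_conversion=0.05, conversion_value=200.0, target_roas=4.0)
low_intent = expected_value_bid(p_conversion=0.01, conversion_value=200.0, target_roas=4.0)

print(high_intent)  # 2.5 -> the system bids 5x more on the high-intent auction
print(low_intent)   # 0.5
```

Notice what this implies: if the `conversion_value` fed back to the system is noisy or misaligned with real business value, the same arithmetic bids confidently toward the wrong outcome.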

Dig deeper: Bidding and bid adjustments in paid search campaigns


The signals advertisers can influence

Paid search algorithms observe a vast range of signals, many of which are inferred by the platform and not directly controllable by you. These include user intent signals, behavioral patterns, and competitive dynamics.

While many signals sit outside of our control, there’s still a meaningful set of levers you control that shape how algorithms learn, including campaign structure, budgets, targeting, creative, and, above all, your conversion setup.

These inputs shape how the algorithm explores and learns. They help define the environment in which optimization occurs. But they don’t, by themselves, define what success looks like. That role is played by conversion data.

Dig deeper: Conversion rate: how to calculate, optimize, and avoid common mistakes

Conversion data: The most important signal

When performance plateaus, the first instinct is to blame structure, budgets, or creative. In reality, the biggest lever you have available usually sits elsewhere: conversion data. 

In most accounts, conversion data is the most influential signal you control. It defines the outcome the algorithm is trained to pursue and directly informs prediction models, bid calculations, and learning feedback loops.

When conversion setups are misaligned, overly broad, duplicated, or noisy, platforms still optimize efficiently, just not toward outcomes the business actually values. This is why, at times, you can show improving platform metrics while your commercial performance stagnates or deteriorates.

A common mistake is focusing on increasing conversion volume rather than improving conversion quality. Volume accelerates learning, but if the signal is weak, faster learning just means faster optimization toward a suboptimal goal.

In practice, refining what counts as a conversion often delivers greater performance gains than structural or tactical changes elsewhere in the account.

Dig deeper: Why a lower CTR can be better for your PPC campaigns

Aligning conversion signals with real business KPIs

Before any optimization begins, define what success genuinely means for your business. Paid search platforms don’t have intrinsic knowledge of your revenue quality, profitability, or downstream value. They only see what is explicitly passed back to them.

Misalignment typically appears in predictable forms:

  • Revenue is used as the primary signal when margins vary significantly.
  • Lead submissions are optimized without regard to lead quality or sales outcomes.
  • Short-term efficiency metrics are prioritized over long-term value.

In each case, the algorithm is doing exactly what it has been instructed to do. The issue isn’t optimization accuracy, but goal definition. If an increase in a given conversion wouldn’t be seen as a win by the business, it shouldn’t be the primary signal used for optimization.

Dig deeper: 3 PPC KPIs to track and measure success


Strengthening conversion signals with richer, more resilient data

Conversion quality is determined by how confidently the platform can identify and interpret a tracked event.

Browser-based tracking alone is increasingly incomplete due to privacy controls, attribution gaps, and fragmented user journeys. As a result, ad platforms rely on a combination of browser-side and server-side data to improve matching and attribution. This isn’t just a measurement problem: it directly affects how confidently platforms can learn from conversions.

Stronger conversion signals are typically characterized by multiple reinforcing parameters, including:

  • First-party identifiers, such as hashed personal data passed via enhanced conversion frameworks.
  • Click identifiers that connect conversions back to ad interactions.
  • Transaction or event IDs that prevent duplication.
  • Accurate conversion values.
  • Session- and network-level attributes that improve attribution confidence.

When a conversion can be recognized through multiple mechanisms, platforms can match it more reliably and use it in learning models with greater confidence. This improves reporting accuracy and bidding performance by reducing feedback loop uncertainty.
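The “hashed personal data” mentioned above typically follows a normalize-then-hash pattern: the identifier is trimmed, lowercased, and SHA-256 hashed before being sent, so the same customer matches regardless of how they typed their email. A minimal sketch follows; exact normalization rules vary by platform, so check your platform’s documentation.

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    """Trim whitespace and lowercase before hashing, so cosmetic
    differences in user input don't break conversion matching."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Both inputs resolve to the same hash, and therefore the same identifier.
a = normalize_and_hash("  Jane.Doe@Example.com ")
b = normalize_and_hash("jane.doe@example.com")
print(a == b)  # True
```

Skipping the normalization step is a common silent failure: the hashes no longer match the platform’s records, and the conversion signal quietly degrades.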

Dig deeper: How to track and measure PPC campaigns

Choosing conversion goals

Selecting the right conversion goal isn’t a binary decision. It involves balancing several competing factors:

  • Volume: Higher volumes support faster learning.
  • Value accuracy: Closer alignment with business outcomes improves decision quality.
  • Stability: Highly variable values can introduce noise.
  • Latency: Delayed feedback slows learning and increases uncertainty.

Higher-volume, faster conversions often sit further away from true commercial outcomes, while lower-volume, high-quality conversions may better reflect business value but risk data sparsity. The most effective setups acknowledge these trade-offs rather than attempting to eliminate them entirely.

In many cases, the optimal solution involves using proxy or layered conversion goals that strike a balance between learning speed and value accuracy.

Dig deeper: How to use proxy metrics to speed up optimization in complex B2B journeys

Practical examples of selecting and strengthening conversion goals

Ecommerce optimization based on gross margin, not revenue

For ecommerce, optimizing toward order value assumes all revenue is equal. In reality, product margins often vary widely. When revenue alone is used as the optimization signal, algorithms may prioritize high-value — but low-margin — products.

A more effective approach is to optimize for gross margin by passing margin-adjusted conversion values via server-side tracking or offline conversion imports. This allows bidding systems to prioritize your business’s profitability rather than top-line revenue, without exposing sensitive cost data client-side.
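The margin-adjusted approach amounts to passing profit, not revenue, as the conversion value. A minimal sketch of the server-side calculation, with per-SKU margin rates that are purely illustrative assumptions:

```python
# Hypothetical per-SKU gross margin rates, kept server-side so cost
# data never reaches the client.
MARGIN_BY_SKU = {"SKU-A": 0.60, "SKU-B": 0.15}

def margin_adjusted_value(line_items: list[dict]) -> float:
    """Conversion value = sum of (price * quantity * margin) per line item."""
    return sum(
        item["price"] * item["quantity"] * MARGIN_BY_SKU[item["sku"]]
        for item in line_items
    )

order = [
    {"sku": "SKU-A", "price": 50.0, "quantity": 1},   # high margin
    {"sku": "SKU-B", "price": 200.0, "quantity": 1},  # high revenue, low margin
]

# Revenue says SKU-B dominates this order; margin says the two
# products contribute equally.
print(margin_adjusted_value(order))  # 60.0 (30.0 from SKU-A + 30.0 from SKU-B)
```

Fed this value instead of the 250.0 in revenue, the bidding system learns to weight the high-margin product on equal footing rather than chasing top-line order value.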

Lead generation with long conversion latency

In lead gen models where final outcomes occur weeks or months after the initial click, form submissions alone can provide you with weak signals. They are fast and high-volume, but poorly correlated with revenue.

Introducing lead scoring improves signal quality. Leads can be assigned proxy values based on known attributes and early indicators of quality, such as company size, role seniority, or engagement depth. These values can then be passed back to the platform via CRM integrations or server-side tracking, enabling value-based optimization even when final outcomes are delayed.
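A lead score like the one described above can start as a simple rule-based proxy value assigned at submission time and passed back via CRM import. The attributes and weights below are made up for illustration; in practice they’d be fit against closed-won data.

```python
def proxy_lead_value(company_size: int, is_senior_role: bool,
                     pages_viewed: int) -> float:
    """Assign a proxy conversion value from early quality indicators.
    Weights are illustrative placeholders, not recommendations."""
    value = 10.0  # baseline value for any submitted lead
    if company_size >= 200:
        value += 40.0          # firmographic fit
    if is_senior_role:
        value += 30.0          # role seniority
    value += min(pages_viewed, 10) * 2.0  # engagement depth, capped
    return value

# A senior contact at a large company that browsed deeply is worth far
# more to the bidding algorithm than an anonymous-looking form fill.
print(proxy_lead_value(company_size=500, is_senior_role=True, pages_viewed=8))   # 96.0
print(proxy_lead_value(company_size=20, is_senior_role=False, pages_viewed=1))   # 12.0
```

Even a crude version of this gives the platform a value gradient to optimize against, instead of treating every form submission as equally valuable.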

Optimizing toward predicted lifetime value

If you’re focused on lifetime value (LTV), there are two viable approaches: 

  • Where LTV can be reliably predicted within a short window after conversion, predicted values can be imported and used directly for optimization. 
  • If early prediction isn’t feasible for you, lead scoring or early behavioral proxies can be used instead.

In both cases, your objective is the same: provide the algorithm with timely, value-weighted signals that correlate strongly with long-term revenue, rather than waiting for delayed outcomes that are too sparse to support learning.


Key takeaways for performance marketers

Modern bidding systems are powerful pattern recognition engines, but their effectiveness is constrained by the signals they receive.

The biggest performance gains rarely come from constant restructuring or tactical tests. They come from improving the clarity, quality, and commercial relevance of your conversion data.

Conversion signals are the most influential inputs you control, and misaligned or low-quality setups will limit performance regardless of how advanced the algorithm becomes.

Regularly audit your conversion definitions and ask a simple question: “Would you genuinely celebrate an increase in this outcome?” If the answer isn’t clear, the signal likely needs refinement.

Improving conversion goals, strengthening signal quality, and balancing volume, accuracy, and latency aren’t optional. They’re among the highest-impact ways to improve paid search performance.
