
Google Shopping API cutoff looms, putting ad delivery at risk


Google Shopping API migration deadlines are approaching, and advertisers who don’t act risk disrupted Shopping and Performance Max campaigns.

What’s happening. Google is sunsetting older API versions and pushing all merchants toward the Merchant API as the single source of truth for Shopping Ads. Advertisers can confirm which API they’re using in Merchant Center Next by checking the “Source” column under Settings > Data sources, where any listing marked “Content API” requires action.

Why we care. Google is actively reminding advertisers to migrate to the new Merchant API, with beta users required to complete the switch by Feb. 28 and Content API users by Aug. 18. If feeds aren’t properly reconnected, campaigns that rely on product data — especially those using feed labels — may stop serving altogether.

The risk. Feed labels don’t automatically carry over during migration. If advertisers don’t update their campaign and feed configurations in Google Ads, Shopping and Performance Max setups that depend on those labels for structure or bidding logic can quietly break.

What to do now. Google recommends completing the migration well ahead of the deadline, reviewing feed labels, and validating campaign delivery after reconnecting feeds. The transition was first outlined in mid-2024, but enforcement is now imminent as Google moves closer to fully retiring legacy APIs.
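One way to validate the reconnection step is to list the account’s data sources programmatically after migrating. The sketch below is a hedged illustration: the Merchant API endpoint path, the “v1beta” version, and the response field name are assumptions based on Google’s published API layout, so verify them against the current reference documentation before relying on this.

```python
# Hedged sketch: after reconnecting feeds, list a merchant account's data
# sources through the Merchant API. The endpoint path, "v1beta" version,
# and "dataSources" response field are assumptions from Google's published
# Merchant API layout -- confirm against the current reference docs.
import json
import urllib.request

def data_sources_url(account_id: str) -> str:
    """Build the list URL for the data sources sub-API (assumed layout)."""
    return (
        "https://merchantapi.googleapis.com/datasources/v1beta/"
        f"accounts/{account_id}/dataSources"
    )

def list_data_sources(account_id: str, access_token: str) -> list[dict]:
    """Fetch data sources; requires an OAuth token with the Content scope."""
    req = urllib.request.Request(
        data_sources_url(account_id),
        headers={"Authorization": f"Bearer {access_token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("dataSources", [])
```

Any source still labeled as coming from the legacy Content API in the response would be a candidate for action before the deadline.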

Bottom line. This isn’t a cosmetic backend change — it’s a technical cutoff that can directly impact revenue if ignored.

First seen. This update was spotted by Google Shopping Specialist Emmanuel Flossie, who shared the warnings he received on LinkedIn.


Does llms.txt matter? We tracked 10 sites to find out


The debate around llms.txt has become one of the most polarized topics in web optimization.

Some treat llms.txt as foundational infrastructure, while many SEO veterans dismiss it as speculative theater. Platform tools flag missing llms.txt files as site issues, yet server logs show that AI crawlers rarely request them.

Google even adopted it. Sort of. In December, the company added llms.txt files across many developer and documentation sites.

The signal seemed clear: if the company behind the sitemap standard is implementing llms.txt, it likely matters.

Except Google pulled it from its Search developer docs within 24 hours.

Google’s John Mueller said the change came from a sitewide CMS update that many content teams didn’t realize was happening. When asked why the files still exist on other Google properties, Mueller said they aren’t “findable by default because they’re not at the top-level” and “it’s safe to assume they’re there for other purposes,” not discovery.

The llms.txt research

We wanted data, not debates.

So we tracked llms.txt adoption across 10 sites in finance, B2B SaaS, ecommerce, insurance, and pet care — 90 days before implementation and 90 days after.

We measured AI crawl frequency, traffic from ChatGPT, Claude, Perplexity, and Gemini, and what else these sites changed during the same window.

The results:

  • Two of the 10 sites saw AI traffic increases of 12.5% and 25%, but llms.txt wasn’t the cause.
  • Eight sites saw no measurable change.
  • One site declined by 19.7%.

The 2 ‘success’ stories weren’t about the file

The Neobank: 25% growth

This digital banking platform implemented llms.txt early in Q3 2025. Ninety days later, AI traffic was up 25%.

Here’s what else happened in that window:

  • A PR campaign around its banking license, with coverage in major national publications.
  • Product pages restructured with extractable comparison tables for interest rates, fees, and minimums.
  • Twelve new FAQ pages optimized for extraction.
  • A rebuilt resource center with new banking information and concepts.
  • Technical SEO issues fixed, including broken header structures.

When a company gets Bloomberg coverage the same month it launches optimized content and fixes crawl errors, you can’t isolate the llms.txt as the growth driver.

The B2B SaaS platform: 12.5% growth

This workflow automation company saw traffic jump 12.5% two weeks after implementing llms.txt.

Perfect timing. Case closed. Except…

Three weeks earlier, the company published 27 downloadable AI templates covering project management frameworks, financial models, and workflow planners. Functional tools, not content marketing, drove the engagement behind the spike.

Google organic traffic to the templates rose 18% during the same period and continued climbing throughout the 90 days we measured.

Search engines and AI models surfaced the templates because they solved real problems and anchored an entirely new site section — not because they were listed in an llms.txt file.

The 8 sites where nothing happened after uploading llms.txt

Eight sites saw no measurable change. One declined by 19.7%.

The decline came from an insurance site that implemented llms.txt in early September. The drop likely had nothing to do with the file.

The same pattern showed up across all traffic channels. The llms.txt file neither prevented the decline nor created any advantage.

The other seven sites — ecommerce (pet supplies, home goods, fashion), B2B SaaS (HR tech, marketing analytics), finance, and pet care — all documented their best existing content in llms.txt. That included product pages, case studies, API docs, and buying guides.

Ninety days later, nothing changed. Traffic stayed flat. Crawl frequency was identical. The content was already indexed and discoverable, and the file didn’t alter that.

Sites that launched new, functional content saw gains. Sites that documented existing content saw no gains.

Why the disconnect?

No major LLM provider has officially committed to parsing llms.txt. Not OpenAI. Not Anthropic. Not Google. Not Meta.

Google’s Mueller put it plainly:

  • “None of the AI services have said they’re using llms.txt, and you can tell when you look at your server logs that they don’t even check for it.”

That’s the reality. The file exists. The advocacy exists. Platform adoption doesn’t, at least not yet.
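The server-log claim is easy to check on your own site: scan the access log for known AI crawler user agents and count how many of their requests ever touch /llms.txt. The user-agent substrings below are real crawler names; the log is assumed to be in the common combined format.

```python
# Count AI-crawler requests in an access log, and how many hit /llms.txt.
# Assumes combined log format; the user-agent list covers known AI crawlers.
import re
from collections import Counter

AI_AGENTS = ("GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended", "CCBot")

def llms_txt_hits(log_lines):
    """Return (total requests per AI agent, /llms.txt requests per agent)."""
    totals, llms = Counter(), Counter()
    for line in log_lines:
        path_match = re.search(r'"(?:GET|HEAD) (\S+)', line)
        if not path_match:
            continue
        for agent in AI_AGENTS:
            if agent in line:
                totals[agent] += 1
                if path_match.group(1).endswith("/llms.txt"):
                    llms[agent] += 1
    return totals, llms
```

If the second counter stays at zero across weeks of logs, the crawlers visiting your site aren’t asking for the file.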

The token efficiency argument (and its limits)

The strongest case for llms.txt is about efficiency. Markdown saves time and tokens when AI agents parse documentation. Clean structure instead of complex HTML with navigation, ads, and JavaScript.

Vercel says 10% of its signups come from ChatGPT. Its llms.txt includes contextual API descriptions that help agents decide what to fetch.

This matters — but almost exclusively for developer tools and API documentation. If your audience uses AI coding assistants like Cursor or GitHub Copilot to interact with your product, token efficiency improves integration.

For ecommerce selling pet supplies, insurance explaining coverage, or B2B SaaS targeting nontechnical buyers, token efficiency doesn’t translate into traffic.

llms.txt is a sitemap, not a strategy

The most accurate comparison is a sitemap.

Sitemaps are valuable infrastructure. They help search engines discover and index content more efficiently. But no one credits traffic growth to adding a sitemap. The sitemap documents what exists; the content drives discovery.

llms.txt works the same way. It may help AI models parse your site more efficiently if they choose to use it, but it doesn’t make your content more useful, authoritative, or likely to answer user queries.

In our analysis, the sites that grew did so because they:

  • Created functional assets like downloadable templates, comparison tables, and structured data.
  • Earned external visibility through press and backlinks.
  • Fixed technical barriers such as crawl and indexing issues.
  • Published content optimized for extraction, including FAQs and structured comparisons.

The llms.txt file documented those efforts. It didn’t drive them.

What actually works

The two successful sites show what matters:

  • Create functional, extractable assets. The SaaS platform built 27 downloadable templates that users could deploy immediately. AI models surfaced these because they solved real problems, not because they were listed in a markdown file.
  • Structure content for extraction. The neobank rebuilt product pages with comparison tables for interest rates, fees, and account minimums. This is data AI models can pull directly into answers without interpretation.
  • Fix technical barriers first. The neobank fixed crawl errors that had blocked content for months. If AI models can’t access your content, no amount of documentation helps.
  • Earn external validation. Coverage from Bloomberg and other major publications drove referral traffic, branded searches, and likely influenced how AI models assess authority.
  • Optimize for user intent. Both sites answered specific queries: “best project management templates” and “how do [brand] interest rates compare?” Models surface content that maps to what users are asking, not content that’s merely well documented.

None of this requires llms.txt. All of it drives results.

Should you implement an llms.txt file?

If you’re a developer tool where AI coding assistants are a primary distribution channel, then yes — token efficiency matters. Your audience is already using agents to interact with documentation.

For everyone else, treat llms.txt like a sitemap: useful infrastructure, not a growth lever.

It’s good practice to have. It won’t hurt. But the hour spent implementing llms.txt is often better spent restructuring product pages with extractable data, publishing functional assets, fixing technical SEO issues, creating FAQ content, or earning press coverage.

Those tactics have shown real ROI in AI discovery. llms.txt hasn’t — at least not yet.

The lesson isn’t that llms.txt is bad. It’s that we’re reaching for control in a system where the rules aren’t written yet. llms.txt offers that comfort: something concrete, actionable, and familiar, shaped like the web standards we already know.

But looking like infrastructure isn’t the same as functioning like infrastructure.

Focus on what actually works:

  • Create useful content.
  • Structure it for extraction.
  • Make it technically accessible.
  • Earn external validation.

Platforms and formats will change. The fundamentals won’t.


Why LLM-only pages aren’t the answer to AI search


With new updates in the search world stacking up in 2026, content teams are trying a new strategy to rank: LLM pages.

They’re building pages that no human will ever see: markdown files, stripped-down JSON feeds, and entire /ai/ versions of their articles.

The logic seems sound: if you make content easier for AI to parse, you’ll get more citations in ChatGPT, Perplexity, and Google’s AI Overviews.

Strip out the ads. Remove the navigation. Serve bots pure, clean text.

Industry experts such as Malte Landwehr have documented sites creating .md copies of every article or adding llms.txt files to guide AI crawlers.

Teams are even building entire shadow versions of their content libraries.

Google’s John Mueller isn’t buying it.

  • “LLMs have trained on – read and parsed – normal web pages since the beginning,” he said in a recent discussion on Bluesky. “Why would they want to see a page that no user sees?”

His comparison was blunt: LLM-only pages are like the old keywords meta tag. Available for anyone to use, but ignored by the systems they’re meant to influence.

So is this trend actually working, or is it just the latest SEO myth?

The rise of ‘LLM-only’ web pages

The trend is real. Sites across tech, SaaS, and documentation are implementing LLM-specific content formats.

The question isn’t whether adoption is happening; it’s whether these implementations are driving the AI citations teams hoped for.

Here’s what content and SEO teams are actually building.

llms.txt files

A markdown file at your domain root listing key pages for AI systems.

The format was proposed in September 2024 by Jeremy Howard, co-founder of Answer.AI, to help AI systems discover and prioritize important content.

Plain text lives at yourdomain.com/llms.txt with an H1 project name, brief description, and organized sections linking to important pages.

Stripe’s implementation at docs.stripe.com/llms.txt shows the approach in action:

# Stripe Documentation

> Build payment integrations with Stripe APIs

## Testing

- [Test mode](https://docs.stripe.com/testing): Simulate payments

## API Reference

- [API docs](https://docs.stripe.com/api): Complete API reference

The payment processor’s bet is simple: if ChatGPT can parse its documentation cleanly, developers will get better answers when they ask, “How do I implement Stripe?”

They’re not alone. Current adopters include Cloudflare, Anthropic, Zapier, Perplexity, Coinbase, Supabase, and Vercel.

Markdown (.md) page copies

Sites are creating stripped-down markdown versions of their regular pages.

The implementation is straightforward: just add .md to any URL. Stripe’s docs.stripe.com/testing becomes docs.stripe.com/testing.md.

Everything gets stripped out except the actual content. No styling. No menus. No footers. No interactive elements. Just pure text and basic formatting.

The thinking: if AI systems don’t have to wade through CSS and JavaScript to find the information they need, they’re more likely to cite your page accurately.
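The routing behind the “.md twin” pattern is simple. The sketch below illustrates the idea; the directory layout and filenames are hypothetical, and real sites usually generate the markdown at build time from the same source as the HTML.

```python
# Minimal sketch of ".md twin" routing: the same content at two URLs.
# Directory layout and filenames here are hypothetical.
from pathlib import Path

def resolve(url_path: str, root: str = "site") -> Path:
    """Map /testing.md to the markdown twin, anything else to the HTML page."""
    name = url_path.strip("/") or "index"
    if name.endswith(".md"):
        return Path(root, name)          # e.g. site/testing.md
    return Path(root, f"{name}.html")    # e.g. site/testing.html
```

In practice the twin can also be served via a rewrite rule at the web server or CDN layer rather than in application code.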

/ai and similar paths

Some sites are building entirely separate versions of their content under /ai/, /llm/, or similar directories.

You might find /ai/about living alongside the regular /about page, or /llm/products as a bot-friendly alternative to the main product catalog. 

Sometimes these pages have more detail than the originals. Sometimes they’re just reformatted.

The idea: give AI systems their own dedicated content that’s built for machine consumption, not human eyes. 

If a person accidentally lands on one of these pages, they’ll find something that looks like a website from 2005.

JSON metadata files

Dell took this approach with their product specs.

Instead of creating separate pages, they built structured data feeds that live alongside their regular ecommerce site.

The files contain clean JSON – specs, pricing, and availability.

Everything an AI needs to answer “what’s the best Dell laptop under $1000” without having to parse through product descriptions written for humans.

You’ll typically find these files as /llm-metadata.json or /ai-feed.json in the site’s directory.

# Dell Technologies

> Dell Technologies is a leading technology provider, specializing in PCs, servers, and IT solutions for businesses and consumers.

## Product and Catalog Data

- [Product Feed - US Store](https://www.dell.com/data/us/catalog/products.json): Key product attributes and availability.

- [Dell Return Policy](https://www.dell.com/return-policy.md): Standard return and warranty information.

## Support and Documentation

- [Knowledge Base](https://www.dell.com/support/knowledge-base.md): Troubleshooting guides and FAQs.

This approach makes the most sense for ecommerce and SaaS companies that already keep their product data in databases. 

They’re just exposing what they already have in a format AI systems can easily digest.
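What “exposing what they already have” can look like in code: serialize the product records already in your database into a flat JSON feed, keeping only the fields an answer actually needs. The field names and the /ai-feed.json location below are illustrative, not a standard.

```python
# Build a machine-friendly product feed from existing records.
# Field names and the output path are illustrative, not a standard.
import json

def build_feed(products: list[dict]) -> str:
    """Emit a flat JSON feed with only answer-relevant fields."""
    keep = ("sku", "name", "price_usd", "in_stock")
    feed = [{k: p[k] for k in keep if k in p} for p in products]
    return json.dumps({"products": feed}, indent=2, sort_keys=True)

# Typically written to a static path at deploy time, e.g.:
# Path("public/ai-feed.json").write_text(build_feed(catalog))
```

Note that the filtering step matters as much as the format: marketing copy written for humans is exactly what gets dropped.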

Dig deeper: LLM optimization in 2026: Tracking, visibility, and what’s next for AI discovery

Real-world citation data: What actually gets referenced

The theory sounds good. The adoption numbers look impressive. 

But do these LLM-optimized pages actually get cited?

The individual analysis

Landwehr, CPO and CMO at Peec AI, ran targeted tests on five websites using these tactics. He crafted prompts specifically designed to surface their LLM-friendly content.

Some queries even contained explicit 20+ word quotes designed to trigger specific sources.


Across nearly 18,000 citations, here’s what he found.

llms.txt: 0.03% of citations

Out of 18,000 citations, only six pointed to llms.txt files. 

The six that did work had something in common: they contained genuinely useful information about how to use an API and where to find additional documentation. 

The kind of content that actually helps AI systems answer technical questions. The “search-optimized” llms.txt files, the ones stuffed with content and keywords, received zero citations.

Markdown (.md) pages: 0% of citations

Sites using .md copies of their content got cited 3,500+ times. None of those citations pointed to the markdown versions. 

The one exception: GitHub, where .md files are the standard URLs. 

They’re linked internally, and there’s no HTML alternative. But these are just regular pages that happen to be in markdown format.

/ai pages: 0.5% to 16% of citations

Results varied wildly depending on implementation. 

One site saw 0.5% of its citations point to its /ai pages. Another hit 16%. 

The difference? 

The higher-performing site put significantly more information in their /ai pages than existed anywhere else on their site. 

Keep in mind, these prompts were specifically asking for information contained in these files. 

Even with prompts designed to surface this content, most queries ignored the /ai versions.

JSON metadata: 5% of citations

One brand saw 85 out of 1,800 citations (5%) come from their metadata JSON file. 

The critical detail here is that the file contained information that didn’t exist anywhere else on the website. 

Once again, the query specifically asked for those pieces of information.


The large-scale analysis

SE Ranking took a different approach.

Instead of testing individual sites, they analyzed 300,000 domains to see if llms.txt adoption correlated with citation frequency at scale.

Only 10.13% of domains, or 1 in 10, had implemented llms.txt. 

For context, that’s nowhere near the universal adoption of standards like robots.txt or XML sitemaps.

During the study, an interesting relationship between adoption rates and traffic levels emerged.

Sites with 0-100 monthly visits adopted llms.txt at 9.88%. 

Sites with 100,001+ visits? Just 8.27%. 

The biggest, most established sites were actually slightly less likely to use the file than mid-tier ones.

But the real test was whether llms.txt impacted citations. 

SE Ranking built a machine learning model using XGBoost to predict citation frequency based on various factors, including the presence of llms.txt.

The result: removing llms.txt from the model actually improved its accuracy. 

The file wasn’t helping predict citation behavior; it was adding noise.
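The logic of that test is feature ablation: score the same model with and without one feature column and compare error. The miniature below uses a leave-one-out 1-nearest-neighbour predictor in pure Python as a stand-in for SE Ranking’s 300,000-domain XGBoost setup; it shows the mechanics, not their data.

```python
# Feature-ablation sketch: compare model error with and without one feature.
# A leave-one-out 1-NN predictor stands in for a real gradient-boosting model.
def loo_mse(X, y):
    """Leave-one-out MSE of a 1-nearest-neighbour predictor (pure stdlib)."""
    err = 0.0
    for i, xi in enumerate(X):
        j = min((k for k in range(len(X)) if k != i),
                key=lambda k: sum((a - b) ** 2 for a, b in zip(xi, X[k])))
        err += (y[i] - y[j]) ** 2
    return err / len(X)

def ablate(X, y, drop):
    """Return (error with all features, error with column `drop` removed)."""
    X2 = [[v for j, v in enumerate(row) if j != drop] for row in X]
    return loo_mse(X, y), loo_mse(X2, y)
```

If removing a feature leaves error unchanged or lower, as SE Ranking found for the llms.txt flag, the feature carries no predictive signal.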

The pattern

Both analyses point to the same conclusion: LLM-optimized pages get cited when they contain unique, useful information that doesn’t exist elsewhere on your site.

The format doesn’t matter. 

Landwehr’s conclusion was blunt: “You could create a 12345.txt file and it would be cited if it contains useful and unique information.”

A well-structured about page achieves the same result as an /ai/about page. API documentation gets cited whether it’s in llms.txt or buried in your regular docs.

The files themselves get no special treatment from AI systems. 

The content inside them might, but only if it’s actually better than what already exists on your regular pages.

SE Ranking’s data backs this up at scale. There’s no correlation between having llms.txt and getting more citations. 

The presence of the file made no measurable difference in how AI systems referenced domains.

Dig deeper: 7 hard truths about measuring AI visibility and GEO performance

What Google and AI platforms actually say

No major AI company has confirmed using llms.txt files in their crawling or citation processes.

Google’s Mueller made the sharpest critique in April 2025, comparing llms.txt to the obsolete keywords meta tag: 

  • “[As far as I know], none of the AI services have said they’re using LLMs.TXT (and you can tell when you look at your server logs that they don’t even check for it).”

Google’s Gary Illyes reinforced this at the July 2025 Search Central Deep Dive in Bangkok, explicitly stating Google “doesn’t support LLMs.txt and isn’t planning to.”

Google Search Central’s documentation is equally clear: 

  • “The best practices for SEO remain relevant for AI features in Google Search. There are no additional requirements to appear in AI Overviews or AI Mode, nor other special optimizations necessary.”

OpenAI, Anthropic, and Perplexity all maintain their own llms.txt files for their API documentation to make it easy for developers to load into AI assistants. 

But none have announced their crawlers actually read these files from other websites.

The consistent message from every major platform: standard web publishing practices drive visibility in AI search. 

No special files, no new markup, and no separate versions needed.

What this means for SEO teams

The evidence points to a single conclusion: stop building content that only machines will see.

Mueller’s question cuts to the core issue: 

  • “Why would they want to see a page that no user sees?” 

If AI companies needed special formats to generate better responses, they would tell you. As he noted:

  • “AI companies aren’t really known for being shy.” 

The data proves him right. 

Across Landwehr’s nearly 18,000 citations, LLM-optimized formats showed no advantage unless they contained unique information that didn’t exist anywhere else on the site. 

SE Ranking’s analysis of 300,000 domains found that llms.txt actually added confusion to their citation prediction model rather than improving it.

Instead of creating shadow versions of your content, focus on what actually works.

Build clean HTML that both humans and AI can parse easily. 

Reduce JavaScript dependencies for critical content, which Mueller identified as the real technical barrier: 

  • “Excluding JS, which still seems hard for many of these systems.” 

Heavy client-side rendering creates actual problems for AI parsing.

Use structured data when platforms have published official specifications, such as OpenAI’s ecommerce product feeds.

Improve your information architecture so key content is discoverable and well-organized.

The best page for AI citation is the same page that works for users: well-structured, clearly written, and technically sound. 

Until AI companies publish formal requirements stating otherwise, that’s where your optimization energy belongs.

Dig deeper: GEO myths: This article may contain lies



10 salary negotiation tips for search marketers


Before you apply for a new role, it’s important to prepare for salary negotiations. This article offers practical, realistic guidance on pursuing fair pay in marketing.

Whether you work in SEO, PPC, or somewhere in between, salaries remain a contentious topic. 

They are often hard to discuss, difficult to quantify, and challenging to change.

While many resources cover salary negotiations in general, this article focuses specifically on negotiating pay for marketing roles.

Difficulties with marketing salaries

Several factors make marketing roles harder to benchmark than many other professions, complicating salary expectations and negotiations.

No industry standard

Unlike fields with national governing bodies and defined career grades, marketing lacks standardization. 

This makes it difficult to align salary bands across companies or compare roles on an equal footing.

Inconsistent job titles

Job titles in marketing vary widely. 

A VP of marketing at one company may perform duties similar to a junior account manager elsewhere, while in another organization, the title represents the most senior marketing leader.

Because titles are used inconsistently, it can be challenging to assess role seniority and determine which salary ranges are appropriate.

Major market shifts in recent years

Marketers who last negotiated pay during the COVID-driven digital boom of 2020-2021 may find today’s job market markedly different.

Just five years ago, businesses rapidly shifted to online-first marketing, driving strong demand for digital talent. 

Performance and organic marketers benefited from a candidate-favorable market, with new roles being created, frequent poaching, and rising salaries.

Today, conditions have changed. The rise of AI, global economic uncertainty, and company downsizing have reduced salary pressure for many marketing roles. 

There is also more uncertainty around job stability, leading fewer marketers to change roles unless necessary.

As a result, the salary levels seen in 2020-2021 are largely a thing of the past. 

Well-paid marketing roles still exist, but they are harder to find. That reality should inform your salary negotiations, not discourage them.

Some marketing channels can be misunderstood

Less marketing-savvy companies often advertise a single role intended to cover three or more distinct specializations, typically at bottom-of-the-market pay.

Even organizations that better understand marketing skill sets may struggle to grasp the full complexity and breadth of knowledge required to perform a role effectively. 

This can lead to significant undervaluation of marketers.

Given that marketing salaries can be difficult for employers to navigate, how can you ensure you are fairly compensated for your experience and expertise?

The following nine tips can be broadly grouped into four areas:

  • Know what you bring to the table.
  • Know what is realistic.
  • Identify and demonstrate what is valuable to the company.
  • Stick to your boundaries.

Know what you bring to the table

We’ll start with the side of salary negotiations that, for some, can be very difficult: accurately valuing your own skill set.

If you are in a position to negotiate a salary, you have either already been offered the job or you work for the company and are hoping to secure a raise. 

In either case, the company must already believe you are suitable for the role. 

That does not stop it from trying to secure your services at the most economical price.

Knowing what you bring to the table is key to having the bargaining power and confidence to negotiate a fair salary. 

This does not just mean how much direct experience you have in the role you have landed.

Tip 1: Demonstrate your experience in the industry

Don’t underestimate how much employers value candidates who have knowledge and experience within their sector.

You may also find that some industries struggle to hire marketing professionals, and your willingness to join that industry can command a higher salary.

If you’ve worked in notoriously difficult industries, such as gambling, adult entertainment, or pharmaceuticals, you may be able to negotiate higher pay because of it. 

This can be due to the perception of difficulty in marketing within these industries.

Dig deeper: How to become exceptional at SEO

Tip 2: Promote your prior experience in and out of similar roles

Your years of experience in a role may feel like an obvious bargaining chip when negotiating salary. 

However, don’t forget that an employer may also benefit from the knowledge and experience you gained outside the role you are applying for.

Just because your previous job titles may not sound similar to the role you are negotiating now doesn’t mean the skills you developed there aren’t directly relevant.

Review your CV and compare it with the role you are applying for. Identify the parts of your work history that align with the job description.

Look beyond the obvious and consider transferable skills such as communication, problem-solving, and stakeholder management.

Tip 3: Highlight extra skills outside your job specification

Think about the skills you’ve developed over the years that may not be listed in the job description but are likely to be important for success in the role.

This can be particularly helpful if you are earlier in your career and lack directly relevant experience in similar positions.

Consider what you learned through volunteer roles, a first summer job, or even hobbies.

They may seem far removed from marketing, but you have likely gained lessons through those experiences that can support your current career.

Tip 4: Show your financial impact in previous roles

As with any ROI calculation, employers want to know whether the salary they may pay a candidate will deliver a return on that investment.

If you are negotiating a higher salary than originally offered, you need to demonstrate why it is financially worthwhile for the employer.

Be strategic in the examples you share. Rather than focusing only on increases in traffic or rankings, emphasize the revenue or cost savings you delivered.

You may be limited by NDAs and unable to share specific figures, but you can still reference outcomes, such as increasing organic search revenue by 5x or reducing a PPC budget by 20% while maintaining performance.

Know what is realistic

It’s one thing to understand your value based on the skills and experience you bring to a company. It’s another to assess that value accurately in the job market.

Ultimately, salaries are limited by what employers are willing to pay.

Tip 5: Be familiar with industry benchmarks

Do some research when considering your salary. 

You may have been paid above or below the market average in your current or previous role, which can skew your expectations.

Review job ads in your geographic area that require similar skills and experience, and note the lower and upper ends of the stated salary ranges.

Be careful not to compare roles based on job titles alone. 

As noted earlier, marketing titles are often inconsistent, and you may be comparing your role with one that is more senior or more junior.

Also consider the industry. Salaries in charities, for example, are unlikely to match those in tech or finance.

Salary benchmarking reports can also be useful.

These resources can provide a more objective view of the market. 

Keep in mind that salaries vary significantly by country, so avoid comparing U.S. and U.K. salaries directly.

Tip 6: Find out the internal salary ranges

When applying for a role, it is always helpful to understand the salary range being offered, although this is not always possible.

Some companies wait for candidates to make the first move on salary and may avoid sharing ranges to prevent offering more than necessary.

This is why it’s important to follow Tip 5 first, so you understand what your skills and experience are worth in the broader market.

Some organizations use salary banding. For example, a senior SEO specialist may be classified as a level 4 role, while a junior SEO role may sit at level 2.

If a recruiter is unwilling to share the exact salary range, you can ask about role levels or banding instead. 

This can provide insight into where the role sits within the company hierarchy and what the potential salary ceiling might be.

If you are able to identify the salary range, try to determine what qualifies a candidate for the top of the band.

Is the company looking for additional “nice to haves” to justify the highest salary? In some cases, it may be reserved for candidates with experience in a specific industry.

Once you understand which skills command higher pay, you can emphasize them in your CV and during interviews.

Dig deeper: What 15 years in enterprise SEO taught me about people, power, and progress

Identify and demonstrate the values of the company

As many candidates discover during interviews, what a company truly wants is not always clearly stated in the job description or early conversations with recruiters.

Hiring managers may not fully define what they are looking for in a successful candidate until they have interviewed several people for comparison.

As a result, you may be unclear about what matters most for the role, making it harder to demonstrate your suitability and justify the salary you are requesting.

Interviews can provide an opportunity to explore these values in more detail.

Ask interviewers what “success” looks like in the role or how they would describe the traits of their top-performing colleagues.

This can help you understand the characteristics and behaviors the company values.

Tip 7: Demonstrate how you live up to those values

Once you understand what the company values, identify how you can deliver that through the role.

For example, if “initiative” is highly valued, you can use the interview process to highlight how you demonstrate initiative in your work.

Use examples from past experience to show how you embody the company’s values, citing specific projects or situations where you demonstrated them.

If “transparency” is important to the organization, you might reference a time when you acknowledged a mistake.

Demonstrating alignment with the company, in addition to job proficiency, can make you a more attractive candidate and support a stronger salary case.

Dig deeper: Becoming AI-native: The next leap for SEO professionals

Stick to your boundaries

When negotiating your salary, you need to know your absolute minimum. This is not just the lowest salary you can afford to accept.

It also means identifying what you need from a role to feel respected and valued, and what the overall compensation package must include to support that.

Going into negotiations with clear boundaries makes it easier to say no when an offer does not meet them.

Tip 8: Consider other benefits that may offset a lower salary

In some situations, accepting a lower salary may make sense.

You may be moving into a different role where you have less experience and are starting at a more junior level. 

The opportunity to develop new skills can justify a lower salary.

Other tangible benefits, such as strong health coverage, additional paid time off, shorter working hours, or a gym membership, may also make a lower salary acceptable.

Tip 9: Identify other positives that may justify a lower salary

You may be moving into an industry you care deeply about. 

For example, joining a charity may provide enough personal satisfaction to offset lower pay.

Be sure to factor these considerations into your salary expectations when defining your boundaries.

Tip 10: Decide how little is enough for you to walk away

After working through the previous tips, you should have a clear understanding of the minimum compensation you would accept to take a role, or to stay in one.

Keep this in mind during negotiations. You may feel pressure not to lose the role by asking for more money, or worry about appearing overly focused on pay.

Joining a company and immediately feeling underpaid is not sustainable. 

At the same time, asking for a raise as soon as you start is unlikely to help you establish yourself.

You may be better off declining a role if the company cannot close the gap between its offer and your minimum salary expectations.

Dig deeper: 12 skills every SEO specialist must master by 2026

Empower yourself in marketing salary talks

You deserve to be paid what you are worth.

Use these tips to define your value, account for any mitigating factors, and arrive at a salary you are willing to accept.

Once you have that number, negotiating becomes a matter of clearly demonstrating the value you bring to the company compared with other candidates.

If the gap between what a company is willing to pay and what you believe your skills and experience are worth is too large, walking away may be the better option.


7 Marketing AI Adoption Challenges (And How to Fix Them)

You’ve likely invested in AI tools for your marketing team, or at least encouraged people to experiment.

Some use the tools daily. Others avoid them. A few test them quietly on the side.

This inconsistency creates a problem.

An MIT study found that 95% of AI pilots fail to show measurable ROI.

Scattered marketing AI adoption doesn’t translate to proven time savings, higher output, or revenue growth.

AI usage ≠ AI adoption ≠ effective AI adoption.

To get real results, your whole team needs to use AI systematically with clear guidelines and documented outcomes.

But getting there requires removing common roadblocks.

In this guide, I’ll explain seven marketing AI adoption challenges and how to overcome them. By the end, you’ll know how to successfully roll out AI across your team.

Free roadmap: I created a companion AI adoption roadmap with step-by-step tasks and timeframes to help you execute your pilot. Download it now.


First up: One of the biggest barriers to AI adoption — lack of clarity on when and how to use it.

1. No Clear AI Use Cases to Guide Your Team

Companies often mandate AI usage but provide limited guidance on which tasks it should handle.

In my experience, this is one of the most common AI adoption challenges teams face, regardless of industry or company size.


Vague directives like “use AI more” leave people guessing.

The solution is to connect tasks to tools so everyone knows exactly how AI fits into their workflow.

The Fix: Map Team Member Tasks to Your Tech Stack

Start by gathering your marketing team for a working session.

Ask everyone to write down the tasks they perform daily or weekly. (Not job descriptions, but actual tasks they repeat regularly.)

Then look for patterns.

Which tasks are repetitive and time-consuming?


Maybe your content team realizes they spend four hours each week manually tracking competitor content to identify gaps and opportunities. That’s a clear AI use case.

Or your analytics lead notices they are wasting half a day consolidating campaign performance data from multiple regions into a single report.

AI tools can automatically pull and format that data.

Once your team has identified use cases, match each task to the appropriate tool.


After your workshop, create assignments for each person based on what they identified in the session.

For example: “Automate competitor tracking with [specific tool].”

When your team knows exactly what to do, adoption becomes easier.

2. No Structured Plan to Roll Out AI Across the Organization

If you give AI tools to everyone at once, don’t be surprised if you get low adoption in return.

The issue isn’t your team or the technology. It’s launching without testing first.

The Fix: Start with a Pilot Program

A pilot program is a small-scale test where one team uses AI tools. You learn what works, fix problems, and prove value — before rolling it out to everyone else.

A company-wide launch doesn’t give you this learning period.

Everyone struggles with the same issues at once. And nobody knows if the problem is the tool, their approach, or both.

Which means you end up wasting months (and money) before realizing what went wrong.


Plan to run your pilot for 8-12 weeks.

Note: Your pilot timeline will vary by team.

Small teams can move fast and test in 4-8 weeks. Larger teams might need 3-4 months to gather enough feedback.

Start with three months as your baseline. Then adjust based on how quickly your team adapts.


Choose one department to pilot first. Content, email, or social teams work best because they produce repetitive outputs that show AI’s immediate value.

Select 3-30 participants from this department, depending on your team size.

(Smaller teams might pilot with 3-5 people. Larger organizations can test with 20-30.)

Then, set measurable goals with clear targets you can track. Like:

  • Cut blog production time from 8 hours to 5 hours
  • Reduce email draft revisions from 3 rounds to 1
  • Create 50 social media posts weekly instead of 20

Schedule weekly meetings to gather feedback throughout the pilot.

The pilot will produce department-specific workflows. But you’ll also discover what transfers: which training methods work, where people struggle, and what governance rules you need.

When you expand to other departments, they’ll adapt these frameworks to their own AI tasks.

After three months, you’ll have proven results and trained users who can teach the next group.


At that point, expand the pilot to your second department (or next batch of the same team).

They’ll learn from the first group’s mistakes and scale faster because you’ve already solved common problems.

Pro tip: Keep refining throughout the pilot.

  • Update prompts when they produce poor results
  • Add new tools when you find workflow gaps
  • Remove friction points the moment they appear


Your third batch will move even quicker.

Within a year, you’ll have organization-wide marketing AI adoption with measurable results.

3. Your Team Lacks the Training to Use AI Confidently

Most marketing teams roll out AI tools without training team members how to use them.

In fact, only 39% of people who use AI at work have received any training from their company.


And when training does exist, it might focus on generic AI concepts rather than specific job applications.

The answer is better training that connects to the work your team does.

The Fix: Role-Specific Training

Generic training explains how AI works. Role-specific training shows people how to use AI in their actual jobs.

Here’s the difference:

  • Social Media Manager. Generic training (lower priority): AI concepts and how large language models work. Role-specific training (start here): how to automate content calendars and schedule posts faster.
  • SEO Specialist. Generic: understanding neural networks and machine learning. Role-specific: AI-powered keyword research and competitor analysis.
  • Email Marketer. Generic: machine learning algorithms and data processing. Role-specific: using AI for personalization and subject line testing.
  • Content Writer. Generic: how AI models generate text and natural language processing. Role-specific: using AI to research topics, create outlines, and edit drafts.
  • Paid Ads Manager. Generic: deep learning fundamentals and algorithmic optimization. Role-specific: AI tools for ad copy testing, audience targeting, and bid management.

When training connects directly to someone’s daily tasks, they actually use what they learn.

For example, Mastercard applies this approach with three types of training:

  • Foundational knowledge for everyone
  • Job-specific applications for different roles
  • Reskilling programs where needed


Companies like KPMG, Accenture, and IKEA have also developed dedicated AI training programs for their teams.

This is likely because they learned that generic training creates enterprise AI adoption challenges at scale.

Employees complete courses but never apply what they learned to their actual work.


But you don’t need enterprise-scale resources to make this work.

Start by mapping what each role actually does with AI.

For example:

  • Your content team uses AI for research, strategy, outlines, and drafts
  • Your ABM team uses it for account research and personalized outreach
  • Your social team uses it for video creation and caption variations
  • Your marketing ops team uses it for workflow automation and data integration

Once you know what each role needs, pick your training approach.

Platforms like Coursera and LinkedIn Learning offer specific AI training programs that work well for flexible, self-paced learning.


Training may also be available from your existing tools.

Check whether your current marketing platforms offer AI training resources, such as courses or documentation.

For example, Semrush Academy offers various training programs that also cover its AI capabilities.


For teams with highly specific workflows, external trainers can be useful.

This costs more. But it delivers the most relevant results because the trainer focuses only on what your team actually needs to learn.

For example, companies like Section offer AI adoption programs for enterprises, including coaching and custom workshops.


But keep in mind that training alone won’t sustain marketing AI adoption.

AI tools evolve constantly, and your team needs continuous support to adapt.

Create these support systems:

  • Set up a dedicated Slack channel for AI questions where your team can share wins and troubleshoot problems
  • Run weekly Q&A sessions where people discuss specific challenges
  • Update training materials as new features and use cases emerge

4. Team Members Fear AI Will Replace Their Roles

Employees may resist AI marketing adoption because they fear losing their jobs to automation.

Headlines about AI replacing workers don’t help.


Your goal is to address these fears directly rather than dismissing them.

The Fix: Have Honest Conversations About Job Security

Meet with each team member and walk through how AI affects their workflow.

Point out which repetitive tasks AI will automate. Then explain what they’ll work on with that freed-up time.

Be careful about the language you use. Be empathetic and reassuring.

For example, don’t say “AI makes you more strategic.”

Say: “AI will pull performance reports automatically. You’ll analyze the insights, identify opportunities, and make strategic decisions on budget allocation.”

One is vague. The other shows them exactly how their role evolves.


Don’t just spring changes on your team. Give them a clear timeline.

Explain when AI tools will roll out, when training starts, and when you expect them to start using the new workflows.

For example: “We’re implementing AI for competitor tracking in Q2. Training happens in March. By April, this becomes part of your weekly process.”

When people know what’s coming and when, they have time to prepare instead of panicking.


Pro tip: Let people choose which AI features align with their interests and work style.

Some team members might gravitate toward AI for content creation. Others prefer using it for data analysis or reporting.

When people have autonomy over which features they adopt first, resistance decreases. They’re exploring tools that genuinely interest them rather than following mandates.


5. Your Team Resists AI-Driven Workflow Changes

People resist AI when it disrupts their established workflows.

Your team has spent years perfecting their processes. AI represents change, even when the benefits are obvious.

Resistance gets stronger when organizations mandate AI usage without considering how people actually work.


New platforms can be especially intimidating.

It means new logins, new interfaces, and completely new workflows to learn.

Rather than forcing everyone to change their workflows at once, let a few team members test the new approach first using familiar tools.

The Fix: Start with AI Features in Existing Tools

Your team likely already uses HubSpot, Google Ads, Adobe, or similar platforms daily.

When you use AI within existing tools, your team learns new capabilities without learning an entirely new system.

If you’re running a pilot program, designate 2-3 participants as AI champions.

Their role goes beyond testing — they actively share what they’re learning with the broader team.


The AI champions should be naturally curious about new tools and respected by their colleagues (not just the most senior people).

Have them share what they discover in a team Slack channel or during standups:

  • Specific tasks that are now faster or easier
  • What surprised them (good or bad)
  • Tips or advice on how others can use the tool effectively

When others see real examples, such as “I used Social Content AI to create 10 LinkedIn posts in 20 minutes instead of 2 hours,” it carries more weight than reassurance from leadership.


For example, if your team already uses a tool like Semrush, your champions can demonstrate how its AI features improve their workflows.

Keyword Magic Tool’s AI-powered Personal Keyword Difficulty (PKD%) score shows which keywords your site can realistically rank for — without requiring any manual research or analysis.


AI Article Generator creates SEO-friendly drafts from keywords.

Your content writers can input a topic, set their brand voice, and get a structured first draft in minutes. This reduces the time spent staring at a blank page.


Social Content AI handles the repetitive parts of social media planning. It generates post ideas, copy variations, and images.

Your social team can quickly build out a week’s content calendar instead of creating each post from scratch.


Don’t have a Semrush subscription? Sign up now for a 14-day free trial plus a special 17% discount on annual plans.

6. No Governance or Guardrails to Keep AI Usage Safe

Without clear guidelines, your team may either avoid AI entirely or use it in ways that create risk.

In fact, 57% of enterprise employees input confidential data into AI tools.


They paste customer data into ChatGPT without realizing it violates data policies.

Or publish AI-generated content without approval because the review process was never explained.

Your team needs clear guidelines on what’s allowed, what’s not, and who approves what.

Free AI policy template: Need help creating your company’s AI policy? Download our free AI Marketing Usage Policy template. Customize it with your team’s tools and workflows, and you’re ready to go.


The Fix: Create a One-Page AI Usage Policy

When creating your policy, keep it simple and accessible. Don’t create a 20-page document nobody will read.

Aim for 1-2 pages that are straightforward and easy to follow.

Include four key areas to keep AI usage both safe and productive.

  • Approved Tools: List which AI tools your team can use, both standalone tools and AI features in platforms you already use. Example: “Approved: ChatGPT, Claude, Semrush’s AI Article Generator, Adobe Firefly.”
  • Data Sharing Rules: Define specifically what data can and can’t be shared with AI tools. Example: “Safe to share: product descriptions, blog topics, competitor URLs. Never share: customer names, email addresses, revenue data, internal campaign plans, pricing strategies, unannounced product details.”
  • Review Requirements: Document who reviews each type of content before publication. Example: “Social posts: peer review. Blog posts: content lead approval. Legal/compliance content: legal team review.”
  • Approval Workflows (optional): Clarify who approves AI content at each stage. Example: “Internal drafts: content team. Customer-facing materials: marketing director. Compliance-related content: legal sign-off.”
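If you want the same rules to be enforceable by tooling (a Slack bot, a pre-publish check), the policy areas above can also live in a small machine-readable file alongside the written document. A minimal sketch, assuming a simple dict structure of my own invention (not any standard policy format); tool names and reviewer roles are the article’s illustrative examples:

```python
# Hypothetical machine-readable version of the one-page AI usage policy.
# Structure is an assumption; values come from the article's examples.
ai_policy = {
    "approved_tools": ["ChatGPT", "Claude", "Semrush AI Article Generator", "Adobe Firefly"],
    "data_sharing": {
        "safe": ["product descriptions", "blog topics", "competitor URLs"],
        "never": ["customer names", "email addresses", "revenue data"],
    },
    "review": {
        "social posts": "peer review",
        "blog posts": "content lead approval",
        "legal/compliance content": "legal team review",
    },
}

def is_approved(tool: str) -> bool:
    """Check whether a tool is on the approved list (case-insensitive)."""
    return tool.lower() in (t.lower() for t in ai_policy["approved_tools"])

print(is_approved("claude"))        # True: on the approved list
print(is_approved("RandomAITool"))  # False: not approved
```

A structure like this makes the “living document” habit concrete: when a new question gets answered, the answer is added in one place and every integration picks it up.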

Beyond documenting the rules, establish who team members should contact when they encounter situations the policy doesn’t address.

Designate a department lead, governance contact, or weekly office hours as the escalation point for:

  • Scenarios not covered in your guidelines
  • Technical site issues with approved AI tools
  • Concerns about whether AI-generated content is accurate or appropriate
  • Questions about data sharing


The goal is to give them a clear path to get help, rather than guessing or avoiding AI altogether.

Then, post the policy where your team will see it.

This might be your Slack workspace, project management tool, or a pinned document in your shared drive.


And treat it as a living document.

When the same question comes up multiple times, add the answer to your policy.

For example, if three people ask, “Can I use AI to write email subject lines?” update your policy to explicitly say yes (and clarify who reviews them before sending).


7. No Reliable Way to Measure AI’s Impact or ROI

Without clear proof that AI improves their results, team members may assume it’s just extra work and return to old methods.

And if leadership can’t see a measurable impact, they might question the investment.

This puts your entire AI program at risk.

Avoid this by establishing the right metrics before implementing AI.

The Fix: Track Business Metrics (Not Just Efficiency)

Here’s how to measure AI’s business impact properly.

Pick 2-3 metrics your leadership already reviews in reports or meetings.

These are typically:

  • Leads generated
  • Conversion rate
  • Revenue growth
  • Customer acquisition
  • Customer retention


These numbers demonstrate to your team and leadership that AI is helping your business.

Then, establish your baseline by recording your current numbers. (Do this before implementing AI tools.)

For example, if you’re tracking leads and conversion rate, write down:

  • Current monthly leads: 200
  • Current conversion rate: 3%

This baseline lets you show your team (and leadership) exactly what changed after implementing AI.

Pro tip: Avoid making multiple changes simultaneously during your pilot or initial rollout.

If you implement AI while also switching platforms or restructuring your team, you won’t know which change drove results.

Keep other variables stable so you can clearly attribute improvements to AI.


Once AI is in use, check your metrics monthly to see if they’re improving. Use the same tools you used to record your baseline.

Write down your current numbers next to your baseline numbers.

For example:

  • Baseline leads (before AI): 200 per month
  • Current leads (3 months into AI): 280 per month
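The baseline-versus-current comparison above is simple enough to script. A minimal sketch; the metric names and figures are the article’s illustrative numbers, not real campaign data:

```python
# Compare current metrics against the pre-AI baseline and report percent change.
# Figures are the article's illustrative numbers.

def pct_change(baseline: float, current: float) -> float:
    """Percent change from baseline to current, rounded to one decimal place."""
    return round((current - baseline) / baseline * 100, 1)

baseline = {"monthly_leads": 200, "conversion_rate_pct": 3.0}
current = {"monthly_leads": 280, "conversion_rate_pct": 3.0}

for metric, before in baseline.items():
    after = current[metric]
    print(f"{metric}: {before} -> {after} ({pct_change(before, after):+}%)")
```

Run monthly against the same baseline and the leads figure above shows a +40.0% lift, while the flat conversion rate shows exactly the kind of pattern worth digging into.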

But don’t just check if numbers went up or down.

Look for patterns:

  • Did one specific campaign or content type perform better after using AI?
  • Are certain team members getting better results than others?

Track individual output alongside team metrics.

For example, compare how many blog posts each writer completes per week, or email open rates by the person who drafted them.


If someone’s consistently performing better, ask them to share their AI workflow with the team.

This shows you what’s working, and helps the rest of your team improve.

Share results with both your team and leadership regularly.

When reporting, connect AI’s impact to the metrics you’ve been tracking.

For example:

Say: “AI cut email creation time from 4 hours to 2.5 hours. We used that time to run 30% more campaigns, which increased quarterly revenue from email by $5,000.”

Not: “We saved 90 hours with AI email tools.”

The first shows business impact — what you accomplished with the time saved. The second only shows time saved.

Frame other results the same way: lead with the business outcome, then the time or cost savings that produced it.

Build Your Marketing AI Adoption Strategy

When AI usage is optional, undefined, or unsupported, it stays fragmented.

Effective marketing AI adoption looks different.

It’s built on:

  • Role-specific training people actually use
  • Guardrails that reduce uncertainty and risk
  • Metrics that drive business outcomes

When those pieces are in place, AI becomes part of how work gets done.

If you want a step-by-step implementation plan, download our Marketing AI Adoption Roadmap.

Need help choosing which AI tools to pilot? Our AI Marketing Tools guide breaks down the best options by use case.

The post 7 Marketing AI Adoption Challenges (And How to Fix Them) appeared first on Backlinko.


How to choose a link building agency in the AI SEO era by uSERP

Remember when a handful of links from sites in your niche could drive steady organic traffic? That era is over.

Today, Google’s AI Overviews and the rise of answer engines like ChatGPT raise the bar. You have to do more to stay visible. Hiring an experienced link building agency is one efficient way to meet that challenge.

It’s also one of the most important investments you’ll make. The right partner doesn’t just build links. They position your brand as a trusted, cited source in the AI era.

So how do you choose the right agency for your company?

While the interface has changed, the core ranking signals remain largely the same. What’s changed is their priority.

LLMs need credible sources to ground their answers. That makes authoritative link building more important than ever.

This article shows you how to vet and choose a link building agency that understands these new priorities and can help your brand win trust in the AI-driven SEO landscape.

How link building and SEO are changing

Gartner has predicted that search engine volume will drop by 25% as AI takes over more answers. That makes working with an agency that understands AI SEO essential.

But how do you know which agencies actually do?

The real indicators are holistic authority and AI visibility. Only one in five links cited in Google’s AI Overviews matched a top-10 organic result, according to an Authoritas study. Even more telling, 62.1% of cited links or domains didn’t rank in the top 10 at all.

The takeaway is simple. AI systems and search engines don’t evaluate websites the same way. We’re no longer building links just for Google’s crawler.

Link equity alone isn’t enough. Sites need topical authority, brand mentions, and real market presence. The goal is to build a footprint that AI models recognize and can’t ignore.

The new criteria: Evaluating a link building agency for AI SEO

Choosing the right link building agency comes down to how well they prioritize the factors that matter now.

This section shows you what to look for.

Prioritizing quality, relevance, and traffic

I see this mistake all the time. A marketing director evaluates link quality based only on Domain Rating (DR).

High DR matters, but at uSERP, we know it’s not the finish line. You should also look for:

  • Relevance: A link from a DR 60, niche-specific site in your industry often beats a DR 80 general news site that covers everything from crypto to keto.
  • Minimum traffic standards: If a site doesn’t rank for keywords or attract real traffic, its links won’t help you rank. That’s why strict traffic minimums matter.

When vetting an agency, ask for contractual site-traffic guarantees.

A confident agency won’t hesitate to sign a Statement of Work that guarantees every link comes from a site with a minimum traffic threshold, such as 5,000+ monthly organic visitors.

If they won’t put traffic minimums in writing, they’re likely planning to place links on “ghost town” sites. These domains appear strong, but they lack a real audience, which protects their margins rather than supporting your growth.
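A traffic minimum like that is easy to enforce once you have prospect data in hand. A hedged sketch; the domains and figures are invented, and `monthly_traffic` stands in for whatever organic traffic estimate your SEO tool exports:

```python
# Screen link prospects against a contractual traffic minimum.
# Domains and figures are hypothetical.

MIN_MONTHLY_ORGANIC_VISITORS = 5_000  # the SOW threshold suggested above

prospects = [
    {"domain": "niche-blog.example", "dr": 60, "monthly_traffic": 12_000},
    {"domain": "ghost-town.example", "dr": 80, "monthly_traffic": 300},
    {"domain": "trade-news.example", "dr": 72, "monthly_traffic": 8_500},
]

def qualifies(site: dict) -> bool:
    """True if the site clears the minimum organic traffic bar."""
    return site["monthly_traffic"] >= MIN_MONTHLY_ORGANIC_VISITORS

qualified = [s["domain"] for s in prospects if qualifies(s)]
print(qualified)
```

Note that the DR 80 “ghost town” site is filtered out despite its impressive authority score, which is exactly the trap a DR-only evaluation falls into.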

Look for a content-driven approach and digital PR

Links don’t exist in a vacuum. The strongest ones come from being part of a real conversation.

The best agencies no longer operate like traditional link builders. They act more like content marketing and digital PR teams. 

Instead of asking for links, the best agencies create linkable assets — data studies, expert commentary, and in-depth guides that journalists and publishers want to cite — because they understand:

  • Google’s algorithms and AI models are continually getting better at identifying paid placements. A content-led approach keeps links natural, editorial, and valuable to readers.
  • Guest posting in the AI SEO era isn’t about a disposable 500-word article. It’s about thought leadership that positions your CEO as a credible expert.

At uSERP, for example, we created — and continuously update — our State of Backlinks for SEO report.

Red flags: Recognizing outdated or dangerous tactics

Choosing the wrong partner doesn’t just waste your budget. It puts your brand reputation — and potentially your company’s future — at risk.

Here are the biggest red flags to avoid when hiring an agency:

Guaranteed rankings

No one can guarantee a number-one ranking on Google. Any agency that promises specific keyword positions on a fixed timeline is likely doing one of two things:

  • Using risky, short-term tactics to force a temporary spike.
  • Selling you snake oil.

These agencies often rely on private blog networks (PBNs) or aggressive anchor text manipulation to manufacture fast results.

You might see an early jump, but the crash that follows — and the risk of a penalty when Google’s spam systems catch up — is never worth it.

Lack of transparency

If an agency won’t explain how they earn links or where placements will come from before you pay, walk away.

Reputable agencies are transparent. They’ll show real examples of past placements and share relevant case studies from your industry.

Agencies that hide their inventory usually do it for a reason. Those sites are often part of a low-quality network or link farm.

Self-serve link portfolios

If you’re a marketer or SEO on LinkedIn, chances are you’ve received this kind of pitch.

This is a common tactic among low-quality link builders: reselling backlinks from a shared inventory. I understand the appeal.

Strategic link acquisition is hard. Buying and flipping links is easy.

The problem — for you — is the footprint. If an agency can secure a link by filling out a form, anyone can. That includes casino affiliates, gambling sites, adult content, and outright scammers.

That’s not a natural link profile. Google has almost certainly already identified and burned those domains.

In the best case, you pay for a link that passes zero authority. In the worst case, Google flags your site as part of a link scheme.

Dirt-cheap packages

SEO and link building deliver incredible ROI, but they aren’t cheap.

You can’t buy a high-quality article with a real, earned link from an authoritative site for $50. Speaking as someone who runs an AI SEO agency, the true cost of quality content, editing, outreach, and relationship building is at least an order of magnitude higher.

That’s why cheap packages that promise multiple high-authority links are a major red flag. They almost always rely on:

  • Fully AI-generated, barely edited content.
  • Low-value link farms or resold inventory.
  • Toxic backlinks.

None of those will help you show up on AI search engines or Google.

Partnering with a link building agency for a sustainable market presence

Link building in the AI era is a long-term investment. It’s about building a durable market presence, not chasing quick wins.

The right partner sees themselves as an extension of your team. They care about:

  • Your backlink gap compared to competitors.
  • Your brand mentions across LLMs.
  • Your overall search and AI visibility.

They help you navigate content syndication, backlink audits, content marketing, and modern link building strategies with a unified approach.

If you’re ready to move past vanity metrics and start building authority that drives revenue and AI citations, it’s time to be selective about who you trust with your domain.

The right link building agency is out there. You just need to know how to spot them.


Google expands Shopping promotion rules ahead of 2026


Google is broadening what counts as an eligible promotion in Shopping, giving merchants more flexibility heading into next year.

Driving the news. Google is updating its Shopping promotion policies to support additional promotion types, including subscription discounts, common promo abbreviations, and — in Brazil — payment-method-based offers.

Why we care. Promotions are a key lever for visibility and conversion in Shopping results. These changes unlock more promotion formats that reflect how consumers actually buy today, especially subscriptions and cashback offers. Greater flexibility in promotion types and language reduces disapprovals and makes Shopping ads more competitive at key decision moments.

For retailers relying on subscriptions or local payment incentives, this update creates new ways to drive visibility and conversion on Google Shopping.

What’s changing. Google will now allow promotions tied to subscription fees, including free trials and percent- or amount-off discounts. Merchants can set these up by selecting “Subscribe and save” in Merchant Center or by using the subscribe_and_save redemption restriction in promotion feeds. Examples include a free first month on a premium subscription or a steep discount for the first few billing cycles.

Google is also loosening restrictions on language. Common promotional abbreviations like BOGO, B1G1, MRP and MSRP are now supported, making it easier for retailers to mirror real-world retail messaging without risking disapproval.

In Brazil only, Google will now support promotions that require a specific payment method, including cashback offers tied to digital wallets. Merchants must select “Forms of payment” in Merchant Center or use the forms_of_payment redemption restriction. Google says there are no immediate plans to expand this change to other markets.

Between the lines. These updates signal Google’s intent to better align Shopping promotions with modern retail models — especially subscriptions and localized payment behaviors — while reducing friction for merchants.

The bottom line. By expanding eligible promotion types, Google is giving advertisers more room to compete on value, not just price, when Shopping policies update in January 2026.

3 PPC myths you can’t afford to carry into 2026

PPC advice in 2025 leaned hard on AI and shiny new tools. 

Much of it sounded credible. Much of it cost advertisers money. 

Teams followed platform narratives instead of business constraints. Budgets grew. Efficiency did not.

As 2026 begins, carrying those beliefs forward guarantees more of the same. 

This article breaks down three PPC myths that looked smart in theory, spread quickly in 2025, and often drove poor decisions in practice. 

The goal is simple: reset priorities before repeating expensive mistakes.

Myth 1: Forget about manual targeting, AI does it better

We have seen this claim everywhere: 

AI outperforms humans at targeting, and manual structures belong to the past. 

Consolidate campaigns as much as possible. 

Let AI run the show.

There is truth in that – but only under specific conditions. 

AI performance depends entirely on inputs. No volume means no learning. No learning means no results. 

A more dangerous version of the same problem is poor signal quality. No business-level conversion signal means no meaningful optimization.

For ecommerce brands that feed purchase data back into Google Ads and consistently generate at least 50 conversions per bid strategy each month, trusting AI with targeting can make sense. 

In those cases, volume and signal quality are usually sufficient. Put simply, AI favors scale and clear outcomes.

That logic breaks down quickly for low-volume campaigns, especially those optimizing to leads as the primary conversion. 

Without enough high-quality conversions, AI cannot learn effectively. The result is not better performance, but automation without improvement.

How to fix this

Before handing targeting decisions entirely to AI, you should be able to answer “yes” to all three of the questions below:

  • Are campaigns optimized against a business-level KPI, such as CAC or a ROAS threshold?
  • Are enough of those conversions being sent back to the ad platforms?
  • Are those conversions reported quickly, with minimal latency?

If the answer to any of these is no, 2026 should be about reassessing PPC fundamentals.
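The three questions above can be treated as a pre-flight check. The function and thresholds below are illustrative, not a platform rule: the 50-conversions-per-month floor comes from the ecommerce example earlier in this article, and the seven-day latency cutoff is an assumption for the sketch.

```python
# Hypothetical pre-flight check before handing targeting to automated bidding.
# Thresholds are illustrative: ~50 conversions per bid strategy per month is
# the floor the ecommerce example above treats as workable; the latency cap
# is an assumption.

def ready_for_ai_targeting(
    monthly_conversions: int,
    optimizes_business_kpi: bool,   # campaign optimizes to CAC or a ROAS threshold
    conversions_fed_back: bool,     # offline/CRM conversions are sent to the platform
    reporting_latency_days: float,  # delay before conversions reach the platform
) -> bool:
    """Return True only if all the fundamentals are in place."""
    return (
        monthly_conversions >= 50
        and optimizes_business_kpi
        and conversions_fed_back
        and reporting_latency_days <= 7
    )

# A lead-gen account with 20 conversions/month fails on volume alone:
print(ready_for_ai_targeting(20, True, True, 1))    # False
print(ready_for_ai_targeting(120, True, True, 1))   # True
```

If any input fails, the fix belongs in tracking and conversion plumbing, not in the bid strategy settings.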

Do not be afraid to go old school when the situation calls for it. 

In 2025, I doubled a client’s margin by implementing a match-type mirroring structure and pausing broad match keywords.

It ran counter to prevailing best practices, but it worked. 

The decision was grounded in historical performance data, shown below:

Match type   Cost per lead   Customer acquisition cost   Search impression share
Exact        €35             €450                        24%
Phrase       €34             €1,485                      17%
Broad        €33             €2,116                      18%

This is a classic case of Google Ads optimizing to leads and delivering exactly what it was asked to do: drive the lowest possible cost per lead across all audiences. 

The algorithm is literal. It does not account for downstream outcomes, such as business-level KPIs.

By taking back control, you can direct spend toward top-performing audiences that are not yet saturated. In this case, that meant exact match keywords.
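The gap hiding behind those near-identical costs per lead can be made explicit with a few lines of arithmetic: the implied lead-to-customer rate is simply cost per lead divided by customer acquisition cost.

```python
# Implied lead-to-customer conversion rate from the match-type data above:
# customers per lead = cost per lead / customer acquisition cost.
match_types = {
    "Exact":  {"cpl": 35, "cac": 450},
    "Phrase": {"cpl": 34, "cac": 1485},
    "Broad":  {"cpl": 33, "cac": 2116},
}

for name, m in match_types.items():
    rate = m["cpl"] / m["cac"]  # fraction of leads that become customers
    print(f"{name}: {rate:.1%} of leads convert to customers")

# Exact ≈ 7.8%, Phrase ≈ 2.3%, Broad ≈ 1.6% — near-identical CPLs hide a
# roughly 5x difference in lead quality, visible only at the business KPI level.
```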

If you are not comfortable with older structures like match-type mirroring – or even SKAGs – learning advanced semantic techniques is a viable alternative. 

Those approaches can provide a more controlled starting point without relying entirely on automation.

Myth 2: Meta’s Andromeda means more ads, better results

This myth is particularly frustrating because it sounds logical and spreads quickly. 

The claim is simple: more creative means more learning, which leads to better auction performance. 

In practice, it far more reliably increases creative production costs than it improves results – and often benefits agencies more than advertisers.

Creative volume only helps when ad platforms receive enough high-quality conversion signals. 

Without those signals, more ads simply mean more assets to rotate. The AI has nothing meaningful to learn from.

Andromeda generated significant attention in 2025, and it gave marketers a new term to rally around. 

In reality, Andromeda is one component of Meta’s ad retrieval system:

  • “This stage [Andromeda] is tasked with selecting ads from tens of millions of ad candidates into a few thousand relevant ad candidates.”

That positioning coincided with Meta’s broader pivot from the metaverse narrative to AI. It worked. 

But it also led some teams to conclude that aggressive creative diversification was now required – more hooks, more formats, more variations, increasingly produced with generative AI.

Similar to Google Ads’ push around automated bidding, broad match, and responsive search ads, Andromeda has become a convenient justification for adopting Advantage+ targeting and Advantage+ creative. 

Those approaches can perform well in the right conditions. They are not universally reliable.

How to fix this

Creative diversification helps platforms match messages to people and contexts. That value is real. It is also not new. The same fundamentals still apply:

  • Creative testing requires a strategy. Testing without intent wastes resources.
  • Measurement must be planned in advance. Otherwise you’re setting yourself up for failure.
  • Business-level KPIs need to exist in sufficient volume to matter.

This myth breaks down most clearly when resources are limited – budget, skills, or time. In those cases, platforms often rotate ads with little signal-driven direction.

In those situations, CRO is a better use of your budget:

  • Review tracking. More tracked conversions improve performance.
  • Improve the customer journey to increase conversion rates and signal volume.
  • Map higher-margin products to support more efficient spend.
  • Test new channels or networks using budget saved from excessive creative production.

The pattern is consistent. Creative scale follows signal scale, not the other way around.

Myth 3: GA4 and attribution are flawed, but marketing mix modeling will provide clarity

Can you think of 10 marketers who believe GA4 is a good tool? Probably not. 

That alone speaks to how poorly Google handled the rollout. 

As a result, more clients now say the same thing: GA4 does not align with ad platform data, neither feels trustworthy, and a more “serious” solution is needed. 

More often than not, that path leads to higher costs and average results. 

Most brands simply do not have the spend, scale, or complexity required for MMM to produce meaningful insight. 

Instead of adding another layer of abstraction, they would be better served by learning to use the tools they already have.

For most brands, the setup looks familiar:

  • Media spend is concentrated across two or three channels at most – typically Google and Meta, with YouTube, LinkedIn, or TikTok as secondary options.
  • The business depends on a recurring but narrow customer base, which creates long-term fragility.
  • Outside that core audience, marketing is barely incremental, if incremental at all.

In those conditions, MMM does not add clarity. It adds abstraction. 

With such a limited channel mix, the focus should remain on fundamentals. 

The challenge is not modeling complexity, but identifying what is actually impactful. 

How to fix this

The priorities below deliver more value than MMM in these scenarios:

  • Differentiate clearly from competitors.
  • Increase margins – even basic budget planning can move the needle.
  • Build a solid data foundation, including tracking, CRO, and conversion pipelines.
  • Diversify channels or ad networks.
  • Lock creative execution to real customer pain points.
  • Fix marketing execution wherever it breaks.

MMM – like any advanced tool – becomes useful once complexity demands it. Not before. 

Used too early, it replaces accountability with abstraction, not insight.

The reality behind the myths

The common thread across these three myths is not AI, creative, or analytics. It is misuse. 

Platforms do exactly what they are asked to do. They optimize against the signals provided, within the constraints of budget and structure.

When business fundamentals break, AI cannot fix the problem. 

2026 is not about chasing the next abstraction. It is about business and ops focus, paired with disciplined execution, to scale profitably.

Why copywriting is the new superpower in 2026

For the last few years, copywriting has been quietly written off.

Not with outrage. Not with ceremony.

Just sidelined. Replaced. Automated.

Words – the core material of SEO, landing pages, ads, and persuasion – were demoted during the traffic rush and later the AI gold rush.

Blog posts were generated. Product descriptions were bulked out. Landing pages were templated.

Content teams shrank. Freelancers disappeared. And a convenient narrative emerged to justify it all:

“AI can write now, so writing doesn’t matter anymore.”

Then Google made it worse.

The helpful content update, followed by AI Overviews and conversational search, didn’t just hurt SEO. It hurt the broader web.

It gutted an entire economy built on informational arbitrage – niche blogs, affiliate sites, ad-funded publishers, and content-led SEO businesses that had learned how to monetize curiosity at scale.

Now, large language models are finishing the job. Informational queries are answered directly in search. The click is increasingly optional. Traffic is evaporating.

So yes, on the surface, it sounds mad to say this:

Copywriting is once again becoming the most important skill in digital marketing.

But only if you confuse copywriting with the thing that just died.

AI didn’t kill copywriting

What AI destroyed was not persuasion. 

It destroyed low-grade informational publishing – content that existed to intercept search demand, not to change decisions.

  • “How to” posts.
  • “Best tools for” roundups.
  • Explainers written for algorithms, not people.

LLMs are exceptionally good at this kind of work because it never required judgment. It required:

  • Synthesis. 
  • Summarization. 
  • Pattern matching. 
  • Compression.

That’s exactly what LLMs do best.

This content was designed to intercept purchase decisions by giving users something else to click before buying, often in the hope that a cookie would record that stop and credit the page with “influencing” the buyer journey.

That influence was rewarded either through analytics for the SEO team or through an affiliate’s bank account.

But persuasion – real persuasion – has never worked like that.

Persuasion requires:

  • A defined audience.
  • A clearly articulated problem.
  • A credible solution.
  • A deliberate attempt to influence choice.

Most SEO copy never attempted any of this. It aimed to rank, not to convert.

So when people say “AI killed copywriting,” what they really mean is this: AI exposed how little real copywriting was being done in the first place.

And that matters, because the environment we’re moving into makes persuasion more important, not less.

Dig deeper: SEO copywriting: 5 pillars for ranking and relevance

GEO isn’t about rankings

Traditional search engines forced users to translate their problems into keywords.

Someone didn’t search for “I’m an 18-year-old who’s just passed my test and needs insurance without being ripped off.” They typed [cheap car insurance] and hoped Google would serve the best results.

This created a monopoly in SEO. Those who could spend the most on links usually won once a semi-decent landing page was written.

It also created a sea of sameness, with most ranking websites saying exactly the same thing.

LLMs reverse this process. They:

  • Start with the problem.
  • Understand context, constraints, and intent. 
  • Decide which suppliers are most relevant.

That distinction is everything.

LLMs are not ranking pages. Instead, they seek and select the best solutions to solve users’ problems.

And selection depends on one thing above all else – positioning.

Not “position on Google,” but strategic positioning.

  • Who are you for?
  • What problem do you solve?
  • Why are you a better or different choice than the alternatives?

If an LLM cannot clearly answer those questions from your website and third-party information, you will not be recommended, no matter how many backlinks you have or how “authoritative” your content once looked.

This is why copywriting suddenly sits at the center of SEO’s future.

Dig deeper: The new SEO imperative: Building your brand

From SEO to GEO: Availability beats visibility

Search engine optimization was about visibility.

Generative engine optimization is about AI availability.

Availability means increasing the likelihood that your business will be surfaced in a buying situation.

That depends on whether your relevance is legible.

Most businesses still describe themselves in static, categorical terms:

  • “We’re an SEO agency in Manchester.”
  • “We’re solicitors in London.”
  • “We’re an insurance provider.”

These descriptions tell you what the business is. 

They do not tell you what problem it solves or for whom it solves that problem. They are catchall descriptors for a world where humans use search engines.

This is where most companies miss the opportunity in front of them.

The vast majority of “it’s just SEO” advice centers on entities and semantics. 

The tactics suggested for AI SEO are largely the same as traditional SEO: 

  • Create a topical map.
  • Publish topical content at scale.
  • Build links.

This is why many SEOs have defaulted to the “it’s just SEO” position.

If your lens is meaning, topics, context, and relationships, everything looks like SEO.

In contrast, the world in which copywriters and PRs operate looks very different.

Copywriters and PRs think in terms of problems, solutions, and sales.

All of this stems from brand positioning.

Positioning is not a fixed asset

A strategic position is a viable combination of:

  • Who you target.
  • What you offer.
  • How your product or service delivers it.

Change any one of those, and you have a new position.

Most firms treat their current position as fixed. 

They accept the rules of the category and pour their effort into incremental improvement, competing with the same rivals, for the same customers, in the same way.

LLMs quietly remove that constraint.

If you genuinely solve problems – and most established businesses do – there is no reason to limit yourself to a single inherited position simply because that’s how the category has historically been defined.

No position remains unique forever. Competitors copy attractive positions relentlessly. 

The only sustainable advantage is the ability to continually identify and colonize new ones.

This doesn’t mean becoming everything to everyone. Overextension dilutes brands.

It means being honest and explicit about the problems you already solve well.

This is something copywriters understand well. 

A good business or marketing strategist can help uncover new positions in the market, and a good copywriter can help articulate them on landing pages.

This is a key shift from semantic SEO to GEO.

You want LLMs to recommend your business to solve those problems.

Get the newsletter search marketers rely on.


From SEO’s ‘what we are’ to GEO’s ‘what problem we solve’

Take insurance as a simple example.

A large insurer may technically offer “car insurance.” But the problems faced by:

  • An 18-year-old new driver.
  • A parent insuring a second family car.
  • A courier using a vehicle for work.

are completely different.

Historically, these distinctions were collapsed into broad keywords because that’s how search worked. 

LLMs don’t behave like that. They start with the user problem to be solved.

If you are well placed to solve a specific use case, it makes strategic sense to articulate that explicitly, even if no one ever typed that exact phrase into Google.

A helpful way to think about this is as a padlock.

Your business can be unlocked by many different combinations. 

Each combination represents a different problem, for a different person, solved in a particular way.

If you advertise only one combination, you artificially restrict your AI availability.

Have you ever had a customer say, “We didn’t know you offered that?”

Now you have the chance to serve more people as individuals.

Essentially, this makes one business suitable for more problems.

You aren’t just a solicitor in Manchester.

You’re a solicitor who solves X by Y.

You’re a solicitor for X with a Y problem.

The list could be endless.

Why copywriting becomes infrastructure again

This is where copywriting returns to its original job.

Good copywriting has always been about creating a direct relationship with a prospect, framing the problem correctly, intensifying it, and making the case that you are the best place to solve it.

That logic hasn’t changed.

What has changed is that the audience has expanded.

You now have to persuade:

  • A human decision-maker.
  • An LLM acting as a recommender.

Both require the same thing: clarity.

You must be explicit about:

  • The problem you solve.
  • Who you solve it for.
  • How you solve it.
  • Why your solution works.

You must also support those claims with evidence.

This is not new thinking. It comes straight out of classic direct marketing.

Drayton Bird defined direct marketing as the creation and exploitation of a direct relationship between you and an individual prospect. 

Eugene Schwartz spent his career explaining that persuasion is not accidental – benefits must be clear, claims must be demonstrated, and relevance must be immediate.

The web environment made it possible to forget these fundamentals for a while.

AI brings them back.

Dig deeper: Why ‘it’s just SEO’ misses the mark in the era of AI SEO

Less traffic doesn’t mean less performance

Traffic is going to fall.

Informational traffic is being stripped out of the system.

Traffic only became a problem when it stopped being a measure and became a target. 

Once that happened, it ceased to be useful. Volume replaced outcomes. Movement replaced progress.

In an AI-mediated world, fewer clicks do not mean less opportunity.

It means less irrelevant traffic.

When GEO and positioning-led copy work, you see:

  • Traffic landing on revenue-generating pages.
  • Brand-page visits from pre-qualified prospects.
  • Fewer exploratory visits and more decisive ones.

No one can buy from you if they never reach your site. Traffic still matters, but only traffic with intent.

In this environment, traffic stops being a vanity metric and becomes meaningful again.

Every click has a purpose.

What measurement looks like now

The North Star is no longer sessions. It is commercial interaction.

The questions that matter are:

  • How many clicks did we get to revenue-driving pages this month versus last?
  • How many of those visits turned into real conversations?
  • Is branded demand increasing as our positioning becomes clearer?
  • Are lead quality and close rates improving, even as traffic falls?

Share of search still has relevance – particularly brand share – but it must be interpreted differently when the interface doesn’t always click through.

AI attribution is messy and imperfect. Anyone claiming otherwise is lying. But signals already exist:

  • Prospects saying, “ChatGPT recommended you.”
  • Sales calls referencing AI tools.
  • Brand searches rising without content expansion.
  • Direct traffic increasing alongside reduced informational content.

These are directional indicators. And they are enough.

The real shift SEO needs to make

For a decade, SEO rewarded people who were good at publishing.

The next decade will reward people who are good at positioning.

That means:

  • Fewer pages, but sharper ones.
  • Less information, more persuasion.
  • Fewer visitors, higher intent.

It means treating your website not as a library, but as a set of sales letters, each one earning its place by clearly solving a problem for a defined audience.

This is not the death of SEO.

SEO is growing up.

The reality nobody wants, but everyone needs

Copywriting didn’t die.

Those spending a fortune on Facebook ads embraced copywriting. Those selling SEO went down the route of traffic chasing.

The two worlds had different values.

  • The ad crowd embraced copy.
  • The SEO crowd disowned it.

One valued conversion. The other valued traffic.

We are entering a world with less traffic, fewer clicks, and an intelligent intermediary between you and the buyer.

That makes clarity a weapon. That makes good copy a weapon.

In 2026, the brands that win will not be the ones with the most content.

They will be the brands that return to the basics of good copy and PR.

The information era of SEO is over.

It’s time to get back to marketing.

Not all MMM tools are equal: Meridian, Robyn, Orbit, and Prophet explained

Marketing mix modeling (MMM) has shifted from an enterprise luxury to an essential measurement tool. 

Tech giants like Google, Meta, and Uber have released powerful open-source MMM frameworks that anyone can use for free. 

The challenge is understanding which tool actually solves your problem and which requires a PhD in statistics to implement.

Open-source MMM tools are often grouped together but solve different problems

The landscape can be confusing because these tools serve fundamentally different purposes despite being mentioned together. 

Google’s Meridian and Meta’s Robyn are complete, production-ready MMM frameworks that take your marketing data and deliver actionable budget recommendations. 

They include everything needed: 

  • Data transformations that model advertising decay.
  • Saturation curves that capture diminishing returns.
  • Visualization dashboards and budget optimizers that recommend spend allocation.
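The first two ingredients — advertising decay and diminishing returns — can be sketched in a few lines. The geometric adstock and Hill-style saturation curve below are generic textbook forms used for illustration, not the exact transformations Meridian or Robyn implement.

```python
def geometric_adstock(spend, decay=0.5):
    """Advertising decay: a fraction of each period's effect carries into the next."""
    carried, out = 0.0, []
    for x in spend:
        carried = x + decay * carried
        out.append(carried)
    return out

def hill_saturation(x, half_sat=100.0, shape=2.0):
    """Diminishing returns: response approaches 1.0 as spend grows (Hill curve)."""
    return x**shape / (x**shape + half_sat**shape)

# A single burst of spend keeps producing (decayed) effect in later weeks:
print(geometric_adstock([100, 0, 0]))  # [100.0, 50.0, 25.0]

# Doubling spend from the half-saturation point yields far less than 2x response:
print(hill_saturation(100.0), hill_saturation(200.0))  # 0.5 0.8
```

Both frameworks fit the parameters of curves like these (decay rates, half-saturation points) to your data, which is where the statistical machinery comes in.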

Uber’s Orbit and Facebook’s Prophet occupy different niches. 

Orbit is a time-series forecasting library that can be adapted for MMM, but it requires months of custom development to build MMM-specific features. 

Prophet is a forecasting component used within other frameworks, not a standalone MMM solution. 

Think of it like transportation: 

  • Meridian and Robyn are complete cars you can drive today. 
  • Orbit is a high-performance engine that requires you to build the transmission, body, and wheels. 
  • Prophet is the GPS system that goes inside the car.

Dig deeper: Marketing attribution models: The pros and cons

Robyn: The accessible powerhouse

Meta built Robyn specifically to democratize MMM through automation and accessibility. 

The framework uses machine learning to handle model building that traditionally required weeks of expert tuning. 

Upload your data, specify channels, and Robyn’s evolutionary algorithms explore thousands of configurations automatically.

What makes Robyn distinctive is its approach to model selection. 

Rather than claiming one “correct” model, it produces multiple high-quality solutions that show trade-offs between them. 

Some fit historical data better but recommend dramatic budget changes. 

Others have slightly lower accuracy but suggest more conservative shifts. 

Robyn presents this range, allowing decisions based on business context and risk tolerance.

Budget allocation with Robyn

The framework also excels at incorporating real-world experimental results. 

If you have run geo-holdout tests or lift studies, you can calibrate Robyn using those results. 

This grounds statistical analysis in experiments rather than pure correlation, improving accuracy and giving skeptical executives evidence to trust the outputs.

However, Robyn assumes marketing performance remains constant throughout the analysis period. 

In practice, algorithm updates, competitive changes, and optimization efforts mean channel effectiveness often varies over time.

Meridian: The statistical heavyweight

Meridian represents Google’s Bayesian causal inference approach to MMM. 

Unlike Robyn’s pragmatic optimization, Meridian models the mechanisms behind advertising effects, including decay, saturation, and confounding variables. 

This theoretical rigor allows Meridian to better answer, “What would happen if we changed budget allocation?” rather than simply, “What patterns existed in the past?”

Its standout capability is hierarchical, geo-level modeling. 

While most MMMs operate at a national level, Meridian can model more than 50 geographic locations simultaneously using hierarchical structures that share information across regions. 

Advertising may perform well in urban coastal markets but struggle in rural areas. 

National models average these differences away. 

Meridian’s geo-level approach identifies regional variation and delivers market-specific recommendations that national models can’t.

Meridian insights on channel contribution

Another distinguishing feature is its paid search methodology, which addresses a fundamental challenge: when users search for your brand, is that demand driven by advertising or independent of it? 

Meridian uses Google query volume data as a confounding variable to separate organic brand interest from paid search effects. 

If brand searches spike because of viral news or word-of-mouth, Meridian isolates that activity from the impact of search ads.

The technical complexity, however, is significant. 

Meridian requires deep knowledge of Bayesian statistics, comfort with Python, and access to GPU infrastructure. 

The documentation assumes a level of statistical literacy most marketing teams lack. 

Concepts such as MCMC sampling, convergence diagnostics, and posterior predictive checks typically require graduate-level training.

Dig deeper: How Bayesian testing lets Google measure incrementality with $5,000

Uber Orbit: The time-varying specialist

Orbit is not technically an MMM tool. 

It’s a time-series forecasting library from Uber with a notable feature: Bayesian time-varying coefficients, or BTVC, which address a fundamental MMM challenge.

Imagine presenting MMM results to your CEO, who asks, “This assumes Facebook ads had the same ROI in January and December? But iOS 14 hit in April, and we spent months recovering. How can one number represent the whole year?” 

That is the credibility-breaking moment practitioners fear because it exposes a simplifying assumption executives correctly recognize as unrealistic.

Traditional MMM frameworks assign one coefficient per channel for the entire analysis period, producing a single ROI or effectiveness estimate. 

  • For stable channels like TV, this can work. 
  • For dynamic digital channels, where teams constantly optimize, respond to algorithm changes, and face shifting competition, assuming static performance is clearly flawed. 

Orbit’s BTVC allows channel effectiveness to change week by week. 

Facebook ROI in January can differ from December, while the model keeps estimates stable unless the data shows clear evidence of real change.
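The CEO objection above is easy to reproduce with toy numbers. This sketch compares one static least-squares ROI for a full period against per-window estimates — a crude deterministic stand-in for what BTVC does probabilistically; the data is synthetic.

```python
def roi_estimate(spend, revenue):
    """Least-squares slope through the origin: revenue ≈ roi * spend."""
    return sum(s * r for s, r in zip(spend, revenue)) / sum(s * s for s in spend)

# Synthetic channel: true ROI is 3.0 early in the year, then 1.0 after a
# platform change (e.g., an iOS-14-style disruption).
spend   = [10, 20, 30, 10, 20, 30]
revenue = [30, 60, 90, 10, 20, 30]

print(roi_estimate(spend, revenue))          # 2.0 — one average that describes neither period
print(roi_estimate(spend[:3], revenue[:3]))  # 3.0 — early period
print(roi_estimate(spend[3:], revenue[3:]))  # 1.0 — late period
```

A static-coefficient MMM reports the 2.0; a time-varying model is built to recover the 3.0-to-1.0 trajectory while resisting noise.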

The reality, however, is that while time-varying coefficients are powerful, Orbit lacks the other components required for a complete MMM solution. 

Orbit makes sense only for data science teams building proprietary frameworks that require advanced capabilities and have the resources for significant custom development. 

For most organizations, the cost-benefit tradeoff does not justify that investment. 

Teams are better served using Robyn or Meridian while acknowledging their limitations, or working with commercial MMM vendors that have already built time-varying capabilities into production-ready systems.

Facebook Prophet: The misunderstood component

Prophet is Meta’s time-series forecasting tool. 

It’s highly effective at its intended purpose but is often misrepresented as an MMM solution, which it is not.

Prophet decomposes time-series data into trend, seasonality, and holiday effects. 

It answers questions such as:

  • “What will our revenue be next quarter?” 
  • “How do Black Friday spikes affect baseline performance?” 

This is forecasting, or predicting future values based on historical patterns, which is fundamentally different from attribution. 

Prophet can’t identify which marketing channels drove results or provide guidance on budget optimization. 

It detects patterns but has no concept of marketing cause and effect.

Prophet’s primary role is as a preprocessing component within larger systems. 

Robyn uses Prophet to remove seasonal patterns and holiday effects before applying regression to isolate media impact. 

Revenue often rises in December because of holiday shopping rather than advertising. 

Prophet identifies and removes that seasonal effect, making it easier for regression models to detect true media impact.
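Prophet’s decomposition is far more sophisticated, but the core preprocessing idea — estimate the seasonal component and remove it before attributing anything to media — can be shown with a naive additive monthly index. This is an illustration of the concept, not Prophet’s actual algorithm.

```python
def deseasonalize(series, period=12):
    """Remove a simple additive seasonal component (naive stand-in for Prophet)."""
    seasonal = [0.0] * period
    counts = [0] * period
    for i, v in enumerate(series):
        seasonal[i % period] += v
        counts[i % period] += 1
    # average value at each position in the seasonal cycle
    seasonal = [s / c for s, c in zip(seasonal, counts)]
    mean = sum(series) / len(series)
    # subtract each position's deviation from the overall mean
    return [v - (seasonal[i % period] - mean) for i, v in enumerate(series)]

# Two flat years with a December spike: after removal the spike is gone, so a
# downstream regression won't credit holiday demand to whatever ads ran then.
revenue = ([100.0] * 11 + [200.0]) * 2
adjusted = deseasonalize(revenue)
print(max(adjusted) - min(adjusted))  # ~0.0 — the seasonal spike is absorbed
```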

This preprocessing is valuable, but Prophet addresses only one part of the overall attribution problem. 

Marketing teams should use Prophet for standalone KPI forecasting or as a component within custom MMM frameworks, not as a complete attribution or budget optimization solution.

Dig deeper: MTA vs. MMM: Which marketing attribution model is right for you?

Making the right choice for your team

Choosing between these tools requires an honest assessment of your organization’s capabilities, resources, and needs. 

  • Do you have data scientists comfortable with Bayesian statistics and complex Python? 
  • Or marketing analysts whose statistical training ended with basic regression? 

The answer determines which tools are viable options and which are aspirational.

For about 80% of organizations, Meta’s Robyn is the right choice. 

This includes:

  • Teams without deep data science resources that still need rigorous MMM insights.
  • Digital-heavy advertisers seeking attribution without lengthy implementations. 
  • Organizations that require insights in weeks rather than quarters. 

The learning curve is manageable, implementation takes weeks rather than months, and outputs are presentation-ready. 

A large, active user community also shares solutions when challenges arise.

Google’s Meridian suits:

  • Small and midsize businesses and enterprise organizations with dedicated data science teams comfortable working in Bayesian frameworks. 
  • Multi-regional operations where geo-level insights would meaningfully influence budget decisions.
  • Complex paid search programs requiring more precise attribution.
  • Stakeholders who prioritize causal inference over pragmatic correlations and can justify Meridian’s added complexity.

Uber Orbit is appropriate only for data science teams building proprietary frameworks with requirements that Robyn and Meridian can’t meet. 

The opportunity cost of spending months on custom infrastructure rather than using existing tools is substantial unless proprietary measurement itself provides a competitive advantage. 

Facebook Prophet should be used for KPI forecasting or as a preprocessing component within larger systems, never as a complete attribution solution.

Matching MMM tools to real-world team capabilities

The most advanced tool delivers little value if it can’t be implemented effectively. 

A well-executed Robyn implementation running consistently provides more value than an abandoned Meridian project that never progressed beyond a pilot. 

Tools should be chosen based on what teams can realistically use and maintain, not on the most impressive feature set.

For most marketing teams, Robyn and Meridian represent pragmatic choices that balance performance with accessibility. 

Automation handles much of the statistical work, allowing analysts to focus on insights rather than debugging code. 

Strong community support and documentation reduce friction, and teams can move from zero to actionable insights in weeks instead of months, which matters when executives want answers quickly.

For enterprises with substantial technical resources and multi-regional operations, Google Meridian can deliver returns through more reliable causal estimates and geo-level granularity that materially improve budget allocation. 

The investment in infrastructure, expertise, and implementation time is significant, but at a sufficient scale, better decision-making can justify the cost.

Uber Orbit offers advanced capabilities for organizations that truly need time-varying performance measurement and have the resources to build complete MMM systems around it. 

For most teams, commercial vendors that have already incorporated time-varying capabilities into production-ready platforms are more cost-effective than extended custom development.

These open-source frameworks have made marketing measurement accessible beyond Fortune 500 companies. 

The priority is choosing the tool that fits current capabilities, implementing it well to earn stakeholder trust, and using insights to make better decisions. 

Competitive advantage comes from allocating budgets more effectively and faster than competitors, not from maintaining a technically impressive system that is too complex to sustain.

Dig deeper: How to avoid marketing mix modeling mistakes that derail results
