Google adds AI-qualified call leads to improve measurement

Google is upgrading Google Ads call campaign measurement with a new AI-qualified call leads feature, designed to optimize for lead quality — not just call length.

What’s new. AI-qualified call leads use machine learning to analyze calls and determine whether they represent meaningful business opportunities. The system then feeds that higher-quality data into bidding and reporting.

Zoom in. Advertisers will get AI-generated call summaries and tags, giving more transparency into what happened during each interaction. At the same time, smart bidding can prioritize higher-value leads based on these signals rather than simple time thresholds.

Why we care. Call campaigns have long relied on blunt metrics like duration to signal value. This update shifts optimization toward actual lead quality, filtering out low-value interactions like spam or robocalls. This should result in better ROI, less wasted spend, and clearer insight into which calls actually matter.

How it works. Call recording is turned on by default for most advertisers so AI can assess call quality, though industries like healthcare and financial services are excluded. Advertisers can still adjust call length thresholds or disable recording in account settings.

The fine print. The feature is currently limited to calls in the U.S. and Canada.

Bottom line. Google is turning call tracking into call qualification, helping advertisers focus on leads that are more likely to convert.


The funnel flip: Why AI forces a bottom-up acquisition strategy

The industry has been building top-down for 30 years. Start with awareness, get in front of as many people as possible, and work them down through the acquisition funnel.

The logic made sense in the broadcast era, and it wasn’t entirely wrong in the search era.

In AI-driven environments, it’s simply wrong.

Search engines, assistive engines, and agents build their ability to recommend your brand from the bottom up. They need to understand who you are before they can evaluate whether you’re credible. They need to evaluate your credibility before they recommend you to anyone.

If you build from the top down, you’re wasting budget on awareness while the engines and agents have no foundation to attach it to.

Agential systems make the stakes absolute. An agent acting on behalf of a user evaluates your brand, your offers, and your credibility, then commits.

If the machine doesn’t understand who you are, what you offer, and whom you serve, the agent can’t act in your favor. If it understands you but doesn’t find you the most credible option, it selects your competitor.

This is the ultimate zero-sum moment in AI: the recommendation you never saw happen, made to a prospect you never knew was considering you.

The acquisition funnel runs simultaneously in opposite directions

The user experience of the acquisition funnel hasn’t changed. Someone hears about you, considers you, and decides whether to commit. That journey runs wide to narrow, top to bottom: awareness first, evaluation second, and decision at the bottom.

This is the familiar funnel. Elias St. Elmo Lewis formalized it in 1898. Every marketing model since has been built around it, and for 128 years, nothing fundamental has changed. The channels evolved, but the direction was always the same: reach first, relationship second, commitment third. 

In 2002, my friend Philippe Lanceleur described the web perfectly for search: building a website and hoping people find it is like opening a shop in the middle of a field. Nobody passes by accident. You go where your audience hangs out, engage with them, and invite them to cross the field and visit your shop. Awareness was still the prerequisite, and your marketing had no chance of working without it.

The shift to entities changed the prerequisite. When Google introduced the Knowledge Graph in 2012, the machine began forming opinions about brands independently of what users were searching. The machine was drawing its own map and building roads for you. 

Those roads are built by the machines from the shop outwards, which means brand understanding and reputation, not awareness, become the prerequisite. All my work since 2012 has been focused on brand understanding and reputation for exactly this reason.

AI makes the acquisition funnel flip more powerful still. Assistive engines and agents now actively direct users toward destinations they’ve assessed as credible. Lanceleur’s shop in the field is no longer a handicap if the machines know it’s there and believe it’s the best destination for their users: they provide the roads.

This is the first genuine structural break in how brands must think about marketing since 1898. The display funnel is unchanged: the user still travels from awareness to decision. What makes you a candidate at the top of that funnel in AI engines and agents is built by training the machine to bring users to you.


How top-down and bottom-up coexist

The big takeaway is that the build funnel runs in the opposite direction. 

  • The machine starts at the bottom. Does it know who you are? 
  • It works up through credibility. Does it trust what you do? 
  • Only then does it reach advocacy. Will it recommend you proactively? 

The user's moment of commitment stays the same: know, like, and trust the brand. But the only way a user arrives at that moment in AI assistive engines is if the machine first knows, likes, and trusts your brand.

The two directions genuinely coexist. You can build top-down in channels you control: paid media, broadcast, and direct outreach. You can still buy awareness and pull people to decision. In the engines themselves, the user still has the top-down experience.

The difference is that within the engines for organic, you have to build from the bottom of the funnel (BOFU) up because that’s how the machines build the roads to your brand.

Every algorithm, assistive engine, and agent operates on entity and brand signals, not on how loudly you push. Reach on social media has always been influenced by brand recognition, engagement, and topic, and here too, brand understanding and trust are gaining increasing weight.

With AI, roads to your shop in the field are increasingly machine-built, and machine-built roads are built from brand understanding outwards to awareness.

The original 1898 funnel still describes what users experience. In AI assistive engines and agents, it no longer describes the strategy that gets you in front of them: for that, you need to flip the funnel.

In short, you can’t build your funnel in AI engines and agents top-down in a world where those machines are the mediators between you and your audience. The machine won’t recommend brands it doesn’t understand, and it will only advocate for brands it trusts. This is a mechanical fact.

AI infrastructure works this way, so your strategy must too.

  • Understandability creates the entity node.
  • Credibility gives it preferential consideration.
  • Deliverability gives it visibility.

Foundation. Proof. Reach. Put like that, it really does seem obvious, unavoidable, and comfortable.

How the funnel becomes a guided sequence in AI

The user journey on Google used to be a series of individually composed SERPs that users navigated themselves. Search engines composed those pages cleverly (Google and Bing have run a whole-page algorithm since universal search launched in 2007, Darwinistically pulling elements from across verticals and scoring the composition as the “product”), but navigating the funnel was the user’s job.

As an SEO, you optimized for a position in the composition, and the user carried themselves from awareness to consideration to decision by browsing, comparing, and choosing.

Over the last few years, the algorithmic trinity has fundamentally changed that dynamic. The LLM reasons about what the user is asking, decides whether to answer directly, ground, search, or fact-check via the knowledge graph, and runs fan-out queries to retrieve across multiple angles of the question.

Those fan-out queries (which I’ve also called cascading queries) help the assistive engine answer the question more completely and more accurately than a single query would. But the breadth of what it gathers also lets it do one more thing — and this is the mechanic that actually matters in the funnel that leads to the perfect click: it can anticipate what the user is likely to do next, and set the current answer up to flow toward it.

The explicit representation of the LLM’s prediction of “next step” is the follow-up questions you see in the results. But there’s an additional implicit side to this architecture you might have missed: the way it composes the current answer shapes what the user is likely to do next. The AI is, to a very large extent, defining the acquisition journey. It seems to me the user is less in control than they feel.
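As a toy illustration of that mechanic, here is a minimal sketch of fan-out plus next-step prediction. Everything here is hypothetical: real assistive engines don't publish this logic, and the angle list, function names, and next-step heuristic are invented for illustration.

```python
# Toy sketch only: real assistive engines don't publish their fan-out
# logic. The angles, names, and next-step heuristic are all hypothetical.

def fan_out(user_query: str) -> list[str]:
    """Expand one question into several angles, standing in for the
    LLM-generated fan-out (cascading) sub-queries described above."""
    angles = ["definition", "comparison", "pricing", "reviews"]
    return [f"{user_query} {angle}" for angle in angles]

def compose_answer(user_query: str, retrieve) -> dict:
    """Retrieve across every sub-query, then use the breadth of what
    was gathered to predict a plausible next step for the user."""
    sub_queries = fan_out(user_query)
    passages = [retrieve(q) for q in sub_queries]
    # The implicit side: the composed answer is shaped to flow toward
    # the predicted follow-up (here, naively, the comparison angle).
    next_step = f"follow-up on: {sub_queries[1]}"
    return {"passages": passages, "suggested_next": next_step}
```

The brand's opportunity, in this framing, is to publish the content that `retrieve` lands on for as many of those angles as possible.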

That means your job appears to be to fight for a slot in a sequence the machine has already built.

That’s fair. But I’d argue that the brand’s job is also to train the machine’s expectations about what a logical next step looks like, so that when the LLM composes, your content is the natural thing it reaches for. 

You supply the ideas, you structure the follow-ups, you publish the logical bridges (“if you’re thinking about X, the next thing to consider is Y, and here’s the evidence”) in enough places, and with enough corroboration, that the machine treats those bridges as settled, not speculative. The machine then guides users toward you because your content is what its prediction landed on, because your framing is what made that prediction logical in the first place.

Now, is the AI thinking one step ahead? Or playing chess and planning several moves in advance? It depends. How far ahead the machine can usefully look depends on the territory. 

On well-traveled ground, the paths are well-worn, and the branches are narrow, so the LLM can stage two, three, or more moves ahead. Think of this as established neurological synapses: your influence on the paths is limited here. 

In unusual territory, the branches collapse the prediction horizon back to one, perhaps two steps. That’s an opportunity for a brand to create the synapses with your brand firmly anchored. Here’s yet another good reason to niche down, solve very specific problems, and have a very clear funnel pathway.

When defining the content I work on and terms I track, I use the concept of funnel pathway for exactly that reason — a top-of-funnel (TOFU) query that naturally leads to my brand at BOFU with a series of steps that are logical and relatively predictable.

So, track a set of terms that have a natural pathway to your brand at the zero-sum moment at the bottom of the funnel. Some start at TOFU and move through MOFU to BOFU. Others begin at MOFU with a clear path to BOFU, and some start (and end) at BOFU.

I’ll probably get pushback here. The number of possible paths is effectively infinite because conversations with AI can go anywhere. True. But this is a better system than chasing search volume or tracking the terms the boss likes: it forces you to think, focus, and prioritize — and it works.

Get your foot in the door, and keep it there

Strategically, you have to get a foot in the door as early as possible in the conversation, and ensure that you keep your foot there as the conversation evolves and the AI guides the user down the funnel.

The stronger your foot in the door, the more you shape the conversation the machine builds, the more that conversation thins the field of competitors the machine considers for the next step, and, by virtue of elimination, the more likely you are to get the perfect click at the zero-sum moment at the bottom of the funnel.

I’m advocating for educating the algorithms (remember, Google is a child?). The better you guide, the more the machine’s best-brand prediction converges on you step after step, because the path it’s following is the path you built into its brain. 

Get in high, and the compounding works in your favor. Get in late, and your competitors’ bridges become the machine’s bridges, and every subsequent step is a fight to re-enter a sequence where your competitor is Top of Algorithmic Mind.

Display is where your acquisition funnel lives in the AI engine pipeline

The AI engine pipeline runs 10 gates from discovered to won. 

  • Everything up to annotation (Gate 5) is infrastructure: can the machine access, store, and classify your content? 
  • From recruitment (Gate 6) onward, the engine compares you to every alternative. 
  • The understandability, credibility, and deliverability (UCD) layer is where the user sees the machine evaluation at display (Gate 8). Understandability is the key to won (Gate 9).

The three dimensions of brand visibility at display

Display is the moment when the machine can make or break your brand: being the most visible option in the market at every touchpoint where your ideal customer profile (ICP) is in conversation with the engine or agent.

It’s obvious that this is the key moment when you need the engine or agent to be absolutely convinced that you’re the best solution to the specific user’s problem at the exact moment they convert (see the 95/5 rule here).

Understandability (U) is the trusted partner/decision layer, without which nothing else will work long term. Does the machine know who you are, what you do, and who you do it for? 

U is BOFU, which is both the moment of decision and (logically) the deepest trust layer for both the AI user and the human user. When someone searches your brand name or asks an AI assistant directly about you, the machine draws on its understanding of your entity. 

If that understanding is weak, contradictory, or absent, the machine either hedges or stays silent. Typical failure modes show up in AI responses as “claims to be,” “appears to offer,” or “no idea who you are talking about.” The doubt tax — where prospects ready to buy get a hedge instead of a confirmation — is a U failure.

Credibility (C) is the recommender/consideration layer. Does the AI believe you’re genuinely better than your competitors at what you do? 

C is MOFU, the comparison and evaluation layer. When someone asks an AI who is the best in market, the machine draws on its confidence in your N-E-E-A-T-T credibility and will exclude you if you haven’t built a rock-solid argument to be cited. 

If AI confidence in you is weaker than its confidence in the credibility of your competitor, you lose the comparison. The ghost tax — absent from competitive evaluation and ignored in shortlists — is a C failure.

Deliverability (D) is the advocate/awareness layer. Does the AI surface your brand to people who aren’t searching for you, recommend you unprompted when they research the market, and treat you as the reference option in your category? 

D is TOFU, the reach layer. When someone asks an AI about a problem you solve, without knowing your brand exists, the machine draws on its confidence that you are the right answer to put in front of them.

Advocacy only happens when the machine has first understood who you are (U), and judged you better than the alternatives (C). The invisibility tax — never mentioned to prospects researching the market — is a D failure.


The business case for UCD: The three taxes

My “untrained salesforce” framing makes this clear for a non-technical audience. Google, ChatGPT, Perplexity, Claude, Copilot, Siri, and Alexa are seven employees working 24/7, and they’re either selling for your brand or for your competitors. AAO can be defined as training AI assistive engines and agents to sell for you at the top, middle, and bottom of the funnel.

Here’s the part most of the industry still hasn’t internalized: machines aren’t an alternative audience. They’re a mirror of how people process information, with the noise filtered out. 

Optimizing for machines is optimizing for humans with less guesswork. A brand SERP is Google’s opinion of the world’s opinion of you, and Google’s opinion is built from the same signals that form human opinion, only weighted more consistently, and corroborated across millions of data points. 

When you optimize to improve what Google believes about your brand, you’re not gaming an algorithm. You’re correcting and reinforcing what the world already believes about you, expressed with the precision humans rarely articulate. The algorithm is the clearest feedback loop marketing has ever had. 

Each tax is a specific failure mode of that untrained salesforce. 

  • The doubt tax is what you pay when they can’t confirm who you are to a prospect ready to buy. 
  • The ghost tax is what you pay when they can’t argue your case against competitors in a shortlist. 
  • The invisibility tax is what you pay when they don’t mention you at all to the prospect researching the market. 

The fixes run in one order: U before C, C before D, because the taxes are mechanically ordered, and the remediation has to match.

Content was king in the keyword era, context took the throne around 2016, and confidence is king now. The AI engines don’t just store and retrieve. They stake their own credibility on the brands they recommend, and that staking runs on accumulated confidence at every layer. 

Build U to retire the doubt tax. Build C to retire the ghost tax. Build D to retire the invisibility tax. Every tax retired is a recommendation earned, and every recommendation earned is revenue the machine now generates on your behalf instead of your competitor’s. 

Strategy: Your brand SERP and AI résumé tell you where to begin

Brand SERP is what Google shows when someone searches your brand name. The AI résumé is the same object in conversational format. The agent dossier is the machine’s silent judgment during evaluation before any recommendation reaches a person. 

All three are dual-function objects. They’re the machine’s output to every audience that asks about you, and your diagnostic instrument for reading the machine’s current confidence. That dual function is why they’re both the product and the audit.

Read all three as the machine’s understanding of you, its assessment of your credibility, and its confidence in you as a solution provider. The diagnostic triage is short.

If the machine gets things wrong, hedges facts, or the results don’t reflect your brand narrative, that’s an understandability problem. The entity record is inconsistent, weak, or contradictory, and the work is on your entity home: clean structured data, consistent descriptions, clear schema, and entity resolution that points to a single authoritative source.

If the results are unconvincing, unflattering, or don’t do you full justice, that’s a credibility problem. Your N-E-E-A-T-T is weak, and the work is offsite: third-party mentions, review platforms, earned media, and co-citations from sources the machine trusts.

If the results don’t reflect your digital marketing strategy, that’s a deliverability issue. The work is in content, both on your channels and on third-party properties, the type of material the machine treats as proof rather than a claim.

In every case, the diagnosis comes before the tactics. U before C, C before D, and the sequence isn’t optional.
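The triage above can be summarized as a small decision function. This is a sketch of the ordering, not a formal tool; the symptom flags are illustrative labels for the failure modes described in the preceding paragraphs.

```python
# Sketch of the U-before-C-before-D triage described above.
# The symptom flags are illustrative, not a formal taxonomy.

def ucd_triage(observed: dict) -> str:
    """Map brand-SERP / AI-resume symptoms to the layer to fix first."""
    if observed.get("facts_wrong") or observed.get("hedged"):
        # Doubt tax: the entity record is weak or contradictory.
        return "U: fix the entity home (structured data, consistent descriptions)"
    if observed.get("unconvincing"):
        # Ghost tax: credibility signals are weaker than competitors'.
        return "C: build offsite credibility (reviews, earned media, co-citations)"
    if observed.get("off_strategy"):
        # Invisibility tax: content doesn't prove the strategy.
        return "D: publish proof-grade content on owned and third-party channels"
    return "healthy"
```

Because the checks run in order, a U problem always wins the diagnosis even when C or D symptoms are present, which is the point: the sequence isn't optional.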

Acquisition is one act in a 15-stage play

The acquisition funnel feels dominant because it’s where conversion happens. The funnel sits on the display gate, where UCD determines whether the machine recommends you. 

Everything else, the work that lets display happen at all and the work that compounds afterward, runs across the nine gates before it and the five gates after it.

Those five gates after Won are where most of the money is made and most of the confidence is generated. Onboarded, performed, integrated, devoted, and codified — every client outcome feeds signals back into gate zero for the next prospect who has never heard of you. 

The flywheel is the mechanism. Get it right, and every satisfied client strengthens the machine’s confidence in your brand for the next one. Get it wrong, and every neutral outcome decays it.

That’s more than just an acquisition strategy; it’s a business strategy, with the machine as a constant participant at every stage.

The final articles in this series will show you what happens after won: how every satisfied client either trains the machine to recommend you more confidently next time, or quietly erodes the confidence you’ve already built. 

The funnel isn’t where the money is made, but it is the critical moment, fed by the flywheel, where the path to the money begins.

This is the 10th piece in my AI authority series. 


Google rolls out new AI safety features in Ads Advisor

Google is adding three new “agentic” safety features to Ads Advisor, its AI assistant inside Google Ads, aimed at reducing manual work while tightening security and compliance.

As campaigns grow more complex, advertisers are spending more time fixing policy issues, managing access, and handling certifications. Google’s pitch: let AI handle the heavy lifting so marketers can focus on performance.

What’s new. The update introduces proactive troubleshooting, always-on security monitoring, and instant certifications — all powered by AI and Gemini capabilities.

Zoom in:

  • Ads Advisor can now flag and help resolve policy violations automatically, even before advertisers notice them.
  • It monitors accounts 24/7, surfacing risks like suspicious domains or inactive users through a new security dashboard.
  • Certifications that once took weeks can now be granted instantly or submitted with a single click.

How it works. Instead of waiting for user prompts, Ads Advisor scans accounts and websites proactively, suggests fixes, and confirms resolution before appeals are submitted. On the security side, it continuously evaluates account health and recommends improvements, while new passkey support reduces reliance on passwords.

Why we care. Tasks that used to take hours — fixing policy issues, monitoring account security, and handling certifications — can now be done proactively by Ads Advisor, reducing delays and risks. The result is faster campaign execution, fewer disruptions, and less manual overhead.

What to watch. These features are rolling out in the coming months to English-language accounts, with more languages expected later.

Bottom line. Google is turning Ads Advisor into a hands-on operator, not just a helper — aiming to make ad accounts safer, faster, and far less manual to manage.


How to build a YouTube analytics report in Data Studio

Creating video content takes time and budget, so understanding how it performs is critical.

YouTube’s native analytics in YouTube Studio are robust, but they’re locked behind account access. That can make reporting difficult — especially when you need to share data or don’t have direct login access.

Moving that data into Google Data Studio (now Looker Studio) makes it easier to analyze and distribute.

With Data Studio, you can:

  • Pull YouTube data into reports you already use.
  • Schedule automated updates for stakeholders.
  • Customize dashboards around the metrics that matter.
  • Track performance without relying on backend access.

Here’s how to pull your YouTube analytics into a Data Studio report.

Using a template or starting from scratch

You have two options when setting up a YouTube report in Data Studio.

  • If you want something quick and easy, you can use Google’s YouTube Analytics template from their template gallery. It’s a great place to start because it provides a clean, well-designed report with foundational metrics and puts you in a good position to understand which metrics are available. But know that this template has problems you’ll need to fix, which I’ll discuss below.
  • The other option is to create a report from scratch, which is a great choice if you already have a report you want to add a new YouTube Analytics page to, or if you just want to learn how to use Data Studio.

The information below will help you do both.


If you’re not the YouTube account owner

If you’re setting up this report for a client, or if you’re not the owner of the YouTube account, you’re going to run into an issue where the YouTube account doesn’t show up as a usable source in Data Studio. Here’s how to get around it:

  • Go to YouTube Studio settings > Permissions, and give Manager permissions to the account email that you’re using in Data Studio.
  • Get the Channel ID from the channel’s YouTube URL.
  • Add a YouTube connector to Data Studio, go to Advanced, and paste the Channel ID.

You should now have access to that YouTube account.

Using the Data Studio YouTube Analytics template

From the Data Studio home page, click Templates > Template Gallery. Under the category dropdown, click on YouTube Analytics.

Clicking this will create a brand new Data Studio report that’s mostly ready to use. It loads up with sample data from the Google Analytics YouTube channel. Click the button at the top that says “Use my own data.”

The first time you set up a report, you need to authorize access to your data. Click the Authorize button.

Choose the Google Account connected to your YouTube channel, and then you’ll see any connected channels in the dropdown at the top of the page.

You’ll notice that the data doesn’t change when you select a channel here. That’s because this dropdown is connected only to the other dropdowns next to it, not to any of the charts on the page.

To update everything else on the page, click the Edit and Share button.

If this is the first time using Data Studio, you’ll also need to do some basic account setup.

Then click the Edit button at the top of the page.

Now you’ll need to add your YouTube channel as a source. Click the Add data button and then search for the YouTube Analytics connector.

If the Google Account is the owner of the YouTube account you connected to this Data Studio report, it’ll show up in the Channel section as an option. 

Your main YouTube channel will be in the My Channel tab, and other channels are in the All Channels tab, as shown below. 

If you don’t own the channel, see the section above to connect other channels that you don’t own, but have access to.

Now you’ll be able to change the data source on any charts on the page. Simply click a chart, and you’ll see the data sources available to you in the right Properties panel.

You can change the source of all the charts on the page at once: select a chart, right-click it, go to the Select menu, choose “Charts with this data source on page,” and then choose your data source in the Properties sidebar.

You’re mostly done, but as mentioned earlier, there are some errors in this report that you’ll need to fix. The charts at the bottom of the report are using the wrong metrics.

I don’t know why Google hasn’t updated this template. It’s been like this for a long time, so I don’t know if they ever will. In the meantime, you’ll need to update the following.

Change:

  • Likes from “Average Watch Time” to “Video Likes Added”
  • Subscriptions from “Video Link” to “User Subscriptions Added”
  • Dislikes from “Average View Percentage” to “Video Dislikes Added”

The charts in the Comments section are correct, so you don’t need to change anything there.

Click on each of the charts highlighted above, one by one, and change the metric in the Properties sidebar.

And now the report is finished and ready to use. Click the View button at the top of the page to view the report in a view-only format.

Copying a template into an existing report

Data Studio doesn’t support adding or importing templates into an existing report, but you can copy a page from one report to another. Follow the steps above to create a report using the YouTube Analytics Channel template, then copy it into another report.

To do that, go into Edit mode, select all (Ctrl+A or Cmd+A), and copy all (Ctrl+C or Cmd+C). Then, in your existing report, create a new page, and paste everything you’ve copied into the page (Ctrl+V or Cmd+V), or right-click on the page and select Paste.

All of the charts will likely come in broken, but you can easily update them using the tip mentioned earlier: right-click a chart, choose “Charts with this data source on page,” and then choose the correct source in the Properties sidebar.

Customizing your report

The YouTube template in Data Studio has most of what you need, but you can add much more.

There are some metrics you simply can’t get in Data Studio that you’d find in the official YouTube Analytics backend, such as revenue, how viewers find your videos, watch behavior, popular viewing times, device types, genders, and retention. Those are big limitations, but there’s still plenty to work with.

To add more charts to your report, you’ll need to create more space at the bottom. In the menu, click on Page > Current page settings.

In the Style tab of the Current Page Settings sidebar, set the canvas size to something like 3,000 pixels. This will give you plenty of space to work with, and you can always shorten or lengthen it as needed later.

Now you can add many types of charts with a wide range of dimensions and metrics.

You can add multiple metrics to graphs to get the data you need for better analysis. You can also rename headers to clean them up, and make them look less cluttered.

You can pull in quite a lot of data. Here’s what’s currently offered:

Using Data Studio for ongoing YouTube reporting

Setting up a Data Studio report for YouTube is a great way to track your top-level metrics, and can be especially useful for monthly client reporting. It takes siloed, hard-to-share data from YouTube, and puts it into a clean, automated, centralized tool for easier decision-making.

You can also set up scheduling so that Data Studio sends automated PDF exports to your email.

That’s it. As you can see, it’s fairly simple to set up, but you can also add more advanced customizations to track many other KPIs.


Why IBM says every brand now needs a GEO playbook

Search has changed, and brands need to catch up fast, according to IBM’s Alexis Zamkow (global lead of Marketing Transformation solutions) and Sandhya Ranganathan Iyer (associate partner – AI), speaking yesterday at Adobe Summit.

AI tools don’t just help people search. They answer questions, compare products, and recommend brands. In many cases, users never even visit a website.

That means if your brand isn’t part of the AI-generated answer, you may not be part of the decision.

To keep up, brands need more than new tactics. They need a system — a GEO (Generative Engine Optimization) playbook. Here’s a recap of their presentation, Adapt or Disappear: How Brands Win with AI-Powered Search.

The AI shift: You’re marketing to machines

AI agents now sit between you and your customer.

They take a complex market and simplify it. They decide what information to show. And they often speak on your behalf.

  • “These machines are disintermediating the brand experience,” Zamkow said.

At the same time:

  • Consumers are using AI for research and decisions
  • Businesses are adopting it even faster
  • Many searches now end without a click

Zamkow said an estimated 75% of search visibility could shift to AI agents in the next two years.

That’s why visibility today depends on being part of the answer itself.

The GEO playbook: 12 components every brand needs

To respond, the speakers outlined a 12-part playbook. It spans content, technology, and operations.

1. Strategic content foundations

Your content must tell one clear story — everywhere.

That includes your website, PR, social, and third-party mentions. If each channel says something different, AI won’t trust your brand.

For example, if your site highlights premium quality, but reviews focus on low price, that mixed message weakens your authority.

Consistency builds trust for people and machines.

2. Retrieval-grade passage standards

AI doesn’t rank webpages. It extracts answers. So your content must be easy to extract.

Good content looks like:

  • Clear questions and answers.
  • Short, focused sections.
  • Direct language.

For example, instead of a long paragraph, write:

  • Question: What are the best running shoes for beginners?
  • Answer: A short, clear response

This makes it easier for AI to reuse your content in answers.

3. Technical foundations

Even great content won’t work if AI can’t read it.

Machines rely on:

  • Clean HTML (not just visual design)
  • Structured data (schema, metadata)
  • Pages that load content directly

One example from the session: a beautiful website appeared to AI as “a headline and a blank page.”

If your content isn’t readable, it won’t be used.
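One concrete way to meet these requirements is schema.org structured data. As an illustration only (the session didn't show code, and the question text is borrowed from the earlier running-shoes example), a FAQPage JSON-LD block could be generated like this:

```python
import json

def faq_jsonld(pairs):
    """Build a schema.org FAQPage JSON-LD object from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }

# Hypothetical Q&A pair, echoing the example from section 2
snippet = json.dumps(
    faq_jsonld([
        ("What are the best running shoes for beginners?",
         "Look for cushioned, neutral trainers fitted at a specialty running store."),
    ]),
    indent=2,
)
```

The resulting JSON would be embedded in a `<script type="application/ld+json">` tag so crawlers and AI systems can parse the Q&A without rendering the page.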

4. On-site search + genAI search alignment

Start with your own site.

If your internal search — especially AI-powered search — works well, you’re already ahead.

Think of it this way: If your own system can’t find answers on your site, external AI tools won’t either.

Strong internal search helps train your content for external visibility.

5. AI search citation qualification model

In GEO, the goal isn’t just to be mentioned. It’s to be cited.

  • Mentions mean you show up.
  • Citations mean AI trusts you.

AI looks for signals like:

  • Clear expertise.
  • Consistent messaging.
  • Agreement across sources.

Zamkow called citations the “holy grail” of visibility.

6. Extraction optimization

AI tools pull content from many places and combine it.

To be included, your content must be:

  • Easy to extract.
  • Clearly structured.
  • Rich in context.

If your content is hard to break apart, AI will skip it and use something else.

7. Real estate: third-party strategy

Your website is no longer your main source of visibility.

  • 85% of mentions come from external domains.
  • Third-party content drives most citations.

That includes:

  • Reddit
  • Social media
  • Reviews and forums
  • Media coverage

This means your PR and social teams are now critical to search success.

Your brand lives across the internet — not just on your site.

8. Measurement, KPIs, and reporting

Old metrics don’t tell the full story anymore.

Instead of just tracking clicks, you need to track:

  • How often AI mentions your brand.
  • Where you’re cited.
  • Which platforms show your content.

The key question changes from “Did we get traffic?” to “Did AI recommend us?”

9. SOPs (standard operating procedures)

Consistency doesn’t happen by accident. Teams need clear rules for:

  • How content is written.
  • How it is structured.
  • How it is published.

Without SOPs, different teams will create different formats. That confuses AI and weakens your visibility.

10. Prompting best practices

Search is now conversational.

While people still type keywords, they are increasingly describing their needs using more conversational language. For example:

  • Old search: “running shoes”
  • New search: “I’m training for a marathon. What shoes should I buy?”

Your content needs to match these types of questions.

That means thinking like the user — and writing like the answer.

11. Change management

This shift affects the whole organization.

Marketing, IT, PR, and product teams all play a role.

That means:

  • Training teams on new workflows.
  • Aligning goals and KPIs.
  • Breaking down silos.

This is bigger than just a marketing update. It’s a company-wide change.

12. Governance + versioning

GEO is never finished.

AI systems change constantly. Competitors update content. Rankings shift fast.

To keep up, brands need:

  • Ongoing monitoring.
  • Regular content updates.
  • Clear ownership of changes.

If your content becomes outdated, you can quickly lose your position in AI answers.

From SEO tactics to GEO systems

The GEO playbook reflects a larger change in how marketing works:

  • From keywords to prompts.
  • From links to citations.
  • From websites to ecosystems.
  • From traffic to answer eligibility.
  • From campaigns to continuous content.

The focus has shifted to building a system that consistently feeds AI the right information.

This is now a leadership issue

This shift is already reaching the top of the organization.

In one example, a product leader asked why their brand didn’t show up in an AI recommendation. The issue quickly escalated beyond marketing.

  • “This is not a problem for your SEO team,” Zamkow said. “This is at the CEO level.”

As AI becomes the front door to discovery, every leader will care about visibility.

Adapt or disappear

AI is already shaping how people discover and choose brands.

Consumers trust it. Businesses are using it. And it’s growing fast.

Brands that build and follow a clear GEO playbook — across all 12 components — will stay visible.

Everyone else risks being left out of the answer.


SEO reporting outgrew Data Studio — here’s what comes next

SEO reporting outgrew Data Studio — here’s what comes next

Picture this: Your company relies on Data Studio for SEO reporting. 

It’s right before your next big meeting when you’re planning to present results… but Data Studio has an outage (again) and suddenly you have nothing to show. 

That’s embarrassing. And it happens more than it should.

It wasn’t even a year ago that I touted the benefits of Data Studio (now Looker Studio) for SEO reporting. Now the platform feels archaic compared to the agentic coding tools available today.

Here’s how rigid SEO dashboards like those produced in Data Studio are holding you back and why code-driven SEO reporting is the only way to remain efficient and competitive.

The problem with Data Studio 

In the not-too-distant past, Data Studio was considered one of the best ways to customize SEO reporting. 

But things have evolved, and with new technology at our fingertips, Data Studio’s flaws are only becoming more pronounced. 

Here are some common issues that you may recognize when generating reports using Data Studio.

It’s easy to explode your dataset, and then everything breaks

You’d assume Data Studio can handle massive “Google-scale” data, but it can’t. Row and field limits are low, and even adding a few dimensions or joining multiple data sources can break the report at the worst times.

You’re manually clicking through a slow interface

Every change in Data Studio requires manual updates. You’re clicking, refreshing and waiting to see whether it worked. That makes iteration painfully slow. Even with added AI features, they only address a small part of the report development workflow.

Relatedly, debugging reports is a nightmare

With code-based reports, agents can simply scan the files. In Data Studio, a user has to laboriously click around the interface.

The API is weak

Like a lot of Google services, Data Studio isn’t built as an API-first platform. This is something Google got institutionally wrong decades ago. Not being able to manage the platform using external tools creates bottlenecks.

Despite its recent rebrand, Data Studio hasn’t become any more relevant — not with the technologies that are now in play for SEO reporting.

But it’s not just Data Studio. Really, what SEO teams are up against is the rigidity of any dashboard-based reporting tool. Now all that is changing.


What’s changed: AI, APIs, and coding 

The shift away from rigid SEO dashboards is now possible because large language models are becoming more capable of generating reliable code, and APIs are accessible across many platforms.

This has led to the rise of AI-driven coding tools, including Claude Code, OpenAI Codex, and Gemini CLI.

At a high level, it works like this: You describe what you want in your SEO report, and they handle the heavy lifting. 

These tools are “agentic” because they can execute multi-step workflows like pulling data, transforming it, analyzing it, and then generating reports with minimal intervention.

You don’t need advanced coding skills to use them, but a basic understanding of data structures and APIs will make the process effective.

In practice, the entire reporting workflow can be done programmatically from start to finish.

They generate code that connects directly to data sources through APIs, removing the need to rely on dashboard connectors or preconfigured data pipelines.  

From there, they can analyze the data and create full reports. This can happen in minutes as you become more familiar with the tools.
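As a sketch of what that generated code might look like (the request and response shapes below follow the public Search Console searchanalytics.query format, but the dates, dimensions, and sample rows are illustrative):

```python
def build_gsc_request(start_date, end_date, dimensions, row_limit=1000):
    """Request body for the Search Console searchanalytics.query endpoint."""
    return {
        "startDate": start_date,
        "endDate": end_date,
        "dimensions": dimensions,
        "rowLimit": row_limit,
    }

def summarize_rows(rows):
    """Aggregate clicks and impressions from response rows and derive overall CTR."""
    clicks = sum(r["clicks"] for r in rows)
    impressions = sum(r["impressions"] for r in rows)
    return {
        "clicks": clicks,
        "impressions": impressions,
        "ctr": clicks / impressions if impressions else 0.0,
    }

body = build_gsc_request("2024-01-01", "2024-03-31", ["query", "device"])

# Rows shaped like the API's response ("keys" holds one value per dimension)
sample_rows = [
    {"keys": ["running shoes", "MOBILE"], "clicks": 120, "impressions": 3000},
    {"keys": ["trail shoes", "DESKTOP"], "clicks": 30, "impressions": 1500},
]
totals = summarize_rows(sample_rows)
```

In a real workflow, `body` would be sent through an authenticated API client and the returned rows fed into `summarize_rows` or a charting library.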

While each of the tools I mentioned has different strengths (for instance, some are better at reasoning, others at speed or integrations), they essentially do the same thing: transform SEO reporting from a manual, rigid process into something with endless possibilities. 

The power of this technology is hard to overstate. 

Why AI coding tools are better for SEO teams

AI coding tools are removing the roadblocks between data, development, and reporting for SEO teams. 

Faster SEO reporting and analysis

Speed is the most obvious advantage. 

Agentic coding assistants are enabling SEOs to create reports that previously required support from developers.  

In many cases, tasks that previously took days can be done in hours and tasks that took hours can be done in minutes. 

You can see this improvement even in small interactions.

For example, when data is processed directly in the browser (instead of re-querying a dashboard), it makes filtering, sorting, and slicing data significantly faster. 

Instead of waiting for a dashboard to refresh after every change, you can interact with the data in real time.

That’s just one way these technologies make you more agile.

Flexible and custom reporting workflows

Instead of having to work in predefined templates and a fixed structure, you can build the report for exactly what the situation requires. 

Plus, every major data visualization and plotting library is available on demand in any programming language. 

If you feel like one approach isn’t capturing the whole story in your SEO report, you can switch or combine multiple frameworks in the same output. 

From rankings and traffic trends to keyword clusters or content performance, you can apply nearly any chart. 

The examples below come from Observable Plot, created by data visualization expert Mike Bostock, but many other charting libraries are available.

While setup and onboarding take some initial effort, these tools are accessible to most roles on the team and immediately become more efficient than traditional reporting.

Transparent data constraints

Data limitations are clearer, too. 

For example, when you’re working with browser-based charting libraries, you have a better feel for how many rows you’re handling and what the system can realistically process. 

And when you do hit a limit, you understand exactly what’s happening and how to adjust. This helps prevent misleading or incomplete reporting. 

Real-world SEO reporting applications

What are some practical ways you can use these agentic coding assistants to run SEO reporting? 

Pre-meeting reports

Before client meetings, you can pull data from Google Search Console and GA4 via APIs, then have it cleaned and segmented programmatically and generate a notebook, dashboard or slide deck in a single workflow.

Technical SEO analysis

Say you need to analyze crawl data or log files. Instead of exporting, filtering, and then visualizing the data manually, you could get the raw data, process it with code, and generate custom visualizations tailored to the exact problem you’re trying to solve.
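A minimal sketch of that log-analysis step, assuming combined-format access logs and a simple user-agent check (a real analysis should verify Googlebot via reverse DNS rather than trusting the user-agent string):

```python
import re
from collections import Counter

# Simplified matcher for combined-log-format request lines
LOG_RE = re.compile(
    r'"(?P<method>\w+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

def googlebot_status_counts(lines):
    """Count HTTP status codes for requests whose user-agent claims Googlebot."""
    counts = Counter()
    for line in lines:
        m = LOG_RE.search(line)
        if m and "Googlebot" in m.group("ua"):
            counts[m.group("status")] += 1
    return counts

logs = [
    '1.2.3.4 - - [01/Jan/2024] "GET /a HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1)"',
    '1.2.3.4 - - [01/Jan/2024] "GET /b HTTP/1.1" 404 0 "-" "Mozilla/5.0 (compatible; Googlebot/2.1)"',
    '5.6.7.8 - - [01/Jan/2024] "GET /a HTTP/1.1" 200 512 "-" "Mozilla/5.0 Chrome"',
]
crawl_errors = googlebot_status_counts(logs)  # counts: {'200': 1, '404': 1}
```

From a Counter like this you could chart crawl errors over time or break them out by directory, tailored to the exact problem at hand.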

Ad hoc stakeholder requests

Once data connections are established, last-minute reporting requests no longer have to mean staying up late to pull data and build reports. The next time someone asks for something like “non-brand CTR trends by device over the last 90 days,” you can produce this data with much less effort. 
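For example, once Search Console rows are available, the non-brand CTR request above reduces to a few lines (the brand terms and row shape here are hypothetical):

```python
from collections import defaultdict

BRAND_TERMS = {"acme", "acme shoes"}  # hypothetical brand terms to exclude

def nonbrand_ctr_by_device(rows):
    """CTR per device over queries that contain no brand term."""
    clicks = defaultdict(int)
    imps = defaultdict(int)
    for r in rows:
        query, device = r["keys"]
        if any(term in query.lower() for term in BRAND_TERMS):
            continue  # skip brand queries
        clicks[device] += r["clicks"]
        imps[device] += r["impressions"]
    return {d: clicks[d] / imps[d] for d in imps if imps[d]}

rows = [
    {"keys": ["acme running shoes", "MOBILE"], "clicks": 50, "impressions": 500},
    {"keys": ["best trail shoes", "MOBILE"], "clicks": 20, "impressions": 1000},
    {"keys": ["best trail shoes", "DESKTOP"], "clicks": 10, "impressions": 200},
]
ctr = nonbrand_ctr_by_device(rows)  # {'MOBILE': 0.02, 'DESKTOP': 0.05}
```

Swapping in a 90-day date range on the API request is all it takes to answer the stakeholder's exact question.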

Really, if you can imagine it, you can do it with these agentic coding assistants. As a result, SEO teams can do more proactive analyses.

What this means for agencies and in-house teams

AI is impacting all knowledge workers, not just SEOs. 

By now, many have seen the viral article “Something Big Is Happening” by Matt Shumer, which paints a startling picture of the future of AI-powered work and urges an “adapt or die” mentality.

Research is beginning to show how these types of technologies are impacting productivity. 

One study by Stanford and MIT researchers found that access to AI tools in the workplace increases productivity by at least 14% on average, with a 34% increase for low-skilled workers. 

The bottom line is that anything that can be generated with code is going to be eaten by these CLI tools and agents, because they’re just so much faster. 

Businesses are catching on. Up to 64% of businesses now generate a majority of their code with AI assistance, according to a Business Insider report, and high-adoption teams are producing nearly double the output. 

SEO teams are seeing faster reporting cycles, more iterative analyses, and the ability to handle more complex data.

AI coding assistants are also helping analysts become builders. Non-technical users can build and iterate in ways that were previously out of reach.

Ultimately, this shift is becoming table stakes. The SEO teams that integrate these tools into their workflows will move faster and produce better results. 

The competitive advantage is going to those who adopt these technologies first.

Where to begin, though? Consider piloting a small project:

  • Start with one repeatable reporting workflow.
  • Connect a data source like Google Search Console via API.
  • Test and refine a single report before expanding to other use cases.


The future of SEO reporting is agentic and code-driven

Traditional SEO reporting tools are quickly becoming a bottleneck. 

AI coding assistants are helping SEO teams respond to any type of reporting without the added friction, while delivering faster, better insights. 

The companies that adapt will gain the advantage in SEO execution. Start by replacing one recurring report with a code-driven workflow and build from there.


Microsoft launches AI Max and new ad tools for the “agentic web” era

Microsoft (Credit: Shutterstock)

Microsoft is rolling out a suite of updates across Microsoft Advertising to help brands stay visible — not just to people, but to AI agents increasingly making decisions on their behalf.

What’s new. The update spans measurement, commerce, and media, with new tools designed to help advertisers show up in AI-driven experiences and transactions.

On the ads side. Microsoft is introducing AI Max for Search campaigns, which expands query matching and personalizes ad delivery across AI surfaces like Copilot and Bing. It’s also launching “Offer Highlights,” new ad formats that surface key selling points — like free shipping — directly within AI conversations.

Zoom in:

  • Expanded AI Visibility in Microsoft Clarity shows how brands appear in AI-generated answers, including which content gets cited and where competitors outperform.
  • New Universal Commerce Protocol support in Microsoft Merchant Center structures product data so AI agents can discover and transact on it more easily.
  • Copilot Checkout enhancements enable purchases directly inside Microsoft Copilot, reducing friction from discovery to sale.

Also notable. A new AI-powered audience generation tool lets advertisers describe their ideal customer in plain language, with the system building targeting segments automatically.

Why we care. Microsoft is changing how visibility works in Microsoft Advertising — shifting from clicks and rankings to being selected by AI systems. Tools like AI Max, AI Visibility, and Offer Highlights help brands show up in AI-driven decisions, not just search results. As AI agents take a bigger role in discovery and transactions, advertisers who adapt early will have a clear advantage.

Between the lines. This is a shift from optimizing for clicks to optimizing for selection — ensuring your brand is chosen by AI systems, not just seen by users.

What to watch. Early data suggests AI-driven traffic is growing far faster than human traffic, signaling where future demand may concentrate.

Bottom line. Microsoft is preparing advertisers for a world where winning means being understood — and trusted — by AI agents, not just ranking in search results.

Dig deeper. Win Across All Three Eras of the Web


How to measure Demand Gen creative impact with asset uplift tests

How to measure Demand Gen creative impact with asset uplift tests

Demand Gen campaigns have high visibility across YouTube, Discover, and Gmail. However, they pose a key challenge: the “attribution illusion.” You’ll often question whether reported conversions in the platform are truly incremental or if these users would’ve converted through search either way.

That’s why in November, Google launched asset uplift experiments, giving you the ability to measure the impact of Demand Gen creative through an A/B split test. This means you can replace assumptions with a clearer view of what’s actually driving incremental results.

Relying too heavily on creative instinct or default reporting can lead you down an inefficient path and divert valuable creative resources toward poor-performing assets. Using Google’s A/B testing capabilities helps you isolate the impact of individual assets and avoid that outcome.

Why attribution doesn’t equal incrementality

If a user views a Demand Gen ad on YouTube and doesn’t click but then searches for the brand and converts, Google may attribute partial or full credit to the Demand Gen campaign and creative. That attribution reflects correlation more than causation.

Accurate measurement, like the scientific method, requires knowing what happens when the creative isn’t shown. By withholding the test assets from a segment of the target audience, you can establish that baseline.

The difference in conversion rates or any primary KPI between the treatment group — those who were exposed to the ad — and the control group — those who weren’t exposed — shows the actual incremental lift the creative is driving.
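In numbers, that lift calculation is straightforward; here's a small sketch with hypothetical figures:

```python
def incremental_lift(treatment_conv, treatment_users, control_conv, control_users):
    """Relative lift of the treatment conversion rate over the control baseline."""
    cr_treatment = treatment_conv / treatment_users
    cr_control = control_conv / control_users
    return (cr_treatment - cr_control) / cr_control

# 60 conversions from 10,000 exposed users vs. 50 from 10,000 held-out users
lift = incremental_lift(60, 10_000, 50, 10_000)  # 0.20 → a 20% incremental lift
```

Note that a real experiment also needs a significance test on this difference, which is what Google's experiment reporting handles for you.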

Dig deeper: Why incrementality is the only metric that proves marketing’s real impact


What you need before testing creative uplift

One common mistake is launching experiments without enough data to reach statistical significance. To avoid inconclusive or invalid results, make sure your campaign meets these prerequisites before setting up the test.

Conversion volume 

Google recommends having at least 50 conversions across treatment and control arms during the experiment to measure lift accurately. If your primary conversion doesn’t receive this volume, consider optimizing the test around high-intent micro-conversion actions, such as “Add to Cart.”

Budget minimums

Experiments should run with continuous, uninterrupted spending. If your Demand Gen campaign is limited by budget and stops early each day, the control group data will be skewed. 

The campaign must have a sufficient budget to run for at least four weeks, or until a statistically significant result is achieved.

Creative isolation

Test only one new variable at a time. To determine if a specific video asset drives uplift, keep all other campaign elements, such as audience, bidding, and standard image assets, unchanged.

Dig deeper: Why Demand Gen is the most underrated campaign type in Google Ads

How to run an asset uplift test in Google Ads

Setting up a creative uplift test is now more streamlined within Google Ads. To build a valid experiment, follow these steps.

1. Define a clear hypothesis

Every valid scientific test begins with a clear hypothesis. Avoid running tests without a defined objective. For example:

  • Bad hypothesis: “Let’s see if our new video works.”
  • Good hypothesis: “Adding user-generated content (UGC) to our Demand Gen asset group will drive a 10% incremental lift in ‘purchase’ conversions compared to standard static image carousels.”

2. Navigate to the Experiments interface

Log in to your Google Ads account and navigate to the left menu. Select Campaigns > Experiments. Click the plus (+) button to create a new experiment, choose Asset tests provided by you, and make it a Demand Gen campaign experiment.

Configure a 50/50 split

Google will prompt you to define your split. To set up statistically sound results, use a 50/50 cookie-based split. 

This ensures both control and treatment groups have equal historical data and algorithmic weighting, and prevents users from ending up in both arms of the test. Assign your existing campaign as the control, and the duplicated campaign with new assets as the treatment.
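Conceptually, a cookie-based split hashes each user's cookie ID into one arm, so repeat visits never cross arms. A minimal illustration (not Google's actual implementation):

```python
import hashlib

def assign_arm(cookie_id, experiment_id="exp-001"):
    """Deterministically bucket a user into 'control' or 'treatment' (50/50)."""
    digest = hashlib.sha256(f"{experiment_id}:{cookie_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 else "control"

# The same cookie always lands in the same arm across sessions
assert assign_arm("abc123") == assign_arm("abc123")
```

Because the assignment is a pure function of the ID, no user can appear in both arms, which is exactly what keeps the two groups comparable.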

4. Lock your variables

Once the experiment begins, you must practice extreme discipline. Don’t change audiences or targeting, and avoid drastic bid and budget changes. 

Any adjustment made to either campaign during the testing window will introduce noise and could invalidate the statistical significance of your results.

5. Set the duration

Run the experiment for at least four weeks. 

  • Week 1 serves as a learning period while the algorithm adjusts to the audience split, new creative, and bid model learning (especially if leveraging smart bidding). 
  • Weeks 2 to 4 provide actionable performance data. 

For longer conversion cycles, such as B2B SaaS, consider extending the test to six or eight weeks.

Dig deeper: What it takes to make demand gen work for B2B and ecommerce

What your experiment results actually mean

When the experiment concludes, review the results in the Experiments dashboard, which reports each arm’s performance and confidence intervals across metrics. Interpret the outcomes as follows to validate the hypothesis you defined earlier.

Outcome 1: Positive lift (statistically significant)

If the treatment group shows a positive lift with 95% confidence, your creative asset has been proven to drive incremental conversions. 

From there, you can calculate incremental cost per acquisition (iCPA) by dividing the treatment group’s total ad spend by the incremental conversions above the control arm. 

Use this iCPA as your benchmark for scaling the campaign going forward.
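The iCPA arithmetic described above can be expressed in a few lines (figures hypothetical):

```python
def icpa(treatment_spend, treatment_conversions, control_conversions):
    """Incremental CPA: spend divided by conversions above the control arm."""
    incremental = treatment_conversions - control_conversions
    if incremental <= 0:
        raise ValueError("No incremental conversions; iCPA is undefined")
    return treatment_spend / incremental

# $12,000 of treatment spend, 180 conversions vs. 150 in control → 30 incremental
benchmark = icpa(12_000, 180, 150)  # 400.0 per incremental conversion
```

Note that iCPA will typically be higher than the last-click CPA shown in standard reporting, since it counts only conversions the creative actually caused.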

Outcome 2: Negative lift

Occasionally, a new creative asset may suppress performance. It may be too disruptive, or the video may have a high skip rate, causing the algorithm to reduce delivery to high-intent users. Pause the treatment asset immediately, and let data, not preference, guide your budget decisions.

Outcome 3: Inconclusive result

If the difference between groups is negligible and the system cannot confidently attribute conversions to the ad after four weeks and adequate conversion volume, consider extending the test for two more weeks to collect additional data. 

If results are still inconclusive, it could be that creatives are too similar. Test a significantly different creative asset, as small changes rarely produce a statistically significant lift in Demand Gen.

Prove creative impact with incrementality testing

Creative is one of the few remaining levers you can pull to drive performance. Producing high-quality video or UGC is just the first step; in this world, creative bandwidth is scarce and its impact must be proven as a driver of results.

Demand Gen is a powerful tool for visual storytelling, but justifying its budget to stakeholders requires rigorous, scientific evidence of its impact. Asset uplift experiments enable just that. Begin your first holdout test, establish a baseline, and let data guide your creative decisions and roadmap.

Dig deeper: The Google Ads Demand Gen playbook


groas introduces a fully autonomous approach to Google Ads management by groas

 groas distributed AI agent network managing Google Ads campaigns across multiple screens.

For 20 years, Google Ads management has followed the same basic model: you log in, review performance, make changes, and hope they work before the next check-in. 

Agencies, freelancers, and in-house teams all work this way, even as the tools have changed. Spreadsheets gave way to scripts, and scripts gave way to automated bidding, but the core loop never changed — someone still had to sit in the account.

groas aims to change that model by introducing a system designed to automate campaign execution end-to-end.

Our company announced today it has developed a fully end-to-end autonomous system that’s designed to match or exceed PPC performance benchmarks observed in internal testing. It’s designed to operate without routine manual approvals or constant dashboard monitoring.

From campaign creation through bid management, ad copy generation, keyword expansion, negative keyword pruning, budget allocation, and dynamic landing page deployment — along with everything else you can do in the Google Ads console and beyond — the entire workflow now runs autonomously, 24/7. 

The system runs on a distributed network of specialized AI agents that handle different parts of campaign management and communicate in real time.

We didn’t start here. 

A year ago, groas launched as a lightweight product that surfaced optimization recommendations for you to review and implement. The same model most PPC products still follow. 

By the founder’s own admission, it was a fairly unremarkable v1. But what it lacked in sophistication, it made up for in something more valuable: real data from large volumes of real campaigns at scale.

Hundreds of early customers across the world signed up and connected their Google Ads accounts, representing a wide range of ad spend levels, campaign structures, and conversion goals.

These weren’t a narrow slice of one vertical. They spanned dozens of industries and niches — from local service businesses spending a few thousand a month to large agencies managing seven-figure monthly budgets across full client portfolios.

That diversity became the most important asset groas built. 

The custom-trained, fine-tuned models that now power the system were shaped by this breadth — not a static dataset or simulation, but live campaigns with real money on the line across every industry and budget tier. 

Without that base of early adopters, what groas is today couldn’t exist. The training data that enables autonomous management came from actively managing real dollars across real campaigns, learning what worked and what didn’t in conditions no synthetic environment could replicate.

David Pourquery, founder and CEO of groas, said:

“We kept seeing the same pattern. We’d surface a recommendation that would clearly improve performance, and it would sit there for days or weeks because the account manager was busy, or the client needed to approve it, or someone was on vacation. The insight had a shelf life, and by the time it got implemented, the data had moved on. So we stopped recommending and started doing.”

That realization drove a complete six-month rebuild. The result is a system of interconnected AI agents, each specialized in a different part of campaign management, collectively processing over 100,000 data points per hour per campaign. 

The network handles a wide range of tasks typically performed inside the Google Ads console without the limits of working hours, cognitive load, or the tradeoffs that come with managing multiple accounts. It automates most day-to-day campaign management tasks that would otherwise require manual input; if you wouldn’t have time to do it yourself, the agents will.

From day one, groas built dynamic landing pages into the system, deployed and continuously A/B tested to find winning combinations of messaging, layout, and calls to action for every campaign. groas deploys them with a single line of JavaScript on your existing site — no developer resources, no new hosting, no CMS changes. The system tests and iterates 24/7, designed to improve conversion rates through continuous testing.

There’s a full undo capability for each agent action, but the point is you don’t need to regularly check into groas or Google Ads. Weekly reports are emailed, summarizing what was done, while a dedicated human PPC account manager oversees everything groas does around the clock.

Onboarding is fully hands-off. After sign-up, your groas account manager learns your business, audits your existing Google Ads accounts, and delivers a detailed action plan within 24 hours. From there, they implement everything across groas and Google Ads with zero work on your side.

In less than a year since shifting to full autonomy, groas now manages eight figures in monthly ad spend across its client base. Every account came through organic discovery or direct referrals — the company hasn’t spent anything on paid acquisition to date.

The client base has consolidated around two profiles:

  • Businesses moving away from agency relationships where results haven’t kept pace with cost. These are companies paying $5,000 to $15,000 per month and looking for more consistent performance and transparency. groas provides an alternative by automating day-to-day execution while reducing management overhead.
  • Agencies. This is now the larger segment. Agencies plug groas into their clients’ accounts behind the scenes, bundle the cost into their existing fees, and let the agent network handle day-to-day execution while their teams focus on strategy, creative direction, and client relationships. groas turns a labor-intensive, low-margin service into something that scales without added headcount. It offers a 30% lifetime recurring commission for referrals, but most agencies choose to pay for it themselves and keep the margin.

Google’s automation — from Performance Max to AI Max to broad match expansion — has pushed the industry toward more black-box control for years. Many advertisers feel they are losing visibility into what’s actually happening inside their campaigns. Meanwhile, agencies and recommendation-based products still run the old loop: review, recommend, wait for approval, implement, repeat.

groas occupies a category that didn’t exist. Instead of helping you manage campaigns better or relying on Google’s automation, it removes you from the execution loop while keeping you in the strategic loop through a dedicated account manager.

The PPC industry has spent two decades debating how much to automate. groas is the first to answer “everything” and back it up with eight figures in managed spend. 

The growth points to a conclusion the industry has circled for years without reaching: the bottleneck in Google Ads performance is often manual execution itself, constrained by time, attention, and the volume of data modern campaigns generate.

groas didn’t build a better recommendation engine — it reduced the need for traditional recommendation-based workflows.

groas starts at $999 per month for up to $15,000 in managed ad spend, scaling to $6,999 per month for up to $150,000. No contracts, lock-ins, or setup fees. The only requirement is at least $2,000 per month in Google Ads spend — below that, there isn’t enough data for the agents to optimize effectively.

Learn more about how groas works at groas.ai.



Yelp launches AI-powered Assistant to streamline local search and bookings

Yelp

Yelp is rolling out its most significant AI update yet, centered on a new conversational “Yelp Assistant” designed to move users from searching to actually booking, ordering, and scheduling — all in one flow.

What’s new. Yelp Assistant sits at the center of the update, acting as a chatbot that can answer complex queries, recommend businesses, and complete actions like reservations or appointments without leaving the app.

Zoom in. The assistant pulls from Yelp’s massive base of user reviews and photos to generate tailored recommendations, explain why a business fits, and let users refine results conversationally. It can then take the next step — booking a table, ordering food, or requesting a quote — directly within the same interaction.

What else is new. Yelp is expanding integrations with platforms like Vagaro, Zocdoc, and Calendly to streamline bookings across categories like beauty, healthcare, and home services, while deepening delivery ties with DoorDash.

Also notable. An upgraded “Menu Vision” feature uses AI and visual overlays to show dishes, reviews, and photos in real time when scanning a menu, helping users decide what to order faster.

Why we care. Yelp is shifting from a discovery platform to a transaction-driven experience powered by AI. With Yelp Assistant handling recommendations and bookings in one flow, visibility alone may not be enough — businesses will need to be optimized for conversion within the platform. The update also signals more competition for high-intent users as Yelp tightens control over the path from search to purchase.

Between the lines. Yelp is leaning into AI not just for discovery, but for conversion — turning intent into transactions without sending users elsewhere.

What’s next. The assistant is live on iOS and Android with broader expansion across categories and desktop coming later this year.

Bottom line. Yelp wants to own the full local journey — from “where should I go?” to “it’s booked.”
