Google Business Profiles’ “What’s happening” feature expands

Google has expanded the What’s happening feature within Google Business Profiles to multi-location restaurants and bars in the United States, United Kingdom, Canada, Australia, and New Zealand. Previously, it was available only for single-location restaurants.

The What’s happening feature launched back in May as a way for some businesses to highlight events, deals, and specials prominently at the top of their Google Business Profile. Now, Google is extending it to multi-location restaurants and bars.

What Google said. Google’s Lisa Landsman wrote on LinkedIn:

How do you promote your “Taco Tuesday” in Toledo and your “Happy Hour” in Houston… right when locals are searching for a place to go?

I’m excited to share that the Google Business Profile feature highlighting what’s happening at your business, such as timely events, specials and deals, has now rolled out for multi-location restaurants & bars across the US, UK, CA, AU & NZ! (It was previously only available for single-location restaurants)

This is a great option for driving real-time foot traffic. It automatically surfaces the unique specials, live music, or events you’re already promoting at a specific location, catching customers at the exact moment they’re deciding where to eat or grab a cocktail.

What it looks like. Here is a screenshot of this feature:

More details. Google’s Lisa Landsman added, “We’ve already seen excellent results from testing and look forward to hearing how this works for you!”

Availability. This feature is only available for restaurants & bars. Google said it hopes to expand to more categories soon. It is also only available in the United States, United Kingdom, Canada, Australia, and New Zealand.

The initial launch was for single-location Food and Drink businesses in the U.S., UK, Australia, Canada, and New Zealand. Multi-location restaurants can now use it as well.

Why we care. If you manage multi-location restaurants or bars, you can now leverage this feature to get more attention and drive more visitors to your locations from Google Search.

LLM optimization in 2026: Tracking, visibility, and what’s next for AI discovery

Marketing, technology, and business leaders today are asking an important question: how do you optimize for large language models (LLMs) like ChatGPT, Gemini, and Claude? 

LLM optimization is taking shape as a new discipline focused on how brands surface in AI-generated results and what can be measured today. 

For decision makers, the challenge is separating signal from noise – identifying the technologies worth tracking and the efforts that lead to tangible outcomes.

The discussion comes down to two core areas – and the timeline and work required to act on them:

  • Tracking and monitoring your brand’s presence in LLMs.
  • Improving visibility and performance within them.

Tracking: The foundation of LLM optimization

Just as SEO evolved through better tracking and measurement, LLM optimization will only mature once visibility becomes measurable. 

We’re still in a pre-Semrush/Moz/Ahrefs era for LLMs. 

Tracking is the foundation of identifying what truly works and building strategies that drive brand growth. 

Without it, everyone is shooting in the dark, hoping great content alone will deliver results.

The core challenges are threefold:

  • LLMs don’t publish query frequency or “search volume” equivalents.
  • Their responses vary subtly (or not so subtly) even for identical queries, due to probabilistic decoding and prompt context.
  • They depend on hidden contextual features (user history, session state, embeddings) that are opaque to external observers.

Why LLM queries are different

Traditional search behavior is repetitive – millions of identical phrases drive stable volume metrics. LLM interactions are conversational and variable. 

People rephrase questions in different ways, often within a single session. That makes pattern recognition harder with small datasets but feasible at scale. 

These structural differences explain why LLM visibility demands a different measurement model.

This variability requires a different tracking approach than traditional SEO or marketing analytics.

The leading method uses a polling-based model inspired by election forecasting.

The polling-based model for measuring visibility

A representative sample of 250–500 high-intent queries is defined for your brand or category, functioning as your population proxy. 

These queries are run daily or weekly to capture repeated samples from the underlying distribution of LLM responses.

Competitive mention and citation metrics

Tracking tools record when your brand and competitors appear as citations (linked sources) or mentions (text references), enabling share of voice calculations across all competitors. 

Over time, aggregate sampling produces statistically stable estimates of your brand visibility within LLM-generated content.
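
As a rough illustration of this polling approach, here is a minimal Python sketch; the ask_llm helper, brand list, and queries are placeholders rather than part of any specific tool:

```python
from collections import Counter

# Hypothetical helper: run one query against an LLM and return the response text.
# Wire this to whichever model API or tracking vendor you actually use.
def ask_llm(query: str) -> str:
    raise NotImplementedError

BRANDS = ["YourBrand", "CompetitorA", "CompetitorB"]  # entities to look for
QUERIES = ["best crm for small business", "top marketing automation tools"]  # 250-500 in practice

def share_of_voice(queries, brands, runs_per_query=5):
    """Count brand mentions across repeated samples and return each brand's share."""
    mentions = Counter()
    for query in queries:
        for _ in range(runs_per_query):        # repeated sampling smooths out response variance
            answer = ask_llm(query).lower()
            for brand in brands:
                if brand.lower() in answer:    # naive text match; real tools also parse citations
                    mentions[brand] += 1
    total = sum(mentions.values()) or 1
    return {brand: mentions[brand] / total for brand in brands}
```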

Early tools providing this capability include:

  • Profound.
  • Conductor.
  • OpenForge.

Consistent sampling at scale transforms apparent randomness into interpretable signals. 

Over time, aggregate sampling provides a stable estimate of your brand’s visibility in LLM-generated responses – much like how political polls deliver reliable forecasts despite individual variations.

Building a multi-faceted tracking framework

While share of voice paints a picture of your presence in the LLM landscape, it doesn’t tell the complete story. 

Just as keyword rankings show visibility but not clicks, LLM presence doesn’t automatically translate to user engagement. 

Brands need to understand how people interact with their content to build a compelling business case.

Because no single tool captures the entire picture, the best current approach layers multiple tracking signals:

  • Share of voice (SOV) tracking: Measure how often your brand appears as mentions and citations across a consistent set of high-value queries. This provides a benchmark to track over time and compare against competitors.
  • Referral tracking in GA4: Set up custom dimensions to identify traffic originating from LLMs (see the sketch after this list). While attribution remains limited today, this data helps detect when direct referrals are increasing and signals growing LLM influence.
  • Branded homepage traffic in Google Search Console: Many users discover brands through LLM responses, then search directly in Google to validate or learn more. This two-step discovery pattern is critical to monitor. When branded homepage traffic increases alongside rising LLM presence, it signals a strong causal connection between LLM visibility and user behavior. This metric captures the downstream impact of your LLM optimization efforts.
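
For the GA4 referral bullet above, the core task is classifying sessions whose referrer belongs to a known AI assistant. A minimal sketch, with an illustrative (and certainly incomplete) hostname list:

```python
import re

# Illustrative referrer hostnames for AI assistants; verify and extend these for your own setup.
LLM_REFERRERS = re.compile(
    r"(chatgpt\.com|chat\.openai\.com|perplexity\.ai|gemini\.google\.com|copilot\.microsoft\.com)",
    re.IGNORECASE,
)

def is_llm_referral(referrer: str) -> bool:
    """Return True when a session referrer looks like it came from an AI assistant."""
    return bool(referrer and LLM_REFERRERS.search(referrer))

# Example: tag exported sessions so the share of LLM-driven visits can be trended over time.
sessions = [
    {"referrer": "https://chatgpt.com/", "landing_page": "/pricing"},
    {"referrer": "https://www.google.com/", "landing_page": "/blog"},
]
llm_sessions = [s for s in sessions if is_llm_referral(s["referrer"])]
print(f"{len(llm_sessions)} of {len(sessions)} sessions came from LLM referrers")
```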

Nobody has complete visibility into LLM impact on their business today, but these methods cover all the bases you can currently measure.

Be wary of any vendor or consultant promising complete visibility. That simply isn’t possible yet.

Understanding these limitations is just as important as implementing the tracking itself.

Because no perfect models exist yet, treat current tracking data as directional – useful for decisions, but not definitive.

Why mentions matter more than citations

Dig deeper: In GEO, brand mentions do what links alone can’t

Estimating LLM ‘search volume’

Measuring LLM impact is one thing. Identifying which queries and topics matter most is another.

Compared to SEO or PPC, marketers have far less visibility. While no direct search volume exists, new tools and methods are beginning to close the gap.

The key shift is moving from tracking individual queries – which vary widely – to analyzing broader themes and topics. 

The real question becomes: which areas is your site missing, and where should your content strategy focus?

To approximate relative volume, consider three approaches:

Correlate with SEO search volume

Start with your top-performing SEO keywords. 

If a keyword drives organic traffic and has commercial intent, similar questions are likely being asked within LLMs. Use this as your baseline.

Layer in industry adoption of AI

Estimate what percentage of your target audience uses LLMs for research or purchasing decisions:

  • High AI-adoption industries: Assume 20-25% of users leverage LLMs for decision-making.
  • Slower-moving industries: Start with 5-10%.

Apply these percentages to your existing SEO keyword volume. For example, a keyword with 25,000 monthly searches could translate to 1,250-6,250 LLM-based queries in your category.
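
The arithmetic is simple enough to script across a whole keyword list. A minimal sketch, treating the adoption percentages above as rough assumptions rather than measured values:

```python
def estimate_llm_queries(monthly_searches: int, adoption_low=0.05, adoption_high=0.25):
    """Apply assumed AI-adoption rates to SEO search volume for a rough LLM query range."""
    return monthly_searches * adoption_low, monthly_searches * adoption_high

low, high = estimate_llm_queries(25_000)
print(f"Roughly {low:,.0f}-{high:,.0f} LLM-based queries per month in this category")
# Roughly 1,250-6,250, matching the example above
```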

Using emerging inferential tools

New platforms are beginning to track query data through API-level monitoring and machine learning models. 

Accuracy isn’t perfect yet, but these tools are improving quickly. Expect major advancements in inferential LLM query modeling within the next year or two.

Optimizing for LLM visibility

The technologies that help companies identify what to improve are evolving quickly. 

While still imperfect, they’re beginning to form a framework that parallels early SEO development, where better tracking and data gradually turned intuition into science.

Optimization breaks down into two main questions:

  • What content should you create or update, and should you focus on quality content, entities, schema, FAQs, or something else?
  • How should you align these insights with broader brand and SEO strategies?

Identify what content to create or update

One of the most effective ways to assess your current position is to take a representative sample of high-intent queries that people might ask an LLM and see how your brand shows up relative to competitors. This is where the Share of Voice tracking tools we discussed earlier become invaluable.

These same tools can help answer your optimization questions:

  • Track who is being cited or mentioned for each query, revealing competitive positioning.
  • Identify which queries your competitors appear for that you don’t, highlighting content gaps.
  • Show which of your own queries you appear for and which specific assets are being cited, pinpointing what’s working.

From this data, several key insights emerge:

  • Thematic visibility gaps: By analyzing trends across many queries, you can identify where your brand underperforms in LLM responses. This paints a clear picture of areas needing attention. For example, you’re strong in SEO but not in PPC content. 
  • Third-party resource mapping: These tools also reveal which external resources LLMs reference most frequently. This helps you build a list of high-value third-party sites that contribute to visibility, guiding outreach or brand mention strategies. 
  • Blind spot identification: When cross-referenced with SEO performance, these insights highlight blind spots – topics or sources where your brand’s credibility and representation could improve. 

Understand the overlap between SEO and LLM optimization

LLMs may be reshaping discovery, but SEO remains the foundation of digital visibility.

Across five competitive categories, brands ranking on Google’s first page appeared in ChatGPT answers 62% of the time – a clear but incomplete overlap between search and AI results.

That correlation isn’t accidental. 

Many retrieval-augmented generation (RAG) systems pull data from search results and expand it with additional context. 

The more often your content appears in those results, the more likely it is to be cited by LLMs.

Brands with the strongest share of voice in LLM responses are typically those that invested in SEO first. 

Strong technical health, structured data, and authority signals remain the bedrock for AI visibility.

What this means for marketers:

  • Don’t over-focus on LLMs at the expense of SEO. AI systems still rely on clean, crawlable content and strong E-E-A-T signals.
  • Keep growing organic visibility through high-authority backlinks and consistent, high-quality content.
  • Use LLM tracking as a complementary lens to understand new research behaviors, not a replacement for SEO fundamentals.

Redefine on-page and off-page strategies for LLMs

Just as SEO has both on-page and off-page elements, LLM optimization follows the same logic – but with different tactics and priorities.

Off-page: The new link building

Most industries show a consistent pattern in the types of resources LLMs cite:

  • Wikipedia is a frequent reference point, making a verified presence there valuable.
  • Reddit often appears as a trusted source of user discussion.
  • Review websites and “best-of” guides are commonly used to inform LLM outputs.

Citation patterns across ChatGPT, Gemini, Perplexity, and Google’s AI Overviews show consistent trends, though each engine favors different sources.

This means that traditional link acquisition strategies – guest posts, PR placements, or brand mentions in review content – will likely evolve. 

Instead of chasing links anywhere, brands should increasingly target:

  • Pages already being cited by LLMs in their category.
  • Reviews or guides that evaluate their product category.
  • Articles where branded mentions reinforce entity associations.

The core principle holds: brands gain the most visibility by appearing in sources LLMs already trust – and identifying those sources requires consistent tracking.

On-page: What your own content reveals

The same technologies that analyze third-party mentions can also reveal which first-party assets – content on your own website – are being cited by LLMs. 

This provides valuable insight into what type of content performs well in your space.

For example, these tools can identify:

  • What types of competitor content are being cited (case studies, FAQs, research articles, etc.).
  • Where your competitors show up but you don’t.
  • Which of your own pages exist but are not being cited.

From there, three key opportunities emerge:

  • Missing content: Competitors are cited because they cover topics you haven’t addressed. This represents a content gap to fill.
  • Underperforming content: You have relevant content, but it isn’t being referenced. Optimization – improving structure, clarity, or authority – may be needed.
  • Content enhancement opportunities: Some pages only require inserting specific Q&A sections or adding better-formatted information rather than full rewrites.

Leverage emerging technologies to turn insights into action

The next major evolution in LLM optimization will likely come from tools that connect insight to action.

Early solutions already use vector embeddings of your website content to compare it against LLM queries and responses. This allows you to:

  • Detect where your coverage is weak.
  • See how well your content semantically aligns with real LLM answers.
  • Identify where small adjustments could yield large visibility gains.

Current tools mostly generate outlines or recommendations.

The next frontier is automation – systems that turn data into actionable content aligned with business goals.

Timeline and expected results

While comprehensive LLM visibility typically builds over 6-12 months, early results can emerge faster than traditional SEO. 

The advantage: LLMs can incorporate new content within days rather than waiting months for Google’s crawl and ranking cycles. 

However, the fundamentals remain unchanged.

Quality content creation, securing third-party mentions, and building authority still require sustained effort and resources. 

Think of LLM optimization as having a faster feedback loop than SEO, but requiring the same strategic commitment to content excellence and relationship building that has always driven digital visibility.

From SEO foundations to LLM visibility

LLM traffic remains small compared to traditional search, but it’s growing fast.

A major shift in resources would be premature, but ignoring LLMs would be shortsighted. 

The smartest path is balance: maintain focus on SEO while layering in LLM strategies that address new ranking mechanisms.

Like early SEO, LLM optimization is still imperfect and experimental – but full of opportunity. 

Brands that begin tracking citations, analyzing third-party mentions, and aligning SEO with LLM visibility now will gain a measurable advantage as these systems mature.

In short:

  • Identify the third-party sources most often cited in your niche and analyze patterns across AI engines.
  • Map competitor visibility for key LLM queries using tracking tools.
  • Audit which of your own pages are cited (or not) – high Google rankings don’t guarantee LLM inclusion.
  • Continue strong SEO practices while expanding into LLM tracking – the two work best as complementary layers.

Approach LLM optimization as both research and brand-building.

Don’t abandon proven SEO fundamentals. Rather, extend them to how AI systems discover, interpret, and cite information.

How to balance speed and credibility in AI-assisted content creation

AI tools can help teams move faster than ever – but speed alone isn’t a strategy.

As more marketers rely on LLMs to help create and optimize content, credibility becomes the true differentiator. 

And as AI systems decide which information to trust, quality signals like accuracy, expertise, and authority matter more than ever.

It’s not just what you write but how you structure it. AI-driven search rewards clear answers, strong organization, and content it can easily interpret.

This article highlights key strategies for smarter AI workflows – from governance and training to editorial oversight – so your content remains accurate, authoritative, and unmistakably human.

Create an AI usage policy

More than half of marketers are using AI for creative endeavors like content creation, IAB reports.

Still, AI policies are not always the norm. 

Your organization will benefit from clear boundaries and expectations. Creating policies for AI use ensures consistency and accountability.

Only 7% of companies using genAI in marketing have a full-blown governance framework, according to SAS.

However, 63% invest in creating policies that govern how generative AI is used across the organization. 

Source: “Marketers and GenAI: Diving Into the Shallow End,” SAS

Even a simple, one-page policy can prevent major mistakes and unify efforts across teams that may be doing things differently.

As Cathy McPhillips, chief growth officer at the Marketing Artificial Intelligence Institute, puts it:

  • “If one team uses ChatGPT while others work with Jasper or Writer, for instance, governance decisions can become very fragmented and challenging to manage. You’d need to keep track of who’s using which tools, what data they’re inputting, and what guidance they’ll need to follow to protect your brand’s intellectual property.” 

So drafting an internal policy sets expectations for AI use in the organization (or at least the creative teams).

When creating a policy, consider the following guidelines: 

  • What the review process for AI-created content looks like. 
  • When and how to disclose AI involvement in content creation. 
  • How to protect proprietary information (not uploading confidential or client information into AI tools).
  • Which AI tools are approved for use, and how to request access to new ones.
  • How to log or report problems.

Logically, the policy will evolve as the technology and regulations change. 

Keep content anchored in people-first principles

It can be easy to fall into the trap of believing AI-generated content is good because it reads well. 

LLMs are great at predicting the next best sentence and making it sound convincing. 

But reviewing each sentence, paragraph, and the overall structure with a critical eye is absolutely necessary.

Think: Would an expert say it like that? Would you normally write like that? Does it offer the depth of human experience that it should?

“People-first content,” as Google puts it, is really just thinking about the end user and whether what you are putting into the world is adding value. 

Any LLM can create mediocre content, and any marketer can publish it. And that’s the problem. 

People-first content aligns with Google’s E-E-A-T framework, which outlines the characteristics of high-quality, trustworthy content.

E-E-A-T isn’t a novel idea, but it’s increasingly relevant in a world where AI systems need to determine if your content is good enough to be included in search.

According to evidence in U.S. v. Google LLC, we see quality remains central to ranking:

  • “RankEmbed and its later iteration RankEmbedBERT are ranking models that rely on two main sources of data: [redacted]% of 70 days of search logs plus scores generated by human raters and used by Google to measure the quality of organic search results.” 
Source: U.S. v. Google LLC court documentation

This suggests that the same quality factors reflected in E-E-A-T likely influence how AI systems assess which pages are trustworthy enough to ground their answers.

So what does E-E-A-T look like practically when working with AI content? You can:

  • Review Google’s list of questions related to quality content: Keep these in mind before and after content creation.
  • Demonstrate firsthand experience through personal insights, examples, and practical guidance: Weave these insights into AI output to add a human touch.
  • Use reliable sources and data to substantiate claims: If you’re using LLMs for research, fact-check in real time to ensure the best sources. 
  • Insert authoritative quotes either from internal stakeholders or external subject matter experts: Quoting internal folks builds brand credibility while external sources lend authority to the piece.
  • Create detailed author bios: Include:
    • Relevant qualifications, certifications, awards, and experience.
    • Links to social media, academic papers (if relevant), or other authoritative works.
  • Add schema markup to articles to clarify the content further: Schema can clarify content in a way that AI-powered search can better understand (see the example sketch below).
  • Become the go-to resource on the topic: Create a depth and breadth of material on the website that’s organized in a search-friendly, user-friendly manner. You can learn more in my article on organizing content for AI search.
Source: “Creating helpful, reliable, people-first content,” Google Search Central
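
To make the schema point concrete, here is a minimal, hypothetical Article markup rendered from Python; the names, URLs, and credentials are placeholders, not a prescribed template:

```python
import json

# Hypothetical Article markup tying the piece to an author bio; names and URLs are placeholders.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How to balance speed and credibility in AI-assisted content creation",
    "author": {
        "@type": "Person",
        "name": "Jane Example",
        "jobTitle": "Director of Content",
        "sameAs": ["https://www.linkedin.com/in/jane-example"],  # profile links that support E-E-A-T
    },
    "publisher": {"@type": "Organization", "name": "Example Co"},
    "datePublished": "2025-11-01",
}

# This prints the body you would embed in a <script type="application/ld+json"> tag.
print(json.dumps(article_schema, indent=2))
```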

Dig deeper: Writing people-first content: A process and template

Train the LLM 

LLMs are trained on vast amounts of data – but they’re not trained on your data. 

Put in the work to train the LLM, and you can get better results and more efficient workflows. 

Here are some ideas.

Maintain a living style guide

If you already have a corporate style guide, great – you can use that to train the model. If not, create a simple one-pager that covers things like:

  • Audience personas.
  • Voice traits that matter.
  • Reading level, if applicable.
  • The do’s and don’ts of phrases and language to use. 
  • Formatting rules such as SEO-friendly headers, sentence length, paragraph length, bulleted list guidelines, etc. 

You can refresh this as needed and use it to further train the model over time. 

Build a prompt kit  

Put together a packet of instructions that prompts the LLM. Here are some ideas to start with: 

  • The style guide
    • This covers everything from the audience personas to the voice style and formatting.
    • If you’re training a custom GPT, you don’t need to do this every time, but it may need tweaking over time. 
  • A content brief template
    • This can be an editable document that’s filled in for each content project and includes things like:
      • The goal of the content.
      • The specific audience.
      • The style of the content (news, listicle, feature article, how-to).
      • The role (who the LLM is writing as).
      • The desired action or outcome.
  • Content examples
    • Upload a handful of the best content examples you have to train the LLM. This can be past articles, marketing materials, transcripts from videos, and more. 
    • If you create a custom GPT, you’ll do this at the outset, but additional examples of content may be uploaded, depending on the topic. 
  • Sources
    • Train the model on the preferred third-party sources of information you want it to pull from, in addition to its own research. 
    • For example, if you want it to source certain publications in your industry, compile a list and upload it to the prompt.  
    • As an additional layer, prompt the model to automatically include any third-party sources after every paragraph to make fact-checking easier on the fly.
  • SEO prompts
    • Consider building SEO into the structure of the content from the outset.  
    • Early observations of Google’s AI Mode suggest that clearly structured, well-sourced content is more likely to be referenced in AI-generated results.

With that in mind, you can put together a prompt checklist that includes:

  • Crafting a direct answer in the first one to two sentences, then expanding with context.
  • Covering the main question, but also potential subquestions (“fan-out” queries) that the system may generate (for example, questions related to comparisons, pros/cons, alternatives, etc.).
  • Chunking content into many subsections, with each subsection answering a potential fan-out query to completion.
  • Being an expert source of information in each individual section of the page, meaning it’s a passage that can stand on its own.
  • Providing clear citations and semantic richness (synonyms, related entities) throughout. 
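
One way to operationalize this checklist is to keep it as a standing prompt block appended to every content brief. A minimal sketch; the wording is illustrative, not a tested prompt:

```python
AI_SEARCH_CHECKLIST = """
Structural rules for this draft:
1. Open with a direct answer in the first one to two sentences, then expand with context.
2. Cover the main question plus likely fan-out subquestions (comparisons, pros/cons, alternatives).
3. Break the piece into subsections, each answering one fan-out query to completion.
4. Write every section so it can stand on its own as an expert passage.
5. Provide clear citations and semantic richness (synonyms, related entities) throughout.
"""

def build_prompt(content_brief: str) -> str:
    """Append the standing AI-search checklist to a project-specific content brief."""
    return f"{content_brief.strip()}\n\n{AI_SEARCH_CHECKLIST.strip()}"
```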

Dig deeper: Advanced AI prompt engineering strategies for SEO

Create custom GPTs or explore RAG 

A custom GPT is a personalized version of ChatGPT that’s trained on your materials so it can better create in your brand voice and follow brand rules. 

It mostly remembers tone and format, but that doesn’t guarantee the accuracy of output beyond what’s uploaded.

Some companies are exploring RAG (retrieval-augmented generation) to further train LLMs on the company’s own knowledge base. 

RAG connects an LLM to a private knowledge base, retrieving relevant documents at query time so the model can ground its responses in approved information.
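
As a rough picture of that retrieval flow, here is a minimal RAG-style sketch; the embed and generate helpers are placeholders for whichever embedding model and LLM you actually use, and a production system would keep document vectors in a vector database rather than re-embedding them per query:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder for a real embedding model (e.g., a sentence-transformer)."""
    raise NotImplementedError

def generate(prompt: str) -> str:
    """Placeholder for a call to your approved LLM."""
    raise NotImplementedError

def answer_with_rag(question: str, knowledge_base: list[str], top_k: int = 3) -> str:
    """Retrieve the most relevant approved documents, then ground the model's answer in them."""
    q_vec = embed(question)
    ranked = sorted(
        knowledge_base,
        key=lambda doc: float(np.dot(embed(doc), q_vec)),  # similarity score (assumes normalized vectors)
        reverse=True,
    )
    context = "\n\n".join(ranked[:top_k])
    prompt = (
        "Answer using only the approved context below. If the answer is not there, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```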

While custom GPTs are easy, no-code setups, RAG implementation is more technical – but there are companies/technologies out there that can make it easier to implement. 

That’s why GPTs tend to work best for small or medium-scale projects or for non-technical teams focused on maintaining brand consistency.

Create a custom GPT in ChatGPT

RAG, on the other hand, is an option for enterprise-level content generation in industries where accuracy is critical and information changes frequently.

Run an automated self-review

Create parameters so the model can self-assess the content before further editorial review. You can create a checklist of questions to prompt it with.

For example:

  • “Is the advice helpful, original, and people-first?” (Perhaps using Google’s list of questions from its helpful content guidance.) 
  • “Are the tone and voice completely aligned with the style guide?” 
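
A lightweight way to wire this in is to run the draft and the checklist through the model before any human review. A minimal sketch; call_llm is a placeholder for your approved tool, and the questions simply restate examples from this article:

```python
SELF_REVIEW_QUESTIONS = [
    "Is the advice helpful, original, and people-first?",
    "Are the tone and voice completely aligned with the style guide?",
    "Is every claim, statistic, quote, or date accompanied by a citation?",
]

def call_llm(prompt: str) -> str:
    """Placeholder for the LLM call your team has approved."""
    raise NotImplementedError

def self_review(draft: str) -> str:
    """Ask the model to grade its own draft against the checklist before editors see it."""
    numbered = "\n".join(f"{i + 1}. {q}" for i, q in enumerate(SELF_REVIEW_QUESTIONS))
    prompt = (
        "Review the draft below against each question. "
        "Answer PASS or FAIL for each, with a one-sentence reason.\n\n"
        f"Questions:\n{numbered}\n\nDraft:\n{draft}"
    )
    return call_llm(prompt)
```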

Have an established editing process 

Even the best AI workflow still depends on trained editors and fact-checkers. This human layer of quality assurance protects accuracy, tone, and credibility.

Editorial training

About 33% of content writers and 24% of marketing managers added AI skills to their LinkedIn profiles in 2024.

Writers and editors need to continue to upskill in the coming year, and, according to the Microsoft 2025 annual Work Trend Index, AI skilling is the top priority.  

Source: 2025 Microsoft Work Trend Index Annual Report

Professional training creates baseline knowledge so your team gets up to speed faster and can handle outputs confidently and consistently.

This includes training on how to effectively use LLMs and how to best create and edit AI content.

In addition, training content teams on SEO helps them build best practices into prompts and drafts.

Editorial procedures

Ground your AI-assisted content creation in editorial best practices to ensure the highest quality. 

This might include:

  • Identifying the parts of the content creation workflow that are best suited for LLM assistance.
  • Conducting an editorial meeting to sign off on topics and outlines. 
  • Drafting the content.
  • Performing the structural edit for clarity and flow, then copyediting for grammar and punctuation.
  • Getting sign-off from stakeholders.  
AI editorial process

The AI editing checklist

Build a checklist to use during the review process for quality assurance. Here are some ideas to get you started:

  • Every claim, statistic, quote, or date is accompanied by a citation for fact-checking accuracy.
  • All facts are traceable to credible, approved sources.
  • Outdated statistics (more than two years old) are replaced with fresh insights. 
  • Draft meets the style guide’s voice guidelines and tone definitions. 
  • Content adds valuable, expert insights rather than being vague or generic.
  • For thought leadership, ensure the author’s perspective is woven throughout.
  • Draft is run through an AI detector, aiming for a conservative score of 5% or less AI-generated content. 
  • Draft aligns with brand values and meets internal publication standards.
  • Final draft includes explicit disclosure of AI involvement when required (client-facing/regulatory).

Grounding AI content in trust and intent

AI is transforming how we create, but it doesn’t change why we create.

Every policy, workflow, and prompt should ultimately support one mission: to deliver accurate, helpful, and human-centered content that strengthens your brand’s authority and improves your visibility in search. 

Dig deeper: An AI-assisted content process that outperforms human-only copy

The future of SEO teams is human-led and agent-powered

The conversation around artificial intelligence (AI) has been dominated by “replacement theory” headlines. From front-line service roles to white-collar knowledge work, there’s a growing narrative that human capital is under threat.

Economic anxiety has fueled research and debate, but many of the arguments remain narrow in scope.

  • Stanford’s Digital Economy Lab found that since generative AI became widespread, early-career workers in the most exposed jobs have seen a 13% decline in employment.
  • This fear has spread into higher-paid sectors as well, with hedge fund managers and CEOs predicting large-scale restructuring of white-collar roles over the next decade.

However, much of this narrative is steeped in speculation rather than the fundamental, evolving dynamics of skilled work.

Yes, we’ve seen layoffs, hiring slowdowns, and stories of AI automating tasks. But this is happening against the backdrop of high interest rates, shifts in global trade, and post-pandemic over-hiring.

As the global talent thought-leader Josh Bersin argues, claims of mass job destruction are “vastly over-hyped.” Many roles will transform, not vanish. 

What this means for SEO

For the SEO discipline, the familiar refrain “SEO is dead” is just as overstated.

Yes, the nature of the SEO specialist is changing. We’ve seen fewer leadership roles, a contraction in content and technical positions, and cautious hiring. But the function itself is far from disappearing.

In fact, SEO job listings remain resilient in 2025 and mid-level roles still comprise nearly 60% of open positions. Rather than declining, the field is being reshaped by new skill demands.

Don’t ask, “Will AI replace me?” Ask instead, “How can I use AI to multiply my impact?”

Think of AI not as the jackhammer replacing the hammer but as the jackhammer amplifying its effect. SEOs who can harness AI through agents, automation, and intelligent systems will deliver faster, more impactful results than ever before.

  • “AI is a tool. We can make it or teach it to do whatever we want…Life will go on, economies will continue to be driven by emotion, and our businesses will continue to be fueled by human ideas, emotion, grit, and hard work,” Bersin said.

Rewriting the SEO narrative

As an industry, it’s time to change the language we use to describe SEO’s evolution.

Too much of our conversation still revolves around loss. We focus on lost clicks, lost visibility, lost control, and loss of num=100.

That narrative doesn’t serve us anymore.

We should be speaking the language of amplification and revenue generation. SEO has evolved from “optimizing for rankings” to driving measurable business growth through organic discovery, whether that happens through traditional search, AI Overviews, or the emerging layer of Generative Engine Optimization (GEO).

AI isn’t the villain of SEO; it’s the force multiplier.

When harnessed effectively, AI scales insight, accelerates experimentation, and ties our work more directly to outcomes that matter:

  • Pipeline.
  • Conversions.
  • Revenue.

We don’t need to fight the dystopian idea that AI will replace us. We need to prove that AI-empowered SEOs can help businesses grow faster than ever before.

The new language of SEO isn’t about survival; it’s about impact.

The team landscape has already shifted

For years, marketing and SEO teams grew headcount to scale output.

Today, the opposite is true. Hiring freezes, leaner budgets, and uncertainty around the role of SEO in an AI-driven world have forced leaders to rethink team design.

A recent Search Engine Land report noted that remote SEO roles dropped to 34% of listings in early 2025, while content-focused SEO positions declined by 28%. A separate LinkedIn survey found a 37% drop in SEO job postings in Q1 compared to the previous year.

This signals two key shifts:

  • Specialized roles are disappearing. “SEO writers” and “link builders” are being replaced by versatile strategists who blend technical, analytical, and creative skill sets.
  • Leadership is demanding higher ROI per role. Headcount is no longer the metric of success – capability is.

What it means for SEO leadership

If your org chart still looks like a pyramid, you’re behind. 

The new landscape demands flexibility, speed, and cross-functional integration with analytics, UX, paid media, and content.

It’s time to design teams around capabilities, not titles.

Rethinking SEO talent

The best SEO leaders aren’t hiring specialists – they’re hiring aptitude. Modern SEO organizations value people who can think across disciplines, not just operate within one.

The strongest hires we’re seeing aren’t traditional technical SEOs focused on crawl analysis or schema. They’re problem solvers – marketers who understand how search connects to the broader growth engine and who have experience scaling impact across content, data, and product.

Progressive leaders are also rethinking resourcing. The old model of a technical SEO paired with engineering support is giving way to tech SEOs working alongside AI product managers and, in many cases, vibe coding solutions. This model moves faster, tests bolder, and builds systems that drive real results.

For SEO leaders, rethinking team architecture is critical. The right question isn’t “Who should I hire next?” It’s “What critical capability must we master to stay competitive?”

Once that’s clear, structure your people and your agents around that need. The companies that get this right during the AI transition will be the ones writing the playbook for the next generation of search leadership.

The new human-led, agent-empowered team

The future of SEO teams will be defined by collaboration between humans and agents.

  • These agents are AI-enabled systems like automated content refreshers, site-health bots, or citation-validation agents that work alongside human experts.
  • The human role? To define, train, monitor, and QA their output.

Why this matters

  • Agents handle high-volume, repeatable tasks (e.g., content generation, basic auditing, link-score filtering) so humans can focus on strategy, insight, and business impact.
  • The cost of building AI agents can range from $20,000 to $150,000, depending on the complexity of the system, integrations, and the specialized work required across data science, engineering, and human QA teams, according to RTS Labs.
  • A single human manager might oversee 10-20 agents, shifting the traditional pyramid and echoing the “short pyramid” or “rocket ship” structure explored by Tomasz Tunguz.

The future: teams built around agents and empowered humans.

Real-world archetypes

  • SaaS companies: Develop a bespoke “onboarding agent” that reads product data, builds landing pages, and runs first-pass SEO audits; a human strategist refines the output.
  • Marketplace brands (e.g., for an upcoming seasonal trend): Use an “Audience Discovery Agent” that taps customer and marketplace data, but the human team writes the narrative and guides the vertical direction.
  • Enterprise content hubs: Deploy “Content Refresh Agents” that identify high-value pages, suggest optimizations, and push drafts that editors review and finalize.

Integration is key

These new teams succeed when they don’t live in silos. The SEO/GEO squad must partner with paid search, analytics, revenue ops, and UX – not just serve them.

Agents create capacity; humans create alignment and amplification.

A call to SEO practitioners

Building the SEO community of the future will require change.

The pace of transformation has never been faster and it’s created a dangerous dependence on third-party “AI tools” as the answer to what is unknown.

But the true AI story doesn’t begin with a subscription. It begins inside your team.

If the only AI in your workflow is someone else’s product, you’re giving up your competitive edge. The future belongs to teams that build, not just buy.

Here’s how to start:

  • Build your own agent frameworks, designed with human-in-the-loop oversight to ensure accuracy, adaptability, and brand alignment.
  • Partner with experts who co-create, not just deliver. The most successful collaborations help your team learn how to manage and scale agents themselves.
  • Evolve your team structure, move beyond the pyramid mentality, and embrace a “rocket ship” model where humans and agents work in tandem to multiply output, insights, and results.

The future of SEO starts with building smarter teams. It’s humans working with agents. It’s capability uplift. And if you lead that charge, you’ll not only adapt to the next generation of search, you’ll be the ones designing it.

Google Search Console adds Query groups

Screenshot of Google Search Console

Google added Query groups to the Search Console Insights report. The feature clusters similar search queries together so you can quickly see the main topics your audience searches for.

What Google said. Google wrote, “We are excited to announce Query groups, a powerful Search Console Insights feature that groups similar search queries.”

“Query groups solve this problem by grouping similar queries. Instead of a long, cluttered list of individual queries, you will now see lists of queries representing the main groups that interest your audience. The groups are computed using AI; they may evolve and change over time. They are designed for providing a better high level perspective of your queries and don’t affect ranking,” Google added.

What it looks like. Here is a sample screenshot of this new Query groups report:

You can see that Google is lumping together “search engine optimization, seo optimization, seo website, seo optimierung, search engine optimization (seo), search …” into the “seo” query group in the second line. This shows the site is getting 9% fewer clicks overall on SEO-related queries than it did previously.

Availability. Google said query groups will be rolling out gradually over the coming weeks. It is a new card in the Search Console Insights report. Plus, query groups are available only to properties that have a large volume of queries, as the need to group queries is less relevant for sites with fewer queries.

Why we care. Many SEOs have been grouping queries into clusters manually or through their own tools. Now, Google will do it for you, making it easier for novice and beginner SEOs to understand.

More details will be posted in this help document soon.

Search Engine Land Awards 2025: And the winners are…

Search Engine Land 2025 Awards

Every year, Search Engine Land is delighted to celebrate the best of search marketing by rewarding the agencies, in-house teams, and individuals worldwide for delivering exceptional results.

Today, I’m excited to announce all 18 winners of the 11th annual Search Engine Land Awards.

The 2025 Search Engine Land Awards winners

Best Use Of AI Technology In Search Marketing

  • 15x ROAS with AI: How CAMP Digital Redefined Paid Search for Home Services

Best Overall PPC Initiative – Small Business

  • Anchor Rides – Post-Hurricane PPC Comeback (AIMCLEAR)

Best Overall PPC Initiative – Enterprise

  • ATRA & Jason Stone Injury Lawyers – Leveraging CRM Data to Scale Case Volume

Best Commerce Search Marketing Initiative – PPC

  • Adwise & Azerty – 126% uplift in profit from paid advertising & 1 percent point net margin business uplift by advanced cross-channel bucketing

Best Local Search Marketing Initiative – PPC

  • How We Crushed Belron’s Lead Target by 238% With an AI-Powered Local Strategy (Adviso)

Best B2B Search Marketing Initiative – PPC

  • Blackbird PPC and Customer.io: Advanced Data Integration to Drive 239% Revenue Increase with 12% Greater Lead Efficiency, with MMM Future-Proofing 2025 Growth

Best Integration Of Search Into Omnichannel Marketing

  • How NBC used search to drive +2,573 accounts in a Full-Funnel Media Push (Adviso)

Best Overall SEO Initiative – Small Business

  • Digital Hitmen & Elite Tune: The Toyota Shift That Delivered 678% SEO ROI

Best Overall SEO Initiative – Enterprise

  • 825 Million Clicks, Zero Content Edits: How Amsive Engineered MSN’s Technical SEO Turnaround

Best Commerce Search Marketing Initiative – SEO

  • Scaling Non-Branded SEO for Assouline to Drive +26% Organic Revenue Uplift (Block & Tam)

Best Local Search Marketing Initiative – SEO

  • Building an Unbeatable Foundation for Success: Using Hyperlocal SEO to Build Exceptional ROI (Digital Hitmen)

Best B2B Search Marketing Initiative – SEO

  • Page One, Pipeline Won: The B2B SEO Playbook That Turned 320 Visitors into $10.75M in Pipeline (LeadCoverage)

Agency Of The Year – PPC

  • Driving Growth Where Search Happens: Stella Rising’s Paid Search Transformation

Agency Of The Year – SEO

  • How Amsive Rescued MSN’s Global Visibility Through Enterprise Technical SEO at Scale

In-House Team Of The Year – SEO

  • How the American Cancer Society’s Lean SEO Team Drove Enterprise-Wide Consolidation and AI Search Visibility Gains for Cancer.org

Search Marketer Of The Year

  • Mike King, founder and CEO of iPullRank

Small Agency Of The Year – PPC

  • ATRA & Jason Stone Injury Lawyers – Leveraging CRM Data to Scale Case Volume

Small Agency Of The Year – SEO

  • From Zero to Top of the Leaderboard: Bloom Digital Drives Big Growth With Small SEO Budgets

“I’m going to SMX Next!”

Select winners of the 2025 Search Engine Land Awards will be invited to speak live at SMX Next during our two ask-me-anything-style sessions. Bring your burning SEO and PPC questions to ask this award-winning panel of search marketers!

Register here for SMX Next (it’s free) if you haven’t yet.

Congrats again to all the winners. And huge thank yous to everyone who entered the 2025 Search Engine Land Awards, the finalists, and our fantastic panel of judges for this year’s awards.

Why a lower CTR can be better for your PPC campaigns

Many PPC advertisers obsess over click-through rates, using them as a quick measure of ad performance.

But CTR alone doesn’t tell the whole story – what matters most is what happens after the click. That’s where many campaigns go wrong.

The problem with chasing high CTRs

Most advertisers assume the ad with the highest CTR is the best one: it should have a high Quality Score and attract lots of clicks.

However, lower-CTR ads often outperform higher-CTR ads in terms of total conversions and revenue.

If all I cared about was CTR, then I could write an ad:

  • “Free money.”
  • “Claim your free money today.”
  • “No strings attached.”

That ad would get an impressive CTR for many keywords, and I’d go out of business pretty quickly, giving away free money. 

When creating ads, we must consider:

  • The type of searchers we want to attract.
  • Whether those users are qualified.
  • The expectations the ad sets for the landing page.

I can take my free money ad and refine it:

  • “Claim your free money.”
  • “Explore college scholarships.”
  • “Download your free guide.”

I’ve now:

  • Told searchers they can get free money for college through scholarships if they download a guide.
  • Narrowed down my audience to people who are willing to apply for scholarships and willing to download a guide, presumably in exchange for some information.

If you focus solely on CTR and don’t consider attracting the right audience, your advertising will suffer. 

While this sentiment applies to both B2C and B2B companies, B2B companies must be exceptionally aware of how their ads appear to consumers versus business searchers. 

B2B companies must pre-qualify searchers

If you are advertising for a B2B company, you’ll often notice that CTR and conversion rates have an inverse relationship. As CTR increases, conversion rates decrease.

The most common reason for this phenomenon is that many B2B keywords are searched by both consumers and businesses. 

B2B companies must try to show that their products are for businesses, not consumers.

For instance, “safety gates” is a common search term. 

The majority of people looking to buy a safety gate are consumers who want to keep pets or babies out of rooms or away from stairs. 

However, safety gates and railings are important for businesses with factories, plants, or industrial sites. 

These two ads are both for companies that sell safety gates. The first ad’s headlines for Uline could be for a consumer or a business. 

It’s not until you look at the description that you realize this is for mezzanines and catwalks, which consumers don’t have in their homes. 

As many searchers do not read descriptions, this ad will attract both B2B and B2C searchers. 

OSHA compliance - Google Ads

The second ad mentions Industrial in the headline and follows that up with a mention of OSHA compliance in the description and the sitelinks. 

While both ads promote similar products, the second one will achieve a better conversion rate because it speaks to a single audience. 

We have a client who specializes in factory parts, and when we graph their conversion rates by Quality Score, we can see that as their Quality Score increases, their conversion rates decrease. 

They review their keywords and ads whenever they see a Quality Score of 5 or higher on any term that both B2B and B2C audiences search. 

This same logic does not apply to purely B2B search terms. 

Those terms often contain jargon or qualifying statements that signal someone is looking for B2B services and products. 

For those terms, B2B advertisers don’t have to spend ad characters weeding out B2C consumers and can focus their ads solely on B2B searchers.

How to balance CTR and conversion rates

As you are testing various ads to find your best pre-qualifying statements, it can be tricky to examine the metrics. Which one of these would be your best ad?

  • 15% CTR, 3% conversion rate.
  • 10% CTR, 7% conversion rate.
  • 5% CTR, 11% conversion rate.

When examining mixed metrics such as CTR and conversion rate, we can use additional metrics to determine our best ads. My two favorites are:

  • Conversion per impression (CPI): This is a simple formula dividing your conversions by the number of impressions (conversions/impressions). 
  • Revenue per impression (RPI): If you have variable checkout amounts, you can instead use your revenue metrics to decide your best ads by dividing your revenue by your impressions (revenue/impressions).

You can also multiply the results by 1,000 to make the numbers easier to digest instead of working with many decimal points. So, we might write: 

  • CPI = (conversions/impressions) x 1,000 

By using impression metrics, you can find the opportunity for a given set of impressions. 

CTR    Conversion rate    Impressions    Clicks    Conversions    CPI
15%    3%                 5,000          750       22.5           4.5
10%    7%                 4,000          400       28             7
5%     11%                4,500          225       24.75          5.5

By doing some simple math, we can see that option 2, with a 10% CTR and a 7% conversion rate, gives us the most total conversions.
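
Here is that math as a quick check, using the impression counts from the table above:

```python
# (CTR, conversion rate, impressions) for the three candidate ads from the table above.
ads = {"Ad 1": (0.15, 0.03, 5_000), "Ad 2": (0.10, 0.07, 4_000), "Ad 3": (0.05, 0.11, 4_500)}

for name, (ctr, cvr, impressions) in ads.items():
    clicks = impressions * ctr
    conversions = clicks * cvr
    cpi = conversions / impressions * 1_000   # conversions per 1,000 impressions
    print(f"{name}: {conversions:.2f} conversions, CPI = {cpi:.1f}")
# Ad 2 wins on both total conversions (28.00) and CPI (7.0).
```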

Dig deeper: CRO for PPC: Key areas to optimize beyond landing pages

Focus on your ideal customers

A good CTR helps bring more people to your website, improves your audience size, and can influence your Quality Scores.

However, high CTR ads can easily attract the wrong audience, leading you to waste your budget.

As you are creating headlines, consider your audience. 

  • Who are they? 
  • Do non-audience people search for your keywords?
    • How do you dissuade users who don’t fit your audience from clicking on your ads? 
  • How do you attract your qualified audience?
  • Are your ads setting proper landing page expectations?

By considering each of these questions as you create ads, you can find ads that speak to the type of users you want to attract to your site. 

These ads rarely have your best CTRs. Instead, they balance the appeal of high CTRs with pre-qualifying statements that ensure the clicks you receive have the potential to turn into your next customer. 

The agentic web is here: Why NLWeb makes schema your greatest SEO asset

The web’s purpose is shifting. Once a link graph – a network of pages for users and crawlers to navigate – it’s rapidly becoming a queryable knowledge graph.

For technical SEOs, that means the goal has evolved from optimizing for clicks to optimizing for visibility and even direct machine interaction.

Enter NLWeb – Microsoft’s open-source bridge to the agentic web

At the forefront of this evolution is NLWeb (Natural Language Web), an open-source project developed by Microsoft. 

NLWeb simplifies the creation of natural language interfaces for any website, allowing publishers to transform existing sites into AI-powered applications where users and intelligent agents can query content conversationally – much like interacting with an AI assistant.

Developers suggest NLWeb could play a role similar to HTML in the emerging agentic web.

Its open-source, standards-based design makes it technology-agnostic, ensuring compatibility across vendors and large language models (LLMs). 

This positions NLWeb as a foundational framework for long-term digital visibility.

Schema.org is your knowledge API: Why data quality is the NLWeb foundation

NLWeb proves that structured data isn’t just an SEO best practice for rich results – it’s the foundation of AI readiness. 

Its architecture is designed to convert a site’s existing structured data into a semantic, actionable interface for AI systems. 

In the age of NLWeb, a website is no longer just a destination. It’s a source of information that AI agents can query programmatically.

The NLWeb data pipeline

The technical requirements confirm that a high-quality schema.org implementation is the primary key to entry.

Data ingestion and format

The NLWeb toolkit begins by crawling the site and extracting the schema markup. 

The schema.org JSON-LD format is the preferred and most effective input for the system. 

This means the protocol consumes every detail, relationship, and property defined in your schema, from product types to organization entities. 

For any data not in JSON-LD, such as RSS feeds, NLWeb is engineered to convert it into schema.org types for effective use.
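
For illustration, this is the kind of JSON-LD NLWeb extracts and ingests, shown here as a Python dictionary; the product, brand, and values are placeholders, not a required template:

```python
import json

# Hypothetical Product markup of the kind NLWeb would crawl and ingest; all values are placeholders.
product_schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Industrial Safety Gate",
    "description": "Self-closing safety gate for mezzanines and catwalks.",
    "brand": {"@type": "Organization", "name": "Example Co", "sameAs": "https://www.example.com"},
    "offers": {
        "@type": "Offer",
        "price": "249.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

print(json.dumps(product_schema, indent=2))  # the JSON-LD body a crawler would find in the page
```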

Semantic storage

Once collected, this structured data is stored in a vector database. This element is critical because it moves the interaction beyond traditional keyword matching. 

Vector databases represent text as mathematical vectors, allowing the AI to search based on semantic similarity and meaning. 

For example, the system can understand that a query using the term “structured data” is conceptually the same as content marked up with “schema markup.” 

This capacity for conceptual understanding is absolutely essential for enabling authentic conversational functionality.
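
To illustrate the mechanism, here is a minimal sketch of similarity scoring over embedding vectors; the vectors are toy values, not output from any real model:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Semantic closeness of two embedding vectors, independent of their length."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-dimensional vectors standing in for real embeddings (hundreds of dimensions in practice).
vectors = {
    "structured data": np.array([0.92, 0.31, 0.10]),
    "schema markup": np.array([0.89, 0.35, 0.12]),
    "pizza recipes": np.array([0.05, 0.20, 0.97]),
}

query = vectors["structured data"]
for phrase, vec in vectors.items():
    print(f"{phrase!r}: {cosine_similarity(query, vec):.2f}")
# "schema markup" scores close to 1.0 while "pizza recipes" scores low,
# which is how a vector store matches concepts rather than exact keywords.
```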

Protocol connectivity

The final layer is the connectivity provided by the Model Context Protocol (MCP). 

Every NLWeb instance operates as an MCP server, an emerging standard for packaging and consistently exchanging data between various AI systems and agents. 

MCP is currently the most promising path forward for ensuring interoperability in the highly fragmented AI ecosystem.

The ultimate test of schema quality

Since NLWeb relies entirely on crawling and extracting schema markup, the precision, completeness, and interconnectedness of your site’s content knowledge graph determine success.

The key challenge for SEO teams is addressing technical debt. 

Custom, in-house solutions to manage AI ingestion are often high-cost, slow to adopt, and create systems that are difficult to scale or incompatible with future standards like MCP. 

NLWeb addresses the protocol’s complexity, but it cannot fix faulty data. 

If your structured data is poorly maintained, inaccurate, or missing critical entity relationships, the resulting vector database will store flawed semantic information. 

This leads inevitably to suboptimal outputs, potentially resulting in inaccurate conversational responses or “hallucinations” by the AI interface.

Robust, entity-first schema optimization is no longer just a way to win a rich result; it is the fundamental requirement for entry to the agentic web. 

By leveraging the structured data you already have, NLWeb allows you to unlock new value without starting from scratch, thereby future-proofing your digital strategy.

NLWeb vs. llms.txt: Protocol for action vs. static guidance

The need for AI crawlers to process web content efficiently has led to multiple proposed standards. 

A comparison between NLWeb and the proposed llms.txt file illustrates a clear divergence between dynamic interaction and passive guidance.

The llms.txt file is a proposed static standard designed to improve the efficiency of AI crawlers by:

  • Providing a curated, prioritized list of a website’s most important content – typically formatted in markdown.
  • Attempting to solve the legitimate technical problems of complex, JavaScript-loaded websites and the inherent limitations of an LLM’s context window.

In sharp contrast, NLWeb is a dynamic protocol that establishes a conversational API endpoint. 

Its purpose is not just to point to content, but to actively receive natural language queries, process the site’s knowledge graph, and return structured JSON responses using schema.org. 

NLWeb fundamentally changes the relationship from “AI reads the site” to “AI queries the site.”

How the two compare:

  • Primary goal
    • NLWeb: Enables dynamic, conversational interaction and structured data output.
    • llms.txt: Improves crawler efficiency and guides static content ingestion.
  • Operational model
    • NLWeb: API/protocol (active endpoint).
    • llms.txt: Static text file (passive guidance).
  • Data format used
    • NLWeb: Schema.org JSON-LD.
    • llms.txt: Markdown.
  • Adoption status
    • NLWeb: Open project; connectors available for major LLMs, including Gemini, OpenAI, and Anthropic.
    • llms.txt: Proposed standard; not adopted by Google, OpenAI, or other major LLMs.
  • Strategic advantage
    • NLWeb: Unlocks existing schema investment for transactional AI uses, future-proofing content.
    • llms.txt: Reduces computational cost for LLM training/crawling.

The market’s preference for dynamic utility is clear. Despite addressing a real technical challenge for crawlers, llms.txt has failed to gain traction so far. 

NLWeb’s functional superiority stems from its ability to enable richer, transactional AI interactions.

It allows AI agents to dynamically reason about and execute complex data queries using structured schema output.

The strategic imperative: Mandating a high-quality schema audit

While NLWeb is still an emerging open standard, its value is clear. 

It maximizes the utility and discoverability of specialized content that often sits deep in archives or databases. 

This value is realized through operational efficiency and stronger brand authority, rather than immediate traffic metrics.

Several organizations are already exploring how NLWeb could let users ask complex questions and receive intelligent answers that synthesize information from multiple resources – something traditional search struggles to deliver. 

The ROI comes from reducing user friction and reinforcing the brand as an authoritative, queryable knowledge source.

For website owners and digital marketing professionals, the path forward is undeniable: mandate an entity-first schema audit.

Because NLWeb depends on schema markup, technical SEO teams must prioritize auditing existing JSON-LD for integrity, completeness, and interconnectedness. 

Minimalist schema is no longer enough – optimization must be entity-first.

Publishers should ensure their schema accurately reflects the relationships among all entities, products, services, locations, and personnel to provide the context necessary for precise semantic querying. 
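
As a starting point for such an audit, a small script can flag JSON-LD objects that are missing expected properties. A minimal sketch with illustrative requirements; adjust them to your own entity model:

```python
import json

# Illustrative completeness rules per schema type; tailor these to your own entity model.
REQUIRED_PROPERTIES = {
    "Organization": {"name", "url", "sameAs"},
    "Product": {"name", "description", "brand", "offers"},
    "Article": {"headline", "author", "publisher", "datePublished"},
}

def audit_jsonld(raw: str) -> list[str]:
    """Return human-readable gaps found in one JSON-LD block."""
    node = json.loads(raw)
    node_type = node.get("@type", "Unknown")
    required = REQUIRED_PROPERTIES.get(node_type, set())
    return [f"{node_type} is missing: {prop}" for prop in sorted(required - node.keys())]

sample = '{"@context": "https://schema.org", "@type": "Product", "name": "Example Gate"}'
for issue in audit_jsonld(sample):
    print(issue)
# Product is missing: brand
# Product is missing: description
# Product is missing: offers
```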

The transition to the agentic web is already underway, and NLWeb offers the most viable open-source path to long-term visibility and utility. 

It’s a strategic necessity to ensure your organization can communicate effectively as AI agents and LLMs begin integrating conversational protocols for third-party content interaction.

90% of businesses fear losing SEO visibility as AI reshapes search

AI search evolution

Nearly 90% of businesses are worried about losing organic visibility as AI transforms how people find information, according to a new survey by Ann Smarty.

Why we care. The shift from search results to AI-generated answers seems to be happening faster than many expected, threatening the foundation of how companies are found online and drive sales. AI is changing the customer journey and forcing an SEO evolution.

By the numbers. Most prefer to keep the “SEO” label – with “SEO for AI” (49%) and “GEO” (41%) emerging as leading terms for this new discipline.

  • 87.8% of businesses said they’re worried about their online findability in the AI era.
  • 85.7% are already investing or plan to invest in AI/LLM optimization.
  • 61.2% plan to increase their SEO budgets due to AI.

Brand over clicks. Three in four businesses (75.5%) said their top priority is brand visibility in AI-generated answers – even when there’s no link back to their site.

  • Just 14.3% prioritize being cited as a source (which could drive traffic).
  • A small group said they need both.

Top concerns. “Not being able to get my business found online” ranked as the biggest fear, followed by the total loss of organic search and loss of traffic attribution.

About the survey. Smarty surveyed 300+ in-house marketers and business owners, mostly from medium and enterprise companies, with nearly half representing ecommerce brands.

Yes, but. While AI search is booming, multiple studies suggest that ChatGPT and LLM referrals convert worse than Google Search – and AI systems won’t have parity with organic search within the next year.

The survey. SEO for AI (GEO) Statistics: 90% of Businesses Are Worried About the Future of SEO and Organic Findability Due to AI / LLMs

Yelp’s new tools help brands connect faster and engage customers in real time

Yelp just unveiled its 2025 Fall Product Release, a sweeping AI-driven update that turns the local discovery platform into a more conversational, visual, and intelligent experience.

Driving the news. Yelp’s rollout includes over 35 new AI-powered features, headlined by:

  • Yelp Assistant, an upgraded chatbot that instantly answers customer questions about restaurants, shops, or attractions—citing reviews and photos.
  • Menu Vision, which lets users scan menus to see photos, reviews, and dish details in real time.
  • Yelp Host and Yelp Receptionist, AI-powered call solutions that handle reservations, collect leads, and answer questions with natural, customizable voices.
  • Natural language and voice search, allowing users to search conversationally (“best vegan sushi near me”) for smarter, more relevant results.
  • Popular Offerings, which highlights a business’s most-mentioned services, products, or experiences.

Why we care. Yelp’s new AI tools make it easier to capture and convert high-intent customers at the moment of discovery. With features like Yelp Assistant, AI-powered call handling, and natural language search, businesses can respond instantly, stay visible in smarter search results, and never miss a lead. The update turns Yelp from a review site into an always-on customer engagement platform—giving advertisers more efficient ways to connect, communicate, and close.

What’s next. Yelp plans to make its AI assistant the primary interface for discovery and transactions in 2026, merging instant answers, booking, and customer messaging into one seamless experience.

The bottom line. Yelp’s latest AI release gives brands smarter tools to engage customers in real time—turning everyday search and service interactions into instant connections.
