AI startup Anthropic (developer of Claude) has reportedly reached annualized revenue of $850 million and forecasts $2.2 billion in revenue for 2025.
Understanding the difference between search bots and scrapers is crucial for SEO.
Website crawlers fall into two categories:
First-party bots, which you use to audit and optimize your own site.
Third-party bots, which crawl your site externally – sometimes to index your content (like Googlebot) and other times to extract data (like competitor scrapers).
This guide breaks down first-party crawlers that can improve your site’s technical SEO and third-party bots, exploring their impact and how to manage them effectively.
First-party crawlers: Mining insights from your own website
Crawlers can help you identify ways to improve your technical SEO.
Enhancing your site’s technical foundation, architectural depth, and crawl efficiency is a long-term strategy for increasing search traffic.
Occasionally, you may uncover major issues – such as a robots.txt file blocking all search bots on a staging site that was left active after launch.
Fixing such problems can lead to immediate improvements in search visibility.
Now, let’s explore some crawl-based technologies you can use.
Googlebot via Search Console
You don’t work in a Google data center, so you can’t launch Googlebot to crawl your own site.
However, by verifying your site with Google Search Console (GSC), you can access Googlebot’s data and insights. (Follow Google’s guidance to set yourself up on the platform.)
GSC is free to use and provides valuable information – especially about page indexing.
Technically, this is third-party data from Google, but only verified users can access it for their site.
In practice, it functions much like the data from a crawl you run yourself.
Screaming Frog SEO Spider
Screaming Frog is a desktop application that runs locally on your machine to generate crawl data for your website.
They also offer a log file analyzer, which is useful if you have access to server log files. For now, we’ll focus on Screaming Frog’s SEO Spider.
At $259 per year, it’s highly cost-effective compared to other tools that charge this much per month.
However, because it runs locally, crawling stops if you turn off your computer – it doesn’t operate in the cloud.
Still, the data it provides is fast, accurate, and ideal for those who want to dive deeper into technical SEO.
From the main interface, you can quickly launch your own crawls.
Once completed, export Internal > All data to an Excel-readable format and get comfortable handling and pivoting the data for deeper insights.
Screaming Frog also offers many other useful export options.
It provides reports and exports for internal linking, redirects (including redirect chains), insecure content (mixed content), and more.
The drawback is it requires more hands-on management, and you’ll need to be comfortable working with data in Excel or Google Sheets to maximize its value.
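As a quick sketch of that kind of pivot (the column names below are assumptions based on a typical "Internal > All" export, and the rows are invented), you can tally URLs per status code with nothing but the standard library:

```python
import csv
from collections import Counter
from io import StringIO

# Hypothetical excerpt of a Screaming Frog "Internal > All" export;
# a real file has many more columns and rows.
EXPORT = """Address,Status Code,Indexability
https://example.com/,200,Indexable
https://example.com/old-page,301,Non-Indexable
https://example.com/missing,404,Non-Indexable
https://example.com/blog/,200,Indexable
"""

def status_counts(csv_text):
    """Pivot the export: count URLs per HTTP status code."""
    reader = csv.DictReader(StringIO(csv_text))
    return Counter(row["Status Code"] for row in reader)

print(status_counts(EXPORT))  # Counter({'200': 2, '301': 1, '404': 1})
```

The same grouping is what a pivot table in Excel or Google Sheets gives you; the point is simply that the export is plain CSV, so any tool you're comfortable with can slice it.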
Ahrefs Site Audit
Ahrefs is a comprehensive cloud-based platform that includes a technical SEO crawler within its Site Audit module.
To use it, set up a project, configure the crawl parameters, and launch the crawl to generate technical SEO insights.
Once the crawl is complete, you’ll see an overview that includes a technical SEO health rating (0-100) and highlights key issues.
You can click on these issues for more details, and a helpful button appears as you dive deeper, explaining why certain fixes are necessary.
Since Ahrefs runs in the cloud, your machine’s status doesn’t affect the crawl. It continues even if your PC or Mac is turned off.
Compared to Screaming Frog, Ahrefs provides more guidance, making it easier to turn crawl data into actionable SEO insights.
However, it’s less cost-effective. If you don’t need its additional features, like backlink data and keyword research, it may not be worth the expense.
Semrush Site Audit
Next is Semrush, another powerful cloud-based platform with a built-in technical SEO crawler.
Like Ahrefs, it also provides backlink analysis and keyword research tools.
Semrush offers a technical SEO health rating, which improves as you fix site issues. Its crawl overview highlights errors and warnings.
As you explore, you’ll find explanations of why fixes are needed and how to implement them.
Both Semrush and Ahrefs have robust site audit tools, making it easy to launch crawls, analyze data, and provide recommendations to developers.
While both platforms are pricier than Screaming Frog, they excel at turning crawl data into actionable insights.
Semrush is slightly more cost-effective than Ahrefs, making it a solid choice for those new to technical SEO.
Googlebot
Google crawls with separate mobile and desktop crawlers, but both include “Googlebot/2.1” in their user-agent strings.
If you analyze your server logs, you can isolate Googlebot traffic to see which areas of your site it crawls most frequently.
This can help identify technical SEO issues, such as pages that Google isn’t crawling as expected.
To analyze log files, you can create spreadsheets to process and pivot the data from raw .txt or .csv files. If that seems complex, Screaming Frog’s Log File Analyzer is a useful tool.
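If you'd rather script it than pivot in a spreadsheet, a minimal sketch (the log lines below are made-up samples in the common Apache/nginx combined format) looks like this:

```python
import re
from collections import Counter

# Hypothetical server log lines in combined log format.
LOG_LINES = [
    '66.249.66.1 - - [25/Feb/2025:10:00:00 +0000] "GET /blog/ HTTP/1.1" 200 5120 "-" '
    '"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '203.0.113.7 - - [25/Feb/2025:10:00:05 +0000] "GET /pricing HTTP/1.1" 200 2048 "-" '
    '"Mozilla/5.0 (Windows NT 10.0) Chrome/120.0"',
    '66.249.66.2 - - [25/Feb/2025:10:00:09 +0000] "GET /blog/post-1 HTTP/1.1" 200 4096 "-" '
    '"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
]

REQUEST = re.compile(r'"(?:GET|POST|HEAD) (\S+) HTTP')

def googlebot_paths(lines):
    """Count requested paths for entries whose user agent claims to be Googlebot."""
    counts = Counter()
    for line in lines:
        if "Googlebot" in line:
            match = REQUEST.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

print(googlebot_paths(LOG_LINES))
```

From the resulting counts you can see which sections of the site Googlebot visits most, and which pages it rarely or never requests.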
In most cases, you shouldn’t block Googlebot, as this can negatively affect SEO.
However, if Googlebot gets stuck in highly dynamic site architecture, you may need to block specific URLs via robots.txt. Use this carefully – overuse can harm your rankings.
Fake Googlebot traffic
Not all traffic claiming to be Googlebot is legitimate.
Many crawlers and scrapers allow users to spoof user-agent strings, meaning they can disguise themselves as Googlebot to bypass crawl restrictions.
For example, Screaming Frog can be configured to impersonate Googlebot.
However, many websites – especially those hosted on large cloud networks like AWS – can differentiate between real and fake Googlebot traffic.
They do this by checking if the request comes from Google’s official IP ranges.
If a request claims to be Googlebot but originates outside of those ranges, it’s likely fake.
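You can run the same check yourself. Google's documented verification procedure is a reverse DNS lookup on the requesting IP, a check that the hostname belongs to googlebot.com or google.com, and a forward lookup to confirm the hostname resolves back to the same IP. A sketch in Python (the network calls only succeed at runtime, of course):

```python
import socket

def host_is_google(hostname):
    """True if the hostname belongs to Google's crawler domains."""
    return hostname.endswith((".googlebot.com", ".google.com"))

def verify_googlebot(ip):
    """Reverse-DNS the IP, check the domain, then forward-confirm the
    hostname resolves back to the same IP."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)
    except socket.herror:
        return False
    if not host_is_google(hostname):
        return False
    try:
        return ip in socket.gethostbyname_ex(hostname)[2]
    except socket.gaierror:
        return False
```

The suffix check matters: a host like fake-googlebot.example.com fails it even though the name contains "googlebot".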
Other search engines
In addition to Googlebot, other search engines may crawl your site – Bing’s Bingbot, for example.
In your robots.txt file, you can create wildcard rules to disallow all search bots or specify rules for particular crawlers and directories.
However, keep in mind that robots.txt entries are directives, not commands – meaning they can be ignored.
Unlike redirects, which prevent a server from serving a resource, robots.txt is merely a strong signal requesting bots not to crawl certain areas.
Some crawlers may disregard these directives entirely.
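For reference, a minimal robots.txt sketch combining a wildcard rule with a bot-specific rule might look like this (the directory and domain are placeholders):

```
# Ask every crawler to stay out of a staging directory.
User-agent: *
Disallow: /staging/

# Give one specific bot its own, stricter rule.
User-agent: AhrefsBot
Disallow: /

Sitemap: https://www.example.com/sitemap.xml
```

Remember these remain requests, not enforcement: well-behaved bots honor them, and anything else requires server-side blocking.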
Screaming Frog’s Crawl Bot
Screaming Frog typically identifies itself with a user agent like Screaming Frog SEO Spider/21.4.
The “Screaming Frog SEO Spider” text is always included, followed by the version number.
However, Screaming Frog allows users to customize the user-agent string, meaning crawls can appear to be from Googlebot, Chrome, or another user-agent.
This makes it difficult to block Screaming Frog crawls.
While you can block user agents containing “Screaming Frog SEO Spider,” an operator can simply change the string.
If you suspect unauthorized crawling, you may need to identify and block the IP range instead.
This requires server-side intervention from your web developer, as robots.txt cannot block IPs – especially since Screaming Frog can be configured to ignore robots.txt directives.
Be cautious, though. It might be your own SEO team conducting a crawl to check for technical SEO issues.
Before blocking Screaming Frog, try to determine the source of the traffic, as it could be an internal employee gathering data.
Ahrefs Bot
Ahrefs has a crawl bot and a site audit bot for crawling.
When Ahrefs crawls the web for its own index, you’ll see traffic from AhrefsBot/7.0.
When an Ahrefs user runs a site audit, traffic will come from AhrefsSiteAudit/6.1.
Both bots respect robots.txt disallow rules, per Ahrefs’ documentation.
If you don’t want your site to be crawled, you can block Ahrefs using robots.txt.
Alternatively, your web developer can deny requests from user agents containing “AhrefsBot” or “AhrefsSiteAudit”.
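If you go the server route, a minimal nginx sketch (where exactly it belongs in your server block depends on your configuration, so treat this as a starting point for your developer) could be:

```nginx
# Return 403 Forbidden when the user-agent string contains either Ahrefs bot name.
if ($http_user_agent ~* (AhrefsBot|AhrefsSiteAudit)) {
    return 403;
}
```

Unlike a robots.txt directive, this actually refuses the request rather than asking the bot to stay away.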
Semrush Bot
Like Ahrefs, Semrush operates multiple crawlers with different user-agent strings.
Be sure to review all available information to identify them properly.
The two most common user-agent strings you’ll encounter are:
SemrushBot: Semrush’s general web crawler, used to improve its index.
SiteAuditBot: Used when a Semrush user initiates a site audit.
Rogerbot, Dotbot, and other crawlers
Moz, another widely used cloud-based SEO platform, deploys Rogerbot to crawl websites for technical insights.
Moz also operates Dotbot, a general web crawler. Both can be blocked via your robots.txt file if needed.
Another crawler you may encounter is MJ12Bot, used by the Majestic SEO platform. Typically, it’s nothing to worry about.
Non-SEO crawl bots
Not all crawlers are SEO-related. Many social platforms operate their own bots.
Meta (Facebook’s parent company) runs multiple crawlers, while Twitter previously used Twitterbot – and it’s likely that X now deploys a similar, though less-documented, system.
Crawlers continuously scan the web for data. Some can benefit your site, while others should be monitored through server logs.
Understanding search bots, SEO crawlers and scrapers for technical SEO
Managing both first-party and third-party crawlers is essential for maintaining your website’s technical SEO.
Key takeaways
First-party crawlers (e.g., Screaming Frog, Ahrefs, Semrush) help audit and optimize your own site.
Googlebot insights via Search Console provide crucial data on indexation and performance.
Third-party crawlers (e.g., Bingbot, AhrefsBot, SemrushBot) crawl your site for search indexing or competitive analysis.
Managing bots via robots.txt and server logs can help control unwanted crawlers and improve crawl efficiency in specific cases.
Data handling skills are crucial for extracting meaningful insights from crawl reports and log files.
By balancing proactive auditing with strategic bot management, you can ensure your site remains well-optimized and efficiently crawled.
Budgeting for paid ad campaigns has long been a static process – set a monthly budget, monitor spending, and adjust incrementally as needed.
This method works for industries with stable demand and predictable conversion rates but falls short in dynamic, competitive markets.
Still, static budgets aren’t obsolete. In industries with long sales cycles, consistent conversion trends, or strict financial planning – like B2B SaaS and healthcare – planned budgets remain essential.
The key isn’t choosing between static and dynamic budgeting; it’s knowing when and how to adjust PPC spend using data-driven signals.
The role of Smart Bidding and Performance Max in budgeting
Automation has changed our budgeting strategies, but it hasn’t eliminated the need for human oversight.
While Google’s Smart Bidding and Performance Max (PMax) campaigns help optimize performance, they do not fully control budget allocation the way some advertisers may assume.
Smart Bidding: What it does (and doesn’t do) for budgeting
Smart Bidding (i.e., Target ROAS, Target CPA, Maximize Conversions, and Maximize Conversion Value) uses real-time auction signals to adjust bids but does not shift budgets between campaigns.
If a campaign has an insufficient budget, Smart Bidding won’t automatically pull spend from another campaign; this still requires manual adjustments or automated budget rules.
To overcome the budget allocation limitations of Smart Bidding, use:
Portfolio bidding strategies: Setting bid strategies at the campaign level lets you use a common bidding approach (e.g., Target ROAS or Target CPA) across multiple campaigns. This enables more efficient spending across campaigns with similar goals without manual adjustments.
Shared budgets: Assigning a single budget across multiple campaigns ensures high-performing campaigns receive adequate funding while preventing overspending on lower-performing ones.
Performance Max: A black box for budget allocation?
PMax automates asset and bid optimization across multiple Google properties (Search, Display, YouTube, Discovery, etc.), but you don’t control which channel your budget goes to.
Google’s algorithm decides how much to allocate to each network, which can sometimes result in excessive spend on lower-performing placements like Display rather than Search.
Instead of relying solely on PMax, run separate Search campaigns alongside it to ensure an adequate budget is allocated to high-intent traffic.
When setting a tCPA or tROAS, allow a 10-20% margin for flexibility to help Google’s algorithm optimize effectively.
For example, if your ideal tCPA is $100, setting it to $115 gives Google room to secure conversions that may exceed your target while still delivering strong performance.
Since tCPA operates as an average, not every lead will cost the same amount.
Once you are consistently hitting your target, gradually lower the tCPA (or raise the tROAS) to improve budget efficiency without restricting conversions.
Underfunding efficient campaigns
If a campaign has a long conversion delay (e.g., B2B lead gen), Smart Bidding may incorrectly shift the budget elsewhere before enough data accumulates.
Solution
Extend conversion windows in Smart Bidding settings. The default is 30 days, but advertisers can adjust the window from one day up to 90 days.
Manually monitor lagging conversions and adjust budgets proactively.
Lack of budget control in PMax campaigns
Performance Max doesn’t allow advertisers to set separate budgets for Search, YouTube, and Display.
As a result, Google may favor low-cost clicks from Display rather than higher-intent Search traffic – and advertiser sentiment is that it does.
Solution
Run branded and high-intent non-branded Search campaigns separately to control budget spend on direct-response traffic.
Apply negative keywords via account-level lists. While PMax doesn’t allow campaign-level negatives, account-level negative keyword lists can help block irrelevant or redundant queries. These lists are capped at 100 negative keywords for PMax; Google has said it created the limit because PMax isn’t meant to be a heavily restricted campaign type.
By monitoring your search impression share, you can identify when branded queries are slipping into PMax instead of the dedicated Search campaign. This will allow you to adjust bid strategies and audience signals accordingly.
Use audience exclusions in PMax to prevent excessive Display spend on irrelevant audiences.
Advanced tip
Tools like Optmyzr can help advertisers determine how their budget is allocated in PMax with the PMax Channel Distribution feature.
Although we may not have much control over the allocation, we can at least be aware of it.
How to use first-party data to improve budget allocation
An underutilized strategy for improving budgeting is leveraging first-party data to allocate spend toward high-value audiences.
As privacy restrictions tighten and tracking capabilities decline, it’s important to shift your focus from broad automated bidding to first-party audience targeting.
Use customer match to prioritize high-value audiences
Instead of spending equally across all users, advertisers can upload Customer Match lists (based on past purchasers, high-LTV customers, or CRM data) and adjust budgets accordingly.
Example
If historical data shows that repeat customers generate a higher ROAS than new users, more budget should be allocated to remarketing campaigns targeting Customer Match audiences.
Advanced tip
To maximize campaign efficiency, consider using value-based bidding (VBB) to ensure your budget prioritizes high-value conversions rather than just the volume of leads.
By assigning different conversion values based on customer lifetime value (LTV), using Customer Match, GA4 insights, or CRM data, you can direct more spending toward audiences that generate the highest long-term revenue.
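One way to picture value-based bidding (the segment names and LTV figures below are entirely hypothetical) is a lookup that reports a different conversion value per audience segment, so the bidding algorithm has something to optimize toward:

```python
# Hypothetical average LTV per audience segment, e.g. derived from CRM or GA4 data.
SEGMENT_LTV = {
    "repeat_buyer": 1200.0,
    "new_customer": 300.0,
    "trial_signup": 75.0,
}

def conversion_value(segment, default=50.0):
    """Conversion value to report to the ad platform, so value-based
    bidding spends more on segments with higher lifetime value."""
    return SEGMENT_LTV.get(segment, default)

print(conversion_value("repeat_buyer"))  # 1200.0
```

Reporting 1200 for a repeat buyer and 75 for a trial signup tells the platform those conversions are not interchangeable, which is the whole premise of VBB.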
Changes to customer match lists
Google recently introduced two key updates to Customer Match lists that will impact how advertisers manage audience data.
As of Jan. 13, stricter policy enforcement means you must comply with Google’s advertising standards. Violations could lead to restricted access or account suspension after a seven-day warning.
To stay compliant and maximize audience targeting, be sure to regularly refresh your lists and align your data collection with Google’s updated policies.
Apply GA4 data for smarter budget scaling
Google Analytics 4 (GA4) provides insights into conversion paths, high-value audience segments, and multi-channel attribution.
Instead of relying solely on Google Ads conversion tracking, use GA4 to determine which audience segments should receive higher budgets.
Best practice
Create custom lists/audiences around users with high engagement signals (repeat visits, add-to-cart actions, lead form interactions) and allocate more budget toward these users.
Create custom lists/audiences around low-intent users who bounce after viewing one page. To reduce wasted ad spend, decrease your bids or exclude them.
Instead of distributing the budget equally across all hours, allocate more to high-converting time periods.
Example
If the lead volume is highest between 8 a.m. and 2 p.m., increase bids and budget during these hours.
If your business hours are from 12 p.m. to 10 p.m., lower your bids during the hours you aren’t operating to prevent unnecessary ad expenses.
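A back-of-the-envelope way to express that reallocation (the hours and conversion counts below are made up) is to weight each hour’s share of the daily budget by its historical conversions:

```python
# Hypothetical hourly conversion counts: hour of day -> conversions.
conversions_by_hour = {8: 12, 9: 15, 10: 18, 11: 16, 12: 14, 13: 11, 20: 2, 23: 1}

DAILY_BUDGET = 500.0

def hourly_budget(conversions, daily_budget):
    """Split a daily budget in proportion to each hour's historical conversions."""
    total = sum(conversions.values())
    return {hour: round(daily_budget * count / total, 2)
            for hour, count in conversions.items()}

allocation = hourly_budget(conversions_by_hour, DAILY_BUDGET)
print(allocation)
```

Hours with no recorded conversions simply get no share, which mirrors lowering bids outside operating hours.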
Industry-specific budgeting approaches
No two industries are the same, so no two budgeting approaches should be the same either. Here’s how different business models should think about budget allocation:
B2B lead generation
Budgeting for B2B lead generation requires a long-term view.
As such, budget pacing should be planned over months. Don’t make frequent (e.g., daily or weekly) adjustments that could cause instability in the account.
Because the cycle is longer, conversions often take some time to materialize, so conversion delays should be considered when evaluating Smart Bidding performance.
If budgets are adjusted too soon based on incomplete data, campaigns may be underfunded before the true impact of conversions is realized.
Ecommerce
Seasonality plays a large role in budgeting decisions for ecommerce brands.
Aggressively increase budgets ahead of major sales events, like Black Friday, Cyber Monday, and holiday shopping, to capitalize on higher purchase intent.
Reacting to performance mid-season will likely result in missed opportunities if the budget is exhausted too early.
Also, rather than spreading spend evenly across all potential buyers, prioritize high-LTV customers using Customer Match lists and past purchase data.
This ensures that ad spend is directed toward audiences likely to generate repeat purchases and higher average order values (AOVs).
Local businesses
Budget allocation for local businesses should be narrowly geo-targeted.
Instead of distributing spend evenly across an entire service area (although you should have some presence in the area), analyze past geographic conversion data to determine which locations typically generate the highest return.
The budget should then be allocated accordingly, ensuring that high-performing areas receive the majority of ad spend.
Another important factor is setting up call tracking.
Since many conversions happen over the phone rather than through online forms, integrate call-tracking data to identify which campaigns generate high-quality leads.
By analyzing call duration, lead quality, and customer inquiries, you can refine budget allocation to optimize for calls that convert into sales or appointments.
Each industry requires a different budgeting approach tailored to its sales cycles, customer behavior, and conversion patterns.
Understanding these nuances ensures that your PPC budgets are allocated strategically for maximum impact, whether it’s long-term pacing for B2B, seasonal surges for ecommerce, or localized targeting for service-based businesses.
A smarter approach to budgeting
Budgeting for your PPC campaigns doesn’t involve choosing between static and dynamic models; it involves strategically using both.
Smart Bidding and PMax improve efficiency but require human oversight.
First-party data should play a bigger role in spend allocation.
Budget scaling should be incremental and structured.
Industry-specific needs should dictate budget pacing strategies.
The best budgets are adaptable, data-driven, and aligned with long-term profitability rather than short-term spend fluctuations.
Those who master this approach will gain a competitive advantage in an increasingly automated advertising landscape.
There are numerous reports that the Google Search Console API is delayed and not showing data newer than this past Thursday, February 20. If you use this API for your own tools, or pull this data into Looker Studio reports, BigQuery, or other tools, your reports may be delayed.
More details. The delays started around last Wednesday, and some now say data for Thursday is slowly coming in. Normally, data from the Search Console API is far more current than this.
The web interface is not impacted, so you can get data from going to Google Search Console directly.
Some are saying data for Thursday is now coming in, but others are not sure yet.
Google has not commented on this issue yet.
Why we care. If you are noticing weird data in your tools or reports and that data generally comes from Google Search Console’s API, this is why.
I suspect the data flow will return to normal in the coming days, but if you run reports and see weirdness in them, this is your explanation.
In the meantime, if you need that data, access it directly through the web interface.
If you feel like you’re being pulled in different directions with your SEO program, you aren’t alone.
How do you know where to focus first for the most impact? And when that’s done, what do you do next?
It can be challenging to decide which SEO tasks to prioritize because they all impact the end user in some way – but some more than others. This is where discernment comes into play.
This article will help you build a path to get your SEO program organized from point A to point B and figure out how to prioritize tasks to get ROI quicker.
Frameworks for identifying high-impact SEO opportunities
When every SEO task feels urgent, knowing where to focus first can make or break your strategy. These three frameworks can help you prioritize what moves the needle.
1. Technical SEO audit
A technical SEO audit is your roadmap for identifying and fixing the issues that directly impact search visibility and user experience.
The right audit reveals the most urgent technical barriers to ranking – and helps you prioritize based on impact.
But not all audits are created equal. Here’s a breakdown of the different types:
Basic SEO audit
This is where automated software scans your site and flags common SEO issues. While the insights can be helpful, they come in a generic, one-size-fits-all report.
This type of audit is ideal if you’re working with a tight budget or just want to get a basic overview before bringing in an expert.
It’s never a bad idea, but it won’t provide an in-depth analysis.
Mid-level SEO audit
Here, you can expect a professional SEO specialist or vendor to go beyond automated reports and offer additional insights that software alone might miss.
While these can pinpoint issues that require attention, they may not provide detailed solutions.
This approach is useful when you need to identify potential problem areas but aren’t ready for a full-scale SEO strategy.
Comprehensive SEO audit
This is a full technical audit conducted by experienced technical SEOs.
This deep dive involves top-tier tools, data analysis, and an in-depth website and SEO review by skilled analysts specializing in technical SEO and business strategy.
Tools assist the process, but the real value comes from expert analysis, which makes it a time-intensive but highly valuable investment.
Knowing these key differences in audits can help you make an informed decision before you invest.
2. The Eisenhower Matrix
The Eisenhower Matrix is a powerful tool for prioritizing tasks by urgency and importance.
Applying it to your SEO strategy helps you determine which tasks need immediate attention and which can wait.
To get started, divide tasks into four quadrants:
Quadrant 1: Urgent and important
These are the critical issues that directly impact rankings and user experience.
For example, this could be a slow site or fixing a misconfigured robots.txt file that is blocking search engines from crawling and indexing key pages.
Whatever tasks you put in this category will be non-negotiable. Addressing these items can sometimes have an immediate impact on your ability to compete.
Quadrant 2: Important but not urgent
These will be the longer-term strategies that build sustainable growth.
For instance, maybe developing a long-term content strategy focused on topic authority and evergreen content falls here.
These efforts don’t require immediate attention but are essential for long-term SEO success.
Quadrant 3: Urgent but not important
This bucket is for handling tasks that are time-sensitive but don’t significantly influence rankings or user experience.
This could be something like responding to a minor Google Search Console alert about a non-critical issue.
While these tasks may not have a high impact, taking care of them prevents minor issues from accumulating into big projects.
Quadrant 4: Neither urgent nor important
Anything that falls into this category is something you avoid.
One example might be spending hours tweaking meta descriptions that already meet best practices without significant SEO gains.
These activities consume time and resources without delivering meaningful results.
Using the Eisenhower Matrix helps your SEO by enhancing:
Clarity: Identify and fix what demands attention now versus what can wait.
Efficiency: Prioritize the highest ROI tasks without getting bogged down.
Focus: Stay aligned with business goals, eliminating distractions.
3. The Pareto Principle (80/20 Rule)
The Pareto Principle suggests that 80% of outcomes come from 20% of efforts.
In SEO, focusing on the most impactful tasks helps you drive faster, more meaningful results without spreading yourself too thin.
Keyword targeting
It’s common for a small subset of your keywords to drive most organic traffic.
Instead of spreading your efforts thin across all keywords, focus on optimizing the ones that deliver the most value.
Use SEO tools to identify the top-performing 20% of keywords that bring in most of your traffic and conversions.
Prioritize pages that rank between Positions 5 and 20 for those high-value keywords. These are low-hanging fruit that can move up with improvements.
Expand content for high-value keywords by answering related questions and creating supporting content.
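To make the 80/20 idea concrete (every keyword and session count here is invented), this sketch finds the smallest slice of keywords that accounts for 80% of traffic:

```python
# Hypothetical keyword -> monthly organic sessions.
traffic = {
    "crm software": 4200, "best crm": 2600, "crm for small business": 1900,
    "free crm": 900, "crm pricing": 400, "what is a crm": 250,
    "crm demo": 150, "crm comparison": 100, "crm tips": 60, "crm faq": 40,
}

def pareto_keywords(traffic, share=0.8):
    """Return the smallest set of top keywords covering `share` of total traffic."""
    target = share * sum(traffic.values())
    selected, running = [], 0
    for keyword, sessions in sorted(traffic.items(), key=lambda kv: -kv[1]):
        selected.append(keyword)
        running += sessions
        if running >= target:
            break
    return selected

print(pareto_keywords(traffic))
```

In this invented dataset, 3 of 10 keywords carry 80% of the sessions – those are the ones worth the bulk of your optimization effort.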
Content focus
Most of your website’s traffic and engagement likely comes from a handful of high-performing pages.
Instead of endlessly creating new content, invest in improving the 20% of pages that already generate the most traffic and leads.
Identify your top 20% of pages by traffic and conversions using analytics tools.
Revamp those pages by updating outdated content to enhance optimization and engagement.
Create supporting content to build topical authority around your best pages.
Technical fixes
Technical SEO can feel overwhelming because there’s always more to fix. But a small subset of technical issues typically has the most impact on site performance.
Focus on fixing the top 20% of technical issues that cause 80% of your performance problems.
Prioritize high-impact fixes like:
Resolving crawl errors so search engines can access your site.
Improving page load speed for user experience and rankings.
Fixing broken links to avoid losing link equity and frustrating users.
Optimizing usability to retain visitors and improve your ability to compete in the search results.
Optimizing existing content by adding internal links, updating outdated information, or including relevant keywords.
Quick wins are valuable because they deliver early signs of progress. This helps build momentum and gain stakeholder buy-in.
However, relying solely on quick wins isn’t enough to achieve a sustainable SEO program.
That’s where long-term strategies come in.
Long-term strategies
Long-term strategies require more time and effort but are key to creating a strong foundation.
These strategies help your website become more authoritative, trustworthy, and relevant in the eyes of both search engines and your audience.
Examples of long-term strategies include:
Content creation that targets important keywords and answers user questions in-depth. Try SEO siloing to build authority around a topic.
Earning backlinks through your high-quality content and partnerships.
Refreshing top-performing content to make sure it remains evergreen and relevant. I recommend spending 50% of your content resources on maintaining older but high-performing content.
Continuing education so you can stay ahead of the curve. Consider annual SEO training with additional learning opportunities throughout the year. Search evolves fast, and you want to be able to forecast what’s coming up so you can start working on it early.
Foundational efforts don’t deliver instant results, but as your site’s authority grows, you’ll see compounding benefits with higher rankings, more traffic, and increased user trust.
Fast gains, lasting growth: Crafting a balanced SEO plan
A good SEO roadmap should include both short-term quick wins and long-term projects. But where to start?
Here’s one scenario: You could focus 70% of your time on quick wins early on to show immediate results and 30% on long-term efforts.
Over time, you might adjust the balance to a 50/50 split as your site becomes more stable and foundational work becomes a bigger priority.
Prioritizing your SEO strategies is the key to driving meaningful results.
SEO isn’t about doing everything at once. It’s about doing the right things at the right time.
When you focus on high-impact tasks and continuously refine your approach, you’ll build a more competitive search engine presence that pays off for years to come.
Chegg, the publicly traded education technology company, has sued Google over its AI Overviews, claiming they have hurt its traffic and revenue. The company said that AI Overviews is “materially impacting our acquisitions, revenue, and employees.”
Chegg CEO Nathan Schultz said:
Second, we announced the filing of a complaint against Google LLC and Alphabet Inc. These two actions are connected, as we would not need to review strategic alternatives if Google hadn’t launched AI Overviews, or AIO, retaining traffic that historically had come to Chegg, materially impacting our acquisitions, revenue, and employees. Chegg has a superior product for education, as evident by our brand awareness, engagement, and retention. Unfortunately, traffic is being blocked from ever coming to Chegg because of Google’s AIO and their use of Chegg’s content to keep visitors on their own platform. We retained Goldman Sachs as the financial advisor in connection with our strategic review and Susman Godfrey with respect to our complaint against Google.
More details. CNBC reports that “Chegg is worth less than $200 million, and in after-hours trading Monday, the stock was trading just above $1 per share.” Chegg has engaged Goldman Sachs to explore a potential acquisition or other strategic options for the company.
Chegg reported a $6.1 million net loss on $143.5 million in fourth-quarter revenue, a 24% decline year over year, according to a statement. Analysts polled by LSEG had expected $142.1 million in revenue. Management called for first-quarter revenue between $114 million and $116 million, but analysts had been targeting $138.1 million. The stock was down 18% in extended trading.
Chegg CEO Nathan Schultz went on to say that Google forces companies like Chegg to “supply our proprietary content in order to be included in Google’s search function,” adding that the search company uses its monopoly power, “reaping the financial benefits of Chegg’s content without having to spend a dime.”
Here is more from Chegg’s statement:
While we made significant headway on our technology, product, and marketing programs, 2024 came with a series of challenges, including the rapid evolution of the content landscape, particularly the rise of Google AIO, which as I previously mentioned, has had a profound impact on Chegg’s traffic, revenue, and workforce. As already mentioned, we are filing a complaint against Google LLC and Alphabet Inc. in the U.S. District Court for the District of Columbia, making three main arguments.
First is reciprocal dealing, meaning that Google forces companies like Chegg to supply our proprietary content in order to be included in Google’s search function.
Second is monopoly maintenance, or that Google unfairly exercises its monopoly power within search and other anti-competitive conduct to muscle out companies like Chegg.
And third is unjust enrichment, meaning Google is reaping the financial benefits of Chegg’s content without having to spend a dime.
As we allege in our complaint, Google AIO has transformed Google from a “search engine” into an “answer engine,” displaying AI-generated content sourced from third-party sites like Chegg. Google’s expansion of AIO forces traffic to remain on Google, eliminating the need to go to third-party content source sites. The impact on Chegg’s business is clear. Our non-subscriber traffic plummeted to negative 49% in January 2025, down significantly from the modest 8% decline we reported in Q2 2024.
We believe this isn’t just about Chegg—it’s about students losing access to quality, step-by-step learning in favor of low-quality, unverified AI summaries. It’s about the digital publishing industry. It’s about the future of internet search.
In summary, our complaint challenges Google’s unfair competition, which is unjust, harmful, and unsustainable. While these proceedings are just starting, we believe bringing this lawsuit is both necessary and well-founded.
Google statement. Google spokesperson Jose Castaneda said, “With AI Overviews, people find Search more helpful and use it more, creating new opportunities for content to be discovered. Every day, Google sends billions of clicks to sites across the web, and AI Overviews send traffic to a greater diversity of sites.”
Why we care. Will Chegg win in court against Google? Will Google have to rethink its AI Overviews and find better ways to send traffic to publishers and site owners? It is hard to imagine, but this may be the first large lawsuit over Google’s new AI Overviews.
Microsoft is testing a new version of Bing named Copilot Search, which uses Copilot AI to provide a different style of search results. It looks different from the main Bing Search, from Copilot, and from the Bing generative search experience.
More details. The folks over at Windows Latest reported, “Microsoft is testing a new feature on Bing called ‘AI Search,’ which replaces blue links with AI-summarized answers. Sources tell me it’s part of Microsoft’s efforts to bridge the gap between ‘traditional search’ and ‘Copilot answers’ to take on ChatGPT. However, the company does not plan to make ‘AI search’ the default search mode.”
You can access it at bing.com/copilotsearch?q=addyourqueryhere – just replace the text “addyourqueryhere” with your query.
What it looks like. Here is a screenshot I captured of this interface:
Why we care. Everyone is looking to build the future of search now. With Google Gemini, Google’s AI Overviews, Microsoft Bing, Copilot, ChatGPT Search, Perplexity, and dozens of other AI search startups in the mix, the future of search is something they are all trying to crack.
This seems to be one new test that Microsoft is trying out for a new approach to AI search.
This article tackles the key differences between each attribution method to help you determine which one best fits your business needs.
The growing need for smarter marketing attribution
With Google’s recent update to its open-source marketing mix model, Meridian, interest in marketing mix analysis and channel modeling has surged.
While enterprise brands have long benefited from these insights, smaller businesses running multi-channel marketing can also gain value.
Two leading methodologies have emerged to tackle this challenge:
Multi-touch attribution (MTA).
Marketing mix modeling (MMM).
Both aim to measure marketing effectiveness but differ significantly in methodology, scope, and application.
Every business investing in marketing needs to assess whether its efforts are paying off.
SEO, email campaigns, search ads, and social media all demand time and budget.
But without the right measurement approach, it’s difficult to know which channels truly drive results.
Many marketers rely on in-platform data, but this only provides a partial view due to differing attribution models and settings.
Third-party attribution tools attempt to bridge the gap, but they often favor specific marketing channels and impose predefined attribution rules, which may not align with long-term business goals.
For businesses serious about optimizing their marketing, a customized approach is essential – one that fully leverages their own data while integrating additional insights.
Multi-touch attribution
Multi-touch attribution is a digital-first methodology that tracks individual customer interactions across various touchpoints in their journey to purchase.
It assigns credit to each marketing touchpoint based on its contribution to the final conversion.
Operating at a granular, user-level scale, MTA collects data from cookies, device IDs, and other digital identifiers to create a detailed picture of the customer journey.
MTA is commonly supported by marketing channels like Google Ads, which offer different attribution settings – data-driven being the most recommended.
However, first and last touch models are not considered part of MTA, as they only account for a single touchpoint.
Beyond in-platform attribution, most analytics tools also support multi-touch attribution.
For SMBs with strong tracking and high data quality, these tools can be sufficient.
However, taking attribution to the next level requires a customized MTA by:
Using a tool that allows customization.
Or building custom attribution reports, often in combination with a data warehouse.
A tailored MTA ensures attribution is aligned with your business and customer journey, leading to more accurate insights.
The need for a customized MTA becomes clear with the following example:
Imagine a user encounters two social touchpoints – an Instagram ad and a TikTok ad – before converting through a Google Search ad.
A standard MTA might allocate 20% credit to each social channel for awareness and 60% to Google Search, assuming search played the most crucial role due to its intent-driven nature.
Instagram ad: 20%
TikTok ad: 20%
Google Search: 60%
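The flat weighting above can be expressed as a tiny rule-based model. This is an illustrative sketch only – the channels, roles, and weights are hypothetical, not taken from Google Ads or any other attribution platform:

```python
# Illustrative sketch of a rule-based multi-touch attribution model.
# The channels, roles, and weights are hypothetical examples.

def attribute(touchpoints, weights):
    """Distribute conversion credit across a journey's touchpoints.

    `weights` maps each touchpoint's role to a relative share of credit;
    shares are normalized so they always sum to 1.0.
    """
    raw = [weights.get(tp["role"], 0) for tp in touchpoints]
    total = sum(raw)
    return {tp["channel"]: round(r / total, 2) for tp, r in zip(touchpoints, raw)}

# The "standard" intent-heavy weighting from the example above:
journey = [
    {"channel": "Instagram", "role": "awareness"},
    {"channel": "TikTok", "role": "awareness"},
    {"channel": "Google Search", "role": "conversion"},
]
standard = attribute(journey, {"awareness": 1, "conversion": 3})
# -> {'Instagram': 0.2, 'TikTok': 0.2, 'Google Search': 0.6}
```

A customized MTA amounts to replacing that weight table with one calibrated to your own funnel, so awareness and remarketing touches can outrank the final branded search click.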
You might conclude that increasing your Google Ads budget and investing more in search is the right move.
While this could work, it could also backfire – without a customized MTA, your decision-making may be flawed.
Let’s take a closer look at the user journey to see what might be wrong:
Instagram ad – Cold awareness: 50%
TikTok ad – Remarketing: 40%
Google Search – Branded search: 10%
Instead of Google Search being the primary driver, it could be that:
Instagram is generating initial awareness.
TikTok is handling remarketing.
Google is simply capturing conversions from users already familiar with your brand.
In this case, increasing Google Ads spend wouldn’t necessarily drive more sales. It would just reinforce the final step while neglecting the earlier, more influential touchpoints.
With this in mind, MTA weightings can look completely different.
Investing more in cold traffic and remarketing while minimizing spend on Google Search might be the smarter approach, as search doesn’t generate demand but rather supports the last step and defends your brand against competitors.
This example highlights why a customized MTA is essential. It allows you to tailor attribution to your specific strategy, funnel, and customer journey.
However, if data quality is poor or customization is lacking, it can lead to inaccurate insights, poor decisions, and short-term thinking.
Marketing mix modeling
Marketing mix modeling, on the other hand, takes a top-down, aggregate approach.
It analyzes historical marketing spend across channels along with external factors to assess their impact on business outcomes.
Using advanced statistical techniques, MMM identifies correlations between marketing investments and results.
An effective marketing mix model incorporates both historical and current data, making it resilient to outliers and short-term fluctuations.
Depending on the model, it also allows for the inclusion of seasonal trends, industry benchmarks, growth rates, and marketing volume.
Additionally, MMM can account for brand awareness and loyalty in base sales, as well as measure incremental sales.
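At its core, this kind of model is a regression of outcomes on spend. The sketch below is a bare-bones ordinary least squares fit in plain Python with invented weekly numbers; real MMM frameworks such as Meridian and Robyn layer adstock, saturation curves, seasonality, and Bayesian priors on top of this idea:

```python
# Minimal sketch of the core idea behind marketing mix modeling: regress
# an outcome (revenue) on per-period spend by channel. All numbers are
# synthetic and for illustration only.

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col and M[col][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][-1] / M[i][i] for i in range(n)]

def fit_mmm(spend_rows, revenue):
    """Ordinary least squares: revenue ~ intercept + coefficients * spend."""
    X = [[1.0] + list(row) for row in spend_rows]  # prepend intercept column
    k = len(X[0])
    XtX = [[sum(x[i] * x[j] for x in X) for j in range(k)] for i in range(k)]
    Xty = [sum(x[i] * y for x, y in zip(X, revenue)) for i in range(k)]
    return solve(XtX, Xty)

# Weekly spend on [search, social]; revenue generated from a known rule
# (revenue = 1000 + 3*search + 2*social) so the fit recovers it exactly:
spend = [[100, 200], [150, 100], [200, 250], [250, 300], [300, 150]]
revenue = [1000 + 3 * search + 2 * social for search, social in spend]
intercept, search_coef, social_coef = fit_mmm(spend, revenue)
# -> roughly 1000.0, 3.0, 2.0
```

The fitted coefficients are what an MMM reads as each channel's incremental contribution per unit of spend, with the intercept standing in for base sales.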
MTA is a valuable tool for digital marketing teams that need immediate insights and real-time tracking to optimize campaigns quickly.
Its granular data helps marketers refine conversion paths and personalize customer interactions.
However, increasing privacy restrictions and the phase-out of third-party cookies make MTA more challenging to implement effectively.
Additionally, its digital-first nature means it struggles to account for offline marketing efforts and may lead businesses to prioritize short-term conversions over long-term brand growth.
MMM, by contrast, provides a broader, privacy-friendly approach that captures both digital and offline marketing performance.
It is particularly useful for long-term budget planning, helping businesses allocate resources effectively across multiple channels.
However, its reliance on historical data and aggregate trends makes it less suited for rapid campaign adjustments.
Companies that operate across both digital and traditional marketing channels may benefit from combining MTA’s real-time insights with MMM’s strategic guidance for a more balanced approach.
To determine which MMM framework best suits your needs, it’s helpful to experiment by uploading test datasets and exploring their functionalities.
While these frameworks share a common approach, they differ in customization depth and fine-tuning capabilities.
In my experience, Meridian is the most advanced, offering deep integration with first-party, organic, and third-party data. However, its complexity may require a steeper learning curve.
For a quicker setup, Robyn from Meta is a solid starting point.
Hybrid approach
As marketing measurement evolves, organizations increasingly adopt hybrid approaches that combine the strengths of both MTA and MMM. This unified framework aims to:
Leverage MTA’s granular digital insights for tactical optimization.
Use MMM for strategic planning and budget allocation.
Cross-validate findings between both methodologies.
Provide a more complete view of marketing effectiveness.
For digital-first companies, MTA is often the preferred starting point, offering real-time insights for rapid campaign adjustments.
In contrast, businesses investing heavily in traditional marketing tend to benefit more from MMM, as it:
Aligns with privacy regulations.
Accounts for external factors.
Delivers a holistic view of marketing performance.
A hybrid approach provides the best of both worlds – combining MTA’s agility with MMM’s long-term perspective.
While managing both requires additional resources, businesses implementing this strategy gain precise, channel-specific insights and a broader strategic understanding.
This dual approach is particularly valuable for organizations balancing short-term performance optimization with sustainable, long-term growth.
Boost your marketing performance with the right attribution model
Both MTA and MMM offer valuable insights into marketing effectiveness, but they serve different purposes and have distinct advantages.
As the marketing landscape becomes more complex and privacy-focused, it’s essential to assess your measurement needs and capabilities to determine the best approach – or a combination of both.
The future of marketing measurement likely lies in hybrid solutions that blend MTA’s granular insights with MMM’s strategic perspective while adapting to evolving privacy regulations and technological changes.
By integrating these methodologies, you’ll be better equipped to optimize marketing investments and drive long-term business growth.
Largest Contentful Paint (LCP) is one of Google’s three Core Web Vitals.
Like the other two (Cumulative Layout Shift and Interaction to Next Paint), its name doesn’t make it immediately clear what it actually measures.
Lots of tools can show your LCP score and outline ways to improve it. But their tips are often generic and lack the detail you need to actually take action.
So, in this guide I’ll walk you through actionable steps to improve your LCP. I’ll separate them by:
Their potential impact
The effort required to make the fix
Which specific aspect of your LCP score they help with
But first, let’s talk about what LCP actually means for your website (jump to this part for the fixes).
What Does Largest Contentful Paint Even Mean?
Largest Contentful Paint measures how long it takes for the main content of your webpage to appear on your user’s screen—whether that’s a hero image, heading, or block of text.
It’s not the most intuitive phrase, so let’s break it down word by word:
Largest: The biggest piece of visible content on the screen. This could be a large image, a big headline, or any major element that stands out.
Contentful: It’s something that has actual content—like text or an image—and isn’t just a background or frame.
Paint: This refers to how your browser “draws” (or renders) that element on your screen.
For example, imagine clicking a link to read a news article.
The page might load various elements quickly, like the header menu at the top and placeholders for ads.
But if the article text takes five seconds to show up, that’s a poor experience. That delay is what LCP measures.
When you think about LCP, think about your visitors. It’s the difference between someone seeing your main product image or headline right away versus waiting and possibly leaving.
A faster LCP generally means a better user experience. And a better experience means happier visitors who trust your site and want to hang around (and potentially buy from you).
Further reading: For more on how loading speed can affect your website experience and optimization, check out our full guide to page speed and SEO.
These benchmarks serve as useful guidelines, but your users’ actual experience matters most.
A visually rich photography portfolio might take longer to load but still satisfy visitors. Meanwhile, a simple text-based article that loads in three seconds might frustrate users who expect instant access.
So, focus on your audience’s expectations and behavior. Check your analytics to see if slower LCP correlates with higher bounce rates or lower conversion rates.
These numbers tell you more about your site’s real performance than any benchmark can.
If your conversion rate is already 10x the industry average, improving your LCP score likely won’t make a massive dent in your bottom line.
But if people aren’t staying long on your important pages, improving your LCP score could help boost your site’s performance. This, in turn, can lead to better results for your business.
How to Measure Your LCP Score
There are lots of tools you can use to measure your LCP. But you don’t want to just get your score.
You also want to learn these two things:
What your LCP element is
Which stage of your LCP is longest
Finding these two pieces of information is key for prioritizing which methods you should use to improve your LCP.
For example, you could spend hours minifying your code, inlining your CSS, and deferring JavaScript. But it won’t make much of a difference if your LCP element is a hero image you just haven’t optimized yet.
As for the stages:
LCP is made up of four stages:
Time to First Byte (TTFB)
Resource load delay
Resource load time
Element render delay
Each stage is affected by different factors (and methods of optimization). So, if you can identify which stages of your LCP are taking the longest, you can prioritize your fixes accordingly.
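Since the four stages are consecutive spans of the total LCP time, you can turn the boundary timestamps any tool reports into a per-stage breakdown with simple subtraction. The function and timings below are a sketch with invented numbers:

```python
# The four LCP stages are consecutive time spans. Given boundary
# timestamps (milliseconds from navigation start; numbers invented
# for illustration), the breakdown is simple subtraction.

def lcp_phases(ttfb, load_start, load_end, lcp):
    stages = {
        "Time to First Byte": ttfb,                   # 0 -> first byte
        "Resource load delay": load_start - ttfb,     # first byte -> fetch starts
        "Resource load time": load_end - load_start,  # fetch starts -> fetch done
        "Element render delay": lcp - load_end,       # fetch done -> painted
    }
    return {name: (ms, round(100 * ms / lcp)) for name, ms in stages.items()}

# Example: a 2,500 ms LCP dominated by render delay
breakdown = lcp_phases(ttfb=200, load_start=300, load_end=500, lcp=2500)
# "Element render delay" -> (2000, 80), i.e. 2,000 ms and 80% of the total
```

In that example, render delay dominates, so render-focused fixes (deferring JavaScript, removing render-blocking resources) would come first.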
Here are two ways to find this information.
Note: With many tools, you’ll get different LCP scores depending on whether you check the mobile or desktop version of your site. Optimizing for both helps improve your experience for all users.
Google PageSpeed Insights
Google’s PageSpeed Insights (PSI) is a popular choice if you want a simple, web-based report.
Just plug in your URL, and you’ll get a quick overview of your Core Web Vitals, including LCP.
PSI is great if you’re not a big fan of digging around in complex dashboards. It gives you clear visuals and actionable tips without much fuss.
It also has a handy diagnostics section that tells you some of the main ways you can reduce your LCP time. Just make sure you select the “LCP” option next to “Show audits relevant to.”
Click the “Largest Contentful Paint element” option to see which element on that page is the LCP element.
It also shows you the breakdown (as a percentage) of each stage of your LCP. From the example above, you can see the vast majority (88%) of our LCP time comes from the render delay stage.
Knowing this lets us focus our efforts on the methods in the next section that specifically help reduce that stage of the LCP score.
Chrome DevTools
Chrome’s DevTools can give you detailed, real-time feedback on various aspects of your page’s performance.
It’s especially useful for testing changes on the fly, but it might feel a bit overwhelming if you’re completely new to web development.
Access it in Chrome on any webpage by right clicking and selecting “Inspect.”
In the interface that appears, head to the “Performance” tab.
(You can select the three dots next to the cog icon and change where the dock goes—I find horizontal is best for analyzing LCP.)
This view shows your LCP score. If you hover over the “LCP element” underneath the score, you’ll see which part of the content is the largest contentful element.
Then, get a breakdown of the LCP stages by clicking the “Record and reload” button. This will run the performance checks again on the page, and you’ll see more information along with a waterfall chart.
Ignore that for now, and instead click the “LCP by phase” drop-down. This breaks the LCP down into its four constituent parts, showing the actual time for each stage along with a percentage.
As before, you can use this information to prioritize your optimization efforts and more effectively improve your LCP.
How to Improve Your LCP
You can improve your LCP in several ways, and some methods will help you more than others.
The table below sorts the methods by impact, also indicating the effort level each one requires and which stage of your LCP it’ll help reduce.
Your own skill level, your website’s setup, and your budget will affect how easy or cost-effective these changes are for you.
I’ve taken each method in isolation, as the relative impact of each fix may decrease as you implement each one.
For example, if you implement lots of these methods but don’t use a CDN, your LCP score will likely improve to the point that using a CDN might not make much difference to the score (although it may still improve your user experience).
Finally, a few of these might help reduce different stages of your LCP. As with every change you make to your website, there’s usually a bit of overlap in terms of what it’ll affect.
I’ll explain more of the nuances and who each fix is best suited to below.
Use a Content Delivery Network (CDN)
Impact: High | Effort: Low | Helps Reduce: Resource Load Time
A Content Delivery Network (CDN) stores (cached) copies of your content across servers around the world. When people visit your site, they’re served files from the closest server to them.
That means faster load times for your users.
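Conceptually, a CDN’s routing layer picks the edge location “closest” to each request. The sketch below is a toy illustration with made-up edge locations; real CDNs route on measured latency and anycast, not raw coordinates:

```python
# Toy illustration of CDN edge selection: serve each user from the
# nearest edge location. Edge names and coordinates are invented;
# real CDNs route on measured latency, not straight-line distance.
import math

EDGES = {"us-east": (39, -77), "eu-west": (53, -6), "ap-south": (19, 73)}

def nearest_edge(user_lat, user_lon):
    def dist(edge):
        lat, lon = EDGES[edge]
        # Rough planar distance -- good enough for the illustration
        return math.hypot(lat - user_lat, lon - user_lon)
    return min(EDGES, key=dist)

# A visitor in Berlin (52.5 N, 13.4 E) is served from the European edge:
print(nearest_edge(52.5, 13.4))  # -> eu-west
```

The shorter the hop between user and edge, the lower the resource load time, which is exactly the LCP stage a CDN helps with.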
If you’re running a small local blog, you might not absolutely need a CDN. But if you have visitors from all over, a CDN can boost your LCP by reducing the travel time for your data.
This is most impactful for:
Websites with visitors from multiple regions
Sites with lots of large images or media files
Anyone wanting to improve global load times without lots of coding
How to Implement This
You can sign up for a CDN service like Cloudflare, KeyCDN, or StackPath. They’ll provide instructions for changing your domain’s settings to route traffic through their servers.
Once set up, the CDN will serve your website files to users from the server that’s physically located closest to them.
There are cheap and free options, but it can get expensive for larger sites with lots of traffic.
If you use WordPress or a similar content management system (CMS), there are often plugins that make the setup process even smoother.
Optimize Your Images
Impact: High | Effort: Medium | Helps Reduce: Resource Load Time
Large image files are a common reason for poor LCP scores. This is especially true if you use a large hero image at the top of your pages or blog posts.
By compressing images before uploading them, you reduce their file size to make them load faster.
This is most impactful for:
Sites with lots of large product or blog images
Photographers or ecommerce stores with high-res visuals
Anyone looking for a straightforward way to speed up load times
How to Implement This
You can optimize your images using online tools, and there are lots of free options. Or you can use plugins that auto-compress images when you upload them to your content management system.
Squoosh is a free tool that lets you tweak the optimization settings, choose a format to convert to, and resize the image:
To do this in bulk, you can also use a tool like TinyPNG:
Just keep an eye on quality—if you compress too much, your images might look blurry. But most of the time, you can shrink them a lot without anyone noticing.
Pro tip: Beyond images, it’s usually best to avoid having a video above the fold. This can lead to poor LCP scores.
Use WordPress Plugins
Impact: High | Effort: Low | Helps Reduce: Potentially all stages
For many WordPress users, plugins are the easiest way to speed up your site and fix LCP issues with minimal effort. They can handle image optimization, caching, code minification, and more—all from a simple dashboard.
The caveat is that the best ones aren’t always free. So you’re often paying a convenience cost. But there are still some unpaid options out there.
Another downside is the risk of plugin “bloat,” which can slow your site if you install too many or choose poorly optimized ones.
Compatibility issues may also pop up, especially if you try to use multiple optimization plugins at one time.
But as long as you don’t have hundreds of plugins, and check for compatibility, I find the benefits typically outweigh the downsides here.
Note: If you use a different CMS, like Shopify, there are likely apps or add-ons that can help with your LCP score.
This is most impactful for:
WordPress users without technical know-how
Anyone who wants a quick fix for multiple performance issues
Those willing to spend a bit of money to solve a lot of issues at once (although there are free options)
How to Implement This
There are lots of WordPress plugins that are great for improving your LCP in particular, and your page speed in general.
One example is WP Rocket. It’s a paid WordPress optimization plugin that does a lot of the things on this list for you.
Including:
Image optimization
Code minification
Preloading/prefetching resources
CDN implementation
Caching
There are lots of customization options, making this plugin a quick and fairly easy solution for improving your LCP.
Autoptimize is a free WordPress plugin that does a lot of the same things as WP Rocket.
It does lack a few features, like generating critical CSS and caching. But it’s a good starting point for beginners on a budget with a WordPress site.
Implement Caching
Impact: High | Effort: Low | Helps Reduce: Time to First Byte
Caching stores parts of your site on your user’s browser so it doesn’t have to request them from scratch every time they visit the site.
This can speed up your LCP because your server won’t need to work as hard to deliver the key page elements the next time the user visits.
Many hosting providers include caching by default.
You can also install plugins that handle caching for you.
This is most impactful for:
Sites with repeat visitors (e.g., blogs, online magazines)
Websites on platforms that generate pages dynamically (like WordPress)
Sites experiencing slow server response times
How to Implement This
If your host offers caching, enable it in your hosting dashboard. Otherwise, consider a caching plugin.
If you use a CDN, it already relies on caching to serve your content to users with faster load times.
Note: You only need to use one effective caching setup or plugin at a time. Using multiple can lead to no performance improvements at best, and various compatibility issues at worst.
Use a Faster Web Host
Impact: High | Effort: Low | Helps Reduce: Time to First Byte
Switching to a more powerful hosting plan or provider can make a big difference in how quickly your site’s main content loads.
That’s because your web host’s speed is going to have the largest impact on your Time to First Byte.
This is often the simplest route if you don’t want to tinker with technical details. However, premium hosting can be expensive.
If you have a small site or a tight budget, you might find it hard to justify the cost for LCP gains alone. But for large businesses or sites that generate a lot of revenue, investing in better hosting can pay off.
Note: This is also unlikely to put a dent in your LCP if your host is already pretty quick. I’d generally only recommend considering this option if your Time to First Byte is exceptionally long. Or if you’re noticing other performance issues or extended periods of website downtime.
This is most impactful for:
High-traffic sites that need consistent speed
Businesses with a budget to invest in premium hosting
Sites that have outgrown their current hosting plan
How to Implement This
When upgrading your web host, look for:
Reliable uptime
Scalability
Good support
Security features
Robust backup options
Migrating your site can be as simple as using a migration plugin if you’re on WordPress, or asking your new host for help.
It’s usually fairly straightforward if you’re staying with your current host and just upgrading your plan. But moving hosts can be a little more effort-intensive.
Minify Your Code
Impact: Medium | Effort: Low | Helps Reduce: Resource Load Time
Minifying code involves stripping out anything “unnecessary,” like extra spaces or new lines, from your site’s HTML, CSS, and JavaScript files. This makes them smaller and faster to load.
If you’re not a developer, you can still do this using tools or plugins that automate the process (like WP Rocket mentioned above).
Just be sure to back up your site or test it in a staging environment. Sometimes, minification can cause layout or script issues.
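To see what minification actually removes, here’s a deliberately naive CSS minifier. It’s a sketch, not a production tool – real minifiers (like those bundled with WP Rocket or Autoptimize) handle strings, nested at-rules, and edge cases this one would mangle:

```python
# Deliberately naive CSS minifier, just to show what minification removes.
# Not production-safe: it would mangle CSS containing strings with braces.
import re

def minify_css(css):
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)   # strip comments
    css = re.sub(r"\s+", " ", css)                    # collapse whitespace
    css = re.sub(r"\s*([{}:;,])\s*", r"\1", css)      # trim around punctuation
    return css.replace(";}", "}").strip()             # drop trailing semicolons

source = """
/* hero styles */
.hero {
    color: #333;
    margin: 0 auto;
}
"""
print(minify_css(source))  # -> .hero{color:#333;margin:0 auto}
```

The output is byte-for-byte smaller but renders identically, which is the whole point: fewer bytes to download before the browser can paint.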
This is most impactful for:
Sites with lots of CSS and JavaScript files
Pages that rely on multiple libraries or frameworks
How to Implement This
You can minify your code with free tools like Minifier:
If you use a CMS like WordPress, use plugins (e.g., WP Rocket, Autoptimize) that automatically shrink your CSS, JS, and HTML.
Here’s how it looks in the “File Optimization” screen of WP Rocket:
Most plugins let you choose which files to minify, so if you see any issues, uncheck or exclude the problematic file and test again.
Alternatively, reach out to a developer to help with this instead.
Optimize Your Fonts
Impact: Medium | Effort: Medium | Helps Reduce: Resource Load Time
Fancy fonts can look great, but they can also slow down your page.
Custom fonts often have to be downloaded from a separate server. If you optimize or host them locally, you reduce delays that stop your text (like big headlines) from being visible.
You do want to maintain your site’s style, so it’s a balance between looking good and loading fast. Some sites solve this by using system fonts that don’t need extra downloads.
This is most impactful for:
Sites using multiple custom fonts or large font families
Design-heavy pages with fancy typography
Anyone noticing a “flash of invisible text” when pages load
How to Implement This
Hosting fonts locally is often faster than pulling them from external servers. If you use Google Fonts, you can download them and serve them from your own domain.
But honestly, this just won’t be necessary for most site owners. While it might reduce your LCP, it’s unlikely to be a massive gain and may not be worth the effort.
Alternatively, let a plugin handle font optimization for you. Minimize the number of font weights you use—if you only need bold and regular, don’t load the entire family.
Don’t Lazy Load Above-the-Fold Images
Lazy loading is a feature that only loads images when you scroll down to them. In other words, images only load when they’re in the user’s “viewport” (on their screen).
It’s great for boosting page load time, and is typically regarded as a best practice for fast websites.
But if you lazy load images that are right at the top of your page, your visitors will see a blank space before anything else pops in. That can really hurt your LCP.
The idea behind lazy loading is to not load images the user doesn’t need to see yet. But when it’s the first image you want a user to see as soon as they land on your page, clearly you don’t want to delay loading at all.
So, it’s usually best to load above-the-fold content right away, then lazy load what’s below.
This is most impactful for:
Sites that lazy load everything by default
Above-the-fold areas with key images or banners
Pages where the main header image is crucial for user engagement
How to Implement This
Many lazy-loading tools let you exclude certain images. Find the settings or plugin option that specifies “above the fold” or “first contentful paint” images, and disable lazy loading for those.
In WP Rocket, you do that in the “Media” area:
If you’re not using a CMS like WordPress, just make sure the LCP image’s HTML uses either loading="eager" or no loading attribute at all ("eager" is the default), rather than loading="lazy".
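For example, with a placeholder hero.jpg image, the two versions look like this:

```html
<!-- Above-the-fold LCP image: loads immediately ("eager" is the default) -->
<img src="/images/hero.jpg" alt="Hero banner" loading="eager">

<!-- Below-the-fold image: safe to lazy load -->
<img src="/images/footer-photo.jpg" alt="Footer photo" loading="lazy">
```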
Remove Elements You Don’t Need
Impact: Medium | Effort: Medium | Helps Reduce: Element Render Delay
Every script, image, or widget on your site adds to the time it takes for your page to fully load. So you need to think carefully about what appears above the fold.
If there’s a huge banner, multiple images, or extra code that doesn’t add real value, consider removing it or placing it below the fold.
Just make sure you don’t strip away elements that are crucial for your users or your brand message.
This is most impactful for:
Content-heavy sites filled with widgets or ads
Homepages stuffed with multiple banners, slideshows, or animations
Anyone looking to simplify their design without sacrificing core features
How to Implement This
Audit your site’s above-the-fold area and ask, “Does this element help my user right away?”
If not, move it below the fold (or remove it entirely).
Think about collapsing large sign-up forms or extra images.
Removing unnecessary scripts, like old tracking codes, can also help. To pinpoint snippets you might want to remove, look out for the “Reduce unused JavaScript” opportunity in PageSpeed Insights.
Use Defer/Async for JS
Impact: Medium | Effort: Medium | Helps Reduce: Element Render Delay
JavaScript files can block the rendering of your page if they load first. By deferring or asynchronously loading scripts, you let your main content appear before any heavy scripts run.
This helps your LCP because the biggest chunk of your page shows up without waiting for all your JS to finish loading.
The main reason you’ll likely want to look into async and defer is if the tool you’re measuring your LCP with says you have render blocking resources.
Basically, without any attributes, the browser will attempt to download and then execute your JavaScript as it encounters it. This can lead to slower load times, and longer LCP times if it blocks the LCP element from loading.
With async, it won’t pause parsing (breaking down and analyzing) of the HTML during the download stage. But it still pauses as the script executes after downloading.
With defer, the browser doesn’t pause HTML parsing for either the download or the execution of your JavaScript. This can lead to faster LCP times, but it means your JavaScript won’t execute until the browser has finished parsing the HTML.
You might need a developer’s help if you’re not sure which scripts to defer or load asynchronously, or how to do it.
Some optimization plugins for platforms like WordPress can also handle this for you.
This is most impactful for:
Sites that rely on several JavaScript libraries
Pages slowed down by loading scripts too early
Website owners looking for a middle-ground solution without full SSR (more on that below)
How to Implement This
If you’re on WordPress, look for an optimization plugin that includes deferring or async-loading scripts.
In custom setups, you’d add attributes like "defer" or "async" to your script tags in the HTML.
Just make sure you don’t delay any critical scripts (like core functionality) too much.
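The three options can be sketched like this (the file names are placeholders):

```html
<!-- No attribute: downloading and executing this script blocks HTML parsing -->
<script src="/js/blocking.js"></script>

<!-- async: downloads in parallel, but executes as soon as it arrives -->
<script src="/js/analytics.js" async></script>

<!-- defer: downloads in parallel, executes only after HTML parsing finishes -->
<script src="/js/main.js" defer></script>
```

As a rule of thumb, defer is the safer default for scripts that depend on the page’s HTML, while async suits independent scripts like analytics.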
Inline Critical CSS
Impact: Medium | Effort: High | Helps Reduce: Element Render Delay
Inlining CSS means putting small blocks of CSS code right into your HTML, so your page doesn’t need to fetch a separate file for that part.
It can speed up how quickly your main elements appear. But you can’t inline everything, or you’d end up with a massive HTML file that defeats the purpose.
This method can be helpful for critical (above-the-fold) styles, but it shouldn’t replace your entire stylesheet.
As Google’s web performance guidance puts it:
“In general, inlining your style sheet is only recommended if your style sheet is small since inlined content in the HTML cannot benefit from caching in subsequent page loads. If a style sheet is so large that it takes longer to load than the LCP resource, then it’s unlikely to be a good candidate for inlining.”
This is most impactful for:
Sites with a small amount of critical CSS for the header area
Minimalist designs that don’t rely on big external stylesheets
Anyone looking to shave off small load delays
How to Implement This
Identify the essential CSS you need to style your page’s top section, and place it directly in the HTML <head>. This can reduce the time it takes to render the crucial above-the-fold part.
Keep the rest of your CSS in external files to avoid bloating your HTML. Some performance plugins can automate this “critical CSS” approach for you.
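As a rough sketch of that split (the selectors and file path are placeholders), the <head> might look like this:

```html
<head>
  <style>
    /* Critical above-the-fold styles inlined directly into the HTML */
    header { background: #fff; padding: 16px; }
    h1 { font-size: 2rem; margin: 0; }
  </style>
  <!-- The full stylesheet stays external so it can be cached -->
  <link rel="stylesheet" href="/css/main.css">
</head>
```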
Autoptimize offers a cheap standalone option, and the feature is also baked into plugins like NitroPack and WP Rocket.
While there are also dedicated critical CSS plugins, I’d generally recommend going for a more feature-rich option for a bit of extra money (if you have the budget). You’ll typically get more value than spending $10 a month on one feature that may have limited impact on your LCP.
Switch to SSR
Impact: Medium | Effort: High | Helps Reduce: Element Render Delay
CSR (Client-Side Rendering) means your user’s browser does a lot of the work to build the page.
SSR (Server-Side Rendering) means most of the work happens before the page hits the user’s browser.
SSR can help LCP for sites heavy in JavaScript, because the biggest content is already “pre-built” for the user. But switching from CSR to SSR can be a big project if you’re not familiar with it.
For some sites, it’s overkill. For others, it’s the key to big performance gains.
This is one method where you really need to weigh up the benefits and how they might apply to your specific situation:
Run a fairly standard blog, service website, or ecommerce store? Switching to SSR might bring noticeable performance gains.
Got a highly interactive web app? You might want to stick with CSR for a better user experience.
Generally, if you pair SSR with other methods like caching and a CDN, the performance benefits will outweigh the potential increase in server load.
This is most impactful for:
JavaScript-heavy web apps (e.g., React, Vue)
Sites noticing a significant delay before content appears
Advanced users or teams that can handle more complex architecture
How to Implement This
Switching from Client-Side Rendering to Server-Side Rendering (or a hybrid approach) typically involves using frameworks (like Next.js for React) that pre-render your content on the server.
This can speed up LCP since the browser receives a ready-made page. However, it’s a bigger project requiring code changes and a good understanding of your tech stack.
If you’re not comfortable with that, you might need to hire a developer or agency.
Preload Important Resources
Impact: Medium | Effort: Medium | Helps Reduce: Resource Load Delay
Preloading tells the browser which files it should grab or prepare in advance.
It can shave off a bit of loading time and help your main content appear slightly faster. For many small sites, these optimizations won’t create dramatic changes.
But on bigger sites or those with lots of images and unique fonts, it can make a difference.
This is most impactful for:
Sites that rely on off-site resources (e.g., fonts or images)
Those comfortable editing HTML headers or using plugins that can do this at scale
How to Implement This
You can preload fonts and images by adding special link tags in your site’s <head>. They tell the browser to grab or prepare certain resources before they’re actually needed.
You simply add rel="preload" to a <link> tag.
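For example (the file paths here are placeholders):

```html
<head>
  <!-- Preload the LCP hero image so the browser fetches it early -->
  <link rel="preload" href="/images/hero.jpg" as="image">
  <!-- Preload a web font; the crossorigin attribute is required for fonts -->
  <link rel="preload" href="/fonts/bodyfont.woff2" as="font" type="font/woff2" crossorigin>
</head>
```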
How much effort this requires depends on your specific setup and how many pages you want to deploy it on. But it’s a fairly simple process that can help reduce your LCP score.
Note: As with a lot of the other methods on this list, WordPress plugins can help here too.
Boost Your Rankings by Improving Your Page Experience
Improving your LCP is one way to boost your overall page experience for users.
In turn, this can actually end up having an impact on your rankings beyond Google’s page experience signals.
Check out our guide to user behavior and SEO to learn how the way your users behave on your website could potentially impact how Google ranks your site.
(It makes optimizing for factors like LCP and the other Core Web Vitals A LOT more important.)
Published 2025-02-20: How to Improve Largest Contentful Paint (LCP) in Under an Hour
Most Popular AI Apps by Downloads in the US in First 18 Days Since Launch in the App Store
ChatGPT ranked #1 with 1.4 million US App Store downloads in its first 18 days after release, followed by Google Gemini (951,000) and Microsoft Copilot (518,000).
Here’s the complete list of the most downloaded AI apps in the US in the first 18 days of release for each app:
Published 2025-02-20: Most Popular AI Apps