LCP, FID, CLS? We explain why you'll need to pay attention to these new user-centered metrics.
Core Web Vitals are a new set of metrics from Google, due to be incorporated into the search engine’s ranking algorithm from June 2021.
This is part of a broader trend of Google incorporating UX signals into its algorithm, effectively pushing sites to focus more on creating a better page experience for users.
Essentially, poor UX could not only harm users and hurt a site’s conversion rates, but could also cost sites search rankings and the chance to attract potential visitors and customers.
In this article, we’ll look at what Google’s new metrics are, how you can track your site’s performance against them, and whether these metrics are helpful for sites looking to improve UX.
Google’s algorithm has incorporated a few UX signals for some time - page speed, mobile-friendliness and disruptive interstitials are already factored in when the search engine decides which pages to rank.
Essentially, if your site is slow, or not optimized for mobile, then you’re less likely to rank highly for your target keywords. Exactly how important these factors are is unclear, as is the case with any ranking factor Google uses.
Core Web Vitals fit alongside these existing page experience signals, and will be weighed with other ranking factors such as backlinks, relevance to keywords, domain authority and more.
As Google itself says, these metrics are to be considered alongside other key areas of page optimization:
“While all of the components of page experience are important, we will prioritize pages with the best information overall, even if some aspects of page experience are subpar. A good page experience doesn’t override having great, relevant content.”
However, the introduction of these new metrics makes UX even more important from an SEO perspective.
Core Web Vitals are a set of metrics which Google sees as important to the user’s experience on a site. As the company explains:
“Each of the Core Web Vitals represents a distinct facet of the user experience, is measurable in the field, and reflects the real-world experience of a critical user-centric outcome.”
Each metric is designed to measure on-page factors which can lead to a poor user experience. They do seem to address some key UX annoyances.
For each metric, Google will rate pages as ‘good’, ‘needs improvement’ or ‘poor’.
On some sites - think of some ad-heavy news websites - it can take time for the content you clicked through to be visible. This may be the result of ads and other page elements loading up, and can be a frustrating experience for the user.
LCP measures the time it takes for the largest piece of content in the viewport to render - this may be the main text of a page, or perhaps a product image on an ecommerce site.
In essence, this is the time from clicking a link or typing in the URL to having a sense that the page is loaded - any longer than a couple of seconds feels like too long and can deter users.
To achieve a ‘good’ score, the main content should load within 2.5 seconds.
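For developers, LCP can be observed directly in the browser. Here is a minimal sketch using the standard PerformanceObserver API - the 2.5s and 4s boundaries are Google’s published thresholds for ‘good’ and ‘needs improvement’, and the helper function names are our own:

```javascript
// Classify an LCP time (in milliseconds) against Google's published
// thresholds: 'good' up to 2.5s, 'needs improvement' up to 4s, else 'poor'.
function classifyLCP(ms) {
  if (ms <= 2500) return 'good';
  if (ms <= 4000) return 'needs improvement';
  return 'poor';
}

// In a browser, LCP candidates are reported as 'largest-contentful-paint'
// entries; the last entry before the user interacts is the final LCP.
// This part only runs in a browser environment, not in Node.
if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  new PerformanceObserver((list) => {
    const entries = list.getEntries();
    const latest = entries[entries.length - 1]; // latest LCP candidate
    console.log('LCP:', latest.startTime, classifyLCP(latest.startTime));
  }).observe({ type: 'largest-contentful-paint', buffered: true });
}
```

In practice, most teams use Google’s open-source web-vitals library rather than wiring up observers by hand, but the underlying mechanism is the same.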
First Input Delay (FID) measures how long the browser takes to respond the first time a visitor interacts with a page - clicking a link, tapping a button or typing in a field. Have you ever clicked something only for nothing to happen because the page was still busy loading? The result is a frustrating experience for the user.
When users interact with a web page, they have the expectation that things should happen instantly, and any delays can cause frustration.
On some sites, you may find that clicks and keystrokes seem to do nothing until various page elements have finished loading. For a ‘good’ classification from Google, the delay between a user’s first interaction and the browser’s response should be less than 100ms.
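The delay itself is simply the gap between when the user first interacts and when the browser is free to start handling that interaction. A minimal sketch using the standard Event Timing API fields (the 100ms and 300ms boundaries are Google’s published thresholds; the helper names are our own):

```javascript
// FID is the gap between the user's first interaction (entry.startTime)
// and the moment the browser begins processing it (entry.processingStart).
function firstInputDelay(entry) {
  return entry.processingStart - entry.startTime;
}

// Google's published thresholds: 'good' up to 100ms,
// 'needs improvement' up to 300ms, else 'poor'.
function classifyFID(ms) {
  if (ms <= 100) return 'good';
  if (ms <= 300) return 'needs improvement';
  return 'poor';
}

// Browser-only: the first interaction arrives as a 'first-input' entry.
if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      const delay = firstInputDelay(entry);
      console.log('FID:', delay, classifyFID(delay));
    }
  }).observe({ type: 'first-input', buffered: true });
}
```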
Cumulative Layout Shift (CLS) measures the visual stability of a page. If elements shift around as the page loads, users can end up clicking the wrong thing, and the overall experience suffers.
I’m sure we’ve all experienced news sites and blogs where loading ads push other elements up and down the page. Even the obligatory cookie notices can affect performance against this measure.
This is trickier to measure, as it’s not about speed, and Google’s metric is more abstract - each unexpected shift is scored by how much of the viewport it affects and how far content moves - but a total CLS score of less than 0.1 will earn the page a ‘good’ classification.
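Under the hood, the browser’s Layout Instability API scores each shift as an impact fraction (how much of the viewport was affected) multiplied by a distance fraction (how far content moved), and CLS adds these scores up, excluding shifts that follow recent user input. A rough sketch of that arithmetic, with Google’s published thresholds (the function names are our own; the entry fields mirror the Layout Instability API):

```javascript
// A single layout shift's score: the fraction of the viewport affected
// multiplied by the fraction of the viewport the content moved.
function shiftScore(impactFraction, distanceFraction) {
  return impactFraction * distanceFraction;
}

// CLS accumulates the scores of unexpected shifts; shifts that happen
// shortly after user input (hadRecentInput) are excluded, since the
// user caused them deliberately.
function cumulativeLayoutShift(entries) {
  return entries
    .filter((e) => !e.hadRecentInput)
    .reduce((sum, e) => sum + e.value, 0);
}

// Google's published thresholds: 'good' up to 0.1,
// 'needs improvement' up to 0.25, else 'poor'.
function classifyCLS(score) {
  if (score <= 0.1) return 'good';
  if (score <= 0.25) return 'needs improvement';
  return 'poor';
}
```

So a banner that pushes half the viewport down by 20% of its height scores 0.5 × 0.2 = 0.1 on its own - already at the limit of a ‘good’ page.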
Performance against Core Web Vitals can be tracked in a number of ways. For example, Search Console will tell you how many of your pages are good, poor or need improvement.
You can then view these URLs in other tools, such as PageSpeed Insights. For example, this John Lewis landing page for TVs needs some work as far as Google is concerned.
Google will lay out scores for each metric, and suggest areas for improvement. Once these issues have been fixed, you can ask Google to validate the fix via Search Console, and the page will be scanned and rated again.
With these three metrics, Google is covering specific aspects that can negatively impact the user experience.
A page that loads important content and elements first, and that users can interact with immediately, is likely to be much easier and less frustrating to use.
However, they do not address all the issues which can affect the user experience. It’s certainly good to have a site that loads fast, but more substantial testing and research are required to truly address the user experience.
The way users interact with forms, the clarity of information and the identification of key barriers to conversion are not addressed by Google's metrics. In addition, it’s often the journey through the site (from product page to checkout for example) where user experience really matters, not single page performance.
The results of a recent study suggest that many retailers may have some work to do. Reddico looked at the performance of the top 500 retail brands against the Core Web Vitals.
The study found plenty of room for improvement, with just 35% of retailers scoring ‘good’ for Largest Contentful Paint, and 32% ‘good’ for Cumulative Layout Shift. Sites scored higher for First Input Delay, with 83% achieving a ‘good’ score.
Because they are linked to ranking, these metrics will pose a challenge for many sites. Many are reliant on search traffic for acquiring users and driving sales, and a drop of even a couple of positions can impact revenue.
With these new metrics, Google is extending its attempts to assess UX with measurable metrics, and drive sites to improve their performance.
Google isn’t necessarily measuring user experience as a whole, but its Core Web Vitals have the potential to rank sites with a better UX above those with performance issues, particularly around page speed.
The simple fact that it is Google introducing these measures, and that they will affect search performance, means that many sites will need to improve their UX in specific areas to remain competitive in search engine results.
In our new ebook, we offer practical guidance for launching, managing, and scaling a UX measurement program - one that helps you drive a roadmap of UX improvements and secure the budgets you need to run larger-scale research projects.