Google’s Lighthouse doesn’t use the Interaction to Next Paint (INP) metric in its standard tests, despite INP being one of the Core Web Vitals.
Barry Pollard, Web Performance Developer Advocate on Google Chrome, explained the reasoning behind this and offered insights into measuring INP.
Lighthouse measures a simple page load and captures various characteristics during that process.
It can estimate the Largest Contentful Paint (LCP) and Cumulative Layout Shift (CLS) under specific load conditions, identify issues, and advise on improving these metrics.
However, INP is different as it depends on user interactions.
Pollard explained:
“The problem is that Lighthouse, again like many web perf tools, typically just loads the page and does not interact with it. No interactions = No INP to measure!”
While a standard Lighthouse run can’t measure INP, knowing your common user journeys lets you capture it with Lighthouse “user flows.”
Pollard added:
“If you as a site-owner know your common user journeys then you can measure these in Lighthouse using ‘user flows’ which then WILL measure INP.”
These common user journeys can be automated in a continuous integration environment, allowing developers to test INP on each commit and spot potential regressions.
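A user flow script along these lines could run in CI on every commit. This is a minimal sketch assuming the `lighthouse` and `puppeteer` npm packages; the URL and the `#menu-button` selector are placeholders for your own site and journey:

```javascript
// Sketch: auditing one user journey with a Lighthouse "user flow".
// Assumes `lighthouse` and `puppeteer` are installed; run as an ES module.
import puppeteer from 'puppeteer';
import {startFlow} from 'lighthouse';
import {writeFileSync} from 'node:fs';

const browser = await puppeteer.launch();
const page = await browser.newPage();
const flow = await startFlow(page);

// Step 1: a classic page-load audit (LCP, CLS, TBT).
await flow.navigate('https://example.com');

// Step 2: a "timespan" step that records interactions, so INP is measured.
await flow.startTimespan();
await page.click('#menu-button'); // the journey step whose responsiveness matters
await flow.endTimespan();

writeFileSync('flow-report.html', await flow.generateReport());
await browser.close();
```

Because the timespan step contains a real interaction, the resulting report includes an INP value for that journey, which a CI job can compare against a budget to flag regressions.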
Although Lighthouse can’t measure INP without interactions, it can measure likely causes, particularly long, blocking JavaScript tasks.
This is where the Total Blocking Time (TBT) metric comes into play.
According to Pollard:
“TBT (Total Blocking Time) measures the sum time of all tasks greater [than] 50ms. The theory being:
- Lots of long, blocking tasks = high risk of INP!
- Few long, blocking tasks = low risk of INP!”
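More precisely, TBT sums the portion of each long main-thread task beyond the 50 ms threshold (between First Contentful Paint and Time to Interactive). A simplified sketch of that calculation, using made-up task durations:

```javascript
// Simplified sketch of the Total Blocking Time calculation: each task longer
// than 50 ms contributes its time *over* that threshold to the total.
// (Real TBT only counts tasks between FCP and TTI; durations here are made up.)
function totalBlockingTime(taskDurationsMs, thresholdMs = 50) {
  return taskDurationsMs
    .filter((d) => d > thresholdMs)
    .reduce((sum, d) => sum + (d - thresholdMs), 0);
}

// Tasks of 30, 120, 55, and 200 ms block for 0 + 70 + 5 + 150 = 225 ms.
totalBlockingTime([30, 120, 55, 200]); // → 225
```

The intuition behind Pollard’s heuristic: the more blocking time a load accumulates, the more likely a user’s tap or click lands during a long task and produces a slow interaction.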
TBT has limitations as an INP substitute.
Pollard noted:
“If you don’t interact during long tasks, then you might not have any INP issues. Also interactions might load MORE JavaScript that is not measure[d] by Lighthouse.”
He adds:
“So it’s a clue, but not a substitute for actually measuring INP.”
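When INP is actually measured, the reported value is roughly the page’s worst interaction latency, with extreme outliers trimmed on interaction-heavy pages (Chrome ignores one of the highest interactions per 50). A simplified, illustrative sketch of that selection logic (field tooling does this for you):

```javascript
// Simplified sketch of how a page's INP value is chosen from its interaction
// latencies: take the worst interaction, ignoring one outlier per 50
// interactions. Illustrative only; real tools measure this automatically.
function estimateINP(interactionDurationsMs) {
  if (interactionDurationsMs.length === 0) return undefined;
  const sorted = [...interactionDurationsMs].sort((a, b) => b - a);
  const outliersToIgnore = Math.floor(sorted.length / 50);
  return sorted[Math.min(outliersToIgnore, sorted.length - 1)];
}

estimateINP([40, 300, 120]); // → 300 (few interactions: the worst one wins)
```

This is why a single unlucky long task during a session can dominate a page’s INP even when most interactions feel fast.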
Some developers optimize for Lighthouse scores without considering the user impact.
Pollard cautions against this, stating:
“A common pattern I see is to delay ALL JS until the user interacts with a page: Great for Lighthouse scores! Often terrible for users 😢:
- Sometimes nothing loads until you move the mouse.
- Often your first interaction gets a bigger delay.”
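For concreteness, the anti-pattern looks something like the following browser-side sketch (the `/bundle.js` path is a placeholder):

```javascript
// The pattern Pollard warns against: defer ALL JavaScript until the user's
// first interaction. Lab tools that never interact record a fast load, but
// the user's first input now also pays for fetching and running all the JS.
// ('/bundle.js' is a placeholder for the page's entire script bundle.)
let loaded = false;
function loadEverything() {
  if (loaded) return;
  loaded = true;
  const script = document.createElement('script');
  script.src = '/bundle.js';
  document.head.appendChild(script);
}
for (const type of ['mousemove', 'touchstart', 'keydown']) {
  window.addEventListener(type, loadEverything, {once: true});
}
```

Because nothing loads until the user moves the mouse or touches the screen, the page scores well in a non-interacting lab run while the user’s first interaction absorbs all the deferred work.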
Understanding how Lighthouse, INP, and TBT relate to one another is necessary for optimizing user experience, and recognizing the limits of lab-based INP measurement helps avoid misguided optimizations.
Pollard’s advice for measuring INP is to focus on real user interactions to ensure performance improvements enhance UX.
As INP remains a Core Web Vital, grasping its nuances is essential for keeping it within an acceptable threshold.
To monitor site performance and INP, measure how real users interact with your pages rather than relying on lab scores alone.
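One common way to collect INP from real users is Google’s web-vitals library; a minimal sketch, where `/analytics` is a placeholder endpoint:

```javascript
// Field monitoring with Google's web-vitals library: report each page's INP
// to an analytics endpoint ('/analytics' is a placeholder). Runs in the browser.
import {onINP} from 'web-vitals';

onINP(({name, value, rating}) => {
  navigator.sendBeacon('/analytics', JSON.stringify({name, value, rating}));
});
```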