Lighthouse
Audit
Run a full Lighthouse audit — performance scores, accessibility checks, SEO analysis, and actionable improvement opportunities. Powered by the PageSpeed Insights API.
What's in the audit
Performance insights from lab to production
Category Scores
Four pillars of page quality
Performance, Accessibility, Best Practices, and SEO — each scored 0–100 with color-coded status so you know where to focus first.
Lab Metrics
Six performance signals
FCP, LCP, TBT, CLS, Speed Index, and TTI — measured in a controlled environment to pinpoint exactly what’s slowing your page down.
Opportunities
Actionable savings
Concrete recommendations ranked by estimated time savings — eliminate render-blocking resources, compress images, and reduce unused code.
Resource Breakdown
See what you’re shipping
Scripts, images, fonts, and stylesheets — get a full weight breakdown and identify what’s bloating your page.
Visual Timeline
Watch the page load
Frame-by-frame filmstrip of the visual loading experience — see exactly when content appears and the page becomes usable.
Go deeper
Lab data tells half the story
See what real users actually experience
Lighthouse shows lab conditions — CrUX shows the real world. Check 6 months of field data from actual Chrome users to see if lab findings match production.
Is this page the problem, or the whole site?
Compare a URL's real-world metrics against its origin. Separate page-level regressions from site-wide issues before optimizing.
Benchmark against competitors
See how your real-world Core Web Vitals compare to up to 4 other sites using Chrome field data.
Track over time
Don't just check once — monitor it
Set up alerts to get notified when Core Web Vitals regress, or subscribe to weekly reports that land in your inbox when CrUX data updates.
Frequently Asked Questions
Common questions about Lighthouse audits, scoring methodology, and how PerfKit helps you analyze and track web performance.
- What is a Lighthouse audit?
- Lighthouse is an open-source automated tool developed by Google that audits web pages for performance, accessibility, best practices, and SEO. It runs a series of tests against a URL and generates a report with scores and actionable recommendations to improve page quality.
- How is this different from running Lighthouse in Chrome DevTools?
- PerfKit runs Lighthouse via the PageSpeed Insights API, which uses Google's infrastructure to audit your page from a standardized environment. This eliminates variability from your local machine (CPU throttling, network conditions, browser extensions) and provides consistent, reproducible results. You also get CrUX field data alongside lab results.
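The PageSpeed Insights endpoint described above is a public Google API, so you can also query it directly. A minimal sketch of building such a request (the endpoint and query parameters are Google's; the helper name and API key are placeholders):

```python
from urllib.parse import urlencode

# Google's public PageSpeed Insights v5 endpoint.
PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def build_psi_url(page_url: str, api_key: str, strategy: str = "mobile") -> str:
    """Build a runPagespeed request URL asking for all four Lighthouse categories."""
    params = [
        ("url", page_url),
        ("key", api_key),          # placeholder; use your own API key
        ("strategy", strategy),    # "mobile" or "desktop"
    ]
    # The API accepts one repeated `category` parameter per category.
    for category in ("performance", "accessibility", "best-practices", "seo"):
        params.append(("category", category))
    return f"{PSI_ENDPOINT}?{urlencode(params)}"

request_url = build_psi_url("https://example.com", "YOUR_API_KEY")
```

Fetching this URL returns one JSON document containing both the Lighthouse lab report and any available CrUX field data for the page.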
- What do the four Lighthouse scores measure?
- Performance measures loading speed, interactivity, and visual stability using lab metrics like FCP, LCP, TBT, CLS, and Speed Index. Accessibility checks WCAG compliance including color contrast, ARIA attributes, and keyboard navigation. Best Practices audits security, modern APIs, and browser compatibility. SEO verifies crawlability, meta tags, structured data, and mobile-friendliness.
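In the PageSpeed Insights JSON, each of these four categories is reported as a 0–1 fraction under `lighthouseResult.categories`; multiplying by 100 gives the familiar 0–100 score. A sketch of extracting them (the response shape matches the PSI v5 API; the sample values here are invented):

```python
# Trimmed sample of a PageSpeed Insights response (values invented).
sample_response = {
    "lighthouseResult": {
        "categories": {
            "performance":    {"title": "Performance",    "score": 0.92},
            "accessibility":  {"title": "Accessibility",  "score": 0.88},
            "best-practices": {"title": "Best Practices", "score": 1.0},
            "seo":            {"title": "SEO",            "score": 0.75},
        }
    }
}

def category_scores(response: dict) -> dict:
    """Map each category id to its 0-100 score."""
    categories = response["lighthouseResult"]["categories"]
    return {cid: round(cat["score"] * 100) for cid, cat in categories.items()}

scores = category_scores(sample_response)
# scores["performance"] is 92, scores["best-practices"] is 100, etc.
```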
- What is the difference between lab data and field data?
- Lab data (Lighthouse) is collected in a controlled, simulated environment with fixed device and network conditions — useful for debugging and reproducing issues. Field data (CrUX) is collected from real Chrome users visiting your site, reflecting actual user experiences across diverse devices and connections. Both are valuable: lab data for diagnostics, field data for understanding real-world impact.
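Conveniently, a single PageSpeed Insights response carries both kinds of data, so you can compare them side by side. A sketch for LCP, assuming the standard PSI v5 keys (`lighthouseResult.audits` for lab, `loadingExperience.metrics` for field); the values are invented:

```python
# Trimmed sample: the same PSI response holds lab and field LCP (values invented).
sample = {
    "lighthouseResult": {
        "audits": {
            # Lab: one simulated run, milliseconds.
            "largest-contentful-paint": {"numericValue": 3100.0},
        }
    },
    "loadingExperience": {
        "metrics": {
            # Field: 75th-percentile CrUX value, milliseconds.
            "LARGEST_CONTENTFUL_PAINT_MS": {"percentile": 2200},
        }
    },
}

lab_lcp_ms = sample["lighthouseResult"]["audits"][
    "largest-contentful-paint"]["numericValue"]
field_lcp_ms = sample["loadingExperience"]["metrics"][
    "LARGEST_CONTENTFUL_PAINT_MS"]["percentile"]

# Positive gap: the throttled lab run is slower than what real users see.
gap_ms = lab_lcp_ms - field_lcp_ms
```

A large gap in either direction is a signal: lab much slower usually means throttling is harsher than your audience's devices; field much slower suggests conditions the lab run doesn't reproduce.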
- How often should I run Lighthouse audits?
- Run audits after every significant deployment, and periodically (weekly or monthly) to catch regressions. If you're actively optimizing, run before and after each change to measure impact. PerfKit saves your audit history so you can track scores over time and spot trends.
- Why do my Lighthouse scores vary between runs?
- Lighthouse scores can fluctuate due to server response times, third-party scripts loading at different speeds, CDN cache state, and network variability. Performance scores are especially sensitive — a few hundred milliseconds difference in LCP or TBT can shift the score. Running multiple audits and comparing trends is more reliable than focusing on a single result.
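The "multiple audits" advice can be made concrete with a median: unlike a single run or a mean, the median shrugs off one unlucky outlier. A minimal sketch with hypothetical scores from five back-to-back runs:

```python
from statistics import median

# Hypothetical performance scores from five back-to-back runs of one page.
run_scores = [78, 85, 81, 74, 83]

# Summarize with the median rather than trusting any single run.
stable_score = median(run_scores)
# And note the spread, which shows how noisy single runs are.
spread = max(run_scores) - min(run_scores)
```

Here a single run could report anywhere from 74 to 85, while the median settles on a representative 81.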