A core web vitals audit is supposed to tell you what's wrong with your site's performance and give you a plan to fix it in the right order. Most don't. Most agency CWV audits are a PageSpeed Insights screenshot with a handful of generic recommendations. Activity, not a plan. If your team implemented everything the tool suggested and your LCP still failed, that's the pattern this guide addresses.
This guide does three things competing guides don't. It separates the CWV audit (the diagnosis plus priority plan) from CWV remediation (the engineering work that executes the fixes). Most guides conflate them because tool-exported reports look like audits. It gives you a way to decide which fix ships first when you have multiple failing metrics across multiple templates and finite engineering cycles. And it shows what a substantive CWV audit deliverable should contain: a checklist you can use to evaluate the one you received or the one you're about to commission.
David Drewitz built Rank Outlaw's technical SEO audit methodology on a diagnosis-first model: understand what's actually broken before recommending any fix. This guide applies that methodology to core web vitals using Google's published CWV thresholds as the quantitative floor. The thresholds are Google's. The priority logic for which fix ships first is Rank Outlaw's.
If you want to skip to the evaluation checklist, it's the sixth section.
What a Core Web Vitals Audit Is (and What It Is Not)
A core web vitals audit is a diagnosis artifact plus a prioritized remediation plan. The diagnosis names what's failing on your site (which metrics on which templates for which user segments) and why it's failing (which underlying site-build issue causes each failure). The plan names which fix ships first and why it ships in that order. The audit is not the implemented fixes themselves; that's remediation, a separate work package.
This boundary matters because most of what gets sold as a "core web vitals audit" is actually a tool export: a PageSpeed Insights screenshot, a Lighthouse report, or a Site Audit dashboard. Tool exports are an input to an audit, not the audit. They tell you what's failing today. They don't tell you why, they don't tell you what to fix first, and they don't separate the sitewide patterns from the one-page anomalies.
A substantive audit translates tool output into three things the tool can't produce on its own. First, template-level findings, because one CLS failure on your blog template affects every blog post on the site, and that's an architecture problem, not a page problem. Second, a diagnostic read: when LCP fails at 5 seconds, what does that tell you about the underlying site build? Third, a priority-ordered remediation plan that accounts for traffic exposure and fix complexity. If a deliverable doesn't do these three things, it's a tool screenshot with a bill attached.
The Three Metrics Your Core Web Vitals Audit Measures
Core Web Vitals is Google's set of three user-experience metrics: Largest Contentful Paint (LCP), Interaction to Next Paint (INP), and Cumulative Layout Shift (CLS). Each has a published good/poor threshold. A substantive audit reads each failing metric not as a number to improve, but as a diagnostic signal pointing at a specific class of underlying site-build issue.
| Metric | "Good" threshold | "Poor" threshold | What a failure signals |
|---|---|---|---|
| Largest Contentful Paint (LCP) | ≤ 2.5 seconds | > 4.0 seconds | Image delivery issues, render-blocking scripts, or slow server response |
| Interaction to Next Paint (INP) | ≤ 200 milliseconds | > 500 milliseconds | Third-party script overload, heavy main-thread work, or inefficient event handlers |
| Cumulative Layout Shift (CLS) | ≤ 0.1 | > 0.25 | Layout instability, dynamically injected content, or unreserved media dimensions |
LCP failing at 4 seconds on a content-heavy template signals one of three things you should investigate in order: the hero image or main visual isn't being delivered efficiently (wrong format, wrong size, lazy-loaded when it should load eagerly, or missing fetch-priority hints), a render-blocking script is delaying paint (a third-party tag or an unminified stylesheet loading before the main content), or the server is taking too long to return the HTML (hosting, caching, or database response). The audit's job is to tell you which of the three it is on your site.
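You can separate the third possibility from the first two before opening DevTools. Here's a minimal sketch, assuming Python with the requests library; the URL is a hypothetical placeholder, and one synthetic probe is a lab-style shortcut rather than field data, but it tells you whether the server alone is eating the LCP budget:

```python
# A rough server-side probe: is TTFB alone eating the LCP budget?
# Assumes the `requests` library; the URL is a hypothetical placeholder.
import requests

def ttfb_seconds(url: str) -> float:
    """Time to first byte for one request: a crude proxy for server response."""
    resp = requests.get(url, stream=True, timeout=30)  # stream=True skips the body
    return resp.elapsed.total_seconds()  # request sent -> response headers parsed

ttfb = ttfb_seconds("https://example.com/product-template-page")
if ttfb > 0.8:  # 0.8s mirrors Google's published "good" TTFB guidance
    print(f"TTFB {ttfb:.2f}s: look at hosting/caching before front-end fixes")
else:
    print(f"TTFB {ttfb:.2f}s: server looks fine; check image delivery and scripts")
```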
INP failing at 500 milliseconds usually signals third-party scripts hogging the main thread: analytics, chat widgets, ad networks, tag managers. Google replaced FID with INP in March 2024 specifically because FID missed these mid-session interaction problems. Any source citing FID as the current metric is stale and should be treated with caution.
CLS failing at 0.2 signals the template, not the page. It's a theme-level or CMS-level problem: dynamically injected content without reserved space, ads loading into unreserved containers, fonts swapping without font-display: optional, or media without width/height attributes. Fixing individual pages doesn't solve it. Fixing the template does.
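One of those causes is cheap to detect statically. A hedged sketch, assuming Python with requests and BeautifulSoup installed (the URL is hypothetical); it flags images shipped without reserved dimensions on a template, though it won't catch injected content or ad-slot shifts, which need browser instrumentation:

```python
# Flag <img> tags shipped without width/height attributes on a template URL.
# Assumes requests + BeautifulSoup; the URL is a hypothetical placeholder.
import requests
from bs4 import BeautifulSoup

def unreserved_images(url: str) -> list[str]:
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return [
        img.get("src", "(inline)")
        for img in soup.find_all("img")
        if not (img.get("width") and img.get("height"))
    ]

for src in unreserved_images("https://example.com/blog/sample-post"):
    print("missing reserved dimensions:", src)
```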
How Do You Run a Core Web Vitals Audit?
A core web vitals audit is a workflow, not a tool. The workflow is the methodology. The tools are interchangeable implementations of each step. If your budget or tool stack changes, the workflow survives.
Step 1: Collect field data
You need real-user performance data, not lab simulations. Use Google Search Console's Core Web Vitals report (free, URL-group-level), the CrUX API (free, URL-level for pages with enough traffic), or a real-user monitoring platform like DebugBear or SpeedCurve. Field data tells you what your actual users experienced over the last 28 days. Lab data, the Lighthouse portion of a PageSpeed Insights report, tells you what one synthetic run produced under controlled conditions. Both are useful. Field data is authoritative for SEO impact.
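For the CrUX API route, the query is a single POST. A minimal sketch, assuming Python with the requests library and a Google API key (the key and URL below are placeholders); it returns the rolling-28-day 75th-percentile values Google actually evaluates:

```python
# Query the CrUX API for one URL's field p75 values (rolling 28-day window).
# Assumes the `requests` library; the API key and URL are placeholders.
import requests

CRUX_API_KEY = "YOUR_API_KEY"
ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"

def field_p75(url: str, form_factor: str = "PHONE") -> dict[str, float]:
    resp = requests.post(
        f"{ENDPOINT}?key={CRUX_API_KEY}",
        json={"url": url, "formFactor": form_factor},
        timeout=30,
    )
    resp.raise_for_status()
    metrics = resp.json()["record"]["metrics"]
    return {
        name: float(metrics[name]["percentiles"]["p75"])  # CLS arrives as a string
        for name in (
            "largest_contentful_paint",   # milliseconds
            "interaction_to_next_paint",  # milliseconds
            "cumulative_layout_shift",    # unitless score
        )
        if name in metrics  # low-traffic URLs may be missing metrics entirely
    }

print(field_p75("https://example.com/"))
```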
Step 2: Segment by template, not by page
CWV issues usually cluster at the template level. Every blog post, every product page, every landing page in the same template family shares the same performance profile. Group your pages by template and identify which template has the worst CWV profile by traffic-weighted impact.
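The grouping itself can be mechanical if your URL paths encode the template. A sketch assuming Python, with illustrative numbers and a first-path-segment rule standing in for your site's real routing:

```python
# Traffic-weighted template segmentation. Assumes you've exported per-URL
# sessions and p75 LCP; the data and the first-path-segment rule are
# illustrative stand-ins for your site's real routing.
from collections import defaultdict
from urllib.parse import urlparse

pages = [  # (url, monthly_sessions, p75_lcp_seconds)
    ("https://example.com/product/a", 90_000, 4.2),
    ("https://example.com/product/b", 90_000, 4.1),
    ("https://example.com/blog/post-1", 40_000, 2.8),
]

templates = defaultdict(list)
for url, sessions, lcp in pages:
    segment = urlparse(url).path.strip("/").split("/")[0] or "home"
    templates[segment].append((sessions, lcp))

for name, rows in sorted(templates.items()):
    total = sum(s for s, _ in rows)
    weighted = sum(s * lcp for s, lcp in rows) / total
    print(f"{name}: {total:,} sessions, traffic-weighted p75 LCP {weighted:.1f}s")
```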
Step 3: Build a third-party script inventory
Third-party scripts are the single most common cause of INP and CLS failures. Inventory every analytics tag, chat widget, ad tag, tag-manager container, A/B testing script, heatmap tool, and dynamic content loader running on each template. Note which are load-blocking, which are deferred, and which run on user interaction.
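The static half of that inventory can be scripted. A hedged sketch, assuming Python with requests and BeautifulSoup (the URL is a placeholder); it classifies each external script by loading mode and flags third-party hosts, though scripts injected at runtime by tag managers won't appear, and attributing per-script INP cost still requires browser profiling:

```python
# Static half of the script inventory: loading mode + third-party flag per
# external <script>. Assumes requests + BeautifulSoup; the URL is a placeholder.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse

def script_inventory(url: str) -> list[tuple[str, str, bool]]:
    first_party = urlparse(url).netloc
    soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
    rows = []
    for tag in soup.find_all("script", src=True):
        host = urlparse(tag["src"]).netloc or first_party  # relative = first-party
        if tag.has_attr("async"):
            mode = "async"
        elif tag.has_attr("defer"):
            mode = "defer"
        else:
            mode = "blocking"  # parser-blocking unless moved or deferred
        rows.append((tag["src"], mode, host != first_party))
    return rows

for src, mode, third_party in script_inventory("https://example.com/"):
    print(f"{mode:8} {'THIRD-PARTY' if third_party else 'first-party':12} {src}")
```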
Step 4: Diagnose root causes per template per metric
For each template-level failure, walk the diagnostic read from section two. Is the LCP failure an image problem, a script problem, or a server problem? Is the CLS failure a font problem, an ad-slot problem, or a content-injection problem? The tool can't answer this. The auditor does.
Step 5: Apply the priority logic
This is where the triage section below comes in. Tools give you a list. The priority logic sequences the list.
No single step in this workflow requires a specific tool. Ahrefs, Semrush, and Screaming Frog all have CWV audit features. Any of them can collect the data. The workflow is what matters.
Which Core Web Vitals Audit Finding Should Ship First?
This is the question most CWV audit deliverables fail at. A tool export lists every failing metric on every URL. A substantive audit sequences the list. Here's the priority logic Rank Outlaw applies: named inputs, explicit reasoning, priority-ordered output.
The four inputs. Every CWV finding is scored on four dimensions. Three come from data the audit has already collected; the fourth, the fix-complexity estimate, requires operator judgment.
| Input | What it measures | Source | How it shapes priority |
|---|---|---|---|
| Failing metric class | Which CWV metric fails (LCP / INP / CLS) | Field data from workflow Step 1 | Each metric has a different fix-complexity profile |
| Page template affected | Which template family the failure lives in (homepage / product / blog / category) | Template segmentation from workflow Step 2 | Template-level fixes beat page-level fixes by scope |
| Template traffic volume | Last-28-day traffic exposure of the affected template (high / medium / low) | Field data from workflow Step 1 | Remediation ROI multiplies with traffic exposure |
| Fix complexity estimate | Engineering effort to ship (hours / days / sprints / architectural) | Operator judgment from workflow Step 4 | Complexity breaks ties when the other three inputs tie |
The priority logic. A failure on a high-traffic template at Google's "Poor" threshold (LCP > 4.0s, INP > 500ms, CLS > 0.25) ships first, because remediation ROI multiplies with traffic exposure. A failure on a low-traffic template at the same threshold ships later, even if the metric is worse, because the engineering cost is the same but the user-experience and ranking impact is smaller. A "Needs Improvement" failure on a high-traffic template often ships before a "Poor" failure on a low-traffic template for the same reason. Fix complexity breaks ties. If two findings have equal traffic and threshold severity, the faster fix ships first to build momentum.
The priority-ordered output. The remediation plan lists findings in the order they should ship, with a one-line rationale per finding that names which input drove the priority. "LCP Poor on product template at 4.2s, 180k monthly sessions, ~2 sprint fix: priority 1 because the product template drives 40% of organic traffic and the fix is one template, not one page." A reader can challenge the priority at every step and understand why each finding sits where it does.
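As a sketch of how the four inputs become an ordered plan, here's a minimal Python version of the sort; the findings and rank tables are illustrative, and the complexity estimate is still human judgment, exactly as above:

```python
# The four-input priority sort: traffic exposure dominates, then threshold
# severity, then fix complexity as the tie-breaker. Findings and rank tables
# are illustrative; the complexity estimate comes from engineering judgment.
from dataclasses import dataclass

TRAFFIC = {"high": 0, "medium": 1, "low": 2}
SEVERITY = {"poor": 0, "needs-improvement": 1}
COMPLEXITY = {"hours": 0, "days": 1, "sprints": 2, "architectural": 3}

@dataclass
class Finding:
    metric: str      # "LCP" / "INP" / "CLS"
    template: str
    traffic: str     # template traffic tier
    severity: str    # Google status for the failing metric
    complexity: str  # operator estimate
    rationale: str   # one line naming the input that drove the priority

findings = [
    Finding("CLS", "blog", "low", "poor", "days",
            "worse score, but low traffic caps the remediation ROI"),
    Finding("LCP", "product", "high", "poor", "sprints",
            "drives 40% of organic traffic; the fix is one template, not one page"),
    Finding("INP", "homepage", "high", "needs-improvement", "hours",
            "high exposure and a fast fix; ships before low-traffic Poor items"),
]

findings.sort(key=lambda f: (TRAFFIC[f.traffic],
                             SEVERITY[f.severity],
                             COMPLEXITY[f.complexity]))

for i, f in enumerate(findings, 1):
    print(f"priority {i}: {f.metric} {f.severity} on {f.template} -- {f.rationale}")
```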
Why this works. Google's own guidance, "fix Poor before Needs Improvement," is status-level and useful as a first pass, but it doesn't help when you have a dozen Poor findings competing for engineering cycles. The four-input logic is the next layer down. It doesn't eliminate judgment (the complexity estimate always requires experience), but it makes the judgment explicit and defensible to the engineering lead who has to sign off on the sprint plan.
How Does a Core Web Vitals Audit Fit Into a Broader Technical SEO Audit?
A core web vitals audit is usually treated as a standalone performance topic. That framing misses what CWV findings tell you about the broader technical health of the site. A good CWV audit is an input to the broader technical SEO audit, not a silo next to it.
Three integration points illustrate the pattern.
CLS at the template level is an architecture signal
If your blog template has a 0.3 CLS score and your product template has a 0.05 CLS score, the problem is not "blogs are slow." The problem is the blog template's build: probably ad slots, dynamic content injection, or a theme-level layout issue. Fixing individual blog posts is painting the walls in a house with a structural problem. The broader audit identifies the template as the unit of remediation.
Render-blocking scripts overlap with crawl-budget waste
The same scripts that hurt LCP on user-facing page loads also slow Googlebot's rendering. A technical audit that looks at CWV and crawl stats together often finds that one script fix produces two wins: faster CWV on real user visits and faster rendering for indexation. Treating CWV as a standalone topic misses that overlap.
Template-level LCP failure can signal indexation risk
If a template consistently takes 6+ seconds to deliver the main content, Google may not be fully rendering the page during crawl. A page that renders to Googlebot as half-loaded will have fewer indexed signals than a page that renders completely. Search Engine Land's broader CWV coverage notes this connection; most tool-vendor content doesn't, because it's outside the tool's measurement surface.
This is the integration lens a broader technical SEO audit applies. CWV findings become inputs to architectural decisions, not isolated performance tasks. One CWV audit insight, a template-level CLS failure, may unlock three technical-SEO wins when investigated properly. Or the reverse: a suspected content problem may turn out to be a CWV-adjacent rendering problem. The audit asks those questions; the siloed tool export doesn't.
What a Substantive Core Web Vitals Audit Deliverable Should Contain
Whether you're evaluating a CWV audit you've received from an agency or commissioning one, here's the deliverable content checklist. These are properties of the artifact itself, not signals about agency trustworthiness. If the deliverable has these, it's substantive. If it's missing most of them, it's a tool export with a cover page.
The six checklist items:

- Findings depth. A report that says "your site's LCP is 3.4 seconds" tells you nothing actionable. A report that says "your product template LCP is 4.2 seconds (180k monthly sessions), your blog template LCP is 2.8 seconds (40k monthly sessions), your homepage LCP is 2.1 seconds (120k monthly sessions)" gives you a prioritization surface.
- Script inventory. The deliverable should name every third-party tag running on each template and attribute its estimated LCP, INP, and CLS impact. Without this inventory, no INP or CLS remediation plan is complete. Third-party scripts are the most common root cause.
- Baseline + validation. The deliverable should specify the current field-data baseline (LCP, INP, CLS by template at the 75th percentile) and how the team will validate each fix against real-user data after deployment (see the sketch at the end of this section). Without baseline and validation, you can't prove a fix worked; you can only prove the lab score moved.
- Priority plan. Not a list of findings, an ordered list with a one-line rationale per finding naming the input that drove its priority (the four-input logic from section four applied to your specific site). A reader should be able to challenge the priority at every step.
- Estimates. The deliverable should say how long each remediation work package takes (hours for a CDN config change, days for a script refactor, sprints for a template rebuild) and what it costs if outsourced. Vague estimates protect the seller; specific estimates help you plan.
- Scope. What did the audit not examine? Mobile but not desktop? Public pages but not logged-in? The last 28 days of field data but not longer? Every audit has boundaries. Substantive deliverables name them.

Three red flags in the deliverable itself (not in the agency, in the document):
- Tool-screenshot-only deliverable. The deliverable is a PDF of PageSpeed Insights or Lighthouse output with minor commentary. Tool output is an input to the audit, not the audit. A substantive deliverable translates tool output into diagnosis and plan.
- No prioritization beyond Google's status rubric. The deliverable lists findings grouped by Poor / Needs Improvement / Good with no next-layer priority logic. The status rubric is a first pass, not a plan.
- No baseline-and-validation section. The deliverable proposes fixes without specifying how the team will verify each fix moved the real-user metric for the affected template. Without validation, you're implementing changes on faith.
If the deliverable you're evaluating has the six checklist items and doesn't trip any of the three red flags, it's substantive. If it's missing three or more checklist items, or trips any of the red flags, ask for a revision before you pay.
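To make the baseline-and-validation checklist item concrete: a minimal sketch, assuming Python with the requests library and a CrUX API key (the key, URL, and baseline number are all placeholders). Record the p75 before the fix, let a full 28-day CrUX window elapse after deployment, then compare:

```python
# Baseline-and-validation loop: record p75 before the fix, let a full 28-day
# CrUX window elapse after deployment, then compare. Assumes `requests`;
# the API key, URL, and baseline number are placeholders.
import requests

CRUX_API_KEY = "YOUR_API_KEY"

def crux_p75(url: str, metric: str) -> float:
    resp = requests.post(
        "https://chromeuxreport.googleapis.com/v1/records:queryRecord"
        f"?key={CRUX_API_KEY}",
        json={"url": url, "formFactor": "PHONE"},
        timeout=30,
    )
    resp.raise_for_status()
    return float(resp.json()["record"]["metrics"][metric]["percentiles"]["p75"])

baseline_ms = 4200  # pre-fix p75 LCP from the audit's baseline section
current_ms = crux_p75("https://example.com/product/a", "largest_contentful_paint")
verdict = "fix validated" if current_ms < baseline_ms else "not validated yet"
print(f"LCP p75: {baseline_ms} -> {current_ms:.0f} ms ({verdict})")
```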
Core Web Vitals Audit FAQ
How long should a Core Web Vitals audit take?
For a site under 500 templates' worth of pages, a substantive CWV audit typically takes three to five business days of focused work: one day for field-data collection and template segmentation, one to two days for third-party script inventory and root-cause diagnosis, and one to two days for priority sequencing, baseline documentation, and deliverable production. Audits that claim to be complete in an hour are tool exports. Audits that take three weeks usually include the first sprint of remediation work, which is fine but should be scoped separately.
What should a Core Web Vitals audit cost?
Published pricing for standalone CWV audits ranges from around $900 at the entry tier to $2,200+ for deeper methodology work, per corewebvitals.io's published tiers. Agency pricing varies by site complexity and whether the audit bundles into a broader technical SEO engagement. A substantive standalone audit for a mid-size site usually falls in the $1,500-$5,000 range. Pricing under $500 is almost always a tool subscription rebranded as an audit.
Is PageSpeed Insights enough to do a CWV audit myself?
PageSpeed Insights is an excellent diagnostic tool. It combines lab data from Lighthouse with field data from the Chrome UX Report in a single view. It's one of the steps in the workflow described above, not a substitute for the workflow. You can absolutely run a CWV audit yourself if you have the engineering time to apply the diagnostic read, the template segmentation, the third-party inventory, and the priority logic across your whole site. Backlinko's Core Web Vitals guide covers the foundational measurement side in depth if you want additional reference. Most in-house teams don't have that time, which is why the deliverable-evaluation checklist matters.
Does Core Web Vitals actually affect rankings?
Yes, but less than most marketing pages claim. CWV is a confirmed Google ranking factor. Passing "good" thresholds removes a ranking penalty, and failing severely hurts. It is not a growth lever. Google's own Search Advocates have publicly framed CWV as closer to a tiebreaker than a primary ranking factor. The reason to audit CWV is real-user experience (which affects conversion and retention), with ranking stabilization as the secondary benefit, not the reverse. Search Engine Journal's analysis of CWV and PageSpeed Insights scoring covers the field-vs-lab distinction that drives most of the confusion here.
How often should we audit core web vitals?
Quarterly is enough for most sites. The CrUX dataset operates on a rolling 28-day window, so sub-monthly reauditing rarely surfaces signal changes. You're reading noise. Quarterly cadence catches seasonal third-party-script bloat and template updates, which is where most regressions come from. The exception is after a major site change (CMS migration, theme rebuild, checkout redesign). Run an audit immediately to catch regressions while they're still isolated to the change.
Want to see how broader site architecture connects to Core Web Vitals findings before talking to anyone? Read Website Architecture for SEO, the sibling piece on how structural decisions shape CWV outcomes at the template level.