Atomize
Updated May 4, 2026 · 10 min read

Figma Contrast Audit: Find WCAG Failures at Scale

Atomize's Contrast Audit scans an entire Figma file for WCAG AA failures across text, fills, and strokes - and groups results by token, not by layer.

Contrast Audit is the Atomize scanner that walks a Figma file - or a single page or selection - and reports every text, fill, and stroke that fails the WCAG 2.1 contrast threshold for its element type. Unlike one-element-at-a-time checkers, it audits thousands of layers in a single pass, samples real backgrounds through layered fills and image parents, resolves each failing color back to its Variable name, and groups the results by Issues, Atoms, and Primitives so the fix lives at the token layer rather than on the symptom layer. The math follows the W3C formula: 4.5:1 for body text, 3:1 for large text and UI elements, with composite alpha applied along the parent chain so a 50% red on white is judged as the pink it actually appears. This guide explains what Contrast Audit checks, how the ratio is computed on real layered backgrounds, how it compares to Stark and the popular community plugins, and how to slot it into a normal release workflow.

What WCAG actually requires

WCAG 2.1 success criterion 1.4.3 sets a 4.5:1 minimum for the contrast between text and its background, with a 3:1 carve-out for large text. Criterion 1.4.11 extends a 3:1 minimum to non-text UI components and graphical objects - icons, focus rings, the visible part of a checkbox, the stroke around an input. The math is the relative-luminance ratio defined in the spec: linearize each sRGB channel, weight by 0.2126 / 0.7152 / 0.0722, take (L1 + 0.05) / (L2 + 0.05), where L1 is the lighter of the two luminances. Two colors that pass on paper can still feel weak in practice - WCAG 2 ignores font weight and polarity - but it is the legal baseline most teams ship against. The full formula and intent live in the W3C WCAG 2.1 Contrast (Minimum) reference and the companion Non-text Contrast page.
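As a runnable sketch, the spec's math looks like this. Channel values are 0–1 sRGB, and the constants (0.03928, 12.92, the 2.4 exponent, the channel weights) come straight from the WCAG 2.1 definition; the function names are illustrative, not Atomize internals:

```typescript
type RGB = { r: number; g: number; b: number };

// Linearize one sRGB channel per the WCAG 2.1 relative-luminance definition.
function linearize(c: number): number {
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

function relativeLuminance({ r, g, b }: RGB): number {
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// (L1 + 0.05) / (L2 + 0.05), with L1 the lighter of the two luminances.
function contrastRatio(a: RGB, b: RGB): number {
  const [hi, lo] = [relativeLuminance(a), relativeLuminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// Black on white is the maximum possible ratio:
contrastRatio({ r: 0, g: 0, b: 0 }, { r: 1, g: 1, b: 1 }); // 21
```

The 0.05 flare terms are why the ratio tops out at 21:1 rather than going infinite for pure black on pure white.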

Large text and the 3:1 carve-out

Large text in WCAG terms means 18 pt regular or 14 pt bold and above - roughly font/size/24 regular or font/size/18 semibold in token vocabulary. Atomize applies the carve-out automatically: for any TEXT node whose fontSize and fontWeight clear the threshold, the AA bar drops from 4.5:1 to 3:1. Designers rarely remember the exact pixel cutoff under pressure, which is why the audit decides per-node rather than asking you to classify each layer manually.

What Contrast Audit checks

The scanner inspects three property classes across the entire visible node tree. Text fills, non-text fills (the visible color of frames, components, and shapes), and visible strokes are each evaluated against the WCAG threshold appropriate for their element type. Hidden layers, layers inside groups whose names begin with bg section, and component-set thumbnails are skipped to avoid noise. The walk is iterative and stack-based, capped at 100 levels deep, and yields control every twenty nodes so the canvas stays responsive on a fifty-thousand-layer file.
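The traversal described above can be sketched as a plain stack walk. AuditNode is a stand-in for Figma's node type, not the real API, and the depth cap matches the 100-level limit mentioned above; in the actual plugin the loop would also await a short timeout every twenty visited nodes to keep the canvas responsive:

```typescript
// Illustrative stand-in for a Figma scene node.
interface AuditNode {
  name: string;
  visible: boolean;
  children?: AuditNode[];
}

const MAX_DEPTH = 100;

// Iterative, stack-based walk: hidden subtrees are pruned entirely,
// and anything deeper than MAX_DEPTH levels is ignored.
function* walkVisible(root: AuditNode): Generator<AuditNode> {
  const stack: Array<{ node: AuditNode; depth: number }> = [{ node: root, depth: 0 }];
  while (stack.length > 0) {
    const { node, depth } = stack.pop()!;
    if (!node.visible || depth > MAX_DEPTH) continue; // prune: children never pushed
    yield node;
    for (const child of node.children ?? []) {
      stack.push({ node: child, depth: depth + 1 });
    }
  }
}
```

Because the hidden check runs before children are pushed, a hidden frame hides its whole subtree in one step rather than being re-tested per descendant.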

What Contrast Audit evaluates

Element type | Figma property checked | AA threshold applied
Text (body) | TEXT node fill | 4.5:1
Text (large) | TEXT node fill, fontSize ≥ 18 or ≥ 14 bold | 3:1
Background / Shape | Solid fill on FRAME, COMPONENT, INSTANCE, RECTANGLE, etc. | 3:1
Border | Solid stroke on any non-text node | 3:1
Image background | Sampled center pixel of image / gradient parent | Same as element type

How the scan computes contrast

Computing a ratio is easy when both colors are solid hex values. The hard part is figuring out what the background actually is on a real Figma layer, where fills sit at varying opacities on top of sibling shapes, parent frames, and image objects. Contrast Audit resolves the effective background per node before it touches the WCAG formula, so the numbers it reports match what a user would see on the rendered canvas.

Composite alpha along the parent chain

For each candidate layer the scanner walks upward through parent.parent.parent..., collecting every overlapping sibling and ancestor fill that is visible and non-transparent. Each layer is alpha-composited from front to back using the standard (srcA × src) + (dstA × dst × (1 - srcA)) formula. The final composited color is the one fed into the contrast calculation. That means a text/secondary token at 70% opacity over a panel that is itself 90% over the page background is judged against the actual visible background - not the literal hex of either parent in isolation.
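The compositing step can be sketched with the standard "over" operator quoted above. The function names are illustrative, not Atomize internals; the fills array is ordered front to back, matching the walk described in the paragraph:

```typescript
type RGBA = { r: number; g: number; b: number; a: number };

// Composite src over dst: out = (srcA × src) + (dstA × dst × (1 - srcA)),
// then un-premultiply by the accumulated alpha.
function over(src: RGBA, dst: RGBA): RGBA {
  const a = src.a + dst.a * (1 - src.a);
  const blend = (s: number, d: number) =>
    a === 0 ? 0 : (src.a * s + dst.a * d * (1 - src.a)) / a;
  return { r: blend(src.r, dst.r), g: blend(src.g, dst.g), b: blend(src.b, dst.b), a };
}

// Fold a front-to-back list of fills down onto an opaque base color.
function effectiveBackground(fills: RGBA[], base: RGBA): RGBA {
  return fills.reduceRight((dst, src) => over(src, dst), base);
}

// 50% red over white is judged as the pink it actually appears:
effectiveBackground([{ r: 1, g: 0, b: 0, a: 0.5 }], { r: 1, g: 1, b: 1, a: 1 });
// → { r: 1, g: 0.5, b: 0.5, a: 1 }
```

The reduceRight starts at the back-most fill and works forward, so the front-most layer ends up on top, exactly as it renders on the canvas.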

Sampling image and gradient backgrounds

When the chain includes an image fill or any gradient, Atomize exports the parent at 32-pixel width, decodes the PNG inline, and samples the pixel at the center of the child's bounding box. Sampling once per parent, then reusing the decoded buffer for every child, keeps the audit fast even on a hero section with dozens of overlay layers. The result is marked with an image bg warning chip so reviewers know the ratio is a single-point estimate rather than a guarantee across the full image area - this is the case where an APCA-style algorithm or a manual squint test still wins.
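The center-pixel lookup itself is pure coordinate math: map the child's bounding-box center into the 32-pixel-wide export of the parent, assuming a uniform scale. The names below are illustrative, not the real Atomize internals:

```typescript
interface Box { x: number; y: number; width: number; height: number }

const EXPORT_WIDTH = 32; // parent is exported at 32 px wide, height scaled to match

// Which pixel of the parent's export corresponds to the child's center?
function samplePixel(parent: Box, child: Box): { px: number; py: number } {
  const scale = EXPORT_WIDTH / parent.width;
  const exportHeight = Math.max(1, Math.round(parent.height * scale));
  const cx = child.x + child.width / 2 - parent.x; // child center, parent-relative
  const cy = child.y + child.height / 2 - parent.y;
  // Clamp so a child overhanging the parent still lands inside the bitmap.
  const px = Math.min(EXPORT_WIDTH - 1, Math.max(0, Math.floor(cx * scale)));
  const py = Math.min(exportHeight - 1, Math.max(0, Math.floor(cy * scale)));
  return { px, py };
}
```

Because the decoded buffer is cached per parent, every child overlay reuses the same bitmap and only this cheap mapping runs per node.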

Resolving the offending color back to a Variable

If the failing fill or stroke is bound to a Figma Variable, the report names it. Atomize follows alias chains until it lands on a literal - so a surface/inverse semantic that aliases gray/950 produces both labels in the result. When no alias is present but a local primitive happens to hold the same hex, the primitive name is used as a fallback so the row is not anonymous. That mapping is what makes the Atoms and Primitives tabs useful: instead of seeing the same low-contrast hex repeated across 40 components, you see the one token that is the real source of the problem.
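The alias walk can be sketched over a plain map; the real plugin reads Figma Variables, but the loop is the same idea. The token names and hex value in the example are hypothetical:

```typescript
// A variable either aliases another variable or holds a literal hex.
type VariableValue = { alias: string } | { hex: string };

// Follow the alias chain to a literal, recording every name visited.
// Cycles and dangling aliases terminate with hex: null instead of looping.
function resolveAlias(
  name: string,
  vars: Map<string, VariableValue>,
): { chain: string[]; hex: string | null } {
  const chain: string[] = [];
  const seen = new Set<string>();
  let current: string | undefined = name;
  while (current !== undefined && !seen.has(current)) {
    seen.add(current);
    chain.push(current);
    const value = vars.get(current);
    if (value === undefined) return { chain, hex: null }; // dangling alias
    if ("hex" in value) return { chain, hex: value.hex }; // reached a literal
    current = value.alias;
  }
  return { chain, hex: null }; // cycle detected
}
```

Keeping the full chain is what lets the report show both the semantic label and the primitive it resolves to in the same row.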

Three views of the same audit

The result panel has three tabs - Issues, Atoms, and Primitives - because the same fail can be looked at from three different angles. Issues is the per-layer view that most other contrast plugins stop at. Atoms is the deduplicated view that aggregates failures by semantic token. Primitives is the raw-value view that shows whether the underlying primitive is the problem. Triaging top-down through these tabs is faster than fixing one node at a time.

[Screenshot: Atomize Contrast Audit results panel - WCAG findings grouped by Text, Background, Shape, and Border, with contrast ratios, A and AA pass/fail badges, foreground and background swatches per row, Skip and Skip-all controls, and an Export Issues Report button]
Contrast Audit groups failing layers by element type. Each row shows the ratio, the foreground and background colors, A and AA pass/fail badges, and a Skip control for intentional cases like decorative gradients.

Issues - the per-layer view

Issues lists every failing layer with its node name, layer path, page, contrast ratio, and the foreground/background swatches. Click a row to jump to the node on the canvas; clicking Skip removes it from the visible list without editing the file. Use this tab when you have a small report - under a hundred items - or when a designer is iterating on one screen and wants the canvas-level detail.

Atoms - the token-level view

Atoms aggregates findings by the bound semantic Variable. If text/secondary fails on six different surfaces, you see one row showing the worst ratio it ever produced and the list of background tokens it conflicts with. This is the right tab when you want to fix the design system, not the design - changing the alias once propagates to every component that binds to it. Most large reports collapse into a manageable handful of token rows here.

Primitives - the raw-value view

Primitives strips away the semantic layer and shows the raw color values that are responsible for the failures. It is the view that makes the case for adding or splitting a primitive - if gray/500 keeps surfacing in low-contrast pairs, the answer is usually a new step in the gray ramp rather than a per-component override. Pair it with the primitive and semantic token architecture so the new value is added at the right layer.

WCAG thresholds at a glance

The thresholds Contrast Audit applies map directly to the WCAG 2.1 wording. Reading the conditional below in token vocabulary makes it easier to remember which rule applies to which kind of layer - and explains why the same fill can fail as text yet pass as a UI element on the same background.
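A minimal version of that per-node conditional, with the pixel cutoffs written in the article's token vocabulary (roughly 24 px regular, 18 px semibold); the exact weight cutoff and the names here are assumptions for illustration:

```typescript
interface NodeInfo {
  type: string;        // 'TEXT', 'FRAME', 'RECTANGLE', ...
  fontSize?: number;   // px
  fontWeight?: number; // 400 regular, 600 semibold, 700 bold
}

// Pick the WCAG 2.1 AA threshold for a node based on its element type.
function aaThreshold(node: NodeInfo): number {
  if (node.type !== "TEXT") return 3.0; // non-text fills and strokes (SC 1.4.11)
  const size = node.fontSize ?? 0;
  const weight = node.fontWeight ?? 400;
  const isLarge = size >= 24 || (size >= 18 && weight >= 600);
  return isLarge ? 3.0 : 4.5; // SC 1.4.3 with the large-text carve-out
}
```

The same fill can therefore fail as 16 px body text (held to 4.5:1) yet pass as a border or icon (held to 3:1) on an identical background.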

Contrast Audit vs Stark, Contrast, and other Figma plugins

The Figma Community has several contrast tools. Most check one selection at a time, which is fine for a quick spot-check on a button but slow for a screen-level review. Designers on r/DesignSystems and the Figma forum complain about this exact pattern - and about Stark's pricing model, which gates bulk audits behind the team plan. The table below compares Atomize against the most-used alternatives based on their public listings; it is not a feature-by-feature teardown, since each plugin has its own niche.

Atomize Contrast Audit vs other Figma contrast plugins

Plugin | Whole-file scan | Image-bg sampling | Token-level grouping | Variable-aware | Free for bulk audits
Atomize - Contrast Audit | Yes | Yes (PNG center sample) | Yes (Issues / Atoms / Primitives) | Yes | Yes
Stark | Yes | Partial | No | Partial | No (team plan)
Contrast (Will Hudgins) | No (per selection) | Partial | No | No | Yes
Able | No (per selection) | No | No | No | Yes
Polychrom | Partial | No | No | No | Yes

If the only thing you need is a single-element check during early iteration, the small free plugins are perfectly fine. Once the file has hundreds of components and a real Variables setup, the limitation becomes obvious - failing layers and failing tokens are not the same problem, and a tool that only sees the first cannot help you fix the second. For deeper accessibility work the WebAIM contrast guide and the APCA reference explain where WCAG 2's math gets stretched in modern UI.

Using Contrast Audit in a release workflow

Running the audit once is satisfying. Running it on a cadence is what changes a team's accessibility posture. The most useful pattern is: Selection scope while iterating, Page scope during screen-level review, File scope before publishing a library version. Skip becomes a triage tool for intentional cases - decorative chrome, watermarks over a hero image - and the export pushes the rest into whatever ticketing system the team uses.

Selection scope while iterating

While a designer is shaping a single component, Selection scope gives a feedback loop measured in seconds. The audit covers exactly the nodes that are selected, the report fits on screen without scrolling, and the canvas stays interactive. This is also the lightest scope on Pro accounts that share a workspace - it does not block the rest of the team from running their own scans.

File scope before publishing a library

Before tagging a new library version, run the audit at File scope, work through the Atoms tab top-down, and only fall back to Issues for the leftover one-offs. A typical first File scan on a long-running product produces hundreds of findings - that is the baseline, not a panic signal. Budget the cleanup across iterations and treat the report as a regression detector once the baseline is below your team's tolerance.

Skip is for triage, not for fixes

Skip removes a row from the visible list for the current report. It does not edit the file, change a Variable, or persist across scans - the next File scan will surface the same item again. Use Skip to acknowledge cases that fail the math but are intentionally raw (a brand watermark over an image, a placeholder gradient, a third-party logo) so the rest of the report stays focused on the items you actually plan to fix.

Limitations to know before you scan

  • Image and gradient backgrounds are sampled at a single center pixel, so a layer over a busy hero image is best treated as a hint, not a verdict.
  • Hidden layers and groups whose name starts with bg section are skipped intentionally - rename or unhide them if they should be checked.
  • Component-set thumbnails are skipped to avoid duplicate findings for every variant; check variants individually when you suspect a regression in one state.
  • WCAG 2 contrast does not factor in font weight, glyph anti-aliasing, or polarity - APCA covers those nuances and is worth a manual second pass on borderline rows.
  • A hard timeout fires at 90 seconds; very large files may return a partial report flagged with a banner, so progress is never lost.

Where Contrast Audit fits with the rest of Atomize

Contrast Audit is the accessibility layer in a workflow that also covers token coverage and dark-mode parity. Pair it with Find Untokenized Values to make sure the same colors are bound everywhere before you start fixing ratios - an unbound #1F2937 cannot be fixed once across the system. If your library uses Figma Variables for dark mode, audit both modes; a token that passes in light can fail in dark, and the report will show the failure under whichever mode is currently active. Treat the cycle as part of the broader design system best practices that keep the Variables structure healthy in the first place. The Figma Variables documentation is the most reliable reference for which property exposes which binding shape.

Final verdict - Contrast Audit

Contrast checking inside a design system is not a per-layer chore - it is a token-level test. Contrast Audit treats it that way: it walks the file, composites the real background, applies the right WCAG threshold per element type, and groups the result by token so the fix lands once and propagates everywhere. Run it before every library release and before any major component refactor, and the audit shifts from a manual checklist into a regression signal you can actually trust.

Frequently Asked Questions: Figma Contrast Audit

What contrast ratio does WCAG 2.1 require?

WCAG 2.1 requires 4.5:1 for body text and 3:1 for large text, icons, borders, and other non-text UI elements. Contrast Audit applies the right threshold per node automatically: a TEXT node is held to 4.5:1 unless its font size and weight qualify as large, while strokes and non-text fills are always evaluated at 3:1.

How is Contrast Audit different from Stark and other contrast plugins?

Stark and the popular Contrast plugin check one selection at a time. Atomize Contrast Audit walks the entire file in a single pass, samples real composited backgrounds through layered fills and image parents, and groups the result by Variable so you can fix the design token instead of the symptom layer. It is also free for bulk audits, which the Stark team plan is not.

Can Contrast Audit check text over images and gradients?

Yes. When a parent fill is an image or any gradient, Atomize exports the parent at 32-pixel width, decodes the PNG inline, and samples the pixel at the center of the child's bounding box. The row is marked with an image bg chip so reviewers know the ratio is a single-point estimate rather than a measurement across the entire image area.

What is the difference between the Issues, Atoms, and Primitives tabs?

The same contrast failure can be looked at three ways. Issues is the per-layer view, Atoms aggregates failures by semantic Variable, and Primitives shows the raw values underneath. Working top-down through Atoms is the fastest way to fix a large report, because changing one alias propagates to every component that binds to it.

Can I scan just a selection or a single page instead of the whole file?

Yes. The scope picker offers Selection, Page, and File. Selection is the right choice while iterating on one component or frame, Page covers a whole screen at a time, and File is the pre-release sweep. Selection typically completes in under two seconds even on heavy components.

Does Contrast Audit fix failing colors automatically?

No. The report names the failing token and the conflicting background, but binding a new value is still a manual step in Figma's variable picker - intentionally so, since the right fix is usually a token decision rather than a one-off override. Skip and Restore manage the report itself, not the file.

How does the scan stay responsive on very large files?

The walker yields control every twenty nodes to keep the canvas responsive and emits a progress message every two hundred nodes so the spinner reflects actual work. A hard timeout fires at 90 seconds and returns whatever has been collected so far, flagged as a partial report - so an audit on a 100,000-layer file never hangs the plugin.

Should I audit against WCAG 2 or APCA?

WCAG 2.1 is the legal and contractual baseline most teams ship against, and it is the standard Atomize Contrast Audit measures. APCA factors in font weight and polarity and is a useful manual second pass on borderline rows, but the candidate WCAG 3 standard that includes APCA has not shipped, so audits should still be planned against WCAG 2.