Atomize
Updated May 6, 2026 - 11 min read

Find Untokenized Values in Figma: Token Coverage Audit

Atomize's Find Untokenized Values scanner audits Figma for hardcoded fills, padding, radius, and typography not bound to Variables - in one scan.

Find Untokenized Values is the Atomize scanner that audits a Figma file - or a single page or selection - and lists every hardcoded color, spacing, radius, typography metric, stroke, opacity, and effect that is not bound to a Figma Variable. Token drift creeps in whenever someone types a hex into the inspector or nudges a number field by hand, and on long-running products it accumulates faster than design review can catch. A single scan covers thirteen property categories across the entire node tree, groups results by category, suggests a matching token when one exists in your library, and exports the report as JSON or XLSX. This guide explains what the feature checks, how the binding logic works, how to use it inside a normal design-system maintenance workflow, and how it compares to the other audit plugins in the Figma community.

What untokenized actually means

An untokenized value is a property typed directly into the Figma inspector instead of bound to a Variable. The frame fill is #0D99FF instead of color/primary/default. The padding is 16 instead of space/4. The corner radius is 8 instead of radius/md. From a pixel point of view nothing looks wrong, but the value is now disconnected from the system: it does not respond to mode switches, it does not flow through your token export pipeline, and it does not benefit from anything you change at the primitive and semantic token layers.

Figma exposes binding state through each node's boundVariables property. If a fill, stroke, padding side, or text metric has a VARIABLE_ALIAS entry pointing to a variable ID, it is tokenized. If it does not, the value lives only as a literal and the scanner flags it. That definition is mechanical, not subjective - which is why an automated scan is the only honest way to measure token coverage on a real file.
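To make that check concrete, here is a minimal TypeScript sketch of the decision. The local types are simplified illustrations, not the real Plugin API typings, though the field names follow the shape of boundVariables in figma.d.ts: paint properties like fills bind per paint and so carry an array of aliases, while scalar properties like paddingTop carry a single alias.

```typescript
// Illustrative shapes - simplified assumptions, not the real figma.d.ts types.
type VariableAlias = { type: "VARIABLE_ALIAS"; id: string };

interface NodeLike {
  boundVariables?: {
    fills?: VariableAlias[];     // fills bind per paint, so this is an array
    paddingTop?: VariableAlias;  // scalar properties bind a single alias
  };
}

// A property is tokenized when its boundVariables entry carries at least
// one VARIABLE_ALIAS; otherwise the literal value is all Figma knows about.
function isTokenized(node: NodeLike, prop: "fills" | "paddingTop"): boolean {
  const entry = node.boundVariables?.[prop];
  if (entry === undefined) return false;
  const aliases = Array.isArray(entry) ? entry : [entry];
  return aliases.some((a) => a.type === "VARIABLE_ALIAS" && a.id.length > 0);
}
```

The mechanical nature of the rule is visible here: there is no heuristic, only the presence or absence of an alias.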

Tokenized vs untokenized - a quick example

The same button frame can look identical in two files and still be in completely different states. The first one is fully bound; the second has the same values, typed in by hand. Atomize ignores the first and reports every property on the second.
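Sketched as data, the two states look like this - identical literal values, different binding state. The object shapes are illustrative stand-ins for what the Plugin API exposes, not real node objects, and the variable IDs are hypothetical:

```typescript
// Two snapshots of the same-looking button (illustrative shapes, not real API objects).
const tokenizedButton = {
  fills: [{ type: "SOLID", color: "#0D99FF" }],
  cornerRadius: 8,
  boundVariables: {
    fills: [{ type: "VARIABLE_ALIAS", id: "VariableID:color/primary/default" }],
    cornerRadius: { type: "VARIABLE_ALIAS", id: "VariableID:radius/md" },
  },
};

const untokenizedButton = {
  fills: [{ type: "SOLID", color: "#0D99FF" }], // same pixels on the canvas...
  cornerRadius: 8,
  boundVariables: {}, // ...but nothing is bound, so both properties get reported
};
```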

Why untokenized values appear in mature design systems

Even teams with a real Variables setup end up with hundreds of untokenized properties. The reasons are predictable: a designer drops in a competitor screenshot to iterate on, a hotfix asks for a one-off shade that nobody upstreams, a new component lands before its tokens do, a junior contributor copy-pastes a frame from an unpublished library. Each individual case is harmless. The aggregate is the gap people complain about on r/DesignSystems, where one recent thread put it directly: tokens and variables become "inconsistent or missing" once a product runs long enough to drift away from the original spec.

This is not a discipline problem you can lecture into compliance. Variables in Figma are an opt-in mechanism applied per-property, per-node, and the only way to know whether the opt-in actually happened across thousands of layers is to audit the file. Manual review never finishes; a scanner does it in seconds and never gets bored.

What the Atomize scanner checks

The scanner inspects thirteen property categories grouped by visual concern. Color, geometry, spacing, typography, and effects are all covered, including the per-side variants Figma exposes for stroke weight, padding, and corner radius. Bound paint styles still count as tokenized when they reference variables internally, and gradient fills are reported with their gradient type rather than a hex so the report stays unambiguous.

Coverage of the Find Untokenized Values scanner

Category | Figma properties checked | Variable type matched
Fill (color) | Solid paint, gradient paint | COLOR
Stroke (color) | Stroke paint | COLOR
Stroke (weight) | strokeWeight, strokeTopWeight, strokeRightWeight, strokeBottomWeight, strokeLeftWeight | FLOAT
Corner radius | cornerRadius and per-corner radii | FLOAT
Padding | paddingTop, paddingRight, paddingBottom, paddingLeft | FLOAT
Gap | itemSpacing | FLOAT
Margin | counterAxisSpacing | FLOAT
Opacity | opacity | FLOAT
Effects | Drop shadow, inner shadow, layer blur, background blur | COLOR / FLOAT
Font family | fontFamily, fontName | STRING
Font size | fontSize | FLOAT
Font weight | fontStyle, fontWeight | STRING / FLOAT
Line height | lineHeight | FLOAT

If you have built only a primitive layer so far - which is fine for early-stage systems - the scanner still finds matches when raw values line up with primitive variables. For a more complete result, pair this audit with a healthy primitive and semantic token structure so the suggestions point at the names you actually want components to bind to.

How the scan works under the hood

The scan is a tree walk over the Figma node graph, not a snapshot of the inspector. It starts from your scope - selection, page, or file - and descends into every child up to a depth cap of 100. At each node, it reads the relevant properties and the matching boundVariables keys, decides whether the property is tokenized, and emits a finding if it is not. The walk yields control every twenty nodes so that even a 50,000-layer file keeps the canvas responsive while the report assembles.

Walking the node tree

The walker is iterative and stack-based, which matters for large files where a recursive descent would blow the call stack. A small context object travels with each node so that downstream rules can know things like "we are inside an icon component" or "this branch is a banner" - useful for skipping irrelevant spacing checks on internal vector groups.
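The shape of that walker can be sketched in a few lines of TypeScript. Everything here is an illustrative assumption rather than Atomize's actual source - the node type, the MAX_DEPTH constant, and the icon/banner name rule are stand-ins for the behavior the article describes, and the yield-every-twenty-nodes pacing is omitted for brevity:

```typescript
interface SceneNodeLike {
  name: string;
  type: string;
  children?: SceneNodeLike[];
}

interface WalkContext {
  depth: number;
  insideIcon: boolean; // set once we descend into a group named "icon" or "banner"
}

const MAX_DEPTH = 100; // the depth cap described in the article

// Iterative, stack-based walk: no recursion, so a 50,000-layer file cannot
// blow the call stack. Each stack entry carries its own context object.
function walk(
  roots: SceneNodeLike[],
  visit: (node: SceneNodeLike, ctx: WalkContext) => void,
): number {
  const stack = roots.map((node) => ({ node, ctx: { depth: 0, insideIcon: false } }));
  let visited = 0;
  while (stack.length > 0) {
    const { node, ctx } = stack.pop()!;
    if (ctx.depth > MAX_DEPTH) continue; // stop descending past the cap
    visit(node, ctx);
    visited++;
    const childCtx: WalkContext = {
      depth: ctx.depth + 1,
      insideIcon: ctx.insideIcon || /^(icon|banner)$/i.test(node.name),
    };
    for (const child of node.children ?? []) stack.push({ node: child, ctx: childCtx });
  }
  return visited;
}
```

The context object is what lets downstream rules skip spacing checks inside icon vectors without re-inspecting ancestors.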

Verifying the binding via boundVariables

Each property has a known key in boundVariables. Fills use an array, padding uses one entry per side, line height has its own slot. The scanner reads the alias ID where the property points, and only when no alias is present does it record an untokenized item. Bound style references are treated as tokenized too, since the style itself can route through Variables.
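A hedged sketch of how a per-property key table can drive the check and emit findings. The property list, the Finding shape, and the category labels are illustrative assumptions, not Atomize's internals:

```typescript
type Alias = { type: "VARIABLE_ALIAS"; id: string };

interface AuditNode {
  name: string;
  paddingTop?: number;
  cornerRadius?: number;
  boundVariables?: Partial<Record<"paddingTop" | "cornerRadius", Alias>>;
}

interface Finding { node: string; category: string; raw: number }

// Each scalar property reads its own slot in boundVariables.
const NUMERIC_PROPS: Array<{ key: "paddingTop" | "cornerRadius"; category: string }> = [
  { key: "paddingTop", category: "Padding" },
  { key: "cornerRadius", category: "Corner radius" },
];

// A finding is emitted only when the value exists on the node AND no alias
// is present in the matching boundVariables slot.
function auditNode(node: AuditNode): Finding[] {
  const findings: Finding[] = [];
  for (const { key, category } of NUMERIC_PROPS) {
    const value = node[key];
    if (typeof value !== "number") continue; // property not set on this node type
    if (node.boundVariables?.[key]?.type === "VARIABLE_ALIAS") continue; // tokenized
    findings.push({ node: node.name, category, raw: value });
  }
  return findings;
}
```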

Suggested tokens

Where possible, the report does not just say "this is hardcoded" - it names the token you probably wanted. Atomize loads every local variable collection, follows alias chains until it hits a literal, and builds two lookup maps: one from hex value to color variable, one from numeric value to numeric variable. When a hardcoded #030712 appears on a fill, the report suggests text/primary if your library defines it; when an 8 appears as a corner radius, it suggests radius/md. The suggestion is a hint, not an automatic action - you stay in control of what gets bound.
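The lookup-map idea can be sketched as follows. The variable shape here is a deliberate simplification (the real Plugin API resolves aliases differently), and the cycle guard is a defensive assumption; note that in this sketch later entries overwrite earlier ones, which favors semantic names when they are indexed after primitives:

```typescript
// Illustrative variable shape: a value is either a literal or an alias to
// another variable. Not the real Plugin API - an assumption for the sketch.
type VarValue = { kind: "literal"; value: string | number } | { kind: "alias"; target: string };

interface Var { id: string; name: string; value: VarValue }

// Follow alias chains until a literal is reached, then index literals so a
// hardcoded value can be mapped back to a token name.
function buildSuggestionMaps(vars: Var[]): { byColor: Map<string, string>; byNumber: Map<number, string> } {
  const byId = new Map(vars.map((v) => [v.id, v] as const));
  const resolve = (v: Var, seen = new Set<string>()): string | number | undefined => {
    if (v.value.kind === "literal") return v.value.value;
    if (seen.has(v.id)) return undefined; // defensive: break alias cycles
    seen.add(v.id);
    const next = byId.get(v.value.target);
    return next ? resolve(next, seen) : undefined;
  };
  const byColor = new Map<string, string>();
  const byNumber = new Map<number, string>();
  for (const v of vars) {
    const literal = resolve(v);
    if (typeof literal === "string") byColor.set(literal.toUpperCase(), v.name);
    else if (typeof literal === "number") byNumber.set(literal, v.name);
  }
  return { byColor, byNumber };
}
```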

Three scan scopes - Selection, Page, File

The scope picker decides what the walker traverses. There is no right answer; the trade-off is precision versus completeness. Most teams use Selection while iterating on a single component, Page during a screen-level review, and File before publishing a library version.

Comparing the three scan scopes

Scope | What it walks | Best for | Typical scan time
Selection | Only the nodes you have selected on the current page | Auditing one component before merging | Under 2 seconds
Page | All top-level nodes on the current page | Reviewing a screen, frame set, or working area | 1-10 seconds
File | All top-level nodes across all pages | Pre-release audits and design-system maintenance | 5-60 seconds depending on size

On long-running files the File scope can produce thousands of findings on the first run. Treat that initial scan as a baseline and budget the cleanup across iterations rather than blocking work on a single hero ticket.

Reading the report and acting on it

Findings are grouped by property category. Each row carries the node name, its position in the layer path, the page it lives on, the raw value (a hex for colors, a number with unit for everything else), and a suggested token when one was matched. Clicking the node name selects it on the canvas and switches Figma to the right page if needed, so you can fix and rescan without losing context.

[Image: the Find Untokenized Values results panel, with grouped findings, per-group counts, Skip controls, and an Export Report button]
The results panel groups findings by property category. Each row shows the node, its layer path, the page, and the raw value - with Skip per item, Skip all per group, and Export Report at the bottom.

Per-item actions

  • Click the node name to jump to it on the canvas
  • Skip an item to remove it from the visible list without deleting the finding
  • Restore a skipped item if you change your mind
  • Collapse or expand entire groups when one category dominates the report

Skip and Restore

Skip is local to the current report - it does not edit the file or the variables. Use it to acknowledge "this one is intentionally raw" cases, like a marketing image with a brand-specific gradient, without losing track of them. Skip all and Restore all act on a whole group, which is the fastest way to triage when a single category dominates the noise.

Export to JSON or XLSX

When you want to track the cleanup outside Figma, export the report. JSON is convenient for tooling and dashboards; XLSX is friendlier for design-ops review with a non-technical stakeholder. The exported shape mirrors what the panel shows, with one row per finding.
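As a rough sketch of what "one row per finding" means, the shape below approximates an exported row. The exact field names in Atomize's export may differ - this is an assumed mirror of the panel columns, not the documented schema:

```typescript
// Assumed shape of one exported finding row (mirrors the panel columns;
// Atomize's actual field names may differ).
interface ExportRow {
  page: string;
  node: string;
  layerPath: string;
  category: string;
  rawValue: string;              // "#0D99FF" for colors, "16px" for numerics
  suggestedToken: string | null; // null when no variable matched
}

const report: ExportRow[] = [
  {
    page: "Components",
    node: "Button/Primary",
    layerPath: "Components / Buttons / Button/Primary",
    category: "Fill",
    rawValue: "#0D99FF",
    suggestedToken: "color/primary/default",
  },
];

// The JSON export is just the row array, one object per finding.
const json = JSON.stringify(report, null, 2);
```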

Find Untokenized Values vs other Figma audit plugins

The Figma community has several plugins in this space. They differ in coverage, depth, and how they report results. The table below compares Atomize against the most-used alternatives based on their public listings; it is not a like-for-like feature comparison, since each plugin has its own strengths.

Atomize Find Untokenized Values vs other Figma audit plugins

Plugin | Color audit | Spacing / radius audit | Typography audit | Effects audit | Token suggestions | Export
Atomize - Find Untokenized Values | Yes | Yes | Yes | Yes | Yes | JSON, XLSX
TokenOps | Yes | Yes | Partial | Partial | No | JSON
Relinky | Yes | Yes | Partial | No | Partial | No
Figxed Design System Audit | Yes | Yes | No | No | No | No
Design Token Checker | No | Yes | No | No | No | No

If you live in Tokens Studio, the Tokens Studio Tree Inspector remains useful for the Tokens Studio docs workflow specifically. For native Figma Variables setups, Atomize covers more property categories in one pass and is the only one in the list that exports both JSON and XLSX. Pair it with a token export step from your Figma plugin workflow to close the loop into code.

Best practices for keeping coverage high

Running the scan once is satisfying. Running it on a cadence is what changes behavior. Treat token coverage like test coverage: not a number to chase to 100%, but a signal that flags regressions early. The teams who get the most out of this also tend to follow the broader design system best practices that keep their Variables structure healthy in the first place.

Run it on every release branch

Before publishing a new library version, run a File-scope scan and resolve the high-impact categories first - colors, then padding and gap, then typography. Effects and opacity tend to produce the most intentional one-offs and are good candidates for a Skip pass rather than a fix pass.

Scope by component, not by page

When designers are mid-iteration, a Page scan can be too noisy because work-in-progress frames pollute the report. Selection scope on the active component gives you a faster feedback loop and respects the fact that exploration always involves some hardcoded values that will be normalized later.

Limitations to know before you scan

  • Gradients are reported as gradient-{type} rather than a hex, so the suggested-token match falls back to manual review for gradient fills.
  • Effects are checked at the style-binding level, not per shadow property, so a partially bound shadow may still be flagged as untokenized.
  • Spacing inside groups named icon or banner is intentionally skipped to avoid false positives on internal vector layouts.
  • The scanner walks at most 100 levels deep - extreme nesting beyond that is rare in practice but worth knowing.
  • Auto-binding from the report is not exposed in the current UI; the report points at suggestions, you do the bind in Figma's variable picker.

These constraints reflect the underlying Figma APIs more than product decisions. The Figma Plugin API exposes some bindings as styles and others as direct variables, and a scanner has to be cautious about which it can confidently classify. The official Figma Variables documentation is the most reliable reference if you want to know exactly which property exposes which binding shape.

Where this fits in a token-driven workflow

Find Untokenized Values is one half of the maintenance loop. The other half is the export side - moving Variables out to code as DTCG JSON, CSS custom properties, or TypeScript constants - which is why teams who run the scanner consistently usually pair it with a design-to-code parity workflow. The W3C Design Tokens Community Group has been standardizing the file format for that exchange in the DTCG specification, and reference systems like Material Design Tokens and build tools like Style Dictionary show the full pipeline end to end. The audit ensures the design side actually conforms to the contract before it reaches code.

Final verdict - Find Untokenized Values

Token coverage is not measurable by inspection - drift hides in properties that look correct on the canvas. Find Untokenized Values takes the guesswork out of design-system maintenance: it walks the node tree, verifies binding through boundVariables, names the token you probably wanted, and exports the result so the cleanup can move into a real ticketing flow. Run it before every library release and treat the report as a regression signal, not a moral judgment on the team.

Frequently Asked Questions: Find Untokenized Values in Figma: Token Coverage Audit

How do I find untokenized values in a Figma file?

Run the Find Untokenized Values scanner in Atomize, choose Selection, Page, or File as the scope, and review the grouped report. Each finding shows the node, its path, the raw value, and a suggested token where one matches an existing variable in your library.

What does untokenized mean in Figma?

It means the property holds a literal value typed into the inspector instead of a reference to a Figma Variable. The pixel result looks identical, but the value will not respond to mode switches, will not flow through token exports, and is invisible to design-system tooling.

Can I scan only part of a file?

Yes. The scanner offers three scopes - Selection, Page, and File. Selection is the fastest and is the right choice while iterating on a single component or frame.

Can Atomize bind the suggested tokens automatically?

Not in the current version. The report names the suggested token next to each finding, but binding the value is still a manual step in Figma's variable picker. Skip and Restore manage the report itself, not the file.

What happens when no matching token exists in my library?

The finding still appears in the report, just without a suggested token. That gap is the signal to add a primitive or semantic token to your library before binding the property - it is exactly the kind of decision the audit is meant to surface.

Why is spacing inside icon and banner groups skipped?

Vector layouts inside groups named icon or banner use spacing in ways that should not be tokenized at the system level - they belong to the asset, not the page. Skipping them prevents false positives that would otherwise dominate every report.

Can I export the report?

Yes. Exports are available as JSON for tooling and dashboards, or as XLSX for design-ops review. The file is named after the project so multiple audits can sit side by side without manual renaming.

How often should I run the scan?

Use Selection scope continuously while iterating, Page scope during screen-level reviews, and a full File scan before publishing a new library version. Treating the audit as a release-time gate is what keeps coverage from drifting downward over time.