The two definitions
AI exposure is the share of tasks within an occupation that current AI can technically perform. It is measured by the ILO 2025 refined index across approximately 30,000 ISCO-08 occupational tasks. Exposure is a measure of feasibility; it does not say what AI is doing in any particular employer's workflow.
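The task-share definition can be written as a small calculation. The sketch below is illustrative only: the task flags and the band thresholds are invented for the example, and the ILO 2025 refined index scores its roughly 30,000 ISCO-08 tasks on a graded scale rather than the binary one used here.

```python
# Illustrative sketch of exposure as a task share. The flags and the
# band thresholds are assumptions for this example, not ILO values.

def exposure_share(task_flags):
    """Fraction of an occupation's tasks that current AI can perform."""
    return sum(task_flags) / len(task_flags)

def gradient_band(share):
    """Map a task share to a four-band gradient (thresholds assumed)."""
    if share >= 0.75:
        return "very high"
    if share >= 0.50:
        return "high"
    if share >= 0.25:
        return "moderate"
    return "low"

# Hypothetical occupation: 8 of 10 discrete tasks technically feasible.
flags = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
share = exposure_share(flags)
print(share)                 # 0.8
print(gradient_band(share))  # very high
```

Note that nothing in this calculation says whether any employer actually delegates those tasks; it measures feasibility only.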
AI displacement is the observed labour-market outcome: workers laid off, roles eliminated, headcount reduced. It is measured indirectly through workforce trackers (Challenger, Layoffs.fyi), unemployment data (BLS), and industry-specific reports. Displacement is what actually happens; it is downstream of exposure but does not follow it linearly.
Why the distinction matters
Exposure can be high while displacement is low. Two contrasting findings from 2024-2025 illustrate the gap. ILO 2025 places customer service representatives in the very high exposure gradient; BLS 2024-2034 projects the role to decline by 5%. ILO 2025 places marketing managers in the moderate exposure gradient; BLS projects marketing manager employment to grow by 8%. Brookings 2025 (No AI Jobs Apocalypse, For Now) finds that aggregate labour-market data does not show generative AI as a discrete driver of mass displacement through mid-2025.
Calculators that output a single risk percentile typically conflate exposure with displacement. The user reads "72% risk" and infers "72% chance my job disappears". That is not what the underlying data measures. Frey-Osborne 2013 made this conflation explicit (computerisation probability), and its predictions did not hold against subsequent labour-market data.
What this calculator does about it
The calculator outputs exposure (ILO 2025 gradient) and growth (BLS 2024-2034 outlook) as separate panels. Both are shown together for any matched occupation. A reader can see the exposure gradient and the BLS outlook side by side and judge for themselves whether the high-exposure-and-growing pattern (a knowledge-work occupation in transition) or the high-exposure-and-declining pattern (a clerical occupation in displacement) better fits their case.
The site does not output a single "displacement risk" percentile. Doing so would require fabricating precision the source data does not support. The four-band gradient plus the BLS outlook is the calibrated answer.
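The two-panel design can be sketched as a function that returns the two signals separately instead of collapsing them into one percentile. The occupation records and field names below are hypothetical stand-ins; the real values come from the ILO 2025 gradient and the BLS 2024-2034 projections.

```python
# Sketch of the two-panel output: exposure and growth stay separate.
# The records and field names are hypothetical stand-ins for the
# ILO 2025 gradient and the BLS 2024-2034 outlook data.

OCCUPATIONS = {
    "customer service representative": {
        "exposure_gradient": "very high",   # ILO 2025
        "bls_outlook_pct": -5,              # BLS 2024-2034
    },
    "marketing manager": {
        "exposure_gradient": "moderate",
        "bls_outlook_pct": 8,
    },
}

def panels(occupation):
    """Return exposure and growth as separate panels, never a single
    combined 'displacement risk' percentile."""
    rec = OCCUPATIONS[occupation]
    return {
        "exposure": rec["exposure_gradient"],
        "growth": f"{rec['bls_outlook_pct']:+d}% (2024-2034)",
    }

print(panels("customer service representative"))
# {'exposure': 'very high', 'growth': '-5% (2024-2034)'}
```

Keeping the two fields separate is the design choice: any function that reduced them to one number would have to invent a weighting the source data does not supply.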
What "exposure" tells you
Exposure tells you whether the tasks in your role are in scope for current generative AI. A high-exposure gradient means more of the discrete work is technically feasible to delegate. It does not tell you whether your specific employer is delegating the work, whether the augmentation effect on the role is positive or negative, or whether the role's overall employment trajectory is up or down.
What "displacement" tells you
Displacement, when observable, tells you whether AI is actually driving workforce reduction in the role. The honest 2024-2025 reading is that displacement attributable specifically to generative AI is real but limited; aggregate reductions in white-collar hiring have multiple drivers (post-pandemic normalisation, interest rates, sector-specific cycles) and AI is one factor rather than the dominant cause.
The full glossary entries are at /glossary/#ai-exposure and /glossary/#ai-displacement. For pre-empted critiques of the distinction, see /how-to-argue-with-this/.