Two-axis assessment
The Brookings 2024 task-level rubric assesses each task on two axes. The first axis is technical feasibility: can current generative AI perform the task to a useful quality bar? The second axis is contextual feasibility: can an organisation actually delegate the task to AI given workplace, regulatory, accountability, and stakeholder constraints?
The three categories
Displaceable. Technical feasibility AND contextual feasibility are present. The task is in scope for current generative AI and organisations are visibly delegating it. Examples: standard customer-service responses, routine bookkeeping ledger work, standard-form contract preparation, basic copywriting.
Changing. Technical feasibility is present but contextual feasibility is constrained, OR the task is being augmented rather than replaced. The human stays in the loop; the work shifts. Examples: clinical interpretation augmented by AI imaging diagnostics, legal research where final attorney sign-off remains, software development where AI accelerates implementation but humans own architecture.
Growing. The task is in the augmentation-prone category per Brookings 2024 (high social-emotional, high judgement, high accountability), OR the task is BLS-flagged growing for the occupation. Examples: stakeholder coordination, client briefing, patient counselling, classroom management, in-person service delivery, team leadership.
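The two axes and three categories above can be expressed as a small decision function. This is an illustrative sketch only: the field names, the boolean simplification of each axis, and the precedence of the Growing check over the other two are assumptions for clarity, not part of the Brookings 2024 rubric itself.

```python
from dataclasses import dataclass

@dataclass
class TaskAssessment:
    # Axis 1: can current generative AI perform the task to a useful quality bar?
    technically_feasible: bool
    # Axis 2: can an organisation actually delegate the task, given workplace,
    # regulatory, accountability, and stakeholder constraints?
    contextually_feasible: bool
    # Augmentation-prone per Brookings 2024: high social-emotional,
    # high judgement, high accountability.
    augmentation_prone: bool
    # BLS-flagged growing for the occupation.
    bls_flagged_growing: bool

def tag(task: TaskAssessment) -> str:
    # Assumed precedence: Growing is checked first, since its criteria are
    # defined independently of the two feasibility axes.
    if task.augmentation_prone or task.bls_flagged_growing:
        return "growing"
    # Displaceable: both axes present.
    if task.technically_feasible and task.contextually_feasible:
        return "displaceable"
    # Changing: technical feasibility present but contextual feasibility
    # constrained (the augmented-rather-than-replaced case would also land here).
    if task.technically_feasible:
        return "changing"
    # Tasks with no technical feasibility fall outside the three categories;
    # this sketch defaults them to "changing".
    return "changing"
```

The boolean flattening hides real nuance (each axis is closer to a graded judgement than a yes/no), which is why the per-occupation pages document the rationale for each tag rather than relying on a mechanical rule.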
Where the calculator's tags come from
For each priority occupation, the top five O*NET 30.2 tasks (by O*NET Importance score) are pulled from the public O*NET task statements. The Brookings 2024 rubric is then applied to each task statement and a tag is assigned. The rationale for each tag is documented inline on the per-occupation page so a reader can challenge the classification on the merits.
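The selection step can be sketched as a simple rank-and-slice over the task statements. The dict shape (`"statement"`, `"importance"`) is an assumed simplification, not the actual O*NET file format.

```python
def top_tasks(task_rows, n=5):
    """Return the n task statements with the highest O*NET Importance score.

    task_rows: list of dicts, each with a "statement" string and a numeric
    "importance" score (assumed shape for this sketch).
    """
    ranked = sorted(task_rows, key=lambda r: r["importance"], reverse=True)
    return [r["statement"] for r in ranked[:n]]
```

Ties in Importance scores would need a documented tie-break rule in a real pipeline; the sketch above leaves Python's stable sort to decide.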
Where this approach is fragile
O*NET task statements are generic; the actual task at a specific employer may be more or less exposed than the generic statement suggests. The Brookings rubric is itself derived in part from OpenAI's task-completion data (Brookings 2024 flags this), which is the methodology's most significant single limitation. The site inherits that limitation rather than substituting a less well-sourced alternative.
The cluster cross-link
Task-level analysis at the workflow layer is the subject of the cluster's swimlanes site, agenticswimlanes.com. That site looks at the workflow-modelling question (how AI agents fit into multi-step processes); this page covers the same territory at the per-occupation tag layer.
For the full methodology, see /methodology/. For pre-empted methodology critiques, see /how-to-argue-with-this/.