Explore high-impact paths in technical safety, governance, and strategy.
AI safety workforce (2025)
1,100+ FTEs
Growth from ~400 FTEs in 2022 to >1,100 in 2025.
Technical vs. non-technical split
~600 technical / ~500 non-technical FTEs
Technical safety remains under-scaled relative to capability teams.
Senior private-lab compensation
$500k-$1M+
Top roles can exceed this range when equity is included.
US government technical ceiling
$197,200
Public-sector compensation struggles to compete with private-lab offers for expert talent.
Anthropic, Google DeepMind, and OpenAI are scaling empirical safety, evals, and alignment engineering.
US and UK AI Safety Institutes are formalizing testing standards, model audits, and regulatory capacity.
Think tanks and non-profits shape strategy, policy analysis, field-building, and talent pipelines.
The field has shifted from philosophical speculation to empirical engineering: interpretability, robustness, and evals now govern deployment decisions.
| Domain | | | | | |
|---|---|---|---|---|---|
| Technical AI Safety | 95 | 88 | 70 | 94 | 60 |
| Specialized Intersections (Bio, Cyber, Law) | 90 | 82 | 76 | 84 | 86 |
| AI Governance & Policy | 88 | 68 | 62 | 72 | 96 |
| Strategy, Operations & Field-Building | 78 | 52 | 55 | 66 | 74 |

Relative scores (0-100) built from evidence-weighted rubric scoring across five criteria.
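The rubric scoring behind the domain table can be sketched as a weighted average of per-criterion scores. The criterion names and weights below are illustrative assumptions, not the page's actual rubric:

```python
# Hedged sketch of evidence-weighted rubric scoring for career domains.
# CRITERIA_WEIGHTS is a hypothetical rubric; the page's real criteria and
# weights are not published in this extract.

CRITERIA_WEIGHTS = {
    "impact": 0.30,
    "talent_gap": 0.20,
    "accessibility": 0.15,
    "funding": 0.20,
    "policy_leverage": 0.15,
}

def composite(scores: dict[str, float]) -> float:
    """Weighted average of 0-100 criterion scores using the rubric weights."""
    return sum(CRITERIA_WEIGHTS[name] * value for name, value in scores.items())

# Technical AI Safety row from the table, mapped onto the assumed criteria.
tech_safety = {
    "impact": 95,
    "talent_gap": 88,
    "accessibility": 70,
    "funding": 94,
    "policy_leverage": 60,
}
print(round(composite(tech_safety), 1))  # → 84.4
```

Any composite of this form is only as defensible as its weights, which is why the original page exposed its assumptions per criterion.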
Compensation ranges (min, median, max) by sector and role:
| Role | Range | Median |
|---|---|---|
| Industry labs - Entry Research Engineer | $200,000 - $300,000 | $250,000 |
| Industry labs - Senior Research Scientist | $500,000 - $1,000,000 | $750,000 |
| Think Tank - Junior Researcher | $70,000 - $100,000 | $85,000 |
| Think Tank - Senior Fellow / Manager | $120,000 - $180,000 | $150,000 |
| AI Security - Specialist Roles | $125,000 - $320,000 | $180,000 |
San Francisco Bay Area
Highest density of frontier labs and technical safety talent.
London
Strong technical + governance concentration (DeepMind, GovAI, UK AISI).
Washington, D.C.
US policy and regulatory center for federal AI governance careers.
Beijing
Growing safety/governance ecosystem around BAAI and related institutions.
MATS
Mentorship-based technical research
12-week program; competitive admission (~4-7%) and close mentor matching.
ARENA
Technical upskilling
4-5 week intensive with practical RL/Transformer/evals track and open curriculum.
Horizon Fellowship
US federal policy placement
AI and biosecurity pathways into executive branch, Congress, and policy institutions.
TechCongress
Legislative advising
Places technologists directly in US Congressional offices.
GovAI Fellowships
Research / policy / operations tracks
Oxford/London ecosystem with structured fellow pathways.
AAAS Fellowships
Science policy entry
Large placement engine for PhD-level talent in US government.
1. Read foundation guides: start with 80,000 Hours career materials and CAIS-style intro pathways.
2. Build network surface area: join AI Alignment / EA communities and engage in public technical discussion.
3. Pick one high-signal upskilling path: BlueDot for governance, or ARENA/MATS-style tracks for technical depth.
4. Ship a proof-of-work artifact: publish evals, replications, or applied governance memos tied to real model risk.
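A "proof-of-work" eval artifact can start very small: a labeled task set, a model call, and a scoring loop. The sketch below is a minimal, assumption-laden example; `query_model` is a hypothetical stub standing in for whatever model API you actually test against:

```python
# Minimal eval-harness sketch: exact-match accuracy over a labeled task set.
# `query_model` is a hypothetical stand-in for a real model API call.

from dataclasses import dataclass

@dataclass
class EvalItem:
    prompt: str
    expected: str

def query_model(prompt: str) -> str:
    # Hypothetical stub: replace with a real model call in practice.
    canned = {"2 + 2 =": "4", "Capital of France?": "Paris"}
    return canned.get(prompt, "")

def run_eval(items: list[EvalItem]) -> float:
    """Return the fraction of items where the model's answer exactly matches."""
    correct = sum(query_model(item.prompt) == item.expected for item in items)
    return correct / len(items)

items = [
    EvalItem("2 + 2 =", "4"),
    EvalItem("Capital of France?", "Paris"),
]
print(run_eval(items))  # → 1.0
```

Real eval work layers on top of this skeleton: larger task sets, graded (not just exact-match) scoring, and versioned prompts so results are reproducible.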
Shift from theory to empiricism
Alignment work now depends on measurable experiments and eval infrastructure.
Intersections are expanding
Biosecurity, cybersecurity, and legal engineering now require hybrid career profiles.
Operations is leverage
Research outcomes depend on managerial throughput, funding operations, and community infrastructure.