Your way in

Fellowships

Fellowships are one of the most common entry points into AI safety research. These programs provide mentorship, funding, community, and a structured path from "interested" to "contributing."

How to think about fellowships

Most programs are looking for a combination of technical ability, genuine interest in AI safety, and the capacity to do independent research. You don't need to be a published researcher; many fellows come from adjacent fields or are early in their careers. The key is showing you've engaged seriously with the ideas and can articulate what you want to work on.

🔬 Research Fellowships

Berkeley, CA
MATS (ML Alignment & Theory Scholars)
The flagship alignment research fellowship. Scholars are matched with a mentor from a leading safety lab or research org and spend several months working on a focused alignment project. Based at Constellation in Berkeley. Highly competitive but one of the strongest signals you can have on your CV.
Flagship program · Mentored research · Funded · ~3 months
Berkeley, CA
Astra Fellowship
A fellowship program run out of Constellation for people working on reducing existential risk from AI. Astra provides funding, community, and mentorship to help fellows develop their research agendas and build connections in the safety field.
Fellowship · Funded · X-risk
Cambridge, UK
ERA (Existential Risk Alliance) Fellowship
A Cambridge-based fellowship connecting researchers with the existential risk community. Fellows work on research projects related to x-risk, with access to the Cambridge safety ecosystem including Meridian and Mantle coworking spaces.
Research · X-risk · Cambridge community
London, UK
LASR Labs (London AI Safety Research)
A summer research program in London bringing together researchers to work on alignment problems. Provides structure, mentorship, and a cohort of peers, making it a good way to spend a summer building your alignment research portfolio.
Summer program · Alignment research · London
Prague, Czech Republic
PIBBSS (Principles of Intelligent Behavior in Biological and Social Systems)
A research fellowship that bridges fields studying intelligent behavior in biological and social systems with AI safety. Fellows work on interdisciplinary projects connecting cognitive science, biology, and related fields with alignment theory. Based in Prague with a unique interdisciplinary community.
Interdisciplinary · Funded · Prague
Cambridge, UK
MARS (Mentorship for Alignment Research Students)
A Cambridge-based research program focused on ML alignment. Provides a structured environment for researchers to work on alignment problems, with regular seminars and collaboration with the broader Cambridge safety community.
ML alignment · Research · Cambridge
๐Ÿ•๏ธ

Camps & Intensive Programs

Online
BlueDot Impact (AI Safety Fundamentals)
A free, structured course covering the landscape of AI safety, from technical alignment to governance. Includes weekly readings, discussion groups, and a final project. The most common starting point for people new to the field; completing it signals genuine engagement to fellowship reviewers.
Start here · Free · Online · ~8 weeks
London / Various
ARENA (Alignment Research Engineer Accelerator)
A hands-on technical curriculum that takes you from ML fundamentals to alignment research engineering. Covers transformers, RLHF, interpretability, and more through structured exercises. Ideal if you have some coding experience but want to build the specific technical skills needed for alignment work.
Technical skills · Curriculum-based · Engineering focus
Online
AI Safety Camp
An intensive research program that brings together aspiring alignment researchers from around the world to work on collaborative projects. Runs multiple times per year, with both in-person and remote components. One of the most accessible entry points, designed specifically for people making their first contributions to safety research.
Great first step · Collaborative research · Multiple cohorts/year
Online / Local Hubs
Apart Research Sprints
Short, focused research sprints (often 1-2 weeks) on specific alignment topics. Lower commitment than a full fellowship: a good way to test whether alignment research is for you and to produce a concrete output you can point to in future applications.
Short sprints · Low commitment · Concrete output

Didn't get in? Keep going.

These programs are competitive, and many strong candidates don't get accepted on their first try. The best thing you can do is keep building: take the AI Safety Fundamentals course, join an Apart Research sprint, write up your ideas on the Alignment Forum, and apply again next round. The field needs more people, and persistence is a signal reviewers notice.