Your first conference
Conferences are one of the fastest ways to break into AI safety. You don't need to be an expert; most events actively welcome newcomers. Come with questions, introduce yourself to speakers, and follow up with people afterwards. One good conversation can change the trajectory of your career.
Community conference
EA Global (EAG)
The flagship Effective Altruism conference, held multiple times a year in cities like San Francisco, London, and Washington DC. A significant portion of attendees and talks focus on AI safety. The best single event for meeting people across the entire AI safety ecosystem: researchers, grantmakers, policy people, and org leaders.
Highly recommended
Application required
Multi-city
Recurring
Research workshop
FAR.AI Events
FAR.AI (the Fund for Alignment Research) hosts workshops and research events focused on technical alignment. A good venue for presenting early-stage research, getting feedback from established researchers, and finding collaborators for alignment projects.
Technical
Alignment research
Workshops
Industry conference
AI Security Forum
Focused on the intersection of AI safety and security, covering topics like red-teaming, adversarial robustness, model security, and safe deployment practices. Brings together researchers from both the safety and security communities.
Security
Red-teaming
Deployment safety
Community conference
EAGx
Regional versions of EA Global, held in cities around the world, from Austin to Singapore to Berlin. Smaller and more intimate than the main EAG events, making them an easier first conference. Many feature dedicated AI safety tracks.
Good first event
Regional
Application required
Academic workshop
NeurIPS Safety Workshops
NeurIPS hosts several safety-relevant workshops each year, covering topics from alignment to sociotechnical safety to red-teaming. The main conference is the largest ML venue in the world, so attending even just the workshops is a strong networking opportunity.
Academic
ML research
Annual (December)
Academic workshop
ICML Safety Workshops
ICML similarly features workshops on safe and reliable ML, alignment, and AI governance. A top venue for presenting technical safety research to the broader ML community.
Academic
ML research
Annual (July)
Academic conference
AAAI Symposium on AI Safety
AAAI runs dedicated symposia on AI safety topics, including the Safe and Robust AI track. More accessible than NeurIPS/ICML for researchers coming from outside mainstream ML.
Academic
Interdisciplinary
Annual (February)
Academic conference
FAccT (ACM Conference on Fairness, Accountability, and Transparency)
Covers the sociotechnical dimensions of AI safety: fairness, accountability, transparency, and governance. A good venue for researchers working on the policy and social impacts side of safety.
Sociotechnical
Governance
Annual (June)
Unconference
AI Safety Unconference
Participant-driven unconferences where the agenda is set on the day. These are some of the highest-value events in the community: small groups, deep conversations, and no gatekeeping on who gets to speak. Often held as side events at larger conferences.
High value
Participant-driven
Intimate
Retreat
AI Safety Camp
An intensive research retreat that brings together aspiring alignment researchers for collaborative projects. More structured than an unconference: you'll work on a real research problem with mentorship from experienced researchers. One of the best entry points into the field.
Entry point
Research
Application required
Multiple per year
Can't attend in person?
Many of these events publish recorded talks online. NeurIPS and ICML workshops often post papers and videos. EA Global talks appear on the Centre for Effective Altruism YouTube channel. And the AI Alignment Forum regularly hosts online discussions that mirror the depth of in-person events.