By investing half a billion dollars, Effective Altruism has built a thriving ecosystem around the “AI Existential Risk” (human extinction from AI) ideology. (See part 1.)
The detailed information below (part 2) aims to familiarize you with the many players involved.
If you ever wondered what FLI/FHI/FRI, CSET/CSER, CLR/CLTR, CAIS/CAIP/CAIF (and other “creative” acronyms) stand for, this guide is for you.
Map of AI Existential Safety
The source of information is the Map of AI Existential Safety. This “world map” was created to help effective altruists and AI Safety students, researchers, and entrepreneurs become more acquainted with all potential funding and training sources.
The map covers the AI x-risk ecosystem’s Funding; Strategy and Governance Research; Conceptual Research; Applied Research; Training and Education; Research Support; Career Support; Resources; Media; and Blogs.
The criteria for inclusion in the map: “An entry must have reducing AI existential risk as a goal; have produced at least one output to that end; must additionally still be active and expected to produce more outputs.”
Each item on the map is listed here with links and descriptions:
Funding
The largest funder in the existential safety space. See the undergraduate scholarship and early career funding.
The main source of grants for individuals working on AI safety.
SFF – Survival and Flourishing Fund
The second largest funder in AI safety, using an algorithm and meeting structure called “The S-process” to allocate grants.
Fast funding for projects that help humanity flourish among the stars.
A community of donors who have pledged to donate a significant portion of their income. Hosts annual donor lotteries.
FLI – Future of Life Institute – Fellowships
PhD and postdoc funding for work improving the future.
Designs and executes bespoke giving strategies for major donors.
A regranting and impact certificates platform for AI safety and other EA activities.
CLR – Center on Long-Term Risk – Fund
Financial support for projects focused on reducing (current and future) s-risks.
AI Safety GiveWiki (formerly Impact Markets)
Crowdsourced charity evaluator focused on early-stage projects.
EAIF – Effective Altruism Infrastructure Fund
Support for EA infrastructure, including AI safety infrastructure.
Philanthropic initiative supporting researchers working on key opportunities and hard problems that are critical to get right for society to benefit from AI. Invitation only.
SHfHS – Saving Humanity from Homo Sapiens
Small organization with a long history of funding x-risk reduction.
Prize platform for x-risk reduction and other EA contests.
Funder network for x-risk reduction.
GAIA – Grantmakers for AI Alignment
Joinable donor circle for people earning to give or allocating funds towards reducing AI x-risk.
Research paper/essay writing competition platform.
Funding organization aiming to minimize the risk of AI systems, created by PreambleAI.
Grantmaking nonprofit focused on supporting research into how increasingly autonomous systems can cooperate peacefully with one another.
NSF – National Science Foundation – Safe Learning-Enabled Systems
Funds research into the design and implementation of safe learning-enabled systems in which safety is ensured with high levels of confidence.
CAIF – Cooperative AI Foundation
Charity foundation supporting research into cooperative intelligence of advanced AI.
Strategy and Governance Research
FHI – Future of Humanity Institute
Oxford-based longtermist/x-risk research organization led by Nick Bostrom.
FLI – Future of Life Institute
Outreach, policy, grantmaking, and event organization for x-risks, including AI safety.
GovAI – Centre for the Governance of AI
AI governance research group at Oxford, advising government, industry, and civil society.
CLR – Center for Long-Term Risk
Research, grants, and community-building around AI safety, focused on conflict scenarios as well as technical and philosophical aspects of cooperation.
CSET – Center for Security and Emerging Technology
Think tank at Georgetown University, USA, doing policy analysis at the intersection of national and international security and emerging technologies.
Channeling public concern into effective regulation by engaging with policymakers, media, and the public to shape a future where AI is developed responsibly and transparently.
CLTR – Center for Long-Term Resilience
Independent think tank with a mission to transform global resilience to extreme risks.
ICFG – International Center for Future Generations
European think-and-do tank for improving societal resilience in relation to exponential technologies and x-risks.
Play-money prediction markets on many topics, including AI safety.
TFI – Transformative Futures Institute
Research agenda includes raising awareness of the need for urgent action to counter the risks of advanced AI.
Forecasting platform for many topics, including AI.
FRI – Forecasting Research Institute
Advancing the science of forecasting for the public good.
QURI – Quantified Uncertainty Research Institute
Interviews with AI safety researchers.
Strategic research, e.g., expert surveys and AI forecasting via analogies to other technological developments.
Research group studying AI forecasting, produced graph showing compute used by every major ML model.
Research organization with an AI Governance and Strategy (AIGS) team as well as an Existential Security Team (XST).
X-risk research, advises AI organizations on responsible development.
Aligning AI through governance, doing policy research, advisory services, seminars/summits, and educational programs.
GCRI – Global Catastrophic Risk Institute
Small think tank that tries to bridge scholarship, government, and private industry.
CSER – Center for the Study of Existential Risk
Cambridge group doing miscellaneous existential safety research.
CFI – Centre for the Future of Intelligence
Interdisciplinary research centre within the University of Cambridge addressing the challenges and opportunities posed by AI.
GPI – Global Priorities Institute
Oxford research group focusing mainly on moral philosophy but also conceptual AI alignment.
Research group working on models of past and future progress in AI, as well as intelligence enhancement and sociology related to x-risks.
LPP – Legal Priorities Project
Conducting legal research that mitigates x-risk and promotes the flourishing of future generations.
US policy think tank dedicated to effective regulation to mitigate catastrophic risks posed by advanced AI.
Provides strategy consulting services to clients trying to advance AI safety through policy, politics, coalitions, and/or social movements.
Convening diverse, international stakeholders in order to pool collective wisdom to advance positive outcomes in AI.
IAPS – Institute for AI Policy and Strategy
Research organization focusing on AI regulations, compute governance, international governance & China, and lab governance.
Conceptual Research
MIRI – Machine Intelligence Research Institute
The original AI safety technical research organization, doing agent foundations/conceptual work, founded by Eliezer Yudkowsky.
ARC – Alignment Research Center
Research organization led by Paul Christiano, doing model evaluations and theoretical research focusing on the Eliciting Latent Knowledge (ELK) problem.
Accelerating alignment progress by extending human cognition with AI.
Formal alignment organization led by Tamsin Leake, focused on agent foundations.
ALTER – Association for Long-Term Existence and Resilience
Research organization doing work on infra-bayesianism (led by Vanessa Kosoy) and on policy for bio and AI risks (led by David Manheim).
Independent researcher working on selection theorems, abstraction, and agency.
CHAI – Center for Human-Compatible AI
Academic AI safety research organization led by Stuart Russell in Berkeley.
Obelisk – Brain-like-AGI Safety
Lab building towards aligned AGI that looks like human brains. Featuring Steve Byrnes.
Independent researchers trying to find reward functions that reliably instill certain values in agents.
ACS – Alignment of Complex Systems
Focused on conceptual work on agency and the intersection of complex systems and AI alignment, based at Charles University, Prague.
PAISRI – Phenomenological AI Safety Research Institute
Performs and encourages AI safety research using phenomenological methods.
Independent researcher.
Assistant professor at MIT working on agent alignment.
Assistant professor in Statistics at UC Berkeley working on alignment.
Author of two books on AI safety, Professor at the University of Louisville, background in cybersecurity.
Applied Research
Leading AI capabilities organization with a strong safety team, based in London.
San Francisco-based AI research lab led by Sam Altman, created ChatGPT.
AI research lab focusing on LLM alignment (particularly interpretability), featuring Chris Olah, Jack Clark, and Dario Amodei.
Research and field-building nonprofit doing technical research and ML safety advocacy.
Alignment startup whose work includes interpretability, epistemology, and developing a theory of LLMs.
Researching interpretability and aligning LLMs.
A hacker collective focused on open-source ML and alignment research. Best alignment memes channel on the internet.
Working on factored-cognition research assistants, e.g., Ought.
An Oxford-based startup working on safe off-distribution generalization, featuring Stuart Armstrong.
Ensuring AI systems are trustworthy and beneficial to society by incubating and accelerating research agendas too resource-intensive for academia but not yet ready for commercialization.
A video game company focused on enabling the safe introduction of AI technologies into gaming.
Evals for interpretability and behavior that aim to detect deception.
Research organization focused on singular learning theory and developmental interpretability.
An AI safety research lab in Vermont, USA.
Runs an alignment lab at the University of Cambridge.
Associate Professor at New York University working on aligning LLMs.
Researching AI competition dynamics and building research tools.
Research lab creating tools and projects to keep human values central to AI impact, aiming to avoid catastrophe while improving flourishing.
Training and Education
The standard introductory courses. 3 months long, 3 tracks: alignment, governance, and alignment 201.
MATS – ML Alignment & Theory Scholars program
2 weeks training, 8 weeks onsite (mentored) research, and, if selected, 4 months extended research.
3-month online research program with mentorship.
ERA – Existential Risk Alliance – Fellowship
In-person paid 8-week summer fellowship in Cambridge.
CHERI – Swiss Existential Risk Initiative – Summer Fellowship
Expanding and coordinating global catastrophic risk mitigation efforts in Switzerland.
Regular hackathons around the world for people getting into AI safety.
Fellowship for young researchers studying complex and intelligent behavior in natural and social systems.
Events and training programs in London.
Virtual syllabus for ML safety.
GCP – Global Challenges Project
Intensive 3-day workshops for students to explore x-risk reduction.
HAIST – Harvard AI Safety Team
Harvard student group.
MIT student group.
Oxford student group.
Supports projects in AIS movement building.
Local AI Safety group in Budapest.
AISHED – AI Safety Hub Edinburgh
Community of people interested in ensuring that AI benefits humanity’s long-term future.
CBAI – Cambridge Boston Alignment Initiative
Boston organization for helping students get into AI safety via workshops and bootcamps. Supports HAIST and MAIA.
CLR – Center on Long-Term Risk – Summer Research Fellowship
2–3-month summer research fellowship in London working on reducing long-term future suffering.
HA – Human-aligned AI – Summer School
Program for teaching research methodology.
CHAI – Center for Human-Compatible AI – Internship
Research internship at UC Berkeley.
ARENA – Alignment Research Engineer Accelerator
Technical upskilling program in London with a focus on LLM alignment.
SERI – Stanford Existential Risk Initiative – Fellowship
10-week funded, mentored summer research fellowship for undergrad and grad students (primarily at Stanford).
AISI – AI Safety Initiative at Georgia Tech
Georgia Tech community doing bootcamps, seminars, and research.
AI safety student group at Stanford. Accelerating students into careers in AI safety and building the alignment community at Stanford.
French student collective doing hackathons, conferences, etc.
MLAB – Machine Learning for Alignment Bootcamp
Bootcamp aimed at teaching ML relevant to doing alignment research. Run by Redwood Research.
WAISI – Wisconsin AI Safety Initiative
Wisconsin student group dedicated to reducing AI risk through alignment and governance.
Training For Good – EU Tech Policy Fellowship
An 8-month program for ambitious graduates intent on careers to improve EU policy on emerging technology.
AISG – AI Safety Initiative Groningen
Student group in Groningen, Netherlands.
Research Support
Maintains LessWrong and the Alignment Forum and a funding allocation system.
"Means-neutral" AI safety organization, doing miscellaneous stuff, including offering bounties on small-to-large AI safety projects and maintaining the Nonlinear Library podcast.
CEEALAR – Centre for Enabling EA Learning & Research (formerly EA Hotel)
Free or subsidized accommodation and catering in Blackpool, UK, for people working on AI safety and other EA cause areas.
AED – Alignment Ecosystem Development
Building infrastructure for the alignment ecosystem (volunteers welcome).
Runs alignment hackathons and provides AI safety updates and ideas.
Consultancy for forecasting, machine learning, and epidemiology, doing original research, evidence reviews, and large-scale data pipelines.
BERI – Berkeley Existential Risk Initiative
"Operations as a service" to alignment researchers, especially in academia.
SERI – Stanford Existential Risks Initiative
Runs research fellowships, an annual conference, speaker events, discussion groups, and a frosh-year COLLEGE class.
Operations support for EA projects.
GPAI – Global Partnership on AI
International initiative aiming to bridge the gap between theory and practice on AI by supporting research and applied activities on AI-related priorities.
ENAIS – European Network for AI Safety
Connecting researchers and policymakers for safe AI in Europe.
Berkeley research center growing and supporting the ecosystem of people working to ensure the safety of powerful AI systems.
LISA – London Initiative for Safe AI
Co-working space hosting independent researchers, organizations (including BlueDot Impact, Apollo, Leap Labs), and upskilling & acceleration programs (including MATS, ARENA).
Career Support
Helps people navigate the AI safety space with a welcoming human touch, offering personalized guidance and fostering collaborative study and project groups.
Free calls to advise on working on AI safety, Slack channel, resources list, and newsletter for AI safety opportunities.
Article with motivation and advice for pursuing a career in AI safety.
Helps mid-career and senior professionals transition into AI safety, performing market research and providing a range of services, including coaching, mentoring, and training.
AI Safety Google Group (formerly 80,000 Hours AI Safety Group)
Updates on academic posts, grad school, and training relating to AI safety.
Comprehensive list of job postings related to AI safety and governance.
Effective Thesis – Academic Opportunities
Lists thesis topic ideas in AI safety and coaches people working on them.
Effective Thesis – Early Career Research Opportunities
Lists academic career opportunities for early-stage researchers (jobs, bootcamps, internships).
Nonlinear – Coaching for AI Safety Entrepreneurs
Free coaching for people running an AI safety startup or considering starting one.
Nonlinear – Career Advice for AI Safety Entrepreneurs
Free career advice for people considering starting an AI safety org (technical, governance, meta, for-profit, or non-profit).
Resources
How to pursue a career in technical AI alignment
A guide for people who are considering direct work on technical AI alignment.
Interactive FAQ; Single-Point-Of-Access into AI safety.
List of all available training programs, conferences, hackathons, and events.
Repository of possible research projects and testable hypotheses.
Repository of AI safety communities.
Curated directory of AI safety groups (local and online).
Database of AI safety research agendas, people, organizations, and products.
Ranked & scored contributable compendium of alignment plans and their problems.
Interactive walkthrough of core AI x-risk arguments and transcripts of conversations with AI researchers.
AI Safety Info Distillation Fellowship
3-month paid fellowship to write content for Stampy's AI Safety Info.
Tracker for EA donations.
Website tracking donations to AI safety.
To understand recursion, one must first understand recursion.
Helps you learn and memorize the main organizations, projects, and programs currently operating in the AI safety space.
Media
Not explicitly on AI safety, but x-risk aware and introduces stuff well.
Interviews on pursuing a career in AI safety.
AISS – AI Safety Support – Newsletter
Lists opportunities in alignment.
ML & AI Safety Updates (Apart Research)
Weekly podcast, YouTube, and newsletter with updates on AI safety.
EA – Effective Altruism – Talks
Podcast discussing AI safety and other EA topics.
Podcast of text-to-speech readings of top EA and LessWrong content.
Meme-y interviews with AI safety researchers with a focus on short timelines.
Animations about EA, rationality, the future of humanity, and AI safety.
Comprehensive index of AI safety video content.
Monthly-ish newsletter by Dan Hendrycks focusing on applied AI safety and ML.
Weekly developments in AI (incl. governance), with bonus short stories.
ERO – Existential Risk Observatory
Focused on improving coverage of x-risks in mainstream media.
AXRP – AI X-risk Research Podcast
Podcast about technical AI safety.
FLI – Future of Life Institute – Podcast
Interviews with AI safety researchers and other x-risk topics.
Campaign group dedicated to calling for an immediate global moratorium on AGI development.
Website communicating the risks of god-like AI to the public and offering proposals.
Aiming to establish a global moratorium on AI until alignment is solved.
Campaign group dedicated to increasing public understanding of AI safety and calling for strong laws to stop the development of dangerous and powerful AI.
Developing the auditing infrastructure for general-purpose AI systems, particularly LLMs.
Blogs
Online forum dedicated to improving human reasoning; often has AI safety content.
EA – Effective Altruism – Forum
Forum on doing good as effectively as possible, including AI safety.
Central discussion hub for AI safety. Most AI safety research is published here.
Blog about transformative AI, futurism, research, ethics, philanthropy, etc., by Holden Karnofsky.
A blog about many things, including summaries and commentary on AI safety.
Wiki on AI alignment theory, mostly written by Eliezer Yudkowsky.
Generative.ink, the blog of janus the GPT cyborg.
Blog by a research scientist at DeepMind working on AGI safety.
Bounded Regret – Jacob Steinhardt's Blog
UC Berkeley statistics prof blog on ML safety.
Blog on aligning prosaic AI by one of the leading AI safety researchers.
Safety research from DeepMind (hybrid academic/commercial lab).
Epistemological Vigilance – Adam Shimi's Blog
Blog by Conjecture researcher working on improving epistemics for AI safety.
Blog about AI risk, alignment, etc., by an agent foundations researcher.
Blog on AI safety work by a math PhD doing freelance research.
Daniel Paleka – AI safety takes
Monthly+ newsletter on AI safety happenings.
Index of Vox articles, podcasts, etc., around finding the best ways to do good.
Final Note
The descriptions above are identical to those on the Map of AI Existential Safety, without any edits.
I previously called the Conjecture CEO’s work “AI Panic Marketing,” the Future of Life Institute’s work “Panic-as-a-Business,” and the Campaign for AI Safety “the Campaign for Mass Panic.”
But those characterizations do not appear on the map.
I let their words stand on their own (“Fast funding for projects that help humanity flourish among the stars”!!). I hope you enjoy it as much as I did.
Comments
Thanks for putting this out there! I think it's good for critics of the AI x-risk community to have a better sense of the ecosystem.
One small correction (which I'll try to forward to the people who made the map): my podcast AXRP is not just about technical AI safety; it also covers people's takes on the overall story of why AI poses a risk, and governance research about what ought to be done about it. The unifying topic is research, not just technical research. See my most recent episode about AI governance (https://axrp.net/episode/2023/11/26/episode-26-ai-governance-elizabeth-seger.html). That said, it's fair enough that most episodes have been about technical topics.
Good job with this... I believe Open Philanthropy had an explicit strategy to create tons of orgs. Part of this may have been due to a belief that small, nimble orgs are better, but I also suspect it was to conceal the centralized funding.
There was also an org I saw whose stated goal was to "spin out as many new AI safety orgs as possible" (on the order of 10 a year). I think it might have been Nonlinear, but I can't remember.
A lot of these orgs, in my view, are doing very non-neglected, low-impact work. I recently published some criticisms that I wrote in 2021 (https://moreisdifferent.medium.com/some-criticisms-i-had-of-ea-funded-ai-safety-efforts-mostly-written-in-early-2021-aa49c9b352e8); I believe many of them still hold true.