By investing half a billion dollars, Effective Altruism has built a thriving ecosystem around the “AI Existential Risk” (human extinction from AI) ideology (see Part 1). The detailed information below (Part 2) aims to familiarize you with the many players involved.
Thanks for putting this out there! I think it's good for critics of the AI x-risk community to have a better sense of the ecosystem.
One small correction (that I'll try to forward to the people who made the map): my podcast AXRP is not just about technical AI safety; it also covers people's takes on the overall story of why AI poses a risk, as well as governance research about what ought to be done about it. The unifying topic is research, not just technical research. See my most recent episode, about AI governance (https://axrp.net/episode/2023/11/26/episode-26-ai-governance-elizabeth-seger.html). That said, it's fair enough that most episodes have been about technical topics.
Good job with this... I believe Open Philanthropy had an explicit strategy to create tons of orgs. Part of this may have been due to a belief that small, nimble orgs are better, but I also suspect it was meant to obscure how centralized the funding is.
I also came across an org whose stated goal was to "spin out as many new AI safety orgs as possible" (on the order of 10 a year). It might have been Nonlinear, but I can't remember.
A lot of these orgs, in my view, are doing very non-neglected, low-impact work. I recently published some criticisms that I wrote in early 2021: https://moreisdifferent.medium.com/some-criticisms-i-had-of-ea-funded-ai-safety-efforts-mostly-written-in-early-2021-aa49c9b352e8. I believe many of them still hold true.