AI PANIC

Panic-as-a-Business is Expanding

An update to the “Ultimate Guide to ‘AI Existential Risk’ Ecosystem”

Nirit Weiss-Blatt
Apr 15, 2024

Source: https://aigov.world/. Made by Hamish Huggard and Damin Curtis. Funded by Nonlinear.
About this update (April 15, 2024)

1. Map of “AI Existential Safety”

Since I published the original guide, 27 new entries have been added to the “AI Existential Risk” map. Among the new entries is a small group that wants to “make sure that superintelligent AI also cares about animals.”

It is important to stay informed about the growing “x-risk” ecosystem and to learn about the new organizations and groups, so I have added all the new entries here and labeled them accordingly. There are now 223 links to explore.

2. Map of “AI Governance”

Recently, the team behind the “AI Existential Safety” map created a separate map dedicated to AI policy and regulation. It was announced on the Effective Altruism Forum.

You should take a good look at this “AI Governance Landscape” map, considering the Effective Altruists’ growing influence in the US (on Joe Biden’s AI order, see this rundown of how “The AI Doomers have infiltrated Washington” and how “AI doomsayers funded by billionaires ramp up lobbying”), in the UK (influencing Rishi Sunak’s AI agenda), and on the EU AI Act (which x-risk lobbyists celebrated).

The new map clarifies the many players involved, but unlike the “AI x-risk” world map, not ALL entries are from the “x-risk” realm.

The following screenshots show the tiles version of this map:


Tip: Don’t miss the endnotes.


Original Opening (December 5, 2023)

By investing half a billion dollars, Effective Altruism has built a thriving ecosystem around the “AI Existential Risk” (human extinction from AI) ideology. [1] (See part 1.)

The detailed information below (part 2) aims to familiarize you with the many players involved.

If you ever wondered what FLI/FHI/FRI, CSET/CSER, CLR/CLTR, CAIS/CAIP/CAIF (and other “creative” acronyms) stand for, this guide is for you.


Map of AI Existential Safety

The source of information is the Map of AI Existential Safety.

https://aisafety.world/

This “world map” was created to acquaint effective altruists and AI safety students, researchers, and entrepreneurs with all potential funding and training sources.

The map covers the AI x-risk ecosystem’s Funding; Strategy and Governance Research; Conceptual Research; Applied Research; Training and Education; Research Support; Career Support; Resources; Media; and Blogs.

The criteria for inclusion in the map: “An entry must have reducing AI existential risk as a goal; have produced at least one output to that end; must additionally still be active and expected to produce more outputs.”

Each item on the map is listed here with links and descriptions:


Funding

Open Philanthropy

The largest funder in the existential safety space. See the undergraduate scholarship and early career funding.

LTFF – Long-Term Future Fund

The main source of grants for individuals working on AI safety.

SFF – Survival and Flourishing Fund

The second largest funder in AI safety, using an algorithm and meeting structure called “The S-process” to allocate grants.

Lightspeed Grants

Fast funding for projects that help humanity flourish among the stars.

GWWC – Giving What We Can

A community of donors who have pledged to donate a significant portion of their income. Hosts annual donor lotteries.

FLI – Future of Life Institute – Fellowships

PhD and postdoc funding for work improving the future.

Longview Philanthropy [formerly Effective Giving UK]

Designs and executes bespoke giving strategies for major donors.

Manifund

A regranting and impact certificates platform for AI safety and other EA activities.

CLR – Center on Long-Term Risk – Fund

Financial support for projects focused on reducing (current and future) s-risks.

AI Safety GiveWiki (Formerly Impact Markets)

Crowdsourced charity evaluator focused on early-stage projects.

EAIF – Effective Altruism Infrastructure Fund

Support for EA infrastructure, including AI safety infrastructure.

AI2050

Philanthropic initiative supporting researchers working on key opportunities and hard problems that are critical to get right for society to benefit from AI. Invitation only.

SHfHS – Saving Humanity from Homo Sapiens

Small organization with a long history of funding x-risk reduction.

Superlinear Prizes

Prize platform for x-risk reduction and other EA contests.

Nonlinear Network

Funder network for x-risk reduction.

GAIA – Grantmakers for AI Alignment

Joinable donor circle for people earning to give or allocating funds towards reducing AI x-risk.

AI Alignment Awards

Research paper/essay writing competition platform.

Preamble Windfall Foundation

Funding organization aiming to minimize the risk of AI systems, created by PreambleAI.

Polaris Ventures [formerly, the Center for Emerging Risk Research]

Grantmaking nonprofit focused on supporting research into how increasingly autonomous systems can cooperate peacefully with one another.

NSF – National Science Foundation – Safe Learning-Enabled Systems

Funds research into the design and implementation of safe learning-enabled systems in which safety is ensured with high levels of confidence.

CAIF – Cooperative AI Foundation

Charity foundation supporting research into cooperative intelligence of advanced AI.

OpenAI – Superalignment Fast Grants

10 million USD in grants to support technical research toward aligning superintelligent AI.

The Navigation Fund

Seeking to alleviate global suffering by funding high-impact interventions based on rigorous research and experimentation.

Meta Charity Funders

Funding circle supporting meta charities.

Lionheart Ventures

VC firm investing in ethical founders developing transformative technologies that have the potential to impact humanity on a meaningful scale.

ARM – AI Risk Mitigation Fund

Aiming to reduce catastrophic risks from advanced AI through grants towards technical research, policy, and training programs for new researchers.

AE Studio

Empowering innovators and scientists to increase human agency by creating the next generation of responsible AI. Providing support, resources, and open-source software.


Strategy and Governance Research

FHI – Future of Humanity Institute

Oxford-based longtermist/x-risk research organization led by Nick Bostrom.

FLI – Future of Life Institute

Outreach, policy, grantmaking, and event organization for x-risks, including AI safety.

GovAI – Centre for the Governance of AI

AI governance research group at Oxford, advising government, industry, and civil society.

CLR – Center for Long-Term Risk [formerly the EA Foundation]

Research, grants, and community-building around AI safety, focused on conflict scenarios as well as technical and philosophical aspects of cooperation.

CSET – Center for Security and Emerging Technology [2]

Think tank at Georgetown University, USA, doing policy analysis at the intersection of national and international security and emerging technologies.

AIPI – AI Policy Institute [3]

Channeling public concern into effective regulation by engaging with policymakers, media, and the public to shape a future where AI is developed responsibly and transparently.

CLTR – Center for Long-Term Resilience

Independent think tank with a mission to transform global resilience to extreme risks.

ICFG – International Center for Future Generations

European think-and-do tank for improving societal resilience in relation to exponential technologies and x-risks.

Manifold Markets

Play-money prediction markets on many topics, including AI safety.

TFI – Transformative Futures Institute

Research agenda includes raising awareness of the need for urgent action to counter the risks of advanced AI.

Metaculus

Forecasting platform for many topics, including AI.

FRI – Forecasting Research Institute

Advancing the science of forecasting for the public good.

QURI – Quantified Uncertainty Research Institute

Interviews with AI safety researchers.

AI Impacts [4]

Strategic research, e.g., expert surveys and AI forecasting via analogies to other technological developments.

Epoch AI

Research group studying AI forecasting, produced graph showing compute used by every major ML model.

Rethink Priorities

Research organization with an AI Governance and Strategy (AIGS) team as well as an Existential Security Team (XST).

Convergence Analysis

X-risk research, advises AI organizations on responsible development.

TFS – The Future Society

Aligning AI through governance, doing policy research, advisory services, seminars/summits, and educational programs.

GCRI – Global Catastrophic Risk Institute

Small think tank that tries to bridge scholarship, government, and private industry.

CSER – Center for the Study of Existential Risk

Cambridge group doing miscellaneous existential safety research.

CFI – Centre for the Future of Intelligence

Interdisciplinary research centre within the University of Cambridge addressing the challenges and opportunities posed by AI.

GPI – Global Priorities Institute

Oxford research group focusing mainly on moral philosophy but also conceptual AI alignment.

Median Group

Research group working on models of past and future progress in AI, as well as intelligence enhancement and sociology related to x-risks.

LPP – Legal Priorities Project

Conducting legal research that mitigates x-risk and promotes the flourishing of future generations.

CAIP – Center for AI Policy [5]

US policy think tank dedicated to effective regulation to mitigate catastrophic risks posed by advanced AI.

Future Matters

Provides strategy consulting services to clients trying to advance AI safety through policy, politics, coalitions, and/or social movements.

PAI – Partnership on AI

Convening diverse, international stakeholders in order to pool collective wisdom to advance positive outcomes in AI.

IAPS – Institute for AI Policy and Strategy

Research organization focusing on AI regulations, compute governance, international governance & China, and lab governance.

LawAI – Institute for Law & AI

Think tank researching and advising on the legal challenges posed by AI.

Previously known as Legal Priorities Project (LPP).

UK AISI – UK AI Safety Institute

UK government organisation aiming to minimise surprise to the UK and humanity from rapid and unexpected advances in AI.

USAISI – U.S. AI Safety Institute

US government organization under NIST developing guidelines and standards for AI measurement and policy.

AIGS Canada – AI Governance & Safety Canada

Catalysing Canada’s leadership in AI governance and safety through advocacy, policy, and community building.

CARMA – Center for AI Risk Management & Alignment [6]

Conducting interdisciplinary research supporting global AI risk management. Also produces policy and technical research.


Conceptual Research

MIRI – Machine Intelligence Research Institute

The original AI safety technical research organization, doing agent foundations/ conceptual work, founded by Eliezer Yudkowsky.

ARC – Alignment Research Center

Research organization led by Paul Christiano, doing model evaluations and theoretical research focusing on the Eliciting Latent Knowledge (ELK) problem.

Cyborgism

Accelerating alignment progress by extending human cognition with AI.

Orthogonal

Formal alignment organization led by Tamsin Leake, focused on agent foundations.

ALTER – Association for Long-Term Existence and Resilience

Research organization doing work on infra-bayesianism (led by Vanessa Kosoy) and on policy for bio and AI risks (led by David Manheim).

John Wentworth

Independent researcher working on selection theorems, abstraction, and agency.

CHAI – Center for Human-Compatible AI

Academic AI safety research organization led by Stuart Russell in Berkeley.

Obelisk – Brain-like-AGI Safety

Lab building towards aligned AGI that looks like human brains. Featuring Steve Byrnes.

Team Shard

Independent researchers trying to find reward functions that reliably instill certain values in agents.

ACS – Alignment of Complex Systems

Focused on conceptual work on agency and the intersection of complex systems and AI alignment, based at Charles University, Prague.

PAISRI – Phenomenological AI Safety Research Institute

Performs and encourages AI safety research using phenomenological methods.

Eli Lifland

Independent researcher.

Dylan Hadfield-Menell

Assistant professor at MIT working on agent alignment.

Jacob Steinhardt

Assistant professor in Statistics at UC Berkeley working on alignment.

Roman Yampolskiy [7]

Author of two books on AI safety, Professor at the University of Louisville, background in cybersecurity.

MIT Algorithmic Alignment Group

Working towards better conceptual understanding, algorithmic techniques, and policies to make AI more safe.


Applied Research

DeepMind

Leading AI capabilities organization with a strong safety team, based in London.

OpenAI

San Francisco-based AI research lab led by Sam Altman, created ChatGPT.

Anthropic

AI research lab focusing on LLM alignment (particularly interpretability), featuring Chris Olah, Jack Clark, and Dario Amodei.

CAIS – Center for AI Safety

Research and field-building nonprofit doing technical research and ML safety advocacy.

Conjecture [8]

Alignment startup whose work includes interpretability, epistemology, and developing a theory of LLMs.

Redwood Research

Researching interpretability and aligning LLMs.

EleutherAI

A hacker collective focused on open-source ML and alignment research. Best alignment memes channel on the internet.

Ought

Working on factored-cognition research assistants, e.g., Ought.

Aligned AI

An Oxford-based startup working on safe off-distribution generalization, featuring Stuart Armstrong.

FAR AI

Ensuring AI systems are trustworthy and beneficial to society by incubating and accelerating research agendas too resource-intensive for academia but not yet ready for commercialization.

Encultured

A video game company focused on enabling the safe introduction of AI technologies into gaming.

Apollo Research

Evals for interpretability and behavior that aim to detect deception.

Timaeus

Research organization focused on singular learning theory and developmental interpretability.

Cavendish Labs

An AI safety research lab in Vermont, USA.

David Krueger

Runs an alignment lab at the University of Cambridge.

Sam Bowman

Associate Professor at New York University working on aligning LLMs.

Modeling Cooperation

Researching AI competition dynamics and building research tools.

AOI – AI Objectives Institute

Research lab creating tools and projects to keep human values central to AI impact, aiming to avoid catastrophe while improving flourishing.

METR – Model Evaluation & Threat Research

Evaluating whether cutting-edge AI systems could pose catastrophic risks to civilisation, including those from OpenAI and Anthropic.

ARG – NYU Alignment Research Group

Group of researchers at New York University doing empirical work with language models aiming to address longer-term concerns about the impacts of deploying highly-capable AI systems.

CBL – University of Cambridge Computational and Biological Learning Lab

Research group using engineering approaches to understand the brain and to develop artificial learning systems.

MAI – Meaning Alignment Institute

Research organization applying their expertise in meaning and human values to help ensure human flourishing in the age of AGI.


Training and Education

AISF – AI Safety Fundamentals

The standard introductory courses. 3 months long, 3 tracks: alignment, governance, and alignment 201.

MATS – ML Alignment & Theory Scholars program

2 weeks training, 8 weeks onsite (mentored) research, and, if selected, 4 months extended research.

AI Safety Camp

3-month online research program with mentorship.

ERA – Existential Risk Alliance – Fellowship [spin off of CERI – Cambridge Existential Risk Initiative]

In-person paid 8-week summer fellowship in Cambridge.

CHERI – Swiss Existential Risk Initiative – Summer Fellowship

Expanding and coordinating global catastrophic risk mitigation efforts in Switzerland.

Alignment Jam

Regular hackathons around the world for people getting into AI safety.

PIBBSS – Principles of Intelligent Behavior in Biological and Social Systems – Summer Research Fellowship

Fellowship for young researchers studying complex and intelligent behavior in natural and social systems.

SAIL – Safe AI London

Events and training programs in London.

Intro to MLS – ML Safety

Virtual syllabus for ML safety.

GCP – Global Challenges Project

Intensive 3-day workshops for students to explore x-risk reduction.

HAIST – Harvard AI Safety Team

Harvard student group.

MAIA – MIT AI Alignment

MIT student group.

OxAI – Oxford AI – Society

Oxford student group.

ASH – AI Safety Hub

Supports projects in AIS movement building.

BudAI – Budapest AI Safety

Local AI Safety group in Budapest.

AISHED – AI Safety Hub Edinburgh

Community of people interested in ensuring that AI benefits humanity’s long-term future.

CBAI – Cambridge Boston Alignment Initiative

Boston organization for helping students get into AI safety via workshops and bootcamps. Supports HAIST and MAIA.

CLR – Center on Long-Term Risk – Summer Research Fellowship

2–3-month summer research fellowship in London working on reducing long-term future suffering.

HA – Human-aligned AI – Summer School [9]

Program for teaching research methodology.

CHAI – Center for Human-Compatible AI – Internship

Research internship at UC Berkeley.

ARENA – Alignment Research Engineer Accelerator

Technical upskilling program in London with a focus on LLM alignment.

SERI – Stanford Existential Risk Initiative – Fellowship

10-week funded, mentored summer research fellowship for undergrad and grad students (primarily at Stanford).

AISI – AI Safety Initiative at Georgia Tech

Georgia Tech community doing bootcamps, seminars, and research.

SAIA – Stanford AI Alignment

AI safety student group at Stanford. Accelerating students into careers in AI safety and building the alignment community at Stanford.

EffiSciences

French student collective doing hackathons, conferences, etc.

MLAB – Machine Learning for Alignment Bootcamp

Bootcamp aimed at teaching ML relevant to doing alignment research. Run by Redwood Research.

WAISI – Wisconsin AI Safety Initiative

Wisconsin student group dedicated to reducing AI risk through alignment and governance.

Training For Good – EU Tech Policy Fellowship

An 8-month program for ambitious graduates intent on careers to improve EU policy on emerging technology.

AISG – AI Safety Initiative Groningen

Student group in Groningen, Netherlands.

ML4Good

10-day intensive, in-person bootcamps upskilling participants in technical AI safety research.

SPAR – Supervised Program for Alignment Research

Research program providing students the opportunity to spend a semester doing guided research or engineering projects directly related to AI safety.

MARS – Mentorship for Alignment Research Students

Research program pairing students with experienced mentors to work on an AI safety (technical or policy) research project for 2–3 months.


Research Support

Lightcone Infrastructure

Maintains LessWrong and the Alignment Forum and a funding allocation system.

Nonlinear Fund

"Means-neutral" AI safety organization, doing miscellaneous stuff, including offering bounties on small-to-large AI safety projects and maintaining the Nonlinear Library podcast.

CEEALAR – Centre for Enabling EA Learning & Research [formerly “EA Hotel”] [10]

Free or subsidized accommodation and catering in Blackpool, UK, for people working on AI safety and other EA cause areas.

AED – Alignment Ecosystem Development

Building infrastructure for the alignment ecosystem (volunteers welcome).

Apart Research

Runs alignment hackathons and provides AI safety updates and ideas.

Arb Research

Consultancy for forecasting, machine learning, and epidemiology, doing original research, evidence reviews, and large-scale data pipelines.

BERI – Berkeley Existential Risk Initiative

"Operations as a service" to alignment researchers, especially in academia.

SERI – Stanford Existential Risks Initiative

Runs research fellowships, an annual conference, speaker events, discussion groups, and a frosh-year COLLEGE class.

Impact Ops

Operations support for EA projects.

GPAI – Global Partnership on AI

International initiative aiming to bridge the gap between theory and practice on AI by supporting research and applied activities on AI-related priorities.

ENAIS – European Network for AI Safety

Connecting researchers and policymakers for safe AI in Europe.

Constellation

Berkeley research center growing and supporting the ecosystem of people working to ensure the safety of powerful AI systems.

LISA – London Initiative for Safe AI

Co-working space hosting independent researchers, organizations (including BlueDot Impact, Apollo, Leap Labs), and upskilling & acceleration programs (including MATS, ARENA).

Arkose

AI safety field-building nonprofit. Runs support programs facilitating technical research, does outreach, and curates educational resources.


Career Support

AIS (AI Safety) Quest

Helps people navigate the AI safety space with a welcoming human touch, offering personalized guidance and fostering collaborative study and project groups.

AISS – AI Safety Support

Free calls to advise on working on AI safety, Slack channel, resources list, and newsletter for AI safety opportunities.

80,000 Hours – Career Guide

Article with motivation and advice for pursuing a career in AI safety.

Successif

Helps mid-career and senior professionals transition into AI safety, performing market research and providing a range of services, including coaching, mentoring, and training.

AI Safety Google Group (formerly 80,000 Hours AI Safety Group)

Updates on academic posts, grad school, and training relating to AI safety.

80,000 Hours – Job Board

Comprehensive list of job postings related to AI safety and governance.

Effective Thesis – Academic Opportunities

Lists thesis topic ideas in AI safety and coaches people working on them.

Effective Thesis – Early Career Research Opportunities

Lists academic career opportunities for early-stage researchers (jobs, bootcamps, internships).

Nonlinear – Coaching for AI Safety Entrepreneurs

Free coaching for people running an AI safety startup or considering starting one.

Nonlinear – Career Advice for AI Safety Entrepreneurs

Free career advice for people considering starting an AI safety org (technical, governance, meta, for-profit, or non-profit).

AI Safety Fundamentals – Opportunities Board

Curated list of opportunities to directly work on technical AI safety.


Resources

How to pursue a career in technical AI alignment

A guide for people who are considering direct work on technical AI alignment.

Stampy's AI Safety Info

Interactive FAQ; Single-Point-Of-Access into AI safety.

AI Safety Training

List of all available training programs, conferences, hackathons, and events.

AI Safety Ideas

Repository of possible research projects and testable hypotheses.

AI Safety Communities

Repository of AI safety communities.

AI Safety Groups Directory

Curated directory of AI safety groups (local and online).

AI Watch

Database of AI safety research agendas, people, organizations, and products.

AI Plans

Ranked & scored contributable compendium of alignment plans and their problems.

AI Risk Discussions

Interactive walkthrough of core AI x-risk arguments and transcripts of conversations with AI researchers.

AI Safety Info Distillation Fellowship

3-month paid fellowship to write content for Stampy's AI Safety Info.

OpenBook

Tracker for EA donations.

Donations List

Website tracking donations to AI safety.

Map of AI Existential Safety

To understand recursion, one must first understand recursion.

AI Safety Map Anki Deck

Helps you learn and memorize the main organizations, projects, and programs currently operating in the AI safety space.

AI Digest

Visual explainers on AI progress and its risks. (By Sage Future).


Media

AI Explained

Not explicitly on AI safety, but x-risk aware and introduces stuff well.

80,000 Hours Podcast

Interviews on pursuing a career in AI safety.

EA – Effective Altruism – Talks

Podcast discussing AI safety and other EA topics.

Nonlinear Library

Podcast of text-to-speech readings of top EA and LessWrong content.

The Inside View

Meme-y interviews with AI safety researchers with a focus on short timelines.

Rational Animations

Animations about EA, rationality, the future of humanity, and AI safety.

AI Safety Videos

Comprehensive index of AI safety video content.

ML Safety Newsletter

Monthly-ish newsletter by Dan Hendrycks focusing on applied AI safety and ML.

ImportAI Newsletter

Weekly developments in AI (incl. governance), with bonus short stories.

ERO – Existential Risk Observatory

Focused on improving coverage of x-risks in mainstream media.

AXRP – AI X-risk Research Podcast

Podcast about technical AI safety.

FLI – Future of Life Institute – Podcast

Interviews with AI safety researchers and other x-risk topics.

PauseAI [11]

Campaign group dedicated to calling for an immediate global moratorium on AGI development.

Stop AGI [previously “Stop AI”]

Website communicating the risks of god-like AI to the public and offering proposals.

GAIM – Global AI Moratorium

Aiming to establish a global moratorium on AI until alignment is solved.

CAS – Campaign for AI Safety [12]

Campaign group dedicated to increasing public understanding of AI safety and calling for strong laws to stop the development of dangerous and powerful AI.

SaferAI

Developing the auditing infrastructure for general-purpose AI systems, particularly LLMs.

Rob Miles

AI safety explainers in video form.

AISCC – AI Safety Communications Centre

Connects journalists to AI safety experts and resources.

AI for Animals

Trying to make sure that superintelligent AI also cares about animals.


Blogs

LessWrong

Online forum dedicated to improving human reasoning often has AI safety content.

EA – Effective Altruism – Forum

Forum on doing good as effectively as possible, including AI safety.

AF – Alignment Forum

Central discussion hub for AI safety. Most AI safety research is published here.

Cold Takes

Blog about transformative AI, futurism, research, ethics, philanthropy, etc., by Holden Karnofsky.

ACX – Astral Codex Ten

A blog about many things, including summaries and commentary on AI safety.

Arbital

Wiki on AI alignment theory, mostly written by Eliezer Yudkowsky.

janus's Blog

Generative.ink, the blog of janus the GPT cyborg.

Victoria Krakovna's Blog

Blog by a research scientist at DeepMind working on AGI safety.

Bounded Regret – Jacob Steinhardt's Blog

UC Berkeley statistics prof blog on ML safety.

Paul Christiano’s Blog [13]

Blog on aligning prosaic AI by one of the leading AI safety researchers.

DeepMind Safety Research

Safety research from DeepMind (hybrid academic/commercial lab).

Epistemological Vigilance – Adam Shimi's Blog

Blog by Conjecture researcher working on improving epistemics for AI safety.

Carado.moe

Blog about AI risk, alignment, etc., by an agent foundations researcher.

AIZI – from AI to ZI

Blog on AI safety work by a math PhD doing freelance research.

Daniel Paleka – AI safety takes

Monthly+ newsletter on AI safety happenings.

Vox – Future Perfect

Index of Vox articles, podcasts, etc., around finding the best ways to do good.

AI Safety in China

Newsletter from Concordia AI, a Beijing-based social enterprise, giving updates on AI safety developments in China.

Don't Worry about the Vase

Blog by Zvi Mowshowitz on various topics, including AI. [14]

AI Prospects

Newsletter by Eric Drexler on AI prospects and their surprising implications for global economics, security, politics, and goals.


No Longer Active

ML & AI Safety Updates (Apart Research)

Weekly podcast, YouTube, and newsletter with updates on AI safety.

AISS – AI Safety Support – Newsletter

Lists opportunities in alignment.


Final Note

The descriptions above are identical to those on the “AI Safety World Map” without any edits.

I previously described the Conjecture CEO’s work as AI Panic Marketing, the Future of Life Institute’s work as Panic-as-a-Business, and the Campaign for AI Safety as the Campaign for Mass Panic.

But those characterizations do not appear in the descriptions above.

I let their words stand on their own (“Funding for projects that help humanity flourish among the stars”!!). I hope you enjoy it as much as I did.


Thank you for reading AI PANIC. This post is public so feel free to share it.



Endnotes

1

Funding: Last month, we learned about the $665.8 million donation that the Future of Life Institute received from Vitalik Buterin in the form of a shitcoin (Shiba Inu).

So, the investment in growing the “x-risk” ideology is not half a billion (as calculated last year) but over a billion dollars.

2

CSET, the Center for Security and Emerging Technology at Georgetown University, received more than $100 million from Open Philanthropy.

Its director of foundational research grants is Helen Toner. Her background includes GiveWell, the Centre for the Governance of AI, and a role as a senior research analyst at Open Philanthropy. In September 2021, she replaced Open Philanthropy’s Holden Karnofsky on OpenAI’s board of directors. Following the attempted coup at OpenAI, she is no longer on its board, but she still works at CSET.

3

The Executive Director of the AI Policy Institute is Daniel Colson. He launched the organization because public polling can “shift the narrative in favor of decisive government action.” His agenda against “super-advanced AI systems” is “making it illegal to build computing clusters above a certain processing power.” Congress, he suggested, could cap AI models at 10^25 flops, a measure of the speed at which computers can perform complex calculations. Or, better yet, he said, set the cap five orders of magnitude lower, at 10^20 flops: “That’s what I would choose.”

I would advise you to take AIPI’s (widely-quoted) survey results with a grain of salt.

4

AI Impacts’ annual survey is very problematic, as I have pointed out several times.

AI Impacts is based at Eliezer Yudkowsky’s MIRI (the Machine Intelligence Research Institute in Berkeley, California).

Recently, its co-founder Katja Grace admitted:

1. Extinction risk: “We did not make sure that this is an informed estimate.”
2. Participants “are very unlikely to be experts in forecasting AI.”
3. “There have been quite notable framing effects.”

In the podcast AI Inside, you can find a more detailed discussion.

5

The Center for AI Policy (CAIP) released a draft bill on April 9, 2024, describing it as a “framework for regulating advanced AI systems.” This “Model Legislation” would “establish a strict licensing regime, clamp down on open-source models, and impose civil and criminal liability on developers.”

Considering CAIP’s board of directors, it’s no surprise that its proposal was called “the most authoritarian piece of tech legislation.” The board features “AI Safety” enthusiasts from the Centre for the Study of Existential Risk (David Krueger), MIRI (Nate Soares and Thomas Larsen, who contracted with OpenAI), and Palisade Research (Jeffrey Ladish, who leads the AI work at the Center for Humane Technology and previously worked at Anthropic). Together, they declared war on the open-source community.

6

CARMA (Center for AI Risk Management & Alignment) is an affiliated organization of the Future of Life Institute. It aims to “help manage” the “existential risks from transformative AI through lowering their likelihood.” This new initiative was founded by FLI thanks to Buterin’s Shiba Inu donation.

7

Roman Yampolskiy updated his prediction of human extinction from AI to 99.999999%.

That’s a bit higher than Eliezer Yudkowsky’s probability of >99%.

Definitely not an Apocalyptic Cult…

8

Conjecture’s CEO, Connor Leahy, shared that he does “not expect us to make it out of this century alive; I’m not even sure we’ll get out of this decade!”

Again, this is not an Apocalyptic Cult. Not at all.

9

Recruiting new EAs: Missing from the “Training and Education” section are the high-school summer programs. It is a lesser-known fact that effective altruists recruit new members at an early age and lead them into the “existential risks” realm.

Thanks to the “Effective Altruism and the strategic ambiguity of ‘doing good’” research report by Mollie Gleiberman (University of Antwerp, Belgium, 2023), I learned about their tactics:

“To give an example of how swiftly teenagers are recruited and rewarded for their participation in EA: one 17-year-old recounts how in the past year since they became involved in EA, they have gained some work experience at Bostrom’s FHI; an internship at EA organization Charity Entrepreneurship; attended the EA summer program called the European Summer Program on Rationality (ESPR); been awarded an Open Philanthropy Undergraduate Scholarship (which offers full funding for an undergraduate degree); been awarded an Atlas Fellowship ($50,000 to be used for education, plus all-expenses-paid summer program in the Bay Area); and received a grant of an undisclosed amount from CEA’s EA Infrastructure Fund to drop out of high-school early, move to Oxford, and independently study for their A-Levels at the central EA hub, Trajan House, which houses CEA, FHI, and GPI among other EA organizations.”

10

The EA Hotel (CEEALAR) is the Athena Hotel in Blackpool, UK, which Greg Colbourn acquired in 2018 for effective altruists’ “learning and research.” He bought it for £100,000 after cashing in his cryptocurrency, Ethereum.

But why stop at hotels when the EA movement can have mansions and castles?

The research report by Gleiberman also gave me valuable insight into effective altruism expenditures:

For effective altruists, retreats, workshops, and conferences are so important that they justify spending more than $50,000,000 on luxury properties.

  • Wytham Abbey (Oxford, England) = $22,805,823 for purchase and renovations.

  • Rose Garden Inn (Berkeley, CA) = $16,500,000 + $3,500,000 for renovations.

  • Chateau Hostacov (Czech Republic) = $3,500,000 + $1,000,000 for operational costs.

  • Lakeside Guest House (Oxford, England) = $1.8 million.

  • Bay Hill Mansion (Bodega Bay, north of San Francisco, CA) = $1.7 million. 

Apparently, it’s “rational” to think about “saving humanity” from “rogue AI” while sitting in a $20 million mansion. Elite universities’ conference rooms are not enough (they are for the poor).

Effective Ventures UK recently listed the Wytham Abbey mansion for sale, but only because it was forced to return a $26.8 million donation to the victims of the crypto-criminal Sam Bankman-Fried.

The EA movement sponsors so many retreats that Canopy Retreats was formed in 2022 to provide logistical support to EA retreat organizers.

11

PauseAI’s advocacy:

12

The two-part series The Campaign for AI Panic (Part 1 and Part 2) dealt with the Campaign for AI Safety (CAS) and the Existential Risk Observatory (ERO). It shows how these organizations constantly examine how to market “AI x-risk” messages to targeted audiences based on political party affiliation, age group, gender, educational level, and residency.

13

Update (April 17, 2024): Feds appoint “AI doomer” to run AI safety at US institute (Ars Technica). Paul Christiano once predicted a 50% chance of AI killing all of us.

14

In a Telegraph article, Zvi Mowshowitz claimed:

“Competing AGIs [artificial general intelligence] might use Earth’s resources in ways incompatible with our survival. We could starve, boil or freeze.”

The news about FLI’s $665M donation sparked a backlash against its policy advocacy. In response, Mowshowitz wrote: “Yes, the regulations in question aim to include a hard compute limit, beyond which training runs are not legal. And they aim to involve monitoring of large data centers in order to enforce this. I continue to not see any viable alternatives to this regime.”

We see.

2 Comments
The Eighth Type of Ambiguity
Apr 16 (edited) · Liked by Nirit Weiss-Blatt

What you've mapped here is a system for spreading intellectual pollution.

Not just individual minds, but societies, have a limited "bandwidth" of attention. Spending more money, infiltrating more channels, and ensuring that information comes from multiple points rather than just one source ensures a message occupies more of that bandwidth. Real risks, actual genuine existential threats, are crowded out of our society's attention by a fantasy.

I am reading this piece after a sleepless night after finishing Annie Jacobsen's terrifying book "Nuclear War: A Scenario". For a subject I know a lot about - I've been something of a nuclear war strategy nerd since my early teens - Jacobsen managed to shock me. She details an imminently plausible scenario for how a single missile launch by North Korea (out of madness, spite, accident, the reason doesn't matter) spirals out of control into a global thermonuclear war between the major powers because of the structure of the US nuclear command and control system. And this is a threat that actually exists, today, right now at this very instant. And it is one that could be disarmed by a few policy changes: ones the US could do unilaterally, with minimal or zero loss of deterrence. And the policy changes are the kinds that political pressure and influence campaigns could bring about. I can only imagine what this much money, focused into that effort, could do towards greater strategic stability.

But enough people, and people with wealth and influence, prefer to focus on the risk of death via superintelligent AI. And it's not actually hard to see why: puzzles are more fun than problems. An open-ended puzzle, one without a clear, achievable end state (how would you ever know you have succeeded in curbing AGI 'Risk'?), can be played forever; I studied philosophy in grad school: I would know. And the people involved are highly intelligent, and into the same things you are, so there is social proof: I can't be a deluded fool wasting my time and distracting society from real problems; I'm one of an elite vanguard.

But actually addressing a real problem requires playing by the world's rules, not ones you fantasized about in online forums and now get nonprofit money to continue to fantasize about. It requires addressing a hard problem, with entrenched interests and 75 years of institutional inertia behind it, and a nightmarish danger that is, despite what people say, not only not "unthinkable" but eminently imaginable.

Thank you for your work in exposing all of this.
