The AI Panic Campaign - part 1
AI Safety organizations constantly examine how to market “x-risk” messages to targeted audiences based on political party affiliation, age group, gender, educational level, and residency.
“Never underestimate the power of truckloads of media coverage, whether to elevate a businessman into the White House or to push a fringe idea into the mainstream. It's not going to come naturally, though - we must keep working at it.”
The “x-risk campaign” exposé is based on hundreds of publicly available documents. The first two articles, released today, reveal the following:
AI Safety organizations constantly examine how to target “human extinction from AI” and “AI moratorium” messages based on political party affiliation, age group, gender, educational level, field of work, and residency.
Their actions include:
Conducting profiling, surveys, and “Message Testing” trials to determine which types of people are most responsive to x-risk messages.
Optimizing impact on specific subgroups within the population of interest by mapping distinct frames and language variations.
Sharing the detailed findings with the larger x-risk community.
Implementing “best practices” in marketing and lobbying.
This 2-part series focuses on two organizations: “Campaign for AI Safety” and “Existential Risk Observatory.”
(However, these are not the only ones whose documents have been thoroughly reviewed.)
In the first post, I present their various x-risk “Message Testing”:
“Test of narratives for AGI moratorium support” and “AI doom prevention message testing”
“Alternative phrasing of ‘God-like AI’”
Studies on the best ways to influence public opinion
The second post is dedicated to their main target audience - policymakers.
After inflating the perceived danger of AI, the AI Safety organizations call for sweeping regulatory interventions, including widespread surveillance of AI research & development.
Part 2 describes the lobbying efforts and what these organizations are trying to achieve:
Policy submissions to governments worldwide
Targeting the UK AI Safety Summit
Previous billboard/radio ads and protest signs.
There are a few concluding remarks at the end, covering the broader AI PANIC ecosystem, the use of non-scientific terms, and the ultimate question: Will this panic campaign succeed?
Background
Effective Altruism (EA), AI existential risk (x-risk), and AI Safety
According to the “Effective Altruism” (EA) movement, the most pressing problem in the world is preventing an apocalypse where an Artificial General Intelligence (AGI) exterminates humanity. With backing from billionaires (like Elon Musk, Vitalik Buterin, Jaan Tallinn, Peter Thiel, Dustin Moskovitz, and Sam Bankman-Fried), this movement founded and funded numerous institutes, research groups, and companies under the brand of “AI Safety.”
The “AI Safety” organizations that warn of long-term existential risks (such as the Machine Intelligence Research Institute, the Future of Life Institute, and the Center for AI Safety), as well as leading AI labs (such as OpenAI and Anthropic), obtain their funding by convincing people that AI existential risk is a real and present danger.
Background materials can be found in “How Elite Schools Like Stanford Became Fixated on the AI Apocalypse,” “How Silicon Valley doomers are shaping Rishi Sunak’s AI plans,” and “How a Billionaire-backed Network of AI Advisers Took Over Washington.”
This movement, which raises the alarm about rogue AI that could wipe out humanity, is “distracting researchers and politicians from other more pressing matters in AI ethics and safety.”
Those Effective Altruism-backed organizations are “pushing policymakers to put AI apocalypse at the top of the agenda — potentially boxing out other worries and benefiting top AI companies with ties to the network.” Despite being “a fringe group within the whole society, not to mention the whole machine learning community,” they have successfully moved “extinction from AI” “from science fiction into the mainstream.”
This newsletter has previously discussed the role of the media in amplifying doomsaying. This series explores the other side of the coin: The advocates working to perfect x-risk messages so they can have the greatest impact on the media, industry, and governments.
Existential Risk Observatory
The Existential Risk Observatory was launched in 2021 with traditional media as its primary target:
“Publication in traditional media generates both a significant audience, and built-in credibility and trustworthiness of the message. Both these things are valuable to us.”
The mission is “existential risk awareness building.” So, the Existential Risk Observatory publishes opinion editorials, organizes events, and examines “how social indicators - age, gender, education level, country of residence, and field of work - affect the effectiveness of AI existential risk communication.”
Their funding comes from the Estonian billionaire Jaan Tallinn (Future of Life Institute, Centre for the Study of Existential Risk) through his Survival and Flourishing Fund, and from the Dutch billionaire Steven Schuurman through his International Center for Future Generations and his Dreamery Foundation.
The organization is based in the Netherlands, and one of its preferred forms of influence is publishing op-eds in traditional media. Its founder, Otto Barten, once claimed they “increased xrisk societal debate by 25%.” TIME magazine (AKA the doomers’ favorite place) published two of his columns: Why Uncontrollable AI Looks More Likely Than Ever and An AI Pause Is Humanity's Best Bet For Preventing Extinction.
Campaign for AI Safety
In 2023, the Campaign for AI Safety was launched in Australia to “broadcast messaging about AI x-risk” and to conduct “research that supports this messaging.” When its founder, Nik Samoylov, announced the initiative to the Effective Altruism community, he said that he “generally agrees with Yudkowsky’s position on AI” (linking to his infamous TIME op-ed, “We need to shut it all down”).
The organization is mainly self-funded by Nik Samoylov, who uses his “market research” company, Conjointly, to collect online responses and track the changes in “public opinion regarding AI x-risk.” The desired outcome is a “handbook of communicating existential risk from AI” in order to “convey the extreme danger to the general public.”
Both of these organizations came to the same conclusion about communicating x-risk: “People’s minds can be changed.”
The purpose of “Message Testing” is to target the right audience with the right message
Message Testing is a type of market research that measures how an organization’s marketing language resonates with an audience. It helps companies tailor their message to their target audience for maximum impact.
Message testing by AI Safety organizations has a more specific goal: Testing narratives that can convince people of the need for a global AI moratorium.
“A person needs to walk through the chain of thought (or be walked through) in order for them to come to the conclusion about the need for a moratorium,” wrote Nik Samoylov.
To persuade people about the need for an AI moratorium, you must first identify “the most effective ways” to communicate it.
1. “Test of Narratives”
The Campaign for AI Safety launched a series of surveys in the U.S., referred to as “AI doom prevention message testing.”
It was disappointing to them that in two studies (one with 110 slogans and another with 22 slogans), the most agreeable framing was “Artificial Intelligence poses both risks and opportunities” (agreement score of 62). A distant second was “Stop unsafe deployment of AI capabilities” (agreement score of 35).
However, they found some beneficial “challenger statements,” such as “Control artificial intelligence before it controls you.”
People were most likely to be pulled away from “risks and opportunities” or “no risk of human extinction” when they heard that phrase.
When they tested specific AI descriptions and terms, they found that “the average respondent perceives references to God and God-like AI even worse than some references to aliens.”
It reinforced one of their other studies, the Test of narratives for AGI moratorium support, in which the “Judeo-Christian spiritual” narrative was “the worst performer in the test” (see Appendix 1).
The term “God-like AI” was repeatedly used in the media, but it wasn’t the most persuasive.
A different narrative test produced another disappointing finding: “The imminence of AI danger” was the statement people most disagreed with, and a large number of people were unsure about it.
The Campaign for AI Safety came to the following conclusions:
“Convincing the public of it should be a priority, but logical arguments alone may be insufficient.”
The findings suggest “creating urgency around AI danger” and “promoting optimism for international cooperation.”
But how do you create such a sense of “urgency”?
The Campaign for AI Safety decided to conduct additional, more detailed studies.
2. Alternative phrasing of “God-like AI”
This survey asked 1,505 respondents (representative of the U.S. general adult population) to respond to 37 AI descriptions. The respondents rated (on a 5-point Likert scale) their agreement/disagreement, concern, and whether we should stop AI labs (“action”).
According to the Campaign for AI Safety, the list of AI descriptions was formed by collecting ideas from “multiple people in the field of AI Safety Communications.”
They came up with the following descriptions:
Dangerous AI | Superintelligent AI species | AI that is smarter than us like we’re smarter than 2-year-olds | Unstoppable AI | AI species 1000x smarter and more powerful than us | Superintelligent AI monster | Uncontrollable machine intelligence | Superfast super-virulent computer viruses | All powerful AI | AI species | Uncontrollable AI | Godlike AI | Human replacement AI | Superhuman machine intelligence | Overpowered AI | AI that is smarter than us like we’re smarter than cows | Oppressive AI | God-like AI | Killer AI | Machine superintelligence | Superhuman AI | Terminator-level AI | Mechanized death AI | Digital alien invasion | Smarter-than-human AI | Matrix-level AI | Skynet-like AI | AI-saviour | Galaxy-eating AI | Superintelligent AI demon | Superintelligent AI | AI overlord | Stupid AI | Earth-reshaping AI | Skynet-level AI | Stochastic parrot | HAL 9000 AI.
They compared:
if “Terminator-level AI” outperforms “Skynet-level AI” or “Skynet-like AI” [it’s the same movie]
if “Superintelligent AI” outperforms “Superintelligent AI demon” [it’s better without the demon]
or if “Godlike AI” outperforms “God-like AI” [it does].
Respondents were also asked to provide the following information for subgroup analysis:
Republican/Democrat/Neither party
Female/Male
Age 18-29/Age 30-44/Age 45+
Christian
Used ChatGPT
This study produced the most bizarre Matrix you could possibly find.
The x-axis represents the “Agreement” with the AI description.
The y-axis represents “Concern” about existential risk plus support for stopping the AI labs (“Action”).
Overall, “Dangerous AI” and “Superintelligence” performed better than the other descriptions.
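To make the analysis concrete, here is a minimal sketch in Python (with pandas) of how Likert-scale message-testing responses of this kind are typically aggregated into per-description and per-subgroup scores; the column names and toy ratings below are hypothetical illustrations, not the campaign’s actual data.

```python
# Minimal sketch of aggregating Likert-scale message-testing data.
# Column names and ratings are hypothetical, not the campaign's dataset.
import pandas as pd

# One row per (respondent, AI description): three 1-5 Likert ratings.
responses = pd.DataFrame([
    {"description": "Dangerous AI",        "party": "Republican", "agreement": 4, "concern": 4, "action": 3},
    {"description": "Dangerous AI",        "party": "Democrat",   "agreement": 5, "concern": 4, "action": 4},
    {"description": "God-like AI",         "party": "Republican", "agreement": 2, "concern": 2, "action": 1},
    {"description": "God-like AI",         "party": "Democrat",   "agreement": 3, "concern": 2, "action": 2},
    {"description": "Superintelligent AI", "party": "Republican", "agreement": 4, "concern": 3, "action": 3},
    {"description": "Superintelligent AI", "party": "Democrat",   "agreement": 4, "concern": 4, "action": 3},
])

# Mean score per description: x = agreement, y = concern + support for stopping AI labs.
summary = responses.groupby("description")[["agreement", "concern", "action"]].mean()
summary["concern_plus_action"] = summary["concern"] + summary["action"]
print(summary[["agreement", "concern_plus_action"]].sort_values("agreement", ascending=False))

# Subgroup analysis: the same aggregation split by a demographic variable.
by_party = responses.groupby(["party", "description"])["agreement"].mean().unstack()
print(by_party)
```

In a real study, each row of `summary` would become one point in the agreement-versus-concern matrix, and the subgroup table would drive the per-demographic phrasing recommendations.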
Subgroup analysis - demographic
Political party affiliation
Republicans versus Democrats:
The Campaign for AI Safety recommended specific phrases to communicate effectively with Republicans and Democrats (see the detailed table in Appendix 2).
These findings have already been implemented. The Campaign for AI Safety incorporated “dangerous AI” and “Superintelligent AI” into its messaging, e.g., social media posts and press releases.
Age group, gender, residency, field of work, and educational level
The “AI doom prevention message testing” analysis found that “Younger people more acutely perceive the dangers of AI than older generations.” A different survey (of 1,481 Australians) found that the most skeptical of AI were older people (aged 50-64), females, and rural Australians.
The geographical breakdown in Australia included, for example, Tasmania, Melbourne, Western Australia, and Perth; in the U.S., it covered four macro-regions: South, West, Midwest, and Northeast.
The Existential Risk Observatory found that “Mass media items can successfully raise awareness, with female participants and those with a bachelor’s degree being more receptive.” “Nearly 50 percent of women were convinced, while men were closer to 25-30 percent.” This suggests that women are “more likely to change their perceptions towards AI.”
Educational level + field of work: “Some respondents have heard the x-risk arguments before but didn’t find them convincing,” explained Otto Barten from the Existential Risk Observatory.
“According to first measurements, this doesn’t correlate too much with education level or field of work” (comparing business & finance, law & government, arts & media, education, STEM [excluding AI], healthcare, and the military; and high school, bachelor’s, master’s, and doctorate degrees). He shared their conclusion on the Effective Altruism forum, LessWrong:
“Our data is therefore pointing away from the idea that only brilliant people can be convinced of AI x-risk.”
Usage of AI Tools
People who use AI tools are LESS concerned about human extinction from AI
According to the Alternative phrasing analysis, respondents who used ChatGPT showed “a differentially lower concern for ‘dangerous AI.’” They also expressed “a differentially lower need for action for descriptions about AI being smarter and uncontrollable.”
So, people who actually use the tool do not view AI as monstrous and dangerous.
It’s perhaps the most telling finding of all.
Using AI tools to fight AI
The Campaign for AI Safety uses Conjointly, an “all-in-one survey research platform.” A few months ago, its founder, Nik Samoylov, celebrated “the applications of AI in market research.”
His company’s new offering? “LLM-driven surveys” and “Remarkably fast yet insightful AI summaries of open-ended text responses,” “powered by GPT-3 NLP model.”
The somewhat remarkable thing is that he uses AI for “message testing,” while he uses “message testing” to fight AI.
3. Studies on the best ways to influence public opinion
The Existential Risk Observatory conducted a few studies on “The Effectiveness of AI Existential Risk Communication to the American and Dutch Public.”
One study aimed “to evaluate the effectiveness of communication strategies in raising awareness of AGI risks.” It measured the changes in participants’ awareness after consuming various media interventions (articles or videos).
The effectiveness was measured through two indicators, “Human Extinction Events” (ranking of events that could cause extinction) and “Human Extinction Percentage” (likelihood, in percentage, of extinction by AI).
Prior to the intervention, 68 percent of the 500 participants were NOT concerned about AI causing human extinction. After the intervention, that share decreased to around 52 percent.
“These findings suggest that the media materials had a relevant impact on increasing participants’ concern about AI existential [risk].”
“The conversion rates for newspaper articles and YouTube videos are actually fairly high,” concluded Otto Barten on LessWrong.
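As an illustration of the arithmetic behind these figures, here is a minimal Python sketch of a pre/post tally and one plausible reading of a “conversion rate” (the share of initially unconcerned participants who report concern after the intervention); the responses below are invented placeholders, not the Observatory’s data.

```python
# Sketch of a pre/post "conversion" calculation for a media-intervention survey.
# The booleans below are illustrative placeholders, not the Observatory's data.
concerned_before = [False, False, True, False, True, False, False, True]
concerned_after  = [True,  False, True, True,  True, False, True,  True]

n = len(concerned_before)
share_not_concerned_before = sum(not c for c in concerned_before) / n
share_not_concerned_after  = sum(not c for c in concerned_after) / n

# One reading of "conversion rate": among those not concerned before the
# intervention, how many report concern afterwards?
unconcerned = [i for i in range(n) if not concerned_before[i]]
converted = sum(concerned_after[i] for i in unconcerned) / len(unconcerned)

# The study cited above reports roughly 68% not concerned before and 52% after.
print(f"Not concerned before: {share_not_concerned_before:.0%}")
print(f"Not concerned after:  {share_not_concerned_after:.0%}")
print(f"Conversion rate:      {converted:.0%}")
```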
Since the main goal is strict AI regulation, this was an important finding: “The percentage of participants who believed that the government should either regulate or prohibit AI development increased following the intervention.”
The Existential Risk Observatory explained:
“This may indicate that after being exposed to the media content, a portion of the participants might be prone to believe that the government should regulate or prohibit AI development as a response to the perceived threat, potentially as a result of a fear response to the possibility of extinction.”
It’s almost as if they admitted to being PROFESSIONAL FEARMONGERS.
An additional survey examined “The views of the American general public on the idea of imposing an AI moratorium and their likelihood of voting.” They asked 300 U.S. residents to state their concerns about human extinction from AI and the need for government intervention (pre-test questions), to read an article about AI x-risk (the intervention), and to respond again (post-test questions).
After the intervention, the respondents’ support for an AI moratorium increased by more than 10%. In terms of mean value, the articles by Stuart Russell (CNN) and Eliezer Yudkowsky (TIME) were the most successful, followed by Gary Marcus (CNBC). The conclusion was:
“It can be suggested that the general American public's opinions on the implementation of an AI moratorium can be influenced by exposure to media about the existential risks of AI.” It indicates “the importance of media framing and messaging in shaping public opinion on complex issues such as AI existential risk.”
Therefore, the organization plans to “assess the influence of distinct media narratives, such as alarmist or academic language, on the communication of AI existential threats.”
“Further investigation is necessary to determine which narratives are most appealing to specific groups.”
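The comparison of article conditions comes down to similar pre/post arithmetic. Here is a hypothetical Python sketch of computing the mean shift in moratorium support per article read; the ratings are invented, and the labels simply mirror the attributions above rather than the survey’s data.

```python
# Sketch of comparing pre/post moratorium support by article condition.
# Ratings are invented; labels mirror the articles named in the text.
import statistics

# (article shown, support before on a 1-5 scale, support after on a 1-5 scale)
rows = [
    ("Russell (CNN)",    2, 4), ("Russell (CNN)",    3, 4),
    ("Yudkowsky (TIME)", 2, 3), ("Yudkowsky (TIME)", 3, 4),
    ("Marcus (CNBC)",    3, 3), ("Marcus (CNBC)",    2, 3),
]

by_article = {}
for article, pre, post in rows:
    by_article.setdefault(article, []).append(post - pre)

# Mean pre-to-post shift per article: the kind of comparison behind
# ranking which article moved readers the most.
for article, shifts in by_article.items():
    print(f"{article:17s} mean shift: {statistics.mean(shifts):+.2f}")
```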
The parallel to elevating a businessman into the White House
In addition to common market research tools (described above), AI Safety organizations also conduct “other forms of psychological and behavioral experiments and focus groups.” However, most of their work “is in the form of unpublished private commissions.” One company that conducts such commissions is Rethink Priorities.
By using the same kind of demographic analysis that Cambridge Analytica conducted (control variables: age, sex, race, household income, education, geographical location, and political party affiliation), these organizations openly try to persuade people to support their “fringe idea.”
While Cambridge Analytica used fear-based content to influence elections, the AI Safety organizations are using “human extinction” messages to push for an AI moratorium.
In both cases, the aim is to generate emotion and manipulate public opinion.
The whistleblower Christopher Wylie had a different word for it: Mindf*ck.
Part 2 of this story presents the efforts to influence AI regulation through policy submissions to governments worldwide, and the focus on the upcoming AI Safety Summit in the UK.
Appendix 1
Test of narratives for AGI moratorium support. The 9 narratives (including the “Judeo-Christian spiritual” narrative) with “support for moratorium” and “concern for AI-human extinction.”
Appendix 2
Alternative description to replace “God-like AI.”
“Agreement & Concern & Stop AI labs” for each descriptor, by subgroup.
The publication date was set for Oct 8.
On Oct 7, Israel was struck by horrific terror attacks. I’ve been devastated ever since.
I took a brief break today to make these materials available to the AI community and policymakers.
Nonetheless, I’m in deep, deep grief.
Congrats on your well-researched blog posts. We're happy that you are keenly studying our work.
Clearly, you don't believe that AI can cause human extinction. Are you actually right about that? If you are, we're the first to stop our campaign, and personally (as ERO's founder) I'm happy to go back to sustainable energy engineering and climate activism (I made about four times what I'm making now doing the engineering, if that's relevant). To make that happen, could you please send us the paper that convincingly shows that human-level AI has a 0% chance of leading to human extinction? And that the four independent ways that could lead to extinction, which e.g. Dan Hendrycks found (https://arxiv.org/abs/2306.12001), are all wrong?
Unfortunately, that paper doesn't exist. And not for lack of discussion. As you've noted yourself, there is plenty of debate around whether AI can lead to human extinction or not. Given that this is the case, one would presume that there are decent arguments from those working on AI that their invention can obviously (to think of the absurd idea!) never lead to human extinction, because of reasons A, B, and C. There's a small problem: reasons A, B, and C are missing. I've met many AI researchers, both in industry and academia, who actually agree that yes, our invention could lead to human extinction. Some of them have signed the open statement earlier this year, reading that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." If that sounds scary, it's because these people are scared. They are scared because they think that yes, AI can actually lead to human extinction. (1/2)