The AI Panic Campaign – Part 2
The x-risk campaign is primarily aimed at policymakers. The goal is to persuade them to surveil and criminalize AI development.
“First, they ignore you, then they laugh at you, then they fight you, then you win.”
“There’s a direct connection between publishing articles and influencing policy.
We are moderately optimistic that we can influence policy in the medium term.”
These quotes are from the founders of the Existential Risk Observatory and the Campaign for AI Safety.
The previous post provided the necessary background materials on their market research/“Message Testing” efforts:
“Test of narratives for AGI moratorium support” and “AI doom prevention message testing.”
“Alternative phrasing of ‘God-like AI’”
Studies on the best ways to influence public opinion
Part 2 is dedicated to the lobbying efforts:
Policy submissions to governments worldwide
Targeting the UK AI Safety Summit
Previous billboard/radio ads and protest signs
There are a few concluding remarks at the end, including the broader AI PANIC ecosystem, non-scientific terms, and the ultimate question: Will this panic campaign succeed?
4. Policy submissions to governments worldwide
Fear-based campaigns are designed to promote fear-based AI governance models.
As x-risk discourse permeates the media, politicians feel growing pressure to act. Framing AI in extreme terms is meant to push policymakers toward stringent rules.
After using the media to inflate the perceived danger of AI, AI Safety organizations demand sweeping regulatory interventions in AI usage, deployment, training (caps on training-run size), and hardware (GPU tracking).
According to the Campaign for AI Safety website, its primary activity is “not limited to any one country” and includes “stakeholder engagement, such as making policy submissions.”
Their policy submissions call for the following measures:
An indefinite and worldwide moratorium on further scaling of AI models
Shutting down large GPU and TPU clusters
Prohibition of training ML models requiring more than 10^23 FLOP of compute (for a sense of scale, see the sketch after this list)
Introducing a licensing scheme for high-risk AI development and recognizing licenses issued overseas under similar schemes
Passing national laws criminalizing the development of any form of Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI).
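For a sense of scale, here is a minimal back-of-envelope sketch. It assumes the widely cited estimate of roughly 3,640 petaflop/s-days (about 3.1 × 10^23 FLOP) for GPT-3's 2020 training run; that figure comes from published estimates, not from the campaign's submissions:

```python
# Back-of-envelope check of the proposed 10^23 FLOP training cap.
# Assumption (not from the campaign's documents): GPT-3's training run
# is commonly estimated at ~3,640 petaflop/s-days, i.e. about 3.1e23 FLOP.

CAP_FLOP = 1e23                     # the proposed prohibition threshold
PFLOPS_DAY = 1e15 * 86_400          # one petaflop/s sustained for one day
GPT3_FLOP = 3_640 * PFLOPS_DAY      # ~3.1e23 FLOP estimate for GPT-3

print(f"GPT-3 estimate:  {GPT3_FLOP:.2e} FLOP")
print(f"Over the cap by: {GPT3_FLOP / CAP_FLOP:.1f}x")
# -> roughly 3x the threshold, so even a 2020-era model would be prohibited
```

If these estimates hold, the demanded cap sits below the compute budget of a 2020-era model, let alone today's frontier systems.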
So far, the organization has submitted these recommendations to:
The Canadian Guardrails for Generative AI – Code of Practice by Innovation, Science and Economic Development Canada
The Select Committee on Artificial Intelligence appointed by the Parliament of South Australia
The Request for Comments on Encouraging Innovative Technologies, Services, Use Cases, and Business Models through Regulatory Sandbox by the Telecom Regulatory Authority of India (TRAI)
The U.S. Office of Science and Technology Policy (OSTP)'s Request for Information to Develop a National AI Strategy
The U.K.'s “pro-innovation approach to AI regulation” consultation by the Department for Science, Innovation and Technology
The U.S. National Telecommunications and Information Administration (NTIA)'s Request for Comment on AI Accountability
The U.K. Competition and Markets Authority (CMA)'s information request on foundation models
They are currently working on:
UN Review on Global AI Governance.
NSW inquiry into AI (Australia).
Following months of x-risk campaigning (open letters, conferences, interviews, and op-eds in traditional media), these proposals have become the basis for public policy debates about AI, all while ignoring potential trade-offs.
The goal now is to influence the UK AI Safety Summit.
5. Targeting the UK AI Safety Summit
“On November 1st and 2nd, the very first AI Safety Summit will be held in the UK.
The perfect opportunity to set the first steps towards sensible international AI safety regulation.”
This is why the Campaign for AI Safety has teamed up with PauseAI to organize an International PauseAI Protest on 21st October 2023, in multiple countries.
The Campaign for AI Safety is also seeking more donations “to help fund ads in London ahead of the UK Safety Summit.”
6. Previous billboard/radio ads and protest signs
Prior to the studies in part 1, the Campaign for AI Safety ran billboard campaigns in London (and in San Francisco) with the slogan “Control artificial intelligence before it terminates us.”
But the start wasn’t promising, and the billboard testing results fell short of expectations.
“Notably, all the metrics are below the norm,” they summarized. “Likely because of the negativity of the messages.”
As it turns out, those first experiments did not produce the best slogans…
In June 2023, they protested at the Melbourne Convention and Exhibition Centre (MCEC), where Sam Altman was giving a talk.
Their protest sign claimed AI would cause the End of the World.
In a radio ad they ran on Australian radio stations, a computerized male voice said:
"Greetings. I'm an evil artificial intelligence system. I'm being developed in America! As various governments around the world, including yours, allow private AI laboratories to conduct giant experiments. Rest assured that … your concerns and desires will be the least of my priorities."
Moving forward, we can expect an improved campaign, especially around the AI Safety Summit. There are much better alternatives to an “evil artificial intelligence system.”
Their studies found that focusing on “Dangerous Superintelligent AI” would resonate with both Republicans and Democrats (win-win!).
Concluding remarks
The broader AI PANIC ecosystem
The AI Safety advocacy organizations are just one component of the professional fearmongering ecosystem.
The major players can be characterized as “Panic-as-a-Business” and “AI Panic Marketing”:
Panic-as-a-Business
“We believe humans will be wiped out by Superintelligent AI.
All resources should be focused on that!”
Influential voices: Eliezer Yudkowsky (MIRI), Jaan Tallinn (Future of Life Institute, Centre for the Study of Existential Risk, Survival and Flourishing Fund), Max Tegmark (Future of Life Institute).
Among the additional organizations are: Future of Humanity Institute, Center for AI Safety, Centre for the Governance of AI, AI Impacts, Center for AI Policy, and AI Policy Institute.
AI Panic Marketing
“We're building a powerful Superintelligent AI.
See how much is invested in taming it!”
Influential voices: Sam Altman (OpenAI) and Dario Amodei (Anthropic).
OpenAI and Anthropic have both benefited from Effective Altruism connections and funding. They appear to gain from their fearmongering (in front of Congress, for example) in two ways: restricting newcomers from entering the market and raising more money (“As we build the risky God we warn you about, we are the only ones you can trust to control it”).
Elon Musk is an interesting case. Since he heavily funds both the Future of Life Institute and the new xAI startup, he fits into both categories. A recent example: In a bipartisan (closed-door) AI Safety Forum held on Capitol Hill, Musk’s message to the Senators and the media was on the dangers of Superintelligence. AI presents a “civilizational risk,” he said. There’s a low (but not zero) chance that “AI will kill us all.”
Non-scientific terms
Clément Delangue, Hugging Face CEO, complained about current terminology:
“Very concerning that most public conversations today in AI are happening with non-scientific terms with loose definitions like ‘frontier AI’, ‘AGI’, ‘AI safety’, ‘proliferation’... that are biasing the whole debate with very little to no science associated to them.”
We can now add 37 more “non-scientific” AI descriptions that the AI Safety Communications people have suggested as alternatives to “God-like AI.”
There was much to learn from their list.
They used terms like “Skynet-level AI”/“HAL 9000 AI”/“Galaxy-eating AI” (!) while denying their talking points sound like science fiction.
They tested “Judeo-Christian spiritual” phrases like “Uncontrollable demons will be summoned” (!) while denying their talking points sound like religion.
(There’s a word for what they’re doing. Perhaps I should test whether it’s hypocrisy or absurdity).
Stochastic parrots
Also revealing: “Stochastic parrots” did not raise “existential risk concerns” as effectively as the other terms (it sits at the bottom of the “alternative phrasing” matrix).
It’s an interesting finding for the researchers who coined this term: Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell.
No wonder Max Tegmark, founder of the x-risk institute “Future of Life,” is trying to distance LLMs from Stochastic Parrots as much as possible.
“No, LLM’s aren’t mere stochastic parrots: Llama-2 contains a detailed model of the world, quite literally!” he tweeted.
The first part of the sentence is there because, per their own testing, this is the least effective term for raising concern about “AI wiping out humanity.”
The second part asserts a questionable claim; many AI researchers have responded that it does not mean what he thinks it means.
Panic Campaign
The founder of the Campaign for AI Safety also uses his market research company to analyze brand names.
One of Conjointly’s reports said, “Twitter is better than X,” primarily because it’s easy to pronounce and remember.
Here’s an idea for a brand name: Instead of the Campaign for AI Safety, let’s call it the Campaign for Mass Panic.
It’s more accurate and also easy to pronounce and remember.
Is this x-risk campaign going to succeed?
According to Len Bower, manipulation occurs when someone uses deception or fear to get what they want from others. It’s a persuasion strategy people use to emotionally influence someone for an unfair advantage. Manipulators test your weak spots and are quick to use that knowledge against you.
Those descriptions should be considered when assessing the magnitude of the x-risk manipulation. All the “tests of narratives” for AI “moratorium support” are being utilized in lobbying efforts. So, we need to ask:
How likely is it that politicians would act upon those x-risk messages?
If they do, it would be a surprisingly successful panic campaign.
Any thoughts on e/acc (the “effective accelerationism” counter-movement)?
"AI will probably most likely lead to the end of the world, but in the meantime, there'll be great companies." - this is not a claim by the protesters, but a literal quote from OpenAI CEO Sam Altman. Source: https://www.businessinsider.com/sam-altman-y-combinator-talks-mega-bubble-nuclear-power-and-more-2015-6?international=true&r=US&IR=T