Study: The Public Isn’t Concerned or Mobilized by AI Doomsday Warnings
The failure of existential-risk messaging helps explain the AI doomers’ pivot toward job loss, warfare, and environmental harms
A new study, Which AI harms and risks will mobilize the public to act?, provides insight into an important shift in AI doomers’ messaging. After years of pushing “human extinction” narratives that failed to resonate with the public or policymakers, many existential risk advocates have pivoted toward environmental harms, job loss, and AI-enabled warfare. This study’s findings help explain why.
About the Study
The goal of the “Which AI harms and risks will mobilize the public to act?” study was to “understand what concerns people most concretely, and what is most likely to persuade them to act on those concerns.”
The research was conducted by the Social Change Lab, a UK-based organization that studies protest and people-powered movements. As it explains, “Through research reports, workshops, and training, we provide actionable insights to help movements and funders be more effective.”
Social Change Lab accepts donations through the Effective Altruism organization “Giving What We Can.” This particular study was funded by “Changing Ideas.”
The introduction asks: “What do we know so far about people’s attitudes to AI risks?”
Its answer is revealing:
“People respond more strongly to immediate, tangible harms - particularly job loss and risks to children - than to more abstract scenarios […] Job disruption and threats to children resonate most strongly across demographic groups, and people tend not to engage with catastrophic risk narratives.”
Experimental conditions
The researchers conducted a randomized controlled trial with 3,467 UK participants, using a sample representative of UK adults aged 18 and older in terms of age, gender, and political affiliation.
The study tested 11 AI risk areas: (1) job displacement, (2) misinformation, (3) disinformation, (4) human extinction, (5) surveillance, (6) AI-enabled warfare, (7) social isolation, (8) bias and discrimination, (9) environmental harms, (10) sophisticated scams and fraud, and (11) loss of human capabilities and cognitive decline.
Key findings
“Of all the AI harms/risks we asked about, people are least concerned about X-risk.”
The extinction risk posed to humanity by AI (X-risk) had the lowest level of concern of any harm/risk.
People are also less willing to act on this risk than on nearly every other risk (only social isolation came lower).
Reading about X-risk does not make much difference to people’s concern or willingness to act on it.
By contrast, environmental harm most galvanized people to act; people were substantially more willing to act on environmental harms of AI after reading about them, suggesting this issue has high mobilization potential.
The risk of AI-enabled warfare caused the greatest level of concern about AI in general.
Job displacement—by far the most salient concern in surveys—ranked surprisingly low on mobilization potential, suggesting an issue’s salience is not a reliable proxy for its mobilization potential.
Mention of existential risk/human extinction = 1%
“An important motivation for carrying out this study was the fact that the campaign groups currently most actively trying to mobilize the public on AI (Control AI, Pause AI, Stop AI) are focused on the existential threat to humanity posed by superintelligent AI.”
The study posed an open-ended question: “Could you tell us in 1 or 2 sentences how you feel about the development of AI?”
The study found that “existential / extinction risk was mentioned by very few people, and even when it was, several wrote of their skepticism of this risk being a real one.”
One example:
“I think that in general the risks are overblown. There are undoubtedly some concerns over potential misuse harming individuals, but claiming that it threatens the existence of humanity is ridiculous.”
To the rescue: AI-enabled warfare and environmental harms
This is where the study becomes most useful to the AI doomers.
“While this is not hugely encouraging for X-risk campaigners, there was an encouraging finding: reading about AI-enabled warfare and environmental harms due to AI significantly increased people’s concern about X-risk relative to the control condition. Reading about AI-enabled warfare also increased X-risk concern more than reading about X-risk itself. In other words, people are more concerned about X-risk if they read about AI-enabled warfare than if they actually read about X-risk.”
“An implication of these findings, put simply, is that X-risk campaigners would likely benefit from talking more about AI warfare.”
It goes on to argue that “tapping into people’s concern about AI warfare could be effective, especially now”: “For movement organizers, this might present a somber window of opportunity: messaging that connects AI development to ongoing military conflicts may tap into existing public attention in ways that more abstract risk framings cannot.”
The study makes a similar case for “environmental harms as a mobilization opportunity”:
“Environmental harms from AI emerged as one of the most promising areas for mobilization. The risk of environmental harms topped the rankings for willingness to act on a risk after reading about it. Perhaps more importantly, environmental harms showed one of the strongest malleability effects: people’s concern and willingness to act shifted significantly after reading about the issue, suggesting this is a risk area where public engagement campaigns could have real traction.”
“Implications for those working on human extinction risk”
The study states the problem plainly:
“The findings present a challenging picture for organizations currently focused on communicating about extinction risks to the public. Concern about X-risk and willingness to act on X-risk were both very low, in absolute terms and relative to other risks we tested.”
It also challenges a common assumption: that the most salient issue is automatically the best one for mobilization.
“Another important finding challenges the assumption that issues with the highest salience are also those which are the most effective mobilizers. We know from multiple surveys as well as from our open-text question here, that the AI risk area most salient to people is job loss. Yet this is not the issue that, according to our data, stands out in terms of the levels of concern or willingness to act connected to it. Moreover, reading about job loss risks did not increase concern or willingness to act on AI particularly strongly.
For activists, this suggests that talking about job loss might be helpful in ‘meeting people where they are at’ but that it might be more effective to move the conversation onto other topics, such as autonomous weapons and environmental harms, in order to see tangible actions.”
My take
After years of AI panic and fear-mongering campaigns (detailed on this Substack), the above findings are reassuring. The public is not buying the AI doomers’ “human extinction” messages. People actually reject the x-risk hype.
About a year ago, in a shift that can be traced to the start of the Trump administration and its pro-innovation agenda, x-risk advocates learned to "meet people where they are" by building broader coalitions around near-term harms. They changed their script to address job loss, data center water and electricity usage, and AI warfare.
But their goal remained the same: to steer people back toward their central obsession, “human extinction from AI.” On that front, this study suggests they have failed.
That does not mean the doomers will stop trying to "bait-and-switch" their audiences. If anything, this new study may encourage them to double down on that tactic, since it suggests these indirect pathways are the only ones that could work for their movement.
My ask to readers is simple: Be aware of this strategy.