The Weak Foundations of AI Doomsday
A new study, “AI going rogue? An integrative narrative review of the tacit assumptions underlying existential AI risks,”1 analyzes 81 papers on AI x-risk to examine the conceptual building blocks behind the arguments. It finds that much of the literature rests on highly speculative, anthropomorphic, and unsubstantiated scenarios, while neglecting socio-technical realities.
The research shows that the story AI doomers tell themselves is built on a fragile foundation: it turns vibes into forecasts.
The formula is: humanize the machine, compress the timeline, and ignore the real-world context.
The Speculative Machinery of AI Existential Risk
Step 1: Build on anthropomorphism and “sentient AI”
The first research question asks, “How is AI defined? How is it linked to existential risk?” The review finds that many papers in the x-risk community draw a human–machine analogy, repeatedly sliding into anthropomorphic framing. Those papers also present “benchmarking as an indicator of AI performance with the problematic tendency of inductive generalization.” In other words, they extrapolate from specific technical benchmark results to sweeping claims about future autonomy and danger.
The study finds that “a considerable proportion of the literature invokes speculative criteria such as consciousness, sentience, or autonomy.” The papers assume with high confidence that AI will attain these criteria, developing into AGI or superintelligence. The review also notes “advocacy for functionalism as a theory of mind, treating AGI as an emergent process in the machine.” Yet the concepts remain vague: “AGI and its development paths are ill-defined.”
In the authors’ own words:
“The navigation of these categories quickly comes along with a new background condition: a jump from assessing Artificial Intelligence to a much more speculative object of investigation, namely the attainment of Artificial General Intelligence—often used interchangeably with terms such as singularity and superintelligence—and the accompanying risks of such development. Both this jump and the related concepts, however, remain vaguely outlined and ill-defined within the corpus at hand.”
The study notes that “Instead of providing evidence that (and showing how) AI can actually attain mental states like consciousness, authors tend to postulate these circumstances simply as a given.” This is problematic because “other schools of thought in philosophy, psychology, neuroscience, and biology have, for decades, challenged this assumption as implausible and speculative.”
Overall, this framing “imbues the scientific discourse with emotional, speculative expectations and, in so doing, undermines its analytical value.”
Step 2: Compress the timeline
The second research question examines how the literature forecasts AI’s future development. The results here range “from the calculation of time and probability horizons to a sudden intelligence explosion”:
“AI future events and occurrences, no matter their complexity, are approached as phenomena that can be formalized, calculated, and modeled.” The review notes that such probability calculations are widely used in the AI x-risk community.
The numerical models give the x-risk discourse an aura of perceived objectivity, precision, and truth, even when the underlying assumptions are highly uncertain.
“The use of metrics evokes a sense of certainty and triggers a performative notion, calling for action,” says the study. It creates a feeling of “a small window (kairos) to react.” But this illusion of precision can be misleading when the foundations are so speculative.
Step 3: Strip away real-world context
The third research question examines whether x-risk arguments account for the socio-political and material conditions of AI development. The answer is: not enough.
“Almost no contribution in the corpus does justice to mention any background conditions. Instead, AI development is approached as a stand-alone ‘autonomous’ agent, whose steady increase in capabilities is just depicted as a given, pointing to a technological determinist worldview.”
“The disregard for counterarguments, lessons learned from history, and insights from other scientific disciplines […] results in an overshadowing of the social and material, liberating AI development from any real-world constraints.”
The x-risk literature dedicates “minimal attention to the necessary infrastructural, political, and material preconditions of a supposedly accelerating AI towards loss of control.” Furthermore, it “lacks interdisciplinary approaches,” and “critical voices, which do exist, seem to publish elsewhere.”
Conclusion
The existential-risk argument has already been criticized for failing tests of scientific validity, normative validity, and truthfulness (see the 2024 paper “Talking existential risk into being”). This new review helps explain why:
Much of the x-risk literature exhibits questionable calculations and predictions, anthropomorphic assumptions, and an unwavering, quasi-messianic conviction in the “AGI” trajectory, while paying too little attention to the real-world limits that would shape any actual AI future.