AI and Human Resilience
Imagining the Digital Future
Earlier this year, Prof. Lee Rainie from the Imagining the Digital Future Center reached out to me about their 2026 annual report. Each year, the center “draws on insights gathered through canvassing of thoughtful and far-sighted experts in a wide range of fields.” As the center puts it, “Participants represent a wide range of fields, including innovators, professionals, and policy people based in technology businesses, nonprofits, foundations, think tanks, and government, as well as academics and researchers.”
This year’s report focuses on AI and human resilience. The center asked hundreds of experts:
How will people adapt as artificial intelligence (AI) systems take on a significantly larger role than they do today in human activity and decisions?
How might the essence and elements of human resilience change?
The report will be published in April, so I haven’t seen the other responses. However, based on the tone of previous annual reports,¹ I suspect many contributions will lean into the critical and gloomy narrative. I decided to offer a more optimistic view.
April 2, 2026 update: The report is out. Here are links to the special website; the downloadable PDF, which includes more than 160 impassioned essays across more than 300 pages; a 15-page executive summary; and a four-page news release.
We Shape AI
When we talk about human resilience in the age of AI, we need to look at past technological innovations and how humans adapted to them. As the Pessimist Archive keeps reminding us, we’ve lived through transformative technologies before: the printing press, electricity, cars, and the internet. Each one brought real disruption and followed a predictable emotional cycle: awe, fear, backlash, messy deployment, early adoption, and eventually, a long period of normalization. When harms emerged, society responded with regulation, new standards and social norms, consumer protections, and new literacies, all of which worked together to reduce the worst effects over time. The outcomes were never perfect; progress came through iterative fixes and adjustments.
In the case of AI, I suggest viewing it as an augmentation (rather than a replacement). From that perspective, AI is a powerful tool for enhancing what humans can learn, decide, create, and discover. As AI systems spread, many more people will gain access to knowledge and expertise that were once scarce. Used well, AI will increase human agency (rather than erode it), leaving people better able to solve problems and innovate.
But “used well” is conditional: it depends on developing new skills, such as knowing how to verify outputs and when to demand human review and judgment (especially for high-stakes issues). It also depends on our media and public discourse, which need to cover real tradeoffs, challenge decisions, and demand accountability.
The central point is that we shape AI. AI is a socio-technical product, built by people, trained on selected data, tuned toward chosen metrics, deployed in chosen contexts and settings, wrapped in chosen business models, and governed by various institutions. Many social forces are at play here: researchers, policymakers, industry leaders, journalists, and everyday users. AI will reflect what we build, what we tolerate, what we regulate, and how we teach people to use it. Resilience, then, means steering the conversation back to human agency as we actively shape what AI becomes.
AI Doomers’ Stance
In stark contrast, prominent AI Doomers such as Eliezer Yudkowsky and Nate Soares often frame AI as something like “magic”: a force that bypasses all social, physical, and logistical constraints. Critiques of their book, “If Anyone Builds It, Everyone Dies,” address their technological determinism and their overconfident, binary worldview of AI “Killing Us All.”
One review, memorably titled “If Anyone Reads It, Everyone Laughs,” argues that their doomsday case depends on a “simplistic view of how the world works”: “Yudkowsky and Soares treat technology like a fish without water, a tree without soil, a phone without a connection.” The review continues: “All technologies, ever since technology was a thing, have emerged from and been sustained by their political, economic, and cultural context. The real work of technology is found in implementation, adoption, maintenance, repair, adaptation, debugging, negotiation, administration, and other boring but essential tasks.”
Moreover, the IABIED book is described as betraying “a woeful ignorance about large swaths of actual science”: “The mere existence of enduring debates and large bodies of scholarship on the nature of mind, information, biology, technology, society, history, etc., should be enough to discount confidence in conclusions based on speculative extrapolation from dodgy analogies.”
Yet, based on those extrapolations, AI Doomers, and specifically Yudkowsky’s MIRI (Machine Intelligence Research Institute), have been pushing for authoritarian “solutions” that undermine researchers’ freedom to study AI. These include “Tracking personnel: Surveillance of key AI researchers and their locations, computers, and research activities,” alongside other surveillance mechanisms.
You can see the Doomers’ lack of trust in humans in general, and in human agency and resilience in particular. Taken together, their advocacy makes it clear that they treat society as too incompetent to adapt; therefore, in their view, central control must replace agency.
User Agency
A recent Mike Masnick conversation, “Does AI Remove or Provide User Agency?”, adds an important component here. Masnick describes his own use of new privacy models and multiple open-source technologies, emphasizing that users are not locked into a single option. A future with a large plurality of tools and services to choose from, rather than a few centralized proprietary models and gatekeepers, is one concrete way to resist dystopian outcomes.
I think Masnick’s pluralist stance complements the resilience argument. We should build a future where many flowers can bloom, and users can choose what fits their specific needs without centralized control. Decentralization, open standards, and real competition will expand user agency, giving individuals and organizations more room to adapt.
Closing Remark
Unsurprisingly, AI Doomers often seek the opposite: centralization and control. It’s part of the reason we should not let them dictate what our future with AI will look like.
Endnote
Imagining the Digital Future Center’s previous annual reports:
2024: Experts Imagine the Impact of Artificial Intelligence by 2040
2025: Being Human in 2025: How Are We Changing in the Age of AI?