5 Comments
Carolyn Meinel

Thank you for your well researched, well documented analysis.

As someone who took the MIRI-adjacent BlueDot AI Safety Fundamentals course in 2024 (https://bluedot.org/) and participated in the Existential Risk Persuasion Tournament in 2022 (https://forecastingresearch.org/xpt), I have some familiarity with those folks. Something that stands out in my mind is their dearth of publications in refereed journals. In neither of those X-risk activities in which I participated did the extremists present refereed papers to support their contentions. Clearly, the AI extremists are averse to peer review. That said, for a view of what the peer-reviewed moderates are saying, see https://forecastingresearch.org/publications.

Ingo Reimann

For the average reader, the evident lack of self-awareness, and the failure to anticipate how the tone and style of their message will be received by the target audience, isn't exactly reassuring about their grasp on reality.

Gerd Leonhard

Good stuff, Nirit, very helpful.

Nirit Weiss-Blatt

Thank you, Gerd.

Julie Fredrickson

Well, seeing all the reviews lined up like this is quite something. Jolly well done.
