Thank you for your well researched, well documented analysis.
As someone who took the MIRI-adjacent BlueDot AI Safety Fundamentals course in 2024 https://bluedot.org/, and participated in the Existential Risk Persuasion Tournament in 2022 https://forecastingresearch.org/xpt, I have some familiarity with those folks. Something that stands out in my mind is their dearth of publications in refereed journals. In neither of those X-risk activities did the extremists present refereed papers to support their contentions. Clearly, the AI extremists are averse to peer review. That said, for a view of what the peer-reviewed moderates are saying: https://forecastingresearch.org/publications.
For the average reader, the evident lack of self-awareness, and the failure to anticipate how the tone and style of their message will be received by the target audience, isn't exactly reassuring about their grasp on reality.
good stuff, Nirit, very helpful
Thank you, Gerd.
Well, seeing all the reviews lined up like this is quite something. Jolly well done.