In July 2023, Steve Rose from The Guardian shared this simple truth: “So far, ‘AI worst case scenarios’ has had 5 x as many readers as ‘AI best case scenarios.’” Similarly, Ian Hogarth, author of the column “We must slow down the race to God-like AI,” shared that it was “
Excellent piece and Substack. Thank you so much for writing this.
On the last letter of "AI PANIC," about sci-fi scenarios being treated as credible predictions, the best (fictional) treatment of this I know of is in Stanislaw Lem's "His Master's Voice," where attempts to understand a message from another galaxy are continually hamstrung by reliance upon concepts from science fiction, to the eventual failure of the entire project. It's a remarkable novel, not just for the quality of the writing (in Michael Kandel's translation, as I don't read Polish), but because I cannot recall any Western SF in which a scientific project just utterly and completely fails - not even in the sense of producing a disaster or monsters, but just not succeeding at all because its investigators are stuck in inadequate concepts.
Discovered you through AI Inside podcast. Thanks for the cool head.