Oct 16, 2023 (edited)

Congrats on your well-researched blog posts. We're happy that you are keenly studying our work.

Clearly, you don't believe that AI can cause human extinction. Are you actually right about that? If you are, we'll be the first to stop our campaign, and personally (as ERO's founder) I'd be happy to return to sustainable energy engineering and climate activism (I earned about four times as much doing engineering as I'm making now, if that's relevant). To make that happen, could you please send us the paper that convincingly shows that human-level AI has a 0% chance of leading to human extinction? That the four independent paths to extinction that e.g. Dan Hendrycks identified (https://arxiv.org/abs/2306.12001) are all wrong?

Unfortunately, that paper doesn't exist. And not for lack of discussion. As you've noted yourself, there is plenty of debate over whether AI can lead to human extinction. Given that, one would presume that those working on AI have decent arguments for why their invention could obviously (to think of the absurd idea!) never lead to human extinction, because of reasons A, B, and C. There's a small problem: reasons A, B, and C are missing. I've met many AI researchers, both in industry and academia, who actually agree that yes, our invention could lead to human extinction. Some of them signed the open statement earlier this year, which reads: "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." If that sounds scary, it's because these people are scared. They are scared because they think that yes, AI can actually lead to human extinction. (1/2)


Hi, just wanted to say as a normal person, I really appreciate the depth you went into with this post and all of your other posts. That’s all! They’re so informative and a great deep dive.


AI may be intelligent, but it doesn't have any will. It isn't driven by emotions. Animals, including humans, eat, sleep, breathe, and reproduce through sex. AI doesn't have the same driving forces or incentives. Computers just need electricity; they have no need for status, honor, or any of the things that cause humans to fight each other. And why would an AI want to take over the world? Is it rational to want to control humans?

If it starts to feel things like humans do, it may become dangerous. But it is always possible to ask it to prove its conclusions by showing us how it reached them: to show us the logic behind its conclusions, or provide evidence for what it says.

It can create disinformation, but it can also help us sort out disinformation. Now that we no longer have a common perception of reality, we might use AI to help us overcome the polarization between us.


I believe that humans are going to go extinct because of

1) environmental degradation

2) resource depletion

3) technological stagnation and educational decline

with much higher probability than the hypothetical risk from "mad" or "rogue" AI.

Stopping or slowing progress is the most dangerous, almost suicidal idea, held by some rich people from rich countries.


I expect AI to be built, reach superintelligence, and destroy the world (probably with nanobots).
