Discussion about this post

Thomasss:

This blog strikes me as very strange; it doesn't address what appears to be your central underlying claim: that this ecosystem is wrong to assume that governing AI requires strict regulation and the resolution of major technical issues.

You are, of course, free to hold that position (although, on balance, I disagree). However, if that is your stance, it would be helpful for readers to understand why. Clearly stating your reasoning and convincing others of your position would strengthen the argument. As it stands, this blog seems to assume that the need for regulation and technical safety solutions in AI is obviously and plainly wrong, without offering any defence. To be honest, this makes me question who is funding this kind of content.

Naturally, those who believe in the necessity of regulation and technical safety will focus on addressing these issues and will employ a variety of tactics and policies to achieve their goals. We see this dynamic across many movements. For example, I can easily name 20 organisations in my country alone that are working on climate change, ranging from the extreme (MIRI seems like the equivalent of Extinction Rebellion in this context) to more moderate groups.

Hominid Dan:

Are the arguments discussed in Superintelligence or Human Compatible so obviously ridiculous to you that a corporate conspiracy is the simplest explanation for why so many of us care about AI safety? I'd recommend talking to one of us with an open mind. I'm sure plenty of people would be happy to discuss any counter-arguments you have to these ideas.
