This blog strikes me as very strange; it doesn’t address what appears to be the central underlying claim you are making—namely, that this ecosystem is wrong to assume that governing AI requires strict regulation and the resolution of major technical issues.
You are, of course, free to hold that position (although, on balance, I disagree). However, if that is your stance, it would be helpful for readers to understand why. Clearly stating your reasoning and convincing others of your position would strengthen the argument. As it stands, this blog seems to assume that the need for regulation and technical safety solutions in AI is obviously and plainly wrong, without offering any defence. To be honest, this makes me question who is funding this kind of content.
Naturally, those who believe in the necessity of regulation and technical safety will focus on addressing these issues and will employ a variety of tactics and policies to achieve their goals. We see this dynamic across many movements. For example, I can easily name 20 organisations in my country alone that are working on climate change, ranging from the extreme (MIRI seems like the equivalent of Extinction Rebellion in this context) to more moderate groups.
With even a basic understanding of media and the philosophy of technology, it's clear that safety often evolves alongside technological progress. Yes, there are unpredictable and sometimes painful consequences, but that's true of any major innovation. What's truly absurd is the media hysteria claiming AI is taking over our lives and urgently needs to be regulated. Regulated by whom? A handful of millionaires. The irony is staggering. If you haven't bought into far-left fear-mongering, this article is a real eye-opener.
I recommend putting the companies into a table instead of their own paragraphs.
Well, tell that to Substack :-) The platform doesn't let me organize this in a table format. The original file contains tables. I would have gladly shared the content like that.
A key point for me was making it interactive: I wanted readers to be able to click on the links to all the organizations (for further details).
Are the arguments discussed in Superintelligence or Human Compatible so obviously ridiculous to you that a corporate conspiracy is the simplest explanation why a lot of us care about AI safety? I'd recommend talking to one of us with an open mind. I'm sure a lot of people would be happy to discuss any counter-arguments you have to the ideas.
One camp is convinced AI will kill us all.
The other camp is rolling their eyes as they make it ever more powerful.
Neither fully understands how it works.
Either way, seems we're in trouble.
I’ve always found the author of this post somewhat confusing, in that they provide no arguments as to why AI safety efforts are bad, just the suggestion that the fact that an ecosystem exists, and that much of it is funded by Open Philanthropy, points to a big-tech conspiracy. It's never made clear why Dustin Moskovitz has such a strong vested interest in making AI go well, or what his sinister purpose is in putting his wealth behind Open Philanthropy.
Somehow the arguments for why building AI that is smarter than humans could go badly are just so weak as to not be worth engaging with.
The UK government also now provides £66 million per year to AI safety efforts, with an initial £100 million investment, which is conveniently not mentioned by the author. There are also the safety teams at all the major AI labs, which collectively likely receive more funding than the ecosystem described above. If the people at the labs are similarly concerned about safety, then why should others not be worried?
Hello Nirit. Love your work, but this series is becoming a bit unwieldy! It makes it harder to read and share, and I think the sprawling presentation is starting to detract from its credibility. Perhaps you could fold the endnotes into the main body for a more structured essay? The references/links to the staggering number of different orgs, actors and initiatives would also benefit massively from being given their own separate page.
Yeah, a valid point. In my Word document, it's in tables—a cleaner and shorter layout. But the tables cannot be copied and pasted into Substack. You are right that it makes the guide longer to navigate.
"Datawrapper" was the only recommendation I found online. I can give it a try.