9 Comments
Dec 6, 2023

The first question to ask is: Are existential risks from AI real? Whether AI Safety merits funding rests on this point.


I liked this; it's fairly well informed.

Incidentally though, asking "How many lives could have been saved if the hundreds of millions had been donated to other causes, like global health?" is an EA question!

Welcome to the community!👋 You should show up at the next EA Global conference☺️

author

I know it's an EA question... that was the point.

This is why the following question refers to "reckoning from *within*."

I would gladly attend - if they let me in (given all I've written about them) ;-)


Ah I see. I had read it as 4 separate questions.

I do think, though (and this is why I drew attention to this), that cause prioritization is an essential part of EA. And it's more than just human lives today vs. abstract SF risks, which definitely makes the latter seem more out of left field.

It's also lives vs. welfare vs. population ethics vs. QALYs vs. longer-term economic development and downstream effects from, e.g., lead poisoning; animals of more or less certain sentience; large populations vs. small; movement-building in animal advocacy; etc. The problems of *doing good, period* are abstract and difficult to solve, assuming you are sufficiently serious. TL;DR: AI Safety (wrong or right) is hardly out of place.

But I do take your point that funders drive the community. Individuals do somewhat diverge though - look at the charity election pre-votes on the forum!

I'm not sure what the policy is for journalists - but it's certainly worth trying!


You've only scratched the surface on MIRI, Yudkowsky and Muehlhauser. Check these out:

https://archive.ph/Kvfus

https://fredwynne.medium.com/an-open-letter-to-vitalik-buterin-ce4681a7dbe

author

It’s well-known…

- Effective Altruism Promises to Do Good Better. These Women Say It Has a Toxic Culture Of Sexual Harassment and Abuse

Charlotte Alter | TIME | February 3, 2023

“This story is based on interviews with more than 30 current and former effective altruists and people who live among them.”

https://time.com/6252617/effective-altruism-sexual-harassment/

- Why effective altruism struggles on sexual misconduct

Kelsey Piper | Vox | February 15, 2023

https://www.vox.com/future-perfect/2023/2/15/23601143/effective-altruism-sexual-harassment-misconduct

- The Real-Life Consequences of Silicon Valley’s AI Obsession

Ellen Huet | Bloomberg | March 7, 2023

https://archive.is/MFOnU

“Within the subculture of rationalists, EAs and AI safety researchers, sexual harassment and abuse are distressingly common, according to interviews with eight women at all levels of the community.”

“…he also argued that it was normal for a 12-year-old girl to have sexual relationships with adult men.”

“On the extreme end, five women, some of whom spoke on condition of anonymity because they fear retribution, say men in the community committed sexual assault or misconduct against them.”

“Women who reported sexual abuse, either to the police or community mediators, say they were branded as trouble and ostracized while the men were protected.”


While I appreciate the ultimate goal of effective altruism (which is to maximize the amount of good one does over the course of one's career, or something like that), I find it impossibly hard to prove that it works (i.e., given a set of largely aligned goals as nodes at the end of a graph, how do we ensure that people's actions, as directed edges, do not interact incoherently at certain intersections?). I prefer a "greedy approach" where we focus on doing good in our local communities, caring for friends, colleagues, families, and neighbors, which creates a kind of incremental growth in expanding spheres. EA also feels a bit cultish and a bit elitist (I could not bring myself to actually apply for a scholarship, although, God knows, I need some support in my life right now as I transition into AI safety, already having a PhD).


Given that the CEOs of all the big AI labs said, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," I am glad a lot of resources are being spent on this.

I would probably like others to be spending money, but you only get what you get.


> Jaan Tallinn offered one reflection...

Maybe Jaan Tallinn should start reflecting on his own role in EA scandals too, since he kickstarted Alameda and FTX with his loan of 110 million dollars' worth of ether: https://www.semafor.com/article/12/08/2022/alameda-research-lost-money-in-its-early-days
