This is a special post for quick takes by AnonymousTurtle. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Besides Ilya Sutskever, is there any person not related to the EA community who quit or was fired from OpenAI for safety concerns?

Gretchen Krueger quit recently: https://x.com/GretchenMarina/status/1793403475260551517

She has a 2-year-old EA Forum account https://forum.effectivealtruism.org/users/gretchen-krueger-1 , has written reports in 2020 and in 2021 where most (all?) co-authors are in the EA community, is mentioned in this post, and is Facebook friends with at least one CEA employee.

tbf it's pretty hard to do any work in AIS without coauthoring things with EAs at least sometimes, for better or worse (probably worse).

It's possible to work at OpenAI and care about safety without being friends with CEA staff though.

It doesn't seem that anyone at OpenAI outside the EA community is too worried, which to me is a positive update.

Potentially Pavel Izmailov, though I'm not sure if he is related to the EA community, and I'm not sure of the exact details of why he was fired.

https://www.maginative.com/article/openai-fires-two-researchers-for-alleged-leaking/

Some other people, like Andrej Karpathy and Ryan Lowe, have left in the same time period, but they haven't cited safety-based justifications, and as far as one can tell it's unlikely that safety was the reason there.

@Zvi has a blog post about all the safety folks leaving OpenAI. It's not a great picture.

They all seem related to the EA community, and for many it's not clear if they left or were fired.

GiveWell and Open Philanthropy just made a $1.5M grant to Malengo!

Congratulations to @Johannes Haushofer and the whole team; this seems like such a promising intervention from a wide variety of views.

Potentially self-funding organisations strike me as neglected within EA.

Cool! For context, Malengo is helping students from Uganda attend university in Germany, and it also has a program to support students from French-speaking African countries [link in French]. I'm excited about this program not only for its economic benefits, but also for its potential to enable more people to live in liberal democratic countries, and in the long term, increase support for liberal democracy around the globe.

As a quick reply, I'm wondering what evidence you have that education in liberal democratic countries increases support for liberal democracy across the globe? There are arguments for and against this thesis, but I don't think there's good evidence that it helps.

Many dictators in Africa, for example, were educated at top universities, which gave them better connections and influence that might have helped them oppress their people. Also, during the 20th century a growing, intelligent, and motivated middle class seems correlated with a higher chance of democracy. It's unclear whether highly skilled migration helps grow this middle class through increased remittances and a growing economy, or removes the most capable people who could be starting businesses and making their home country a better place. It's worth noting that programs like this don't just take high school graduates; they usually take the cream of the crop, who were likely to do very well in their home country as well.

I'm not saying you're wrong, just that it's complicated and far from a slam dunk that this will increase support for liberal democracies.

In the comment, Scott claims that only 1% of nets are "misused". I wasn't able to find any sources backing this up in the linked articles; does anyone know where this figure comes from?

The articles state that somewhere between 65% and 90% of nets are being used, depending on the study, but they don't state what happened to the unused nets.

r/philosophy response: https://old.reddit.com/r/philosophy/comments/1bw3ok2/the_deaths_of_effective_altruism_wired_march_2024/

To what extent was the ongoing death of effective altruism, as this article puts it, caused by the various problems it inherited from utilitarianism? The inability to effectively quantify human wellbeing, for instance, or the ways in which Singer's drowning child analogy (a foundation of EA) seems to discount the possibility that some people (say, children that we have brought into the world) might have special moral claims on us that other people do not.

Don't think it's really because of its philosophical consequences. EA as an organization was super corrupt and suspicious. That's why it's falling apart. Like it quickly went from "buy the best mosquito net" to "make sure AI doesn't wipe out humanity". Oh and also let's buy a castle as EA headquarters. Its motivations quickly shifted from charity work to proselytization.

Most of its issues seem to fundamentally lie in the fact that it's an organization run by wealthy, privileged people that use "rationality" to justify their actions.
