I run Sentinel, a team that seeks to anticipate and respond to large-scale risks. You can read our weekly minutes here. I like to spend my time acquiring deeper models of the world, and generally becoming more formidable. I'm also a fairly good forecaster: I started out predicting on Good Judgment Open and CSET-Foretell, but now do most of my forecasting through Samotsvety, of which Scott Alexander writes:
> Enter Samotsvety Forecasts. This is a team of some of the best superforecasters in the world. They won the CSET-Foretell forecasting competition by an absolutely obscene margin, “around twice as good as the next-best team in terms of the relative Brier score”. If the point of forecasting tournaments is to figure out who you can trust, the science has spoken, and the answer is “these guys”.
I used to post prolifically on the EA Forum, but nowadays, I post my research and thoughts at nunosempere.com / nunosempere.com/blog rather than on this forum, because:
But a good fraction of my past research is still available here on the EA Forum. I'm particularly fond of my series on Estimating Value.
My career has been as follows:
You can share feedback anonymously with me here.
Note: You can sign up for all my posts here: <https://nunosempere.com/.newsletter/>, or subscribe to my posts' RSS here: <https://nunosempere.com/blog/index.rss>
Here are some caveats/counterpoints:
The EA forum has tags. The one for criticisms of effective altruism is here: https://forum.effectivealtruism.org/topics/criticism-of-effective-altruism
Beyond that, here are some criticisms I've heard or made. Hope it helps:
Preliminaries:
Criticism outlines:
Finally, for global health, something which keeps me up at night is the possibility that sub-Saharan Africa is trapped in a Malthusian equilibrium, where further aid only increases the population, which in turn increases suffering.
The previous version of this post had a comment from Julia Wise outlining some of her past mistakes, as well as a reply from Alexey Guzey (now deleted, but you can see some of the same contents below the table of contents here). You can also see comments from Julia here and here reflecting on her handling of complaints against Owen Cotton-Barratt. I think these are all informative for predicting that the people pointed to in this post can sometimes fail as well.
I thought it would be interesting to add uncertainty. If you have
```
20K 40K     # mean annual salary, 2025 pledgers
* 0.1       # 10% given
* beta 1 4  # counterfactual adjustment. Differs from post
* beta 5 5  # effectiveness adjustment
* 5 20      # discounted living lifespan
* 1.1 2     # reporting adjustment
* 800 2K    # expected number of pledgers
* 1.2 1.5   # adjustment for largest donors
* beta 2 8  # more adjustments (the product of rows 27:37 is 0.18)
/ 209K      # cost of GWWC
```
The result is a giving multiplier of 0.2 to 30.
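If you want to reproduce this, here is a minimal Monte Carlo sketch in Python. It assumes a Squiggle-style reading of the lines above: a pair like `20K 40K` is a lognormal with a 90% confidence interval of [20K, 40K], and `beta a b` is a Beta(a, b) distribution. The variable names and the `ci90_lognormal` helper are mine, not part of the original calculation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

def ci90_lognormal(low, high, size):
    # Lognormal whose 5th/95th percentiles land on [low, high];
    # this is the reading I'm assuming for the "low high" pairs above.
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * 1.6449)  # 1.6449 ~ z-score of the 95th percentile
    return rng.lognormal(mu, sigma, size)

salary         = ci90_lognormal(20_000, 40_000, N)  # mean annual salary, 2025 pledgers
fraction_given = 0.1                                # 10% given
counterfactual = rng.beta(1, 4, N)                  # counterfactual adjustment
effectiveness  = rng.beta(5, 5, N)                  # effectiveness adjustment
lifespan       = ci90_lognormal(5, 20, N)           # discounted living lifespan
reporting      = ci90_lognormal(1.1, 2, N)          # reporting adjustment
pledgers       = ci90_lognormal(800, 2_000, N)      # expected number of pledgers
largest_donors = ci90_lognormal(1.2, 1.5, N)        # adjustment for largest donors
other          = rng.beta(2, 8, N)                  # more adjustments
cost           = 209_000                            # cost of GWWC

multiplier = (salary * fraction_given * counterfactual * effectiveness * lifespan
              * reporting * pledgers * largest_donors * other) / cost

print(np.round(np.percentile(multiplier, [5, 50, 95]), 1))
```

The exact percentiles you get depend on how the `low high` pairs are interpreted, so treat this as one reading of the calculation rather than an exact reproduction of the 0.2 to 30 figure.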
To me the key parameter is the counterfactuality of these donations. Your current number is 50%, but I'm not sure whether you are accounting for people being less able to do ambitious things because they have fewer savings.
To some extent, you may also want to account generally for adjustments you haven't thought of.
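As a rough illustration of how much the counterfactuality parameter matters, here is a short continuation of the hypothetical sketch above (it reuses `multiplier`, `counterfactual`, and `np` from it):

```python
# 'multiplier' above used a beta(1, 4) counterfactual adjustment (mean 0.2).
base = multiplier / counterfactual          # strip out the counterfactual factor
print(np.mean(multiplier), np.mean(base * 0.5))
# Because the estimate is a plain product of independent factors, moving the
# counterfactual adjustment from a mean of 0.2 to a flat 0.5 scales the
# expected multiplier by 0.5 / 0.2 = 2.5x.
```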
Seems like a cry for help. In particular, instead of "isolating [yourself] from all sources of misaligned social motivation", you might be "isolating yourself from all ways of realizing that you are falsifying your own preferences".
It also seems dumb because it's not a particularly corrigible action.
Do you have people you can reach out to, though? Reading through your forum posts, some of the projects you have are cool. Are there any collaborators you could reach out to? Or are you already pretty isolated?
For a while, I've been thinking about the following problem: as you acquire better models of the world, or a better ability to acquire them, you start noticing things that are inconvenient for others. Some of those inconvenient truths can break coordination games people are playing and leave them with worse alternatives.
Some examples:
Poetically, if you stare into the abyss, the abyss then later stares at others through your eyes, and people don't like that.
I don't really have many conclusions here. So far when I notice a situation like the above I tend to just leave, but this doesn't seem like a great solution, or like a solution at all sometimes. I'm wondering whether you've thought about this, about whether and how some parts of what EA does are premised on things that are false.
Perhaps relatedly, or perhaps as a non sequitur, I'm also curious about what changed since your post a year ago about how EA doesn't bring out the best in you.
Seems like a pretty niche worry; I wouldn't read too much into it not being discussed much. It's just that, if true, it does provide a reason to deeply discount global health and development.