I'm a Research Fellow at Forethought; before that, I ran the non-engineering side of the EA Forum (this platform), ran the EA Newsletter, and worked on some other content-related tasks at CEA. [More about the Forum/CEA Online job.]
...
Some of my favorite posts of my own:
I finished my undergraduate studies with a double major in mathematics and comparative literature in 2021. I was a research fellow at Rethink Priorities in the summer of 2021 and was then hired by the Events Team at CEA. I later switched to the Online Team. In the past, I've also done some (math) research and worked at Canada/USA Mathcamp.
Some links I think people should see more frequently:
Before looking at what you wrote, I was most skeptical of the existence of (plausibly) cost-effective interventions on this front. In particular, I had a vague background view that some interventions work but are extremely costly (financially, politically, etc.), and that other interventions either haven't been tried or don't seem promising. I was probably expecting your post to be an argument that we/most people undervalue the importance of peace (and therefore costly interventions actually look better than they might) or an argument that there are some new ideas to explore.
So I was pretty surprised by what you write about UN peacekeeping:
...[UN] peacekeeping - no matter how useless individual peacekeepers seem - has been shown to work. The academic literature is very clear on this:
- Walter 2002 finds that if a third party is willing to enforce a signed agreement, warring parties are much more likely to make a bargain (5% vs. 50%) and the settlement is much more likely to hold (0.4% vs. 20%). 20% is not an amazing probability for sustained peace, but it sure beats 0.4%.
- ...
I haven't actually looked at the linked papers to check how convincing I think they are, but I thought this was interesting! And I wanted to highlight it in a comment in case any Forum users aren't sure whether they want to click through to the post but would be interested to read more with this context.
Another point that was new to me:
The UN Security Council seems to believe the endless articles about how useless peacekeepers are, and doesn’t seem all that enthusiastic about sending peacekeepers to new conflicts. Since 2010, the number of deployed peacekeepers has been largely flat, even as conflict deaths have increased
(Thanks for writing & sharing your post!)
Addendum to the post: an exercise in giving myself advice
The ideas below aren't new or very surprising, but I found it useful to sit down and write out some lessons for myself. Consider doing something similar; if you're up for sharing what you write as a comment on this post, I'd be interested and grateful.
(1) Figure out my reasons for working on (major) projects and outline situations in which I would want myself to leave, ideally before getting started on the projects
I plan on trying to do this for any project that gives me any (ethical) doubts, and/or will take up at least 3 months of my full-time work. When possible, I also want to try sharing my notes with someone I trust. (I just did this. If you want to use my template / some notes, you can find it in this footnote.[1] Related: Staring into the abyss.)
(2) Notice potential (epistemic and moral) “risk factors” in my environment
In many ways, the environment at Los Alamos seemed to elevate the risk that participants would ignore their ethical concerns (probably partly by design). Besides the fact that they were working on a deadly weapon,
(Related themes are also discussed in “Are you really in a race?”)
All else equal, I would like to avoid environments that exhibit these kinds of factors. But shunning them entirely seems impractical, so it seems worth finding ways to create some guardrails. Simply noticing that an environment poses these risks seems like it might already be quite useful. I think it would give me the chance to put myself on "high alert," using that as a prompt to check back in with myself, talk to mentors, etc.
(3) Train some habits and mental motions
Reading about all of this made me want to do the following things more (and more deliberately):
I don't have time right now to set up systems that could help me with these things, but I did just add an event to my calendar to try to figure out how I can do more of this. (E.g. I might want to add something like this to one of my weekly templates or just set up 1-2 recurring events.) Consider doing the same, if you're in a similar situation.
And these ideas were generated very quickly — I'm sure there are more and likely better recommendations, so suggestions are welcome!
Here’s the rough format I just used:
(1) Why I’m doing what I’m doing
[Core worldview + 1-3 key goals, ideally including something that’s specific enough that people you know and like might disagree with it]
(2) Situations in which I would want myself to leave [these are not necessarily things that I (or you, if you're filling this out) think are plausible!]
(2a) Specific red lines — I’d definitely leave if...
(2b) Red flags: very seriously consider leaving if...
(2c) [Optional] Other general notes on this (e.g. how changes in circumstances might affect why I’d leave)
My notes included hypothetical situations like learning something that would cause me to significantly update on the integrity of the people in charge of my organization, situations in which important sources of feedback (sources of correction) seemed to be getting closed off, etc.
A note on how I think about criticism
(This was initially meant as part of this post,[1] but while editing I thought it didn't make a lot of sense there, so I pulled it out.)
I came to CEA with a very pro-criticism attitude. My experience there reinforced those views in some ways,[2] but it also left me more attuned to the costs of criticism (or of some pro-criticism attitudes). (For instance, I used to see engaging with all criticism as virtuous, and have changed my mind on that.) My overall takes now aren’t very crisp or easily summarizable, but I figured I'd try to share some notes.
...
It’s generally good for a community’s culture to encourage criticism, but this is more complicated than I used to think.
Here’s a list of things that I believe about criticism:
I don’t have strong overall recommendations. Here’s a post on how I want to handle criticism, which I think is still accurate. I also (tentatively) think that on the margin, the average person in EA who is sharing criticism of someone’s work should probably spend a bit more time trying to make that criticism productive. And I’d be excited to see more celebration or appreciation for people’s work. (I also discussed related topics in this short EAG talk last year.)
This was in that post because I ended up engaging with a lot of discussion about the effects of criticism in EA (and of the EA Forum’s critical culture) as part of running a Criticism Contest (and generally working on CEA’s Online Team).
I’ve experienced first-hand how hard it is to identify flaws in projects you’re invested in, I’ve seen how hard it is for some people to surface critical information, and I’ve noticed some ways in which criticism can be shut down or disregarded by well-meaning people.
Kinda related: EA should taboo "EA should"
A lot of what Richard says in Moral Misdirection (and in Anti-Philanthropic Misdirection) also seems true and relevant here.
A note on mistakes and how we relate to them
(This was initially meant as part of this post[1], but I thought it didn't make a lot of sense there, so I pulled it out.)
“Slow-rolling mistakes” are usually much more important to identify than “point-in-time blunders,”[2] but the latter tend to be more obvious.
When we think about “mistakes”, we usually imagine replying-all when we meant to reply only to the sender, using the wrong input in an analysis, including broken hyperlinks in a piece of media, missing a deadline, etc. I tend to feel pretty horrible when I notice that I've made a mistake like this.
I now think that basically none of my mistakes of this kind — I’ll call them “Point-in-time blunders” — mattered nearly as much as other "mistakes" I've made by doing things like planning my time poorly, delaying for too long on something, setting up poor systems, or focusing on the wrong things.
This second kind of mistake — let’s use the phrase “slow-rolling mistakes” — is harder to catch; I think sometimes I'd identify them by noticing a nagging worry, or by having multiple conversations with someone who disagreed with me (and slowly changing my mind), or by seriously reflecting on my work or on feedback I'd received.
...
This is not a novel insight, but I think it was an important thing for me to realize. Working at CEA helped move me in this direction. A big factor in this, I think, was the support and reassurance I got from people I worked with.
This was over two years ago, but I still remember my stomach dropping when I realized that instead of using “EA Forum Digest #84” as the subject line for the 84th Digest, I had used “...#85.” Then I did it AGAIN a few weeks later (instead of #89). I’ve screenshotted Ben’s (my manager’s) reaction.
...
I discussed some related topics in a short EAG talk I gave last year, and also touched on these topics in my post about “invisible impact loss”.
An image from that talk.
It was there because my role gave me the opportunity to actually notice a lot of the mistakes I was making (something that I think is harder if you’re working on something like research, or in a less public role), which also meant I could reflect on them.
If you have better terms for these, I'd love suggestions!
I'm going to butt in with some quick comments, mostly because:
I'm sharing comments and suggestions below, using your (Joel's) numbering. (In general, I'm not sharing my overall views on EA Funds or the report. I'm just trying to clarify some confusions that seem resolvable, based on the above discussion, and suggest changes that I hope would make the report more useful.)
I also want to say that I appreciate the work that has gone into the report and got value from e.g. the breakdown of quantitative data about funding — thanks for putting that together.
And I want to note potential COIs: I'm at CEA (although to be clear I don't know if people at CEA agree with my comment here), briefly helped evaluate LTFF grants in early 2022, and Linch was my manager when I was a fellow at Rethink Priorities in 2021.
E.g.
We have both verbatim and cleaned up/organized notes on this (n.b. we shared both with you privately). So it appears we have a fundamental disagreement here (and also elsewhere) as to whether what we noted down/transcribed is an accurate record of what was actually said.
TLDR: Fundamentally, I stand by the accuracy of our conversation notes. Epistemically, it's more likely that one doesn't remember what one said previously vs the interviewer (if in good faith) catastrophically misunderstanding and recording down something that wholesale wasn't said at all (as opposed to a more minor error - we agree that that can totally happen; see below) ...
In relation to this claim: "They do not think of RP as doing cause prioritization, and though in their view RP could absorb more people/money in a moderately cost-effective way, they would consider less than half of what they do cause prioritization."
"...we mean reports of success or having public metrics of success. We didn't view reports on payouts to be evidence of success, since payouts are a cost, and not the desired end goal in itself. This contrasts with reports on output (e.g. a community building grant actually leading to increased engagement on XYZ engagement metrics) or much more preferably, report on impact (e.g. and those XYZ engagement metrics leading to actual money donated to GiveWell, from which we can infer that X lives were saved)."
I'd suggest using a different term, or explicitly outlining how you use "expert" (ideally both in the post and in the report, where the term first appears). I'm guessing that many readers will expect that someone called an "expert" in this context is an expert in EA meta funding specifically — e.g. someone who's been involved in the meta EA funding space for a long time, or someone with deep knowledge of grantmaking approaches at multiple organizations. (As an intuition pump and personal datapoint, I wouldn't expect "experts" in the context of a report on how to run good EA conference sessions to include me, despite the fact that I've been a speaker at EA Global a few times.) Given your description of "experts" above, which seems like it could include (for instance) someone who's worked at a specific organization and maybe fundraised for it, my sense is that the default expectation of what "expert" means in the report would thus be mistaken.
Relatedly, I'd appreciate it if you listed numbers (and possibly other specific info) in places like this:
We interviewed numerous experts, including but not limited to staff employed by (or donors associated with) the following organizations: OP, EA Funds, MCF, GiveWell, ACE, SFF, FP, GWWC, CE, HLI and CEA. We also surveyed the EA community at large.
E.g. the excerpt above might turn into something like the following:
We interviewed [10?] [experts], including staff at [these organizations] and donors who have supported [these organizations]. We also ran an "EA Meta Funding Survey" of people involved in the EA community and got 25 responses.
This probably also applies in places where you say things like "some experts" or that something is "generally agreed". (In case it helps, a post I love has a section on how to be (epistemically) legible.)
I know Grace has seen this already, but in case others reading this thread are interested: I've shared some thoughts on not taking the pledge (yet) here.[1]
Adding to the post: part of the value of pledges like this comes from their role as a commitment mechanism to prevent yourself from drifting away from values and behaviors that you endorse. I'm not currently worried about drifting in this way, partly because I work for CEA and have lots of social connections to extremely altruistic people. If I started working somewhere that isn't explicitly EA-oriented and/or lost my connections to the EA community, I think I'd worry a lot more about drift and the usefulness of the pledge would jump for me. (I plan on thinking about taking some kind of pledge if/when that happens.)
I'll also note that I've recently seen multiple people ~dunking on folks in EA who haven't taken the pledge (or making fun of arguments against taking the pledge), and I think this is pretty unhelpful. I'm really grateful to the GWWC Pledge community, but I really don't think the pledge is right for everyone (and neither does GWWC). Even if you think almost all the people who aren't pledging are wrong and/or biased, dunking is probably a bad way to argue. Additionally, it disincentivizes people from coming out and answering Grace's question, since they might worry that they'll (indirectly) get ridiculed for it. So if you see someone you know ~dunking, consider asking them to avoid doing that (especially if you already know them and/or have been sharing arguments for taking the pledge).
To be clear: I totally believe my conclusion could be wrong, and I'm happy to see (more) arguments about why that could be. (Having said that, I should flag that I don't plan on spending time on this decision right now because I think I have more pressing decisions at the moment, but it's something I want to think more about in the future. So e.g. I might not respond to comments.)
As a datapoint: despite (already) agreeing to a large extent with this post,[1] IIRC I answered the question assuming that I do trust the premise.
Despite my agreement, I do think there are certain kinds of situations in which we can reasonably use small probabilities. (Related post: Most* small probabilities aren't pascalian, and maybe also related.)
More generally: I remember appreciating some discussion on the kinds of thought experiments that are useful, when, etc. I can't find it quickly, but possible starting points could be this LW post, Least Convenient Possible World, maybe this post from Richard, and stuff about fictional evidence.
Writing quickly based on a skim, sorry for lack of clarity/misinterpretations!
My view is roughly something like:
at least in the most obviously analogous situations, it's very rare that we can properly tell the difference between 1.5% and 0.15% (and so the premise is somewhat absurd)