I'm a Research Fellow at Forethought; before that, I ran the non-engineering side of the EA Forum (this platform), ran the EA Newsletter, and worked on some other content-related tasks at CEA. [More about the Forum/CEA Online job.]
...
Some of my favorites among my own posts:
I finished my undergraduate studies with a double major in mathematics and comparative literature in 2021. I was a research fellow at Rethink Priorities in the summer of 2021 and was then hired by the Events Team at CEA. I later switched to the Online Team. In the past, I've also done some (math) research and worked at Canada/USA Mathcamp.
Some links I think people should see more frequently:
A note on mistakes and how we relate to them
(This was initially meant as part of this post[1], but I thought it didn't make a lot of sense there, so I pulled it out.)
“Slow-rolling mistakes” are usually much more important to identify than “point-in-time blunders,”[2] but the latter tend to be more obvious.
When we think about "mistakes", we usually imagine hitting "reply all" when we meant to reply only to the sender, using the wrong input in an analysis, including broken hyperlinks in a piece of media, missing a deadline, etc. I tend to feel pretty horrible when I notice that I've made a mistake like this.
I now think that basically none of my mistakes of this kind — I'll call them "point-in-time blunders" — mattered nearly as much as other "mistakes" I've made by doing things like planning my time poorly, delaying too long on something, setting up poor systems, or focusing on the wrong things.
This second kind of mistake — let's use the phrase "slow-rolling mistakes" — is harder to catch; I'd sometimes identify one by noticing a nagging worry, by having multiple conversations with someone who disagreed with me (and slowly changing my mind), or by seriously reflecting on my work or on feedback I'd received.
...
This is not a novel insight, but I think it was an important thing for me to realize. Working at CEA helped move me in this direction; a big factor, I think, was the support and reassurance I got from the people I worked with.
This was over two years ago, but I still remember my stomach dropping when I realized that instead of using “EA Forum Digest #84” as the subject line for the 84th Digest, I had used “...#85.” Then I did it AGAIN a few weeks later (instead of #89). I’ve screenshotted Ben’s (my manager’s) reaction.
...
I discussed some related topics in a short EAG talk I gave last year, and also touched on them in my post about "invisible impact loss".
An image from that talk.
It was there because my role gave me the opportunity to actually notice a lot of the mistakes I was making (something that I think is harder if you’re working on something like research, or in a less public role), which also meant I could reflect on them.
If you have better terms for these, I'd love suggestions!
I'm going to butt in with some quick comments, mostly because:
I'm sharing comments and suggestions below, using your (Joel's) numbering. (In general, I'm not sharing my overall views on EA Funds or the report. I'm just trying to clarify some confusions that seem resolvable, based on the above discussion, and suggest changes that I hope would make the report more useful.)
I also want to say that I appreciate the work that has gone into the report and got value from e.g. the breakdown of quantitative data about funding — thanks for putting that together.
And I want to note potential COIs: I'm at CEA (although to be clear I don't know if people at CEA agree with my comment here), briefly helped evaluate LTFF grants in early 2022, and Linch was my manager when I was a fellow at Rethink Priorities in 2021.
E.g.
We have both verbatim and cleaned up/organized notes on this (n.b. we shared both with you privately). So it appears we have a fundamental disagreement here (and also elsewhere) as to whether what we noted down/transcribed is an accurate record of what was actually said.
TLDR: Fundamentally, I stand by the accuracy of our conversation notes. Epistemically, it's more likely that one doesn't remember what one said previously vs the interviewer (if in good faith) catastrophically misunderstanding and recording down something that wholesale wasn't said at all (as opposed to a more minor error - we agree that that can totally happen; see below) ...
In relation to this claim: "They do not think of RP as doing cause prioritization, and though in their view RP could absorb more people/money in a moderately cost-effective way, they would consider less than half of what they do cause prioritization."
"...we mean reports of success or having public metrics of success. We didn't view reports on payouts to be evidence of success, since payouts are a cost, and not the desired end goal in itself. This contrasts with reports on output (e.g. a community building grant actually leading to increased engagement on XYZ engagement metrics) or much more preferably, report on impact (e.g. and those XYZ engagement metrics leading to actual money donated to GiveWell, from which we can infer that X lives were saved)."
I'd suggest using a different term or explicitly outlining how you use "expert" (ideally both in the post and in the report, where you first use the term), since I'm guessing that many readers will expect that someone called an "expert" in this context is an "expert in EA meta funding" specifically — e.g. someone who's been involved in the meta EA funding space for a long time, or someone with deep knowledge of grantmaking approaches at multiple organizations. (As an intuition pump and personal datapoint, I wouldn't expect "experts" in the context of a report on how to run good EA conference sessions to include me, despite the fact that I've been a speaker at EA Global a few times.) Given your description of "experts" above, which seems like it could include (for instance) someone who's worked at a specific organization and maybe fundraised for it, my sense is that the default expectation of what "expert" means in the report would thus be mistaken.
Relatedly, I'd appreciate it if you listed numbers (and possibly other specific info) in places like this:
We interviewed numerous experts, including but not limited to staff employed by (or donors associated with) the following organizations: OP, EA Funds, MCF, GiveWell, ACE, SFF, FP, GWWC, CE, HLI and CEA. We also surveyed the EA community at large.
E.g. the excerpt above might turn into something like the following:
We interviewed [10?] [experts], including staff at [these organizations] and donors who have supported [these organizations]. We also ran an "EA Meta Funding Survey" of people involved in the EA community and got 25 responses.
This probably also applies in places where you say things like "some experts" or that something is "generally agreed". (In case it helps, a post I love has a section on how to be (epistemically) legible.)
I know Grace has seen this already, but in case others reading this thread are interested: I've shared some thoughts on not taking the pledge (yet) here.[1]
Adding to the post: part of the value of pledges like this comes from their role as a commitment mechanism to prevent yourself from drifting away from values and behaviors that you endorse. I'm not currently worried about drifting in this way, partly because I work for CEA and have lots of social connections to extremely altruistic people. If I started working somewhere that isn't explicitly EA-oriented and/or lost my connections to the EA community, I think I'd worry a lot more about drift and the usefulness of the pledge would jump for me. (I plan on thinking about taking some kind of pledge if/when that happens.)
I'll also note that I've recently seen multiple people ~dunking on folks in EA who haven't taken the pledge (or making fun of arguments against taking the pledge), and I think this is pretty unhelpful. I'm really grateful to the GWWC Pledge community, but I don't think the pledge is right for everyone (and neither does GWWC). Even if you think almost all the people who aren't pledging are wrong and/or biased, dunking is probably a bad way to argue. Additionally, it disincentivizes people from coming out and answering Grace's question, since they might worry that they'll (indirectly) get ridiculed for it. So if you see someone ~dunking, consider asking them to avoid doing that (especially if you know them and/or have been sharing arguments for taking the pledge).
To be clear: I totally believe my conclusion could be wrong, and I'm happy to see (more) arguments about why that could be. (Having said that, I should flag that I don't plan on spending time on this decision right now because I think I have more pressing decisions at the moment, but it's something I want to think more about in the future. So e.g. I might not respond to comments.)
Not sure if this already exists somewhere (would love recommendations!), but I'd be really excited to see a clear and carefully linked/referenced overview or summary of what various agriculture/farming ~lobby groups do to influence laws and public opinion, and how they do it (with a focus on anything related to animal welfare concerns). This seems relevant.
Just chiming in with a quick note: I collected some tips on what could make criticism more productive in this post: "Productive criticism: what could help?"
I'll also add a suggestion from Aaron: If you like a post, tell the author! (And if you're not sure about commenting with something you think isn't substantive, you can message the author a quick note of appreciation or even just heart-react on the post.) I know that I get a lot out of appreciative comments/messages related to my posts (and I want to do more of this myself).
I'm basically always interested in potential lessons for EA/EA-related projects from various social movements/fields/projects.
Note that you can find existing research that hasn't been discussed (much) on the Forum and link-post it (I bet there's a lot of useful stuff out there), maybe with some notes on your takeaways.
Example movements/fields/topics:
Some resources, examples, etc. (not exhaustive or even a coherent category):
A note on how I think about criticism
(This was initially meant as part of this post,[1] but while editing I thought it didn't make a lot of sense there, so I pulled it out.)
I came to CEA with a very pro-criticism attitude. My experience there reinforced those views in some ways,[2] but it also left me more attuned to the costs of criticism (or of some pro-criticism attitudes). (For instance, I used to see engaging with all criticism as virtuous, and have changed my mind on that.) My overall takes now aren’t very crisp or easily summarizable, but I figured I'd try to share some notes.
...
It’s generally good for a community’s culture to encourage criticism, but this is more complicated than I used to think.
Here’s a list of things that I believe about criticism:
I don’t have strong overall recommendations. Here’s a post on how I want to handle criticism, which I think is still accurate. I also (tentatively) think that on the margin, the average person in EA who is sharing criticism of someone’s work should probably spend a bit more time trying to make that criticism productive. And I’d be excited to see more celebration or appreciation for people’s work. (I also discussed related topics in this short EAG talk last year.)
This was in that post because I ended up engaging with a lot of discussion about the effects of criticism in EA (and of the EA Forum’s critical culture) as part of running a Criticism Contest (and generally working on CEA’s Online Team).
I’ve experienced first-hand how hard it is to identify flaws in projects you’re invested in, seen how hard it is for some people to surface critical information, and noticed some ways in which criticism can be shut down or disregarded by well-meaning people.
See also the rationale in our Criticism Contest announcement post.
Kinda related: EA should taboo "EA should"
See the “transparency” example in my post on “missing moods”.
Also: You don’t have to respond to every comment.
A lot of what Richard says in Moral Misdirection (and in Anti-Philanthropic Misdirection) also seems true and relevant here.