Lizka

Research Fellow @ Forethought

Bio

I'm a Research Fellow at Forethought; before that, I ran the non-engineering side of the EA Forum (this platform), ran the EA Newsletter, and worked on some other content-related tasks at CEA. [More about the Forum/CEA Online job.]

...

Some of my favorites among my own posts:

I finished my undergraduate studies with a double major in mathematics and comparative literature in 2021. I was a research fellow at Rethink Priorities in the summer of 2021 and was then hired by the Events Team at CEA. I later switched to the Online Team. In the past, I've also done some (math) research and worked at Canada/USA Mathcamp.

Some links I think people should see more frequently:

Sequences

Celebrating Benjamin Lay (1682 - 1759)
Donation Debate Week (Giving Season 2023)
Marginal Funding Week (Giving Season 2023)
Effective giving spotlight - classic posts
Selected Forum posts (Lizka)
Classic posts (from the Forum Digest)
Forum updates and new features
Winners of the Creative Writing Contest
Winners of the First Decade Review

Comments

A note on how I think about criticism

(This was initially meant as part of this post,[1] but while editing I thought it didn't make a lot of sense there, so I pulled it out.)

I came to CEA with a very pro-criticism attitude. My experience there reinforced those views in some ways,[2] but it also left me more attuned to the costs of criticism (or of some pro-criticism attitudes). (For instance, I used to see engaging with all criticism as virtuous, and have changed my mind on that.) My overall takes now aren’t very crisp or easily summarizable, but I figured I'd try to share some notes.

...

It’s generally good for a community’s culture to encourage criticism, but this is more complicated than I used to think.

Here’s a list of things that I believe about criticism:

  1. Criticism or critical information can be extremely valuable. It can be hard for people to surface criticism (e.g. because they fear repercussions), which means criticism tends to be undersupplied.[3] Requiring critics to present their criticisms in specific ways will likely stifle at least some valuable criticism. It can be hard to get yourself to engage with criticism of your work or things you care about. It’s easy to dismiss true and important criticism without noticing that you’re doing it. 
    1. → Making sure that your community’s culture appreciates criticism (and earnest engagement with it), tries to avoid dismissing critical content based on stylistic or other non-fundamental qualities, encourages people to engage with it, and disincentivizes attempts to suppress it can be a good way to counteract these issues. 
  2. At the same time, trying to actually do anything is really hard.[4] Appreciation for doers is often undersupplied. Being in leadership positions or engaging in public discussions is a valuable service, but opens you up to a lot of (often stressful) criticism, which acts as a disincentive for being public. Psychological safety is important in teams (and communities), so it’s unfortunate that critical environments lead more people to feel like they would be judged harshly for potential mistakes. Not all criticism is useful enough to be worth engaging with (or sharing). Responding to criticism can be time-consuming or otherwise costly and isn’t always worth it.[5] Sometimes people who are sharing “criticism” hate the project for reasons that aren’t what’s explicitly stated, or just want to vent or build themselves up.[6]
    1. ... and cultures like the one described above can exacerbate these issues.

I don’t have strong overall recommendations. Here’s a post on how I want to handle criticism, which I think is still accurate. I also (tentatively) think that on the margin, the average person in EA who is sharing criticism of someone’s work should probably spend a bit more time trying to make that criticism productive. And I’d be excited to see more celebration or appreciation for people’s work. (I also discussed related topics in this short EAG talk last year.)

  1. ^

    This was in that post because I ended up engaging with a lot of discussion about the effects of criticism in EA (and of the EA Forum’s critical culture) as part of running a Criticism Contest (and generally working on CEA’s Online Team).

  2. ^

    I’ve experienced first-hand how hard it is to identify flaws in projects you’re invested in, I’ve seen how hard it is for some people to surface critical information, and noticed some ways in which criticism can be shut down or disregarded by well-meaning people.

  3. ^
  4. ^

    Kinda related: EA should taboo "EA should" 

  5. ^
  6. ^

    A lot of what Richard says in Moral Misdirection (and in Anti-Philanthropic Misdirection) also seems true and relevant here.


A note on mistakes and how we relate to them

(This was initially meant as part of this post[1], but I thought it didn't make a lot of sense there, so I pulled it out.)

“Slow-rolling mistakes” are usually much more important to identify than “point-in-time blunders,”[2] but the latter tend to be more obvious.

When we think about “mistakes”, we usually imagine replying-all when we meant to reply only to the sender, using the wrong input in an analysis, including broken hyperlinks in a piece of media, missing a deadline, etc. I tend to feel pretty horrible when I notice that I've made a mistake like this.

I now think that basically none of my mistakes of this kind — I’ll call them “point-in-time blunders” — mattered nearly as much as other "mistakes" I've made by doing things like planning my time poorly, delaying for too long on something, setting up poor systems, or focusing on the wrong things.

This second kind of mistake — let’s use the phrase “slow-rolling mistakes” — is harder to catch; I think sometimes I'd identify them by noticing a nagging worry, or by having multiple conversations with someone who disagreed with me (and slowly changing my mind), or by seriously reflecting on my work or on feedback I'd received. 

...

This is not a novel insight, but I think it was an important thing for me to realize. Working at CEA helped move me in this direction. A big factor in this, I think, was the support and reassurance I got from people I worked with:

Slack screenshot. Lizka: I made another #85 digest :(( Ben: It's a good number.

This was over two years ago, but I still remember my stomach dropping when I realized that instead of using “EA Forum Digest #84” as the subject line for the 84th Digest, I had used “...#85.” Then I did it AGAIN a few weeks later (instead of #89). I’ve screenshotted Ben’s (my manager’s) reaction.

...

I discussed some related topics in a short EAG talk I gave last year, and also touched on these topics in my post about “invisible impact loss”. 

An image from that talk.

  1. ^

    It was there because my role gave me the opportunity to actually notice a lot of the mistakes I was making (something that I think is harder if you’re working on something like research, or in a less public role), which also meant I could reflect on them. 

  2. ^

    If you have better terms for these, I'd love suggestions!

I'm going to butt in with some quick comments, mostly because:

  • I think it's pretty important to make sure the report isn't causing serious misunderstandings 
  • and because I think it can be quite stressful for people to respond to (potentially incorrect) criticisms of their projects — or to content that seems to misrepresent their project(s) — and I think it can help if someone else helps disentangle/clarify things a bit. (To be clear, I haven't run this past Linch and don't know if he's actually finding this stressful or the like. And I don't want to discourage critical content or suggest that it's inherently harmful; I just think external people can help in this kind of discussion.)

I'm sharing comments and suggestions below, using your (Joel's) numbering. (In general, I'm not sharing my overall views on EA Funds or the report. I'm just trying to clarify some confusions that seem resolvable, based on the above discussion, and suggest changes that I hope would make the report more useful.)

  • (2) Given that apparently the claim that "CEA has had to step in and provide support" to EA Funds is likely "technically misleading", it seems good to in fact remove it from the report (or keep it in but immediately and explicitly flag that this seems likely misleading and link Linch's comment) — you said you're happy to do this, and I'd be glad to see it actually removed. 
  • (3) The report currently concludes that would-be grantees "wait an unreasonable amount of time before knowing their grant application results." Linch points out that other grantmakers tend to have similar or longer timelines, and you don't seem to disagree (but argue that it's important to compare the timelines to what EA Funds sets as the expectation for applicants, instead of comparing them to other grantmakers' timelines). 
    • Given that, I'd suggest replacing "unreasonably long" (which implies a criticism of the length itself) with something like "longer than what the website/communications with applicants suggest" (which seems like what you actually believe) everywhere in the report. 
  • (9) The report currently states (or suggests) that EA Funds doesn't post reports publicly. Linch points out that they "do post public payout reports." It seems like you're mostly disagreeing about the kind of reports that should be shared.[3] 
    • Given that this is the case, I think you should clarify this in the report (which currently seems to mislead readers into believing that EA Funds doesn't actually post any public reports), e.g. by replacing "EA Funds [doesn't post] reports or [have] public metrics of success" with "EA Funds posts public payout reports like this, but doesn't have public reports about successes achieved by their grantees." 
  • (5), (6), (8) (and (1)) There are a bunch of disagreements about whether what's described as views of "EA Funds leadership" in the report is an accurate representation of the views.
    • (1) In general, Linch — who has first-hand knowledge — points out that these positions are from "notes taken from a single informal call with the EA Funds project lead" and that the person in question disagrees with "the characterization of almost all of their comments." (Apparently the phrase "EA Funds leadership" was used to avoid criticizing someone personally and to preserve anonymity.)
      • You refer to the notes a lot, explaining that the views in the report are backed by the notes from the call and arguing that one should generally trust notes like this more than someone's recollection of a conversation.[1] Whether or not the notes are more accurate than the project lead's recollection of the call, it seems pretty odd to view the notes as a stronger authority on the views of EA Funds than what someone from EA Funds is explicitly saying now. (I.e. what matters is whether a statement is true, not whether it was said in a call.) 
        • You might think that (A) Linch is mistaken about what the project lead thinks (in which case I think the project lead will probably clarify), or (B) that (some?) people at EA Funds have views that they disclosed in the call (maybe because the call was informal and they were more open with their views) but are trying to hide or cover up now — or that what was said in the call is indirect evidence for the views (that are now being disavowed). If (B) is what you believe, I think you should be explicit about that. If not, I think you should basically defer to Linch here. 
      • As a general rule, I suggest at least replacing any instance of "EA Funds leadership [believes]" with something like "our notes from a call with someone involved in running EA Funds imply that they think..." and linking Linch's comment for a counterpoint. 
    • Specific examples: 
      • (5) Seems like Linch explicitly disagrees with the idea that EA Funds dismisses the value of prioritization research, and points out that EAIF has given large grants to relevant work from Rethink Priorities. 
        • Given this, I think you should rewrite statements in the report that are misleading. I also think you should probably clarify that EA Funds has given funding to Rethink Priorities.[2]
        • Also, I'm not as confident here, but it might be good to flag the potential for ~unconscious bias in the discussions of the value of cause prio research (due to the fact that CEARCH is working on cause prioritization research). 
      • (6) Whatever was said in the conversation notes, it seems that EA Funds [leadership] does in fact believe that "there is more uncertainty now with [their] funding compared to other points in time." Seems like this should be corrected in the report.
      • (8) Again, what matters isn't what was said, but what is true (and whether the report is misleading about the truth). Linch seems to think that e.g. the statement about coordination is misleading.

I also want to say that I appreciate the work that has gone into the report and got value from e.g. the breakdown of quantitative data about funding — thanks for putting that together. 

And I want to note potential COIs: I'm at CEA (although to be clear I don't know if people at CEA agree with my comment here), briefly helped evaluate LTFF grants in early 2022, and Linch was my manager when I was a fellow at Rethink Priorities in 2021. 

  1. ^

    E.g. 

    We have both verbatim and cleaned up/organized notes on this (n.b. we shared both with you privately). So it appears we have a fundamental disagreement here (and also elsewhere) as to whether what we noted down/transcribed is an accurate record of what was actually said.

    TLDR: Fundamentally, I stand by the accuracy of our conversation notes.

    Epistemically, it's more likely that one doesn't remember what one said previously vs the interviewer (if in good faith) catastrophically misunderstanding and recording down something that wholesale wasn't said at all (as opposed to a more minor error - we agree that that can totally happen; see below) ...

  2. ^

    In relation to this claim: "They do not think of RP as doing cause prioritization, and though in their view RP could absorb more people/money in a moderately cost-effective way, they would consider less than half of what they do cause prioritization."

  3. ^

    "...we mean reports of success or having public metrics of success. We didn't view reports on payouts to be evidence of success, since payouts are a cost, and not the desired end goal in itself. This contrasts with reports on output (e.g. a community building grant actually leading to increased engagement on XYZ engagement metrics) or much more preferably, report on impact (e.g. and those XYZ engagement metrics leading to actual money donated to GiveWell, from which we can infer that X lives were saved)."

I'd suggest using a different term or explicitly outlining how you use "expert" (ideally both in the post and in the report, where you first use the term), since I'm guessing that many readers will expect that if someone is called an "expert" in this context, they're probably an expert in EA meta funding specifically — e.g. someone who's been involved in the meta EA funding space for a long time, or someone with deep knowledge of grantmaking approaches at multiple organizations. (As an intuition pump and personal datapoint, I wouldn't expect "experts" in the context of a report on how to run good EA conference sessions to include me, despite the fact that I've been a speaker at EA Global a few times.) Given your description of "experts" above, which seems like it could include (for instance) someone who's worked at a specific organization and maybe fundraised for it, my sense is that the default expectation of what "expert" means in the report would thus be mistaken. 


Relatedly, I'd appreciate it if you listed numbers (and possibly other specific info) in places like this: 

We interviewed numerous experts, including but not limited to staff employed by (or donors associated with) the following organizations: OP, EA Funds, MCF, GiveWell, ACE, SFF, FP, GWWC, CE, HLI and CEA. We also surveyed the EA community at large.

E.g. the excerpt above might turn into something like the following: 

We interviewed [10?] [experts], including staff at [these organizations] and donors who have supported [these organizations]. We also ran an "EA Meta Funding Survey" of people involved in the EA community and got 25 responses.

This probably also applies in places where you say things like "some experts" or that something is "generally agreed". (In case it helps, a post I love has a section on how to be (epistemically) legible.)

I know Grace has seen this already, but in case others reading this thread are interested: I've shared some thoughts on not taking the pledge (yet) here.[1]

Adding to the post: part of the value of pledges like this comes from their role as a commitment mechanism to prevent yourself from drifting away from values and behaviors that you endorse. I'm not currently worried about drifting in this way, partly because I work for CEA and have lots of social connections to extremely altruistic people. If I started working somewhere that isn't explicitly EA-oriented and/or lost my connections to the EA community, I think I'd worry a lot more about drift and the usefulness of the pledge would jump for me. (I plan on thinking about taking some kind of pledge if/when that happens.)

I'll also note that I've recently seen multiple people ~dunking on folks in EA who haven't taken the pledge (or making fun of arguments against taking the pledge), and I think this is pretty unhelpful. I'm really grateful to the GWWC Pledge community, but I really don't think the pledge is right for everyone (and neither does GWWC). Even if you think almost all the people who aren't pledging are wrong and/or biased, dunking is probably a bad way to argue. Additionally, it disincentivizes people from coming out and answering Grace's question, since they might worry that they'll (indirectly) get ridiculed for it. So if you see someone you know ~dunking, consider asking them to avoid doing that (especially if you already know them and/or have been sharing arguments for taking the pledge).

  1. ^

    To be clear: I totally believe my conclusion could be wrong, and I'm happy to see (more) arguments about why that could be. (Having said that, I should flag that I don't plan on spending time on this decision right now because I think I have more pressing decisions at the moment, but it's something I want to think more about in the future. So e.g. I might not respond to comments.)

As a quick update: I did not in fact share two posts during the week. I'll try to post another "DAW post" (i.e. something from my drafts, without spending too much time polishing it) sometime soon, but I don't endorse prioritizing this right now and didn't meet my commitment. 

Answer by Lizka

Not sure if this already exists somewhere (would love recommendations!), but I'd be really excited to see a clear and carefully linked/referenced overview or summary of what various agriculture/farming ~lobby groups do to influence laws and public opinion, and how they do it (with a focus on anything related to animal welfare concerns). This seems relevant.

Just chiming in with a quick note: I collected some tips on what could make criticism more productive in this post: "Productive criticism: what could help?"

I'll also add a suggestion from Aaron: If you like a post, tell the author! (And if you're not sure about commenting with something you think isn't substantive, you can message the author a quick note of appreciation or even just heart-react on the post.) I know that I get a lot out of appreciative comments/messages related to my posts (and I want to do more of this myself). 

I'll commit to posting a couple of drafts. Y'all can look at me with disapproval (or downvote this comment) if I fail to share two posts during Draft Amnesty Week. 

Answer by Lizka

I'm basically always interested in potential lessons for EA/EA-related projects from various social movements/fields/projects.

Note that you can find existing research that hasn't been discussed (much) on the Forum and link-post it (I bet there's a lot of useful stuff out there), maybe with some notes on your takeaways. 

Example movements/fields/topics: 

  • Environmentalism — I've heard people bring up the environmentalist/climate movement a bunch in informal discussions as an example for various hypotheses, including "movements splinter/develop highly counterproductive & influential factions" or "movements can get widespread interest and make policy progress" etc. 
  • The effectiveness of protest — I'm interested in more research/work on this (see e.g. this and this).
  • Modern academia (maybe specific fields) — seems like there are probably various successes/failures/ideas we could learn from. 
  • Animal welfare
  • Mohism (see also)
  • Medicine/psychology in different time periods

Some resources, examples, etc. (not exhaustive or even a coherent category): 
