Lizka

Research Fellow @ Forethought
16588 karma · Joined · Working (0-5 years)

Bio

I'm a Research Fellow at Forethought; before that, I ran the non-engineering side of the EA Forum (this platform), ran the EA Newsletter, and worked on some other content-related tasks at CEA. [More about the Forum/CEA Online job.]

...

Some of my favorites among my own posts:

I finished my undergraduate studies with a double major in mathematics and comparative literature in 2021. I was a research fellow at Rethink Priorities in the summer of 2021 and was then hired by the Events Team at CEA. I later switched to the Online Team. In the past, I've also done some (math) research and worked at Canada/USA Mathcamp.

Some links I think people should see more frequently:

Sequences (10)

Celebrating Benjamin Lay (1682 - 1759)
Donation Debate Week (Giving Season 2023)
Marginal Funding Week (Giving Season 2023)
Effective giving spotlight - classic posts
Selected Forum posts (Lizka)
Classic posts (from the Forum Digest)
Forum updates and new features
Winners of the Creative Writing Contest
Winners of the First Decade Review

Comments (542)

Topic contributions (267)

As a datapoint: despite (already) agreeing to a large extent with this post,[1] IIRC I answered the question assuming that I do trust the premise. 


Despite my agreement, I do think there are certain kinds of situations in which we can reasonably use small probabilities. (Related post: Most* small probabilities aren't pascalian, and maybe also related.) 


More generally: I remember appreciating some discussion on the kinds of thought experiments that are useful, when, etc. I can't find it quickly, but possible starting points could be this LW post, Least Convenient Possible World, maybe this post from Richard, and stuff about fictional evidence.

Writing quickly based on a skim; sorry for any lack of clarity or misinterpretations! 

  1. ^

    My view is roughly something like: 

    at least in the most obviously analogous situations, it's very rare that we can properly tell the difference between 1.5% and 0.15% (and so the premise is somewhat absurd)

That makes sense and is roughly how I was interpreting what you wrote (sorry for potentially implying otherwise in my comment) — this is still a lot more positive on peacekeeping than I was expecting it to be :) 

Before looking at what you wrote, I was most skeptical of the existence of (plausibly) cost-effective interventions on this front. In particular, I had a vague background view that some interventions work but are extremely costly (financially, politically, etc.), and that other interventions either haven't been tried or don't seem promising. I was probably expecting your post to be an argument that we/most people undervalue the importance of peace (and therefore costly interventions actually look better than they might) or an argument that there are some new ideas to explore. 

So I was pretty surprised by what you wrote about UN peacekeeping:

...[UN] peacekeeping - no matter how useless individual peacekeepers seem - has been shown to work.  The academic literature is very clear on this:

  • Walter 2002 finds that if a third party is willing to enforce a signed agreement, warring parties are much more likely to make a bargain (5% vs. 50%) and the settlement is much more likely to hold (0.4% vs. 20%).  20% is not an amazing probability for sustained peace, but it sure beats 0.5%.
  • ...

I haven't actually looked at the linked papers to check how convincing I think they are, but thought it was interesting! And I wanted to highlight this in a comment in case any Forum users aren't sure if they want to click through to the post but would be interested to read more with this context. 

Another point that was new to me:

The UN Security Council seems to believe the endless articles about how useless peacekeepers are, and doesn’t seem all that enthusiastic about sending peacekeepers to new conflicts. Since 2010, the number of deployed peacekeepers has been largely flat, even as conflict deaths have increased

(Thanks for writing & sharing your post!)

Addendum to the post: an exercise in giving myself advice

The ideas below aren't new or very surprising, but I found it useful to sit down and write out some lessons for myself. Consider doing something similar; if you're up for sharing what you write as a comment on this post, I'd be interested and grateful.


 (1) Figure out my reasons for working on (major) projects and outline situations in which I would want myself to leave, ideally before getting started on the projects

I plan on trying to do this for any project that gives me any (ethical) doubts, and/or will take up at least 3 months of my full-time work. When possible, I also want to try sharing my notes with someone I trust. (I just did this. If you want to use my template / some notes, you can find it in this footnote.[1] Related: Staring into the abyss.)

(2) Notice potential (epistemic and moral) “risk factors” in my environment

In many ways, the environment at Los Alamos seemed to elevate the risk that participants would ignore their ethical concerns (probably partly by design). Besides the fact that they were working on a deadly weapon,

  • There was a lot of secrecy, and connections to people outside of the project were suspended
  • There was a decent amount of ~blanket admiration for the leaders of the project (and for some of the more senior scientists)
  • Relatedly, there was a sense of urgency and a collective mission (and there was a relatively clear set of “enemies” — this was during a war)
  • Based on how people wrote about Los Alamos later, there seemed to be something playful or adventurous about how many treated their work; the bomb was being called a “gadget,” their material needs were taken care of, etc.
  • And many of the participants were relatively young

(Related themes are also discussed in “Are you really in a race?”)

All else equal, I would like to avoid environments that exhibit these kinds of factors. But shunning them entirely seems impractical, so it seems worth finding ways to create some guardrails. Simply noticing that an environment poses these risks seems like it might already be quite useful. I think it would give me the chance to put myself on "high alert," using that as a prompt to check back in with myself, talk to mentors, etc.

(3) Train some habits and mental motions

Reading about all of this made me want to do the following things more (and more deliberately):

  1. Talking to people who aren’t embedded in my local/professional environment
    1. And talking to people who think very independently or in unusual (relative to my immediate circles) ways, seriously considering their arguments and conclusions
    2. (Also: remembering that I almost always have fallback options, and nurturing those options)
  2. Explicitly prompting myself to take potential outcomes of my work seriously
    1. Imagine my work’s impact is more significant than I expected it to be. How do I feel? Is there anything I’m embarrassed about, or that I wish I had done differently?
  3. Training myself to say — and be willing to believe/entertain — things that might diminish my social status in my community
    1. (This includes articulating confusion or uncertainty.)

I don't have time right now to set up systems that could help me with these things, but I also just added an event to my calendar to try to figure out how I can do more of this. (E.g. I might want to add something like this to one of my weekly templates or just set up 1-2 recurring events.) Consider doing the same, if you're in a similar situation.

And these ideas were generated very quickly — I'm sure there are more and likely better recommendations, so suggestions are welcome! 

  1. ^

    Here’s the rough format I just used:

    (1) Why I’m doing what I’m doing

    [Core worldview + 1-3 key goals, ideally including something that’s specific enough that people you know and like might disagree with it]

    (2) Situations in which I would want myself to leave [these are not necessarily things that I (or you, if you're filling this out) think are plausible!]

    (2a) Specific red lines — I’d definitely leave if...

    (2b) Red flags: very seriously consider leaving if...

    (2c) [Optional] Other general notes on this (e.g. how changes in circumstances might affect why I’d leave)

     

    My notes included hypothetical situations like learning something that would cause me to significantly update on the integrity of the people in charge of my organization, situations in which important sources of feedback (sources of correction) seemed to be getting closed off, etc.

A note on how I think about criticism

(This was initially meant as part of this post,[1] but while editing I thought it didn't make a lot of sense there, so I pulled it out.)

I came to CEA with a very pro-criticism attitude. My experience there reinforced those views in some ways,[2] but it also left me more attuned to the costs of criticism (or of some pro-criticism attitudes). (For instance, I used to see engaging with all criticism as virtuous, and have changed my mind on that.) My overall takes now aren’t very crisp or easily summarizable, but I figured I'd try to share some notes.

...

It’s generally good for a community’s culture to encourage criticism, but this is more complicated than I used to think.

Here’s a list of things that I believe about criticism:

  1. Criticism or critical information can be extremely valuable. It can be hard for people to surface criticism (e.g. because they fear repercussions), which means criticism tends to be undersupplied.[3] Requiring critics to present their criticisms in specific ways will likely stifle at least some valuable criticism. It can be hard to get yourself to engage with criticism of your work or things you care about. It’s easy to dismiss true and important criticism without noticing that you’re doing it. 
    1. → Making sure that your community’s culture appreciates criticism (and earnest engagement with it), tries to avoid dismissing critical content based on stylistic or other non-fundamental qualities, encourages people to engage with it, and disincentivizes attempts to suppress it can be a good way to counteract these issues. 
  2. At the same time, trying to actually do anything is really hard.[4] Appreciation for doers is often undersupplied. Being in leadership positions or engaging in public discussions is a valuable service, but opens you up to a lot of (often stressful) criticism, which acts as a disincentive for being public. Psychological safety is important in teams (and communities), so it’s unfortunate that critical environments lead more people to feel like they would be judged harshly for potential mistakes. Not all criticism is useful enough to be worth engaging with (or sharing). Responding to criticism can be time-consuming or otherwise costly and isn’t always worth it.[5] Sometimes people who are sharing “criticism” hate the project for reasons that aren’t what’s explicitly stated, or just want to vent or build themselves up.[6]
    1. ... and cultures like the one described above can exacerbate these issues.

I don’t have strong overall recommendations. Here’s a post on how I want to handle criticism, which I think is still accurate. I also (tentatively) think that on the margin, the average person in EA who is sharing criticism of someone’s work should probably spend a bit more time trying to make that criticism productive. And I’d be excited to see more celebration or appreciation for people’s work. (I also discussed related topics in this short EAG talk last year.)

  1. ^

    This was in that post because I ended up engaging with a lot of discussion about the effects of criticism in EA (and of the EA Forum’s critical culture) as part of running a Criticism Contest (and generally working on CEA’s Online Team).

  2. ^

    I’ve experienced first-hand how hard it is to identify flaws in projects you’re invested in, I’ve seen how hard it is for some people to surface critical information, and noticed some ways in which criticism can be shut down or disregarded by well-meaning people.

  3. ^
  4. ^

    Kinda related: EA should taboo "EA should" 

  5. ^
  6. ^

    A lot of what Richard says in Moral Misdirection (and in Anti-Philanthropic Misdirection) also seems true and relevant here.


A note on mistakes and how we relate to them

(This was initially meant as part of this post[1], but I thought it didn't make a lot of sense there, so I pulled it out.)

“Slow-rolling mistakes” are usually much more important to identify than “point-in-time blunders,”[2] but the latter tend to be more obvious.

When we think about “mistakes”, we usually imagine replying-all when we meant to reply only to the sender, using the wrong input in an analysis, including broken hyperlinks in a piece of media, missing a deadline, etc. I tend to feel pretty horrible when I notice that I've made a mistake like this.

I now think that basically none of my mistakes of this kind — I'll call them "point-in-time blunders" — mattered nearly as much as other "mistakes" I've made by doing things like planning my time poorly, delaying for too long on something, setting up poor systems, or focusing on the wrong things.

This second kind of mistake — let’s use the phrase “slow-rolling mistakes” — is harder to catch; I think sometimes I'd identify them by noticing a nagging worry, or by having multiple conversations with someone who disagreed with me (and slowly changing my mind), or by seriously reflecting on my work or on feedback I'd received. 

...

This is not a novel insight, but I think it was an important thing for me to realize. Working at CEA helped move me in this direction. A big factor in this, I think, was the support and reassurance I got from people I worked with:

[Slack screenshot. Lizka: "I made another #85 digest :((" Ben: "It's a good number."]

This was over two years ago, but I still remember my stomach dropping when I realized that instead of using “EA Forum Digest #84” as the subject line for the 84th Digest, I had used “...#85.” Then I did it AGAIN a few weeks later (instead of #89). I’ve screenshotted Ben’s (my manager’s) reaction.

...

I discussed some related topics in a short EAG talk I gave last year, and also touched on these topics in my post about “invisible impact loss”. 

[An image from that talk.]

  1. ^

    It was there because my role gave me the opportunity to actually notice a lot of the mistakes I was making (something that I think is harder if you’re working on something like research, or in a less public role), which also meant I could reflect on them. 

  2. ^

    If you have better terms for these, I'd love suggestions!

I'm going to butt in with some quick comments, mostly because:

  • I think it's pretty important to make sure the report isn't causing serious misunderstandings 
  • and because I think it can be quite stressful for people to respond to (potentially incorrect) criticisms of their projects — or to content that seems to misrepresent their project(s) — and I think it can help if someone else helps disentangle/clarify things a bit. (To be clear, I haven't run this past Linch and don't know if he's actually finding this stressful or the like. And I don't want to discourage critical content or suggest that it's inherently harmful; I just think external people can help in this kind of discussion.)

I'm sharing comments and suggestions below, using your (Joel's) numbering. (In general, I'm not sharing my overall views on EA Funds or the report. I'm just trying to clarify some confusions that seem resolvable, based on the above discussion, and suggest changes that I hope would make the report more useful.)

  • (2) Given that apparently the claim that "CEA has had to step in and provide support" to EA Funds is likely "technically misleading", it seems good to in fact remove it from the report (or keep it in but immediately and explicitly flag that this seems likely misleading and link Linch's comment) — you said you're happy to do this, and I'd be glad to see it actually removed.
  • (3) The report currently concludes that would-be grantees "wait an unreasonable amount of time before knowing their grant application results." Linch points out that other grantmakers tend to have similar or longer timelines, and you don't seem to disagree (but argue that it's important to compare the timelines to what EA Funds sets as the expectation for applicants, instead of comparing them to other grantmakers' timelines). 
    • Given that, I'd suggest replacing "unreasonably long" (which implies a criticism of the length itself) with something like "longer than what the website/communications with applicants suggest" (which seems like what you actually believe) everywhere in the report. 
  • (9) The report currently states (or suggests) that EA Funds doesn't post reports publicly. Linch points out that they "do post public payout reports." It seems like you're mostly disagreeing about the kind of reports that should be shared.[3] 
    • Given that this is the case, I think you should clarify this in the report (which currently seems to mislead readers into believing that EA Funds doesn't actually post any public reports), e.g. by replacing "EA Funds [doesn't post] reports or [have] public metrics of success" with "EA Funds posts public payout reports like this, but doesn't have public reports about successes achieved by their grantees." 
  • (5), (6), (8) (and (1)) There are a bunch of disagreements about whether what's described as the views of "EA Funds leadership" in the report is an accurate representation of those views.
    • (1) In general, Linch — who has first-hand knowledge — points out that these positions are from "notes taken from a single informal call with the EA Funds project lead" and that the person in question disagrees with "the characterization of almost all of their comments." (Apparently the phrase "EA Funds leadership" was used to avoid criticizing someone personally and to preserve anonymity.)
      • You refer to the notes a lot, explaining that the views in the report are backed by the notes from the call and arguing that one should generally trust notes like this more than someone's recollection of a conversation.[1] Whether or not the notes are more accurate than the project lead's recollection of the call, it seems pretty odd to view the notes as a stronger authority on the views of EA Funds than what someone from EA Funds is now saying, personally and explicitly. (I.e. what matters is whether a statement is true, not whether it was said in a call.) 
        • You might think that (A) Linch is mistaken about what the project lead thinks (in which case I think the project lead will probably clarify), or (B) that (some?) people at EA Funds have views that they disclosed in the call (maybe because the call was informal and they were more open with their views) but are trying to hide or cover up now — or that what was said in the call is indirect evidence for the views (that are now being disavowed). If (B) is what you believe, I think you should be explicit about that. If not, I think you should basically defer to Linch here. 
      • As a general rule, I suggest at least replacing any instance of "EA Funds leadership [believes]" with something like "our notes from a call with someone involved in running EA Funds imply that they think..." and linking Linch's comment for a counterpoint. 
    • Specific examples: 
      • (5) Seems like Linch explicitly disagrees with the idea that EA Funds dismisses the value of prioritization research, and points out that EAIF has given large grants to relevant work from Rethink Priorities. 
        • Given this, I think you should rewrite statements in the report that are misleading. I also think you should probably clarify that EA Funds has given funding to Rethink Priorities.[2]
        • Also, I'm not as confident here, but it might be good to flag the potential for ~unconscious bias in the discussions of the value of cause prio research (due to the fact that CEARCH is working on cause prioritization research). 
      • (6) Whatever was said in the conversation notes, it seems that EA Funds [leadership] does in fact believe that "there is more uncertainty now with [their] funding compared to other points in time." Seems like this should be corrected in the report.
      • (8) Again, what matters isn't what was said, but what is true (and whether the report is misleading about the truth). Linch seems to think that e.g. the statement about coordination is misleading.

I also want to say that I appreciate the work that has gone into the report and got value from e.g. the breakdown of quantitative data about funding — thanks for putting that together. 

And I want to note potential COIs: I'm at CEA (although to be clear I don't know if people at CEA agree with my comment here), briefly helped evaluate LTFF grants in early 2022, and Linch was my manager when I was a fellow at Rethink Priorities in 2021. 

  1. ^

    E.g. 

    We have both verbatim and cleaned up/organized notes on this (n.b. we shared both with you privately). So it appears we have a fundamental disagreement here (and also elsewhere) as to whether what we noted down/transcribed is an accurate record of what was actually said.

    TLDR: Fundamentally, I stand by the accuracy of our conversation notes.

    Epistemically, it's more likely that one doesn't remember what one said previously vs the interviewer (if in good faith) catastrophically misunderstanding and recording down something that wholesale wasn't said at all (as opposed to a more minor error - we agree that that can totally happen; see below) ...

  2. ^

    In relation to this claim: "They do not think of RP as doing cause prioritization, and though in their view RP could absorb more people/money in a moderately cost-effective way, they would consider less than half of what they do cause prioritization."

  3. ^

    "...we mean reports of success or having public metrics of success. We didn't view reports on payouts to be evidence of success, since payouts are a cost, and not the desired end goal in itself. This contrasts with reports on output (e.g. a community building grant actually leading to increased engagement on XYZ engagement metrics) or much more preferably, report on impact (e.g. and those XYZ engagement metrics leading to actual money donated to GiveWell, from which we can infer that X lives were saved)."

I'd suggest using a different term or explicitly outlining how you use "expert" (ideally both in the post and in the report, where you first use the term), since I'm guessing that many readers will expect that if someone is called an "expert" in this context, they're probably an expert in EA meta funding specifically — e.g. someone who's been involved in the meta EA funding space for a long time, or someone with deep knowledge of grantmaking approaches at multiple organizations. (As an intuition pump and personal datapoint, I wouldn't expect "experts" in the context of a report on how to run good EA conference sessions to include me, despite the fact that I've been a speaker at EA Global a few times.) Given your description of "experts" above, which seems like it could include (for instance) someone who's worked at a specific organization and maybe fundraised for it, my sense is that the default expectation of what "expert" means in the report would thus be mistaken.


Relatedly, I'd appreciate it if you listed numbers (and possibly other specific info) in places like this: 

We interviewed numerous experts, including but not limited to staff employed by (or donors associated with) the following organizations: OP, EA Funds, MCF, GiveWell, ACE, SFF, FP, GWWC, CE, HLI and CEA. We also surveyed the EA community at large.

E.g. the excerpt above might turn into something like the following: 

We interviewed [10?] [experts], including staff at [these organizations] and donors who have supported [these organizations]. We also ran an "EA Meta Funding Survey" of people involved in the EA community and got 25 responses.

This probably also applies in places where you say things like "some experts" or that something is "generally agreed". (In case it helps, a post I love has a section on how to be (epistemically) legible.)

I know Grace has seen this already, but in case others reading this thread are interested: I've shared some thoughts on not taking the pledge (yet) here.[1]

Adding to the post: part of the value of pledges like this comes from their role as a commitment mechanism to prevent yourself from drifting away from values and behaviors that you endorse. I'm not currently worried about drifting in this way, partly because I work for CEA and have lots of social connections to extremely altruistic people. If I started working somewhere that isn't explicitly EA-oriented and/or lost my connections to the EA community, I think I'd worry a lot more about drift and the usefulness of the pledge would jump for me. (I plan on thinking about taking some kind of pledge if/when that happens.)

I'll also note that I've recently seen multiple people ~dunking on folks in EA who haven't taken the pledge (or making fun of arguments against taking the pledge), and I think this is pretty unhelpful. I'm really grateful to the GWWC Pledge community, but I really don't think the pledge is right for everyone (and neither does GWWC). Even if you think almost all the people who aren't pledging are wrong and/or biased, dunking is probably a bad way to argue. Additionally, it disincentivizes people from coming out and answering Grace's question, since they might worry that they'll (indirectly) get ridiculed for it. So if you see someone ~dunking, consider asking them to avoid doing that (especially if you already know them and/or have been sharing arguments for taking the pledge).

  1. ^

    To be clear: I totally believe my conclusion could be wrong, and I'm happy to see (more) arguments about why that could be. (Having said that, I should flag that I don't plan on spending time on this decision right now because I think I have more pressing decisions at the moment, but it's something I want to think more about in the future. So e.g. I might not respond to comments.)

As a quick update: I did not in fact share two posts during the week. I'll try to post another "DAW post" (i.e. something from my drafts, without spending too much time polishing it) sometime soon, but I don't endorse prioritizing this right now and didn't meet my commitment. 
