
lilly · 2666 karma

Posts: 3


Comments: 123

One feature it'd be nice for the Forum to have is something that shows you the correlation between your agree votes and karma votes. I don't think there is an objectively correct correlation between these two things, but it seems likely that it should fall between, say, .2 and .6 (probably depending on the kind of comments you tend to read/vote on), and it would be nice for users to be able to know and track this.

Making this visible to individual users (and, potentially, to anyone who clicks on their profile) would provide at least a weak incentive to avoid reflexively downvoting comments that one disagrees with, something that happens a lot, and that I also find myself doing more than I'd like.

The fact that “racists” is in quotes in the title of this post (“Why so many “racists” at Manifest?”) when there have been multiple, first-hand accounts of people experiencing/overhearing racist exchanges strikes me as wrongly dismissive, since I can only interpret the quotation marks as implying that there weren’t very many racists. (Perhaps relevantly, I have never overheard this kind of exchange at any conference I have ever attended, so the fact that multiple people are reporting these exchanges makes Manifest a big outlier in this regard, in my view.)

Nothing in the post seems to refute that the reported exchanges occurred among attendees, just that the organizers didn’t go out of their way to invite controversial/racist speakers or incite these exchanges. In other words, I think everything in the post is compatible with there having been “so many” racists at Manifest, but the quotation marks in the title seem to imply otherwise.

This isn’t so much a stylistic critique as it is a substantive one, since I think the title implies that not a lot of racist stuff went down, which feels importantly different from acknowledging that it did, but, say, disputing that the organizers caused this or suggesting that Hanania’s presence justified it.

I don't agree with @Barry Cotter's comment or think that it's an accurate interpretation of my comment (but didn't downvote). 

I think EA is both a truth-seeking project and a good-doing project. These goals could theoretically be in tension, and I can envision hard cases where EAs would have to choose between them. Importantly, I don't think that's going on here, for much the same reasons as were articulated by @Ben Millwood in his thoughtful comment. In general, I don't think the rationalists have a monopoly on truth-seeking, nor do I think their recent practices are conducive to it.

More speculatively, my sense is that epistemic norms within EA may—at least in some ways—now be better than those within rationalism for the following reason: I worry that some rationalists have been so alienated by wokeness (which many see as anathema to the project of truth-seeking) that they have leaned pretty hard into being controversial/edgy, as evidenced by them, e.g., platforming speakers who endorse scientific racism. Doing this has major epistemic downsides—for instance, a much broader swath of the population isn't going to bother engaging with you if you do this—and I have seen limited evidence that rationalists take these downsides sufficiently seriously.


I think it would be phenomenally shortsighted for EA to prioritize its relationship with rationalists over its relationship with EA-sympathetic folks who are put off by scientific racists, given that the latter include many of the policymakers, academics, and professional people most capable of actualizing EA ideas. Most of these people aren't going to risk working/being associated with EA if EA is broadly seen as racist. Figuring out how to create a healthy (and publicly recognized) distance between EAs and rationalists seems much easier said than done, though.

> Think about how precious the life is of a young child—concretely picture a small child coughing up blood and lying in bed with a fever of 105. We—the effective altruists—are the ones doing something about that.

The vast majority of people trying to keep kids from dying of malaria are not effective altruists.


Somewhat unrelated, but since people are discussing whether this example is cherry-picked vs. reflective of a systemic problem with infrastructure-related grants, I'm curious about the outcome of another, much larger grant:

Has there been any word on what happened to the Harvard Square EA coworking space that OP committed $8.9 million to and that was projected to open in the first half of 2023?

I really enjoyed this series; thanks for writing it!

One piece of stylistic feedback on Anti-Philanthropic Misdirection: I think the piece's hostile tone—e.g., "Wenar is here promoting a general approach to practical reasoning that is very obviously biased, stupid, and harmful: a plain force for evil in the world"—will make your piece less persuasive to non-EA readers for two reasons. First, I suspect all the italics and adjectives will trigger readers' bias radars, making people who aren't already sympathetic to EA approach the piece more critically/less open-mindedly than they would have otherwise (e.g., if you had written: "Wenar promotes a general approach to practical reasoning that is both incorrect and harmful"). Second, it reads as hypocritical, since in the piece you criticize "the hostile, dismissive tone of many critics." (And unless readers have read Wenar's piece pretty closely and are pretty familiar with EA, they're not going to be well-positioned to assess whose hostility and dismissiveness are justified.) So, while I understand the frustration, and think the tone is in some sense warranted, I suspect the piece would be more effective at morally redirecting people if it read as more neutral/measured. The arguments speak for themselves.

I think it's a nice op-ed; I also appreciate the communication strategy here—anticipating that SBF's sentencing will reignite discourse around SBF's ties to EA, and trying to elevate the discourse around that (in particular by highlighting the reforms EA has undertaken over the past 1.5 years). 

First of all, kudos on writing an op-ed! I think it’s a good thing to do, and I think earning to give is a much better path than what most Ivy League grads wind up doing, so if you persuade a few people, that’s good.

My basic problem with the argument you make here (and with earning to give in general) is that some bad things tend to go along with “selling out” (as you put it), rendering it difficult to maintain one’s initial commitment to earning to give. Some worries I have about college students deciding to do this:

  1. Erosion of values. When your social group becomes full of Meta employees (vs. idealistic college students), you find a partner (who may or may not be EA), you have kids, and so on, your values shift, and it becomes easier to justify not donating. I have seen a lot of people become gradually less motivated to do good between the ages of 20 and 30. Committing to a career in, e.g., global health makes it harder for that value shift to be accompanied by a shift in the social value of one's work (since most global health jobs are somewhat socially valuable); committing to earning to give presents no such barrier.

  2. Relatedly, lifestyle creep occurs. As you get richer (and befriend your colleagues at Meta and so on), people start inviting you to expensive birthday dinners and on nice trips and stuff. And so your ability to maintain a relatively more frugal lifestyle can be compromised by desire/pressure to buy nice stuff.

In other words, I think it’s harder to maintain your EA values when you’re earning to give vs. working at, e.g., an NGO. These challenges are then further compounded by:

  3. Selection bias. I suspect that the EA-interested people who are drawn to earning to give in the first place are more interested in having a bougie lifestyle (etc.) than the average EA who isn’t, and, correspondingly, that they’re more likely to be affected by (1) and (2).

Again, I think this post is missing nuance; for example:

  1. Induction of fetal demise is done through a variety of means: different medications are given (i.e., digoxin, lidocaine, or KCl) via different routes (i.e., intra-fetal vs. intra-amniotic). (Given that lidocaine is a painkiller, I could see a different version of this post compellingly making the case that to the extent clinicians have discretion in choosing what agents to use to induce fetal demise, they should prioritize using ones that are likely to have off-target analgesic effects.)
  2. So, the link you post refers to a small minority of abortions, as it's only routine to inject the amniotic fluid (specifically) with potassium chloride (specifically) prior to the delivery of anesthesia in some second-trimester abortions.
  3. Potassium chloride is a medication that's routinely given via IV to replete potassium. The dose has a significant effect on how painful this is, as does the route of administration; people tolerate oral potassium fine. Importantly, the fetus is not even being given KCl intravenously (vs. intra-amniotically or intra-fetally), so it's hard for me to infer from "it is sometimes painful to get KCl via IV" that it would be painful for a fetus to get potassium via a different route. Correspondingly, then, I don't think the claim that it "inflames the potassium ions in the sensory nerve fibers, literally burning up the veins as it travels to the heart" applies.