Jeff Kaufman 🔸

Co-Lead (Near-Term Detection) @ Nucleic Acid Observatory
15,479 karma · Working (15+ years) · Somerville, MA, USA
www.jefftk.com

Bio


GWWC board member, software engineer in Boston, parent, musician. Switched from earning to give to direct work in pandemic mitigation. Married to Julia Wise. Speaking for myself unless I say otherwise. Full list of EA posts: jefftk.com/news/ea 

Comments (974)

Good post, thanks for writing it!

A quibble:

we should have different expectations for a 20-person organization with a $1 million budget than a 2-person $100,000 budget organization.

I know this is a sketch, but even if 100% of costs are labor, both of these come out to fully-loaded costs of $50k/employee, which seems quite low to me?
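Spelling out the arithmetic behind that quibble, under the quoted sketch's implicit assumption that labor is the only cost:

\[
\frac{\$1{,}000{,}000}{20\ \text{people}} = \$50{,}000\ \text{per person}
\qquad\text{and}\qquad
\frac{\$100{,}000}{2\ \text{people}} = \$50{,}000\ \text{per person}
\]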

As someone who has raised funds from larger funders and is currently considering participating in marginal funding week, I don't think that would work very well:

  • Our main funders have a lot of context on our work, and so our grant applications are missing a lot of information that a typical Forum reader would need. This includes basic stuff like "what problem are you trying to solve?"

  • Because we have engaged with these funders previously, portions of a funding request can be discussions of specific issues they have previously raised, which might be pretty in the weeds for a Forum reader and require extra context.

  • There is a lot of information you can share in a private grant request that you can't make public. For example, specific quotes you've received from potential partners on pricing, some kinds of strategic planning, potential partnership opportunities, or frank assessments of the capabilities of other organizations.

  • Writing for public consumption requires more attention to how a wide range of potential readers, including both low-context Forum readers and potential partners, would interpret things.

I was specifically asking (and am still wondering) whether you stand by every individual point in your original post, such that it would be worth it for me to write a point-by-point response.

(Sometimes people give high-level instructions to an LLM and get output where they're willing to stand by the general message, but some of the specific claims aren't actually what they believe. The same thing can also happen when hiring people: if I were trying to deeply engage with a company on one of their policies, it wouldn't be productive to write a point-by-point response to an answer I'd received from a first-line support representative.)

I'd be much more interested in reading your prompts to ChatGPT than the output it produced. I suspect this would make it much easier for me (and others) to understand your position.

I'm confused: this seems to me to be a restatement of your main point and not a response to my question?

I think the average community member is pretty savvy, and the community's demonstrated deliberative skill in evaluating funding issues seems pretty strong.

I don't know, this seems overly optimistic to me. The average community member doesn't come in with much skill in evaluating nascent orgs, and is unlikely to get the kind of practice-with-feedback that would allow them to develop this skill.

people deferring somewhat to a ~randomly selected community screening jury (which could hopefully be at least medium-context)

Donor lottery winners?

Or, less flippantly, this seems to me what EA Funds and the other granting groups that give seed funding do.

I do think there are cases where someone has a good idea that isn't a good match for any of these funders (ex: the Global Health and Development Fund isn't accepting applications) or where the grantmakers are overworked, not omniscient, and not able to consider everything that they would ideally fund. In these cases I do think making a public case is good, but then it should either look like:

  • An appeal for "angels" who are interested in engaging somewhat deeply with the org to advise and fund it.
  • An appeal for seed funders that gives enough detail that they can make an informed decision without personal engagement. I think @Habiba Banu and Roxanne Heston's post "Spiro - New TB charity raising seed funds" is an example of doing this well.

generated what I wanted to say

Overall, do you stand by your comment? If I wrote a point-by-point response would some points get a "that's just something the LLM put in because it seemed plausible and isn't actually my view"?

Diana Fleischman, an evolutionary psychologist at the University of New Mexico, has a part-time role hosting Aporia’s podcast, and is the author of an article on the website headlined: “You’re probably a eugenicist.”

That article (Aporia: You're probably a eugenicist) seems to be the same article she has on her Substack (Dissentient: You're probably a eugenicist) and that you refer to above (EA Forum: Most people endorse some form of 'eugenics'), which was also initially titled the same.

Which is to say: don't double-count, and don't treat the non-linked "You're probably a eugenicist" as if it has worse content than the linked "Most people endorse some form of 'eugenics'".

Your argument that you would effectively be forced into becoming an anti-animal advocate if you convincingly wrote up your views - sorry I don’t really buy it.

I would be primarily known as an anti-animal advocate if I wrote something like this, even if I didn't want to be.

On whether I would need to put my time into continuing to defend the position, I agree that I strictly wouldn't have to, but I think that given my temperament and interaction style I wouldn't actually be able to avoid this. So I need to think of this as if I am allocating a larger amount of time than what it would take to write up the argument.

Ah, thank you for clarifying! That is a much stronger sense of "doing a good job" than I was going for. I was trying to point at something like, successfully writing up my views in a way that felt like a solid contribution to the discourse: explaining what I thought, why I thought it, and why I didn't find the standard counterarguments convincing. I think this would probably take me about two months of full-time work, so a pretty substantial opportunity cost.

I think I could do this well enough to become the main person people pointed at when they wanted to give an example of a "don't value animals" EA (which would probably be negative for my other work), but even major success here would probably only result in convincing <5% of animal-focused EAs to change what they were working on. And much less than that for money, since most of the EA money is from OP, which funds animal work as part of an explicit process of worldview diversification.
