I graduated from Georgetown University in December 2021 with degrees in economics and mathematics and a minor in philosophy. There, I founded and helped to lead Georgetown Effective Altruism. Over the last few years, I've interned at the Department of the Interior, the Federal Deposit Insurance Corporation, and Nonlinear.
Blog: aaronbergman.net
@MHR🔸, @Laura Duffy, @AbsurdlyMax, and I have been raising money for the EA Animal Welfare Fund on Twitter and Bluesky, and today is the last day to donate!
If we raise $3k more today, I will transform my room into an EA paradise complete with OWID charts across the walls, a literal bednet, a shrine, and more (and of course post all this online)! Consider donating if and only if you wouldn't use the money for a better purpose!
See some more fun discussion and such by following the replies and quote-tweets here:
I was hoping he’d say so himself, but @MathiasKB (https://forum.effectivealtruism.org/users/mathiaskb) is our lead!
But I think you’re basically spot-on; we’re like a dozen people in a Slack, all with relatively low capacity for various reasons, trying to bootstrap a legit organization.
The “bootstrap” analogy is apt here because we are basically trying to hire the leadership/managerial and operational capacity that is generally required to do things like “run a hiring round,” if that makes any sense.
So yeah, the idea is volunteers run a hiring round, and my sense is that some of the haziness of the picture comes from the fact that what thing(s) we’ll be hiring for depends largely on how much money we’ll be able to raise, which is what we’re trying to suss out right now.
All this is complicated by the fact that everyone involved has their own takes, and as a sort of proto-organization we lack the decision-making and communications procedures and infrastructure that allow orgs like OpenPhil/Apple/the Supreme Court to act as coherent, unified agents. Like, I personally think we should strongly prioritize hiring a full-time lead, but I think others disagree, and I don’t want to claim to speak for SFF!
And thanks for surfacing a sort of hazy set of considerations that I suspect others were also wondering about, if implicitly!
To expand a bit on the funding point (and speaking for myself only):
I’d consider the $15k–$100k range to be what makes sense for a preliminary funding round, taking into account the high opportunity cost of EA animal welfare funding dollars. This is to say that I think SFF could in fact use much more than that, but the merits and cost-effectiveness of the project will be a lot clearer after spending this first $100k; it is in large part paying for value of information.
Again speaking for myself only, my inside view is that the $100k figure is too low of an upper bound for preliminary funding; maybe I’d double it.
Speaking for myself (not other coauthors), I agree that $15k is low and would describe that as the minimum plausible amount to hire for the roles described (in part because of the willingness of at least one prospective researcher to work for quite cheap compared to what I perceive as standard among EA orgs, even in animal welfare).
IIRC I threw the $100k amount out as a reasonable amount we could ~promise to deploy usefully in the short term. It was a very hasty BOTEC-type take by me: something like $30k for the roles described + $70k for a full-time project lead.
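For concreteness, here's that BOTEC as a tiny Python sketch. The line items and the split are just my own rough assumptions from the comment above, not an official budget:

```python
# Hasty BOTEC behind the $100k figure (line items are rough assumptions,
# not official budget numbers).
budget = {
    "roles_described_in_post": 30_000,  # the part-time research roles
    "full_time_project_lead": 70_000,   # assumed roughly a year at a modest EA-org salary
}

total = sum(budget.values())
print(f"Preliminary round total: ${total:,}")  # -> Preliminary round total: $100,000
```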
~All of the EV from the donation election probably comes from nudging OpenPhil toward the realization that they're pretty dramatically out of line with "consensus EA" in continuing to give most marginal dollars to global health. If this was explicitly thought through, brilliant.
(See this comment for sourcing and context on the table, which was my attempt to categorize all OP grants not too long ago)
Largely but not entirely informed by https://manifold.markets/AaronBergman18/what-donationaccepting-entity-eg-ch#sU2ldPLZ0Edd
Re: a recent quick take in which I called on OpenPhil to sue OpenAI: a new document in Musk's lawsuit mentions this explicitly (page 91)
A little while ago I posted this quick take:
I didn't have a good response to @DanielFilan, and I'm pretty inclined to defer to orgs like CEA to make decisions about how to use their own scarce resources.
At least for EA Global Boston 2024 (which ended yesterday), there was the option to pay a "cost covering" ticket fee (which I'm told is $1000).[1]
All this is to say that I am now more confident (although still <80%) that marginal rejected applicants who are willing to pay their cost-covering fee would be good to admit.[2]
In part this stems from an only semi-legible background stance that, on the whole, less impressive-seeming people have more ~potential~ and more to offer than I think "elite EA" (which would include those running EAG admissions) tends to think. And this, in turn, has a lot to do with the endogeneity/path dependence of what I'd hastily summarize as "EA involvement."
That is, many (most?) people need a break-in point to move from something like "basically convinced that EA is good, interested in the ideas and consuming content, maybe donating 10%" to anything more ambitious.
For some, that comes in the form of going to an elite college with a vibrant EA group/community. Attending EAG is another—or at least could be. But if admission is dependent on doing the kind of things and/or having the kinds of connections that a person might only pursue after getting on such an on-ramp, you have a vicious cycle of endogenous rejection.
The impetus for writing this is seeing a person who was rejected with some characteristics that seem plausibly pretty representative of a typical marginal EAG rejectee:
Of course n=1, this isn't a tremendous amount of evidence, I don't have strictly more information than the admissions folks, the optimal number of false negatives is not zero, etc., etc. But if a person with the above characteristics who is willing to write a reasonably thoughtful application and spend their personal time and money traveling to and taking part in EA Global (and, again, covering their cost)[3] is indeed likely to get rejected, I just straightforwardly think that admission has too high a bar; does CEA really think such a person is actively harmful to the event on net?
I don't want to say that there is literally zero potential downside from admitting more people and "diluting the attendee pool," for lack of a more thoughtful term, but it's not immediately obvious to me what that downside would be, especially at the current margin (say, a 25% increase in the number of attendees, not a 2000% increase). And, needless to say, there is a lot of potential upside, both via the impact of the marginal attendee themselves and via the information/experience/etc. that they bring to the whole group.
I'm not sure if this was an option when I wrote my previous take, and if so whether I just didn't know about it or what.
To be clear, none of this is because I've ever been rejected (I haven't). Kinda cringe to say, but worth the clarification I think.
If this were an econ paper, I'd probably want to discuss the fiscal relevance of marginal vs average cost-covering tickets. I suspect that the "cost covering ticket" actually advertised is based on the average cost, but I'm not sure.
If this is true, and marginal cost < average cost as seems intuitive, then admitting a marginal attendee who then pays the average cost would be financially net-positive for CEA.
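To make that logic concrete, here's a toy sketch in Python. Every number is invented for illustration; I have no idea what CEA's actual cost structure looks like:

```python
# Toy illustration of the marginal- vs. average-cost point.
# All figures below are invented; they are not CEA's real numbers.
fixed_costs = 200_000   # venue, staff, production: paid regardless of headcount
marginal_cost = 400     # incremental catering/badge/space cost per extra attendee
attendees = 1_000

# Average cost per attendee includes a share of the fixed costs.
average_cost = fixed_costs / attendees + marginal_cost  # $600 per attendee here

# If the advertised "cost covering" ticket is priced at average cost, each
# marginal admit who pays it nets CEA (average_cost - marginal_cost) > 0.
net_per_marginal_admit = average_cost - marginal_cost
print(f"average cost ~ ${average_cost:,.0f}; net gain per marginal admit ~ ${net_per_marginal_admit:,.0f}")
```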
I made it into an audiobook
It's almost certainly imperfect; please feel free to let me know if/when you catch a mistake or anything off or distracting.
Haha you’re welcome!