Aaron Bergman

2153 karma · Joined · Working (0-5 years) · Washington, DC, USA
aaronbergman.neocities.org/

Bio

Participation
4

I graduated from Georgetown University in December 2021 with degrees in economics and mathematics and a minor in philosophy. There, I founded and helped to lead Georgetown Effective Altruism. Over the last few years, I've interned at the Department of the Interior, the Federal Deposit Insurance Corporation, and Nonlinear.

Blog: aaronbergman.net

How others can help me

  • Give me honest, constructive feedback on any of my work
  • Introduce me to someone I might like to know :)
  • Offer me a job if you think I'd be a good fit
  • Send me recommended books, podcasts, or blog posts that there's like a >25% chance a pretty-online-and-into-EA-since-2017 person like me hasn't consumed
    • Rough rule of thumb: maybe "at least as good/interesting/useful as a random 80k podcast episode"

How I can help others

  • Open to research/writing collaboration :)
  • Would be excited to work on impactful data science/analysis/visualization projects
  • Can help with writing and/or editing
  • Discuss topics I might have some knowledge of
    • like: math, economics, philosophy (esp. philosophy of mind and ethics), psychopharmacology (hobby interest), helping to run a university EA group, data science, interning at government agencies

Comments
153

Topic contributions
1

Re: a recent quick take in which I called on OpenPhil to sue OpenAI: a new document in Musk's lawsuit mentions this explicitly (page 91)

A little while ago I posted this quick take: 

I didn't have a good response to @DanielFilan, and I'm pretty inclined to defer to orgs like CEA to make decisions about how to use their own scarce resources. 

At least for EA Global Boston 2024 (which ended yesterday), there was the option to pay a "cost-covering" ticket fee (which I'm told is $1,000).[1]

All this is to say that I am now more confident (although still <80%) that marginal rejected applicants who are willing to pay their cost-covering fee would be good to admit.[2]

In part this stems from an only semi-legible background stance that, on the whole, less impressive-seeming people have more ~potential~ and more to offer than I think "elite EA" (which would include those running EAG admissions) tends to think. And this, in turn, has a lot to do with the endogeneity/path dependence of what I'd hastily summarize as "EA involvement."

That is, many (most?) people need a break-in point to move from something like "basically convinced that EA is good, interested in the ideas and consuming content, maybe donating 10%" to anything more ambitious.

For some, that comes in the form of going to an elite college with a vibrant EA group/community. Attending EAG is another—or at least could be. But if admission depends on doing the kinds of things and/or having the kinds of connections that a person might only pursue after getting on such an on-ramp, you have a vicious cycle of endogenous rejection.

The impetus for writing this is seeing a person get rejected whose characteristics seem plausibly pretty representative of a typical marginal EAG rejectee:

  • College educated but not via an elite university
  • Donates 10%, mostly to global health
  • Normal-looking middle or upper-middle class career
  • Interested in EA ideas but not a huge amount to show for it
  • Never attended EAG

Of course, n=1, this isn't a tremendous amount of evidence, I don't have strictly more information than the admissions folks, the optimal number of false negatives is not zero, etc., etc. But if a person with the above characteristics who is willing to write a reasonably thoughtful application and spend their personal time and money traveling to and taking part in EA Global (and, again, covering their cost)[3] is indeed likely to get rejected, I just straightforwardly think the bar for admission is too high; does CEA really think such a person is actively harmful to the event on net?

I don't want to say that there is literally zero potential downside from admitting more people and "diluting the attendee pool," for lack of a more thoughtful term, but it's not immediately obvious to me what that downside would be, especially at the current margin (say, a 25% increase in the number of attendees, not a 2000% increase). And, needless to say, there is a lot of potential upside, both via the impact of the marginal attendee themselves and via the information/experience/etc. that they bring to the whole group.

  1. ^

    I'm not sure if this was an option when I wrote my previous take, and if so whether I just didn't know about it or what.

  2. ^

    To be clear none of this is because I've ever been rejected (I haven't). Kinda cringe to say but worth the clarification I think.

  3. ^

If this were an econ paper, I'd probably want to discuss the fiscal relevance of marginal vs. average cost-covering tickets. I suspect that the "cost-covering ticket" price actually advertised is based on the average cost, but I'm not sure.

    If this is true, and marginal cost < average cost as seems intuitive, then admitting a marginal attendee who then pays the average cost would be financially net-positive for CEA.
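A minimal sketch of that point, with made-up numbers (the $1,000 figure is the advertised ticket price from above; the $400 marginal cost is a purely hypothetical placeholder):

```python
# Toy illustration: if the advertised "cost-covering" ticket reflects average
# cost, and the true marginal cost of one more attendee is lower, then each
# additional admit who pays that ticket price is financially net-positive.

def net_effect_of_marginal_attendee(ticket_price: float, marginal_cost: float) -> float:
    """Organizer's net revenue from admitting one more paying attendee."""
    return ticket_price - marginal_cost

# $1,000 advertised cost-covering ticket (from above); $400 marginal cost is
# a made-up stand-in for venue/catering/etc.
print(net_effect_of_marginal_attendee(1_000, 400))  # 600 -> net-positive for CEA
```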

I made it into an audiobook

It's almost certainly imperfect - please feel free to let me know if/when you catch a mistake or anything off or distracting

A few theses that may turn into a proper post:
 

1. Marginal animal welfare cost-effectiveness seems to robustly beat global health interventions. The multiplier may look more like 5x or 1000x, but it is very hard indeed to get that number below 1 (I do think both are probably in fact good ex ante, at least, so I think the number is positive).

To quote myself from this comment:


@Laura Duffy's recently published risk aversion analysis (for Rethink Priorities) basically does a lot of the heavy lifting here (bolding mine):

Spending on corporate cage-free campaigns for egg-laying hens is robustly[8] cost-effective under nearly all reasonable types and levels of risk aversion considered here. 

  1. "Using welfare ranges based roughly on Rethink Priorities’ results, spending on corporate cage-free campaigns averts over an order of magnitude more suffering than the most robust global health and development intervention, Against Malaria Foundation. This result holds for almost any level of risk aversion and under any model of risk aversion."

2. The difference in magnitude of cost-effectiveness (under any plausible understanding of what that means) between MakeAWish (or personal consumption spending, for that matter) and AMF is smaller than between AMF (or pick your favorite) and The Humane League or AWF.

So it is more important to convince someone who is currently giving to AMF to give instead to, e.g., the EA Animal Welfare Fund than to convince a non-donor to give that same amount of money to AMF.

At least to me, this seems counterintuitive, contrary to vibes and social/signaling effects, and also robustly true.
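To make thesis 2 concrete, here's a toy calculation with entirely made-up cost-effectiveness numbers (arbitrary "welfare units per dollar"; none of these are real estimates):

```python
# Hypothetical cost-effectiveness, in arbitrary welfare units per dollar.
MAKE_A_WISH = 1          # stand-in for personal consumption / low-impact giving
AMF = 100                # global health benchmark
ANIMAL_WELFARE = 10_000  # stand-in for THL / EA Animal Welfare Fund

donation = 1_000  # dollars

# Convincing a non-donor (or MakeAWish donor) to give to AMF instead:
gain_to_amf = (AMF - MAKE_A_WISH) * donation        # 99,000 units
# Convincing an existing AMF donor to give to animal welfare instead:
gain_to_animal = (ANIMAL_WELFARE - AMF) * donation  # 9,900,000 units

print(gain_to_amf, gain_to_animal)
```

On numbers anything like these, the second switch dominates, which is the (counterintuitive) point.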

3. What people intuitively think of as the "certainty" that comes along with AMF et al. doesn't really exist. To quote my own tweet:

Yes, you can get robust estimates for "E[short run human deaths by malaria prevented per marginal dollar]" but the 2nd, ...,nth order effects don't disappear (or cancel out by default!) just bc u choose not to try to model them

4. The tractability of the two cause areas is similar...

5. But animal welfare receives way less funding. From the same comment as above:

Under standard EA "on the margin" reasoning, this shouldn't really matter, but I analyzed Open Phil's grants data and found that human GCR has been consistently funded 6-7x more than animal welfare (here's the tweet thread this is from).
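A rough sketch of how one might reproduce this from a CSV export of Open Phil's public grants database (the column names and the focus-area buckets below are illustrative assumptions, not the exact ones I used):

```python
import pandas as pd

# Assumed local export of Open Phil's grants database; "Focus Area" and
# "Amount" are guessed column names, and amounts are assumed already numeric.
grants = pd.read_csv("openphil_grants.csv")

# Illustrative mapping from focus areas into coarse buckets.
BUCKETS = {
    "Potential Risks from Advanced AI": "Human GCR",
    "Biosecurity & Pandemic Preparedness": "Human GCR",
    "Farm Animal Welfare": "Animal Welfare",
}
grants["bucket"] = grants["Focus Area"].map(BUCKETS)

totals = grants.dropna(subset=["bucket"]).groupby("bucket")["Amount"].sum()
print(totals)
print("GCR / animal welfare ratio:", totals["Human GCR"] / totals["Animal Welfare"])
```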

From Reuters:

SAN FRANCISCO, Sept 25 (Reuters) - ChatGPT-maker OpenAI is working on a plan to restructure its core business into a for-profit benefit corporation that will no longer be controlled by its non-profit board, people familiar with the matter told Reuters, in a move that will make the company more attractive to investors.

I sincerely hope OpenPhil (or Effective Ventures, or both - I don't know the minutiae here) sues over this. Read the reasoning for and details of the $30M grant here.

The case for a legal challenge seems hugely overdetermined to me:

  • Stop/delay/complicate the restructuring, and otherwise make life appropriately hard for Sam Altman
  • Settle for a large amount of money that can be used to do a huge amount of good
  • Signal that you can't just blatantly take advantage of OpenPhil/EV/EA as you please without appropriate challenge

I know OpenPhil has a pretty hands-off ethos and vibe; this shouldn't stop them from acting with integrity when hands-on legal action is clearly warranted.

Thanks, and I think your second footnote makes an excellent distinction that I failed to get across well in my post.

I do think it’s at least directionally an “EA principle” that “best” and “right” should go together, although of course there’s plenty of room for critiques of naive first-order calculation, and for heuristics/intuitions/norms that might push against some less nuanced understanding of “best”.

I still think there’s a useful conceptual distinction to be made between these terms, but maybe those ancillary (for lack of a better word) considerations relevant to what one thinks is the “best” use of money blur the line enough to make it too difficult to distinguish these in practice.

Re: your last paragraph, I want to emphasize that my dispute is with the phrase “using EA principles”. I have no doubt whatsoever about the first part, “genuinely interested in making the world better”.

Thanks, it’s possible I’m mistaken about the degree to which “direct resources to the place you think needs them most” is a consensus-EA principle.

Also, I recognize that "genuinely interested in making the world better using EA principles” is implicitly value-laden, and to be clear I do wish it were more the case. But I also genuinely intend my claim to be an observation that might have pessimistic implications (depending on other beliefs people may have), rather than an insult or anything like it, if that makes any sense.

There's a question on the forum user survey:

How much do you trust other EA Forum users to be genuinely interested in making the world better using EA principles?

This is one thing I've updated down quite a bit over the last year. 

It seems to me that relatively few self-identified EA donors mostly or entirely give to the organization (or whatever else) that they would explicitly endorse as the single best recipient of a marginal dollar (do others disagree?).

Of course, the more important question is whether most EA-inspired dollars are given in such a way (rather than most donors). Unfortunately, I think the answer to this is "no" as well, seeing as OpenPhil continues to donate a majority of dollars to human global health and development[1] (I threw together a Claude artifact that lets you get a decent picture of how OpenPhil has funded cause areas over time and in aggregate).[2]

Edit: to clarify, it could be the case that others have object-level disagreements about what the best use of a marginal dollar is. Clearly this is sometimes the case, but it's not what I am getting at here. I am trying to get at the phenomenon where people implicitly say/reason, "yes, EA principles imply that the best thing to do would be to donate to X, but I am going to donate to Y instead." I'm guessing this mostly takes the form of people failing to endorse that their donations are optimally directed, rather than doing a bunch of ground-up reasoning and then deciding to ignore the conclusion, though.

  1. ^

     See Open Phil Should Allocate Most Neartermist Funding to Animal Welfare for a sufficient but not necessary case against this.

  2. ^

The data is a few days old and there's a bit of judgement involved in how to bin various subcategories of grants, but I doubt the general picture would change much if others redid the analysis/binning.

A couple takes from Twitter on the value of merch and signaling that I think are worth sharing here:

1) 

2) 

Boy do I have a website for you (twitter.com)!

(I unironically like twitter for the lower stakes and less insanely high implicit standards)

On mobile now so can’t add image but https://x.com/aaronbergman18/status/1782164275731001368?s=46
