Aaron Bergman

1834 karma · Joined · Working (0-5 years) · Washington, DC, USA
aaronbergman.neocities.org/

Bio

Participation
4

I graduated from Georgetown University in December 2021 with degrees in economics and mathematics and a minor in philosophy. There, I founded and helped lead Georgetown Effective Altruism. Over the last few years, I've interned at the Department of the Interior, the Federal Deposit Insurance Corporation, and Nonlinear.

Blog: aaronbergman.net

How others can help me

  • Give me honest, constructive feedback on any of my work
  • Introduce me to someone I might like to know :)
  • Offer me a job if you think I'd be a good fit
  • Send me recommended books, podcasts, or blog posts that there's like a >25% chance a pretty-online-and-into-EA-since-2017 person like me hasn't consumed
    • Rule of thumb: maybe "at least as good/interesting/useful as a random 80k podcast episode"

How I can help others

  • Open to research/writing collaboration :)
  • Would be excited to work on impactful data science/analysis/visualization projects
  • Can help with writing and/or editing
  • Discuss topics I might have some knowledge of
    • like: math, economics, philosophy (esp. philosophy of mind and ethics), psychopharmacology (hobby interest), helping to run a university EA group, data science, interning at government agencies

Comments
149

Topic contributions
1

Thanks, and I think your second footnote makes an excellent distinction that I failed to get across well in my post.

I do think it's at least directionally an "EA principle" that "best" and "right" should go together, although of course there's plenty of room for critiques of naive first-order calculation, and for heuristics/intuitions/norms that might push against a less nuanced understanding of "best".

I still think there’s a useful conceptual distinction to be made between these terms, but maybe those ancillary (for lack of a better word) considerations relevant to what one thinks is the “best” use of money blur the line enough to make it too difficult to distinguish these in practice.

Re: your last paragraph, I want to emphasize that my dispute is with the terms "using EA principles". I have no doubt whatsoever about the first part, "genuinely interested in making the world better".

Thanks, it’s possible I’m mistaken over the degree to which “direct resources to the place you think needs them most” is a consensus-EA principle.

Also, I recognize that "genuinely interested in making the world better using EA principles" is implicitly value-laden, and to be clear, I do wish it were more the case. But I also genuinely intend my claim as an observation that might have pessimistic implications depending on other beliefs people hold, rather than as an insult or anything like it, if that makes sense.

There's a question on the forum user survey:

How much do you trust other EA Forum users to be genuinely interested in making the world better using EA principles?

This is one thing I've updated down quite a bit over the last year. 

It seems to me that relatively few self-identified EA donors mostly or entirely give to the organization/whatever that they would explicitly endorse as being the single best recipient of a marginal dollar (do others disagree?).

Of course, the more important question is whether most EA-inspired dollars are given in such a way (rather than most donors). Unfortunately, I think the answer to this is "no" as well, seeing as Open Phil continues to direct a majority of its dollars to human global health and development.[1] (I threw together a Claude artifact that gives a decent picture of how Open Phil has funded cause areas over time and in aggregate.)[2]

Edit: to clarify, it could be the case that others have object-level disagreements about what the best use of a marginal dollar is. Clearly this is sometimes the case, but it's not what I am getting at here. I am trying to get at the phenomenon where people implicitly say/reason "yes, EA principles imply that the best thing to do would be to donate to X, but I am going to donate to Y instead." I'm guessing this mostly takes the form of people failing to endorse that their donations are optimally directed, rather than doing a bunch of ground-up reasoning and then deciding to ignore the conclusion, though.

  1. ^

     See Open Phil Should Allocate Most Neartermist Funding to Animal Welfare for a sufficient but not necessary case against this.

  2. ^

    Data is a few days old, and there's a bit of judgement involved in how to bin various subcategories of grants, but I doubt the general picture would change much if others redid the analysis/binning.
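For concreteness, the binning step mentioned in the footnote could look something like the sketch below. The cause-area bins and grant rows here are made up for illustration; the real analysis used Open Phil's public grants data, and the actual subcategory-to-bin mapping is the judgement call in question.

```python
# Illustrative sketch: mapping fine-grained grant subcategories into
# coarse cause-area bins, then aggregating dollars by (year, bin).
# All rows and bin assignments below are hypothetical.

from collections import defaultdict

# (year, subcategory, amount_usd) rows -- made-up example values
grants = [
    (2022, "Malaria", 30_000_000),
    (2022, "Farm Animal Welfare", 10_000_000),
    (2023, "Vaccines", 25_000_000),
    (2023, "Broiler Chicken Welfare", 5_000_000),
]

# The judgement call: which coarse bin each subcategory belongs to
BINS = {
    "Malaria": "Global Health & Development",
    "Vaccines": "Global Health & Development",
    "Farm Animal Welfare": "Animal Welfare",
    "Broiler Chicken Welfare": "Animal Welfare",
}

totals = defaultdict(int)
for year, subcategory, amount in grants:
    totals[(year, BINS[subcategory])] += amount

for (year, cause_bin), amount in sorted(totals.items()):
    print(f"{year}  {cause_bin}: ${amount:,}")
```

Redoing the binning just means editing the `BINS` mapping and re-aggregating, which is why different reasonable mappings shouldn't change the overall picture much.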

A couple takes from Twitter on the value of merch and signaling that I think are worth sharing here:

1) [embedded tweet not captured in this export]

2) [embedded tweet not captured in this export]

FYI, talks from EA Global (or at least those that are public on YouTube) are on a podcast feed for your listening convenience!

This was mentioned a few weeks ago but thought it was worth advertising once more with the links and such in a top-level post. 

Recently updated the feed with a few dozen talks from the recent EA Global London and EAGxAustralia 2023. Comments and suggestions welcome, of course.

Boy do I have a website for you (twitter.com)!

(I unironically like twitter for the lower stakes and less insanely high implicit standards)

On mobile now so can’t add image but https://x.com/aaronbergman18/status/1782164275731001368?s=46

Yes, this seems like an extremely good thing to do and I hope someone takes it on. LEEP and Shrimp Welfare Project seem like good models for what a small team can do.

I am not from South America and am not well-positioned to work on this, but somebody really should do something. There's lots to do.

Same, (sadly), but I'd be very happy to donate to such an effort (even something very early stage, pre 501c3 incorporation) and would try my best to get others to donate as well. 

I'd also emphasize that screwworm elimination is the only intervention I know of in its class - that is, an animal welfare intervention with persistent, long-ish term impact. As I wrote in a shortform post a while back:

More, if you think there’s a non-trivial chance of human disempowerment, societal collapse, or human extinction in the next 10 years, this would be important to do ASAP because we may not be able to later.

Seems important to me. 

Automated interface between Twitter and the Forum (eg a bot that, when tagged on twitter, posts the text and image of a tweet on Quick Takes and vice versa)
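A minimal sketch of the tweet-to-Quick-Take direction of such a bot, assuming the hard parts (Twitter mention webhooks, Forum posting) are handled elsewhere. All names here (`Tweet`, `to_quick_take`) are hypothetical and not part of any real API; this only shows the payload translation step.

```python
# Hypothetical sketch: render a tagged tweet's text and images as
# Markdown suitable for posting to the Forum's Quick Takes.
# The Tweet type and field names are illustrative, not a real API.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Tweet:
    author: str               # tweet author's handle
    text: str                 # tweet body
    url: str                  # permalink to the tweet
    image_urls: List[str] = field(default_factory=list)

def to_quick_take(tweet: Tweet) -> str:
    """Translate a tweet payload into Quick Take Markdown."""
    lines = [tweet.text, ""]
    for img in tweet.image_urls:          # carry over attached images
        lines.append(f"![image]({img})")
    lines.append(f"*Cross-posted from [Twitter]({tweet.url}) by @{tweet.author}*")
    return "\n".join(lines)
```

The reverse direction (Quick Take to tweet) would be the same idea with the translation inverted, plus whatever length truncation Twitter requires.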
