
Aaron Gertler

18,349 karma · San Diego, CA, USA

Bio

I ran the Forum for three years. I'm no longer an active moderator, but I still provide advice to the team in some cases.

I'm a Communications Officer at Open Philanthropy. Before that, I worked at CEA, on the Forum and other projects. I also started Yale's student EA group, and I spend a few hours a month advising a small, un-Googleable private foundation that makes EA-adjacent donations.

Outside of EA, I play Magic: The Gathering semi-professionally and donate half my winnings (more than $50k in 2020) to charity.

Before my first job in EA, I was a tutor, a freelance writer, a tech support agent, and a music journalist. I blog, and keep a public list of my donations, at aarongertler.net.

Sequences

Part 1: The Effectiveness Mindset
Part 2: Differences in Impact
Part 3: Expanding Our Compassion
Part 4: Longtermism
Part 5: Existential Risk
Part 6: Emerging Technologies
Part 7: What Might We Be Missing?
Part 8: Putting it into Practice
The Motivation Series

Comments

I reached out to some of the people working to make this day happen to say a few things: one, thank you for being part of making the world a safer place; two, thank you for following through even after the work lost the public's attention; three, thank you for inspiring me to work in the same way.

This is outstanding!

For anyone reading who hasn't tried it, I highly recommend sending nice notes to strangers who do good things; it's a fun way to procrastinate, and it doesn't take long to write a compliment that will make someone happy.

In my experience, the term "radical empathy" isn't used very often when people explain these ideas to the public -- I more often see it used as shorthand within the community, as a quick way of referring to concepts that people are already familiar with.

In public communication, I see this kind of thing more often just called "empathy", or referred to in simple terms like "caring for everyone equally", "helping people no matter where they live", etc.

I wrote about getting rejected from jobs at GiveWell and Open Phil in this post.

Other rejections that shaped my career:

  • Shortly before graduating, I was rejected from Bridgewater, my dream job at that point, after a full-day interview at their office. I got a bit of feedback, along the lines of "you had trouble connecting the ground and the clouds" (however confusing that sounds, it captures how confused I felt at the time).
  • I was rejected by The Onion for a writing position, and by the New York Times when I submitted a review to their newsletter for student writers. I wrote in detail about both rejections, including the content I submitted. (I still kind of like the sample Onion articles I wrote, but most of the headlines are total garbage.)
    • More relevantly, I was rejected by Vox when they hired their first set of Future Perfect writers. I figured it was because I had no professional experience, but then they hired Kelsey Piper, who had no professional experience and turned out to be incredible. A memorable case of "huh, I did not realize how outclassed I was here".
  • When I was in college, I tried many times to get work published in non-student publications and succeeded only twice (something like a 5% hit rate). The two articles I pitched successfully took dozens of hours (one of them involved a multi-day trip) and earned me $100 in total, so maybe the other rejections were a blessing?
  • Despite many applications, I never secured a "real" summer internship in college — some combination of "bad interviewer" and "mediocre GPA", I think. Particularly painful was a rejection from Ideas42, a behavioral-science think tank, since I'd been reading several shelves' worth of relevant books in the year leading up to that. (At the time, I conflated "knowing a lot of things" with "good at doing work".)
  • I was also rejected by McKinsey, BCG, and various other "normal" consulting firms that seemed to be hiring hundreds of my fellow students.
    • Looking back through my journal, I'm reminded that part of this might have been the fact that I sent out ten or so resumes with the wrong link — instead of my homepage, I was sending recruiters to a random, somewhat ominous image hosted on my blog. Shudder.

I ran a contractor hiring round at CEA, and I tried to both share useful feedback and find work for some of the rejected candidates (at least one of whom wound up doing a bunch of other work for CEA and other orgs as a result). 

Given all the work I'd already put into sourcing and interviewing people interested in working for CEA, providing this additional value felt relatively "cheap", and I'd strongly recommend it for other people running hiring rounds in EA and similar spaces (that is, spaces where one person's success is also good for everyone else).

I work for Open Phil, which is discussed in the article. We spoke with Nitasha for this story, and we appreciate that she gave us the chance to engage on a number of points before it was published.

 

A few related thoughts we wanted to share:

  • The figure “nearly half a billion dollars” accurately describes our total spending in AI over the last eight years, if you think of EA and existential risk work as being somewhat but not entirely focused on AI — which seems fair. However, only a small fraction of that funding (under 5%) went toward student-oriented activities like groups and courses. 
  • The article cites the figure “as much as $80,000 a year” for what student leaders might receive. This figure is prorated: a student who takes a gap year to work full-time on organizing, in an expensive city, might get up to $80,000, but most of our grants to organizers are for much lower amounts. 
  • Likewise, while the figure “up to $100,000” is mentioned for group expenses, nearly all of the groups we fund have much lower budgets.
    • This rundown, shared with us by one American organizer, is a good example of a typical budget: ~$2,800 for food at events, ~$1,500 for books and other reading material, ~$200 for digital services (e.g. Zoom), and ~$3,000 for the group’s annual retreat — roughly $7,500 in total.
  • Regarding the idea, mentioned in the story, that AI safety is a distraction from present-day harms like algorithmic bias:
    • As Mike said in the piece, we think present-day harms deserve a robust response.
    • But just as concerns about catastrophic climate change scenarios like large-scale sea level rise are not seen as distractions from present-day climate harms like adverse weather events, we don’t think concerns about catastrophic AI harms distract from concerns about present-day harms. 
    • In fact, they can be mutually reinforcing. Harms like algorithmic bias are caused in part by the difficulty of getting AI systems to behave as their designers intend, which is the same thing that could lead to more extreme harms. Some of the same guardrails may work for everything on that continuum. In that sense, researchers and advocates working on AI safety and AI ethics are pulling in the same direction: toward policies and guardrails that protect society from these novel and growing threats.

 

We also want to express that we are very excited by the work of groups and organizers we’ve funded. We think that AI and other emerging technologies could threaten the lives of billions of people, and it’s encouraging to see students at universities around the world seriously engaging with ideas about AI safety (as well as other global catastrophic risks, such as future pandemics). These are sorely neglected areas, and we hope that today’s undergraduates and graduate students will become tomorrow’s researchers, governance experts, and advocates for safer systems.

For a few examples of what students and academics in the article are working on, we recommend:

Regarding point 2, I'd argue that both "honesty" and "non-violence" are implied by the actual text of the fourth principle on the page:

Collaborative spirit: It’s often possible to achieve more by working together, and doing this effectively requires high standards of honesty, integrity, and compassion. Effective altruism does not mean supporting ‘ends justify the means’ reasoning, but rather is about being a good citizen, while ambitiously working toward a better world.

I think this text, or something very similar, has been a part of this list since at least 2018. It directly calls out honesty as important, and I think the use of "compassion" and the discouragement of "ends justify the means" reasoning both point clearly towards "don't do bad things to other people", where "bad things" include (but are not limited to) violence.

I'm not a moderator, but I used to run the Forum, and I sometimes advise the moderation team.

While "disingenuous" could imply you think your interlocutor is deliberately lying about something, Eliezer seems to mean "I think you've left out an obvious counterargument". 

That claim feels different to me, and I don't think it breaks Forum norms (though I understand why JP disagrees, and it's not an obvious call):

  • I don't want people to deliberately lie on the Forum. However, I don't think we should expect them to always list even the most obvious counterarguments to their points. We have comments for a reason!
  • I'm more bothered by criticism that accuses an author of norm-breaking ("seems dishonest") than criticism that merely accuses them of not putting forward maximal effort ("seems to leave out X").
    • To get deeper into this: I read "seems dishonest" as an attack — it implies that the author did something seriously wrong and should be downvoted or warned by mods. I read "seems to leave out X" as an invitation to an argument.
    • The ambiguity of "disingenuous" means I'd prefer to see people get more specific. But while I wish Eliezer hadn't used the word, I also think he successfully specified what he meant by it, and the overall comment didn't feel like an attack (to me, a bystander; obviously, an author might feel differently).

***** 

I don't blame anyone who wants to take a break from Forum writing for any reason, including feeling discouraged by negative comments. Especially when it's easy to read "seems disingenuous" as "you are lying".

But I think the Forum will continue to have comments like Eliezer's going forward. And I hope that, in addition to pushing for kinder critiques, we can maintain a general understanding on the Forum that a non-kind critique isn't necessarily a personal attack.

(Ted, if you're reading this: I think that Eliezer's argument is reasonable, but I also think that yours was a solid post, and I'm glad we have it!)

*****

The Forum has a hard balance to strike:

  • I think the average comment is just a bit less argumentative / critical than would be ideal.
  • I think the average critical comment is less kind than would be ideal.
  • I want criticism to be kind, but I also want it to exist, and pushing people to be kinder might also reduce the overall quantity of criticism. I'm not sure what the best realistic outcome is.

I've watched every episode of Taskmaster and found this post utterly delightful. Hope the next event is a smash!

I think this is a reasonable question to ask here; at least a few Forum users know a lot about malaria.

But the Forum is small, and you might have better luck at r/AskDocs (which likely has at least a few hundred users with similar knowledge). I hope you find some helpful answers!
