Bio

Participation
3

Working in healthcare technology and doing some independent AI alignment research on the side.

MSc in applied mathematics/theoretical ML.

Interested in increasing diversity, transparency and democracy in the EA movement. Would like to know how algorithm developers can help "neartermist" causes.

Comments
807

Strongly upvoted for the explanation and demonstration of how important peer review by subject matter experts is. I obviously can't evaluate either HLI's work or your review, but I think this is indeed a general problem in EA, where the culture is, for some reason, averse to standard practices of scientific publishing. This has to be rectified.

Thanks for the update! At least from what's described in the post, the team's research seems to have a higher impact potential than most of the AI safety field.

In particular I'm excited about:

  • Research directions being adjusted as you learn more
  • Attention given to governments and to China
  • Reasonable feedback loops
  • Ideas, almost all of which seem important, neglected and at least somewhat tractable

What would you define as 'failed'?

There are two straightforward examples from Charity Entrepreneurship of orgs that shut down for various reasons.

And there's Evidence Action's "No Lean Season" campaign which was shut down due to poor performance.

But depending on your definition, one could probably point out some more, e.g. AI safety orgs that have operated for a long time without producing any meaningful output that's related to current or foreseeable technology.

While I ostensibly agree regarding the activist community, I think this problem is probably not unique to the split between activism and the interventions that 'tip the scales' after the groundwork has been laid. Many systems have interdependent parts, some of which are easier to measure than others.

It's true that it doesn't reveal the right split - but on the other hand, it shines a light on how we in EA funnel money toward the easily measurable parts of complex interdependent systems, while neglecting to account for the less measurable parts.

I'm sadly skeptical about cultured meat, mostly because of reports I've read here on the forum but don't have the time to find at the moment.

There are many parts of this piece worth reading, especially for people who hold Singer in very high regard (and perhaps will stop seeing him that way after reading about his conduct with women*).

However, the part which is in my opinion most relevant to EA, and which sparked some doubts in me personally, is this:

[O]ne cannot always numerically calculate impact. Malcolm Gladwell spells that out in The Tipping Point, a book all activists should read. The Effective Altruism movement urges funders to donate to charities that can prove how many animals they help. One of the top recommendations is a group that urges food companies to stop using eggs from hens in battery cages. That effort will surely help end that one hideous farming practice and ease some of the suffering of billions of animals. But those approaching the companies would have no success if other activists weren’t changing public opinion, pushing the envelope, and putting societal pressure on those companies to at least make some improvements. Thanks to Effective Altruism, however, the guys negotiating the deals to get millions of animals bigger cages are grabbing the bulk of funding, while those changing the way society views animals, who can’t count the number of animals they have helped, are, by Effective Altruism standards, not worth funding.

Effective Altruism starves out the activists creating the sparks, and Peter Singer wonders why our movement isn’t lighting up the world.

*Edit: I don't know if the many down/disagreevotes are related to the "conduct with women" part or the "effective altruism" part, but I'll expand on the former: what I find especially damning, regardless of the exact details of his particular relationship with the author, is the accusation that he only gave important professional opportunities to women activists who had slept with him.

This is interesting, thank you.

Edit: maybe I'll add that I don't think fairness and transparency should diminish in importance at all. Lawmakers should address both the very certain problems that already affect us and those uncertain ones that might be even worse. A world where AI doesn't kill everyone but concentrates all the power in the hands of some rich person wouldn't be very nice.

I also doubt that the men in question actually speak honestly and with the same immediacy. The choice to say this and not something else is motivated by things other than honesty.

This kind of conclusion is a great example of why a totalist utilitarian view is absurd.
