
Benquo

23 karma · Joined

Comments (3)

The relevance to EA is that we have this problem where we try to help people by looking at what they say they want (or even what they demonstrably want), but sometimes those preferences are artifacts of threat models rather than actual desires. In the post's primate example, low-status males aren't actually uninterested in mating; they're performing disinterest to avoid punishment. This matters because a lot of EA work involves studying revealed preferences in contexts with strong power dynamics (development economics, animal welfare, etc.). If we miss those dynamics, we risk optimizing for the very coercive equilibria we're trying to fix.

Ben_Todd, it seems to me like you're saying both these things:

  • GWWC is very busy and can’t reasonably be expected to write up all or most of the important considerations around things like whether or not to take the GWWC Pledge.

  • Considerations around the pledge are in GWWC's domain, and sensitive, so people should check in with GWWC privately before discussing them publicly, and failing to do so is harmful in expectation.

I'm having a hard time reconciling these. In particular, it seems like if you make both these claims, you're basically saying that EAs shouldn't publicly criticize the pledge without GWWC's permission because that undercuts GWWC's goals. That seems very surprising to me. Am I misunderstanding you?

This is awesome! I'm going to try this out next time I get to explain effective altruism to someone.

(I originally wrote "have to explain," but in the spirit of this article I rewrote it as "get to" before posting.)