No comment here really engages with the core argument in Erik Hoel's text: utilitarianism must be the basis of EA. How would you measure effectiveness if not by maximizing utility?
I identify as an EA and have been drawn to utilitarianism my whole adult life, but I always got stuck on the seemingly impossible problem of comparing suffering and pleasure. I guess some form of rule utilitarianism is where I ended up (e.g., don't kill except to prevent a killing), but that doesn't completely solve the problem.
I really think EA should get involved in answering how best to deploy billions of dollars, not just focus on the marginal impact of millions of USD.