Brad West

Comments
Strong disagree. If the proponent of an intervention/cause area believes its advancement is so high-EV that it would be very imprudent for EA resources not to advance it, they should use strong language.

I think EAs are too eager to hedge and to use weak language regarding promising ideas.

For example, I have no compunction saying that Profit for Good (companies with charities as their majority shareholders) needs to be advanced by EA: I believe that failing to do so results in an ocean less counterfactual funding for effective charities, and consequently a significantly worse world.

https://forum.effectivealtruism.org/posts/WMiGwDoqEyswaE6hN/making-trillions-for-effective-charities-through-the

You're the one who's redefining utilitarianism, which is commonly defined as the maximization of the happiness and well-being of conscious beings. You can consider integrating other terminal values into what you'd like to do, but at that point you're not really discussing utilitarianism as it's commonly used. For instance, Greenberg points to truth as a potential terminal value, which would be at odds with utilitarianism as it's typically used.

I think Singer is a hedonic utilitarian, for what it's worth, and I think I subscribe to it while acknowledging that weighing the degrees of positive and negative subjective experiences of many kinds is daunting.

As for having other instrumental values (which is why I don't really think the "burnout" argument is very good against utilitarianism), I agree with you on that one.

In order for the lawyers to be credited with that utility, we need to look at the counterfactual.

We need to look at the marginal effect of adding one more attorney to the field. The attorney-count/value curve is likely logarithmic, because the first attorneys will go after the low-hanging fruit in litigation. If you are truly outstanding and able to provide better expected value than the alternative candidate for the role, there might be more value...
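To make the shape of that curve concrete (a toy model of my own, not a calibrated estimate): if the total value produced by $n$ attorneys in the field is $V(n) = a\ln(1+n)$ for some constant $a$, then the marginal value of the $n$-th attorney is

$$V(n) - V(n-1) = a\ln\!\left(\frac{1+n}{n}\right) \approx \frac{a}{n},$$

so the hundredth attorney adds on the order of $a/100$, versus roughly $0.7a$ for the very first one. That is the sense in which one more lawyer in a crowded field moves little.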

ETG is likely much higher value, imo, because the counterfactual person making lots of money in that role is someone who would likely donate less and/or ineffectively, if at all.

In fact, the limiting factor may be funding for high-value litigation opportunities, so even in the Civil Rights battles, you might have had higher impact funding litigation and other high-EV activities than doing direct work.

Seconding Ribon as an awesome donation multiplier; I mentioned them in my comment.

I don't really feel like it's an either/or... It can both be the case that we should use political processes to require the extremely wealthy to do more to solve world problems AND that those of us who are less wealthy, but still comfortable in global terms, are morally required to do more. After all, even 10% of a $30k income saves 6 lives in a decade at $5k/life, and would result in mere struggle for someone in the developed world.
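Spelling out that arithmetic (using the rough \$5,000-per-life figure):

$$10\% \times \$30{,}000/\text{yr} \times 10\ \text{yr} = \$30{,}000 = 6 \times \$5{,}000,$$

i.e., six lives saved over the decade from a modest income.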

Just a quick impression:

I definitely love EA for its intellectual bent... We need to evaluate how we can do the most good, which can be a tricky process with reality often confounding our intuitions.

But I also love EA for wanting to use that reason to profoundly better the world... Action. What I get from this strategy is an emphasis on the cerebral without the emphasis on action. I think EA will appeal more broadly if we highlight action as well as cogitation, both in furtherance of a world with far less suffering, more joy, greater ability for people to pursue their dreams, and a firm foundation for a wonderful world to persist indefinitely.

Vin's BOAS company is an example of a Profit for Good business that I referenced in a different comment.

And yeah, other than maybe AI Safety, IMHO, Profit for Good is by far the most promising cause area, because it can multiply funding for effective charities that are potentially popular among consumers (global health and development, animal welfare, climate change). It boils down to the fundamental premise that people have a nonzero preference for such causes over the enrichment of random investors. If people could buy some damn laundry detergent of the same price and quality and the Against Malaria Foundation would profit rather than random investors, they would.

I have been immensely disappointed in EA's lack of interest in Profit for Good. If we had EA funds, expertise, time, and wisdom behind the endeavor, there is no reason we could not present such a choice to the people of the world. I suppose people have shown that they are extremely selfish, most not donating even when it could benefit the recipient 50-100x more than it would cost the prospective donor. However, we believe most people would still choose to benefit AMF, for instance, rather than a wealthy shareholder, if no sacrifice was required whatsoever, and we believe EA's lack of interest in testing this proposition is absurd. The hundreds of millions of people living in extreme poverty and the billions of animals being tortured to death every year deserve better than a collective "oh, this sounds cool; glad someone is doing it."

In the event that this comment tree is the first you've heard of this idea, here is a reading list of some of our writings and thoughts on Profit for Good.

If art production is critical to EA's ability to maximize well-being and EA is failing to do so, then this is a failure of EA not to be utilitarian enough. Your criticism perhaps stems from the culture and notions of people who happen to subscribe to utilitarianism, not utilitarianism itself. Utilitarians are human, and thus capable of being in error as to what will do the most good.

If you want to criticize utilitarianism itself, you would have to say that the goal of maximizing well-being should be constrained or subordinated by other principles/rules, such as requirements of honesty or glorifying God, etc. You could say something like: the production of art/beauty is intrinsically valuable apart from the well-being it produces, and thus utilitarianism is flawed in that it fails to capture this intrinsic value (and only captures the instrumental value).

I think a more apt target for your criticism would not be utilitarianism itself, but rather the cultures and mentalities of those who practice it.

I think we are just using two different definitions of utilitarianism. I am talking about maximizing well-being... If that means adding more ice cream or art to agents' lives, then utilitarianism demands ice cream and art. Utilitarianism regards the goal... the maximization of the net value of experience.

A more apt comparison than a specific political system such as communism, capitalism, or mercantilism would be a political philosophy that defined the goal of governmental systems as "advancing the welfare of people within a state." Then, different political systems could be evaluated by how well they achieve that goal.

Similarly, utilitarianism is agnostic as to whether one should drink Huel, produce and enjoy art, work X hours per week, etc. All of these questions come down to whether the agent is producing better outcomes for the world.

So if you're saying that the habits of EAs are not sustainable (and thus aren't doing the greatest good, ultimately), you're not criticizing utilitarianism. Rather, you're saying they are not being the best utilitarians they can be. You can't challenge utilitarianism by saying that utilitarians' choices don't produce the most good. Then you're just challenging choices made by them within a utilitarian lens.

It's amusing how you argue against hardcore utilitarianism by indicating that factoring in an agent's human needs is indispensable for maximizing impact. To the extent that being good to yourself is necessary for maximizing impact, a hardcore utilitarian would do so.

Utilitarianism is optimizing for whatever agent is operative... humans or robots. It's just realizing that the experiences of other beings throughout space and time matter just as much as your own. There is nothing wrong with being extreme and impartial in your compassion for others, which is the essence of utilitarianism. To the extent you are lobbing criticisms at people for not being effective because they're not taking care of themselves, that isn't a criticism of "hardcore" utilitarianism. It's a criticism of their failure to integrate the productivity benefits of taking care of themselves into the analysis.
