
The Plight of the Everyday Altruist

Allow me to articulate the plight that I, and (I assume) other EAs, go through in trying to do the most good we can:

At some point in your life, probably between the ages of 16 and 30, you begin considering your impact on the world. At first, the choices seem pretty obvious: go vegetarian and maybe vegan, give a proportionately large amount of your wealth to charity, practice individual or collective environmentalism, and so on. Some people might stop here and go back to their lives pretty much as normal.

You, however, keep looking into which charities to give to, and likely have your first online encounter with EA and EA-adjacent databases and philosophy. The AMF is presented to you as the most cost-effective way to save human lives. In your research, you're exposed to some new ideas: applying expected value to morality, QALYs, and the like.

Learning these terms, in and of itself, prompts your brain to generate questions: Should we really be so calculating and mathematical with morality? Is saving a life, all else equal, really just as good as the quantity and quality of the years that person can experience following your intervention? Is creating those years via procreation also a good thing? Is "goodness" measured by what an agent wants, or just by the sensations in their brain?

By no means are these easy questions, but they're ones I can grapple with. I, and my sense is most people within EA, tend to lean towards the more utilitarian, agent-neutral responses.

The Questions Too Complex for Many of Us to Satisfyingly Answer

But those aren't the last of them. Further down the line come infinity and fanaticism. There are many things I can't quite understand: comparing different infinite sets, infinity-"induced" paralysis. The thought experiments are enough to give my head a wobble, but are there ways they practically apply to decision-making? Religion, or manipulating an infinite multiverse/universe to create an infinite amount of utility? Is an arbitrarily high quality of experience the best you can get, or does combining it with an arbitrarily long time to experience it make things infinitely better? Is there a finite likelihood of creating infinite utility, or is there a greater likelihood of an infinite amount of disvalue? How can we distinguish an infinitesimal probability from a merely extremely small one, and what does an infinitesimal probability times an infinite value come out to, in terms of expected value?
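To make that last question a little more concrete, here is a rough sketch of where the ordinary expected-value rule runs into trouble (this assumes the standard convention that any positive real number times infinity is infinity; a non-standard-analysis treatment would look different):

$$\mathbb{E}[\text{value}] = p \cdot U, \qquad U = \infty$$

If $p$ is any positive real number, however tiny, the product is infinite, so the infinite-stakes option swamps every finite alternative. But if $p$ is infinitesimal, the product $p \cdot \infty$ has no determinate value until you specify which infinitesimal and which infinity you mean, so the ordinary rule simply gives no verdict.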

Many people might adopt an approach of anti-fanaticism (though I feel most arguments in its favor simply assume it rather than justify it, and usually end up being intransitive and even more counter-intuitive). Others might just say it's too complicated for a casual do-gooder with a normal profession and a busy life to understand. These people generally support widely accepted initiatives such as the AMF or animal welfare work.


But what about those who aren't convinced by anti-fanaticism, who think that in order to do the most good by any metric, they need to understand all of the questions I posed above? We don't have the time, the intelligence, or the patience to come up with really satisfying answers. Any conclusion I reach has a high chance of being wrong.


The Idea

But there are people out there who are much smarter than me, who are able to devote a lot of time to answering these sorts of questions, and who also want to do the most good they can in the world. Presumably, they will reach much more accurate conclusions than I will.

What if I could donate my money to one such philosopher directly, expecting that they will be much better at navigating the uncertainty than I would be?

Many organizations already offer a fairly similar option (GiveWell, ACE, etc.), but these organizations are targeted at a specific cause area (saving lives, sparing farm animals, etc.) and are not considering, say, the likelihood that in the far future we could use the infinite energy from our universe's collapse to create and design infinitely more universes of infinite bliss.

Potential Benefits

While much of what I'm talking about seems unique to EV fanatics, it still applies to many other people as well. Almost everyone has a fairly distinct set of values, which can make it difficult to figure out the best thing to do. One might be anti-fanatical, but how do they go about quantifying that? If one heavily incorporates non-consequentialist considerations into their altruism (such as valuing present good more than future good, people more than animals, or their own community slightly more than people on other continents), how should they go about finding the right charities to give to?


Essentially, my idea is that one could match with EA philosophers and leaders who share their values. Then, they could discuss the best-seeming options for how to use their money and career given those values, or, as I mentioned earlier, donate directly to the philosopher to use as they see fit.

If it worked, it would be a great moral good for both parties. The casual EA knows that, in expectation, their money is going to a better cause that aligns with their values, and the philosopher/mentor can be essentially certain that they are diverting the money to a more effective cause than it would otherwise have gone to.


Ultimately, I don't have much knowledge of the practicability of my idea (from an administrative or a safety standpoint). It's not an idea I've seen discussed very much, and while it feels naïve to me, I think it is definitely worth discussing and potentially pursuing in some analogous form.

I'd appreciate any feedback you have on this!


Comments

Interesting idea, thanks for putting it out there. I'm currently trying to figure out better answers to some of the things you mentioned (at least "better" in the sense of more in line with my own intuitions). For example, I've been working on incorporating apparently non-consequentialist considerations into a utilitarian framework:

https://forum.effectivealtruism.org/posts/S5zJr5zCXc2rzwsdo/a-utilitarian-framework-with-an-emphasis-on-self-esteem-and

https://forum.effectivealtruism.org/posts/fkrEbvw9RWir5ktoP/creating-a-conscience-calculator-to-guard-rail-an-agi

I'm currently doing this work unpaid and independently. I don't have a Patreon page for individuals to support it directly, in part because the lack of upvotes on my work has indicated little interest. If you'd like to support my work, though, please consider buying my ebook on honorable speech:

Honorable Speech: What Is It, Why Should We Care, and Is It Anywhere to Be Found in U.S. Politics?

Thanks!

I have major reservations about your conclusion (in part because I embrace anti-fanaticism, in part because I see big challenges and some downsides to outsourcing moral reflection and decision-making to another person). However, I really appreciate how well you outlined the problem and I also appreciate that you don't shy away from proposing a possible solution, even while retaining a good measure of epistemic humility. Thanks for posting!

Thanks for responding!

I would definitely see an issue with a society which told people that, when considering how to impact the world, they should "leave it to people who are smarter than you."

At the same time, when dealing with issues so complex that one doesn't have the time or intelligence to satisfyingly understand them, responsibly deferring to a relative expert who shares your values seems safer.

And I don't think this is a problem unique to fanaticism, though dealing with infinity always complicates things. Incorporating stochastic dominance into one's moral framework can also introduce major confusion. Furthermore, for people less inclined towards hard totalist utilitarianism (valuing community and the present, holding the person-affecting view, opposing exploitation regardless of the suffering caused, etc.), it can also be very hard to find initiatives that optimize for a balance of different values.
