This is a special post for quick takes by Shimmy Shai. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I am also curious about another thing. Over my 31 years, spent mostly behind a computer, I have come to identify the three biggest challenges facing humankind as: an unhealthy relationship with nature; the lack of a socio-cultural-political milieu that provides a solid guarantee of global peace (just look at Russia now!); and the lack of a similar milieu for the ethical development and deployment of technology.

What do you think?

Moreover, given that I am hopefully at a point where I can transition from mental health recovery and college to a "proper" career, and break free of the shackles of the computer screen: what should I be aiming at if I want to maximize my utility on all these fronts? Why should I accept that answer, why should I accept the evidence for it, and where can I find counterarguments to those whys?

I have for a while been thinking about this idea of "effective altruism" but have a couple of questions about it more fundamentally.

The first is purely practical: why must contributions, in order to do a lot of good, be to causes that not a lot of people are working on? Ultimately, we need everyone doing good, because evil is an intolerable path for a human to live by, and one could argue that the absence of good is at least "half of evil"; but if we take that seriously, then we will necessarily have lots of people working on lots of issues.

But the second is more philosophical, and related to that "we need everyone doing good" and "evil is intolerable": does "effective altruism" constitute not merely a moral decision-making method, but also a moral judgment to pass on other people? That is, if you don't help as many people as someone else because of what you lack (money, talent, circumstances, etc.), are you a more evil or less good person, even if you are still making the best choices with what you do have? If so, is that sort of "relative evil" tolerable? And if it is tolerable, why call it "evil" at all, since for the label to be morally meaningful (that is, relevant to how we should and should not act) it must imply a certain level of intolerance?

The reason I ask these is that for a while now I have been dogged by the feeling that I am an evil person (and am not being recognized and judged accordingly), because my mind seems to naturally operate on a framework broadly along these lines, one that invites comparisons based on total utility generation, with attendant self-flagellation.

[anonymous]

re: first para, tractability matters because we have finite resources. If you're considering how to allocate $1000 of capital or 100 hours of your time, you naturally want to focus on where you can create the most good. Often this will be something important that others are not doing.

Also, your framing seems more deontological than consequentialist. EA is very consequentialist: it cares about the most good getting done, rather than about making all humans good people. Plenty of good people do not do much good (or, to frame it better, some good people who focus on doing good do a lot more good than other good people).

"we need everyone doing good" and "evil is intolerable"

I don't think anyone defines EA using these exact phrases. But yes, there are some differences in motivation between EAs: some use an opportunity framing and focus on the good they can do through EA work, while others focus on the bad they would be doing if they didn't do EA work. I'm in the former camp; I don't think guilt is a good motivator. Check out the Replacing Guilt series by Nate Soares.

why must contributions, in order to do a lot of good, be to causes that not a lot of people are working on?

Two big things:

One: replaceability often nukes the utility of doing something. Let's say I'm going to get a job at Redwood. There is some expected value from my outputs, but the real calculation is [expected value of my outputs] - [expected value of the outputs of whoever would have been hired instead of me]. Of course I'm also freeing up their time by taking the job, so there is a sort of cascade, but in many cases the comparison is between them getting this job and them not doing much otherwise.
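The replaceability calculation above can be sketched numerically. This is a minimal illustration, not anything from the comment itself; the function name and every number are made-up assumptions:

```python
# Illustrative counterfactual-impact calculation (all numbers are made up).
# Naive "direct impact" overstates your contribution; the counterfactual
# view subtracts what the next-best candidate would have produced in the
# role, and adds back the value they create elsewhere once freed up
# (the "cascade" term from the comment).

def counterfactual_impact(my_output: float,
                          replacement_output: float,
                          replacement_alternative: float = 0.0) -> float:
    """Expected value of me taking the job, relative to the world
    where the next-best candidate takes it instead."""
    return my_output - (replacement_output - replacement_alternative)

# Hypothetical numbers: I'd produce 100 units of good in the role; the
# next-best hire would produce 90 there, and 10 elsewhere if displaced.
print(counterfactual_impact(100.0, 90.0, 10.0))  # -> 20.0
```

If the displaced candidate would have done "not much" otherwise (`replacement_alternative` near zero), the counterfactual impact of taking a non-neglected job shrinks toward the small gap between candidates, which is the point being made.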

Two: the vast majority of people aren't trying at all to do a lot of good, so naturally, if you are, you will end up doing things that few others are trying to do.
