My name is Bill, and I have been involved in effective altruism for a few years, dipping in and out of the community.
I work as a policy researcher for a think tank, where my work focuses on UK innovation, productivity and transportation. My educational background is a mix of philosophy, mathematics and data science.
The comments about moral uncertainty and wild animal suffering are valid, but I think somewhat unnecessary: I don't think the argument works at all in its current form.
I think the argument is something like this: (1) humans inflict vast suffering on animals and expose them to anthropogenic extinction risks; (2) so the world would be better, on net, without humans; (3) so human extinction should be brought about.
If so, the conclusion doesn't follow. At most, the premises show that the world would be better on net if humans suddenly stopped existing. But there is something quite absurd about trying to protect animals from the risks of anthropogenic extinction... via anthropogenic extinction. The more obvious thing to do would be to reduce the risks of anthropogenic extinction.
So for the argument to work, you need to believe both that it's not possible to significantly reduce anthropogenic risk (implausible, I think) and that it is possible to engineer a human extinction event that is, in expectation, much less risky to animal life than an accidental human extinction event. Engineering such an extinction might well be possible, but since you only get one shot, you would surely need an implausibly high level of confidence, as the toy calculation below illustrates.
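To make the one-shot point concrete, here is a toy expected-value calculation. Every number in it is made up purely for illustration; nothing here is an estimate of real probabilities.

```python
# Toy expected-value sketch with entirely hypothetical numbers.
# Suppose an accidental human extinction event wipes out animal life
# with probability 0.5, and an advocate claims an engineered event
# would do so with probability only 0.05.
p_accidental = 0.50  # hypothetical risk to animals from an accidental event
p_engineered = 0.05  # hypothetical risk claimed for an engineered event
credence = 0.80      # our confidence that the advocate's claim is right

# If the claim is wrong, assume the engineered event is just as risky
# as an accidental one. The expected risk of acting on the claim:
expected_risk = credence * p_engineered + (1 - credence) * p_accidental
print(expected_risk)  # 0.14 -- nearly three times the claimed 0.05
```

Even at 80% confidence in the claim, the expected risk is dominated by the chance that the claim is wrong, and because the attempt is one-shot, there is no opportunity to learn and retry.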
But how do we estimate the EV of estimating the EV of general intellectual progress?
On a less facetious note, it's about the average effect of intellectual progress on innovation, right? What EV comes from general intellectual progress that isn't a result of innovation?
So you try to causally estimate the effect of innovation on things you value (e.g. GDP), and you try to construct measures of general intellectual progress to see how those causally affect innovation. That's obviously easier said than done; a rough sketch of the idea is below.
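As a minimal sketch of that two-stage idea, using a hypothetical dataset and made-up column names: plain OLS like this only surfaces correlations, and a credible causal estimate would need instruments, panel methods, or natural experiments.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical country-year panel; all column names are invented.
df = pd.read_csv("country_panel.csv")

# Stage 1: how does innovation relate to an outcome we value (e.g. GDP)?
stage1 = smf.ols("log_gdp ~ patents_per_capita + rnd_spending", data=df).fit()

# Stage 2: how do proxies for general intellectual progress relate to innovation?
stage2 = smf.ols("patents_per_capita ~ publications_per_capita + citation_index",
                 data=df).fit()

print(stage1.summary())
print(stage2.summary())
```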
We did not use it in a name-calling way, but rather as a neutral term to describe the intellectual movement.
I have no doubt that the term was used in good faith. I apologise that my post was worded a bit poorly, making it sound like I was accusing you of name-calling.
What's your basis for claiming that 'randomista' is a non-neutral term?
The '-ista' suffix sounds pejorative to me in English, like someone who is a zealous, dogmatic advocate. 'Corbynista' was the example I referred to, a term often used in the UK to bash the left.
Etymologically, it sounds like my suspicion was correct (see Hauke's post above). Of course, these words often get reclaimed, and it appears that's happened here too, which is why I asked whether the RCT proponents call themselves that.
It's obviously not that important, and I don't want to start a battle over words, but David makes a good point about how you engage your critics.
Interesting post, very stimulating. A couple of thoughts:
Isn't factory farming a clear-cut case of injustice? A pretty standard view of justice is that you shouldn't harm others, and that if you are harming them, you should stop and compensate for the harm done. That seems to describe what happens to farmed animals. In fact, as someone who finds justice-based views plausible, I think this creates a decent non-utilitarian argument for caring more about domestic animal suffering than about wild animal suffering.
As my last sentence suggests, I do think that justice views are likely to affect cause prioritisation. I think you're right that justice may lead you to different conclusions about intergenerational issues, and that this is worth a deeper look.
I think the high level of concern for wild animals is actually a bit of a defect in utilitarianism. A quite compelling reason for caring more about factory-farmed animals is that we are inflicting a massive injustice on them, which isn't generally the case for wild animals. We do often feel moral obligations to wild animals when we are responsible for their suffering (think of oil spills, for example). That's not to say wild animals don't matter, but they might be further down our priority list for that reason.
I think the visualisation is great. The exploding red dots are very powerful, conveying just an immense amount of bloodshed.
Thank you for sharing this Holly. Have you read Strangers Drowning by Larissa MacFarquhar? It's a book full of stories of extraordinarily committed "do-gooders" (some effective altruists, some not), as well as some interesting analysis on the mixed reaction that they receive from society. I think there's a lot of overlap with some of what you've written and the experiences of the individuals in Strangers Drowning, so you're definitely not alone.
I suppose the extent to which anyone experiences any of these 8 challenges really depends on how motivated they are by morality. I think most people believe it's important to have a positive impact on the world (or at least not a negative one), but less important to maximise their positive impact. Even being convinced of EA doesn't necessarily change this: it might just lead you to conclude that you can have a much greater positive impact on the world at little cost to yourself, so you might as well...
Personally, I think that morality should be my most important motivator in the abstract, but looking at my behaviour, it clearly isn't in practice (at least right now). I'm glad that I don't find altruism very emotionally difficult, but I also feel slightly guilty about not feeling very guilty about not doing more.
I am a little confused about the purpose of this post, because surely meta-EA is just EA? I feel like the major innovation of EA is the idea that altruists can and should compare the value of different interventions (which you appear to consider meta-EA). In other words, EA is meta-altruism.
The content might be useful as a roadmap, but I think the terminology is a bit misleading. What these areas have in common is that they are indirect, rather than having some abstract 'meta' property.