I'm living in France. I learned about EA in 2018, found it great, and dug deep into the topic. The idea of "what in the world improves well-being or causes suffering the most, and what can we do about it?" really influenced me a lot - especially when combined with meditation, which allowed me to be more active in my life.
One of the most reliable things I have found so far is helping animal charities: farmed animals are much more numerous than humans (and have much worse living conditions), and there absolutely is evidence that animal charities are achieving improvements (especially The Humane League). I have tried to donate a lot there.
Long-termism could also be important, but I think that we'll hit energy limits before getting to an extinction event - I wrote an EA forum post about that here: https://forum.effectivealtruism.org/posts/wXzc75txE5hbHqYug/the-great-energy-descent-short-version-an-important-thing-ea
I just have an interest in whatever topic sounds really important, so I have a LOT of data on a lot of topics. These include energy, the environment, resource depletion, simple ways to understand the economy, limits to growth, why we fail to solve the sustainability issue, and how we got to that very weird specific point in history.
I also have a lot of material on Buddhism, meditation, and "what makes us happy" (check out the Waking Up app!)
Interesting, thank you.
On the second point, this reads as very optimistic (the way animals are treated in rich countries is just very bad). I agree that it may be easier to appeal to ethical values and develop alternatives now, but it's hard to know whether this will be enough to offset all the negative effects associated with 'more power and money = easier to buy animal products'. But I won't have much time to engage, and it's not that important since we can't change this part of the trajectory.
The post is interesting and well argued, but I am not sure I agree - one example I have in mind is Microsoft using AI to double the productivity of a shrimp farm, likely by increasing density.
Regarding this: "The industry also operates under finite resource constraints, including feed, water, energy, and land" - it is also possible that AI, by increasing economic growth and developing better energy sources, could indirectly increase animal consumption by giving people more resources.
I agree that animal welfare activists should use AI to boost their outreach, however.
This would be great!
Even better would be something dedicated to the topic of the impact AI will have on animals. It's very likely (unavoidable?) that most of the beings affected by AI will be animals (although artificial sentience could also be part of the picture).
An AI aligned with humans but not with animals could have terrible effects for many beings in the world, so pushing for AI safety for humans alone is not enough to bring about a positive world.
The intersection of AI x animals seems promising, though.
Pain feels worse when it's conscious than when it's unconscious?
I mean, sometimes I have a stomachache that I barely notice and which remains mostly unconscious until I attend to it. And it doesn't motivate me to change much. However, having someone whip me really motivates me to move elsewhere - something I wouldn't do if the feeling were mostly unconscious (I'd mostly just step back by reflex). Probably the same reason I wake up when I'm hit in my sleep.
So pain as a conscious valenced negative experience seems like a strong motivator to act on.
The fact that things can be perceived unconsciously is interesting, but if that were enough to survive in nature, I don't see many reasons why we humans would have developed conscious pain in the first place.
For the typical EA, this would likely imply donating more to animal welfare, which is currently heavily underfunded under the typical EA's value system. Opportunities Open Phil is exiting from, including invertebrates, digital minds, and wild animals, may be especially impactful.
I strongly agree: the comparative underfunding of these areas always felt off to me, given their very large numbers of individuals and the low-hanging fruit available.
However, it feels like more and more people are recognizing the need for more funding for animal welfare, given the results of the recent debate.
Another comment: regarding the value of longtermist interventions, while I understand the numbers can be very high, my main uncertainty is that I'm not even sure many common interventions have a positive impact.
For instance, is working against X-risks good if preventing extinction would allow factory farming (an S-risk) to continue? The answer depends on many questions (will factory farming continue in the future, what is the impact of humanity on wild animals, what will happen regarding artificial sentience, etc.), none of which have a clear answer.
Reducing S-risks seems good, though.
I agree with this comment. Thanks for this clear overview.
The only element where I might differ is whether AI really is >10x more impactful than neglected animals.
My main issue is that while AI is a very important topic, it's very hard to know whether AI organizations will have an overall positive or negative (or neutral) impact.
First, it's hard to know what will work and what will accidentally increase capabilities. More importantly, if we end up in a future aligned with human values but not with those of animals or artificial sentience, this could still be a very bad world in which a large number of individuals suffer (e.g., if factory farming continues indefinitely).
My tentative and not very solid view is that work at the intersection of AI x animals is promising (e.g., work that aims to get AI companies to commit to avoiding animal mistreatment), and attempts at a pause are interesting (since they give us more time to figure things out).
If you think that an aligned AGI will truly maximise global utility, you will have a more positive outlook on this.
But since I'm rather risk averse, I devote most of my resources to neglected animals.