I do community building with Effective Altruism at Georgia Tech. My primary focus areas are animal welfare and artificial intelligence.
It's great that you're doing what you can on this front, despite all the challenges! I don't have specific nutritional advice, though maybe the writer of the first post you linked would.
You may have already considered this (some of your ideas hinted in this direction), but I think it's important to focus on suffering intensity, which you could measure in terms of suffering per calorie or suffering per pound of food. Doing so will minimize your overall suffering footprint. My understanding is that the differences in capacity for suffering between large and small animals (such as cows and shrimp) aren't large enough to outweigh the difference in the number of animals you have to eat to get the same number of calories. Additionally, cows seem to be kept in some of the least awful conditions of any factory-farmed animal.
This website, foodimpacts.org, shows this difference in a useful graphic. It also lets you adjust how much weight you place on welfare vs. climate impacts (I would set climate to 0%, but it may be helpful if you prioritize differently).
Brian Tomasik's How Much Direct Suffering Is Caused by Various Animal Foods? could also be a useful guide, and Meghan Barrett's work on insect sentience is worth a read if you want to decide whether it's better to eat insects or other animals.
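To make the structure of that comparison concrete, here is a minimal sketch. Every number in it is a made-up placeholder, not a real estimate; for actual figures you'd want foodimpacts.org or Tomasik's analysis.

```python
# Minimal sketch of a suffering-per-calorie comparison.
# All numbers are made-up placeholders; the point is the structure, not the values.

def suffering_per_1000_kcal(kcal_per_animal, days_farmed, capacity_weight):
    # Animals you must eat to get 1000 kcal, times the days each spends on a farm,
    # times a weight for that species' capacity for suffering (cow = 1.0 baseline).
    animals_needed = 1000 / kcal_per_animal
    return animals_needed * days_farmed * capacity_weight

# Hypothetical inputs:
cow = suffering_per_1000_kcal(kcal_per_animal=500_000, days_farmed=500, capacity_weight=1.0)
shrimp = suffering_per_1000_kcal(kcal_per_animal=30, days_farmed=150, capacity_weight=0.1)

print(f"cow: {cow:.1f}, shrimp: {shrimp:.1f}")
# Even with a 10x lower capacity weight, the tiny calorie count per shrimp makes
# the shrimp figure far larger under these placeholder numbers.
```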
Destroying viruses in at-risk labs
Thanks to Garrett Ehinger for feedback and for writing the last paragraph.
Military conflict in or around biological research laboratories could substantially increase the risk of releasing a dangerous pathogen into the environment. Fighting and the mass movement of refugees combine with other risk factors to magnify the potential consequences of such a release. Garrett Ehinger elaborates on this issue in his excellent Chicago Tribune piece, and proposes the creation of nonaggression treaties for biological labs in war zones as additional pillars to shore up biosecurity norms.
This seems like a great option, but I think there may be a more prompt technical solution as well. Viruses, bacteria, and other dangerous materials in at-risk labs could be stored in containers with built-in methods to destroy their contents. A strong heating element could be integrated into each pathogen's storage compartment and activated by scientists at the lab if a threat seems imminent. Vibration sensors could also activate the system automatically in case of a bombing or an earthquake. This solution would require funding and engineering expertise, and I don't know how much convincing labs would need to integrate it into their existing setups.
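As a rough illustration, the activation logic could be as simple as the sketch below; the threshold and names are hypothetical placeholders, not drawn from any real lab system.

```python
# Hypothetical sketch of the failsafe trigger described above; the threshold and
# names are illustrative placeholders, not from any real lab system.
VIBRATION_THRESHOLD_G = 2.0  # placeholder shock level suggestive of a bombing or earthquake

def should_activate_heater(manual_trigger: bool, vibration_g: float) -> bool:
    """Destroy the stored pathogens if lab staff trigger it deliberately, or if the
    vibration sensor registers a shock above the placeholder threshold."""
    return manual_trigger or vibration_g >= VIBRATION_THRESHOLD_G
```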
If purchasing new heating elements and integrating them with existing containers is too tall an order for labs, there are alternatives. For example, autoclaves (pressurized steam sterilizers, loosely the chemist's equivalent of a ceramic kiln or furnace) are already commonplace in many biological laboratories for purposes such as preparing growth media and sterilizing equipment. There could be value for these labs in developing SOPs and recommendations for the safe disposal of risky pathogens via autoclaves. This solution would be quicker and easier to implement, but in an emergency it could take slightly more time to safely destroy all the lab’s pathogens.
One common topic in effective altruism introductory seminars is expected value, specifically the idea that we should usually maximize it. It’s intuitive for some participants, but others are less sure. Here I will offer a simple justification for expected value maximization using a variation of the veil of ignorance thought experiment. This line of thinking has helped make my introductory seminar participants (and me) more confident in the legitimacy of expected value.
The thought experiment begins with a group of rational agents in the “original position”. Here they have no knowledge of who or what they will be when they enter the world. They could be any race, gender, species, or thing. Because they don’t know who or what they will be, they have no unfair biases, and should be able to design a just society and make just decisions.
Now for two expected value thought experiments from the Cambridge EA introductory seminar discussion guide. Suppose that a disease, or a war, or something, is killing people. And suppose you only have enough resources to implement one of the following two options:
Version A…
Version B…
Now imagine that you’re an agent behind the veil of ignorance. You could enter the world as any of the 500 individuals. What do you want the decision-maker to choose? In both versions of the thought experiment, option 1 gives you an 80% chance of surviving, but option 2 gives you a 90% chance. The clear choice is option 2.
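Spelled out with only the figures stated above (500 individuals, and per-person survival chances of 80% vs. 90%), the arithmetic is a minimal sketch like this:

```python
# Worked arithmetic for the veil-of-ignorance framing, using only the figures
# stated above: 500 individuals and per-person survival chances of 80% vs. 90%.
population = 500
p_survive = {"option 1": 0.80, "option 2": 0.90}

for option, p in p_survive.items():
    expected_survivors = p * population
    print(f"{option}: P(you survive) = {p:.0%}, expected survivors = {expected_survivors:.0f}")

# Behind the veil you are equally likely to be any of the 500 people, so maximizing
# expected survivors and maximizing your own survival chance pick the same option.
```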
This framework bypasses the common objection that it’s wrong to take risks with others’ lives by turning both options into a risk. In my experience, part of this objection often has to do with understandable feelings of discomfort with risk-taking in high-stakes scenarios. But here risk-taking is the altruistic approach, so a refusal to accept risk would ultimately be for the emotional benefit of the decider. This topic can also lead to discussion about the meaning of altruism, which is a highly relevant idea for intro seminar participants.
This argument isn’t new (reviewers noted that John Harsanyi was the first to make this argument, and Holden Karnofsky discusses it in his post on one-dimensional ethics), but I hope you find this short explanation useful for your own thinking and for your communication of effective altruism.
I appreciate how this post adds dimension to community building, and I think the four examples you used are solid examples of each approach. I'm not sure what numbers I'd put on each area as current or ideal numbers, but I do have some other thoughts.
I think it's a little hard to distinguish between movement support and field building in many community building cases. When someone in a university group decides to earn to give instead of researching global priorities, does that put them in movement support instead of the field? To what extent do they need to be involved in evaluating their giving to count as being part of the field? And when a group runs an intro fellowship, is that movement support or field building?
I'm still very excited about network development and wouldn't change its fraction of the portfolio. I personally tend to get a lot of value out of meeting other people within EA and understanding EA orgs better. Networks facilitate field building and movement support. I'm also less excited about promoting the uptake of our practices by outside organizations. I think we're at a pretty low percentage and should stay there. A project or two like this would be great, but I don't think we need enough of it to round away from 5%, mostly because of tractability concerns. These projects are also supported by field building work.
Thanks for the post!
I don't think that the development of sentience (the ability to experience positive and negative qualia) is necessary for an AI to pursue goals. I'm also not sure what it would look like for an AI to select its own interests. This may be due to my own lack of knowledge rather than a real lack of necessity or possibility though.
To answer your main question, some have theorized that self-preservation is a useful instrumental goal for all sufficiently intelligent agents. I recommend reading about instrumental convergence. Hope this helps!
Different group organizers have widely varying beliefs that affect what work they think is valuable. From certain perspectives, work that’s generally espoused by EA orgs looks quite negative. For example, someone may believe that the harms of global health work through the meat eater problem dominate the benefits of reducing human suffering and saving lives. Someone may believe that the expected value of the future with humans is negative, and as such, that biosecurity work reducing human extinction risk is net-negative. In this post I’ll briefly consider how this issue can affect how community builders (CBs) do their work.
Since many major EA orgs and community members provide support to groups, there may be obligations to permit and/or support certain areas of work in the group. Open Phil, for example, funds EA groups and supports biosecurity work. There’s no mandate that organizers conduct any particular activities, but it’s unclear to me what degree of support for certain work is merited. It currently seems to me that there is no obligation to support work in any given area (e.g. running a biosecurity seminar), but there may be an obligation to not prevent another organizer from engaging in that activity. This seems like a simple solution, but there is some moral conflict when one organizer is providing background support such as managing finances, conducting outreach, and running social events that facilitate the creation and success of the controversial work.
CBs could choose to accept that we (generally) aren’t philosophy PhDs or global priorities researchers and weigh the opinions of those people and the main organizations that employ them heavily. This sort of decision making attempts to shift responsibility to other actors and can contribute to the problem of monolithic thinking.
Maybe the organizers of groups A, B, and C, think that the meat eater problem makes global health work net negative, but the organizers of groups D, E, and F prioritize humans more, which makes global health look positive. If everyone focuses on their priorities, organizers from A, B, and C miss out on great animal welfare promoters from D, E, and F, and organizers from D, E, and F miss out on great global health supporters from A, B, and C. On the other hand, if everyone agrees to support and encourage both priorities, everyone’s group members get into their comparative advantage areas and everyone is better off. This plan does ignore counteracting forces between interventions and the possibility that organizers will better prepare people for areas that they believe in. Coordinating this sort of trade also seems quite difficult.
I don’t see a simple way to solve these issues. My current plan is to reject the “deferring” solution, not prevent other organizers from working on controversial areas, accept that I’ll be providing them with background support, and focus on making, running, and sharing programming that reflects my suffering-focused values.
Fantastic post, thank you for writing it! One challenge I have with encouraging effective giving, especially with a broader non-EA crowd, is that global health and development will probably be the main thing people end up giving to. I currently don't support that work because of the meat eater problem. If you have any thoughts on dealing with this, I'd love to hear them.
Some arguments I see for supporting global health work despite the meat eater problem are:
"People in low-income countries that are being helped with Givewell-style interventions don't actually eat many animal products." (I think this is true relative to people in high-income countries, but I don't think the amounts are negligible, and are very likely sufficient to override the positive impact of helping people. This post is the type of analysis I'm thinking of. Some commenters rejected the whole line of reasoning, but I do think it's relevant here.)
"Some interventions improve lives more than save lives, so those that benefit don't eat more animal products." (Which ones are most in this category? Will I end up getting people to donate to these, or would they still end up donating to the others?)
"People are more likely to accept arguments focused on nonhuman animal welfare when they have healthier and more stable lives." (This one feels a bit fluffy to me despite some plausible degree of truth. I think some people will change in that way, but not enough to compensate for the added harm.)
"Those that start doing effective giving with global health nonprofits will be more likely to engage with animal advocacy and other suffering-focused work in the future." (This argument is more convincing than any of the others. Personally I haven't seen someone who didn't learn about effective global health giving through EA do it, but I could see myself buying into this idea with more evidence, even some anecdotes.)
I quite like how you distinguish approaches at the individual level! I think focusing on which area they support makes sense. One lingering question I have is the relative value of a donor's donations vs. their contribution toward building a culture of effective giving. I also think it's at least somewhat common for people to get into other areas of EA after starting out in effective giving.
Agreed on the intro fellowship point as well! Long-term it supports field-building since plenty of participants filter through, but it's more directly movement support.
I'm a little less sure on the networking point. I notice that because I'm exploring lots of EA-related areas in relatively low depth, I haven't hit diminishing returns from talking to people in the community. I do imagine that people who have committed more strongly to an area would get more value from exploring more. I do agree that lots of people outside the traditional EA geographical areas could do fantastic work. Enabling this doesn't seem very resource-intensive though. I would guess that EA Virtual Programs is relatively cheap, and it allows anyone to get started in EA. Maybe you'd like to see more traditional local groups, though, which would be more costly but could make sense.
I think the uptake of practices category can be separated into two areas. Area one would be promoting the uptake of EA-style thinking in existing foundations and the other work you list under "How I would describe EA’s current approach to social change". Area two would be pushing for the implementation of policies that have come out of EA research in existing organizations, which is what LEEP and lots of animal welfare orgs do (and I suppose more biosecurity and AI people are getting into the regulatory space as well now). I only question the tractability of area one work; area two work seems to be going quite well! The main challenge in that domain is making sure the policy recommendations are good.
Thank you for the detailed response!