Yay for me: I have found that I can increase my donations in a way that seems long-term sustainable, both financially and in terms of emotional engagement. I have also found that moderate engagement with the movement is the best way for me to maintain an interest while avoiding getting depressed by the state of the world.
Sometimes when I see people writing about opposition to the death penalty I get the urge to mention Effective Altruism to them, and to suggest it is borderline insane to think opposition to capital punishment in the US is where a humanitarian should focus their energies. (Other political causes don't cause me to react in the same way, because people's desire to campaign for things like lower taxes, feminism or more school spending seems tied up with self-interest to a much larger degree, so the question of whether it is the most pressing issue seems irrelevant.) I always refrain from mentioning EA because I think it would do more harm than good, so I will just vent my irrational frustration here.
I endorse using Shortform posts to vent! I think you're right that mentioning EA would be likely to do more harm than good in those cases, but your feelings are reasonable and I'm glad this can be a place to express them.
Some object-level thoughts not meant to interfere with your venting:
I don't feel the same way about people who oppose the death penalty, I think largely because I have a strong natural sense that justice is very important and injustice is very especially extra-bad. This doesn't influence my giving, but I definitely feel worse about the stories "innocent person is killed by the state" or "guilty person who is now wholly reformed is killed by the state" than I do the story "innocent child dies of malaria", despite knowing logically that the last of these is likely the saddest (because many more years were lost). I can understand how someone who feels similarly to me would end up spending a lot of energy opposing capital punishment.
The death penalty also has a hint of self-interest in that it is funded by tax money. I can imagine people being exceptionally angry that they are paying even the most minute fraction of the cost of executing someone. Similarly, the documentary "Life in a Day" briefly features someone who deliberately earns a very low income so they can pay no taxes and thus ensure that none of their money goes toward "war".
Sometimes the concern is raised that caring about wild animal welfare is seen as unintuitive and will bring conflict with the environmental movement. I do not think large-scale efforts to help wild animals should be an EA cause at the moment, but in the long term I don't think environmentalist concerns will be a limiting factor. Rather, I think environmentalist concerns are partly taken as seriously as they are because people see them as helping wild animals as well. (In some perhaps not fully thought out way.) I do not think it is a coincidence that the extinction of animals gets more press than the extinction of plants.
I also note that bird-feeding is common and attracts little criticism from environmental groups. Indeed, during a cold spell this winter I saw recommendations from environmental groups to do it.
To add to this, Animal Ethics has done some research on attitudes towards helping wild animals:
From the first link, which looked at attitudes among scholars and students in life sciences towards helping wild animals in urban settings, with vaccinations and for weather events:

Responses were mostly favorable in all cases. Levels of support and perceived support by others ranged, depending on the question, from over 60% to over 90%. Students and scholars tended to give similar responses. The level of support was highest in almost all cases for the second project, Urban Ecology. The first project, Vaccination, also received substantial support. It was ranked second except in one very important category – expected support at university departments, in which it was ranked third. The third project, Weather Effects, was ranked first in this category. The results showed no substantial conflict between the perceptions and attitudes among scholars and students.
For what it's worth, I think the current focus is primarily research, advocacy for wild animals and field building, not the implementation or promotion of specific direct interventions.
Thank you for those links.
The current conflict with Russia has increased my estimate of the importance of democratization. I think a democratic Russia would be unlikely to go to war with a brotherly country like Ukraine. Many efforts to spread democracy seem pretty unsuccessful.
I wonder whether democratic countries could sometimes make deals with dictators to allow a gradual change to democracy, only finishing when the dictator dies or decides to retire. Assuming the dictator cares somewhat about his country's long-term future, he might be persuaded that democracy is the best way of ensuring peace and prosperity for it in the long term.
I was thinking the same in this case.
Also, I've wondered about (maybe people have explored):
- Rewarding dictators who give up their power (with a cash prize)
- Setting up a safe, secure and comfortable place for them to live out their days
I expect the main objections to be:
Scott Aaronson has received a grant to redistribute and is asking for charity recommendations. https://scottaaronson.blog/?p=6232 Note that he indicates AI-risk and other rationalist-flavored organisations are disfavored, but the blog post might still be of some interest.
There is a lot of EA content on Twitter. It can't replace this forum for serious debate, but for someone like me who mainly consumes EA content to maintain motivation long-term it does well enough.
I was recently looking for a page with donation advice to link to. I found one, but it struck me that some general EA organisations could make their homepages more focused on effective donation. (As opposed to getting people involved in other ways.) Most people are not looking to join an organisation or change jobs to more altruistically effective ones, but they probably donate something to charity and could reprioritize those donations. Having a "hook" about what to donate to might be more helpful.
The Giving What We Can donate page is easily the best overall resource for this that I know of — not perfect, but very comprehensive.
I'm not sure how many "general EA organizations" exist, though. All the ones I could think of that are meant for a general audience — EA.org, GWWC, Charity Science Outreach — make it pretty easy to find advice on donating.
Meanwhile, both GiveWell and GWWC (as well as Future Perfect) come up on the front page when I Google "best charity" from an incognito window.
Are there specific organizations that you think should provide easier access to donation advice? (Keep in mind that most "EA orgs" have a specific mission and will want to tell people about their work, not the broader movement.)
I think it was the GWWC page that I eventually linked to.
I looked at the Centre for Effective Altruism home page first, and somewhere else that I cannot remember, and did not find them very suitable as a starting point for the general public.
I see!
CEA's website is meant to be a place to learn about CEA; the intro EA material on the homepage and "Get Involved" menu send newcomers to appropriate intro resources.
But it seems fine to have the bullet list on our homepage mention donations more explicitly; I've added a direct link to GWWC.
Among animal-focused EAs in the US there is talk about an upcoming Supreme Court case in which California's import restrictions on pork produced to lower welfare standards are likely to be overturned. A sad turn of events if it happens. I also find it annoying that some activists are trying to ally the case with a larger left-wing cause, warning that it will lead to a general race to the bottom on regulations. As someone who is more right-wing on many issues, I am not very worried about a race to the bottom in labor market regulation. Nor do I see how it is tactically smart to tie the defense of animal welfare standards to the larger project of ending domestic free trade in the US. The Supreme Court is never going to write an opinion that would allow California to ban imports from states with a lower minimum wage, and that would also be a step much too far for the Biden administration and most Democrats, yet animal-friendly lawyers on Twitter seem completely unconcerned about suggesting that this is the principle they want.
I wish Giving What We Can's donation page had my credit card number saved. Would remove a slight moment of annoyance each month.
Been reading about cryptocurrencies and blockchain. Cool technologies, but the valuation of current cryptocurrencies seems like a bubble that must crash; the people "investing" in crypto right now are gambling, and I worry they do not know they are gambling.
I hope current EA-aligned people in crypto manage to cash out, and that there is no reputational harm for the movement from the fact that some well-known proponents work in the field.
Gresham College is hosting an event with the title "Does Philanthropy do the Public Good?" by Professor David King. It can be watched afterwards here https://www.gresham.ac.uk/lectures-and-events/good-philanthropy.
It might be interesting, or alternatively it might be terrible but still relevant for EAs to know what views are being put out in the public debate.
Definitely worrying about WW3 or nuclear holocaust at the moment. I gave an extra donation to longtermist causes this month. I don't usually donate to them, but the argument that some long-term thinking should be promoted seemed convincing now.
I hope, but have no real reason to believe, that western leaders know how far they can support Ukraine without causing the war to spread.
Merry Christmas! I hope you all have great holidays, and are able to draw inspiration from them, even if Christmas presents are often an example of the most inefficient altruism there is.
Professor Abigail Marsh writes in NYT that individualism promotes altruism: https://www.nytimes.com/2021/05/26/opinion/individualism-united-states-altruism.html?smid=tw-share
I have not made any attempt to vet the study, and for studies of this kind you don't expect any single study to be more than a small piece of evidence, but it is clearly an interesting research question.
There is a well-known argument that rule utilitarianism actually collapses into act utilitarianism. I wonder whether rule utilitarians are really getting at the notion of dynamic inconsistency: it might be better if utilitarians could pre-commit to following certain rules, because of the effect that has on society, even if, after one has adopted the rules, there are circumstances where a utilitarian would be tempted to make exceptions.
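As a toy illustration of the pre-commitment idea (all numbers are my own illustrative assumptions, not anything from the rule-utilitarian literature), one can model a repeated situation where making a "beneficial" exception yields a small one-off gain but erodes society's trust, which multiplies all future cooperative payoffs:

```python
# Toy model of dynamic inconsistency for a utilitarian. All numbers are
# illustrative assumptions. Each round, the agent receives a baseline
# cooperative payoff scaled by current trust; breaking the rule adds a
# one-off bonus but erodes trust for every future round.

def total_utility(break_rule: bool, rounds: int = 10) -> float:
    trust = 1.0
    total = 0.0
    for _ in range(rounds):
        total += 10 * trust          # baseline payoff from social cooperation
        if break_rule:
            total += 2               # one-off gain from making an exception
            trust *= 0.8             # society trusts the agent a bit less
    return total

# Pre-committing to the rule beats making an exception every round, even
# though each exception looked locally positive (+2) in isolation.
print(total_utility(False))  # rule-follower
print(total_utility(True))   # case-by-case exception-maker
```

Under these assumed parameters the rule-follower comes out ahead, which is the sense in which pre-commitment can beat case-by-case act-utilitarian reasoning.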
I think there might be some interest among the EA community in recent social media discussions about Scott Alexander and SlateStarCodex. My impression is that among some committed leftists the movement will face suspicion rooted in its support from rich people and its current demographic profile, because some leftists are suspicious of rationality itself, and because the movement might detract from the idea that the causes currently popular among leftists are also objectively the most important issues facing the world.
On the 80,000 Hours website there is a profile on factory farming, where they estimate that ending factory farming would increase the expected value of the future of humanity by between 0.01% and 0.1%. I realize one cannot hope for precision in these things, but I am still curious whether anyone knows more about the reasoning process behind that estimate.
I think it's basically that moral circle expansion is an approach to reduce s-risks (mostly related to artificial sentience), and ending factory farming advances moral circle expansion. Those links have posts on the topic, but the most specific tag is probably Non-humans and the long-term future. From a recent paper on the topic:

The fact that there are over 100 billion animals on factory farms is partly why we consider them one of the most important frontiers of today’s moral circle (K. Anthis & J. R. Anthis, 2019).
I think Sentience Institute and the Center for Reducing Suffering are doing the most research on this these days.
Note: I don't work for 80,000 Hours, and I don't know how closely the people who wrote that article/produced their "scale" table would agree with me.
For that particular number, I don't think there was an especially rigorous reasoning process. As they say when explaining the table in their scale metric, "the tradeoffs across the columns are extremely uncertain".
That is, I don't think that there's an obvious chain of logic from "factory farming ends" to "the future is 0.01% better". Figuring out what constitutes "the value of the future" is too big a problem to solve right now.
However, there are some columns in the table that do seem easier to compare to animal welfare. For example, you can see that a scale of "10" (what factory farming gets) means that roughly 10 million QALYs are saved each year.
So a scale of "10" means (roughly) that something happens each year which is as good as 10 million people living for another year in perfect health, instead of dying.
Does it seem reasonable that the annual impact of factory farming is as bad as 10 million people losing a healthy year of their lives?
If you think that does sound reasonable, then a scale score of "10" for ending factory farming should be fine. But you might also think that one of those two things -- the QALYs, or factory farming -- is much more important than the other. That might lead you to assign a different scale score to one of them when you try to prioritize between causes.
Of course, these comparisons are far from perfectly empirical. But at some point, you have to say "okay, outcome A seems about as good/bad as outcome B" in order to set priorities.
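To make the kind of comparison above concrete, here is a back-of-the-envelope sketch. The 100 billion figure comes from the paper quoted earlier and the 10-million-QALY benchmark from the scale table; the moral weight and welfare-loss parameters are entirely hypothetical placeholders, not anyone's published estimates:

```python
# Toy back-of-the-envelope comparison of factory farming to the QALY
# benchmark behind a scale score of "10". Only the first two constants
# come from the discussion above; the rest are hypothetical assumptions.

FARMED_ANIMALS = 100e9          # animals on factory farms (quoted paper)
HUMAN_QALYS_AT_SCALE_10 = 10e6  # QALYs/year that a scale score of 10 represents

moral_weight = 0.001  # assumed: 1 animal-year counts as 0.001 human QALYs
welfare_loss = 0.5    # assumed: half of each animal-year's welfare is lost

animal_qaly_equiv = FARMED_ANIMALS * moral_weight * welfare_loss
print(f"Annual loss: {animal_qaly_equiv:.0f} QALY-equivalents")
print(f"Ratio to scale-10 benchmark: "
      f"{animal_qaly_equiv / HUMAN_QALYS_AT_SCALE_10:.1f}x")
```

Plugging in different moral weights shows how sensitive the "is factory farming as bad as 10 million lost QALYs?" judgment is to that one contested parameter.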
Should you hope that you are doing good? Perhaps not. For a number of cause areas you should probably hope that you are achieving nothing, or even actively doing harm. For example, if you are working on x-risk reduction, you should hope that what you are doing is not necessary, in which case you are probably doing harm by reducing growth.
I would be careful about psychological explanations for followers of the EA movement committing fraud. It might be due to ends-justify-the-means thinking, but other explanations, such as EA alignment being a useful tool to facilitate fraud, are also possible.
That is not a possibility in this case, because SBF was interested in EA for 5+ years before this fraud, and was raised as a utilitarian from childhood.