This is a special post for quick takes by Kirsten. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
There's a lot of "criticize EA" energy in the air this month. It can be useful and energising. I'm seeing more criticisms than usual produced, and they're getting more engagement and changing more minds than usual.
It makes me a little nervous that criticism can get more traction with less evidence than usual right now. I'm trying to be consciously less critical than usual for the moment, and perhaps save any important criticisms for the new year.
What is the global burden of menopause?
Symptoms include hot flushes, difficulty sleeping, vaginal irritation or pain, headaches, and low mood or anxiety. These symptoms normally last around five years, although 10% of women experience them for up to 12 years.
I couldn't find a disability-adjusted life year (DALY) weight for menopause. I'd imagine it might have a similar impact to mild depression, which in 2004 was assigned a disability weight of 0.140.
Currently, about 200 million people are going through menopause, 80% of whom are experiencing symptoms. I'd expect this to increase to 300 million by 2050.
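Putting the figures above together gives a rough sense of scale. A back-of-envelope sketch (the disability weight is my guess by analogy to mild depression, as stated above, not an official GBD figure):

```python
# Back-of-envelope estimate of the annual global burden of menopause symptoms,
# using the figures above. All inputs are assumptions, not published estimates:
#   - ~200 million people currently going through menopause
#   - ~80% of them experiencing symptoms
#   - a guessed disability weight of 0.140 (comparable to mild depression)

people_in_menopause = 200_000_000
share_symptomatic = 0.80
disability_weight = 0.140  # guessed; no official DALY weight exists for menopause

annual_dalys = people_in_menopause * share_symptomatic * disability_weight
print(f"{annual_dalys / 1e6:.1f} million DALYs per year")  # 22.4 million
```

Even if the true disability weight were several times lower, the implied burden would still be in the millions of DALYs per year, which is why the neglectedness question below seems worth asking.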
A leading menopause charity in the UK has an annual budget of less than £500k, despite the 4 million British women going through menopause, so I think menopause treatment in the UK could be improved with relatively little money.*
I'm not sure that would create very helpful spillovers to countries where Hormone Replacement Therapy isn't cheaply accessible. On the other hand, online Cognitive Behavioral Therapy is starting to be used to treat some symptoms, and that could probably be scaled up more easily.
*Improving diagnosis and doctor awareness of treatment options seems tractable, but there are some supply chain problems right now which seem less tractable.
https://www.bbc.co.uk/news/health-49308083
I emailed four menopause researchers to get their views on the best way to help women suffering from menopause symptoms. Two have responded so far. Both suggested charities they are affiliated with.
The first suggested the North American Menopause Society. It seems quite reputable. It focuses on the education of women and health professionals in North America. I'm sure there's a lot of work to be done there, but it seems pretty unlikely to do more good than healthcare in the developing world.
The second suggested the International Menopause Society. It's been around for a few decades and has an annual budget of around £300k. They also focus on education of women and healthcare professionals, but on a global scale. They're currently working to translate more educational materials into various languages. They also sponsor young doctors from the developing world to attend educational conferences, and they sponsor one young doctor to do research into menopause each year.
This second researcher also indicated that a lot of research into menopause treatment is already being funded, and treatment is widely available in countries with a decent healthcare system, so it would be better to direct my donation towards education or more basic research about how menopause affects the body (e.g. the link between menopause and obesity).
I really like the idea of working on a women's issue in a global context. I think women's health has historically been neglected, and IMS seems large enough to be reputable while being small enough that my money would matter to them. I also care a lot about justice and feminism.
Still, I get the feeling that sponsoring a training course for doctors and nurses to be translated into Arabic might not do as much good as buying bednets. It's a really tricky decision! I'm going to think about it for a bit.
I'm feeling most positive about translating materials for healthcare professionals, so if I decide to move forward, my next step will probably be asking for metrics on their training course (how many healthcare professionals registered, how many completed it, etc). I welcome any thoughts on how I can compare IMS with the Against Malaria Foundation.
I really like the idea of working on a women's issue in a global context.
Me too. I'm also wondering about the global burden of period pain, and the tractability of reducing it. As with menopause (and non-gender-specific issues such as ageing), one might expect this to be neglected because of an "it's natural and not a disease, so we can't or shouldn't do anything about it" fallacy.
Do you have any updates here?
After getting more info, I decided it wasn't important and neglected enough to be competitive with the Against Malaria Foundation. Thanks for following up!
Thanks, that's good to hear enough people seem to be working on it :)
If you have some notes on it you can share, it would be nice if you could collect them and add them to a post together with these shortform posts so that it could be tagged and more discoverable 🙂 (no need to edit anything, and even this bottom line seems important)
Based on a couple of informal Twitter polls, it looks like more candidates prefer feedback to financial compensation for work trials, especially if the feedback is quite specific.
https://twitter.com/EAheadlines/status/1487578467889786883?s=20&t=ag59VuOhXKfNK3KXoRy9wg
I regularly see EAs misrepresenting the impact of their "policy change" donations.
They'll say something like "$X can save a ton of carbon," but when you look at the details, they're only talking about the cost of lobbying IN ORDER TO INCREASE GOVERNMENT SPENDING. They do not include the government spending itself in the cost of saving a ton of carbon. This is very misleading.
I would love to read more posts that take an assumption or belief and ask "if this were true, what would that mean for EA?"
Examples:
-If [choose one: dignity/fairness/beauty/freedom] is intrinsically valuable, what does that mean for EA? How does that affect our cause areas, charity and career recommendations, and community norms?
-If we assume that input from a wide variety of people provides robustly better outcomes when it comes to representing humanity's values, what would that mean for far future-focused work?
-If we assume the EA movement has ~$200 billion in assets by 2030, such that funders are looking to donate $10+ billion per year, should we be expanding into new cause areas?
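The two figures in that last bullet are linked by an implied spending rate; a quick sketch making the arithmetic explicit (the 5% payout rate is my assumption, chosen only to connect the two stated numbers):

```python
# Rough sketch: what annual giving does ~$200 billion in assets imply?
# The 5% payout rate is an assumption (a common foundation-style spending
# rate), used here only to connect the two figures stated above.

total_assets = 200e9   # ~$200 billion in assets by 2030 (stated above)
payout_rate = 0.05     # assumed annual spending rate
annual_giving = total_assets * payout_rate

print(f"${annual_giving / 1e9:.0f} billion per year")  # $10 billion per year
```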
I regularly see people write arguments like "One day, we'll colonize the galaxy - this shows why working on the far future is so exciting!"
I know the intuition this is trying to trigger is bigger = more impact = exciting opportunity.
The intuition it actually triggers for me is expansion and colonization = trying to build an empire = I should be suspicious of these people and their plans.
Do you consider this intuition to be a reason that people should be wary of making this type of argument? Or maybe specifically avoid the word "colonize"?
Maybe something like "populate the galaxy" would be better, as it emphasizes that there are no native populations whose members would be harmed by space colonization?
I'm really glad that people have done the work to identify good donation options for people who are particularly focused on COVID-19. However, I don't think most people in EA should be focusing on donating to COVID-19 efforts. I'm particularly concerned that global health charities are getting less attention in the EA community than usual.
Who should pay the cost of Googling studies on the EA Forum?
Many EA Forum posts have minimal engagement with relevant academic literature.
If you see a Forum post that doesn't engage with literature you think is relevant, you could make a claim without looking up a citation based on your memory, but there's a reasonable chance you'll be wrong.
Many people say they'd rather see an imperfect post or comment than not have it at all.
But people tend to remember an original claim, even if it's later debunked.
Maybe the best option is to phrase my comment as a question: "Have you looked at the literature on X?"
Option 3 was discussed here. My impression of that discussion is that many of the forum readers thought it's important that one familiarises oneself with the literature before commenting. Like I say in my comment, that's certainly my view.
I agree that too many EA Forum posts fail to appropriately engage with relevant literature.
I've always thought there's a lower bar for commenting than for a top-level post, but maybe both should be reasonably high (you should be able to provide some evidence for your claim in a comment, and show some actual engagement with relevant literature in a post, for example).
I was listening to the 80,000 Hours podcast today and heard Ben Todd say, "The issue is [longtermism is] a new idea."
I've seen this view around EA a few times. It might be true about a certain narrow form of longtermism. It's NOT true of longtermism broadly.
The first time I was introduced to long-termist ideas was in a university Native Studies class, discussing the traditional teaching that the current generation should focus on the well-being of seven generations in the future.
There's even a specific term I can't recall for intentional changes in the environment that a social group would make to domesticate a landscape and provide services for future generations. It will take me some time to find it.
On the other hand, setting aside the specifics of strong longtermism, I guess that the conjunction of these ideas is pretty recent: a) concern for humanity as a whole, b) a scope longer than 150 years, c) the existence of a trade-off between present and future welfare, d) the balance being tipped in favour of the long term. [Epistemic status: just an insight; it would take me too long to look for a counter-example.]
Evidence Action (got lots of EA funding for some projects but not others, wound up shutting down a project, not sure how much the shutdown was their initiative vs. external pressure from GiveWell)
Any other GiveWell top charity (especially if they dropped off the Top Charities list at some point, or were added after being considered but rejected)
J-PAL (has gotten big GiveWell grant, also works a lot on evidence of impact and might have thoughts on whether they think the spread of GiveWell-type standards has been impactful/positive)
Mercy for Animals (maybe the biggest charity focused farm animal work before EA came along? Not sure about that)
But I see people using that as an excuse to not identify as... anything. As in, they avoid affiliating themselves with any social movements, sports teams, schools, nation-states, professions, etc.
It can be annoying and confusing when you ask someone "are you an EA?" or "are you a Christian?" or "are you British?" and they won't give you a straight answer. It's partly annoying because I'm very rationally trying to make some shortcut assumptions about them (if they're an EA, they've probably heard of malaria) and they're preventing me from doing that.
But I also sometimes get the sense that they're trying to protect themselves by not affiliating with a movement, and I find that a bit annoying. I feel like they're a free rider.
What are they trying to protect themselves from? Effectively, they're protecting their reputation. This could be from an existing negative legacy of the group: if they don't identify as British (even though they're a British citizen), maybe they can dodge questions about the ongoing negative effects of the British Empire. They could also be hedging against future negative reputation: if I call myself an EA but then someone attempts a military coup in the name of EA, I would look bad. By avoiding declaring yourself a group member, you can sometimes avoid your reputation sinking when your chosen group makes bad choices.
Unfortunately, that means that those of us with our reputations on the line are the ones who have the most skin in the game to keep people from doing stupid unilateralist things that make everyone in the community look bad.
I would prefer it if people would take that big scary step of saying they're an EA or Christian or Brit or whatever, and then put in the work to improve their community's reputation. Obviously I'm open to hearing reasons why people shouldn't identify as members of groups, though.
My perspective (which may not differ too much from yours -- just thinking out loud, Shortform-style):
I try to avoid using "effective altruist" as a noun for what I think of as "members of the EA community" or "people interested in effective giving/work", because I want the movement to feel very open to people who aren't ready to label themselves in that way.*
For example:
I like thinking of EA Global as "a conference for people who share a small set of common principles and do a wide variety of different things that they believe to be aligned with those principles", rather than "a conference for people who think of themselves as effective altruists". If you come to our conference regularly, I default to seeing you as a member of our community unless you tell me otherwise, but I don't default to seeing you as an "effective altruist".
If you have strong and well-researched views on global health and development, I'd love to have you at my EA meetup even if you're not very interested in the EA movement.
I support anyone who wants to identify themselves as an effective altruist, and I'm comfortable referring to myself as such, but I don't feel any desire to push people toward adopting that term if their inclination is to answer "are you an EA?" by talking about their values and goals, rather than their group affiliation.
*There's also the tricky bit where calling oneself "effective" could be taken to indicate that you're relatively confident that you're having a lot of impact compared to your available resources, which many people in the community aren't, especially if they focus on more exploratory work/cause areas.
I don't think having people label themselves with a noun - "Christian", "dancer", "student" - necessarily makes other people uncomfortable associating with them. I don't think it's wrong for people who aren't Christians to attend church, but I also don't think nobody referring to themselves as Christians would be a useful way to make people more comfortable at church. If you're worried about people being uncomfortable at EAG, I think the name "EA" is the least likely to be causing the problem.
I don't think there's anything necessary or inevitable about it! My sentiments reflect things I've seen other people say (e.g. "I don't know if I count as an 'effective altruist', I'm new here/don't have belief X"), but how people feel about this and other identity questions is (of course) all over the map. And as I said, I have no problem with anyone referring to themselves as an effective altruist -- I just don't have a problem with the opposite, either.
To use the church analogy: If some people at a church call themselves "Christians", others "Southern Baptists", others "religious seekers", others "spiritual", and still others "agnostic/uncertain", I wouldn't expect that to make things less comfortable for newcomers. (Though attending Unitarian church as a kid might have left me biased in this area!)
I agree that there are many reasons someone might feel uncomfortable at a conference or community event, and I think we both see the particular question of when to use "effective altruist" is just one tiny facet of community cohesion.
In some cases, I think people feel that they have a nuanced position that isn't captured by broad labels. I think that reasoning can go too far, however: if that argument is pushed far enough, no one will count as a socialist, postmodernist, effective altruist, etc. And as you imply, these kinds of broad categories are useful, even while in some respects imperfect.
Yep, makes sense to me! It's difficult for me to identify with a particular denomination of Christianity because I grew up at a non-denominational church and since then I've attended 3 different denominations. So I definitely get the struggle to identify yourself when none of the usual labels quite fit! But I don't have to be a complete mystery - at least I can still say I'm "Christian" or "Protestant"
Unfortunately, that means that those of us with our reputations on the line are the ones who have the most skin in the game to keep people from doing stupid unilateralist things that make everyone in the community look bad.
Surely if someone doesn't identify as an EA, their actions incur less reputational risk for the movement?
I'm 60% sure that LessWrong people use the term "Moloch" in almost exactly the same way as social justice people use the term "kyriarchy" (or "capitalist cis-hetero patriarchy").
I might program my browser to replace "Moloch" with "kyriarchy". Might make Christian Twitter confusing though.
I have often struggled to get started on projects that are particularly important to me so I thought I'd jot down a couple ways I handle procrastination.
Check if I actually want to do the project. Sometimes I like the idea of the project but don't actually want to do it (maybe I can post the idea here instead), or I'm conflicted because working on this task would conflict with my other values (can I change the plan so it meets my needs more fully?).
Check if I have an actually realistic plan. My subconscious is better at expected value calculations than I am and will not go forward with an unpleasant project that is doomed to fail. Sometimes I'm procrastinating because deep down I know this plan would never work.
Lower the stakes. If I think "I'm going to write the perfect blog post, convince everyone to become EA, and save hundreds of lives," that can easily turn into "I have to write the perfect blog post or else I might miss out on convincing someone and people will literally die." That mindset does not help me to produce my best work. A better approach is to remember that perfect is the enemy of done, and take things in stages - for example, writing a first draft and sharing it with a friend for comments.
"FERGUSON: I think the problem is that we are haunted by doomsday scenarios because they’re seared in our subconscious by religion, even though we think we’re very secular. We have this hunch that the end is nigh. The world is going to end in 12 years, or no, it must be 10. So I think part of the problem of modernity is that we’re still haunted by the end time.
We also have the nasty suspicion — this is there in Nick Bostrom’s work — that we’ve created a whole bunch of technologies that have actually increased the probability rather than reduced the probability of an extinction-level event. On the other hand, we’re told that there’s a singularity in prospect when all the technologies will come together to produce superhuman beings with massively extended lifespans and the added advantage of artificial general intelligence.
The epistemic problem, as I see it — Ian Morris wrote this in one of his recent books — is: which is the scenario? Extinction-level events or the singularity? That seems a tremendously widely divergent set of scenarios to choose from. I sense — perhaps this is just the historian's instinct — that each of these scenarios is, in fact, a very low probability indeed, and that we should spend more time thinking about the more likely scenarios that lie between them.
Your essay, which I was prompted to read before our conversation, about the epistemic problem and consequentialism set me thinking about work I’d done on counterfactual history, for which I would have benefited from reading that essay sooner.
I think that if you ask what are the counterfactuals of the future, we spend too much time thinking about the quite unlikely scenarios of the end of the world through climate change or some other calamity of the sort that Bostrom talks about, or some extraordinary leap forward. I can’t help feeling that these are — not that we can attach probabilities; they lie in the realm of uncertainty — but they don’t seem likely scenarios to me.
I think we’ll end up with something that’s rather more mundane, and perhaps a relief if we’re really serious about the end of the world, or perhaps a disappointment if we’re serious about the singularity."
What about donor coalitions instead of donor lotteries?
Instead of 50 people putting $2000 into a lottery, you could have groups of 5-10 putting $2000 into a pot that they jointly agree where to distribute.
Pros:
-People might be more invested in the decision, but wouldn't have to do all the research by themselves.
-Might build an even stronger sense of community. The donor coalition could meet regularly before the donation to decide where to give, and meet up after the donation for updates from the charity.
-Avoids the unilateralist's curse.
-Less legally fraught than a lottery.
Cons:
-Time consuming for all members, not just a few.
-Decision-making by committee often leads to people picking 'safe', standard options.
I'm considering donating to the Centre for Women's Justice. With a budget of about £300k last year, they have undertaken strategic litigation against the government, Crown prosecutors, etc. for mismanagement of sexual assault cases. The cases seem well chosen to raise the issue on the political agenda. I think more rapists being successfully prosecuted would have a very positive impact, so I'm excited to see this work. I'm planning to email them soon.
https://www.centreforwomensjustice.org.uk/strategic-plan
An odd observation: He cites someone who's done such stuff before -- John Nolt, a philosopher. He himself is professor of the psychology of music. I think the calculations of both of them are extremely useful (even if extremely speculative). But there's a big question here: what prevented *scientists* from offering such numbers? Are they too afraid of publishing guesstimates? Does it not occur to them that these numbers are utterly relevant for the debate?
I'm becoming concerned that the title "EA-aligned organisation" is doing more harm than good. Obviously it's pointing at something real and you can expect your colleagues to be familiar with certain concepts, but there's no barrier to calling yourself an EA-aligned organisation, and in my view some are low or even negative impact. The fact that people can say "I do ops at an EA org" and be warmly greeted as high status even if they could do much more good outside EA rubs me the wrong way. If people talked about working at a "high-impact organisation" instead, that would push community incentives in a good way I think.
I have exactly the opposite intuition (which is why I've been using the term "EA-aligned organization" throughout my writing for CEA and probably making it more popular in the process).
"EA-aligned organization" isn't supposed to mean "high-impact organization". It's supposed to mean "organization which has some connection to the EA community through its staff, or being connected to EA funding networks, etc."
This is a useful concept because it's legible in a way impact often isn't. It's easy to tell whether an org has a grant from EA Funds/Open Phil, and while this doesn't guarantee their impact, it does stand in for "some people at the community vouch for their doing interesting work related to EA goals".
I really don't like the term "high-impact organization" because it does the same sneaky work as "effective altruist" (another term I dislike). You're defining yourself as being "good" without anyone getting a chance to push back, and in many cases, there's no obvious way to check whether you're telling the truth.
Consider questions like these:
Is Amazon a high-impact organization? (80K lists jobs at Amazon on their job board, so... maybe? I guess certain jobs at Amazon are "high-impact", but which ones? Only the ones 80K posts?)
Is MIRI a high-impact organization? (God knows how much digital ink has been spilled on this one)
It seems like there's an important difference between MIRI and SCI on the one hand, and Amazon and Sunrise on the other. The first two have a long history of getting support, funding, and interest from people in the EA movement; they've given talks at EA Global. This doesn't necessarily make them more impactful than Amazon and Sunrise, but it does mean that working at one of those orgs puts you in the category of "working at an org endorsed by a bunch of people with common EA values".
*****
The fact that people can say "I do ops at an EA org" and be warmly greeted as high status even if they could do much more good outside EA rubs me the wrong way.
I hope this doesn't happen very often; I'd prefer that we greet everyone with equal warmth and sincere interest in their work, as long as the work is interesting. Working at an EA-aligned org really shouldn't add much signaling info to the fact that someone has chosen to come to your EA meetup or whatever.
That said, I sympathize with theoretical objections like "how am I supposed to know whether someone would do more good in some other job?" and "I'm genuinely more interested in hearing about someone's work helping to run [insert org] than I would if they worked in finance or something, because I'm familiar with that org and I think it does cool stuff".
Terms that seem to have some of the good properties of "EA-aligned" without running into the "assuming your own virtue" problem:
"Longtermist" (obviously not synonymous with "EA-aligned", but it accurately describes a subset of orgs within the movement)
"Impact-driven" or something like that (indicating a focus on impact without insisting that the focus has led to more impact)
"High-potential" or "promising" (indicating that they're pursuing a cause area that looks good by standard EA lights, without trying to assume success — still a bit self-promotional, though)
Actually referring to the literal work being done, e.g. "Malaria prevention org", "Alternative protein company"
...but when you get at the question of what links together orgs that work on malaria, alternative proteins, and longtermist research, I think "EA-aligned" is a more accurate and helpful descriptor than "high-impact".
Oh, I would have thought it's the other way around - sometimes people don't want to be known as EA-aligned because that can have negative connotations (being too focused on numbers, being judgmental of "what's worthy", slightly cult-like etc). I think "high-impact organisation" may be a good idea as well.
Sometimes I see criticisms of EA that argue, "Historically, groups of white people deciding the direction of the future hasn't been great for groups who aren't represented in that decision-making process."
The responses I see to this are usually something like, "Don't worry about it, we're altruists." But I feel like this would be a good opportunity to take the outside view and do some proper forecasting.
Can you elaborate on the criticism? There have been a ton of bad decisions made by all kinds of groups affecting all kinds of other groups who have not been involved in the decision-making process. The most charitable argument I can come up with is something like this:
Group X has acted badly in some way.
EA is sufficiently similar to Group X.
Sufficiently similar groups are likely to act the same.
C: EA is likely to act badly in some way.
So group X needs to be specified and "white people" seems far too general.
I agree, except I think premise 1 implies something more like "Group X acts badly in about 80% of examples of Situation Y."
I think the criticism tends to be something like "white people" or "rich white men", which I agree is very vague. I'm really keen we get better at predicting how likely EA is to screw up in particular ways by finding a better reference class.
My model is that most of the people applying for those jobs are not interested in x risk reduction. So if I land one of those jobs, I'm one of a very few people in the world doing government ai policy with an eye towards x risk reduction. So you could say "AI policy with an eye towards x risk reduction" is neglected, but if I were to say "AI policy" is neglected that's what I'd mean. And then, something something pr something something and I think you have why it's not more clear.
It's true that few civil servants are currently thinking about x-risks from AI.
If you believe artificial general intelligence won't emerge for several decades, you might be happy that there will be hundreds of experts with decades worth of experience at that point, and not worry about doing it yourself.
There's a lot of "criticize EA" energy in the air this month. It can be useful and energising. I'm seeing more criticisms than usual produced, and they're getting more engagement and changing more minds than usual.
It makes me a little nervous that criticism can get more traction with less evidence than usual right now. I'm trying to be conciously less critical than usual for the moment, and perhaps save any important criticisms for the new year.
What is the global burden of menopause?
Symptoms include hot flushes, difficulty sleeping, vaginal irritation or pain, headaches, and low mood or anxiety. These symptoms normally last around five years, although 10% of women experience them for up to 12 years.
I couldn't see a Disability-Adjusted Life Years rating for menopause. I'd imagine that it might have a similar impact to mild depression, which in 2004 was rated as 0.140.
Currently, about 200 million people are going through menopause, 80% of whom are experiencing symptoms. I'd expect this to increase to 300 million by 2050.
A leading menopause charity in the UK has an annual budget of less than £500k, despite the 4 million British women going through menopause, so I think menopause treatment in the UK could be improved with relatively little money.*
I'm not sure that would create very helpful spillovers to countries where Hormone Replacement Therapy isn't cheaply accessible. On the other hand, online Cognitive Behavioral Therapy is starting to be used to treat some symptoms, and that could probably be scaled up more easily.
*Improving diagnosis and doctor awareness of treatment options seems tractable, but there are some supply chain problems right now which seem less tractable. https://www.bbc.co.uk/news/health-49308083
I emailed four menopause researchers to get their views on the best way to help women suffering from menopause symptoms. Two have responded so far. Both suggested charities they are affiliated with.
The first suggested the North American Menopause Society. It seems quite reputable. It focuses on the education of women and health professionals in North America. I'm sure there's a lot of work to be done there, but it seems pretty unlikely to do more good than healthcare in the developing world.
The second suggested the International Menopause Society. It's been around for a few decades and has an annual budget of around £300k. They also focus on education of women and healthcare professionals, but on a global scale. They're currently working to translate more educational materials into various languages. They also sponsor young doctors from the developing world to attend educational conferences, and they sponsor one young doctor to do research into menopause each year.
This second researcher also indicated that a lot of research into menopause treatment is already being funded, and treatment is widely available in countries with a decent healthcare system, so it would be better to direct my donation towards education or more basic research about how menopause affects the body (eg the link between menopause and obesity).
I really like the idea of working on a women's issue in a global context. I think women's health has historically been neglected, and IMS seems large enough to be reputable while being small enough that my money would matter to them. I also care a lot about justice and feminism.
Still, I get the feeling that sponsoring a training course for doctors and nurses to be translated into Arabic might not do as much good as buying bednets. It's a really tricky decision! I'm going to think about it for a bit.
I'm feeling most positive about translating materials for healthcare professionals, so if I decide to move forward, my next step will probably be asking for metrics on their training course (how many healthcare professionals registered, how many completed it, etc). I welcome any thoughts on how I can compare IMS with the Against Malaria Foundation.
Me too. I'm also wondering about the global burden of period pain, and the tractability of reducing it. Similar to menopause (and non-gender-specific issues such as ageing), one might expect this to be neglected because of an "it's natural and not a disease, so we can't or shouldn't do anything about it" fallacy.
Do you have any updates here?
After getting more info, I decided it wasn't so important and neglected as to be competitive with the Against Malaria Foundation. Thanks for following up!
Thanks, that's good to hear enough people seem to be working on it :)
If you have some notes on it you can share, it would be nice if you could collect them and add them to a post together with these shortform posts so that it could be tagged and more discoverable 🙂 (no need to edit anything, and even this bottom line seems important)
Based on a couple of informal Twitter polls, it looks like more candidates prefer feedback to financial compensation for work trials, especially if the feedback is quite specific.
https://twitter.com/EAheadlines/status/1487578467889786883?s=20&t=ag59VuOhXKfNK3KXoRy9wg
I regularly see EAs misrepresenting the impact of their "policy change" donations.
They'll say something like "$X can save a ton of carbon," but when you look at the details, they're only talking about the cost of lobbying IN ORDER TO INCREASE GOVERNMENT SPENDING. They do not include the government spending itself in the cost of saving a ton of carbon.
This is very misleading.
I would love to read more posts that take an assumption or belief and ask "if this were true, what would that mean for EA?"
Examples:
-If [choose one: dignity/fairness/beauty/freedom] is intrinsically valuable, what does that mean for EA? How does that affect our cause areas, charity and career recommendations, and community norms?
-If we assume that input from a wide variety of people provides robustly better outcomes when it comes to representing humanity's values, what would that mean for far future-focused work?
-If we assume the EA movement has ~$200 billion in assets by 2030, such that funders are looking to donate $10+ billion per year, should we be expanding into new cause areas?
I regularly see people write arguments like "One day, we'll colonize the galaxy - this shows why working on the far future is so exciting!"
I know the intuition this is trying to trigger is bigger = more impact = exciting opportunity.
The intuition it actually triggers for me is expansion and colonization = trying to build an empire = I should be suspicious of these people and their plans.
Do you consider this intuition to be a reason that people should be wary of making this type of argument? Or maybe specifically avoid the word "colonize"?
Maybe something like "populate the galaxy" would be better, as it emphasizes that there are no native populations whose members would be harmed by space colonization?
Or "fill the universe/galaxy with life".
I'm really glad that people have done the work to identify good donation options for people who are particularly focused on COVID-19. However, I don't think most people in EA should be focusing on donating to COVID-19 efforts. I'm particularly concerned that global health charities are getting less attention in the EA community than usual.
Who should pay the cost of Googling studies on the EA Forum?
Many EA Forum posts have minimal engagement with relevant academic literature
If you see a Forum post that doesn't engage with literature you think is relevant, you could make a claim from memory without looking up a citation, but there's a reasonable chance you'll be wrong.
Many people say they'd rather see an imperfect post or comment than not have it at all.
But people tend to remember an original claim, even if it's later debunked.
Maybe the best option is to phrase my comment as a question: "Have you looked at the literature on X?"
3. was discussed here. My impression of that discussion is that many of the forum readers thought that it's important that one familiarises oneself with the literature before commenting. Like I say in my comment, that's certainly my view.
I agree that too many EA Forum posts fail to appropriately engage with relevant literature.
I've always thought there's a lower bar for commenting than for a top-level post, but maybe both should be reasonably high (you should be able to provide some evidence for your claim in a comment, and show some actual engagement with relevant literature in a post, for example).
I was listening to the 80,000 Hours podcast today and heard Ben Todd say, "The issue is [longtermism is] a new idea."
I've seen this view around EA a few times. It might be true about a certain narrow form of longtermism. It's NOT true of longtermism broadly.
The first time I was introduced to long-termist ideas was in a university Native Studies class, discussing the traditional teaching that the current generation should focus on the well-being of seven generations in the future.
Cool. Any special reason for 7?
There's even a specific term I can't recall for intentional changes to the environment that a social group would make to domesticate a landscape and provide services for future generations. It will take me some time to find it.
On the other hand, setting aside the specifics of strong longtermism, I guess that the conjunction of these ideas is pretty recent: a) concern for humanity as a whole, b) a scope longer than 150 years, c) the existence of a trade-off between present and future welfare, d) the balance being tipped in favor of the long term. [epistemic status: just an insight; it would take me too long to look for a counter-example]
Question to look into later: How has the EA community affected the charities it has donated to over the past decade?
Some charities that seem like they'd be able to provide especially good feedback on this:
There are some pretty good reasons to keep your identity small. http://www.paulgraham.com/identity.html
But I see people using that as an excuse to not identify as... anything. As in, they avoid affiliating themselves with any social movements, sports teams, schools, nation-states, professions, etc.
It can be annoying and confusing when you ask someone "are you an EA?" or "are you a Christian?" or "are you British?" and they won't give you a straight answer. It's partly annoying because I'm very rationally trying to make some shortcut assumptions about them (if they're an EA, they've probably heard of malaria) and they're preventing me from doing that.
But I also sometimes get the sense that they're trying to protect themselves by not affiliating with a movement, and I find that a bit annoying. I feel like they're a free rider.
What are they trying to protect themselves from? Effectively, they're protecting their reputation. This could be from an existing negative legacy of the group, e.g. if they don't identify as British (even though they're a British citizen), maybe they can dodge questions about the ongoing negative effects of the British Empire. They could also be hedging against future negative reputation, e.g. if I call myself an EA but then someone attempts a military coup in the name of EA, I would look bad. By avoiding declaring yourself a group member, you can sometimes avoid your reputation sinking when your chosen group makes bad choices.
Unfortunately, that means that those of us with our reputations on the line are the ones who have the most skin in the game to keep people from doing stupid unilateralist things that make everyone in the community look bad.
I would prefer it if people would take that big scary step of saying they're an EA or Christian or Brit or whatever, and then put in the work to improve their community's reputation. Obviously I'm open to hearing reasons why people shouldn't identify as members of groups, though.
My perspective (which may not differ too much from yours -- just thinking out loud, Shortform-style):
I try to avoid using "effective altruist" as a noun for what I think of as "members of the EA community" or "people interested in effective giving/work", because I want the movement to feel very open to people who aren't ready to label themselves in that way.*
For example:
I support anyone who wants to identify themselves as an effective altruist, and I'm comfortable referring to myself as such, but I don't feel any desire to push people toward adopting that term if their inclination is to answer "are you an EA?" by talking about their values and goals, rather than their group affiliation.
*There's also the tricky bit where calling oneself "effective" could be taken to indicate that you're relatively confident that you're having a lot of impact compared to your available resources, which many people in the community aren't, especially if they focus on more exploratory work/cause areas.
I don't think having people label themselves with a noun - "Christian", "dancer", "student" - necessarily makes other people uncomfortable associating with them. I don't think it's wrong for people who aren't Christians to attend church, but I also don't think nobody referring to themselves as Christians would be a useful way to make people more comfortable at church. If you're worried about people being uncomfortable at EAG, I think the name "EA" is the least likely to be causing the problem.
I don't think there's anything necessary or inevitable about it! My sentiments reflect things I've seen other people say (e.g. "I don't know if I count as an 'effective altruist', I'm new here/don't have belief X"), but how people feel about this and other identity questions is (of course) all over the map. And as I said, I have no problem with anyone referring to themselves as an effective altruist -- I just don't have a problem with the opposite, either.
To use the church analogy: If some people at a church call themselves "Christians", others "Southern Baptists", others "religious seekers", others "spiritual", and still others "agnostic/uncertain", I wouldn't expect that to make things less comfortable for newcomers. (Though attending Unitarian church as a kid might have left me biased in this area!)
I agree that there are many reasons someone might feel uncomfortable at a conference or community event, and I think we both see the particular question of when to use "effective altruist" as just one tiny facet of community cohesion.
In some cases, I think people feel that they have a nuanced position that isn't captured by broad labels. I think that reasoning can go too far, however: if that argument is pushed far enough, no one will count as a socialist, postmodernist, effective altruist, etc. And as you imply, these kinds of broad categories are useful, even while in some respects imperfect.
Yep, makes sense to me! It's difficult for me to identify with a particular denomination of Christianity because I grew up at a non-denominational church and since then I've attended 3 different denominations. So I definitely get the struggle to identify yourself when none of the usual labels quite fit! But I don't have to be a complete mystery - at least I can still say I'm "Christian" or "Protestant"
Surely if someone doesn't identify as an EA, their actions incur less reputational risk for the movement?
Yeah that's probably true - I guess it goes both ways.
I'm 60% sure that LessWrong people use the term "Moloch" in almost exactly the same way as social justice people use the term "kyriarchy" (or "capitalist cis-hetero patriarchy").
I might program my browser to replace "Moloch" with "kyriarchy". Might make Christian Twitter confusing though.
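For what it's worth, the browser side of this is easy to sketch as a userscript (e.g. for Tampermonkey or Greasemonkey). Everything here is illustrative — the script name and the `swapTerms` helper are made up, not from any existing extension:

```javascript
// ==UserScript==
// @name     Moloch to kyriarchy
// @match    *://*/*
// ==/UserScript==

// Swap the term wherever it appears as a whole word,
// so e.g. "Molochian" is left alone.
function swapTerms(text) {
  return text.replace(/\bMoloch\b/g, "kyriarchy");
}

// Walk only text nodes, so URLs, attributes, and scripts stay untouched.
// Guarded so swapTerms can also be exercised outside a browser.
if (typeof document !== "undefined") {
  const walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT);
  let node;
  while ((node = walker.nextNode())) {
    node.nodeValue = swapTerms(node.nodeValue);
  }
}
```

Restricting the replacement to text nodes is what keeps the page from breaking — rewriting raw HTML would also hit links and scripts that happen to contain the word.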
Reducing procrastination on altruistic projects:
I have often struggled to get started on projects that are particularly important to me so I thought I'd jot down a couple ways I handle procrastination.
I recently wrote a post on procrastination related to my EA work here. Feel free to just check out the references at the end.
Doom: The Politics of Catastrophe by Niall Ferguson examines the way governments have handled catastrophes in the past, with widely varying results.
I enjoyed his podcast with Tyler Cowen on it, which touches on AI risk:
https://conversationswithtyler.com/episodes/niall-ferguson/
"FERGUSON: I think the problem is that we are haunted by doomsday scenarios because they’re seared in our subconscious by religion, even though we think we’re very secular. We have this hunch that the end is nigh. The world is going to end in 12 years, or no, it must be 10. So I think part of the problem of modernity is that we’re still haunted by the end time.
We also have the nasty suspicion — this is there in Nick Bostrom’s work — that we’ve created a whole bunch of technologies that have actually increased the probability rather than reduced the probability of an extinction-level event. On the other hand, we’re told that there’s a singularity in prospect when all the technologies will come together to produce superhuman beings with massively extended lifespans and the added advantage of artificial general intelligence.
The epistemic problem, as I see it is — Ian Morris wrote this in one of his recent books — which is the scenario? Extinction-level events or the singularity? That seems a tremendously widely divergent set of scenarios to choose from. I sense that — perhaps this is just the historian's instinct — that each of these scenarios is, in fact, a very low probability indeed, and that we should spend more time thinking about the more likely scenarios that lie between them.
Your essay, which I was prompted to read before our conversation, about the epistemic problem and consequentialism set me thinking about work I’d done on counterfactual history, for which I would have benefited from reading that essay sooner.
I think that if you ask what are the counterfactuals of the future, we spend too much time thinking about the quite unlikely scenarios of the end of the world through climate change or some other calamity of the sort that Bostrom talks about, or some extraordinary leap forward. I can’t help feeling that these are — not that we can attach probabilities; they lie in the realm of uncertainty — but they don’t seem likely scenarios to me.
I think we’ll end up with something that’s rather more mundane, and perhaps a relief if we’re really serious about the end of the world, or perhaps a disappointment if we’re serious about the singularity."
Local group idea:
What about donor coalitions instead of donor lotteries?
Instead of 50 people each putting $2000 into a lottery, you could have groups of 5-10 each putting $2000 into a pot that they jointly decide how to distribute.
Pros:
-People might be more invested in the decision, but wouldn't have to do all the research by themselves.
-Might build an even stronger sense of community. The donor coalition could meet regularly before the donation to decide where to give, and meet up after the donation for updates from the charity.
-Avoids the unilateralist's curse.
-Less legally fraught than a lottery.
Cons:
-Time consuming for all members, not just a few.
-Decision-making by committee often leads to people picking 'safe', standard options.
I'm considering donating to the Centre for Women's Justice. With a budget of about £300k last year, they have undertaken strategic litigation against the government, Crown prosecutors, etc. for mismanagement of sexual assault cases. The cases seem well chosen to raise the issue on the political agenda. I think more rapists being successfully prosecuted would have a very positive impact, so I'm excited to see this work. I'm planning to email them soon. https://www.centreforwomensjustice.org.uk/strategic-plan
This is the first serious attempt I've seen at estimating deaths from climate change.
https://www.frontiersin.org/articles/10.3389/fpsyg.2019.02323/full#h15
Thanks a lot for this pointer!
An odd observation: He cites someone who's done such stuff before -- John Nolt, a philosopher. He himself is a professor of the psychology of music. I think the calculations of both of them are extremely useful (even if extremely speculative). But there's a big question here: what prevented *scientists* from offering such numbers? Are they too afraid of publishing guesstimates? Does it not occur to them that these numbers are utterly relevant for the debate?
That's a really good question! Maybe there just genuinely is too much uncertainty for any estimates, in their views.
I'd honestly even be interested in deaths currently attributable to climate change, but I'm sure even that is a hard problem.
Estimates of the mortality of COVID-19: https://www.imperial.ac.uk/media/imperial-college/medicine/sph/ide/gida-fellowships/Imperial-College-2019-nCoV-severity-10-02-2020.pdf
I'm becoming concerned that the title "EA-aligned organisation" is doing more harm than good. Obviously it's pointing at something real and you can expect your colleagues to be familiar with certain concepts, but there's no barrier to calling yourself an EA-aligned organisation, and in my view some are low or even negative impact. The fact that people can say "I do ops at an EA org" and be warmly greeted as high status even if they could do much more good outside EA rubs me the wrong way. If people talked about working at a "high-impact organisation" instead, that would push community incentives in a good way I think.
I have exactly the opposite intuition (which is why I've been using the term "EA-aligned organization" throughout my writing for CEA and probably making it more popular in the process).
"EA-aligned organization" isn't supposed to mean "high-impact organization". It's supposed to mean "organization which has some connection to the EA community through its staff, or being connected to EA funding networks, etc."
This is a useful concept because it's legible in a way impact often isn't. It's easy to tell whether an org has a grant from EA Funds/Open Phil, and while this doesn't guarantee their impact, it does stand in for "some people in the community vouch for their doing interesting work related to EA goals".
I really don't like the term "high-impact organization" because it does the same sneaky work as "effective altruist" (another term I dislike). You're defining yourself as being "good" without anyone getting a chance to push back, and in many cases, there's no obvious way to check whether you're telling the truth.
Consider questions like these:
It seems like there's an important difference between MIRI and SCI on the one hand, and Amazon and Sunrise on the other. The first two have a long history of getting support, funding, and interest from people in the EA movement; they've given talks at EA Global. This doesn't necessarily make them more impactful than Amazon and Sunrise, but it does mean that working at one of those orgs puts you in the category of "working at an org endorsed by a bunch of people with common EA values".
*****
I hope this doesn't happen very often; I'd prefer that we greet everyone with equal warmth and sincere interest in their work, as long as the work is interesting. Working at an EA-aligned org really shouldn't add much signaling info to the fact that someone has chosen to come to your EA meetup or whatever.
That said, I sympathize with theoretical objections like "how am I supposed to know whether someone would do more good in some other job?" and "I'm genuinely more interested in hearing about someone's work helping to run [insert org] than I would if they worked in finance or something, because I'm familiar with that org and I think it does cool stuff".
Terms that seem to have some of the good properties of "EA-aligned" without running into the "assuming your own virtue" problem:
...but when you get at the question of what links together orgs that work on malaria, alternative proteins, and longtermist research, I think "EA-aligned" is a more accurate and helpful descriptor than "high-impact".
Oh, I would have thought it's the other way around - sometimes people don't want to be known as EA-aligned because that can have negative connotations (being too focused on numbers, being judgmental of "what's worthy", slightly cult-like etc). I think "high-impact organisation" may be a good idea as well.
Sometimes I see criticisms of EA that argue, "Historically, groups of white people deciding the direction of the future hasn't been great for groups who aren't represented in that decision-making process."
The responses I see to this are usually something like, "Don't worry about it, we're altruists." But I feel like this would be a good opportunity to take the outside view and do some proper forecasting.
Can you elaborate on the criticism? There have been a ton of bad decisions made by all kinds of groups, affecting all kinds of other groups who have not been involved in the decision-making process. The most charitable argument I can come up with is something like this:
C: EA is likely to act badly in some way.
So group X needs to be specified and "white people" seems far too general.
I agree, except I think stage 1 implies something more like "Group X acts badly in about 80% of examples of Situation Y."
I think the criticism tends to be something like "white people" or "rich white men", which I agree is very vague. I'm really keen we get better at predicting how likely EA is to screw up in particular ways by finding a better reference class.
AI policy is probably less neglected than you think it is.
There are more than 50 AI policy jobs in the UK government. When one's advertised, it gets 50-100 applicants.
The Social Sciences and Humanities Research Council of Canada is really excited about funding AI policy research. http://www.sshrc-crsh.gc.ca/funding-financement/programs-programmes/fellowships/doctoral-doctorat-eng.aspx
AI policy is very important, but at this point it's also very mainstream.
My model is that most of the people applying for those jobs are not interested in x-risk reduction. So if I land one of those jobs, I'm one of very few people in the world doing government AI policy with an eye towards x-risk reduction. So you could say "AI policy with an eye towards x-risk reduction" is neglected, but if I were to say "AI policy" is neglected, that's what I'd mean. And then, something something PR something something, and I think you have why it's not more clear.
It's true that few civil servants are currently thinking about x-risks from AI.
If you believe artificial general intelligence won't emerge for several decades, you might be happy that there will be hundreds of experts with decades worth of experience at that point, and not worry about doing it yourself.