
I invite you to ask anything you’re wondering about that’s remotely related to effective altruism. There’s no such thing as a question too basic.

Try to ask your first batch of questions by Monday, October 17  (so that people who want to answer questions can know to make some time around then).

Everyone is encouraged to answer (see more on this below). There’s a small prize for questions and answers. [Edit: prize-winning questions and answers are announced here.]

This is a test thread — we might try variations on it later.[1]

How to ask questions

Ask anything you’re wondering about that has anything to do with effective altruism.

More guidelines:

  1. Try to post each question as a separate "Answer"-style comment on the post.
  2. There’s no such thing as a question too basic (or too niche!).
  3. Follow the Forum norms.[2]

I encourage everyone to view asking questions that you think might be “too basic” as a public service; if you’re wondering about something, others might, too.

Example questions

  • I’m confused about Bayesianism; does anyone have a good explainer?
  • Is everyone in EA a utilitarian?
  • Why would we care about neglectedness?
  • Why do people work on farmed animal welfare specifically vs just working on animal welfare?
  • Is EA an organization?
  • How do people justify working on things that will happen in the future when there’s suffering happening today?
  • Why do people think that forecasting or prediction markets work? (Or, do they?)

How to answer questions

Anyone can answer questions, and there can (and should) be multiple answers to many of the questions. I encourage you to point people to relevant resources — you don’t have to write everything from scratch!

Norms and guides:

  • Be generous and welcoming (no patronizing).
  • Honestly share your uncertainty about your answer.
  • Feel free to give partial answers or point people to relevant resources if you can’t or don’t have time to give a full answer.
  • Don’t represent your answer as an official answer on behalf of effective altruism.
  • Keep to the Forum norms.

You should feel free and welcome to vote on the answers (upvote the ones you like!). You can also give answers to questions that already have an answer, or reply to existing answers, especially if you disagree.

The (small) prize

This isn’t a competition, but just to help kick-start this thing (and to celebrate excellent discussion at the end), the Forum team will award $100 each to my 5 favorite questions, and $100 each to my 5 favorite answers (questions posted before Monday, October 17, answers posted before October 24).

I’ll post a comment on this post with the results, and edit the post itself to list the winners. [Edit: prize-winning questions and answers are announced here.]


Maybe don’t ask all of these, as they’re not quite related to EA, but this is sort of what I want the comment section of this post to be like. Source.
  1. ^

     Your feedback is very welcome! We’re considering trying out themed versions in the future; e.g. “Ask anything about cause prioritization” or “Ask anything about AI safety.”

    We’re hoping this thread will help get clarity and good answers, counter some impostor syndrome that exists in the community (see 1 and 2), potentially rediscover some good resources, and generally make us collectively more willing to ask about things that confuse us.

  2. ^

     If I think something is rude or otherwise norm-breaking, I’ll delete it.


78 Answers

Does anyone know why the Gates Foundation doesn't fill the GiveWell top charities' funding gaps?

One recent paper suggests that an estimated additional $200–328 billion per year is required for primary care and public health interventions from 2020 to 2030 in 67 low-income and middle-income countries, and that this would save 60 million lives. But if you look at just the amount needed in low-income countries for health care - $396B - and divide by the total 16.2 million deaths averted by that, it suggests an average cost-effectiveness of ~$25k/death averted.
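As a quick sanity check of that division (a sketch only; the $396B and 16.2 million figures are the ones quoted above, not independent estimates):

```python
# Rough cost-effectiveness implied by the figures quoted above.
additional_spending_usd = 396e9   # additional health-care spending needed in low-income countries
deaths_averted = 16.2e6           # total deaths averted by that spending

cost_per_death_averted = additional_spending_usd / deaths_averted
print(f"${cost_per_death_averted:,.0f} per death averted")  # ~$24,400, i.e. roughly the ~$25k cited above
```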

Other global health interventions can be similarly or more effective: a 2014 Lancet article estimates that, in low-income countries, it costs $4,205 to avert a death through extra spending on health[22]. Another analysis suggests that this trend will continue and from 2015-2030 additional spending in low-income countries will avert a death for $4,000-11,000[23].

For comparison, in high-income countries, governments spend $6.4 million to prevent a death (a measure called “value of a statistical life”)[24]. This is not surprising given that the poorest countries spend less than $100 per person per year on health on average, while high-income countries spend almost $10,000 per person per year[25].

Giv... (read more)

-6
Henry Howard🔸

Could you post this as a new forum post rather than a link to a Google doc? I think it's a question that gets asked a lot, and it would be good to have an easy-to-read post to link to.

5
EdoArad
Agree! Hauke, let me know if you'd want me to do that on your behalf (say, using admin permissions to edit that previous post to add the doc content) if it'll help :)
2
Hauke Hillebrandt
Yes, that's fine. 
7
EdoArad
Edited to include the text. Did only a little bit of formatting, and added the appendix as is, so it's not perfect. Let me know if you have any issues, requests, or what not :) 

This is a great question, and the same should be asked of governments (as in: "why doesn't the UK aid budget simply all go to mosquito nets?")

A likely explanation for why the Gates Foundation doesn't give to GiveWell's top charities is that those charities don't currently have much room for more funding (GiveWell had to roll over funding last year because they couldn't spend it all. A recent blog post suggests they may have more room for funding soon: https://blog.givewell.org/2022/07/05/update-on-givewells-funding-projections/)

A likely explanation for why ... (read more)

When I read Critiques of EA that I want to read, one very concerning section seemed to be "People are pretty justified in their fears of critiquing EA leadership/community norms."

1) How seriously is this concern taken by those that are considered EA leadership, major/public facing organizations, or those working on community health? (say, CEA, OpenPhil, GiveWell, 80000 hours, Forethought, GWWC, FHI, FTX) 

2a) What plans and actions have been taken or considered?
2b) Do any of these solutions interact with the current EA funding situation and distribution? Why/why not?

3) Are there publicly available compilations of times where EA leadership or major/public facing organizations have made meaningful changes as a result of public or private feedback?

(Additional note: there were a lot of publicly supportive comments [1] on the Democratising Risk - or how EA deals with critics post, yet it seems that, despite these public comments, the post's author was disappointed by what came out of it. It's unclear whether the recent Criticism/Red-teaming contest was a result of these events, though it would be useful to know which organizations considered or adopted any of the suggestions listed[2] or alternate strategies to mitigate concerns raised, and the process behind this consideration. I use this as an example primarily because it was a higher-profile post that involved engagement from many who would be considered "EA Leaders".)

  1. ^

    1, 2, 3, 4

  2. ^

    "EA needs to diversify funding sources by breaking up big funding bodies and by reducing each orgs’ reliance on EA funding and tech billionaire funding, it needs to produce academically credible work, set up whistle-blower protection, actively fund critical work, allow for bottom-up control over how funding is distributed, diversify academic fields represented in EA, make the leaders' forum and funding decisions transparent, stop glorifying individual thought-leaders, stop classifying everything as info hazards…amongst other structural changes."

Thanks for asking this. I can chime in, although obviously I can't speak for all the organizations listed, or for "EA leadership." Also, I'm writing as myself — not a representative of my organization (although I mention the work that my team does). 

  1. I think the Forum team takes this worry seriously, and we hope that the Forum contributes to making the EA community more truth-seeking in a way that disregards status or similar phenomena (as much as possible). One of the goals for the Forum is to improve community norms and epistemics, and this (criticism of established ideas and entities) is a relevant dimension; we want to find out the truth, regardless of whether it's inconvenient to leadership. We also try to make it easy for people to share concerns anonymously, which I think makes it easier to overcome these barriers.
    1. I personally haven't encountered this problem (that there are reasons to be afraid of criticizing leadership or established norms) — no one ever hinted at this, and I've never encountered repercussions for encouraging criticism, writing some myself, etc. I think it's possible that this happens, though, and I also think it's a problem even if people in the commu
... (read more)

Are there publicly available compilations of times where EA leadership or major/public facing organizations have made meaningful changes as a result of public or private feedback?

Some examples here: Examples of someone admitting an error or changing a key conclusion.

4
pseudonym
Thanks for the link! I think most examples in the post do not include the part about "as a result of public or private feedback", though I think I communicated this poorly. My thought process behind going beyond a list of mistakes and changes to including a description of how an issue was discovered or the feedback that prompted it[1] is that doing so may be more effective at allaying people's fears of critiquing EA leadership. For example, while mistakes and updates are documented, if you were concerned about, say, gender diversity (~75% men in senior roles) in the organization,[2] but you were an OpenPhil employee or someone receiving money from OpenPhil, would the contents of the post[3] you linked actually make you feel comfortable raising these concerns?[4] Or would you feel better if there was an explicit acknowledgement that someone in a similar situation had previously spoken up and contributed to positive change? I also think curating something like this could be beneficial not just for the EA community, but also for leaders and organizations who have a large influence in this space. I'll leave the rest of my thoughts in a footnote to minimize derailing the thread, but would be happy to discuss further elsewhere with anyone who has thoughts or pushbacks about this.[5]

  1. ^

     Anonymized as necessary

  2. ^

     I am not saying that I think OpenPhil in fact has a gender diversity problem (is 3/4 men too much? what about 2/3? what about 3/5? Is this even the right way of thinking about this question?), nor am I saying that people working in OpenPhil or receiving their funding don't feel comfortable voicing concerns. I am not using OpenPhil as an example because I believe they are bad, but because they seem especially important as both a major funder of EA and as folks who are influential in object-level discussions on a range of EA cause areas.

  3. ^

     Specifically, this would be Holden's Three Key Issues I've Chan

Why is scope insensitivity considered a bias instead of just the way human values work?

Quoting Kelsey Piper:

If I tell you “I’m torturing an animal in my apartment,” do you go “well, if there are no other animals being tortured anywhere in the world, then that’s really terrible! But there are some, so it’s probably not as terrible. Let me go check how many animals are being tortured.”

(a minute later)

“Oh, like ten billion. In that case you’re not doing anything morally bad, carry on.”

I can’t see why a person’s suffering would be less morally significant depending on how many other people are suffering. And as a general principle, arbitrarily bounding variables because you’re distressed by their behavior at the limits seems risky.

Not a philosopher, but scope sensitivity follows from consistency (either in the sense of acting similarly in similar situations, or maximizing a utility function). Suppose you're willing to pay $1 to save 100 birds from oil; if you would do the same trade again at a roughly similar rate (assuming you don't run out of money), your willingness to pay is roughly linear in the number of birds you save.

Scope insensitivity in practice is relatively extreme; in the original study, people were willing to pay $80 for 2000 birds and $88 for 200,000 birds. So if you ... (read more)
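To make the sub-linearity concrete, here is the per-bird valuation implied by the study numbers quoted above (a sketch; only the $80 / 2,000 and $88 / 200,000 figures are taken from the study):

```python
# Implied willingness to pay (WTP) per bird in the classic scope-insensitivity study.
wtp_small, birds_small = 80, 2_000
wtp_large, birds_large = 88, 200_000

per_bird_small = wtp_small / birds_small   # $0.04 per bird
per_bird_large = wtp_large / birds_large   # $0.00044 per bird

# If WTP were roughly linear in the number of birds saved, this ratio would be close to 1.
print(per_bird_small / per_bird_large)     # ~91: the implied per-bird value shrinks ~91x when 100x more birds are at stake
```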

1
P
I think the money-pump argument is wrong. You are practically assuming the conclusion. A scope insensitive person would negatively value the total number of bird deaths, or maybe positively value the number of birds alive. So that each death is less bad if other birds also die. In this case it doesn't make sense to talk about $1 per 100 avoided deaths in isolation.
3
Thomas Kwa
This doesn't follow for me. I agree that you can construct some set of preferences or utility function such that being scope-insensitive is rational, but you can do that for any policy.

Two empirical reasons not to take the extreme scope neglect in studies like the 2,000 vs 200,000 birds one as directly reflecting people's values.

First, the results of studies like this depend on how you ask the question. A simple variation which generally leads to more scope sensitivity is to present the two options side by side, so that the same people would be asked both about 2,000 birds and about the 200,000 birds (some call this "joint evaluation" in contrast to "separate evaluation"). Other variations also generally produce more scope sensitive resu... (read more)

2
Dan_Keys
A passage from Superforecasting: Note: in the other examples studied by Mellers & colleagues (2015), regular forecasters were less sensitive to scope than they should've been, but they were not completely insensitive to scope, so the Assad example here (40% vs. 41%) is unusually extreme.

Hm, I think that most of the people who participated in this experiment: 

three groups of subjects were asked how much they would pay to save 2,000 / 20,000 / 200,000 migrating birds from drowning in uncovered oil ponds. The groups respectively answered $80, $78, and $88. This is scope insensitivity or scope neglect: the number of birds saved—the scope of the altruistic action—had little effect on willingness to pay.

would agree after the results were shown to them that they were doing something irrational that they wouldn't endorse if aware of it. (Ex... (read more)

[anonymous]4
3
0

I think scope insensitivity could be a form of risk aversion over the difference you make in the world ("difference-making"), or is at least related to it. I explain here why I think that risk aversion over the difference you make is irrational even though risk aversion over states of the world is not.

I think it is basically not a bias in the way confirmation bias is, and anyone claiming otherwise is already presupposing linear aggregation of welfare. From a thing I wrote recently:

Scope neglect is not a cognitive bias like confirmation bias. I can want there to be ≥80 birds saved, but be indifferent about larger numbers: this does not violate the von Neumann-Morgenstern axioms (nor any other axiomatic systems that underlie alternatives to utility theory that I know of). Similarly, I can most highly value there being exactly 3 flowers in the vase o

... (read more)
4
Thomas Kwa
Anything is VNM-consistent if your utility function is allowed to take universe-histories or sequences of actions as inputs. So you will have to make some assumptions.

Various social aggregation theorems (e.g. Harsanyi's) show that "rational" people must aggregate welfare additively.

(I think this is a technical version of Thomas Kwa's comment.)
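For readers unfamiliar with the result, here is a rough, paraphrased statement of Harsanyi's aggregation theorem (treat the details with care): if each individual $i$ ranks lotteries by the expectation of a utility function $U_i$, the social observer ranks lotteries by the expectation of a utility function $W$, and the social ranking is indifferent whenever every individual is indifferent (Pareto indifference), then

$$W = \sum_i w_i U_i + c$$

for some weights $w_i$ and constant $c$; stronger Pareto conditions force the weights to be non-negative or strictly positive. In that sense, "rational" aggregation of welfare has to be additive, though the theorem is silent on how the weights are chosen.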

To answer this question briefly: it is considered a bias because it's innate. Like any other bias, scope insensitivity comes from within, in the case of an individual as well as of an organization run by individuals. We may generalize it as a product of human values because of the long-running history of constant 'self-value' teachings (not the spiritual ones). But there will always be a disparity when considering the ever-evolving nature of human values, especially in the current era.
 

--------


On the contrary, most of the time, I do consider scope insensitivity as the... (read more)

There's a lot of interesting writing about the evolutionary biology and evolutionary psychology of genetic selfishness, nepotism, and tribalism, and why human values descriptively focus on the sentient beings that are more directly relevant to our survival and  reproductive fitness -- but that doesn't mean our normative or prescriptive values should follow whatever natural selection and sexual selection programmed us to value.

2
P
Then what does scope sensitivity follow from?
1
Geoffrey Miller
Scope sensitivity, I guess, is the triumph of 'rational compassion' (as Paul Bloom talks about it in his book Against Empathy), quantitative thinking, and moral imagination, over human moral instincts that are much more focused on small-scope, tribal concerns.  But this is an empirical question in human psychology, and I don't think there's much research on it yet. (I hope to do some in the next couple of years though).
0
P
That explanation is a bit vague, I don't understand what you mean. By "quantitative thinking" do you mean something like having a textual length simplicity prior over moralities? By triumph of moral imagination do you mean somehow changing the mental representation of the world you are evaluating so that it represents better the state of the world? Why do you call it a triumph (implying it's good) over small-scope concerns? Why do you say this is an empirical question? What do you plan on testing?

Why does most AI risk research and writing focus on artificial general intelligence? Are there AI risk scenarios which involve narrow AIs?

Looking at your profile I think you have a good idea of answers already, but for the benefit of everyone else who upvoted this question looking for an answer, here's my take:

Are there AI risk scenarios which involve narrow AIs?

Yes, a notable one being military AI, i.e. autonomous weapons (there are plenty of related posts on the EA Forum). There are also multipolar failure modes: risks from multiple AI-enabled superpowers instead of a single superintelligent AGI.

Why does most AI risk research and writing focus on artificial general intelligence?

A misalign... (read more)

What happens when we create AI companions for children that are more “engaging” than humans? Would children stop making friends and prefer AI companions?
What happens when we create AI avatars of mothers that are as or more “engaging” to babies than real mothers, and people start using them to babysit? How might that affect a baby’s development?
What happens when AI becomes as good as an average judge at examining evidence, arguments, and reaching a verdict?
 

  1. "AGI" is largely an imprecisely-used initialism: when people talk about AGI, we usually don't care about generality and instead just mean about human-level AI. It's usually correct to implicitly substitute "human-level AI" for "AGI" outside of discussions of generality. (Caveat: "AGI" has some connotations of agency.)
  2. There are risk scenarios with narrow AI, including catastrophic misuse, conflict (caused or exacerbated by narrow AI), and alignment failure. On alignment failure, there are some good stories. Each of these possibilities is considered reasonab
... (read more)

What are the most ambitious EA projects that failed?

If we're encouraged to be more ambitious, it would be nice to have a very rough idea of how cost-effective ambition is itself. Essentially, I'd love to find or arrive at an intuitive/quantitative estimate of the following variables:

  • [total # of particularly 'ambitious' past EA projects[1]]
  • [total # (or value) of successfwl projects in the same reference class]

In other words, is the reason why we don't see more big wins in EA that people aren't ambitious enough, or are big wins just really unlikely? Are we bottlenecked by ambition?

For this reason, I think it could be personally[2] valuable to see a list,[3] one that tries hard to be comprehensive, of failed, successfwl, and abandoned projects. Failing that, I'd love to just hear anecdotes.

  1. ^

    Carrick Flynn's political campaign is a prototypical example. Others include CFAR, Arbital, RAISE. Other ideas include published EA-inspired books that went under the radar, papers that intended to persuade academics but failed, or even just earning-to-give-motivated failed entrepreneurs, etc.

  2. ^

    I currently seem to have a disproportionately high prior on the "hit rate" for really high ambition, just because I know some success stories (e.g. Sam Bankman-Fried), and this is despite the fact that I don't see much extreme ambition in the water generally.

  3. ^

    Such a list could also be usefwl for publicly celebrating failure and communicating that we're appreciative of people who risked trying. : )

Why hasn't there been a consensus/debate between people with contradicting views on the AGI timelines/safety topic?

I know almost nothing about ML/AI and I don't think I can form an opinion on my own, so I try to base my opinion on the opinions of more knowledgeable people that I trust and respect. However, what I find problematic is that those opinions vary dramatically, while it is not clear why those people hold their beliefs. I also don't think I have enough knowledge in the area to be able to extract that information from people myself: e.g. if I talk to a knowledgeable 'AGI soon and bad' person they would very likely convince me of their view, and the same would happen if I talk to a knowledgeable 'AGI not soon and good' person. Wouldn't it be a good idea to have debates between people with those contradicting views, figure out what the cruxes are, and write them down? I understand that some people have vested interests in one side of the question; for example, a CEO of an AI company may not gain much from such a debate and thus refuse to participate in it, but I think there are many reasonable people who would be willing to share their opinion and hear other people's arguments. Forgive me if this has already been done and I have missed it (but I would appreciate it if you can point me to it).

  1. OpenPhil has commissioned various reviews of its work, e.g. on power-seeking AI.
  2. Less formal, but there was this facebook debate between some big names in AI.

Overall, I think a) this would be cool to see more of and b) it would be a service to the community if someone collected all the existing examples together.

Not exactly what you're describing, but MIRI and other safety researchers did the MIRI conversations and also sort of debated at events. They were helpful and I would be excited about having more, but I think there are at least three obstacles to identifying cruxes:

  • Yudkowsky just has the pessimism dial set way higher than anyone else (it's not clear that this is wrong, but this makes it hard to debate whether a plan will work)
  • Often two research agendas are built in different ontologies, and this causes a lot of friction especially when researcher A's ontol
... (read more)

The debate on this subject has been ongoing between individuals who are within or adjacent to the EA/LessWrong communities (see posts that other comments have linked and other links that are sure to follow). However, these debates often are highly insular and primarily are between people who share core assumptions about:

  1. AGI being an existential risk with a high probability of occurring
  2. Extinction via AGI having a significant probability of occurring within our lifetimes (next 10-50 years)
  3. Other extinction risks (e.g pandemics or nuclear war) not likely m
... (read more)

There was a prominent debate between Eliezer Yudkowsky and Robin Hanson back in 2008 which is a part of the EA/rationalist communities' origin story, link here: https://wiki.lesswrong.com/index.php?title=The_Hanson-Yudkowsky_AI-Foom_Debate

Prediction is hard, and reading the debate from the vantage point of 14 years in the future it's clear that in many ways the science and the argument have moved on, but it's also clear that Eliezer made better predictions than Robin Hanson did, in a way that inclines me to try and learn as much of his worldview as possible so I can analyze other arguments through that frame.

2
leosn
This link could also be useful for learning how Yudkowsky & Hanson think about the issue: https://intelligence.org/ai-foom-debate Essentially, Yudkowsky is very worried about AGI ('we're dead in 20-30 years' worried) because he thinks that progress on AI overall will rapidly accelerate as AI helps us make further progress. Hanson was (is?) less worried.  

What level of existential risk would we need to achieve for existential risk reduction to no longer be seen as "important"?

What's directly relevant is not the level of existential risk, but how much we can affect it. (If existential risk was high but there was essentially nothing we could do about it, it would make sense to prioritize other issues.) Also relevant is how effectively we can do good in other ways. I'm pretty sure it costs less than 10 billion times as much (in expectation, on the margin) to save the world as to save a human life, which seems like a great deal. (I actually think it costs substantially less.) If it cost much more, x-risk reduction would be less appealing; the exact ratio depends on your moral beliefs about the future and your empirical beliefs about how big the future could be.

1
pseudonym
Thanks! Presumably both are relevant, or are you suggesting if we were at existential risk levels 50 orders of magnitude below today and it was still as cost-effective as it is today to reduce existential risk by 0.1% you'd still do it?
2
Zach Stein-Perlman
I meant risk reduction in the absolute sense, where reducing it from 50% to 49.9% or from 0.1% to 0% is a reduction of 0.1%. If x-risk was astronomically smaller, reducing it in absolute terms would presumably be much more expensive (and if not, it would only be able to absorb a tiny amount of money before risk hit zero).
2
pseudonym
I'm not sure I follow the rationale of using absolute risk reduction here: if you drop existential risk from 50% to 49.9% for 1 trillion dollars, that's less cost-effective than if you drop existential risk from 1% to 0.997% at 1 trillion dollars, even though one is a 0.1% absolute reduction and the other is a 0.003% absolute reduction. So if you're happy to do a 50% to 49.9% reduction at 1 trillion dollars, would you not be similarly happy to go from 1% to 0.997% for 1 trillion dollars? (If yes, what about 1e-50 to 9.97e-51?)
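A minimal expected-value sketch of the absolute-vs-relative point under discussion (the probabilities and the value V below are placeholders, not claims): what enters a cost-effectiveness estimate is the absolute change in probability times the value at stake, so the baseline level of risk matters only through how much absolute reduction a given budget can buy.

```python
# Expected value of a risk-reduction intervention = (absolute drop in extinction probability) * V,
# where V is the value of avoiding extinction (placeholder units).
V = 1.0

def expected_value(p_before: float, p_after: float, value_at_stake: float = V) -> float:
    return (p_before - p_after) * value_at_stake

print(expected_value(0.50, 0.499))    # ~0.001 * V
print(expected_value(0.01, 0.00997))  # ~0.00003 * V -> ~33x less expected value for the same hypothetical $1T price tag
```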

What is the strongest ethical argument you know for prioritizing AI over other cause areas? 

I'd also be very interested in the reverse of this. Is there anyone who has thought very hard about AI risk and decided to de-prioritise it?

I think Transformative AI is unusually powerful and dangerous relative to other things that can plausibly kill us or otherwise drastically affect human trajectories, and many of us believe AI doom is not inevitable. 

I think it's probably correct for EAs to focus on AI more than other things.

Other plausible contenders (some of which I've worked on) include global priorities research, biorisk mitigation, and moral circle expansion. But broadly a) I think they're less important or tractable than AI, b) many of them are entangled with AI (e.g. global priorities research that ignores AI is completely missing the most important thing).

8
Lizka
I largely agree with Linch's answer (primarily: that AI is really likely very dangerous), and want to point out a couple of relevant resources in case a reader is less familiar with some foundations for these claims:

  • The 80,000 Hours problem profile for AI is pretty good, and has lots of other useful links
  • This post is also really helpful, I think: Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover
  • More broadly, you can explore a lot of discussion on the AI risk topic page in the EA Forum Wiki

Thank you for asking this! Some fascinating replies!

A related question:

Considering other existential risks like engineered pandemics, etc., is there an ethical case for continuing to escalate AI development, despite the possibly pressing risk of unaligned AGI, in order to address/mitigate other risks, e.g. by developing better vaccines, increasing the rate of progress in climate technology research, etc.?

[I'll be assuming a consequentialist moral framework in this response, since most EAs are in fact consequentialists. I'm sure other moral systems have their own arguments for (de)prioritizing AI.]

Almost all the disputes on prioritizing AI safety are really epistemological, rather than ethical; the two big exceptions being a disagreement about how to value future persons, and one on ethics with very high numbers of people (Pascal's Mugging-adjacent situations).

I'll use the importance-tractability-neglectedness (ITN) framework to explain what I mean. The ITN... (read more)

Reasonable people think it has the most chance of killing all of us and ending future conscious life. Compared to other risks it is bigger; compared to other cause areas, it will extinguish more lives.

3
Ula Zarosa
"Reasonable people think" - this sounds like a very weak way to start an argument. Who are those people - would be the next question. So let's skip the differing to authority argument. Then we have "the most chance" - what are the probabilities and how soon in the future? Cause when we talk about deprioritizing other cause areas for the next X years, we need to have pretty good probabilities and timelines, right? So yeah, I would not consider differing to authorities a strong argument. But thanks for taking the time to reply.
4
Nathan Young
A survey of ML researchers (not necessarily AI Safety, or EA) gave the following That seems much higher than the corresponding sample in any other field I can think of. I think that an "extremely bad outcome" is  probably equivalent to 1Bn or more people dying.  Do a near majority of those who work in green technology (what feels like the right comparison class) feel that climate change has a 10% chance of 1 Bn deaths? Personally, I think there is like a 7% chance of extinction before 2050, which is waaay  higher than anything else. 
4
Howie_Lempel
FYI - subsamples of that survey were asked about this in other ways, which gave some evidence that "extremely bad outcome" was ~equivalent to extinction.

  1. ^

     Or, ‘human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species’

  2. ^

     That is, ‘future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species’
1[anonymous]
There is a big gap between killing all of us and ending future conscious life (on earth, in our galaxy, entire universe/multiverse?)
3
Nathan Young
Yes, but it's a much smaller gap than for any other cause. You're right, conscious life will probably be fine. But it might not be.

What's the best way to talk about EA in brief, casual contexts? 

Recently I've been doing EA-related writing and copyediting, which means that I've had to talk about EA a lot more to strangers and acquaintances, because 'what do you do for work?' is a very common ice-breaker question. I always feel kind of awkward and like I'm not doing the worldview justice or explaining it well. I think the heart of the awkwardness is that 'it's a movement that wants to do the most good possible/do good effectively' seems tautologous (does anyone want to do less good than possible?); and because EA is kind of a mixture of philosophy and career choice and charity evaluating and [misc], I basically find it hard to find legible concepts to hang it on.

For context, I used to be doing a PhD in Greek and Roman philosophy - not exactly the most "normal" job - and I found that way easier to explain XD

 

Related questions:
-what's the best way to talk about EA on your personal social media?
-what's the best way to talk about it if you go viral on Twitter? (this happened to me today)
-what's the best way to talk about it to your parents and older family members? 
etc. 

I think, kind of, 'templates' about how to approach these situations risk seeming manipulative and being cringey, as 'scripts' always are if you don't make them your own, but I'd really enjoy reading a post collecting advice from EA community builders, communicators, marketers, content w... (read more)

2
SeanFre
I think a collection like the one you're proposing would be an incredibly valuable resource for growing the EA community. 

Here's an excellent resource for different ways of pitching EA (and sub-parts of EA). Disclaimer - I do not know who is the owner of this remarkable document. I hope sharing it here is acceptable! As far as I know, this is a public document. 

My contingently-favorite option:

Effective Altruism is a global movement that spreads practical tools and advice about prioritizing social action to help others the most. We aim that when individuals are about to invest time or money in helping others, they will examine their options and choose the one with the hig... (read more)

2
Amber Dawn
Thanks! This looks extremely comprehensive.
2
Rona Tobolsky
This resource is also robust (and beautifully outlined).

What would convince you to start a new effective animal charity?

Has anyone produced writing on being pro-choice and placing a high value on future lives at the same time? I’d love to read about how these perspectives interact!

FYI I'm also interested in this. 
I do think it's consistent to be pro-choice and place a high value on future lives (both because people might be able to create more future lives by, e.g., working on longtermist causes than by having kids themselves, and because you can place a high value on lives but say that it is outweighed by the harm done by forcing someone to give birth). But I think that pro-natalist and non-person-affecting views do have implications for reproductive rights and the ethics of reproduction that are seldom noticed or made explicit.

Richard Chappell wrote this piece, though IMHO it doesn't really get to the heart of the tension.

I've only just stumbled upon this question and I'm not sure if you'll see this, but I wrote up some of my thoughts on the problems with the Total View  of population ethics (see "Abortion and Contraception" heading specifically). 

Personally, I think there is a tension there which does not seem to have been discussed much in the EA forum. 

Here is a good post on the side of being pro-life: https://forum.effectivealtruism.org/posts/ADuroAEX5mJMxY5sG/blind-spots-compartmentalizing

I have thought about this a lot, and I think pro-life might actually win out in terms of utility maximization if it doesn't increase existential risk.

1) What level of funding or attention (or other metrics) would longtermism or AI safety need to receive for it to no longer be considered "neglected"?

2) Does OpenPhil or other EA funders still fund OpenAI? If so, how much of this goes towards capabilities research? How is this justified if we think AI safety is a major risk for humanity? How much EA money is going into capabilities research generally?

(This seems like something that would have been discussed a fair amount, but I would love a distillation of the major cruxes/considerations, as well as what would need to change for OpenAI to be no longer worth funding in future).

  1. See here. (Separating importance and neglectedness is often not useful; just thinking about cost-effectiveness is often better; see the decomposition sketched after this list.)
  2. No.
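For readers less familiar with the ITN framework, the point in (1) can be seen from the usual decomposition (roughly the 80,000 Hours formulation, sketched from memory), in which the three factors multiply out to cost-effectiveness because the intermediate terms cancel:

$$\underbrace{\frac{\text{good done}}{\text{extra resources}}}_{\text{cost-effectiveness}} = \underbrace{\frac{\text{good done}}{\%\text{ of problem solved}}}_{\text{importance}} \times \underbrace{\frac{\%\text{ of problem solved}}{\%\text{ increase in resources}}}_{\text{tractability}} \times \underbrace{\frac{\%\text{ increase in resources}}{\text{extra resources}}}_{\text{neglectedness}}$$

Splitting the product into three factors is just one way to estimate the left-hand side, which is why it is often fine to reason about cost-effectiveness directly.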
1
pseudonym
Thanks! This makes sense. In my head AI safety feels like a cause area that can just have room for a lot of funding etc, but unlike nuclear war or engineered pandemics which seem to have clearer milestones for success, I don't know what this looks like in the AI safety space. I'm imagining a hypothetical scenario where AI safety is overprioritized by EAs, and wondering if or how we will discover this and respond appropriately.

I’ve asked this question on the forum before and got no reply, but do the people doing grant evaluations consult experts when making their choices? Like, do global development grant-makers consult economists before giving grants? Or are these grant-makers just supposed to have up-to-date knowledge of research in the field?

I’m confused about the relationship between traditional topic expertise (usually attributed to academics) and EA cause evaluation.

[My impression. I haven't worked on grantmaking for a long time.] I think this depends on the topic, size of the grant, technicality of the grant, etc. Some grantmakers are themselves experts. Some grantmakers have experts in house. For technical/complicated grants, I think non-expert grantmakers will usually talk to at least some experts before pulling the trigger but it depends on how clearcut the case for the grant is, how big the grant is, etc.

'If I take EA thinking, ethics, and cause areas more seriously from now on, how can I cope with the guilt and shame of having been so ethically misguided in my previous life?'

or, another way to put this:

'I worry that if I learn more about animal welfare, global poverty, and existential risks, then all of my previous meat-eating, consumerist status-seeking, and political virtue-signaling will make me feel like a bad person'

(This is a common 'pain point' among students when I teach my 'Psychology of Effective Altruism' class)

I might be missing the part of my brain that makes these concerns make sense, but this would roughly be my answer: Imagine that you and everyone in your household consume water with lead in it every day. You have the chance to learn whether there is lead in the water. If you learn that there is, you'll feel very bad, but also you'll be able to change your source of water going forward. If you learn that there is not, you'll no longer have this nagging doubt about the water quality. I think learning about EA is kind of like this. It will be right or wrong to eat animals regardless of whether you think about it, but only if you learn about it can you change for the better. The only truly shameful stance, at least to me, is to intentionally put your head in the sand.

My secondary approach would be to say that you can't change your past but you can change your future. There is no use feeling guilt and shame about past mistakes if you've already fixed them going forward. Focus your time and attention on what you can control.

My two cents: I view EA as supererogatory, so I don't feel bad about my previous lack of donations, but feel good about my current giving.

Changing the "moral baseline" does not really change decisions: seeing "not donating" as bad and "donating" as neutral leads to the same choices as seeing "not donating" as neutral and "donating" as good.

4
Geoffrey Miller
In principle, changing the moral baseline shouldn't change decisions -- if we were fully rational utility maximizers. But for typical humans with human psychology, moral baselines matter greatly, in terms of social signaling, self-signaling, self-esteem, self-image, mental health, etc.
5
Lorenzo Buonanno🔸
I agree! That's why I'm happy that I can set it wherever it helps me the most in practice (e.g. makes me feel the "optimal" amount of guilt, potentially 0)

Meta:

  1. Seems like a more complicated question than [I could] solve with a comment
  2. Seems like something I'd try doing one on one, talking with (and/or about) a real person with a specific worry, before trying to solve it "at scale" for an entire class
  3. I assume my understanding of the problem from these few lines will be wrong and my advice (which I still will write) will be misguided
  4. Maybe record a lesson for us and we can watch it?

Tools I like, from the CFAR handbook, which I'd consider using for this situation:

  1. IDC (maybe listen to that part afraid you'll think
... (read more)
5
Geoffrey Miller
Yanatan -- I like your homunculus-waking-up thought experiment. It might not resonate with all students, but everybody's seen The Matrix, so it'll probably resonate with many.

If you haven't come across it, a lot of EAs have found Nate Soares' Replacing Guilt series useful for this. (I personally didn't click with it but have lots of friends who did).

I like the way some of Joe Carlsmith's essays touch on this. 

4
Howie_Lempel
A much narrower recommendation for nearby problems is Overcoming Perfectionism (~a CBT workbook). I'd recommend it to some EAs who are already struggling with these feelings (and I know some who've really benefitted from it). (It's not precisely aimed at this but I think it can be repurposed for a subset of people.) Wouldn't recommend it to students recently exposed to EA who are worried about these feelings in future.

What has helped me most is this quote from Seneca:

Even this, the fact that it [the mind] perceives the failings it was unaware of in itself before, is evidence for a change for the better in one's character.

That helped me feel a lot better about finding unnoticed flaws and problems in myself, which always felt like a step backwards before. 

I also sometimes tell myself a slightly shortened Litany of Gendlin:

What is true is already so.
Owning up to it doesn't make it worse.
Not being open about it doesn't make it go away.
People can stand what is true,
for ... (read more)

My personal approach:

  • I no longer think of myself as "a good person" or "a bad person", which may have something to do with my leaning towards moral anti-realism. I recognize that I did bad things in the past and even now, but refuse to label myself "morally bad" because of them; similarly, I refuse to label myself "morally good" because of my good deeds. 
    • Despite this, sometimes I still feel like I'm a bad person. When this happens, I tell myself: "I may have been a bad person, so what? Nobody should stop me from doing good, even if I'm the worst perso
... (read more)

I think You Don't Need To Justify Everything is a somewhat less related post (than others that have been shared in this thread already) that is nevertheless on point (and great).

I think it's okay to feel guilt, shame, remorse, rage, or even hopelessness about our past "mistakes". These are normal emotions, and we can't, or rather shouldn't, purposely avoid or even bury them. It's analogous to someone being dumped by a beloved partner and feeling like the whole world is crumbling. No matter how much we try to comfort such a person, he or she will feel heartbroken.

In fact, feeling bad about our past is a great sign of personal development because it means we realize our mistakes! We can't improve ourselves if we don't even know what we d... (read more)

4
Geoffrey Miller
Kiu -- I agree. It reminds me of the old quote from Rabbi Nachman of Breslov (1772-1810):  “If you won’t be better tomorrow than you were today, then what do you need tomorrow for?” https://en.wikipedia.org/wiki/Nachman_of_Breslov

Does anyone have a good list of books related to existential and global catastrophic risk? This doesn't have to just include books on X-risk / GCRs in general, but can also include books on individual catastrophic events, such as nuclear war. 

Here is my current resource landscape (these are books that I have personally looked at and can vouch for; the entries came to my mind as I wrote them - I do not have a list of GCR / X-risk books at the moment; I have not read some of them in full): 

General: