
James Faville and I think it would be valuable for people to get feedback on posts they are planning to write, in particular to get an idea of what others would be most excited to read.

We think this will accomplish a few things:

1. Encourage people to publish the posts

2. Help them prioritize between post ideas based on community feedback

3. Get directed to useful readings/resources

4. (For everyone) Get a sense of what the community is working on

Edit: If you'd like community feedback on a post, there is an EA Editing and Review Facebook group.



"The EA Doldrums: Drifting for no good reason"

A piece exploring why it took me so long to go from "leader of moderately successful student group" to "actually applying for jobs in EA", and speculating that there may be a lot of other people who aren't aware of how qualified they actually are for direct work (with reference to at least one more anecdotal example of someone who was in the "doldrums" for a while). Includes thoughts on what kinds of prompting might actually get people in these positions to take EA jobs seriously.

I feel I should note that there is an opposite problem happening as well. Robert Wiblin once wrote:

It's a problem for 80,000 Hours that people range from wildly overconfident in themselves to wildly under-confident in themselves. The extent of people's inaccurate self-assessments has surprised me and might surprise you too.

As a result, almost anything we say to help people figure out whether they can plausibly pursue a given career path will still lead to some combination of confident but unsuitable people pushing ahead, and under-confident but suitable people not even bothering to try. Both of these are significant costs.

The ideal is to give objective measures like test scores, but i) many roles have no such clear entry criteria, ii) even those that do usually also require some softer skills that are harder to measure, iii) most people won't have done the test, so we're back to people's guesses about how well they would do, and iv) some people have such strong positive and negative convictions about themselves even this wouldn't help.

Anyway, the bottom line is that if you could all go and achieve perfect self-knowledge it would make my job slightly easier, thank you.

There are certainly people on both ends of the (confidence / ability) spectrum. I suspect that "skilled people deciding not to try entering EA work" is a bigger problem than "people trying to push ahead when they shouldn't".

Reasoning:

  • From an individual's perspective, "wasting time trying to enter a field" seems much less costly than "missing your chance to enter a field where you'd have had a much higher impact than you did otherwise".
  • From an org's perspective, it's much more costly to miss out on a great employee than to say "no" to one more person.

But there are a lot of other ways you could look at the issue, and this is just my first impression.

9
Stefan_Schubert
Generally, I would expect more people to overestimate themselves (illusory superiority) than underestimate themselves. I also expect that there is a social desirability bias at play here: it's more socially acceptable to point out that people underestimate themselves, than that they overestimate themselves.

Did you ever write this? I'd love to read it.

Unsolicited advice-seeking (respond to all, some, or none, as your schedule and interests permit): Is being the "leader of a moderately successful student group" in itself a useful qualification for getting EA jobs? And if so, where do you find openings where it's relevant? (I'm the leader of a moderately successful student group! :D) I just finished a bachelors in economics and my very preliminary search of EA-adjacent job postings has turned up a lot of opportunities for grad students,... (read more)

3
Aaron Gertler 🔸
Didn't write it, but have two-thirds of a draft lying around to finish someday. Leading a group is a good signal, but for most jobs, I think other qualifications will also be important (though these could include "having a strong application and doing well on work tests"). If you're trying to do something that makes use of your econ knowledge (rather than your ops/organizing ability or general research skills), competing with PhDs will be tough. I'm an unusual case, because I went to a one-off retreat for people interested in ops work at a time lots of orgs were hiring at once -- it was a bit like a "job fair". Had I not gone there, I'd have just kept checking the 80K job board, the "Effective Altruism Job Postings" Facebook group, and the websites of a few orgs I liked (if I'd seen that their jobs weren't being added to the board).
3
Linch
FWIW, I think tutoring EAs can be a valuable intervention, though it maybe won't ever be big enough for an org (or possibly even a single person) to work on full-time.
2
JP Addison🔸
Now on a massive tangent, but maybe you could offer to subsidize people buying tutoring from Wyzant?

"Examples of good EA hiring practices":

A list of good things I've seen various EA orgs do in their hiring processes (in the process of applying to at least seven of them). Meant as inspiration for other organizations; I'd hope that it would get lots of additional material from commenters who have also applied for EA jobs.

(in no particular order)

1. The application of social movement theory to EA group building

a. The tensions between a member-organising movement (grassroots) and a centrally organised (top down) movement (early draft)

b. Historical case studies of movement building to learn from (brainstorming - environmental movement)

2. Ideas to improve the presence of EA in developing countries and non-EA Hubs (editing stage)

3. Climate Change and EA

a. A research agenda for EA and climate change (early draft)

b. How to make room for climate change research in the EA movement (editing stage)

4. Career Change Resources in the EA Community Research project (research stage)

Wow, they all sound so fascinating!

Checking back on this thread now that everyone's spending more time cooped up inside :-/

Have you made progress on any of these ideas? I'd be happy to help!

1
Vaidehi Agarwalla 🔸
Thanks for checking in, Aaron! I've been meaning to update this thread.

1a) I came very close to publishing this in November, but realised it needed a lot more work to be readable and ended up splitting the post into 3. I've been prioritising other projects; I aim to publish by April 2020.

1b) I have a bunch of interesting papers collected but haven't made progress yet. Will likely start after 1a).

2) I wrote this but never published it, because:
  • I think it was too generalized and overly simplistic
  • I think some of the things I wrote were likely wrong/inaccurate
  • I felt the most effective way to help developing EA presence was assisting existing projects and direct work.
Why writing the post was still valuable:
  • It helped me clarify my own theories of movement building
  • I ended up writing a few other posts to explain some of my assumptions
  • I've shared it with others trying to answer these questions

3a) This became a much more ambitious and comprehensive volunteer project, but that also means progress has been slow and incremental. I plan on writing a post about how the project failed and lessons learnt (but I'm experimenting with some new ways to make progress on this and want to see the results first).

3b) This post is written, but I didn't see the value of posting another call for climate change work on the Forum since, as with 2), I updated towards doing direct work to make progress in this space. (I'd be curious to hear if you think there's still value in posting such a post.) We now have an Effective Environmentalism directory and have started weekly calls on different EE-related topics on Facebook. Would be curious to hear your thoughts on this. I also created an (almost) comprehensive Effective Environmentalism Resources page, and some of us are now working on a more user-friendly introductory resource for non-EAs.
2
MichaelA
My two cents: I can understand why you'd want to not post 2, if you believe it had those issues. But it seems like, if 3b is already written, it might as well be posted, unless you think it's fundamentally mistaken. If you just think that EA climate change research is a less valuable approach than you used to think, then maybe you could slap some extra caveats and updates at the top. It could still potentially serve as some useful thoughts for people who do pursue that approach, or serve as an explanation of why you think that approach isn't that valuable, or that sort of thing. I'm not personally very focused on climate change, and don't think I'd personally read the post. But I have a general sense that posts that are just "maybe not very novel or useful" still might as well be posted, once the effort has gone into writing them. It seems like they may at least be appreciated in some way by some niche audience, or suggest to others that that topic isn't worth them writing about. And the worst-case scenario is usually just that they don't get read much, or slightly waste a few people's time. This doesn't apply to posts that are so incorrect they'd leave people with worse beliefs, or posts that pose information hazards, but it didn't sound like you thought those things were true of 3b?

I'm looking forward to 3a and 3b!

2
Milan_Griffes
See also:
  • Climate change, geoengineering, and existential risk
  • Climate Change Is, In General, Not An Existential Risk
  • Founders Pledge report on climate change

"Health and happiness: some open research topics"

This has been 90% complete for >6 months but finishing it has never seemed the top priority. The draft summary is below, and I can share the drafts with interested people, e.g. those looking for a thesis topic.

Summary

While studying health economics and working on the 2019 Global Happiness and Wellbeing Policy Report, I accumulated a list of research gaps within these fields. Most are related to the use of subjective wellbeing (SWB) as the measure of utility in the evaluation of health interventions and the quantification of the burden of disease, but many are relevant to cause prioritisation more generally.

This series of posts outlines some of these topics, and discusses ways they could be tackled. Some of them could potentially be addressed by non-profits, but the majority are probably a better fit for academia. In particular, many would be suitable for undergraduate or master's theses in health economics, public health, psychology and maybe straight economics – and some could easily fill up an entire PhD, or even constitute a new research programme.

The topics are divided into three broad themes, each of which receives its own post.

Part 1: Theory

The first part focuses on three fundamental issues that must be addressed before the quality-adjusted life-year (QALY) and the disability-adjusted life-year (DALY) can be derived from SWB measures, which would effectively create a wellbeing-adjusted life-year (WELBY).

Topic 1: Reweighting the QALY and DALY using SWB

Topic 2: Anchoring SWB measures to the QALY/DALY scale

Topic 3: Valuing states 'worse than dead’

Part 2: Application

Assuming the technical and theoretical hurdles can be overcome, this section considers four potential applications of a WELBY-style metric.

Topic 4: Re-estimating the global burden of disease based on SWB

Topic 5: Re-estimating disease control priorities based on SWB

Topic 6: Estimating SWB-based cost-effectiveness thresholds

Topic 7: Comparing human and animal wellbeing

Parts 1 and 2 include a brief assessment of each topic in terms of importance, tractability and neglectedness. I'm pretty sceptical of the ITN framework, especially as applied to solutions rather than problems, and I haven't tried to give numerical scores to each criterion, but I found it useful for highlighting caveats. Overall, I'm fairly confident that these topics are neglected, but I'm not making any great claims about their tractability, importance or overall priority relative to other areas of global health/development, let alone compared to issues in other cause areas. It would take much more time than I have at the moment to make that kind of judgement.

Part 3: Challenges

The final section highlights some additional questions that require answering before the case for a wellbeing approach can be considered proven. These are not discussed in as much detail and no ITN assessment is provided (the Roman numerals reinforce their distinction from the main topics addressed in Parts 1 and 2).

(i) Don’t QALYs and DALYs have to be derived from preferences?

(ii) In any case, shouldn’t we focus on improving preference-based methods?

(iii) Should the priority be reforming the QALY rather than the DALY?

(iv) Are answers to SWB questions really interpersonally comparable?

(v) Which SWB self-report measure is best?

(vi) Whose wellbeing is actually measured by self-reported SWB scales?

(vii) Whose wellbeing should be measured?

(viii) How feasible is it to obtain the required data?

(ix) Are more objective measures of SWB viable yet?

Part 3 also concludes the series by considering the general pros and cons of working on outcome metrics.

I know it's been a while since you posted this, but if you still hope to post it someday, and if there's anything I can do to help with the last 10%, please let me know!

(With everyone cooped up inside, I figured this might be a good chance for folks to get to the writing projects they thought they'd never have time for, though of course not everyone has become less busy as a result of the pandemic.)

4
Derek
Hah! I was working on them before getting sidelined with covid stuff. I can send you the drafts if you send me a PM. The content is >80% done (I've decided to add more, so the % complete has dropped) but they need reorganising into ~10 manageable posts rather than 3 massive ones.

These are important topics IMO.

A sequence on moral anti-realism and its implications

I published the first post "What is moral realism?" last year and have about five half-finished drafts stored somewhere, but then I got sidetracked massively. Tentative titles were:

1. What is moral realism? [published]

2. Against irreducible normativity

3. Is there a wager for moral realism?

4. Metaethical fanaticism (dialogue about the strange implications of an infinite "moral realism wager")

5. [Untitled – something about "People aren't born consequentialists; people live their lives in different modes; vocations are not just discovered but also chosen"]

6. Introspection-based moral realism

7. Why I'm a moral anti-realist (sequence summary)

8. Anti-realism is not nihilistic

9. Anti-realism: What changes?

  • Less bullet biting?
  • Treating peer disagreements about values differently
  • Moral uncertainty vs. moral underdetermination

I might find some time later this year to finish more of the posts, but I'm not sure I still want to do the entire sequence. I considered just skipping to posts 7.-9. because that used to be my original plan, but then the project somehow took on a much larger scale. I'd be curious to what degree there's interest in the following topics:

(a) What are the arguments against (various angles of) moral realism?

(b) What is it that people are even doing when they do moral philosophy?

(c) What do anti-realists think they're doing; why do they care?

(d) Implications for moral reasoning if anti-realism is correct


What's the status of this project? Even if you no longer plan to publish most of these posts, I suspect that some people would be interested in seeing even very rough versions of the material, and I'd be happy to look over anything you weren't sure about posting!

6
Lukas_Gloor
I started working on them in December. The virus infected my attention, but I'm back working on the posts now. I have two new ones fully finished. I will publish them once I have four new ones. (If anyone is particularly curious about the topic and would like to give feedback on drafts, feel free to get in touch!)
2
MichaelA
Great to hear you're still planning to write these! I currently assign very high credence to anti-realism, but:
  • I don't really know what I mean by that
  • I (at least believe I) basically act as if moral realism is true, due to:
    • "wager"-style reasoning (but I don't know if it makes sense to do that)
    • not feeling I get why to care if anti-realism is "correct"
  • I don't really know if I'd actually act differently if I decided to "act as if an antirealist"
So all the tentative titles and four topics you listed sound very interesting to me, and like things I've wanted to write about but doubt I'll get around to (partly because I lack the relevant background).

I included links to my working drafts to help you understand the projects better, but please keep in mind that they contain statements that I may change my mind about after further research or contemplation. Also, they are not very tidy.

Year-by-year analysis of corporate campaigns (~50% done, draft)

This is basically an appendix to my cost-effectiveness estimate of corporate cage-free and broiler campaigns. It will contain graphs showing how many animals were affected by campaigns each year, how cost-effectiveness has changed, and why we shouldn't overreact to the analysis.

Numbers of animals slaughtered (~40% done, draft)

A collection of estimates of how many animals are kept in captivity for various purposes. E.g., meat, fur, wool, experiments, zoos, fish stocking, silk, etc.

Numbers of wild animals affected by humans in various ways (~30% done, draft)

Another collection of estimates. E.g. how many wild fish we catch, how many animals are killed by domestic cats, how many birds die after colliding with man-made objects, etc.

Surveys about veg*ism in the U.S. (not started)

I previously examined surveys about veganism and vegetarianism in the U.S. here. Results were conflicting. Now I want to conduct my own surveys to try to figure out what's happening. This SSC post provides a hypothesis about why 2-6% of people claim to be vegetarians in surveys, but then >60% of them report eating meat on at least one of the two days for which they were asked to fill out a dietary recall survey. I want to test it by seeing how many people will claim that they eat a breatharian diet (eat no solids at all). I think that ~3% of people will claim that they do, because they answer questions without reading, or purposefully answer incorrectly, or misunderstand the question. This would explain why surveys that simply ask people "Are you a vegan?" find such unreasonably high percentages. I also want to test other survey designs in a similar way and then make a better survey on the subject.
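To make the arithmetic behind that hypothesis concrete, here is a minimal sketch; the specific rates are illustrative assumptions, not survey data:

```python
# Toy model of the "careless responder" hypothesis for vegetarianism surveys.
# All rates are illustrative assumptions, not real survey estimates.

true_veg_rate = 0.01   # assumed share of genuinely vegetarian respondents
noise_rate = 0.03      # assumed share who answer carelessly or incorrectly
                       # (the "claims to be breatharian" level)

# Share of respondents who *claim* to be vegetarian:
claimed_veg = true_veg_rate + noise_rate  # 0.04 -> within the 2-6% range seen in surveys

# Among claimed vegetarians, the careless ones presumably eat meat as usual:
meat_eating_share = noise_rate / claimed_veg  # 0.75 -> consistent with the >60% figure

print(f"Claimed vegetarians: {claimed_veg:.0%}")
print(f"Claimed vegetarians who also report eating meat: {meat_eating_share:.0%}")
```

If the breatharian control question really does pick up ~3% "yes" answers, that noise level alone would roughly reconcile the survey and dietary-recall numbers.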

Trends of vegetarianism and veganism in the UK (not started)

Similar to what I wrote for the U.S. (link) but for the UK. I want to see if there will be similar patterns.

Relatedly, I've put some of my posts that I decided are not good enough for the EA Forum on a WordPress site here (I've never advertised this website before).

7
Aaron Gertler 🔸
I strongly recommend you add more of these posts to the Forum -- in particular, I really like the post on ways that cost-effectiveness estimates can be misleading.
4
saulius
Thanks. I think I'm afraid to publish posts if I'm unsure they are good/useful. But I will consider publishing some of these, especially ways that cost-effectiveness estimates can be misleading.
3
MichaelA
[8 months on] ...well, that went very well, haha. I believe it's now got the 8th most karma on the forum. Has this updated you to being more willing to post on the forum? Also, for ones that you're still not sure are worth posting, have you considered posting them as shortforms?
2
saulius
Yes, it made me a bit more willing to post here. But I put another week of work into that post before publishing. And I worked 2 more days on that post that I posted a couple of days ago which is also from my blog. I'm sure that some other posts from that blog are worth publishing after I put more work into them but I'm unsure if this is what I should be spending my time on. E.g., I don't want to post Cost-effectiveness of trap-neuter-return programs for cats on the EA forum without doing more to make sure it's correct (e.g. reading recent related research by other EAs). I'm unsure if I want to post Should you donate to a fund-raising meta-charity? without looking into the current situation of these charities (e.g. if there is room for more funding) and just generally thinking more about the topic. I guess it would be fine to still post it with a disclaimer but I would be afraid of giving people the wrong advice and also hurting my credibility. And I don't think posting it on the shortform would make much impact but I'd still care about saying the right things so I don’t want to bother with that.
4
Milan_Griffes
cf. The Optimizer's Curse & Wrong-Way Reductions
2
Aaron Gertler 🔸
I found saulius' post useful in different ways than Chris Smith's. I especially like that it covers mistakes that seem more "basic" and easier to avoid/correct for. But "The Optimizer's Curse" is also worth looking at.
5
[anonymous]
I just skimmed some of the recent posts on your website and liked them! What makes you think that they're not good enough to be posted here? They definitely seem less comprehensive than some of your (very comprehensive) posts here, but still more than good enough to post here.

It's been cool to see some of these go up on the Forum since you posted this!

I'd be interested to see the veg*ism survey if you still think you might work on it at some point. And of course, I'm happy to look over drafts of anything you write if you want feedback.

Both of these posts sound great! I would especially like to see the second one, because there is a lot of outward emphasis on being successful and doing things that signal success (like attending Ivy League schools).

1
[anonymous]
[Deleted]
4
Vaidehi Agarwalla 🔸
I agree with khorton - it really depends on your goal with the post. If you want to offer support to others who feel the same way, a feelings post is good (including specific examples would be great, and you can point at broader issues without needing to explicitly research them). If you want to make a broader point, you could even just put those thoughts in a question and encourage people to share their experiences (I would love to see this!). Then it could be an informal resource for others feeling that way, and might give you some ideas if you (or someone else) want to write the comprehensive version of the post.
3
Kirsten
You could ask it as a question if your response would be ~3 paragraphs. I think that would work well, but I'm not sure if that would give you enough space to express your feelings.
1
Milan_Griffes
+1

Thanks for collating these "criticism of EA" posts.


is that EAs are generally too eager to read and upvote any nicely written criticism by an intelligent person that sounds non-threatening enough.

Reminds me a bit of sealioning, though I think what you're pointing to is not exactly that.

6
[anonymous]
[Deleted]
3
Eevee🔹
Concern trolling?

Is this user's account now deactivated (in case someone reads these comments and knows)? It would be a shame if that person actually did not feel accepted and therefore left. One idea I had when reading this is that EAs might want to connect over things other than EA. For example, hobbies, sports, etc. might be a way for people to connect in EA across "status".

1. "Survey of arguments for focusing on suffering reduction"
-I'm particularly interested in arguments from and for the nonexistence of positive mental states.

2."The case for studying abroad at Oxford"
-Argue, based on personal experience, that students across the world who are interested in EA should seriously consider studying abroad at Oxford and provide advice on how to make the most of that experience.

3."The case for recruiting for AI safety research in Brazil"
-Lay out the reasons for thinking Brazil is a low hanging fruit for recruiting in AI safety research

I'm especially curious about (2) if you include "spending time in the city of Oxford" and not just "getting into Oxford" (which, as noted below, is hard). I've been looking for posts about what it's like to be part of EA culture in the cities where it is most present (I now live in one of those, but I'm guessing that Oxford differs from Berkeley in many ways).

Re: 2. I hope you're not going to ignore that it is really hard to get into Oxford? There's also the general tendency in EA to glorify Ivy League education, which makes a lot of people feel inadequate/excluded.

I would be really interested in hearing the case for 3)!

"List of public donation logs":

A list of people who have made their donations public. Meant as inspiration for people who might consider doing the same, or information for people who want more perspective on causes they might consider supporting.

Is it a list of blog posts that explain why people made the donations they made? Or just a list of donors and their donations similar to this?

3
Aaron Gertler 🔸
Closer to Vipul's list. I've spoken to him already as I drafted the idea, and I think it would be helpful to have a more focused list of specifically people who've created their own web pages/spreadsheets to share the information. My goal is to use the post to show people that it isn't totally unusual to make these things public, and to nudge people closer to making donations public if they were interested but worried about seeming "weird". Part of that is showing others doing it, and part of it is showing different strategies for making these disclosures.
1
MichaelA
1) Data point: Until reading these comments just now, I'd seen that some people had spreadsheets/webpages like these, and I think I vaguely felt that that was good, but I also think I simply hadn't even considered for a second the idea of doing so myself. (I'd considered writing blogposts about specific or annual planned or prior donations, where I also discussed some of my rationale, but hadn't considered a comprehensive, public spreadsheet/webpage.) I'm now very likely to do this, as a result of these comments.

2) Do you still plan to make a post collecting these lists?

3) Do you think it would be possible and/or good for there to just be a button on the EA Pledge dashboard for people to opt into making their reported donations from there publicly visible? This may increase the number of people who do this, as it'd be easier and might seem a bit closer to "sort-of a default" than "strange thing these 3 people somewhere have done". I guess one downside would be that, if that button was displayed prominently, it could make the dashboard as a whole seem "weird".

4) Somewhat separately, I like Claire Zabel's statement that: (I tried to contribute to this norm with this recent post.) Do you have any thoughts on how that spreadsheet/webpage approach (or a post about it) could also contribute to or tie in with that norm?

(A fair response to 3 and 4 could be "Hey Michael, why don't you spend at least 2 minutes of your own damn time thinking about it?" :D)
3
Aaron Gertler 🔸
1. Wonderful!

2. Yes, I do plan to do this at some point -- in fact, I've added it as something to do this week thanks to your comment. Thanks for the push!

3. That's an interesting idea. I'll pass it along to CEA's tech team, though I'd guess it wouldn't be something that would happen soon (no guaranteed demand, unlikely to increase people's use of the platform, some risk that people accidentally expose sensitive information).

4. I'm a fan of Claire's suggestion. Not likely to do it myself, because my reasons for donating are pretty quirky and difficult to explain, but I've liked all the posts of this kind that I've seen from others on the Forum.

I'm going to list my answers separately for easier upvoting/commentary.

"Effective Altruism 2050: The Grand Story", which explores how people might think about EA in the future, and especially how "credit" might be allocated for whatever we've accomplished.

The thesis of the piece is that most of our current concerns about which kinds of work are high-status or not may fade away over time, to be replaced by a general sense that everyone who did EA-adjacent things was part of the same "story", trying to do their best under conditions of extreme uncertainty.

Hmm... I would like to see this with caveats or something: EA is far from sure of success, and there are a number of failure modes I can imagine. The risk of this article might be that it would paint an overly optimistic picture of EA. Although I would love to see the description of a best-case scenario!

7
Aaron Gertler 🔸
It seems more likely than not (at least to me) that EA will make only a small dent in history, if it is remembered at all. The post explores what might happen in the timelines where we succeed.
1
SiebeRozendal
Alright, that seems cool! I look forward to it. I think plenty of people have dreamed of a best-case scenario, but it's definitely good to write that up :)

I would be really interested in seeing this written up. I have many thoughts related to the idea of getting credit (probably not directly related to your post).

I have been thinking a lot about how much of a role high status plays in influencing people's decisions, and whether this is always a good thing. For example, many things are highly uncertain, but with ones endorsed by the community a person might get a sense of security that, even if this doesn't pan out, they have the support of the community that they did the best thing. Whereas another cause or ... (read more)

"EAs within non-EA charities"

A post to explore the following, putting a lot more detailed thought into how it could work, based on my own professional experience of trying to do this for a few months or so...

I work in a large charity in the UK and although I think the work we do is important, it doesn't fit into the highly valuable cause areas commonly accepted by the EA community.

Still, there are lots of reasons that someone like me might continue to work in a less effective job. For example:

  • It's a good employer in your area and you need to stay living around there for caring/family reasons
  • You're building up your skills in an early or new career position
  • You've worked there for ages and only recently discovered EA principles

So, skipping past the "go work on a more effective cause" answer: what can people who support EA ideas do in a non-EA charity?

I think there might be crossover with the kind of recommendations you might give to someone working in government, especially when you consider how bound up a lot of UK charities are with public work (Alzheimer's Society, Citizens Advice, Church of England, Trussell Trust).

Apart from that I would have thought you could bring over EA principles and play a sort of activist role to make a positive impact when it comes to:

  • prioritising research and product development
  • raising awareness of good impact based decision making within the organisation
  • encouraging a more enlightened view of career development within the organisation
  • sharing and collaborating more generously with the wider social sector
  • in the case of large organisations, doing more to shape the market in terms of what funders aim for when they award grants or commission work

That's all I've got for now but I've actually been able to put some of this into effect, in a fairly modest way, where I work. I wondered if this seems like an interesting topic to explore in more detail?

In particular, assuming that there are people who will stay in a non-EA role but still have some capacity and interest in doing a bit more good by using EA principles, what are the methods/tools/guidelines they can use?

I'm enthusiastic about seeing EAs do good work in a variety of fields, including those unrelated to standard EA cause areas. I'd be really interested to see you work on this post, and I'd be happy to read over a draft if you want feedback before you publish.

PSA: the EA Editing and Review Facebook group is intended for this use case. It has 650 members; feedback on posted drafts is generally good.

Thanks! Edited the post to include a link to the group.

"My EA Origin Story":

An attempt to answer the question "why did I become part of the EA movement" in excruciating detail. Would examine every factor I can think of, from the circumstances of my birth to movies I liked as a teenager to the specific set of classes I took in my freshman year of college.

The goal: Get other people to think about what really got them into EA -- not just what happened right before the transition, but all the factors that led to their being ready to accept the ideas. I'd hope to see other people write similar stories (maybe in less detail) after reading mine.

Have you seen this post? It seems to have done something very similar to what you proposed.

"Possible Edge Cases in Dietary Effects on Animal Welfare"

When I do consume meat, it's 'humanely raised' (grass-fed etc. etc.) or wild-caught. I think the state of the art on the ethics and evidence around these food sources (vs. plausible substitutes) is muddy, and I want to publish my thoughts so someone can help me see things more clearly.

I would personally find this very useful!

Thinking of writing a shallow cause profile on lobbying for country-to-country debt relief

I'd be really interested to see this! It's one of those causes that pops up from time to time in writing by EA-adjacent organizations, but I don't have a sense for what the core numbers even look like (e.g. what debt relief allows countries to accomplish that isn't feasible without debt relief, what the actual cost of relief is to countries that hold debt).

4
Kirsten
Thanks for commenting! I actually forgot I was meaning to do this... Maybe I'll find some time over the next few weeks!

In case this data point is useful when thinking about what knowledge/views some readers may come to the table with: Pretty much all I currently know about debt relief is some half-remembered arguments from The Dictator's Handbook for why debt relief might be actively bad.

(Not saying these arguments are correct. Also not sure if "country-to-country debt relief" differs in important ways from the type of "debt relief" which that book critiqued.)

1. Framing issues with the unilateralist's curse.

I'd like to expand this shortform comment into a more detailed post with slightly better examples, some tentative conclusions, and a clear takeaway for what types of future research would be desirable.

2. A Post on Power Law distributions

Two possible posts here:

A. Power Law Distributions? It's less likely than you think.

a. Basically, lots of EAs argue that the distribution over {charitable organizations, interventions, people, causes} is ~power law.

b. I claim that this is unlikely. The distributions over most things that matter seem to be heavy-tailed, but less extreme than a power law (see the rough numerical sketch below).

c. outline here: https://docs.google.com/document/d/17n27ygtUloGrFGqJyOV0Q-yUdGrK5HQoEI-de8lXTy0/edit

d. Unfortunately, understanding this well involves some mathematical machinery and a lot of real-world stats, which has been somewhat hard for me to make progress on (happy to hand it off to somebody else!)

B. What to do if we live in a power law world

The alternative post is to argue that if we were to take the power law hypothesis about EA-relevant things seriously, we should change our actions dramatically in key ways. I think it might be helpful to start a conversation about this.
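To illustrate what 2A(b) is claiming, here is a rough numerical sketch; the distributions and parameters are arbitrary choices for illustration, not estimates of any real EA-relevant quantity:

```python
# Compare tail concentration of a lognormal (heavy-tailed, but not a power law)
# with a Pareto (power-law) distribution. Parameters are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

lognormal = rng.lognormal(mean=0.0, sigma=2.0, size=n)  # heavy-tailed, all moments finite
pareto = rng.pareto(a=1.1, size=n) + 1.0                 # power law with tail index ~1.1

for name, x in [("lognormal", lognormal), ("pareto", pareto)]:
    top_1_percent_share = np.sort(x)[-n // 100:].sum() / x.sum()
    print(f"{name:>9}: top 1% of draws holds {top_1_percent_share:.0%} of the total")

# Under a low-tail-index power law the top 1% can dominate the total, whereas under a
# lognormal the concentration is large but noticeably less extreme. Which regime the
# distribution over {orgs, interventions, people, causes} is actually in affects how
# much effort should go into hunting for extreme outliers.
```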

3. Thoughts on South Bay EA

I cofounded and co-organized South Bay EA, and had a pretty comprehensive write-up about what futures we should be planning for. My co-organizers and I are still debating whether to anonymize and share the write-up to benefit future organizers.

4. EA SF tentative plan

Similarly, I've vaguely been thinking of having a public write-up about plans for EA San Francisco so it's easier to a) get feedback through external criticism and b) find collaborators/potential co-organizers online rather than entirely through my network.

I'd be really excited to see 2A written up! Also 3 and 4 (in that order)

I think I'd be interested in 1. Also, I recently collected all prior work I'd found that seemed substantially relevant to the unilateralist's curse; unfortunately it wasn't much, and you may have seen it all already, but just thought I'd mention it in case it could help you with that post idea.

(I've also added your shortform comment to that list now.)

Here's some stuff which I may consider writing when I have more time. The posts are currently too low on the priorities list to work on, but if anyone thinks one of these is especially interesting or valuable, I might prioritize it higher, or work on it a little when I need a break from my current main project. For the most part I'm unlikely to prioritize writing in the near future though because I suspect my opinions are going to rapidly change on a lot of these topics soon (or my view on their usefulness / importance / relevance).

1) Where Does EA take root? The characteristics of geographic regions which have unusually high numbers of effective altruists, with an eye towards guessing which areas might be fertile places to attempt more growth. (Priority 4/10, mostly because I already have the data due to working on another thing, but I'm not sure to what extent growth is a priority.)

2) Systemic Change - What does it mean in concrete terms? How would you accomplish it within an EA framework? How might you begin attempting to quantify your impact? Zooming out from the impact analysis side of things a bit to look at the power structures creating the current conditions, and understanding the "replaceabilty" issues for people who work within the system. (priority 3/10, may move up the priorities list later because I anticipate having more data and relevant experience becoming available soon, but I'm ).

3) A (as far as I know, novel) thought experiment meant to complicate utilitarianism, which has produced some very divergent responses when I've posed it in conversation so far. The intention is to call into question what exactly it is that we suppose ought to be maximized. (priority 3/10)

4) How to turn philosophical intuitions about "happiness", "suffering", "preference", "hedons" and other subjective phenomenological experiences into something which can be understood within a science/math framework, at least for the purposes of making moral decisions. (priority 3/10)

5) Applying information in posts (3) and (4) to make practical decisions about some moral "edge cases". Edge cases include things like: non-human life, computer algorithms, babies and fetuses, coma, dementia, severe brain damage and congenital abnormalities. (priority 3/10)

6) How are human moral and epistemic foundations formed? If you understand the "No Universally Compelling Arguments" set of concepts, this post is basically helping people apply that principle in practical terms referencing real human minds and cultures, integrating various cultural anthropology and post modernist works. (priority 2/10)



Where Does EA take root?

You may have seen that we analyzed this a bit as part of the EA Survey. I'm curious what data source you have?

4
ishaan
That very EA survey data, combined with Florida et al.'s The Rise of the Megaregion data characterizing the academic/intellectual/economic output of each region. It would be a brief post; the main takeaway is that EA geographic concentration seems associated with a region's prominence in academia, whereas things like economic prominence, population size, etc. don't seem to matter much.
Systemic Change - What does it mean in concrete terms? How would you accomplish it within an EA framework? How might you begin attempting to quantify your impact? Zooming out from the impact analysis side of things a bit to look at the power structures creating the current conditions, and understanding the "replaceabilty" issues for people who work within the system. (priority 3/10, may move up the priorities list later because I anticipate having more data and relevant experience becoming available soon).

Would be highly interested in this, and a case study showing how to rigorously think about systemic change using systems modeling, root cause analysis, and the like.

"How targeted should donation recommendations be" (sorta)

I've noticed that Givewell targets specific programs (e.g. their recommendation), ACE targets whole organisations, and among far future charities you just kinda get promising-sounding cause areas.

I'm interested in what kind of differences between cause areas lead to this, and also whether anything can be done to make more fine-grained evaluations more desirable in practice.

I'm thinking of writing a post about my experience doing an economics PhD with EA motivations. I think this might be interesting to people considering a career in research and especially in social science research, given that this is a career path 80k hours has discussed in the past (e.g. "Economics PhD the only one worth getting?"). I don't have an overarching thesis, so this would be more of a collection of observations -- what it's like, what's good about it, what's bad about it.

I just want to write about "Do plants really feel pain?" I think it might be a great topic to share here.

"Genome editing and the replacement, reduction and relief of pain as a cause area"

  • A few individuals lead near-normal lives with the complete absence of pain due to natural genetic variations.
  • Genome editing has the potential to replicate these genetic variations in all animals and people.
  • The problem with eliminating pain is its important role in the detection and avoidance of injury.
  • The challenge is to remove pain while retaining this function. Options include these 3Rs (inspired by the 3Rs of animal testing):
    • Replace pain with a painless sensory system. Complete absence of pain while retaining the detection and avoidance of injury.
    • Reduce the maximum level of pain from 10 to a 1 or 2 on the pain scale. Keep pain but reduce its severity.
    • Relieve pain for those who, out of choice or necessity, have not replaced or reduced pain.

Hello! 

I've recently started to write a post about how our education system could be structured to nurture the full spectrum of health that an individual has (physical, emotional, psychological, social, spiritual). I'm thinking about drawing from many different fields of science, such as neuroscience, psychology, sports science, sociology, public health (my own field), and education management.

As you may know, cardiovascular disease and mental health problems are on the rise in the West and are becoming pressing problems for our society, which may accumulate in the future if nothing is done to alter the course.

Let me know what you think and if this is the right place for such a post. 

Thanks!  

Hi Felix, I'd personally be very interested in reading such a post!

Things that I think might make this more interesting, or that may be typically missing from such evaluations, are:

  • what country are you talking about? Why that country?
  • what kind of positive effects for society would these changes produce? On what timescale?
  • which solutions create the most value or could be prioritised above others? (If we would need to implement multiple changes, why?)
  • are any of these solutions cost-effective? I'd be especially curious about the cost of advocacy, not just
... (read more)
1
Felix Rudfeldt
Hey, Vaidehi! Thanks for your feedback; I hadn't considered these questions before and they are a great help. Do you have any idea where to find more information on the cost of advocacy and implementation costs? I feel like this is outside my current knowledge.
2
Vaidehi Agarwalla 🔸
That's a good question. I am not sure of specific resources on advocacy in particular, but I highly recommend checking out Charity Entrepreneurship's resources on their idea evaluation process and how they evaluate different interventions. Some of their research reports also cover interventions that include advocacy (e.g. they previously looked into tobacco policy). It might also be interesting to see how ACE (Animal Charity Evaluators) evaluates its top recommendations, because most of them do advocacy work. Sorry about the lack of links, I'm on mobile, but you can just Google the names of the orgs. If you have any trouble finding info or these aren't that useful, let me know!
1
Felix Rudfeldt
No worries! I'll be sure to check them out and see if they're relevant to the post I'm thinking about writing. I bet I could also just google something like "advocacy work cost" to see what comes up. Thanks for your help, man! :)

I'm doing a lit review on the effectiveness of lobbying and on some of the relevant theoretical background that I'm planning on posting when I'm done. I feel like this is potentially very relevant but I'm not sure if people will be interested.

I'll throw my hat in as someone who would be interested to read this!

Hi, I'd be interested, and I've been thinking about similar stuff (measuring the impact of lobbying, etc.) from a UK policy perspective.

If helpful, happy to chat and share thoughts. Feel free to get in touch at: sam [at] appgfuturegenerations.com

Consider reaching out to Rethink Priorities, Charity Entrepreneurship and Good Policies (a CE-incubated charity). I think they'd be very interested, given that they're doing similar research (RP on ballot initiatives, CE did some on lobbying for animal welfare and has had interest in lobbying for tobacco taxation). Open Philanthropy Project and the managers of the EA Funds would also probably be interested in your findings.

3
MichaelA
I don't follow their work closely, but I believe the Good Food Institute interact with policymakers on the matter of regulation/labelling of alternative proteins, so perhaps they'd also be interested/have interesting thoughts.

I am planning on writing a post summarizing the existing discussion of information cascades in EA, the different forms they take, and possibilities for doing something against them. Lastly, I discuss why the concept of the information cascade might be disadvantageous. I would be interested in comments on the draft.

I'm writing a post about how our discussions of emerging technologies could apply technological determinism or social construction theory more rigorously. For example, we often talk about AI in a way that suggests that it is likely to advance towards superintelligence (technological determinism), but then assert that society has the power to shape the development of AI (social constructivism), given that superintelligence will emerge (determinism again). I think this reasoning is muddled, but I am not suggesting that we must choose either-or between determinism and constructivism.

An AMA. I honestly don't think I'm a particularly good person to write one, but I think it would be good to have more on here.

I think if you're in an EA job I'd love to see an AMA from you.

I don't think I will write these posts very soon, but I want to get my ideas out there so that others can help write them if I don't.

  1. A post on the problem of language barriers
  2. How auxiliary languages could help make EA more accessible to people who don't speak English
  3. How political action against artificial intelligence might slow down its development
  4. Associative morality
    1. This concerns how connections between ideas in people's brains could be relevant to moral philosophy, or at least convincing people of new ideas.
  5. Business ethics and its relevance to different problem areas

If you make any posts based on my ideas, please let me know so I can give you feedback.

Importance, Tractability and Neglectedness should not have equal weight.

TL;DR: Neglectedness is a useful tiebreaker and gives you information about tractability, but the relatively common matrix approach of scoring possible ideas on ITN and then ranking based on the sum of the scores overweights it.

If you're using the formal mathematical definitions of the terms from this section of the 80,000 Hours article, then their product (before taking logs) has an interpretation in natural units, as good done / extra person or $, so if you reweight, this interpretation for the product will be lost. Are you interpreting the ITN terms differently?
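For readers skimming this exchange, here is a sketch of the factorization being referred to, assuming the 80,000 Hours definitions mentioned above (the notation and layout are mine, not from the comment):

```latex
% Sketch of the ITN factorization under the 80,000 Hours definitions (notation mine).
\[
\underbrace{\frac{\text{good done}}{\%\ \text{of problem solved}}}_{\text{Importance}}
\times
\underbrace{\frac{\%\ \text{of problem solved}}{\%\ \text{increase in resources}}}_{\text{Tractability}}
\times
\underbrace{\frac{\%\ \text{increase in resources}}{\text{extra person or dollar}}}_{\text{Neglectedness}}
\;=\;
\frac{\text{good done}}{\text{extra person or dollar}}
\]
% Taking logs turns the product into a sum, so an unweighted sum of log-scores still ranks
% options by marginal cost-effectiveness. Unequal weights correspond to the product
% I^{w_I} T^{w_T} N^{w_N}, which no longer has those natural units.
\[
w_I \log I + w_T \log T + w_N \log N \;=\; \log\!\left(I^{w_I}\, T^{w_T}\, N^{w_N}\right)
\]
```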

3
alex lawsen
Yes, or at least I think the way they are often interpreted is different. I actually have no issue with 80k's formal definition, but qualitative use in practice (not by 80k) has often put both of 80k's last two points into the tractability metric, and then there's this other nebulous factor called 'Neglectedness' which ends up being counted again. The key metric is how much good can be done by one marginal extra person or dollar, and I've seen a few cases of people estimating that (which will clearly be affected by diminishing marginal returns) and then adding a Neglectedness score on as well, which seems wrong. I haven't written this up yet as I don't think it's hugely important - it's typically a feature of naïve/rough work, and there's definitely a chance that some of this kind of work is actually using a framework modelled on 80k's but just not exposing that well. Most high-quality research is done by an actual CEA rather than by the ITN framework, so there's obviously no issue there.
2
MichaelStJules
Ok, makes sense! In case you haven't seen it, this might be helpful to see what other critiques are out there already.
Comments (4)

This is a very random small thing, but I have been thinking about writing a post about singing / call-and-response at EAGs / EA meetups. There are a few studies pointing to a relationship between communal feeling and singing together; it seems to be pretty cross-cultural, so I feel like it might be a cost-effective way to increase the feeling of community. I also felt a bit lonely at the start of my first EAG and I was thinking that this might have helped.

 

It would probably cost too many weirdness points, but I feel like it could still be interesting to explore.

I'd be interested in seeing the research on singing!

I don't know if EAG is really the place for that, but people often bring instruments to retreats and smaller events and I think they add a lot.

This post should win an award for how long it's had active comments coming in

This is really a nice approach, as I'm stuck and need some help on an article/project I'm working on. Here it is:


I have experienced, and seen from others, the frustration grassroots change-makers go through before they get disoriented and let the ills in society and the environment go on unabated.

I started thinking of a better model to push funds to change-makers so they concentrate more on grassroots impact and less on pulling funds.

The first approach that came to mind was reducing the 'cost of sacrifice' to zero, so that millions of altruists become philanthropists through purchase decisions where a small x% goes to a grassroots cause they care about. The latest consumer research supports this model: "91% would switch brands for one championing a cause" (Deloitte Global Millennial Survey 2019).

But before I could test it, as I thought and read further, I discovered that society and the planet actually invest in value creation, but a bug in the markets makes sure they get a bounced check during wealth sharing; and that we can use technology, AI and other tools to rally consumers to reclaim the planet's and society's wealth shares. Then the socially interested AI can fund change at a thrilling scale.

The article I'm drafting argues:

Wealth cannot be created without investment from the planet, society, government and businesses, yet the planet and society hardly get to share the wealth. The PlaSo Diversion bypasses social middlemen (philanthropy & development aid) to ensure 'planet and society' get their just share from the wealth creation process right at the counter.

INVESTORS IN VALUE CREATION

GOVERNMENTS: provide physical, economic, political, and legal infrastructure.

BUSINESSES: spot, innovate, and invest time/money into a need for a product/service.

PLANET: every consumer product/service has to use some component of earth.

SOCIETY: provides a goldmine of knowledge generated over 5,000 years of global cooperation and cultural civilization, as well as markets, etc.; without global society, businesses would have to start from a prohibitively costly vacuum.

MONETISING VALUE

At this point, a bug in the markets makes buyers/sellers believe that the only creator of the product/service is the business. It's so established that even the staunchest inequality activists continue to consume billionaire-owned products/services even as they shower slurs at them.

DISTRIBUTING WEALTH

Businesses and governments receive their shares and keep the planet's/society's shares. And when they need it less, they create philanthropic foundations and aid agencies to distribute the remains to the planet and society.

That we give businesses wealth incentives to innovate and governments wealth incentives to govern, yet deny society wealth incentives to cooperate and the planet wealth incentives to sustain us, is the mother of all injustices in the world. The current monetization system delivers full ownership and control of wealth to self-interested businesses and governments.

To solve this, we can create decentralised autonomous AI agents running on blockchain, whose self-interest is the social interest, to divert the planet's/society's shares at the point of monetization (the PlaSo Diversion). The AI agents would distribute the wealth to the most urgent, neglected and solvable social/environmental problems. If we can make this the norm, we won't need philanthropy and aid.

"Philanthropy is commendable, but it should not allow the philanthropists to overlook the very injustice which makes philanthropy necessary." Martin Luther
