Hey Joey, Arden from 80k here. I just wanted to say that I don't think 80k has "the answers" to how to do the most good.
But we do try to form views on the relative impact of different things and reach working answers, and then act on those views (e.g. by communicating them and investing more where we think we can have more impact).
So e.g. we prioritise the cause areas we work on most by our take on their relative pressingness, i.e. how much expected good we think people can do by trying to solve them, and we also communicate these views to our readers.
(Our problem profiles page emphasises that we're not confident we have the right rankings, both in the FAQ (https://80000hours.org/problem-profiles/#problems-faq) and at the top of the page, and by ranking meta problems like global priorities research fairly highly.)
I think all orgs interested in having as much positive impact as they can need to have a stance on how to do that -- otherwise they cannot act. They might be unsure (as we are), and open to changing their minds (as we try to be), and often be asking themselves the question "is this really the way to do the most good?" (as we try to do periodically). I think that's part of what characterises EA. But in the meantime we all operate with provisional answers, even if that provisional answer is "the way to do the most good is to not have a publicly stated opinion on things like which causes are more pressing than others."
This feels fairly tricky to me actually -- I think between the two options presented I'd go with (1) (except I'm not sure what you mean by "If we'd focus specifically on EAs it would be even better" -- I do overall endorse our current choice of not focusing specifically on EAs).
However, some aspects of (2) seem right too. For example, I do think that much of our content (though not all of it) covers things EAs already know about. And I think some of the "here's why it makes sense to focus on impact" - type content does fall into that category (though I don't think it's harmful for EAs to consume that, just not particularly useful).
The way I'd explain it:
Our audience does include EAs. But there are a lot of different sub-audiences within the audience. Some of our content won't be good for some of those sub-audiences. We also often prioritise the non-EA sub-audiences over the EA sub-audience when thinking about what to write. I'd say that the website currently does this the majority of the time, but sometimes we do the reverse.
We try to produce different content that is aimed primarily at different sub-audiences, but which we hope will still be accessible to the rest of the target audience. So for example, our career guide is mostly aimed at people who aren't currently EAs, but we want it to be at least somewhat useful for EAs. Conversely, some of our content -- like this post on whether or not to take capabilities-enhancing roles if you want to help with AI safety (https://80000hours.org/articles/ai-capabilities/), and to a lesser extent our career reviews -- is "further down our funnel" and so might be a better fit for EAs; but we also want it to be accessible to non-EAs and put work into making that the case.
This trickiness is a downside of having a broad target audience that includes different sub-audiences.
I guess if the question is "do I think EAs should ever read any of our content" I'd say yes. If the question is "do I think all of our content is a good fit for EAs" I'd say no. If the question is "do I think any of our content is harmful for EAs to read" I'd say "overall no" though there are some cases of people (EAs and non-EAs) being negatively affected by our content (e.g. finding it demoralising).
I'm trying out iteratively updating some of 80,000 Hours' pages that we don't have time to do big research projects on right now. To this end, I've just released an update to https://80000hours.org/problem-profiles/improving-institutional-decision-making/ — our problem profile on improving epistemics and institutional decision making.
This is sort of a tricky page because there is a lot of reasonable-seeming disagreement about what the most important interventions are to highlight in this area.
I think the previous version had some issues: (1) It was confusing, and it was common for readers to come away with very different impressions of the problem area. This seems to be in part because the term "improving institutional decision making" is very broad and can include a lot of different things. (2) We didn't do a great job of making clear our views about which sub-areas were most promising. This is partly because those views are not that strongly developed! Basically a lot of people who've thought about it disagree, and we're not confident about who's right. The previous version of the article, though, presented a confident-sounding picture that mostly highlighted forecasting, structured analytic techniques, and behavioral sciences. (3) It was out of date. (4) The opening felt a bit unrealistic.
In the update I sought to address (1) and (2) by just honestly writing that we aren't sure which focus(es) within the broad umbrella area are best, and going through a few of the options that seem most promising to us and to some people we asked for advice. I sought to address (3) and (4) by doing a low-hanging-fruit edit to update the information and writing, and by cutting the opening.
The update was much quicker than most updates we'd make to our problem profiles. It will be far from perfect. I'd be very happy to get feedback — if you want to suggest changes you can do so here as comments or leave a comment on this thread. However, I probably won't respond to most comments — as I said above, people have very different views in this area, so I'd be surprised if there weren't a decent amount of disagreement with the update. That said, I still want to hear views (especially if you think perhaps I haven't heard them before), and if there are smaller changes that seem positive I'd be very keen to make them (e.g. "X is a bad example of the thing you're talking about.")
Hey Holden,
Thanks for these reflections!
Could you maybe elaborate on what you mean by a 'bad actor'? There's some part of me that feels nervous about this as a framing, at least without further specification -- like maybe the concept could either be applied too widely (e.g. to anyone who expresses sympathy with "hard-core utilitarianism", which I don't think would be right), or be given a really strict definition (like only people with dark tetrad traits) in a way that leaves out people who might be likely to (or: have the capacity to?) take really harmful actions.
Thanks for this post! One thought on what you wrote here:
I feel unsure about this. Or like, I think it's true we have those downsides, but we also probably get upsides from being in the middle here, so I'm unsure we're in the worst of both worlds rather than e.g. the best (or probably just somewhere in the middle of both worlds).
e.g. We have the upsides of fairly tightly knit information/feedback/etc. networks between people/entities, but also the upsides of there being no red tape on people starting new projects, and the dynamism that creates.
Or as another example, entities can compete for hires, which incentivises excellence and people doing the roles where they have the best fit, but they can also freely help one another become more excellent by e.g. sharing research and practices (as if they were part of one thing).
Maybe it just feels like we're in the worst of both worlds because we focus on the negatives.