Hi there, I'm looking for an explanation of why some problems on the 80k list are ranked as more pressing than others. 

I understand that it's supposedly based on ITN (importance/tractability/neglectedness) scores, with the less pressing problems having lower values. However, from looking at a few examples, some of the problems listed as less pressing seem to have scores in those categories comparable to problems listed as more pressing.

For example, if we compare AI (top priority), nuclear security (second-highest priority), and factory farming (lower priority), the "ratings in a nutshell" (according to the problem profile summaries) are:

AI: very large impact / somewhat neglected / moderately tractable
Nuclear: large impact / not very neglected / somewhat-to-moderately tractable
Factory farming: large impact / moderately neglected / moderately tractable

Those ratings don't seem to line up with the overall ranking of how pressing these problems are relative to each other, so there must be additional reasons behind 80k's differential assessment. And those reasons don't seem to be explained clearly on the overall list or in the individual profiles.

My best guess at present is that a "long-term value" judgement is involved too: for example, factory farming is judged lower priority because it's not clear that reducing animal suffering in the short term will have positive flow-through effects comparable to those of other issues. This is alluded to in the factory farming profile, but it's still not clear whether that's one of the decisive reasons for it being less recommended than other causes.

Can anyone help clear this up for me? Thanks in advance!

Hey OmariZi,

Partly the ranking is based on an overall judgement call. We list some of the main inputs into it here.

That said, rather than the "ratings in a nutshell" section, I think you need to look at the more quantitative version.

Here's the summary for AI:

Scale: We think work on positively shaping AI has the potential for a very large positive impact, because the risks AI poses are so serious. We estimate that the risk of a severe, even existential catastrophe caused by machine intelligence within the next 100 years is something like 10%.

Neglectedness: The problem of potential damage from AI is somewhat neglected, though it is getting more attention with time. Funding seems to be on the order of $100 million per year. This includes work on both technical and policy approaches to shaping the long-run influence of AI by dedicated organisations and teams.

Solvability: Making progress on positively shaping the development of artificial intelligence seems moderately tractable, though we’re highly uncertain. We expect that doubling the effort on this issue would reduce the most serious risks by around 1%.

Here's the summary for factory farming:

Scale: We think work to reduce the suffering of present and future nonhuman animals has the potential for a large positive impact. We estimate that ending factory farming would increase the expected value of the future by between 0.01% and 0.1%.

Neglectedness: This issue is moderately neglected. Current spending is between $10 million and $100 million per year.

Solvability: Making progress on reducing the suffering of present and future nonhuman animals seems moderately tractable. There are some plausible ways to make progress, though these likely require technological and expert support.

You can see that we rate them similarly for neglectedness and solvability, but think the scale of AI alignment is 100-1000x larger. This is mainly due to the potential of AI to contribute to existential risk, or to other very long-term effects.
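To make the arithmetic concrete, here's a minimal sketch in Python of how those figures produce the 100-1000x gap, assuming scale is measured as the expected fraction of the future's value at stake and that the overall pressingness score multiplies scale, neglectedness, and solvability together. That multiplicative combination is my illustration of the ITN logic, not necessarily 80k's exact aggregation method, and the variable names are mine.

```python
# Illustrative only: a rough ITN-style comparison using the figures quoted
# above. Multiplying scale * neglectedness * solvability is an assumption
# about how the scores aggregate, not necessarily 80k's exact method.

# Scale: expected fraction of the future's value at stake.
ai_scale = 0.10                               # ~10% risk of severe AI catastrophe
ff_scale_low, ff_scale_high = 0.0001, 0.001   # 0.01%-0.1% for factory farming

print(f"Scale ratio (AI / factory farming): "
      f"{ai_scale / ff_scale_high:.0f}x to {ai_scale / ff_scale_low:.0f}x")
# -> 100x to 1000x, matching the claim above

# Neglectedness and solvability are rated similarly for both problems
# (funding ~$100M vs $10M-$100M per year; both "moderately tractable"),
# so treat those ratios as roughly 1, leaving the overall ratio driven by scale:
neglectedness_ratio = 1.0
solvability_ratio = 1.0

low = (ai_scale / ff_scale_high) * neglectedness_ratio * solvability_ratio
high = (ai_scale / ff_scale_low) * neglectedness_ratio * solvability_ratio
print(f"Overall pressingness ratio: roughly {low:.0f}x to {high:.0f}x")
```

In other words, when two problems are roughly tied on neglectedness and tractability, the scale estimate becomes decisive, which is why the qualitative "in a nutshell" labels alone can't reproduce the ranking.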

Thanks Ben, that helps a lot! 
