
July 1-7 will be AI Welfare Debate Week on the EA Forum. We will be discussing the debate statement: “AI welfare[1] should be an EA priority[2].” The Forum team will be contacting authors who are well-versed in this topic to post, but we also welcome posts, comments, quick takes and link-posts from any Forum user who is interested. All participating posts should be tagged with the AI Welfare Debate Week tag. 

We will be experimenting with a banner on which users can mark how strongly they agree or disagree with the debate statement, and a system that uses the posts readers cite as having changed their minds to produce a list of the most influential posts. 

[Illustration: a brightly coloured image, viewable in any direction, with several scenes: people looking stressed in front of computers, overlapping faces, squashed emojis, and other motifs.]
Illustration found on Better Images of AI

Should AI welfare be an EA priority?

AI welfare — the capacity of digital minds to feel pleasure, pain, happiness, suffering, satisfaction, frustration, or other morally significant welfare states — appears in many of the best and worst visions of the future. If we consider the value of the future from an impartial welfarist perspective, and if digital minds of comparable moral significance to humans are far easier to create than humans, then the majority of future moral patients may be digital. Even if they don’t make up the majority of minds, the total number of digital minds in the future could be vast. 

The most tractable period to influence the future treatment of digital minds may be limited. We may have decades or less to advocate against the creation of digital minds (if that were the right thing to do), and perhaps not much longer than that to advocate for proper consideration of the welfare or rights of digital minds if they are created. 

Therefore, gaining a better understanding of the likely paths in front of us, including the ways in which the EA community could be involved, is crucial. The sooner, the better. 

My hopes for this debate

Take these all with a pinch of salt; the debate is for you, and these are just my (Toby’s) opinions. 

  • I’d like to see discussion focus on digital minds and AI welfare rather than AI in general.
  • There will doubtless be valuable discussion comparing AI welfare to other causes, but the most interesting arguments are likely to focus on the merits or demerits of this cause itself. In other words, it’d be less interesting (for me at least) to see familiar arguments that one cause should dominate EA funding, or that another cause should not be funded by EA, even though both would be ways to push towards agree or disagree on the debate statement. 
  • I’d rather we didn’t spend too high a percentage of the debate on the question of whether AI will ever be sentient, although we will have to decide how to deal with the uncertainty here. 

FAQs

How does the banner work?

The banner will show the distribution of the EA Forum’s opinion on the debate question. Users can place their icon anywhere on the axis to indicate their opinion, and can move it as many times as they like during the week. 

Some users might prefer not to see the distribution of the Forum's opinion on the question until the end of the week, so as not to bias their own vote. For this reason, you must click "view results" on the banner in order to see other users' votes. 

Voting on the banner is non-anonymous. You can reset your vote by hovering over your icon and clicking the "x".

How are the “most influential posts” calculated? 

Under the banner, you’ll be able to see a leaderboard of “most influential posts”. When you change your mind and move your avatar on the debate slider, you will be prompted to select the debate week posts which influenced you. These posts will be assigned points based on how far you moved your avatar. You can vote as many times as you like, but only your largest mind change will be recorded for each cited post. The post with the most points will be at the top of the most influential posts list. 
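For the curious, here is a minimal sketch (in TypeScript) of the scoring rule described above. The names and types are illustrative, not the Forum's actual implementation: each cited post is credited with the largest single mind change (slider distance) each citing user recorded for it, and posts are ranked by the sum of those credits.

```typescript
// Illustrative only: a hypothetical model of the "most influential posts" scoring.
type VoteChange = {
  userId: string;
  citedPostIds: string[];
  distanceMoved: number; // how far the avatar moved on the agree/disagree axis
};

function influentialPostScores(changes: VoteChange[]): Map<string, number> {
  // For each post, keep only the largest mind change each citing user recorded.
  const largestChangeByPost = new Map<string, Map<string, number>>();
  for (const { userId, citedPostIds, distanceMoved } of changes) {
    for (const postId of citedPostIds) {
      const byUser = largestChangeByPost.get(postId) ?? new Map<string, number>();
      byUser.set(userId, Math.max(byUser.get(userId) ?? 0, distanceMoved));
      largestChangeByPost.set(postId, byUser);
    }
  }

  // A post's score is the sum of those per-user largest changes.
  const scores = new Map<string, number>();
  for (const [postId, byUser] of largestChangeByPost) {
    let total = 0;
    for (const d of byUser.values()) total += d;
    scores.set(postId, total);
  }
  return scores;
}
```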

Do I have to write in the style of a debate?

No. The aim of this debate week is to elicit interesting content which changes the audience’s mind. This could be in the form of a debate-style argument for accepting or rejecting the debate proposition. However, the most influential posts could also be link-posts, book reviews, or bullet-point lists of the cruxes in the debate. Don’t feel constrained to a form which doesn’t fit the content you’d like to contribute. 

Further reading

This list is incomplete; you can help by expanding it. I'll edit suggestions into the post. 

  1. ^

    By AI welfare, I mean the potential wellbeing (pain and pleasure, but also frustration, satisfaction, etc.) of future artificial intelligence systems. 

  2. ^

    By “EA priority” I mean that 5% of (unrestricted, i.e. open to EA-style cause prioritisation) talent and 5% of (unrestricted, i.e. open to EA-style cause prioritisation) funding should be allocated to this cause. 

Comments

I like this!

Relevant context for those unaware: supposedly, Good Ventures (and by extension OpenPhil) has recently decided to pull out of funding artificial sentience.

Can you give some examples of topics that qualify and some that don't qualify as "EA priorities"?

I feel like for the purpose of getting the debate started, the vague question is fine. For the purpose of measuring agreement/disagreement and actually directly debating the statement, it's potentially problematic. Does EA as a whole have priorities? How much of a priority should it be?

Interesting distinction, thank you!
I'm thinking of a chart like this, which represents descriptive or revealed "EA priorities":
 

(Link to spreadsheet here, and original Forum post here.) The question is (roughly) whether Artificial Welfare should take up 5% of that right-hand bar or not, and similarly for the EA talent distribution (for which I don't have a graph to hand). 

As a more general point: I think we can say that EA has priorities, insofar as funders and individuals, in their self-reported EA decisions, clearly have priorities. We will be arguing about prescriptive priorities (what EAs should do), while paying attention to descriptive priorities (what EAs already do). 

This is a great experiment. But I think it would have been much clearer if the question were phrased as "What percentage of talent and funding should be allocated to AI welfare?", with the banner showing a slider from 0% to 100%. As it stands, if I strongly disagree with allocating 5% but strongly agree with 3%, say, I feel I should still place my icon on the extreme left of the line. That would make it look like I'm entirely against this cause, which wouldn't be the case.

Good point (I address similar concerns here). For the time being, I would personally treat a "half agree" as some percentage under 5%, and I'd suggest explaining your vote in the discussion thread if you want to make sure people know what you mean. 

I think I would prefer to strongly disagree, because I don't want my half agree to be read as if I agreed to some extent with the 5% statement. "Half agree" is ambiguous here: people could take it to mean 1) something around 2.5% of funding/talent, or 2) that 5% could be OK with some caveats. This should be clarified so we can tell what the results actually mean.  

Makes sense Leo, thanks. I don't want to change anything very substantial about the banner after so many users have voted, but I'll bear this in mind for next time. 

I just want to register the worry that the way you've operationalised “EA priority” might not line up with a natural reading of the question. 

The footnote on “EA priority” says:

By “EA priority” I mean that 5% of (unrestricted, i.e. open to EA-style cause prioritisation) talent and 5% of (unrestricted, i.e. open to EA-style cause prioritisation) funding should be allocated to this cause.

This is a bit ambiguous (in particular, over what timescale), but if it means something like “over the next year”, then that would mean finding ways to spend ≈$10 million (5% of roughly $200 million of unrestricted funding) on AI welfare by the end of 2025, which you might think is just practically very hard to do even if you think that more work on current margins is highly valuable. Similar things could have been said for, e.g., pandemic prevention or AI governance in the early days!
