Epistemic status:

I have thought a decent amount about the hard problem of consciousness and the far future, and some about AI, but I am working on another project and don't have much time to hone this one; I just wanted to get something out for AI Welfare Debate Week. Feedback welcome!

  1. If a focus on artificial welfare detracts from alignment enough that it causes alignment to fail, this could be catastrophic and highly net negative 
    1. One monkey-wrench: if at some point AI sentience / AI welfare becomes a hot political issue, it could open up an avenue for slowing down AI, which would count against this concern
  2. Artificial welfare could be the most important cause; it may be something like animal welfare multiplied by longtermism. Most or possibly all future minds may be artificial, and:
    1. If they are not sentient, this would be a catastrophe, or
    2. If they are sentient and suffering (for example, if optimizing their reward function is for some reason actually painful for them, such that the most painful action is also the most powerful and evolutionarily fit, causing suffering AIs to dominate), this would be a suffering catastrophe
    3. If they are sentient and prioritize their own happiness and wellbeing, this could actually be quite good
  3. Perhaps advanced AI can help us solve the hard problem of consciousness, either via AGI or artificial superintelligence that automates philosophy, or via finding a way to actually measure consciousness through extensive experimentation on artificial minds
  4. Being able to measure consciousness would be extremely good, as it would allow us to measure and quantify suffering and happiness, which would likely lead to innovations in animal welfare and in increasing human and AI happiness and decreasing suffering
  5. One possible path to working on artificial sentience is connecting human minds to AI with brain-computer interfaces.
  6. Alternatively, when we are able to upload human minds or create whole brain emulations, we will likely be able to confirm that digital sentience is possible and study it empirically
  7. It is possible that the only way to create artificial sentience will be to very deliberately design it to be sentient. If we are not able to achieve AI alignment, then the next best thing might be designing artificial sentience with high positive wellbeing, so that if AI ends up destroying humanity, it becomes our successor and populates the universe with artificial minds possessing high positive wellbeing
  8. If we are able to achieve AI alignment, perhaps it is best not to design artificial sentience, if it is not naturally occurring, because of the issues in the next point
  9. If artificial sentience is confirmed or designed, it opens up a profound can of highly aware worms to wrestle with:
    1. AIs are moral patients
    2. Is aligning AI in fact equivalent to enslaving it?
    3. The possibility of super-beneficiaries, digital minds capable of profoundly higher wellbeing than humans, may imply that, according to utilitarianism, the ethical thing to do is to design new minds that are capable of orders of magnitude higher wellbeing than humans are (we could also allow current human minds who so choose to upload and transition themselves into super-beneficiaries)
    4. It will likely be desirable to create tool AIs that are not sentient, if this is possible, for some tasks, and sentient AIs for others, perhaps super-beneficiaries as digital people who have rights and responsibilities, though not necessarily the same rights and responsibilities as humans (e.g., the right to reproduce combined with the right to vote)
    5. It is not clear to what degree highly sentient artificial minds should be “aligned,” but it does seem clear that artificial minds should be designed such that they have high positive wellbeing and are glad to have been created. Ideally it would also be possible to create them such that humans are glad they were created; otherwise perhaps it is best to hold off, or to instead transition human minds into digital minds via uploading
    6. Etc.
  10. I am just now discovering the contest rules and did not realize I was supposed to convince the audience whether AI Welfare should be a priority or not... My bad! I didn't realize there was a structure to it; I was just excited to flesh out some of the important cruxes on the topic. To answer the big question, my overall feeling is that:
    1. The most important thing is something like the amount of net happiness or wellbeing in the universe, which requires sentience
    2. We know humans are sentient, we do not yet know whether AI is sentient
    3. The net amount of happiness or wellbeing will mostly be determined by whether superintelligent AI is under control or not, and by the values and choices of whoever controls superintelligent AI, or by the values of the AI itself (note: even a sentient AI may not primarily value sentience)
    4. Due to the hardness of the hard problem of consciousness, it is quite possibly more difficult to confirm or create sentient AI than it is to align AI (if this changes or is incorrect, then AI Welfare should be a higher priority, especially if superintelligent AI could be designed to be sentient and to highly value sentience)
    5. Therefore, it is probably best to prioritize aligning AI first and then figure out what to do about AI Welfare, because this is probably the most surefire way to ensure that the long-term future is determined by beings who highly value sentience, although for those with a strong comparative advantage in AI Welfare or no interest in alignment, it may make sense to focus on AI Welfare

Feedback Welcome!
 
