
Mayowa Osibodu

AI Research and Engineering @ Aditu (https://aditu.tech)
9 karma · Joined · Working (6-15 years)
about.me/mayowaosibodu

Comments (1)

Interesting podcast - I read the transcript.

My main takeaway was that building AI systems to have self-interest is dangerous, because their interests could come into direct conflict with humanity's, making self-interested superintelligent AIs a major existential risk.

I wonder, though, whether self-interest offers any advantage in AI. Is there any way self-interest could make an AI more effective at accomplishing its goals? In biological entities, self-interest obviously helps with e.g. avoiding threats and seeking more favourable living conditions. I wonder whether this applies in a similar manner to AIs, or whether self-interest in an AI is inconsequential at best.


I'm curious: what exactly is the worry with AGI development in e.g. Russia and China? Is the concern that they are somehow less invested in building safe AGI (which would seem to conflict strongly with their own self-interest)?

Or is the concern that they could somehow build AGI that selectively harms people or countries of their choosing? In that latter case, it seems to me that the problem is exclusively a human one, and isn't ethically different from concerns about super-lethal computer viruses or biological/nuclear weapons. It's not clear how this particular risk is specific to AI/AGI.