David Stinson

Let me try to steelman this fear (which I mostly disagree with):

  1. Social media was originally thought to be a radical force for democratic change - see the Arab Spring, for instance.
  2. The objective of disinformation was never to change minds, but to reduce trust in anonymous online interactions. See Russia's human-staffed propaganda operations, such as troll farms.
  3. Thus, disinformation blunts the value proposition of social media platforms in allowing individuals to coordinate political action. 

So what we're really talking about here is an opportunity cost: disinformation prevents social media from achieving its full potential, a potential that may have been oversold in the first place.

My own view is that very few actors will attempt to target "political trust" as an abstract force. Instead, we should be significantly more concerned about financially motivated scams targeting individuals.

Focusing just on the quoted text, I'm not sure "happy medium" is the right message to take from these two incidents. AI and blockchain involve two entirely different ways of thinking about risk control.

AI risk involves frequent events with undefined causes, whereas a digital currency collapse is a rare event with overdetermined causes. The first calls for lots of communication in order to establish a logical sequence of events; the second calls for carefully controlled communication in order to prevent false logic from taking hold.

I was going to say something similar, based on international relations theory (realism). The optimal size of a military power unit changes over time, but equilibria can exist.

In the shorter term, though, threshold effects are possible, particularly when the optimal size grows and the number of powers shrinks. We appear to be in the midst of a consolidation cycle now, as cybersecurity and a variety of internet technologies have strong economies of scale. 

I feel like a lot of what you're describing is already encompassed by the concept of scalability, which would naturally include integration with existing social systems. However, you are right to question whether this is a "relatively well-defined technical problem."

An alternative taxonomy might distinguish "technical" from "game-theoretic" alignment. The latter recognizes that competing visions for social organization exist and will not be resolved within the scope of AI regulation. That, in turn, leads to more meta-theoretical questions about how ambitious the AI safety agenda should be if it is not to stifle market competition, which may be the ultimate insurance against extremist goalcraft.

Otherwise, engaging in these debates at the object level creates an open invitation to manipulation and bad faith.