Recently, there has been significant interest in the EA community in investigating the short-term social and political risks of AI systems. I'd like to recommend this video (and Jordan Harrod's channel as a whole) as a starting point for understanding the empirical evidence on these issues.

In this video, Jordan reviews several studies on whether YouTube's recommendation algorithm contributes to radicalization: some of these studies say it does, others say it doesn't. She argues that the studies share two common weaknesses, both of which are difficult for research of this kind to address:

  1. The authors themselves decide which YouTube channels to study and how to categorize them by political ideology, and classifying channels by ideology is hard. Without agreement on how channels should be categorized, it's difficult to trust claims that the recommender system pushes users in any particular direction.
  2. All of these studies are conducted from logged-out sessions, so it's hard to say whether their findings would apply to logged-in users, about whose preferences YouTube's algorithm has far more information. This could be addressed either by using real YouTube accounts (which would raise significant privacy issues) or by getting access to YouTube's algorithm and data, both of which are unlikely for typical academic researchers.

Jordan also notes that YouTube was able to decrease the reach of radicalizing content, largely by removing the videos themselves.

Finally, Jordan puts the studies on YouTube's algorithm into context: YouTube is part of a larger information ecosystem, in which multiple actors (including websites and individuals) may spread misinformation and radicalizing content, so it may be hard to isolate the effects of any one of these actors.
