AmbiguousAardvark

Thank you for writing this, Matthew. I agree that capitalism is the elephant in the room, and that it requires much more active engagement and soul-searching by EA. It reminds me of Mark Fisher’s work on ‘capitalist realism’, which he defined as "the widespread sense that not only is capitalism the only viable political and economic system, but also that it is now impossible even to imagine a coherent alternative to it." According to Fisher, the quotation "it is easier to imagine an end to the world than an end to capitalism" encompasses the essence of capitalist realism.

From Wikipedia:

“According to Fisher, capitalist realism has so captured public thought that the idea of anti-capitalism no longer acts as the antithesis to capitalism. Instead, anti-capitalism is deployed as a means for reinforcing capitalism. This is done through modern media which aims to provide a safe means of entertaining anti-capitalist ideas without actually challenging the system. The lack of coherent alternatives, as presented through the lens of capitalist realism, leads many anti-capitalist movements to cease targeting the end of capitalism, but instead to mitigate its worst effects, often through individual consumption-based activities such as Product Red.”

This is most evident to me in the AI safety space, which I am still trying to understand (AI is not my expertise, but I have been following the AI safety discourse a bit through the 80k podcast). For all the emphasis on AI existential risk, it seems an entirely open question whether the increased attention (through EA, the FLI letter, etc.) contributed to increased investment in machine learning/AI, accelerating AI development and bringing us closer to the feared outcome. For a movement so concerned with doing the most good, shouldn't avoiding increased hype around, and investment in, a technology that could potentially kill us all be very high on the list of priorities? At the very least, I would expect much more modesty in the recommendations and investments in this area.

To me it is clear that a genuine solution to this kind of technologically induced existential risk has to address the root of the problem: changing the incentive structures (away from growth at the expense of massive risk to society) and the ‘legal fictions’ that create wealth. I recommend Ezra Klein’s interview with Katharina Pistor on this latter point. Maybe I do not fully understand the weighing of factors here; I would love it if someone could explain it to me, because this really baffles me.