I keep seeing EA[1] accused of being "techno-utopian," which I think means something like, "They may not talk much about it, but ultimately the thing that's driving all their work is the dangerous/naive/selfish/capitalist/colonial/male vision of a spacefaring civilisation of happy sentient beings made possible by differential technological development."

If we likewise try to oversimplify their motives for a moment, what's their vision?

I often find myself assuming that it's either something like "Direct democracy everywhere"[2][3] or that there isn't really one (because critics are rarely expected to provide fleshed out alternatives to the thing criticised). But I haven't given it much thought and I'm curious to hear others' impressions.

I don't think a group needs to have confident consensus on a comprehensive vision of the future to have productive moral debate with others. But I do think it would be helpful to get a bit more clarity on what our respective visions might be, because they seem to be closer to where the main cruxes are than where the debate usually takes place.

 

  1. ^

    Perhaps "longtermism" or "core EA" would be more accurate, as I think I've seen EAs make this accusation of longtermist/core EAs a fair bit too.

  2. ^

    I.e. for all adult human beings alive today for all non-trivial decisions. Maybe with some attempt to represent the interests of domesticated nonhuman vertebrates or human beings in the next 100 years max.

  3. ^

    And then the hugely simplified picture in my mind of what's going on when EAs argue is one side saying, "But can we just agree that hell is overwhelmingly bad and heaven is overwhelmingly good?" and the other saying, "But can we just agree that that line of reasoning has a mixed-at-best track record even by its own lights?" over and over again.

There's not going to be a one-size-fits-all answer to this. EA (implicitly and explicitly) criticises how many other worldviews see the world, and as such we get a lot of criticism back. However, it is a topic I've thought a bit about, so here are my best guesses at the 'visions' of some of our critics, sorted into four groups. [Note: I wrote this up fairly quickly, so please point out any disagreements or mistakes, or suggest additional groups I've missed.]

1: Right-of-centre Libertarians: Critics from this school may think reasonably well of EA's intentions, but believe we are naïve and/or hubristic, and place us in a tradition of thought that relies on central planning rather than market solutions. They'd argue that the most efficient interventions are the spread of markets and the rule of law rather than charities. If on the more socially conservative end, they may also believe that social traditions capture cultural knowledge that can't be captured by quantification or first-principles reasoning. Example critic: Tyler Cowen

2: Super Techno-Optimistic Libertarians: This set thinks that EA has been captured by 'wokeness'/'AI doomers'/whatever Libertarian boogeyman you can think of here. In my experience they are generally dismissive of EAs and EA institutions, and not really willing to engage in object-level discussions. Their favoured interventions are probably cutting corporate taxes, removing regulations, and increasing funding for AI capabilities so we can go as fast as possible and reap the huge benefits they expect.

In a way, this group acts as a counterpoint to some other EA critics, who don't see a true distinction between us and this group, perhaps because many of its members live in the Bay Area and are socially similar to/entangled with EAs there. Example critics: Perry Metzger/Marc Andreessen

3: Decentralised Democrats: There are some similarities to group 1 here, in the sense that critics in this group think that EAs are too technocratic. Sources of disagreement include pragmatic ones (they are likely to believe that social institutions are so poorly adapted to the modern world that fixing them is a higher priority than 'core EA' thinks), normative ones (they likely believe that decisions with a large impact on the future deserve the consent of as much of the world as possible, not just the acceptance of whatever EA thinks), and sociological ones (if I had to guess, I'd say they're more centre-left/liberaltarian than other EA critics). They are very likely to think that distinguishing between EA-as-belief and EA-as-institutions is a false distinction, and very supportive of reforms to EA, including community democratisation. Example critics: E. Glen Weyl/Zoe Cremer

4: Radical Progressives/Anti-capitalists: This group is probably the one you're thinking of in terms of 'our biggest critics', and they've been highly critical of EA since the beginning. They generally believe EA to be actively harmful, and usually ascribe this either to deliberate design or to EA being blind to its support of oppressive ideologies/social structures. There's probably a lot of variation in what kind of world they do want, but it's likely to be a very radical departure, probably involving mass cultural and social change (perhaps revolutionary change), ending capitalism as it is currently constituted, and more money, power, and support being given to the State to bring about positive changes.

There is a lot of variation in this group, though you can pick up on some common themes (e.g. a more Hickel-esque view of human progress, compared to the more 'Pinkerite' view that EA might have) and common calls-to-action (climate change is probably the largest/most important cause area here). I suggest you don't take my word for it and read them yourself,[1] but I think you won't find much in terms of practical policy suggestions, perhaps because that's seen as "working within a fatally flawed system", though some in this group are more moderate. Example critics: Alice Crary/Emile Torres/Jason Hickel

  1. ^

Though I must admit, I find reading criticism from this group very demotivating: much of it seems to me to be in bad faith, shallowly researched, to assume bad intentions from EAs, or to avoid object-level debates on purpose. YMMV though.