Bio

Kinda pro-pluralist, kinda anti-Bay EA

(my opinions are fully my own, and do not represent the views of any close associates or the company I work for)

Comments

Answer by JWS

There's not going to be a one-size-fits-all answer to this. EA (implicitly and explicitly) criticises how many other worldviews see the world, and as such we get a lot of criticism back. However, it is a topic I've thought a bit about, so here are some best guesses at the 'visions' of some of our critics put into four groups. [Note: I wrote this up fairly quickly, so please point out any disagreements, mistakes, or suggest additional groups that I've missed]

1: Right-of-centre Libertarians: Critics from this school may think reasonably well of EA's intentions, but believe we are naïve and/or hubristic, and place us in a tradition of thought that relies on central planning rather than market solutions. They'd argue that the most effective interventions are the spread of markets and the rule of law rather than charities. They may also, if on the more socially conservative end, believe that social traditions capture cultural knowledge that can't be captured by quantification or first-principles reasoning. Example critic: Tyler Cowen

2: Super Techno-Optimistic Libertarians: This set thinks that EA has been captured by 'wokeness'/'AI doomers'/whatever Libertarian boogeyman you can think of here. They are generally dismissive of EAs and EA institutions, and in my experience not really willing to engage in object-level discussions. Their favoured interventions are probably cutting corporate taxes, removing regulations, and increasing funding for AI capabilities so we can go as fast as possible to reap the huge benefits they expect.

In a way, this group acts as a counter-point to some other EA critics, who don't see a true distinction between us and this group, perhaps because many of them live in the Bay and are socially similar to/entangled with EAs there. Example critic: Perry Metzger/Marc Andreessen

3: Decentralised Democrats: There are some similarities to group 1 here, in the sense that critics in this group think that EAs are too technocratic. Sources of disagreement include pragmatic ones (they are likely to believe that social institutions are so poorly adapted to the modern world that fixing them is a higher priority than 'core EA' thinks), normative ones (they likely believe that decisions with a large impact on the future deserve the consent of as much of the world as possible, not just the acceptance of whatever EA thinks), and sociological ones (if I had to guess, I'd say they're more centre-left/liberaltarian than other EA critics). Very likely to think that distinguishing between EA-as-beliefs and EA-as-institutions is a false distinction, and very supportive of reforms to EA, including community democratisation. Example critic: E. Glen Weyl/Zoe Cremer

4: Radical Progressives/Anti-capitalists: This group is probably the one that you're thinking of in terms of 'our biggest critics', and they've been highly critical of EA since the beginning. They generally believe EA to be actively harmful, and usually ascribe this to either deliberate design or EA being blind to its support of oppressive ideologies/social structures. There's probably a lot of variation in what kind of world they do want, but it's likely to be a very radical departure, probably involving mass cultural and social change (perhaps revolutionary change), ending capitalism as it is currently constituted, and more money, power, and support being given to the State to bring about positive changes.

There is a lot of variation in this group, though you can pick up on some common themes (e.g. a more Hickel-esque view of human progress, compared to the more 'Pinkerite' view that EA might hold) and common calls-to-action (climate change is probably the largest/most important cause area here). I suggest you don't take my word for it and read them yourself,[1] but I think you won't find much in terms of practical policy suggestions - perhaps because that's seen as "working within a fatally flawed system" - though some in this group are more moderate. Example critic: Alice Crary/Emile Torres/Jason Hickel

  1. ^

    Though I must admit, I find reading criticism from this group very demotivating - lots of it seems to me to be in bad faith, shallowly researched, assuming bad intentions from EAs, or deliberately avoiding object-level debates. YMMV though.

Hi Jeff,

Yes, this roughly comes under the 4th bullet point at the end on 'further data cleaning'. I think different people could obviously slice the data differently in terms of what belongs in each topic (e.g. is Longtermism its own category, or part of philosophy?), but it is a chart I'll probably get around to creating.

As for sharing the data, absolutely! I'm happy to share it with anyone who asks really, as long as they understand the caveats to it I mention for interpretation's sake. If you'd like to have a look, DM me and we can set something up :)

As for your earlier comment, it's an interesting idea that the EA 'core' represents all parts of the philosophy, and that newer entrants have been drawn in more by the longtermist side. I think a lot of people have the anecdotal experience that people often make their way to EA through the 'neartermist'/global-health side, and then experience a 'rug pull' when introduced to xRisk/AI stuff. I'm not sure there's good data on that - maybe when Rethink can share more from the latest EA survey?

Hey Sol, some thoughts on this comment:

  • I don't think the Forum's reaction to the HLI post has been "shut up and just be nice to everyone else on the team", as Jason's response suggested.
  • I don't think mine suggests that either! In fact, my first bullet point has a similar sceptical prior to the one you express in this comment.[1] I also literally say "holding charity evaluators to account is important to both the EA mission and EA's identity", and point out that I don't want to sacrifice epistemic rigour. In fact, one of my main points is that people - even those disagreeing with HLI - are shutting up too much! I think disagreement without explanation is bad, and I salute the thorough critics on that post who have made their reasoning for putting HLI in 'epistemic probation' clear.
  • I don't suggest 'sacrificing the truth'. My position is that the truth about Strongminds' efficacy is hard to get a strong signal on, and therefore HLI should have been more modest earlier in its history, instead of framing it as the most effective way to donate.
  • As for the question of whether HLI were "quasi-deliberately stacking the deck" - well, I was quite open that I am confused about where the truth lies, and find it difficult to adjudicate what the correct takeaway should be.

I don't think we really disagree that much, and I definitely agree that the HLI discussion should proceed transparently and that EA has a lot to learn from the last year, including FTX. If you re-read my Quick Take, I think you'll see I'm not taking the position you think I am.

  1. ^

    That's my interpretation of course, please correct me if I've misunderstood

JWS

The HLI discussion on the Forum recently felt off to me - bad vibes all around. It seemed very heated, with not a lot of scout mindset, and reading the various back-and-forth chains I felt like I was 'getting Eulered', as Scott once described.

I'm not an expert on evaluating charities, but I followed a lot of links to previous discussions and found this discussion involving one of the people running an RCT on Strongminds (which a lot of people are waiting for the final results of), who was highly sceptical of SM's efficacy. But the counterarguments offered in the thread seem just as valid to me. My current position, for what it's worth,[1] is:

  • the initial Strongminds result of ~10x cash transfers should prompt a sceptical response - most things aren't that effective
  • it's worth exploring what the SWB approach would recommend as the top charities (think of this as trying other arms in a multi-armed bandit charity-evaluation problem; see the toy sketch after this list)
  • it's very difficult to do good social science, and the RCT won't give us dispositive evidence about the effectiveness of Strongminds (especially at scale), but it may help us update. In general we should be mindful of how far we can make rigorous empirical claims in the social sciences
  • HLI has used language too loosely in the past and overclaimed/been overconfident, which Michael has apologised for, though perhaps some critics would like a stronger signal of neutrality (this links to the 'epistemic probation' comments)
  • GiveWell's own 'best guess' analysis seems to put Strongminds at 2.3x the cost-effectiveness of GiveDirectly.[2] I'm generally a big fan of the GiveDirectly approach for reasons of autonomy - even if Strongminds' efficacy were revised down to around ~1x GD, it'd still be a good intervention. I'm much more concerned with what this number is than with the tone of HLI's or Michael's claims tbh (though not at the expense of epistemic rigour).
  • The world is rife with actively wasteful or even net-negative action, spending, and charity. The integrity of EA research and holding charity evaluators to account are important to both the EA mission and EA's identity, but HLI seems to have been singled out for very harsh criticism[3] when so much of the world is worse.
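
To illustrate the multi-armed bandit framing from the second bullet, here's a toy sketch of my own (the arm names, numbers, and the epsilon-greedy rule are made up purely for illustration - they aren't anything HLI or GiveWell actually uses):

```python
# Toy sketch only: a made-up epsilon-greedy bandit over two hypothetical evaluation "arms".
# The point is that spending some fraction of evaluation effort exploring a less-favoured
# approach (e.g. an SWB-based one) is how you find out whether it's actually better.
import random

true_effectiveness = {"incumbent_approach": 1.0, "swb_approach": 1.2}  # hypothetical values
estimates = {arm: 0.0 for arm in true_effectiveness}
counts = {arm: 0 for arm in true_effectiveness}
epsilon = 0.1  # fraction of evaluations spent exploring rather than exploiting

for _ in range(1000):
    if random.random() < epsilon:
        arm = random.choice(list(true_effectiveness))         # explore a randomly chosen arm
    else:
        arm = max(estimates, key=estimates.get)                # exploit the current best guess
    reward = random.gauss(true_effectiveness[arm], 0.5)        # noisy observation of effectiveness
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # update the running mean

print(estimates)  # with enough exploration, the better arm's estimate ends up higher
```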

I'm also quite unsettled by a lot of what I call 'drive-by downvoting'. While writing a comment is a lot more effort than clicking to vote on a comment/post, I think the signal is a lot higher, and it would help those involved in debates reach consensus better. Some people with high-karma accounts seem to be making some very strong votes on that thread, and very few are making their reasoning clear (though I salute those who do, in either direction).

So I'm very unsure how to feel. It's an important issue, but I'm not sure the Forum has shown itself in a good light in this instance.

  1. ^

    And I stress this isn't worth much in this area - I generally defer to evaluators

  2. ^

    On the table at the top of the link, go to the column 'GiveWell best guess' and the row 'Cost-effectiveness, relative to cash'

  3. ^

    Again, I don't think I have the ability to adjudicate here, which is part of why I'm so confused.

To explain my disagree-vote: this kind of explanation isn't a good one in isolation.

I could also say it benefits AI developers to downplay[1] risk, as that means their profits and status will be high, and society will have a more positive view of them as people developing fantastic technologies rather than raising existential risks.

And what makes this a bad explanation is that it is so easy to vary. Like above, you can flip the sign. I can also easily swap out the area for any other existential risk (e.g. Nuclear War or Climate Change), and the argument could run exactly the same.

Of course, I think motivated reasoning is something that exists and may play a role in explaining the gap between superforecasters and experts in this survey. But on the whole I don't find it convincing without further evidence.

  1. ^

    consciously or not

JWS

I just wanted to say that this was a fantastic post, and one of the best reads (imo) on the Forum this year.

I've never been to the Bay or interacted in person with this culture, so I'd be very interested to hear to what extent other EAs on the Forum think that these perceptions are accurate.[1]

In general it makes me appreciate coming to EA (at least the community side, as opposed to awareness of the philosophy) later in life - it means my professional and personal life isn't entangled with EA to the extent that seems to be causing a lot of dissonance and distress in these anecdotes.

I do think that some of these 'corrupting influences' are things that happen naturally in any human society and hierarchy (e.g. The Seeker's Game vignette itself - the phrase "It's not what you know, it's who you know" is a common idiom for a reason!), but there do seem to be reasons why these concerns are worse in the Bay than in other EA hubs atm.

  1. ^

    Only if you're comfortable sharing ofc

JWS

Adding a +1 to Nathan's reaction here, this seems to have been some of the harshest discussion on the EA Forum I've seen for a while (especially on an object-level case). 

Of course, making sure charitable funds are doing the good that they claim is something that deserves attention, research, and sometimes a critical eye. From my perspective of wanting more pluralism in EA, it seems[1] to me that HLI is a worthwhile endeavour to follow (even if its programme ends with it being ~the same as or worse than cash transfers). Of all the charitable spending in the world, is HLI's really worth this much anger?

It just feels like there's inside baseball that I'm missing here.

  1. ^

    weakly of course, I claim no expertise or special ability in charity evaluation

I'd be very interested to read a post about your thoughts on this (though I'm not sure what 'ITT' means in this context?), and I'm curious which SSC post you're referring to.

I also want to say I'm not sure how universal the 'EAs have been caught so off guard' claim is. Some have been, sure, but plenty were hoping that the AI risk discussion would stay out of the public sphere for exactly this kind of reason.

JWS

Just want to register some disagreement here about the name change, to others in this thread and Will (not just you Gemma!). In rough order of decreasing importance:

  • I really don't like the name MoreGood. It's a direct callback to LessWrong. I don't want to have to endorse LW to endorse EAF, or EA more generally, or the causes we care about, and this name change would signal that. Yes, there's some shared intellectual history, but I don't think LW-rationalism is inherent to or necessary for EA.
  • People new to or interested in EA will probably search for "EA" or "Effective Altruism". They wouldn't know about the rebrand or name change unless there was a way to preserve it for SEO.
  • I think EA Forum is fine, and it is the major place for EA discussion online at the moment. I don't think it's that unrepresentative of EA?
  • Any other online forum will also be skewed towards those online or 'extremely online'. I think EA Twitter is much worse for this than the Forum.
  • In the spirit of do-ocracy, there's no reason that other people can't set up an alternative forum with a different focus/set of norms, though it will probably suffer from the network effects that make it difficult to challenge social media incumbents.

I do accept it was just a small draft suggestion though.

JWS

While I agree 'no first strikes' is good, my prior is that EA communications currently has a 'no retaliation at all' policy, which I think is a very bad one (even if unofficial - I buy Shakeel's point that there may have been a diffusion of responsibility around this)

So for clarification, do you think that CEA ought to adopt this policy just because it is a good thing to do, or because they/other EAs have broken this rule and it needs to be a clearer norm? If the latter, I'd love to see some examples, because I can't really think of any (at least from 'official' EA orgs, and especially the CEA comms team).

On the other hand, I can think of many examples, some from quite senior figures/academics, of people absolutely attacking EA in an incredibly hostile way and basically being met with no pushback from official EA organisations or 'EA leadership', however defined.
