freedomandutility

"Thinking in terms of group rather than individual agency makes transition from capitalism to socialism appear more tractable."

I disagree. There is a long history of large, organised, and well-funded groups failing to engineer transitions to socialism within individual countries, let alone a global transition to socialism.

I'd also like to add "backlash effects" to this, and specifically effects where advocacy for AI Safety policy ideas that are far outside the Overton Window has the inadvertent effect of mobilising coalitions that already oppose AI Safety policies.

I think Yudkowsky's public discussion of nuking data centres has "poisoned the well" and had backlash effects.

I think this is really worrying, and it’s also surprising how little work I’ve seen trying to explain it.

One view I’ve come across is that the public are so traumatised by Covid that they want to avoid thinking about pandemics.

FWIW, I think this kind of post is extremely valuable. I may not see him as very EA-aligned, but identifying very rich people who might be a bit EA-aligned is very good, because the movement could seek to engage with them more and potentially get funding for some impactful stuff.

"Most charities seem much less effective than the most effective for-profit organizations, and most of the good in the world seems achieved by for-profit companies."

I disagree, but even if I did agree: per dollar of investment, I think the best charities far outperform the best for-profit companies in terms of social impact, and we can do a reasonable job of identifying the best charities, such that donating a lot of money to them should be seen as a necessary component of being EA-aligned if you're rich.

I don't think the third question is a good-faith question.

This is the context for how Wenar used the phrase: "And he’s accountable to the people there—in the way all of us are accountable to the real, flesh-and-blood humans we love."

I interpret this as "direct interaction with the individuals you are helping ensures accountability", i.e., they have a mechanism to object to and stop what you are doing. This contrasts with aid programs delivered by hierarchical organisations, where locals cannot interact with decision-makers and so cannot effectively oppose programs they do not want, e.g. the deworming incident where parents were angry.

"If I accepted every claim in his piece, I’d come away with the belief that some EA charities are bad in a bunch of random ways, but believe nothing that imperils my core belief in the goodness of the effective altruism movement or, indeed, in the charities that Wenar critiques."

I agree, but I think Wenar does a very good job of pointing out specific weaknesses. If he had instead framed this piece as "how EA should improve" (which is how I mentally steelman every EA hit-piece I read), it would be an excellent piece. Under his actual framing of "EA bad", I think it is a very unsuccessful piece.

I think these criticisms of his are very good and perceptive:

  1. Global health and development EA does not adequately account for side-effects, unintended consequences, and perverse incentives caused by interventions in its expected-value calculations, and does not adequately advertise these risks to potential donors. Weirdly, I don't think I've come across this criticism of EA before, despite it seeming very obvious. This might be because people are polarised between "aid bad" and "aid good", leaving very few people saying "aid is good overall, but you should be transparent about the downsides of the interventions you support".
  2. The use of quantitative impact estimates by EAs can mislead audiences into overestimating the quality of the quantitative empirical evidence supporting those estimates.
  3. Expected-value calculations rooted in probabilities derived from belief (as opposed to probabilities derived from empirical evidence) are prone to motivated reasoning and self-serving biases.

I've previously discussed weaknesses of expected-value calculations on the forum and have suggested some actionable tools to improve them.
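To make criticism 1 concrete, here is a minimal sketch in Python (all probabilities and values are hypothetical, chosen only to illustrate the mechanism) of how leaving downside terms out of an expected-value calculation inflates the headline estimate:

```python
# Hypothetical outcome distribution for an intervention
# (all numbers are made up for illustration).
outcomes = [
    # (probability, impact in arbitrary units)
    (0.70, 100),   # works as intended
    (0.20, 20),    # partial success
    (0.10, -50),   # backfires: perverse incentives, backlash, etc.
]

naive_ev = sum(p * v for p, v in outcomes if v > 0)  # downsides omitted
full_ev = sum(p * v for p, v in outcomes)            # downsides included

print(f"EV with downsides omitted:  {naive_ev:.1f}")   # 74.0
print(f"EV with downsides included: {full_ev:.1f}")    # 69.0
```

The gap between the two figures is exactly the probability-weighted harm that, per criticism 1, rarely makes it into the headline estimate presented to donors.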

I think GiveWell should definitely clarify what they think the most likely negative side-effects and risks of the programs they recommend are, and how severe they think those side-effects are.

This is great, thank you for doing this hard work!

A couple of disagreements:

"I think it’s important for many to realise the importance of other players and funding sources in the landscape. This could mean many more funding opportunities EAs are systematically neglecting." 

My view is that having many players and funding sources means that fewer important funding opportunities will be missed.

"I was struck by how little philanthropy has been directed towards tech development for biosecurity, mitigating GCBRs, and policy advocacy for a range of topics from regulating dual-use research of concern (DURC) to mitigating risks from bioweapons."

I 100% agree regarding policy advocacy, but I disagree regarding tech development and mitigating GCBRs, for reasons you yourself mention: many different interventions, including vaccine R&D and broad public health systems strengthening in LMICs, contribute to mitigating GCBRs.

My sense is that there is a lot of impact to be had just from convincing US foundations to donate to charities abroad, which is probably more tractable than selling EA as an entire concept, and is still very compatible with TBP.

(In my opinion they are basically correct about TBP and EA being incompatible!)
