John G. Halstead

10,524 karma

Bio

John Halstead - Independent researcher. Formerly Research Fellow at the Forethought Foundation; Head of Applied Research at Founders Pledge; and researcher at the Centre for Effective Altruism. DPhil in political philosophy from Oxford.

Comments (679)

"These spatial fixes leave climate and ecological breakdown, mass poverty and extreme income inequality in their wake." This is empirically false. If you think that capitalism started in 1800 and gradually took over the world over the following 200 years, then you will notice that mass poverty has not in fact increased. In fact, capitalism brought literally the first sustained increase in living standards ever. Living standards increased more over that period than in all prior human history combined, on any objective measure of welfare. Global income inequality also declined after East Asia went capitalist after WW2.

You say that capitalism is the cause of patriarchy and racism. Over the last 200 years, which you and Marx posit as the period of capitalist dominance, patriarchy has declined enormously, and the welfare of women has increased enormously. Capitalist societies are much less patriarchal than pre-capitalist societies and non-capitalist societies. Sexual violence is much lower in capitalist societies than in pre-capitalist societies.

The welfare of non-white people in Asia has also increased enormously since they went capitalist. Capitalist societies are also much less racist than societies in the pre-industrial period. 

You attribute environmental degradation to capitalism. There is some truth in this, to the extent that capitalism increases consumption, which sometimes (not always) causes increased environmental damage. However, a lot of fossil fuels are owned by governments, not private capitalists, and environmental management in non-capitalist countries is worse than in capitalist countries along many dimensions.

I don't think you need to commit yourself to including everyone. If it is true for any subset of people, then the point you gesture at in your post goes through. I have had similar thoughts to those you suggest in the post. If we gave the AI the goal of 'do what Barack Obama would do if properly informed and at his most lucid', I don't really get why we would have high confidence in a treacherous turn or in the AI misbehaving in a catastrophic way. The main response to this seems to be to point to examples of AI not doing what we intend in limited computer games. I agree something similar might happen with advanced AI, but I don't get why it is guaranteed to do so, or why any of the arguments I have seen lend weight to any particular probability estimate of catastrophe.

It also seems like increased capabilities would in a sense increase alignment (with Obama), because the more advanced AIs would have a better idea of what Obama would do.

I don't think deworming is classed as an education intervention by GiveWell. The rationale for deworming is that it supposedly increases long-term income, but GiveWell is unsure about the mechanism by which this happens. The follow-up studies to the Miguel and Kremer RCT (the only study used by GiveWell to assess deworming) find "little to no impacts on education". The initial RCT found no effect on test scores.

If I had to, I would guess that there is maybe one Republican-leaning regrantor in there. Do you think we should actively recruit more Trump-supporting regrantors, as this would also increase diversity? It also looks like almost all of the regrantors have university degrees. Do you think there should be more regrantors with no formal education? Approximately 14% of the world's adult population is illiterate, after all.

Which of the humanities should people be recruited from? 

This post is excellent. Overall, this reminds me of Bryan Caplan's comment on Radical Markets that it proposes ingenious solutions to non-existent problems. I have some supplementary comments.

I would make it one of the drawbacks that most use cases do not actually involve satisfaction of aggregate rational selfish preferences; in a slogan, democracy is not social choice. You mention this in the 'revisiting potential applications' section, but I think it should be classed as a drawback in its own right. Charitable donors, grantmakers and public good providers are not trying to satisfy their selfish rational preferences, but rather aiming to act altruistically, often with irrational or mistaken beliefs about how to make the world better.

  • The empirical literature on voting suggests that voters are neither selfish nor rational. Voters vote on the basis of what they believe to be the public interest, but are often badly ill-informed and irrational. Surveys from 2015 found that over one third of Republicans wanted to bomb Aladdin's homeland of Agrabah, while 44% of Democrats wanted to let in refugees from the same fictional country. This is not a rational selfish preference, but an irrational belief about what would be good for the world. The interesting thing about modern liberal democracies is how they do so well despite the political ignorance and irrationality of altruistic voters, not how they aggregate rational informed selfish preferences.
  • Similarly, as you note, grantmakers and donors are not expressing selfish preferences about pots of money, but rather acting on beliefs about how money should be spent to make the world better. The rationale for QF does not work in this context. The challenge of good grantmaking is to get people to make good altruistic decisions using multiple sources of information efficiently. 

Thanks for doing this, I appreciate the transparency in the calculations and write-ups. I have a few comments. 

  1. One possible consideration in biorisk is Kevin Esvelt's argument about declining costs of gene synthesis and the potential democratisation of bio WMD. I think these risks are extremely troubling in the next 10-20 years, and much larger than other biorisks, but I don't see much discussion of this argument in your post, and that might affect your conclusions.
  2. I'm a bit surprised to see deforestation in there as a top priority. Have any pandemics in the past been caused by deforestation? This is a genuine question; I don't know the answer. If I were focused on zoonoses, I think I would start with wild animal markets and factory farms, not deforestation, as this seems to have been what drove e.g. SARS and MERS. I notice that you assume that it costs $17 to avert a tonne of CO2 through forestry protection. After looking into this for some time, Johannes Ackva and I concluded that these numbers are too high, probably by 1-2 orders of magnitude, and that a lot of forestry protection probably has no effect. See the climate change page of the Giving What We Can website.

The default cultural norm varies a lot across offices within countries. Should we anchor to Google, hedge funds, Amazon, academia, Wave, Trajan House, the nonprofit sector, the local city council, etc.? So I don't understand which cultural norm the post is anchoring to, and so I don't understand the central claim of the post.

One of the examples given in the post is the implicit judgement that EA doesn't want to be like Google - but Google is an extremely successful company that people want to work for. I don't get why it is an example of excessive perk culture. It's true that FTX had excessive perks and also committed fraud. Google has nice perks but hasn't committed fraud.

While there may be some perks in EA, it is also the case that work in EA is (a) extremely competitive and (b) highly precarious. Most people struggle to get jobs or get by on one-year contracts, and have to compete for jobs with assorted Stakhanovite super-geniuses. This is very different to the rest of the comparatively cushy nonprofit sector.

While at times it appears the OP is arguing for the default cultural norm, he also says various things which seem inconsistent with that, such as that we can't have nice things and must not be free from menial tasks. There is a big gap between the extravagance of FTX and standard office perks, and the post provides no criterion on which to decide between these different perk cultures.

Re your last paragraph, that might be some of what is driving my disagreement, but I think my disagreement is:

  • I don't understand what the central claim of the post is, and that seems to be common among commenters - e.g. see Richard Ngo's comment and the first sentence in your reply to me. There appears to be widespread confusion about what the post means: should we have wine at conferences, should offices serve free coffee, what type of coffee is permitted, etc.?
  • Some of the supporting arguments for the central claim seem unsound.
  • Some of the supporting claims in the post seem inconsistent with one another.
  • At present, the section on what EA salaries should be has no substantive content. By definition, we don't want to underpay or overpay: these are tautologies. Similarly, what does 'pay well' mean?

I think it would be good if the OP would clarify which office perks he is criticising. Perks vary a lot across offices - probably more generous at Google and hedge funds, less at Amazon, less at a paper merchant in Slough, not great in academia, etc. The terms 'normal', 'usual' and 'nice' are doing a lot of work in this post but are never defined, and I don't know what they mean. Some things are normal in offices (dishwashers, standing desks) but are also nice.
