Askell

Comments

On individual advice: I'd add something about remembering that you are always in charge and should set your own boundaries. You choose what you want to do with your life, how much of EA you accept, and how much you want it to influence your choices. If you're a professional acrobat and want to give 10% of your income to effective charities, that's a great way to be an EA. If someone points out that you also have a degree in computer science and could go work on AI safety, it's fine to reply "I know, but I don't want to do that". You don't need to defend or justify your choices on EA grounds.

(That doesn't mean you might not want to defend some choice you've made. The research side of EA is all about making and breaking down claims about what actions do the most good. But people's personal choices about how to act don't themselves constitute claims about the best way to act.)

EA is a highly intellectual community, so I worry that people feel the need to justify or defend anything they do or any choice they make through an EA lens, and that this might make EA infiltrate their lives more than they are actually comfortable with and prevent them from setting the right boundaries. People should do EA things because and to the extent that they want to, and the EA community should be there as a resource to help them do that. But EA should justify itself to you, not the other way round.

So I think that if you identify with or against some group (e.g. 'anti-SJWs'), then anything people say that pattern-matches to something that group would say triggers a reflexive negative reaction. This manifests in various ways: you're inclined to read far more into the person's statements than what they're actually saying, or you set an overly demanding bar for them to "prove" that what they're saying is correct. And I think all of that is pretty bad for discourse.

I also suspect that if we took a more detached attitude towards this sort of thing, disagreements about things like how much of a diversity problem EA has, or what is causing it, would be much less prominent than they currently are. These disagreements only affect the benefits we expect to accrue directly from trying to improve things, but the costs of trying these things are usually pretty low and the information value of experimenting with them is really high. So I don't see many plausible views in this area that would make it rational to take a strong stance against the easier things people could try that might increase the number of women and minorities who get involved with EA.

An example of a particular practice that I think might look kind of innocuous but can be quite harmful to women and minorities in EA is what I'm going to call "buzz talk". Buzz talk involves making highly subjective assessments of people's abilities, putting a lot of weight on those assessments, and communicating them to others in the community. Buzz talk can be very powerful, but the beneficiaries of buzz seem disproportionately to be those who conform to a stereotype of brilliance: a white, upper-class male might be "the next big thing" when his black, working-class female counterpart wouldn't even be noticed. These are the sorts of small, unintentional behaviors that I think it can be good for people to try to be conscious of.

I also think it's really unfortunate that there's such a large schism between those involved in the social justice movement and those who largely disagree with it (think: SJWs and anti-SJWs). The EA community attracts people from both of these groups, and I think this can cause people to see the whole issue through the lens of whichever group they identify with. It might be helpful if people tried to drop this identity baggage when discussing diversity issues in EA.

There are two different claims here: one is "type x research is not very useful" and the other is "we should be doing more type y research at the margin". In the comment above, you seem to be defending the latter, but your earlier comments support the former. I don't think we necessarily disagree on the latter claim (perhaps on how to divide x from y, and the optimal proportion of x and y, but not on the core claim). But note that the second claim is somewhat tangential to the original post. If type x research is valuable, then even though we might want more type y research at the margin, this isn't a consideration against a particular instance of type x research. Of course, if type x research is (in general or in this instance) not very useful, then this is of direct relevance to a post that is an instance of type x research. It seems important not to conflate these, or to move from a defense of the former to a defense of the latter. Above, you acknowledge that type x research can be valuable, so you don't hold the general claim that type x research isn't useful. I think you do hold the view that either this particular instance of research or this subclass of type x research is not useful. I think that's fine, but I think it's important not to frame this as merely a disagreement about what kinds of research should be done at the margin, since this is not the source of the disagreement.

I suspect that the distinctions here are actually less bright than "philosophical analysis" and "concrete research". I can think of theoretical work that is consistent with doing what you call (i)-(iii) and does not involve a lot of guesswork. After all, a lot of theoretical work is empirically informed, even if it's not itself intended to gather new data. And a lot of this theoretical work is quite decision-relevant. A simple example is effective altruism itself: early work in EA was empirically informed theoretical work. Another example that's close to my heart is value of information work. There are open problems in how to identify high and low value of information, when to explore vs. exploit, and so on. I suspect that doing empirically informed theoretical work on these questions would be more fruitful than trying to solve them through empirical means alone. So my inclination is to take this on a case-by-case basis. We see radical leaps forward sometimes being generated by theoretical work and sometimes by novel empirical discoveries. It seems odd not to draw from two highly successful methods.

What, then, about purely a priori work like mathematics and conceptual analysis? I think I agree with Owen that this kind of work is important for building solid foundations. But I'd go further and say that if you find good, novel foundational work to do, it can often bear fruit later. E.g. much work in economics and game theory is of this sort, and yet a lot of concepts from game theory are very useful for analyzing real-world situations. It would have been a shame if this work had been dismissed early on as not decision-relevant.