Katja_Grace (1377 karma)

Comments (16)

Possible, but likely a smaller effect than you might think, because: a) I kept the subject matter very ambiguous until respondents were actually taking the survey (e.g. I did not mention AGI, risk, or timelines), and b) last time (for the 2016 survey) we checked the demographics of respondents against those of a random subset of non-respondents, and they weren't very different.

Participants were also mostly offered substantial payment for taking the survey (usually $50 for a ~15-minute survey), partly in the hope of making payment a larger motivator than the desire to express some particular view. But I don't think the payment actually made a large difference to the response rate, so it probably failed to have the desired effect on possible response bias.

>I would be very excited to see research by Giving Green into whether their approach of recommending charities which are, by their own analysis, much less cost effective than the best options is indeed justified.

Several confusions I have:

  • When did they say these were much less cost-effective? I thought they just failed to analyze cost-effectiveness? (Which is also troubling, but different from what you are saying, so I'm confused.)
  • What do you mean by it being justified? It looks like you mean 'does well on a comparison of immediate impact', but if these recommendations are likely to be interpreted as claims about what is most cost-effective, this approach sounds close to outright dishonesty, which seems like it would still not be justified. (I'm not sure to what extent they are presenting them that way.)
  • Do they explicitly say that this is their approach?

Do you have quantitative views on the effectiveness of donating to these organizations that could be compared to other actions? (Or do any of the links go to something like that?) Sorry if I missed them.

It seems worth distinguishing 'effectiveness' in the sense of personal competence (as I guess is meant in the first case, e.g. 'reasonably sharp') from 'effectiveness' in the sense of trying to choose interventions by cost-effectiveness.

Also remember that selecting people to encourage in particular directions is a subset of selecting interventions. It may be that 'E not A' people are more likely to be helpful than 'A not E' people, but that chasing either group is less helpful than doing research on E that is helpful for whichever people already care about it. I think I have stronger feelings about E-improving interventions overall being good than about which people are more promising allies.

Yeah, and among common intuitions, I think. But I thought EAs were mostly consequentialists, so the intended role of obligations is not obvious to me.

I'm curious about the implicit framework where some things are obligatory and some things are choices.

We evaluated all of the projects other than the three I specifically mentioned not evaluating. Sorry for not writing up the other evaluations - we just didn't have time. We bought the ones that gave us the most impact per dollar, according to our evaluations (and based on the prices people wanted for their work). So we didn't purchase Joao's work this round because we calculated that it was somewhat less cost-effective than the things we did purchase, given the price. We may still purchase it in a later round.
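
For concreteness, here is a minimal sketch (in Python, with entirely hypothetical project names, impact estimates, prices, and budget) of a selection rule like the one described above: rank projects by estimated impact per dollar at the asking price, and buy down the ranking while the budget allows.

```python
# Illustrative sketch only: project names, impact scores, prices, and budget
# are all made up. Projects are ranked by estimated impact per dollar and
# bought greedily until the budget is exhausted.

projects = [
    # (name, estimated impact in arbitrary units, asking price in dollars)
    ("project_a", 120, 400),
    ("project_b", 90, 450),
    ("project_c", 200, 500),
]

budget = 900

# Rank by impact per dollar, highest first.
ranked = sorted(projects, key=lambda p: p[1] / p[2], reverse=True)

purchased = []
for name, impact, price in ranked:
    if price <= budget:
        purchased.append(name)
        budget -= price

print(purchased)  # projects bought this round; the rest may be revisited later
```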

Changing one's values does not more effectively promote the values one started with, so it seems one should be averse to it. I think the expanding circle case is more complicated: the advocates of a wider circle are trying to convince others that those others are mistaken about their own existing values, and that by consistency they must care about some entities they currently think they don't care about. This is why the phenomenon looks like an expanding circle - points just outside the circle look a lot like points just inside it, so consistency pushes the circle outwards (though this doesn't explain why the circle expands rather than contracts).

It seems there are some common situations where this comes up. For instance, when one person is doing a thing they think is good given personal constraints that are hidden from their conversation partner, and worries about being judged harshly because the constraints are hidden. Or when one person is trying out a thing because they think it might be very good, but doesn't yet believe it is very good (except for the value of information), and worries that others think they are actually advocating for something suboptimal. Or when one person doesn't think what they are doing is likely to be optimal, but struggles to find something actually better that they could feasibly do.

Perhaps it would be helpful if there were a thing you could say in these recognized circumstances to let your conversation partner know that you realize what you are doing doesn't look optimal, and that you are already aware of the situation.
