Bio

I am open to work. I see myself as a generalist quantitative researcher.

How others can help me

You can give me feedback here (anonymous or not).

You are welcome to answer any of the following:

  • Do you have any thoughts on the value (or lack thereof) of my posts?
  • Do you have any ideas for posts you think I would like to write?
  • Are there any opportunities you think would be a good fit for me which are either not listed on 80,000 Hours' job board, or are listed there, but you guess I might be underrating them?

How I can help others

Feel free to check my posts, and see whether we could collaborate to contribute to a better world. I am open to part-time volunteering and paid work. For paid work, I typically ask for 20 $/h, which is roughly 2 times the global real GDP per capita.

Comments
1780

Topic contributions
26

Thanks, Michael.

1. Continuity: Continuity rules out infinities and prospects with finite value but infinite expected value, like St Petersburg lotteries. If continuity is meant to apply to all logically coherent prospects (as usually assumed), then this implies your utility function must be bounded. This rules out expectational total utilitarianism as a general view.

2. Continuity: You might think some harms are infinitely worse than others, e.g. when suffering reaches the threshold of unbearability. It could also be that this threshold is imprecise/vague/fuzzy, and we would also reject completeness to accommodate that.

In practice, I think the effects of one's actions decay to practically 0 after 100 years or so. In principle, I am open to one's actions having arbitrarily large, but not infinite, effects, and continuity does not rule out arbitrarily large effects.
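The St Petersburg point in 1 can be illustrated numerically. A minimal sketch in Python (my own illustration, using the standard formulation where the lottery pays 2^k with probability 2^-k):

```python
def truncated_expected_value(n_rounds: int) -> float:
    """Expected payoff of a St Petersburg lottery truncated at n_rounds.

    The lottery pays 2**k with probability 2**-k, so each round
    contributes exactly 1 to the expected value.
    """
    return sum(2**-k * 2**k for k in range(1, n_rounds + 1))

# Every individual payoff is finite, but the expected value
# grows without bound as more rounds are allowed.
for n in (10, 100, 1000):
    print(n, truncated_expected_value(n))  # prints 10.0, 100.0, 1000.0
```

This is what "finite value but infinite expected value" means: no single outcome is infinite, yet the expectation diverges.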

3. Completeness: Some types of values/goods/bads may be incomparable. Or, you might think interpersonal welfare comparisons, e.g. across very different kinds of minds, are not always possible. Tradeoffs between incomparable values would often be indeterminate. Or, you might think they are comparable in principle, but only vaguely so, leaving gaps of incomparability when the tradeoffs seem too close.

Reality forces us to compare outcomes, at least implicitly.

4. Independence: Different accounts of risk aversion or difference-making risk aversion (not just decreasing marginal utility, which is consistent with Independence).

I just do not see how adding the same possibility to each of 2 lotteries can change my assessment of them.
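The independence condition in 4 can be made concrete. A minimal sketch (my own illustration; lotteries are represented as outcome-to-probability dicts, and I assume linear utility for simplicity):

```python
def expected_utility(lottery, utility=lambda x: x):
    """Expected utility of a lottery given as {outcome: probability}."""
    return sum(p * utility(x) for x, p in lottery.items())

def mix(lottery_a, lottery_b, alpha):
    """Compound lottery: lottery_a with probability alpha, else lottery_b."""
    outcomes = set(lottery_a) | set(lottery_b)
    return {x: alpha * lottery_a.get(x, 0) + (1 - alpha) * lottery_b.get(x, 0)
            for x in outcomes}

l1 = {100: 1.0}           # certain 100 $
l2 = {1: 0.5, 10: 0.5}    # 50 % of 1 $, 50 % of 10 $
l3 = {50: 1.0}            # the same possibility added to both

# Under expected utility, mixing in the same third lottery
# preserves the ranking for any mixing probability alpha.
assert expected_utility(l1) > expected_utility(l2)
assert expected_utility(mix(l1, l3, 0.3)) > expected_utility(mix(l2, l3, 0.3))
```

Risk-averse views which reject independence would allow the second comparison to flip even though the added possibility (l3) is identical in both compound lotteries.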

Thanks, Nuño. I strongly endorse maximising expected welfare, but I very much agree with using heuristics. At the same time, I would like to see more cost-effectiveness analyses.

Thanks, LChamberlain! It has now been a month, so I am sending this kind reminder.

Thanks for the comment, Karthik! I strongly upvoted it. I have changed "expected value" to "expected utility" in this post, and updated the last paragraph of the comment of mine you linked to, as follows.

I reject risk aversion with respect to impartial welfare (although it makes perfect sense to be risk averse with respect to money), as I do not see why the value of additional welfare would decrease with welfare.

Hi Anthony,

I think completeness is self-evident because "the individual must express some preference or indifference". Reality forces them to do so. For example, if they donate to organisation A over B, at least implicitly, they imply donating to A is as good or better than donating to B. If they decide to keep the money for personal consumption, at least implicitly, they imply that is as good or better than donating.

I believe continuity is self-evident because rejecting it implies seemingly nonsensical decisions. For example, if one prefers 100 $ over 10 $, and this over 1 $, continuity says there is a probability p such that one is indifferent between 10 $ and a lottery involving a probability p of winning 1 $, and 1 - p of winning 100 $. One would prefer the lottery with p = 0 over 10 $, because then one would be certain to win 100 $. One would prefer 10 $ over the lottery with p = 1, because then one would be certain to win 1 $. If there were no tipping point between preferring the lottery or 10 $, one would have to be insensitive to an increased probability of an outcome better than 10 $ (100 $), and a decreased probability of an outcome worse than 10 $ (1 $), which I see as nonsensical.
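The tipping point in the example above can be computed directly for an expected-utility maximiser. A minimal sketch, assuming linear utility in money (my own simplifying assumption; any increasing utility function would yield some tipping point):

```python
def indifference_probability(worst, mid, best):
    """Probability p of the worst outcome (with 1 - p of the best) that
    makes the lottery's expected value equal the certain middle outcome.

    Solves mid = p * worst + (1 - p) * best for p.
    """
    return (best - mid) / (best - worst)

p = indifference_probability(worst=1, mid=10, best=100)
print(p)  # 90/99, about 0.909
# Sanity check: the lottery's expected value equals the certain 10 $.
assert abs(p * 1 + (1 - p) * 100 - 10) < 1e-9
```

For p below roughly 0.909 one prefers the lottery; above it, the certain 10 $.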

Thanks, Erich! I found your comment funny.

Thanks for the post, Russel! Relatedly, readers may be interested in A Case for Voluntary Abortion Reduction by Ariel Simnegar.

I think the best case for prioritising helping animals over humans is that the best animal welfare interventions are way more cost-effective than the best human welfare interventions. I estimate:

Hi Bob,

What is your best guess for the median welfare range of mosquitoes you would get applying the same methodology as you did for the species you analysed?

Great point, Dillon! I strongly upvoted it. I very much agree a 100 % chance of full automation by 2103 is too high. This reminds me of a few "experts" and "superforecasters" in the Existential Risk Persuasion Tournament (XPT) having predicted a probability of human extinction from 2023 to 2100 of exactly 0. "Null values" below refers to values of exactly 0.

In this case, people could be reporting an extinction risk of exactly 0 to represent a very low value. However, for the predictions about automation, it would be really strange if people replied 100 % to mean something like 90 %, so I assume they are just overconfident.
