I am open to work. I see myself as a generalist quantitative researcher.
You can give me feedback here (anonymous or not).
You are welcome to answer any of the following:
Feel free to check my posts, and see if we can collaborate to contribute to a better world. I am open to part-time volunteering and paid work. For paid work, I typically ask for 20 $/h, which is roughly 2 times the global real GDP per capita per hour worked.
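For a rough sense of the arithmetic behind that rate (using round numbers I am assuming here, namely a global real GDP per capita of about 20 k$/year, close to the purchasing-power-parity figure, and 2,000 working hours per year):

$$2 \times \frac{20{,}000\ \$/\text{year}}{2{,}000\ \text{h/year}} = 20\ \$/\text{h}.$$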
Thanks, Nuño. I strongly endorse maximising expected welfare, but I very much agree with using heuristics. At the same time, I would like to see more cost-effectiveness analyses.
Thanks for the comment, Karthik! I strongly upvoted it. I have changed "expected value" to "expected utility" in this post, and updated the last paragraph of the comment of mine you linked to to the following:
I reject risk aversion with respect to impartial welfare (although it makes perfect sense to be risk averse with respect to money), as I do not see why the value of additional welfare would decrease with welfare.
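In standard notation (purely illustrative, with $w$ denoting total impartial welfare, and $u$ the function maximised in expectation), risk aversion corresponds to a strictly concave $u$, whereas I take additional welfare to be equally valuable at any level of welfare:

$$u''(w) < 0 \ \text{(risk aversion)} \qquad \text{vs.} \qquad u(w) = w \ \text{(risk neutrality)}.$$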
Hi Anthony,
I think completeness is self-evident because "the individual must express some preference or indifference": reality forces them to do so. For example, if they donate to organisation A over B, they at least implicitly imply that donating to A is at least as good as donating to B. If they decide to keep the money for personal consumption, they at least implicitly imply that this is at least as good as donating.
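Formally (standard notation, where $\succsim$ means "is at least as good as"), completeness says that, for any 2 options $A$ and $B$:

$$A \succsim B \quad \text{or} \quad B \succsim A.$$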
I believe continuity is self-evident because rejecting it implies seemingly nonsensical decisions. For example, if one prefers 100 $ over 10 $, and this over 1 $, continuity says there is a probability p such that one is indifferent between 10 $ and a lottery involving a probability p of winning 1 $, and 1 - p of winning 100 $. One would prefer the lottery with p = 0 over 10 $, because then one would be certain to win 100 $. One would prefer 10 $ over the lottery with p = 1, because then one would be certain to win 1 $. If there were no tipping point between preferring the lottery and preferring 10 $, one would have to be insensitive to an increased probability of an outcome better than 10 $ (100 $), and a decreased probability of an outcome worse than 10 $ (1 $), which I see as nonsensical.
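Formally, continuity says that, for options $A \succsim B \succsim C$, there is a probability $q \in [0, 1]$ such that $B \sim q\,A + (1 - q)\,C$. In the example above, taking utility to be linear in money purely for illustration (the axiom itself does not require risk neutrality with respect to money), the tipping point p (the probability of winning 1 $) would be:

$$10 = p \times 1 + (1 - p) \times 100 \Leftrightarrow p = \frac{90}{99} \approx 0.91.$$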
Thanks for the post, Russel! Relatedly, readers may be interested in A Case for Voluntary Abortion Reduction by Ariel Simnegar.
I think the best case for prioritising helping animals over humans is that the best animal welfare interventions are way more cost-effective than the best human welfare interventions. I estimate:
Great point, Dillon! I strongly upvoted it. I very much agree a 100 % chance of full automation by 2103 is too high. This reminds me of a few "experts" and "superforecasters" in the Existential Risk Persuasion Tournament (XPT) having predicted a probability of human extinction from 2023 to 2100 of exactly 0. "Null values" below refers to values of exactly 0.
In that case, people could have reported an extinction risk of exactly 0 to represent a very low value. However, for the predictions about automation, it would be really strange if people replied 100 % to mean something like 90 %, so I assume they are just overconfident.
Thanks, Michael.
In practice, I think the effects of one's actions decay to practically 0 after 100 years or so. In principle, I am open to one's actions having effects which are arbitrarily large, but not infinite, and continuity does not rule out arbitrarily large effects.
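As a purely illustrative model (the 10 year half-life is an assumption for the sake of the example, not an estimate of mine), if the effects of an action halved every 10 years, after 100 years only around 0.1 % of the initial effect would remain:

$$0.5^{100/10} = 2^{-10} \approx 0.1\ \%.$$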
Reality forces us to compare outcomes, at least implicitly.
I just do not see how adding the same possibility to each of 2 lotteries can change my assessment of them.
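For reference, this is the independence axiom. In standard notation, for any lotteries $A$, $B$ and $C$, and probability $p \in (0, 1]$:

$$A \succsim B \Leftrightarrow p\,A + (1 - p)\,C \succsim p\,B + (1 - p)\,C.$$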