Hey! I am Mart; I learned about EA a few years back through LessWrong. Currently, I am pursuing a PhD in the theory of quantum technologies and learning more about doing good better in the EA Ulm local group and the EA Math and Physics professional group.
This is not enough to claim that Christianity as a whole holds this position, but there certainly exist sentiments in this direction, such as
Revelation 3:15–16
I know your works: you are neither cold nor hot. Would that you were either cold or hot! So, because you are lukewarm, and neither hot nor cold, I will spit you out of my mouth.
(Holy Bible, English Standard Version)
I really like the description, but would like to add that infinities in the "size" of the universe could also refer to time: there might be an infinite future which we could possibly influence even if the spatial size of the universe is finite. I don't think that anyone expects this to be true with anything approaching certainty (given entropy, it seems likely that there is no way to sustain life/agents indefinitely), but it does not seem ruled out to me that there could be relevant loopholes, such as cosmic expansion ensuring that entropy can grow indefinitely, or other unusual scenarios (like the possibility of creating a pocket universe by triggering a new big bang).
Would one only use 'direct steps' in decision-making? How is "path dependency" interpreted?
I'm not sure what you are referring to here. I would flag that the relative value type specification is very narrow - it just states how valuable things are, not the "path of impact" or anything like that.
After talking to GPT about this[1], I think that my concern is actually already covered by your
If people were doing it by hand, there could be contradictory properties, as you mention. But with programming, which we likely want anyway, it's often trivial or straightforward to make consistent tables.
and could be addressed to a large degree with a few automated checks and a user interface (one could even auto-fill the table given the first line of entries by assuming ~maximum resulting uncertainty for the unknown correlations). I feel like this could be really helpful for reflecting on one's values if done right, or overwhelming if done wrong.
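To make the idea concrete, here is a minimal sketch of such an automated check in Python (the interval arithmetic and all names are my own illustration, not anything from the web app; the numbers are taken from GPT's example in the footnote):

```python
# Minimal sketch of an automated consistency check for a relative value table.
# Intervals are 90% credibility intervals on value ratios. Treating them with
# plain interval arithmetic corresponds to assuming ~maximum uncertainty about
# the unknown correlations, so the implied bounds are deliberately loose.

def implied_interval(a_over_c, b_over_c):
    """Loosest interval for a/b consistent with intervals for a/c and b/c."""
    (a_lo, a_hi), (b_lo, b_hi) = a_over_c, b_over_c
    return (a_lo / b_hi, a_hi / b_lo)

def is_consistent(stated, implied):
    """A stated interval that extends beyond the implied bounds gets flagged."""
    return stated[0] >= implied[0] and stated[1] <= implied[1]

# Numbers from GPT's example: chocolate/vanilla in [0.8, 1.2],
# banana/vanilla in [1.0, 1.5].
choc_over_ban = implied_interval((0.8, 1.2), (1.0, 1.5))
print(choc_over_ban)                             # ~(0.53, 1.2)
print(is_consistent((0.2, 5.0), choc_over_ban))  # False -> revisit the estimates
```

An auto-fill along the lines I describe above would just run `implied_interval` over every pair that lacks a direct entry.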
[GPT's answer includes a summary, so I'll leave out my prompt:] I see your point now. You're referring to a potential inconsistency in the triangle inequality context for relative values. Let's use a concrete example to explore this. Suppose the relative value of 'chocolate' to 'vanilla' is given by a 90% credibility interval from 0.8 to 1.2, and 'banana' to 'vanilla' is from 1.0 to 1.5. This would imply that, based on your preferences for chocolate and banana ice cream relative to vanilla, 'banana' should have a relative value compared to 'chocolate' that lies within a certain range. If, however, you then provide a wide 90% credibility interval for 'chocolate' to 'banana' (e.g., 0.2 to 5), it may seem inconsistent with the narrower ranges implied by the comparisons with 'vanilla'. In this case, it may be that you need to revisit your estimates for 'chocolate' to 'vanilla' and 'banana' to 'vanilla'. If you feel a lot of uncertainty about 'chocolate' vs. 'banana', perhaps you also should feel more uncertainty about these options compared to 'vanilla' than you initially thought. You may have overestimated your confidence in these comparisons. Alternatively, if you're confident in your estimates of 'chocolate' to 'vanilla' and 'banana' to 'vanilla', you may want to narrow down your 90% credibility interval for 'chocolate' vs. 'banana'. In any case, it's a good point to keep in mind when building a relative value table. You want to be consistent in your uncertainty estimates across different comparisons. If there seems to be a contradiction, it's a sign that you may need to rethink some of your estimates. ↩︎
Ooh, that makes sense. Thanks!
So my idea of 'strict relative values' turns out to be an illusory edge case if we use distributions and not numbers, and in practice we'll usually be in the 'generalized case' anyway.
I fear my not-grokking of the implications remains. But at least I no longer mistakenly think I fully understood the concept.
It is probably not worth the effort for you to teach me all about the approach, but I'll still summarize some of my remaining questions. Possibly my confusions will be shared by others who try to understand/apply relative value functions in the future.
As you write, this is not really well-defined, as one would need the correlations to combine the distributions perfectly. But there should still be some bounds one could get on the outcome distribution. ↩︎
For example, it might totally happen that I feel comfortable with giving precise monetary values to some things I enjoy, but feel much less certain if I try to compare them directly. ↩︎
I think you might not quite yet grok the main benefits of relative values
Thanks for your reply, you are probably right. Let me share my second attempt at understanding relative values after going through the web app.
'strict' relative values
If I did not overlook some part of the code, the tables created in the web app are fully compatible with having a single unit.
If this is the intent of how relative values are meant to be used, my impression of their advantages is:
This version of relative values (let's call it "strictly coherent relative values according to Mart's understanding v2", or "strict relative values" for short) feels quite intuitive to me and also seems quite similar to how GiveWell's current cost-effectiveness analyses are done (except that they do not create a value table with all-to-all translations, and there are no/fewer distributions[1]).
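To illustrate what I mean by "strict", here is a toy sketch of my own (the per-item values are made up, and this is not taken from the web app's code):

```python
# 'Strict' relative values: every entry of the all-to-all table is derived
# from a single per-item value in one common unit, so one column of the
# table determines all the rest. The values below are made up.

unit_values = {"chocolate": 1.0, "vanilla": 0.9, "banana": 1.2}

table = {(a, b): va / vb
         for a, va in unit_values.items()
         for b, vb in unit_values.items()}

# Strictness forces reciprocity: table[(a, b)] * table[(b, a)] == 1.
assert abs(table[("chocolate", "banana")] * table[("banana", "chocolate")] - 1) < 1e-9
```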
Your link to the usage of relative values in Finance seems to me to be compatible with this definition of relative values.
Beyond 'strict' relative values
But, from reading your OP (and the recommended section of the video), my impression is that relative values are intended to be used to describe situations more general than my "strict relative values".
Your
and also David Johnston's comments seem to refer to a much more general case.
For this more general version my 'strictness' equation would typically not be valid. Translated into David's notation, the 'strictness' equation would be $V_{ij} = \frac{u_i}{u_j}$, where $u_i$ is the reference value of item $i$, and $V_{ij}$ are the relative values comparing $i$ and $j$.
David's
Note that, under this interpretation, we should not expect $V_{ij} = 1/V_{ji}$. This is because items have different values in different contexts.
is clearly not compatible with 'strictness' [2].
In such a generalized case, I think that the philosophical status of what the entries mean is much more complicated. I do not have a grasp on what the added degrees of freedom do and why it is good to have them. In my last comment, I kind of assumed that any deviation from strictness would be "irrational inconsistency" by definition. But maybe I am just missing the relevant background and this really does capture something important?
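For what it's worth, here is my own back-of-the-envelope count of those added degrees of freedom: a strict table over $n$ items is pinned down by the $n-1$ independent ratios $u_i/u_n$ (a common rescaling of all $u_i$ cancels out), while a general table has up to $n(n-1)$ independent off-diagonal entries $V_{ij}$ once the reciprocity constraint $V_{ij} = 1/V_{ji}$ is dropped. For 10 items, that is 9 numbers versus 90.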
This impression is based on the 2023 spreadsheet; it might well be mistaken. ↩︎
Proof: Insert $(i,j)$ and $(j,i)$ into the 'strictness' equation and see that the results, $V_{ij} = u_i/u_j$ and $V_{ji} = u_j/u_i$, are the reciprocals of each other. ↩︎
I am just coming from a *What We Owe the Future* reading group - thanks for reminding me of the gap between my moral intuitions and total utilitarianism!
One reason why I am not convinced by your argument is that I am not sure that the additional lives lived due to the unintended pregnancies are globally net-positive:
The number of 100 pregnancies averted does not correspond to 100 fewer children being born in the end; a significant part of the pregnancies would only be shifted in time. I would be surprised if the true number were larger than 10, and I expect it to be lower than this. My reasoning here is that the total number of children each set of parents is going to have will hardly be reduced by 100x through access to contraception. If this number started at 10 children and is reduced to a single child, we have a reduction that corresponds to 10 fewer births per death averted. And stated like this, even the number 10 seems quite high. (Sorry, there were a few confusions in this argument.)

This being said, the main reason why I am emotionally unconvinced by the argument you give is probably that I am on some level unable to contemplate "failing to have children" as something that is morally bad. My intuitions have somewhat caught up with the arguments that giving happy lives the opportunity to exist is a great thing, but they do not agree with the sign-flipped case for now. Probably, part of this is that I do not trust myself (or others) to actually reason clearly on this topic, and it just feels like "do not go there" emotionally.
It also does not seem obvious that we are above that number, especially when trying to include topics like wild animal suffering. At least I feel confident that the human population isn't off from the optimum by a huge factor. ↩︎