
Mart_Korz

69 karma · Pursuing a doctoral degree (e.g. PhD)

Bio


Hey! I am Mart, I learned about EA a few years back through LessWrong. Currently, I am pursuing a PhD in the theory of quantum technologies and learning more about doing good better in the EA Ulm local group and the EA Math and Physics professional group.

Posts
9


Comments
24

I am just coming from a What We Owe the Future reading group - thanks for reminding me of the gap between my moral intuitions and total utilitarianism!

One reason why I am not convinced by your argument is that I am not sure that the additional lives lived due to the unintended pregnancies are globally net-positive:

  • on the one hand, it does seem quite likely that their lives will be subjectively worth living (the majority of people agree with this statement and it does not seem to me that these lives would be too different) and that they would have net-positive relationships in the future.
  • but on the other hand, given a level of human technology, there is some finite number of people on earth which is optimal from a total utility standpoint. And given the current state of biodiversity loss, soil erosion and global warming, it does not seem obvious that humanity is below that number[1].
  • as a third point, given that these are unintended pregnancies, it does seem likely that there are resource limitations which would lead to hardships if a person is born. We would need to know a lot about the life situation and social support structures of the potential parents if we wanted to estimate how significant this effect is, but it could easily be non-trivial.
  • edited to add and remove: 100 pregnancies averted do not correspond to 100 fewer children being born in the end. A significant part of the pregnancies would only be shifted in time. I would be surprised if the true number were larger than 10 and expect it to be lower than this. My reasoning here is that the total number of children each set of parents is going to have will hardly be reduced by 100x through access to contraception. If this number started at 10 children and is reduced to a single child, we get a reduction that corresponds to 10 fewer births per death averted. And stated like this, even the number 10 seems quite high (sorry, there were a few confusions in this argument). The toy calculation after this list illustrates the basic point.
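
To make the time-shifting point concrete, here is a toy calculation with made-up numbers (not taken from the post or from my comment above):

    # Hypothetical numbers, purely for illustration.
    pregnancies_averted_counted = 6   # pregnancies averted as a program would count them
    lifetime_births_without = 5       # assumed completed family size without the program
    lifetime_births_with = 3          # assumed completed family size with the program

    births_actually_averted = lifetime_births_without - lifetime_births_with
    print(births_actually_averted / pregnancies_averted_counted)
    # -> 0.33..., i.e. far fewer births averted than pregnancies averted,
    #    because some averted pregnancies are merely shifted in time.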

This being said, the main reason why I am emotionally unconvinced by the argument you give is probably that I am on some level unable to contemplate "failing to have children" as something that is morally bad. My intuitions have somewhat caught up with the arguments that giving happy lives the opportunity to exist is a great thing, but they do not agree with the sign-flipped case for now. Probably, a part of this is that I do not trust myself (or others) to actually reason clearly on this topic and this just feels like "do not go there" emotionally.


  1. It also does not seem obvious that we are above that number, especially when trying to include topics like wild animal suffering. At least I feel confident that the human population isn't off from the optimum by a huge factor. ↩︎

This is a good point, although I would argue that the reasons why practicing religion has these advantages are unrelated to it being a case of Pascal's wager (if we let Pascal's wager stand for promises of infinite value in general).

This is not enough to claim that Christianity as a whole holds this position, but there certainly exist sentiments in this direction such as

Revelation 3:15-16

I know your works: you are neither cold nor hot. Would that you were either cold or hot! So, because you are lukewarm, and neither hot nor cold, I will spit you out of my mouth.
(Holy Bible, English Standard Version)

I really like the description, but would like to add that infinities in the "size" of the universe could also refer to time: it might be that there is an infinite future which we could possibly influence even if the size of the universe is finite. I don't think that anyone expects this to be true with anything approaching certainty (due to entropy, it seems likely that there is no possibility to sustain life/agents indefinitely), but it does not seem ruled out to me that there could be relevant loopholes, like cosmic expansion ensuring that entropy can just grow indefinitely, or other unusual scenarios (like the possibility to create a pocket universe by triggering a big bang).

Would one only use 'direct steps' in decision-making? How is "path dependency" interpreted?

I'm not sure what you are referring to here. I would flag that the relative value type specification is very narrow - it just states how valuable things are, not the "path of impact" or anything like that.

After talking to GPT about this[1], I think that my concern is actually already covered by your

If people were doing it by hand, there could be contradictory properties, as you mention. But with programming, which we likely want anyway, it's often trivial or straightforward to make consistent tables.

and could be addressed to a large degree with a few automated checks and a user interface (one could even auto-fill the table given the first line of entries by assuming ~maximum resulting uncertainty for the unknown correlations). I feel like this could be really helpful for reflecting on one's values if done right, or overwhelming if done wrong. A sketch of such a check is at the end of this comment.


  1. [GPTs answer includes a summary, so I'll leave out my prompt:] I see your point now. You're referring to a potential inconsistency in the triangle inequality context for relative values. Let's use a concrete example to explore this. Suppose the relative value of 'chocolate' to 'vanilla' is given by a 90% credibility interval from 0.8 to 1.2, and 'banana' to 'vanilla' is from 1.0 to 1.5. This would imply that, based on your preferences for chocolate and banana ice cream relative to vanilla, 'banana' should have a relative value compared to 'chocolate' that lies within a certain range. If, however, you then provide a wide 90% credibility interval for 'chocolate' to 'banana' (e.g., 0.2 to 5), it may seem inconsistent with the narrower ranges implied by the comparisons with 'vanilla'. In this case, it may be that you need to revisit your estimates for 'chocolate' to 'vanilla' and 'banana' to 'vanilla'. If you feel a lot of uncertainty about 'chocolate' vs. 'banana', perhaps you also should feel more uncertainty about these options compared to 'vanilla' than you initially thought. You may have overestimated your confidence in these comparisons. Alternatively, if you're confident in your estimates of 'chocolate' to 'vanilla' and 'banana' to 'vanilla', you may want to narrow down your 90% credibility interval for 'chocolate' vs. 'banana'. In any case, it's a good point to keep in mind when building a relative value table. You want to be consistent in your uncertainty estimates across different comparisons. If there seems to be a contradiction, it's a sign that you may need to rethink some of your estimates. ↩︎
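
To illustrate, here is a minimal sketch (reusing the hypothetical chocolate/vanilla/banana intervals from the footnote and naively ignoring correlations) of the kind of automated consistency check I have in mind:

    # 90% credibility intervals for relative values (hypothetical numbers from the footnote).
    choc_vs_van = (0.8, 1.2)   # chocolate relative to vanilla
    ban_vs_van = (1.0, 1.5)    # banana relative to vanilla
    choc_vs_ban = (0.2, 5.0)   # chocolate relative to banana, as stated directly

    # Interval for chocolate/banana implied by the two comparisons with vanilla
    # (ignoring correlations, so this is only a rough bound).
    implied_low = choc_vs_van[0] / ban_vs_van[1]    # 0.8 / 1.5 ≈ 0.53
    implied_high = choc_vs_van[1] / ban_vs_van[0]   # 1.2 / 1.0 = 1.2

    overlaps = choc_vs_ban[0] <= implied_high and implied_low <= choc_vs_ban[1]
    print(implied_low, implied_high, overlaps)
    # A direct interval that is much wider than the implied one (as here) would be
    # flagged as a hint to revisit one's estimates.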

Thanks! I'll reply in separate comments

Is the meaning of each entry "How many times more value is there in item1 than in item2? (Provide a distribution)"?

Yep, that's basically it.

Okay, so maybe relative values are a more straightforward concept than I thought/feared :)

Ooh, that makes sense. Thanks!

So my idea of 'strict relative values' turns out to be an illusory edge case if we use distributions and not numbers, and in practice we'll usually be in the 'generalized case' anyway.

I fear my not-grokking of the implications remains. But at least I no longer mistakenly think that I fully understood the concept.

It is probably not worth the effort for you to teach me all about the approach, but I'll still summarize some of my remaining questions. Possibly my confusions will be shared by others who try to understand/apply relative value functions in the future.

  • If someone hands me a table with distributions drawn on it, what exactly do I learn? What decisions do I make based on the table?
    • Is the meaning of each entry "How many times more value is there in item1 than in item2? (Provide a distribution)"?
  • Would one only use 'direct steps' in decision-making? How is "path dependency" interpreted?
    • usually, the direct comparison $R_{A,C}$ will just give a more precise distribution than the distribution one would get from combining $R_{A,B}$ and $R_{B,C}$[1]. But it could also turn out that the indirect path creates a more narrow, or an interestingly different, distribution[2] (a small numerical sketch follows at the end of this comment).
  • what is the necessary knowledge for people who want to use relative value functions? Can I do worse by using relative values naively than I would by using a single unit?

  1. As you write, this is not really well-defined as one would need correlations to combine the distributions perfectly. But there should still be some bounds one could get on the outcome distribution. ↩︎

  2. For example, it might totally happen that I feel comfortable with giving precise monetary values to some things I enjoy, but feel much less certain if I try to compare them directly. ↩︎
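
To make the two footnotes a bit more concrete, here is a minimal Monte Carlo sketch (with made-up lognormal uncertainties and correlations ignored) of why the combined indirect path typically comes out wider than its individual steps:

    import math
    import random
    import statistics

    def lognormal_samples(median, log_sigma, n=100_000):
        """Samples from a lognormal distribution with the given median and log-space sigma."""
        return [median * math.exp(random.gauss(0, log_sigma)) for _ in range(n)]

    # Hypothetical relative values: A vs B and B vs C, each known to log-sigma 0.3.
    r_ab = lognormal_samples(2.0, 0.3)
    r_bc = lognormal_samples(1.5, 0.3)

    # Indirect estimate of A vs C, naively assuming independence (no correlations).
    r_ac_indirect = [x * y for x, y in zip(r_ab, r_bc)]

    print(statistics.stdev([math.log(v) for v in r_ac_indirect]))
    # ≈ sqrt(0.3**2 + 0.3**2) ≈ 0.42, i.e. wider in log-space than either input step.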

I think you might not quite yet grok the main benefits of relative values

Thanks for your reply, you are probably right. Let me share my second attempt at understanding relative values after going through the web app.

'strict' relative values

If I did not overlook some part of the code, the tables created in the web app are fully compatible with having a single unit.

  • For every single table, one could use a single line of the table to generate the rest of the table. Knowing $R_{A,X}$ for all $X$ (with $R_{X,Y}$ denoting how many times more value there is in $X$ than in $Y$), we can use $R_{X,Y} = R_{A,Y} / R_{A,X}$ to construct arbitrary entries (see the sketch after this list).
  • Between the different tables, one would need to add a single translation factor which one could then use to merge the tables to a big single table.
  • Without such a translation factor, the tables would remain disconnected (there could be a single unit for all tables, but it is not specified). Still, the tables could be used to make meaningful decisions inside the scope of each table.
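
A minimal sketch (with hypothetical point values instead of distributions) of how a single row determines the whole table in this 'strict' case:

    # Hypothetical reference row: row_A[X] = R_{A,X} = "how many times more value in A than in X".
    row_A = {"A": 1.0, "B": 0.5, "C": 2.0}

    def relative_value(x, y):
        """R_{X,Y} = R_{A,Y} / R_{A,X}: how many times more value there is in x than in y."""
        return row_A[y] / row_A[x]

    print(relative_value("B", "C"))  # 2.0 / 0.5 = 4.0, i.e. B is worth four times as much as C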

If this is the intent of how relative values are meant to be used, my impression of their advantages is:

  • they are, in principle, compatible with a single value/utility function. One does not need to change one's philosophy at all when switching over from using a single unit for measuring value.
  • they allow for a more natural thought process when exploring the value of interventions
    • one can use crisply defined units at each step of one's research: "Person in city x of income y gets $1" can be distinguished from "Person in city x of income y gets $5" as necessary.
    • throughout the process, one will tend to work 'bottom-up' or 'top-down'; that is, for bottom-up, one starts out with very specific value measures and expands their connections (via relative values / translation factors) to more and more abstract/general values (such as maybe WELLBYs)
    • If one feels that there is an unbridgeable gap between two currently non-connected groups of values, one can keep them as separate value tables and decide to add the connection some time in the future
      • thanks to using distributions, one can also decide to add a connection and use a very high uncertainty instead.

This version of relative values (let's call it "strictly coherent relative values according to Mart's understanding v2" or "strict relative values" for short) feels quite intuitive to me and also seems quite similar to how GiveWell's current cost-effectiveness analyses are done (except that they do not create a value table with all-to-all translations, and they use no/fewer distributions[1]).

Your link to the usage of relative values in Finance seems to me to be compatible with this definition of relative values.

Beyond 'strict' relative values

But, from reading your OP (and the recommended section of the video), my impression is that relative values are intended to be used to describe situations more general than my "strict relative values".

Your post itself, and also David Johnston's comments, seem to refer to a much more general case.

For this more general version my 'strictness' equation would typically not be valid. Translated into David's notation, the 'strictness' equation would be $R_{ij} = V_i / V_j$, where $V_i$ is the reference value of item $i$ and $R_{ij}$ is the relative value comparing $i$ and $j$.

David's

Note that, under this interpretation, we should not expect $R_{ij} = 1/R_{ji}$ unless $i=j$. This is because items have different values in different contexts.

is clearly not compatible with 'strictness' [2].

In such a generalized case, I think that the philosophical status of what entries mean is much more complicated. I do not have a grasp on what the added degrees of freedom do and why it is good to have them. In my last comment, I kind of assumed that any deviation from strictness would be "irrational inconsistency" by definition. But maybe I am just missing the relevant background and this really does capture something important?


  1. This impression is based on the 2023 spreadsheet. It might well be mistaken. ↩︎

  2. Proof: Insert $(i,j)$ and $(j,i)$ into the 'strictness' equation and see that the results are the reciprocals of each other. ↩︎
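
Spelled out in the notation above, the footnote's argument is

$$R_{ij} = \frac{V_i}{V_j}, \qquad R_{ji} = \frac{V_j}{V_i} \quad\Longrightarrow\quad R_{ij}\,R_{ji} = 1 \text{ for all } i, j,$$

which is exactly the reciprocity that the generalized interpretation does not require.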

Audio matters

Are there by any chance plans to collect the audio in a podcast feed?
