This is a linkpost for https://www.lesswrong.com/posts/yGrL388z4WHKeerN2/fair-collective-efficient-altruism
[Crosspost from LessWrong forum]
In this post, I propose to explore a novel methodology by which a collective of decision makers (simply called "we" here) such as the current generation of humanity could make an altruistic collective decision in the following type of situation:
- The decision is about a number of paths of action (called "options" here) that have potentially vast consequences in the future and could generate quite different populations, such as deciding to colonise space.
- There might be moral uncertainty about the moral status of some of the beings that might be affected by the decision, such as other species.
- These beings (called "potential stakeholders" here) might be quite different from humans, so that trying to estimate their subjective wellbeing and compute WELLBYs seems too speculative to base the decision on. It might also not be justified to assume that these beings are rational, have complete preferences, or are even expected utility maximizers whose preferences can be encoded in a von Neumann–Morgenstern utility function.
Epistemic status: Highly speculative but based on years of theoretical research into collective decision making.
The proposed methodology is based on the following ideas and rationale:
- Even though a quantitative estimation of subjective wellbeing might not be possible, the members of the collective (called the "deciders" here) might be able to estimate what an affected being "would have preferred us to do" via an "empathy exercise" similar to the one János (John) Harsányi assumed is possible in order to perform interpersonal comparisons of preferences (but without assuming von Neumann–Morgenstern utility functions).
- Since such empathetic preference estimations (EPEs) are still bound to be uncertain and somewhat speculative:
- The number of different EPEs needed should be kept as low as possible.
- They should mostly be about what the being's favourite action would be since for this one only needs to imagine what the being would do in our situation. The necessary EPEs should not be about pairs of lotteries of options (as would be required to estimate interpersonally comparable cardinal preferences such as WELLBYs), but they might have to be about pairs of options or about a comparison between a single option and a certain lottery of options. If the latter is necessary, the number of different lotteries used in the EPEs should be as low as possible.
- The EPEs should not be performed by a few decision makers or experts only, but as independently as possible by as many deciders as possible; the resulting estimates should then be aggregated in a suitable way, as a form of efficient epistemic democracy.
- Since the empathetic preference estimations of different deciders will likely vary widely in precision (plausibly in a way that correlates with the deciders' own confidence in their estimations), and since they are likely not independent across deciders, we use the deciders' own estimates of the standard error and the level of independence of their estimations as weights in the aggregation.
- The EPE aggregation should allow for different deciders having diverging value systems regarding moral status and moral weight of beings, so that this part of the moral uncertainty is taken care of in the aggregation.
- The aggregated EPEs can then be used to simulate a hypothetical collective decision made by all potential stakeholders.
- This hypothetical collective decision should be as fair as possible, trying not to sacrifice one stakeholder's preferences for the sake of others'.
- To achieve this fairness, we use a device somewhat similar to Vickrey and Harsanyi's original position or veil of ignorance: we imagine performing a lottery in which a randomly selected stakeholder makes the decision, similar to the so-called "Random Dictator" rule studied in Social Choice Theory. Let's call this hypothetical lottery the benchmark lottery.
- Using the benchmark lottery directly to make our decision would be perfectly fair ex ante, so it could be considered a form of justifiable social contract. But it would not be fair ex post since it could lead to vast inequality, and it would not be efficient since it would ignore any potential for compromise. This is why we do not use it directly to make the decision, but rather use it as a fair reference point that allows us to perform a very mild form of interpersonal comparison of preferences.
- To also achieve a high level of fairness ex post, however, we do not use the reference point to normalize cardinal preferences (which would seem a natural idea in other contexts where cardinal preferences can be assumed and estimated better), but use it as a cutoff point for hypothetical approval: we assume that a potential stakeholder would approve any option that they prefer to the benchmark lottery, and then simulate a hypothetical approval vote by all potential stakeholders.
In other words: We choose that option which we believe the largest percentage of potential stakeholders would prefer to having a random potential stakeholder decide alone.
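To state this rule compactly (the symbols here are only a shorthand, anticipating the procedure below): writing $B$ for the benchmark lottery, $w_s$ for the moral weight assigned to potential stakeholder $s$, and $o \succ_s B$ for "$s$ would prefer option $o$ over $B$", the proposal amounts to choosing

$$o^* = \arg\max_o \frac{\sum_s w_s\,\mathbf{1}[\,o \succ_s B\,]}{\sum_s w_s},$$

i.e., the option with the largest morally weighted share of hypothetical approvals (estimated, in practice, via the aggregation described below).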
The actual procedure I propose for this is the following:
- Stage 1: Collective epistemics
- For each option:
- Collectively assemble the set of possible futures that might result from this option
- Collectively estimate the possible probability distributions on that set, taking into account all kinds of uncertainties and ambiguities
- Stage 2: Collective decision
- Step 2.1. Estimating a fair reference point
- Each decider does the following, according to their own value system:
- Identify the set of potential stakeholders, i.e., all morally relevant beings that would exist at some point in time in at least one possible future of at least one of the options.
- Assign a moral weight to each potential stakeholder.
- For each option $o$, estimate which percentage of the so-weighted potential stakeholder population would want us to have chosen that option $o$. Call this percentage $q_o$. To do this, perform the following "empathy exercise" for each potential stakeholder $s$:
- Imagine they have the same information you have (as collected in stage 1).
- Imagine which option they would want us to have chosen.
- Estimate the standard error of these percentage estimates.
- Estimate the degree of independence (from 0 to 1) of these percentage estimates from other deciders' percentage estimates.
- Aggregate all deciders' individual percentage estimates into collective estimates $\bar q_o$ by taking their average, weighted by estimated precision and independence (a numerical sketch of this aggregation is given after the procedure).
- Consider the benchmark lottery, denoted $B$, that consists in choosing option $o$ with a probability of $\bar q_o$ percent.
- Step 2.2. Estimating potential stakeholders' approval
- Each decider does the following, according to their own value system:
- For each option $o$, estimate which percentage of the weighted stakeholder population would rather want us to have chosen $o$ than to have applied the benchmark lottery $B$. To do this, use a similar empathy exercise as above. Call this estimate $a_o$.
- As before, estimate the standard error of your percentage estimates and their degree of independence (from 0 to 1) from other deciders' percentage estimates, but this time for each option separately.
- Aggregate all deciders' individual percentage estimates into an estimated stakeholder approval score $A_o$ for each option $o$.
- Finally, find the option $o$ with the largest estimated stakeholder approval score $A_o$ and implement it.
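As an illustration of Steps 2.1 and 2.2, here is a minimal numerical sketch in Python. The concrete weighting scheme (independence divided by squared standard error) and all numbers are merely illustrative assumptions of mine; the procedure above only requires that precision and independence enter as weights in some suitable way.

```python
import numpy as np

# Toy data: 3 deciders, 2 options; all numbers are made up for illustration.
# q[i, o]  = decider i's estimate (in %) of the morally weighted share of
#            stakeholders whose favourite option is o        (Step 2.1)
# a[i, o]  = decider i's estimate (in %) of the share preferring option o
#            to the benchmark lottery B                      (Step 2.2)
# se[i, o] = decider i's own standard-error estimate
# d[i, o]  = decider i's own independence estimate in [0, 1]
q  = np.array([[60.0, 40.0], [55.0, 45.0], [70.0, 30.0]])
a  = np.array([[65.0, 50.0], [60.0, 55.0], [75.0, 40.0]])
se = np.array([[ 5.0,  5.0], [10.0, 10.0], [20.0, 20.0]])
d  = np.array([[ 1.0,  1.0], [ 0.8,  0.8], [ 0.5,  0.5]])

# One possible choice of weights (an assumption): independence / variance.
w = d / se**2

# Step 2.1: aggregated percentages define the benchmark lottery B, which
# would pick option o with probability q_bar[o] percent.
q_bar = (w * q).sum(axis=0) / w.sum(axis=0)

# Step 2.2: aggregated stakeholder approval scores A[o].
A = (w * a).sum(axis=0) / w.sum(axis=0)

# Final step: implement the option with the largest approval score.
best_option = int(np.argmax(A))
print("benchmark lottery probabilities (%):", np.round(q_bar, 1))
print("approval scores:", np.round(A, 1), "-> implement option", best_option)
```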
Some possible variants:
- If we are more confident about estimating potential stakeholders' preferences, we can replace the binary approval by a cardinal measure of preference satisfaction $r_s(o)$:
- Let $p$ estimate the probability at which $s$ would be indifferent between $o$ and the lottery that selects $s$'s favourite option with probability $p$ and performs $B$ with probability $1-p$. If such a $p$ exists, put $r_s(o)=p$. If not, then $s$ would prefer $B$ to $o$, so then estimate the probability $p$ at which $s$ would be indifferent between $B$ and the lottery that selects $s$'s favourite option with probability $p$ and selects $o$ with probability $1-p$, and then put $r_s(o)=-p$.
- Let $R(o)$ be the weighted average of $r_s(o)$ using the moral weights assigned to all $s$, and proceed as above (a small numerical sketch of $r_s(o)$ and $R(o)$ follows after this list).
- If we are even more confident about estimating preferences, we could extend the choice set from the set of individual options to the set of all lotteries $\ell$ of options, estimate $R(\ell)$ for all $\ell$, find that $\ell$ with the largest $R(\ell)$, and use it to draw the actually implemented option. Considering that potential stakeholders might not be risk-neutral w.r.t. their preference satisfaction (i.e., might not be expected utility maximizers), this highest-scoring lottery will likely be a proper lottery rather than a single option.
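A minimal sketch of the first variant, assuming the indifference probabilities have already been estimated via the empathy exercise; the function name, data layout, and numbers below are purely illustrative, not part of the proposal.

```python
def preference_satisfaction(p_indiff: float, prefers_option_to_benchmark: bool) -> float:
    """Cardinal preference-satisfaction score r_s(o) for one stakeholder s and one option o.

    p_indiff is the indifference probability p from the empathy exercise:
      - if s prefers o to the benchmark lottery B: the p at which s is indifferent
        between o and the lottery (favourite option with prob. p, B with prob. 1 - p);
      - otherwise: the p at which s is indifferent between B and the lottery
        (favourite option with prob. p, o with prob. 1 - p).
    """
    return p_indiff if prefers_option_to_benchmark else -p_indiff


# Toy example for a single option o: two stakeholders with moral weights.
stakeholders = [
    {"weight": 1.0, "p_indiff": 0.4, "prefers_o": True},   # r_s(o) = +0.4
    {"weight": 0.5, "p_indiff": 0.2, "prefers_o": False},  # r_s(o) = -0.2
]

# R(o): moral-weight-weighted average of r_s(o) over the stakeholders.
R_o = sum(
    s["weight"] * preference_satisfaction(s["p_indiff"], s["prefers_o"])
    for s in stakeholders
) / sum(s["weight"] for s in stakeholders)

print(f"weighted preference satisfaction R(o) = {R_o:.3f}")  # 0.200
```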