Introduction
Suppose that at every point in time, we take the action given by:

$$a^* = \underset{a \in A}{\operatorname{argmax}} \ \mathbb{E}[U \mid a, W]$$

That is, we want to choose the action in the set of possible actions which maximizes ($\operatorname{argmax}$) the expected ($\mathbb{E}$) utility ($U$) in the world given that action ($a$) and given all our observations and models about the world ($W$).
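As a minimal sketch, this decision rule might look as follows in code; the actions, the toy world model, and the Monte Carlo estimate of $\mathbb{E}[U \mid a, W]$ are all illustrative stand-ins, not part of the model above:

```python
import random

def toy_world_model(action):
    """A made-up stochastic model W of how much utility an action yields."""
    base_utility = {"buy ice cream": 1.0, "invest": 0.8, "work": 0.9}
    return base_utility[action] + random.gauss(0, 0.1)  # noisy outcomes

def expected_utility(action, world_model, num_samples=1000):
    """Estimate E[U | a, W] by averaging sampled outcomes."""
    return sum(world_model(action) for _ in range(num_samples)) / num_samples

possible_actions = ["buy ice cream", "invest", "work"]  # the set A

# argmax over A of the estimated expected utility
best_action = max(possible_actions, key=lambda a: expected_utility(a, toy_world_model))
print(best_action)  # most often "buy ice cream"
```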
In the next sections, I will give a brief example, analyze each of the parts in some detail as they relate to altruism, flesh them out, and then point out where I think some EA organizations and I fall within this model.
I hesitated for a long while about posting this piece, because I thought that it might be perceived as too basic or unsophisticated, and because I'd been working on a related but much more complicated model. And indeed, the below model is basic. However, I've found that it does contribute to my clarity of thought, which I think is valuable.
A brief example
If your utility function is “eat as much ice cream as possible”, then at every point you’d want to choose the action $a$ among the set of possible actions $A$ available to you (buy ice cream, invest in the stock market, work to get more money, etc.) which leads to the most ice cream eaten by you, given all you know about the world.
The moving parts in our model, and what it means to optimize them, are:
- Our utility function $U$. In this case, this is “eat as many ice creams as possible”. Fine-tuning this utility function might involve better defining what an ice cream is, and why eating them is valuable to us.
- Concrete example: Maybe you reflect on the meaning of ice cream and decide that what you really care about is actually the feeling of contentment while eating ice cream in good company, maximizing sugar intake, or something else.
- The optimal action $a$, and the set of actions $A$ available to you. Fine-tuning them might involve gaining access to larger or better sets of actions.
- Concrete example: You make sure to have better grades in school so that your parents don’t ground you and limit your range of action.
- The expected value function $\mathbb{E}$. Fine-tuning this might involve becoming a better forecaster and tracking the past record of information sources.
- Concrete example: You hire a group of superforecasters to predict inflation; the higher the inflation, the less ice cream you will later be able to buy with your savings.
- Our knowledge of the world, $W$. Fine-tuning this might involve gaining more information about the world, and having it better organized.
- Concrete example: You become an expert on the ice cream supply chain, but you also get a subscription to The Economist to be informed about broad trends which may affect your plans.
- Our decision method, originally $\operatorname{argmax}$. Fine-tuning it might involve choosing a different decision method.
- Concrete example: Because of your moral uncertainty, you're inclined to quantilize (e.g., to choose an action randomly among the top 5% of actions by expected value) rather than to directly choose the action with the highest expected value; see the sketch after this list.
- Concrete example: You fine-tune your $\mathbb{E}[U]$ function to take into account not only the direct utility of actions, but also their value of information. For instance, if your city has 100 ice cream shops, the long-run optimal behavior is probably to visit all of them at some point and choose the best, rather than to always go to the one which was in-expectation best at the beginning (cf. multi-armed bandits).
- Concrete example: Because you can't actually evaluate the value of all actions and then choose the most valuable ones, you find yourself making some simplifications.
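Here is a minimal sketch of the quantilizing rule mentioned above, assuming some `expected_value` estimate is already available:

```python
import random

def quantilize(actions, expected_value, top_fraction=0.05):
    """Choose an action uniformly at random from the top `top_fraction`
    of actions ranked by expected value, instead of taking the argmax."""
    ranked = sorted(actions, key=expected_value, reverse=True)
    cutoff = max(1, int(len(ranked) * top_fraction))
    return random.choice(ranked[:cutoff])

# With 100 actions whose expected value equals their index,
# this returns one of the actions 95..99.
print(quantilize(list(range(100)), expected_value=lambda a: a))
```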
Building Blocks
The choice function (originally $\operatorname{argmax}$)
So you have something like a landscape of the expected value of actions, and you want to find and choose the highest point. Some ways in which you can improve your ability to do this:
- Having more computing power or intellectual manpower to sift through the landscape of expected values.
- Having better algorithms; better processes to calculate and choose the value of actions:
    - cheaper algorithms,
    - more accurate algorithms,
    - more scalable algorithms, and in particular, computations that can be reused by many people,
    - etc.
- Having better "parametrizations" of actions so that you can evaluate groups of actions all at once.`
- Having better fundamentals
Consider an organization like GiveWell. GiveWell could estimate the value of any charity. But doing so is costly, so it can't just evaluate every charity and choose the best ones. This leads to interesting exploration-exploitation tradeoffs, even if the evaluation of the expected value of any particular charity were perfect (!).
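As an illustration of such a tradeoff, here is a minimal sketch of the UCB1 bandit algorithm applied to noisy charity evaluations; the charities, their true values, and the noise model are invented for the example:

```python
import math
import random

def ucb1(charities, true_value, rounds=1000):
    """Balance exploring new charities against re-evaluating promising ones
    by always evaluating the charity with the highest optimistic estimate."""
    counts = {c: 0 for c in charities}
    means = {c: 0.0 for c in charities}
    for t in range(1, rounds + 1):
        if t <= len(charities):
            choice = charities[t - 1]  # evaluate each charity once to start
        else:
            choice = max(charities,
                         key=lambda c: means[c] + math.sqrt(2 * math.log(t) / counts[c]))
        evaluation = true_value[choice] + random.gauss(0, 1)  # noisy evaluation
        counts[choice] += 1
        means[choice] += (evaluation - means[choice]) / counts[choice]
    return max(means, key=means.get)

print(ucb1(["A", "B", "C"], {"A": 1.0, "B": 1.5, "C": 0.5}))  # usually "B"
```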
Here, better parametrizations are particularly helpful. By parametrizations, I mean something like dividing the space of actions $A$ into parts which can be considered in isolation. For example, GiveWell could divide charities into various cause areas, and evaluate swathes of causes (e.g., rare diseases) all at once.
Good parametrizations could lead to efficiency gains; worse parametrizations could lead to confused results. For example, one might feel aversion towards "politics" in general—thinking that it is generally toxic—and as a result discount "better voting mechanisms" as a cause. But perhaps a more fine-grained parametrization would have made a distinction between "ideological or party politics" and "all other politics", and realized that "better voting mechanisms" falls into the second bucket.
With regards to fundamentals, one would want to make sure that one is maximizing over the right thing. For example, one would want to make sure that one isn't, e.g., triple-counting impact; to avoid this, one might want to maximize over Shapley values instead of over counterfactual values. Similarly, one might want to take into account that one is maximizing over an estimate, and adjust for the optimizer's curse.
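To make the optimizer's curse concrete, here is a tiny simulation with made-up numbers: even though every option has the same true value, the best-looking option appears much better than it is:

```python
import random

true_value = 1.0   # every option is equally good
estimates = [true_value + random.gauss(0, 0.5) for _ in range(100)]

# The maximum of the noisy estimates is biased upward:
print(max(estimates))  # typically around 2.1-2.3, far above the true 1.0
```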
Many of these points could also belong in the next section, estimating the utility of actions, or the consequences of actions in general.
the expected ($\mathbb{E}$)
In general, to get better predictions (or more accurate expectations), one can either:
- Improve one's ability to predict the world, or
- make the world more predictable (e.g., by making it simpler, or by causing what one predicted to happen).
Various forecasting platforms (such as Metaculus, Hypermind, PredictIt, etc.) provide forecasting capabilities. Robust randomized trials can generate conclusions (and thus predictions) that span longer time periods, and scholarly works such as the regressions from Acemoglu and Robinson could provide conclusions that last many generations (though they are not immune to criticism [1].)
However, our current forecasting capabilities feel insufficient in general, particularly because they don't allow for cheap, reliable, longer-term predictions. Some open questions in the area are:
- How to create forecasts which influence real world decisions
- How to design collaborative scoring rules which work in practice
- How to scale prediction markets with real money
- How to identify capable forecasters
- To what extent have past long-term predictions proved accurate
- How to make forecasts cheaper
- ...
It also feels like there hasn't been much work in forecasting the value of individual actions, projects, or the promisingness of research directions, in such a way that forecasts could be action-guiding.
Note also that forecasts normally require some sort of evaluation or resolution at the end in order for forecasters to be rewarded. This means that as evaluation capabilities increase, so do forecasting capabilities, because anything that can be evaluated could be forecasted in advance.
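For instance, a proper scoring rule such as the Brier score is one standard way of rewarding forecasters at resolution time; a minimal sketch:

```python
def brier_score(forecast_probability, outcome):
    """Brier score for a binary event: lower is better.
    `outcome` is 1 if the event happened, 0 otherwise."""
    return (forecast_probability - outcome) ** 2

# A forecaster who said 90% and was right beats one who said 60%:
print(brier_score(0.9, 1))  # ~0.01
print(brier_score(0.6, 1))  # ~0.16
```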
utility ($U$)
Advances related to utility functions might be:
- Designing better specifications or proxies of utility.
- Discerning which agents are worthy of moral value, and to what extent.
- Determining whether infinite ethics is plausible, and how to deal with the problems it poses.
- Coming to terms with various seemingly tricky philosophical problems (e.g., the repugnant conclusion.)
- ...
throughout time
Consider that the utility of an action can be expressed as

$$U(a) = \sum_{t=0}^{\infty} r^t \cdot \Delta U_t(a)$$

where $\Delta U_t(a)$ corresponds to additional utility during year $t$, and $r$ is a discount factor, which could correspond to the probability of value drift, the probability of expropriation, the probability of existential risk, irrational bias, or intrinsically caring less about future people and events. Parts of that discount factor might be unavoidable (e.g., the unavoidable probability of a physically unlikely catastrophe, or the practically unavoidable risk of expropriation), but the rest could likely be reduced, which would increase the overall utility.
Once one considers a time dimension, coordination throughout time becomes an additional point of optimization.
Incidentally, note that because the expected value is additive:

$$\mathbb{E}[U(a)] = \sum_{t=0}^{\infty} r^t \cdot \mathbb{E}[\Delta U_t(a)]$$

which could be a useful decomposition in terms of forecasting, because forecasting systems could forecast the additional expected value of an action for each year, and said predictions could be evaluated year by year.
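As a toy worked example with made-up numbers: if an action produces a constant additional utility $u$ every year, the geometric series gives

$$\mathbb{E}[U] = \sum_{t=0}^{\infty} r^t \cdot u = \frac{u}{1-r},$$

so at $r = 0.98$ the action is worth $50u$, while reducing the avoidable part of the discount to reach $r = 0.99$ doubles its value to $100u$.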
of actions ($a$, $A$)
Various ways of improving the set of actions ($A$) available to oneself might be:
- To have larger sets of actions to choose from.
- To increase the number of actions which you can physically take.
- E.g.: This normally follows from the accumulation of resources, such as capital or prestige.
- Name: Pursuing instrumental goals.
- To increase the number of actions which you can conceive of taking.
- E.g.: Making "earning to give" or "working on AI-safety" or "create a charity" a thing people can conceive of doing.
- Name: Could be called "iconoclastic altruism," or "exploratory altruism"
- To increase the number of people to take actions, i.e., movement building.
- To have better sets of actions to choose from.
- E.g.: Being born rich, doing movement building in highly prestigious or affluent organizations, having an upper bound on the terribleness of your actions.
- To improve your ability to actually take optimal actions.
- E.g.: Having better mental health, better incentives or status gradients, and healthier communities with status dynamics that incentivize doing good.
- ...
¿taken by agents?
In the previous section, I added people kind of as an afterthought. We could make our model more elaborate by having

$$\underset{\vec{a} \in \vec{A}}{\operatorname{argmax}} \ \mathbb{E}[U \mid \vec{a}, W]$$

where $\vec{a}$ is now a vector of actions, with one index for each person (i.e., $a_i$ denotes an action which could be taken by the $i$-th person, and $A_i$ denotes the set of actions which the $i$-th person could take). Writing $\vec{a} = (a_1, \ldots, a_n)$ and $\vec{A} = A_1 \times \cdots \times A_n$, we could have:

$$\underset{(a_1, \ldots, a_n) \in A_1 \times \cdots \times A_n}{\operatorname{argmax}} \ \mathbb{E}[U \mid a_1, \ldots, a_n, W]$$
This would open new avenues of optimization:
- Make agents longer-lived.
- Make agents more productive
- Improve coordination between agents, now and throughout time (see the sketch after this list).
- Make agents more altruistic, so that in general $\vec{A}$ contains more altruistic actions.
- Get more agents.
- ...
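A minimal sketch of why coordination is a real optimization target (the actions and numbers are arbitrary): the joint action space grows exponentially in the number of agents, so exhaustively optimizing over it quickly becomes infeasible:

```python
from itertools import product

actions_per_agent = ["donate", "research", "organize"]  # each agent's A_i
num_agents = 4

# The joint action space A_1 x ... x A_n has |A|**n elements:
joint_actions = list(product(actions_per_agent, repeat=num_agents))
print(len(joint_actions))  # 3**4 = 81; with 100 agents it would be 3**100
```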
But perhaps not all actions are carried out by human agents. For example, large bureaucracies, ideologies or nations could be modeled as having their own sets of actions at their disposal. This could be further modeled, and relates to the "improving institutional decision making" cause.
given your knowledge of the world ($W$)
Previously, I was considering forecasting as the art of maximizing accuracy while holding information about the world constant. But one can also improve one's grasp of the state of the world, and have more information with which to make better forecasts.
One particularly useful type of knowledge about the world is a good categorization scheme or parametrization, which allows you to group different things together and evaluate their characteristics at the same time, and thus more easily optimize over a set of options.
Where EA organizations fall in this scheme
There isn't a clear mapping between EA organizations and the parts of this scheme, but overall:
1. Taking object-level optimal actions: Individual EAs, Good Ventures, object-level EA organizations like the Against Malaria Foundation, Wave, etc.
2. Estimating the expected value of actions: GiveWell, 80,000 Hours, Animal Charity Evaluators, Open Philanthropy, SoGive, EA Funds, etc.
3. Attaining clarity about one's values: Global Priorities Institute, Forethought Foundation, Rethink Priorities, Happier Lives Institute, etc.
4. Fine-tuning agents:
    - More agents: EA local groups, etc.
    - More coordinated agents: CEA (??)
    - More altruistic agents: Founders Pledge, Raising for Effective Giving, Giving What We Can.
    - More rational agents: CFAR, ClearerThinking.
5. Improving models of the world: Our World in Data, Metaculus, Open Philanthropy, Rethink Priorities, J-PAL, IDinsight, etc.
Each of these points then has various meta-levels. Or, in other words, these can be stacked. For example, one can try to [estimate the expected value] of [more agents] (e.g., the expected value of an additional Giving What We Can pledge), or one can [recruit more agents] in order [to have better models] about [expected value estimates] about [object-level actions] (e.g., by running a forecasting tournament about Open Philanthropy grants.)
I see QURI as mostly working on the meta-levels of 2 and 5. And I see myself as working on 2, 3 and 5, and maximally away from 4.
Conclusion
Intuitively, the EA community would want to invest in all of these "building blocks", because each of them probably has diminishing returns. For instance, as one gains influence over more and more rational agents, clarity about one's utility function becomes more valuable in comparison. [2]
[1]: Despite criticisms, I do think that there is some core to those studies. For instance, the results of The Persistent Effects of Peru's Mining "Mita" seem relatively robust: the paper looks at extractive institutions which for bureaucratic reasons changed discretely at a geographic boundary: "on one side, all communities sent the same percentage of their population, while on the other side, all communities were exempt."
[2]: It also seems to me that considering the optimal distribution of talent and resources among these building blocks is probably more important than considering which has the highest marginal value at any given moment.
In theory, both approaches should be equivalent—always directing resources to the block with the highest marginal value should lead to the optimal allocation, in which all marginal values are equal.
But in practice, I imagine that coordination is difficult and noisy, and external shocks mean that knowing which block has the highest marginal value provides less information than one might think.
This is good stuff!
re: footnote 1
The paper The Standard Errors of Persistence, which you cite as a criticism, says the following about the robustness of the Peruvian study:
What do you think of that? In general, it seems that your justification for relative robustness doesn't engage with the critiques at all. My understanding of their major point is that spatial autocorrelations of residuals are unaccounted for and might make noise look significant. The simpler example of a common spurious relationship was, AFAIK, first described in Spurious regressions in econometrics (see this decent-looking blogpost for relevant intuitions).
Note that per Table A1...A3, the authors replace the explanatory variable with noise in every study except in the Mita study, for which they only make their point for the dependent variable. Also, the Mita study isn't present in Figure 8. Not sure why that is.
So I sort of understand this point, but not enough to understand if the construction of the noise makes sense.
In any case, yeah, it looks like it was less robust than I thought.