Davidmanheim

Comments

  • I feel pretty uncertain about this sort of modeling in general. It feels very sensitive to assumptions and inputs. If it were really hard to get the model to put any significant probability on TAI this century, I’d take that as an update (similarly with the model making TAI soon look very very likely). But for most middling values I’m not personally inclined to base too much on them.


Yes - this needs to be said again, and again, and again. And then people need to consider how valuable arguing about the details of these models really is. 

And yes, I think it's incredibly valuable for people to have done this thinking in public, but the difference between a 25% and a 75% probability of AGI in a decade is a tiny rounding error for this type of modeling compared to the uncertainties and approximations - and the fact that we're talking about a loose proxy for an upper bound anyway!

...coming back to this discussion 6 months later, having had nothing to do with any of this except as an observer, I'm incredibly happy with their recent work. Given that, I think that in retrospect, their work basically fully justifies the grant. (To be clear: a failure does not refute a claim of value based on hits-based giving; at best it functions as weak evidence - but success does strongly justify the claim.)

  1. Life is rare.

  2. Things are very far apart in space.

  3. The universe is pretty young compared to evolutionary timelines, which seem to require third-generation stars for the right mix of elements.

I would prefer a pause on more capable LLMs, in part to give us time to figure out how to align these systems. As I argued, I think mathematical approaches are potentially critical there. But yes, general intelligences could help - I just don't expect them to be differentially valuable for mathematical safety over capabilities, so if they are capable of this type of work, it's a net loss.

My only caveat is that lots of work that is supposed to "help" reduce existential AI risk is net-negative, due to accelerating capabilities, creating race dynamics, enabling dangerous misuse, etc. But that seems much less likely to be a risk for the type of work described in the post.

Yeah, I'm a fan of joint and several liability for LLMs that includes any party which was not explicitly stopping a use of the LLM for something that causes harm, for exactly this reason.

To start, all of this is conceptually tricky, and you should talk to an economist or policy analyst with expertise in cost-benefit analysis (CBA) if this ends up being a crux.

That said, in cost-benefit/effectiveness analysis, the perspective EA usually takes is to consider public benefit per EA dollar spent, whereas policy analysts typically consider total social benefit per total dollar spent, which is slightly different. Either way, if government forces businesses to spend money, that's a cost to society, and if government spending replaces private spending, it's not a benefit. (EA funders should consider this as well - forcing others to spend money isn't a benefit.) Overall, the total-public-benefit perspective makes sense, because the costs and benefits to people and companies need to be compared to the costs to the government or to the EA funder.

In the case you're discussing, the cost to government is negative, so the benefit-cost ratio is negative - which happens sometimes[1]. If we spend $1m to get the government to save $5m and create a benefit of $5m, the total social cost is $1m - $5m = -$4m against $5m in benefits, for a total social cost-benefit ratio of -0.8. It's good to note that, and then we also probably want to do the cost-benefit analysis from the perspective of EA spending, so it can be compared to other interventions; from that perspective, we would say it cost $1m to generate $10m in total public benefit.
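To make the two perspectives concrete, here's a minimal sketch using the hypothetical numbers above (the variable names are my own, not from any standard CBA tool):

```python
# Hypothetical numbers from the transplant example above, in $m.
ea_spending = 1.0      # what the EA funder spends
govt_savings = 5.0     # avoided government spending (a negative cost)
direct_benefit = 5.0   # direct public benefit created

# Total-social perspective: all costs (including negative ones) vs. all benefits.
total_cost = ea_spending - govt_savings           # 1 - 5 = -4
print(total_cost / direct_benefit)                # -4 / 5 = -0.8

# EA-spending perspective: total public benefit per EA dollar.
public_benefit = govt_savings + direct_benefit    # 5 + 5 = 10
print(public_benefit / ea_spending)               # 10.0, i.e. $1m buys $10m
```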

A key caveat for actual analysis, however, is that if we shift government spending, it might not save anything! For instance, if the government's healthcare budget is set by Congress and not affected by costs, then "saving" $5m in transplant costs actually just shifts that spending somewhere else - which has benefits, but they're hard to estimate, so we often just assume the benefit from marginal spending is a wash and call it savings. This is hard to justify, but doing anything else is often infeasible.

On the other hand, if the savings are real, there is another small but often important caveat: we often want to account for the overhead and deadweight costs of taxation. It's plausible[2] that every dollar the government spends costs the economy $1.20. That is, if the government only got $1 in benefit for doing something, it would be a net loss for the country, because the taxes damage the economy more than the spending benefits it. If that is the case, saving $5m in government spending might actually be worth $6m - so it cost the EA funder $1m to generate $11m in benefit. But if you do this, be really careful to note exactly what you're doing - otherwise, it can be very misleading.
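Continuing the sketch above, the adjustment just scales the avoided government spending, assuming the (unsourced, per footnote 2) 20% figure:

```python
# Numbers repeated from the example above, in $m; the 20% deadweight
# figure is the unsourced assumption discussed in footnote 2.
ea_spending = 1.0
govt_savings = 5.0
direct_benefit = 5.0
deadweight_factor = 1.2  # each government dollar costs the economy $1.20

# Avoided government spending is worth more than face value once
# deadweight loss is counted: $5m saved is worth $6m to the economy.
adjusted_savings = govt_savings * deadweight_factor   # 5 * 1.2 = 6
total_benefit = adjusted_savings + direct_benefit     # 6 + 5 = 11
print(total_benefit / ea_spending)                    # 11.0: $1m -> $11m
```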

  1. ^

    There are two ways this happens, and you need to know which it is - it's either really good, or really bad. If cost is negative but benefit is positive, as in the transplant case, it's great. If cost is positive but benefit is negative, it's very bad. If both are negative, it's easily compared to ordinary positive cases: "spending $X less causes $Y in harm" is equivalent to saying that we're currently spending $X for a benefit of $Y (the avoided harm), and considering whether we should stop.

  2. ^

    The 20% figure isn't sourced anywhere I've seen it used, but I have seen it cited more than once as the typical value for the deadweight loss of taxation.

It seems like in the full post you didn't reference or cite this RAND report, which is directly relevant: https://www.rand.org/pubs/perspectives/PE296.html

Value of Information

Here's my brief intro post about it:
https://forum.effectivealtruism.org/posts/8w2hNT5WtDMzoaGuy/when-to-find-more-information-a-short-explanation
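
As a toy illustration of the underlying calculation (my numbers are hypothetical, not from the linked post): the expected value of perfect information is the expected payoff if you could decide after learning the truth, minus the expected payoff of the best decision you can make now.

```python
# Toy value-of-information calculation (hypothetical numbers).
# Two actions, two possible states of the world.
p_state = {"works": 0.4, "fails": 0.6}       # prior over states
payoff = {                                   # payoff[action][state]
    "fund":      {"works": 100.0, "fails": -20.0},
    "dont_fund": {"works": 0.0,   "fails": 0.0},
}

# Best expected payoff deciding now, with no further information.
ev_now = max(
    sum(p_state[s] * payoff[a][s] for s in p_state) for a in payoff
)  # fund: 0.4*100 + 0.6*(-20) = 28; don't fund: 0 -> 28

# Expected payoff if we could learn the true state before deciding.
ev_perfect = sum(
    p_state[s] * max(payoff[a][s] for a in payoff) for s in p_state
)  # 0.4*100 + 0.6*0 = 40

# EVPI: an upper bound on what any investigation of this question is worth.
print(ev_perfect - ev_now)  # 12.0
```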

And for more on the debates about second-order probabilities and confidence intervals - and why Pearl says you don't need them, you should just use a Bayesian network - see his paper here: https://core.ac.uk/download/pdf/82281071.pdf
