
aaguirre

219 karma

Bio

Professor of Physics at UCSC, and co-founder of the Future of Life Institute, Metaculus, and the Foundational Questions Institute

Comments (21)

I'm not sure about this, but there is a possibility that this sort of model would violate US online gambling laws. (These laws, along with those against unregulated trading of securities, are the primary obstacles to prediction markets in the US.) IIRC, you can get into trouble with these rules if there is a payout on the outcome of a single event, which seems like it would be the case here. There's definitely a gray area, but before setting up such a thing one would want to get some legal clarity.

I'd note that Metaculus is not a prediction market and there are no assets to "tie up." Tachyons are not a currency you earn by betting. Nonetheless, as with any prediction system, there are a number of incentives skewing one way or another. But for a question like this I'd say it's a pretty good aggregator of the views of people who think about such issues and have an excellent forecasting track record: there's heavy overlap between the Metaculus and EA communities, and most of the top forecasters are pretty aware of the arguments.

Great, thanks! Just PM me (anthony@futureoflife.org) and I'll put you in touch once the project is underway.

Probably some of both; the toolkit we can make available to all, but the capacity to advise will obviously be limited by available personnel.

Totally agree here that what's interesting is the ways in which things turn out well due to agency rather than luck. Of course if things turn out well, it's likely to be in part due to luck — but as you say that's less useful to focus on. We'll think about whether it's worth tweaking the rules a bit to emphasize this.

Even if you don't speak for FLI, I (at least somewhat) do, and agree with most of what you say here — thanks for taking the time and effort to say it!

I'll also add that — again — we envisage this contest as just step 1 in a bigger program, which will include other sets of constraints.

There's obviously a lot I disagree with here, but at bottom I simply don't think economically transformative AI necessarily entails a singularity or catastrophe within 5 years in any plausible world: there are lots of imaginable scenarios compatible with the ground rules set for this exercise, and I think assigning accurate probabilities amongst them and relative to others is very, very difficult.

Speaking as one partly responsible for that conjunction, I'd say the aim here was to target a scenario that is interesting (AGI) but not too interesting. (It's called a singularity for a reason!) It's arguably a bit conservative in terms of AGI's transformative power, but rapid takeoff is not guaranteed (Metaculus currently gives ~20% probability to >60 months), nor is superintelligence axiomatically the same as a singularity. It is also in a conservative spirit of "varying one thing at a time" (rather than a claim of maximal probability) that we kept much of the rest of the world relatively similar to how it is now.

Part of our goal is to use this contest as a springboard for exploring a wider variety of scenarios and "ground assumptions" and there I think we can try some out that are more radically transformative.

Thanks Hayden!

FLI is also quite funding-constrained, particularly on technical-adjacent policy research work, where in my opinion there is going to be a lot of important research and a dearth of resources to do it. For example, the charge to NIST to develop an AI risk-assessment framework, just passed in the US NDAA, is likely to be critical to get right. FLI will be working hard to connect technical researchers with this effort, but is very resource-constrained.

I generally consider the idea that AI safety (including research) is not funding-constrained to be incorrect and potentially dangerous, but that's a bigger topic for discussion.

You can see the recording here. It was a great ceremony!
