The Future of Life Institute (FLI) invites individuals and teams to compete for a prize purse worth $100,000+ by designing visions of a plausible, aspirational future that includes artificial general intelligence.
This post gives an overview of the contest and our reasons for running it. For full details on how to enter, visit worldbuild.ai.
What is Worldbuilding?
Worldbuilding is the art and science of constructing a coherent and relatively detailed fictitious world. It is frequently practised by creative writers and scriptwriters, providing the context and backdrop for stories that take place in future, fantasy or alternative realities.
Overview of the Worldbuilding Contest
This contest challenges entrants to use worldbuilding to explore possible futures for our own world.
Worldbuilding in this context is not prediction, so builds need not reflect the most probable scenarios, but they must be a) plausible, b) aspirational and c) consistent with a set of ground rules:
- The year is 2045.
- AGI has existed for at least 5 years.
- Technology is advancing rapidly and AI is transforming the world sector by sector.
- The US, the EU and China have managed a steady, if uneasy, power equilibrium.
- India, Africa and South America are quickly on the rise as major players.
- Despite ongoing challenges, there have been no major wars or other global catastrophes.
- The world is not dystopian and the future is looking bright.
[Edit]: The ground rules are a set of assumptions designed to constrain the build in a way that shifts participants' focus to figuring out how exactly we might avoid power upsets, wars, and catastrophes as AGI arrives, since these are precisely the challenges we will face in the near future.
For definitions of plausible and aspirational, scroll to "Your Mission" here.
How to Enter
Applications comprise four parts:
- A timeline from 2022 to 2045. For each year, you must specify at least two events that occurred (e.g. “X invented”) and provide one data point (e.g. “GDP rises by 25%”). Participants are encouraged to fill in all the data points on the timeline, but entries will still be accepted and judged (though at a disadvantage) if fewer than 23 years are provided. Submissions will be disqualified if fewer than 10 years are provided.
- Two “day in the life” short stories of between 750 and 1000 words. These should recount a day in the life of an individual somewhere in the world in 2045. The stories can focus on the same individual or two different individuals.
- Answers to the following prompts. Each answer must be fewer than 250 words.
- AGI has existed for years but the world is not dystopian and humans are still alive. Given the risks of very high-powered AI systems, how has your world ensured that AGI has remained safe and controlled, at least so far?
- The dynamics of an AI-populated world may depend a lot on how AI capability is distributed. In your world, is there one AI system that is substantially more powerful than all others, or a few such systems? Or are there many top-tier AI systems of comparable capability? Or something else?
- How has your world avoided major AI/AGI arms races and wars?
- In the US, the EU, and China, how and where is national decision-making power held, and how has the advent of advanced AI changed that, if at all? (max 500 words)
- Is the global distribution of wealth, as measured by national or international Gini coefficients, more or less unequal than 2022’s, and by how much? How did it get that way?
- What is a major problem that AI has solved in your world and how did it do so?
- What is a new social institution that has played an important role in the development of your world?
- What is a new non-AI technology that has played an important role in the development of your world?
- What changes to the way countries govern the development and/or deployment and/or use of emerging technologies (including AI), if any, played an important role in the development of your world?
- Pick a sector of your choice (education, transport, energy, communication, finance, healthcare, tourism, aerospace, materials etc.) and describe how that sector was transformed with AI in your world.
- What is the life expectancy of the most wealthy 1% and of the least wealthy 20% of your world? How and why has this changed since 2022?
- In the US, considering the human rights enumerated in the UN declaration, which rights are better respected and which rights are worse respected in your world than in 2022? Why? How?
- In a second country of your choice, which rights are better and which rights are worse respected in your world than in 2022? Why? How?
- What’s been a notable trend in the way that people are finding fulfilment?
- One original non-text media piece, e.g. a piece of art, video, music, etc., that brings your built world to life through vivid visual and/or auditory storytelling. The piece must have been created after the launch of the contest (01/01/2022) and videos / pieces of music must be no longer than 5 minutes in length.
These four parts should cohere with one another, with (e.g.) the short stories referencing or explaining some of the institutions or technologies introduced in answers to the prompts.
The deadline to enter is 15 April 2022. Finalists will be announced on 15 May 2022 and the general public will be invited to give feedback. The winning builds will be announced on 15 June 2022.
Why FLI is pursuing this project
FLI is frequently pegged as an "existential risk organisation" (or something to that effect) but reducing expected large-scale (catastrophic, extinction, dystopic) risks from transformative technologies represents only half of the organisation's mission. We also aim to promote the development and use of these technologies to benefit all life on Earth. The worldbuilding contest is in large part inspired by this second part of the mission.
We have four main goals for this project:
- Encourage people to start thinking about the future in more positive terms.
- Receive inspiration for our real-world policy efforts and future projects to run / fund.
- Identify potential collaborators from outside of our existing network.
- Update our messaging strategy.
1. Thinking positively about the future
To be able to steer these technologies' trajectories in a positive direction, we need to know what we're aiming for. To know what future we would most like, we must first imagine the kinds of futures we could plausibly have.
Unfortunately, not nearly enough effort goes into imagining what a good future might look like. Mainstream media tends to focus on the dystopias we could end up in. This contest seeks to change that by encouraging entrants, the FLI team and others to start thinking more optimistically.
We are still debating various options for scaling the competition's impact such that it can meaningfully influence perceptions and attitudes towards the future on a larger scale. Options include coordinating with screenwriters and filmmakers to produce fiction based on the winning builds and/or publishing some of the short stories in significant media forums / outlets.
2. Receive inspiration for our policy efforts and other projects
The contest requires entrants to submit relatively detailed roadmaps that span from the present day to (a desirable) 2045. We're hoping to receive some inspiration for our real-world policy efforts from these roadmaps, e.g. answers to questions like "how has your world avoided major AI/AGI arms races and wars?" and "how has your world ensured that AGI has remained safe and controlled?" may point towards some interesting policies that we could investigate and then possibly advocate for. There is precedent for this - several ongoing FLI policy initiatives were born at a previous worldbuilding event, the 2019 Augmented Intelligence Summit.
Similarly, and more broadly, these roadmaps may provide ideas for new projects / initiatives that FLI could run or fund.
3. Identify potential collaborators
We're always looking to discover and work with new and diverse talent from both within and beyond the EA and extreme risk communities. We're excited to connect with technical and policy-oriented individuals whose great ideas we might not have otherwise come across. We're also keen to uncover creatives as they're currently under-represented in our network and we're increasingly excited about the power of storytelling for science and risk communication.
4. Improve our messaging
Risk communication is hard. There are all sorts of traps. For instance, if we paint too vivid a picture of the risks and thereby boost imaginability – as dystopic films do – we risk triggering an "all-or-nothing" mentality where people are sensitive to the possibility rather than the probability of bad outcomes, and so may develop an opposition to the technology or application in question.
A classic risk communication strategy is to pair negative messages with solution-oriented and positive messages. Importantly, a solution-oriented message is not sufficient to counterbalance the (undesirable) psychological impact of a negative message; saying something to the effect of "don't worry, there's so much we can do to limit the probability of a bad outcome" doesn't provide the listener with a good reason for tolerating any level of risk in the first place.
But it's difficult to articulate positive messages about the future, artificial intelligence, transformative technologies, etc. because not enough effort goes into thinking about what a good future with (e.g.) artificial general intelligence could look like. What could it actually consist of? We're hoping the winning world builds will provide us some ideas, and that we can incorporate these into our messaging.
Other Important Details (including prizes)
The prizes are:
- First prize: $20,000
- 2x second prizes: $10,000 each
- 5x third prizes: $2,000 each
- 10x fourth prizes: $1,000 each
- Judges' discretionary prizes: Up to 5 prizes of up to $2,000 each.
If you'd like to attend a worldbuilding workshop or you need help finding team members, scroll to the bottom of the homepage.
All questions not answered on the FAQ page should be directed to worldbuild@futureoflife.org.
*Post updated on 21 January 2022. Edit indicated in the body of the text.
The conjunction
would be quite surprising to me, since I strongly expect superintelligence within a couple years after AGI, and I strongly expect a technological singularity at that time. So I do not believe that a story consistent with the rules can be plausible. (I also expect more unipolarity by 5 years after AGI, but even multipolar scenarios don't give us a future as prosaic as the rules require.)
I also feel like this assumption kind of moves this from "oh, interesting exercise" to "hmm, the set of ground rules feel kind of actively inconsistent, I guess I am not super excited about stories set in this world, since I expect it to kind of actively communicate wrong assumptions". Though I do generally like using fiction to explore things like this.
Yeah, it seems strange to be forced to adopt a scenario where the development of AGI doesn't create some kind of surprising upset in terms of power.
I suppose a contest that included a singularity might seem too far out for most people. And maybe this is the best we can do in terms of persuading people to engage with these ideas. (There's definitely a risk that people over-update on these kinds of scenarios, but it's not obvious that this will be a huge problem.)
If you're confident in very fast takeoff, I agree this seems problematic.
But otherwise, given the ambiguity about what "AGI" is, I think you can choose to consider "AGI" to be the AI technology that existed, say, 7 years before the technological singularity (and I personally expect that AI technology to be very powerful), so that you are writing about society 2 years before the singularity.
Even without a singularity, no unexpected power upsets seems a bit implausible.
(Disagree if by implausible you mean < 5%, but I don't want to get into it here.)
Even a slow takeoff! If there is recursive self-improvement at work at all, on any scale, you wouldn't see anything like this. You'd see moderate-to-major disruptions in geopolitics, and many or all technology sectors being revolutionized simultaneously.
This scenario is "no takeoff at all" - advancement happening only at the speed of economic growth.
Sorry for the late reply.
You seem to have an unusual definition of slow takeoff. If I take on the definition in this post (probably the most influential post by a proponent of slow / continuous takeoff), there's supposed to be an 8-year doubling before a 2-year doubling. An 8-year doubling corresponds to an average of 9% growth each year (roughly double the current amount). Let's say that we actually reach the 9% growth halfway through that doubling; then there are 4 years before the first 2-year doubling even starts. If you define AGI to be the AI technology that's around at 9% growth (which, let's recall, is doubling the growth rate, so it's quite powerful), then there are > 6 years left until the singularity (4 years from the rest of the 8-year doubling, 2 years from the first 2-year doubling, which in turn happens before the start of the first 0.5 year doubling, which in turn is before the singularity).
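To make the arithmetic concrete, here is a minimal sketch in Python. It simply restates the assumptions in the paragraph above (an 8-year doubling followed by a 2-year doubling, with "AGI" dated to roughly the midpoint of the 8-year doubling); the function name and the printout are illustrative, not taken from any source:

```python
# Minimal sketch of the doubling-time arithmetic above. Assumptions (from the
# slow-takeoff definition referenced in this comment): world GDP completes an
# 8-year doubling, then a 2-year doubling, then a 0.5-year doubling, and "AGI"
# is the AI technology present when growth first reaches the 8-year-doubling
# rate, roughly halfway through that doubling.

def annual_growth_for_doubling(doubling_years: float) -> float:
    """Annual growth rate implied by a given GDP doubling time."""
    return 2 ** (1 / doubling_years) - 1

eight_year_rate = annual_growth_for_doubling(8)  # ~0.09, i.e. roughly 9% per year
two_year_rate = annual_growth_for_doubling(2)    # ~0.41, i.e. roughly 41% per year

years_left_in_8yr_doubling = 8 / 2  # AGI appears halfway through the 8-year doubling
years_for_2yr_doubling = 2          # the 2-year doubling still has to complete
lower_bound = years_left_in_8yr_doubling + years_for_2yr_doubling  # before the 0.5-year doubling starts

print(f"8-year doubling -> ~{eight_year_rate:.1%} annual growth")
print(f"2-year doubling -> ~{two_year_rate:.1%} annual growth")
print(f"Years from 'AGI' until the 0.5-year doubling begins: more than {lower_bound:.0f}")
```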
Presumably you just think slow takeoff of this form is completely implausible, but I'd summarize that as either "Czynski is very confident in fast / discontinuous takeoff" or "Czynski uses definitions that are different from the ones other people are using".
Again, that would produce moderate-to-major disruptions in geopolitics. The first doubling with any recursive self-improvement at work being eight years is, also, pretty implausible, because RSI implies more discontinuity than that, but that doesn't matter here, as even that scenario would cause massive disruption.
Speaking as one partly responsible for that conjunction, I'd say the aim here was to target a scenario that is interesting (AGI) but not too interesting. (It's called a singularity for a reason!) It's arguably a bit conservative in terms of AGI's transformative power, but rapid takeoff is not guaranteed (Metaculus currently gives ~20% probability to >60 months), nor is superintelligence axiomatically the same as a singularity. It is also in a conservative spirit of "varying one thing at a time" (rather than a claim of maximal probability) that we kept much of the rest of the world relatively similar to how it is now.
Part of our goal is to use this contest as a springboard for exploring a wider variety of scenarios and "ground assumptions" and there I think we can try some out that are more radically transformative.
I'd expect the bets there to be basically random. Prediction markets aren't useful for predictions about far out events: Betting in them requires tying up your credit for that long, which is a big opportunity cost, so you should expect that only fools are betting here. I'd also expect it to be biased towards the fools who don't expect AGI to be transformative, because the fools who do expect AGI to be transformative have even fewer incentives to bet: There's not going to be any use for metaculus points after a singularity: They become meaningless, past performance stops working as a predictor of future performance, the world will change too much, and so will the predictors.
If a singularity-expecter wants tachyons, they're really going to want to get them before this closes. If they don't sincerely want tachyons, if they're driven by something else, then their answers wouldn't be improved by the incentives of a prediction market.
I'd note that Metaculus is not a prediction market and there are no assets to "tie up." Tachyons are not a currency you earn by betting. Nonetheless, as with any prediction system there are a number of incentives skewing one way or another. But for a question like this I'd say it's a pretty good aggregator of what people who think about such issues (and have an excellent forecasting track record) think — there's heavy overlap between the Metaculus and EA communities, and most of the top forecasters are pretty aware of the arguments.
I checked again and, yeah, that's right, sorry about the misunderstanding.
I think the root of my confusion on this is that most of my thinking about prediction platform designs is situated in the genre of designs where users can create questions without oversight, and in this genre I'm hoping to find something highly General and Robust. These sorts of designs always seem to collapse into being prediction markets.
So it comes as a surprise to me that just removing user-generated questions seems to prevent that collapse[1], and the thing it becomes instead turns out to be pretty Robust. I just did not expect that.
[1] (If you had something like Metaculus and you added arbitrary user-generated questions (I think that would allow unlimited point farming, but, that aside), that would enable trading points as assets, as phony questions with user-controlled resolution criteria could be made just for transferring points between a pair of users, with equal, opposite transfers of currency out of band.)
Correction: Metaculus's currency is just called "points"; tachyons are something else. Aside from that, I have double-checked, and
it definitely is a play-money prediction market (well, is it wrong to call it a prediction market if it's not structured as an exchange, even if it has the same mechanics?) (Edit: I was missing the fact that, though there are assets, they are not staked when you make a prediction), and you do in fact earn points by winning bets. I'm concerned that the bettors here may be the types who have spent most of their points on questions that won't close for decades. Metaculus has existed for less than one decade, so that demographic, if it's a thing, actually wouldn't have any track record.
Isn't "Technology is advancing rapidly and AI is transforming the world sector by sector" perfectly consistent with a singularity? Perhaps it would be a rather large understatement, but still basically true.
Not really (but the quote is consistent with no singularity; see Rohin's comment). I expect technological progress will be very slow soon after a singularity, because science is essentially solved and almost all technology is discovered during or immediately after the singularity. Additionally, the suggestion that there's an 'international power equilibrium', and more generally that the world is recognizable--e.g., with a prosaic global political power balance, with AI merely 'solving problems' and 'reshaping the economy'--rather than totally transformed, is not what I expect years after a singularity.
This sounds like a nice addition for those who want it, but I would be surprised if requiring it turned out to be worthwhile.
Yeah I was considering writing something up if I had free time but probably wouldn't be able to fulfill this requirement at decent quality within a reasonable timeframe.
I was thinking that if they insist on requiring it (and I get around actually participating), I'll just iterate on some prompts on wombo.art or similar until I get something decent.
Same. I'm fairly confident in my writing skills but lack any talent in the other areas and would find doing so embarrassing.
Yeah, it seems to require quite distinct skills. That said, they seem to be encouraging collaboration.
My intuition is that this is quite relevant if the goal is to appeal to a wider audience. Not everyone, or even most people, is drawn in by purely written fiction.
I think it is quite relevant.
However, if FLI are publishing a group of these possible worlds, maybe they want to consider outsourcing the media piece to someone else. It:
a) links/brands the stories together (like how in a book of related short stories, a single illustrator is likely used)
b) makes it easier for lone EAs to contribute using the skill that is more common in the community.
Oh yes, I agree. I think that'd be a wonderful addition and would lower the barrier to entry!
I'm totally on board with the constraint that the future be good, that it be broadly appealing rather than just good-according-to-our-esoteric-morality.
What I'm worried about is that this contest will end up being a tool for self-deception (like Czynski said) because the goodness of the future correlates with important variables we can't really influence, like takeoff speeds and difficulty of the alignment problem and probability of warning shots and many other things. So in order to describe a good future, people will fiddle with the knobs of those important variables so that they are on their conducive-to-good settings rather than their most probable settings.
Thus the distribution of stories we get at the end will be unrealistic, not just in that it'll be more optimistic/good than is likely (that part is fine, that's by design) but in a variety of other ways as well, ways that we can't change. So then insofar as we use these scenarios as targets to aim towards, we will fail because of the underlying unchangeable variables that have been set to their optimistic settings in these scenarios.
Analogy: Let's say it's January 2020 and we are trying to prepare the world for COVID. We ask people to write a bunch of optimistic stories in which very few people die and very little economic disruption happens. What'll people write? Stories in which COVID isn't that infectious, in which it isn't that deadly, in which masks work better than they do in reality, in which vaccines work better, in which compliance with lockdowns is higher... In a thousand little ways these stories will be unrealistically optimistic, and in hundreds of those ways, they'll be unrealistic in ways we can't change. So the policies we'd make on the basis of these stories would be bad policies. (For example, we might institute lockdowns that cause lots of economic and psychological damage without saving many lives at all, because the lockdowns were harsh enough to be disruptive but not harsh enough to get the virus under control; we didn't think they needed to be any harsher because of the aforementioned optimistic settings of the variables.)
My overall advice would be: Explain this problem to the contestants. Make it clear that their goal is to depict a realistic future, and then depict a series of actions that steer that realistic future into a good, broadly appealing state. It's cheating if you get to the good state by wishful thinking, i.e. by setting a bunch of variables like takeoff speeds etc. to "easy mode." It's not cheating if the actions you propose are very difficult to pull off, because the point of the project is to give us something to aim for and we can still productively aim for it even if it's difficult.
Returning to this thread to note that I eventually did enter the contest, and was selected as a finalist! I tried to describe a world where improved governance / decisionmaking technology puts humanity in a much better position to wisely and capably manage the safe development of aligned AI. https://worldbuild.ai/W-0000000088/
The biggest sense in which I'm "playing on easy mode" is that in my story I make it sound like the adoption of prediction markets and other new institutions was effortless and inevitable, versus in the real world I think improved governance is achievable but is a bit of a longshot to actually happen; if it does, it will be because a lot of people really worked hard on it. But that effort and drive is the very thing I'm hoping to help inspire/motivate with my story, which I feel somehow mitigates the sin of unrealism.
Overall, I am actually surprised at how dystopian and pessimistic many of the stories are. (Unfortunately they are mostly not pessimistic about alignment; rather there are just a lot of doomer vibes about megacorps and climate crisis.) So I don't think people went overboard in the direction of telling unrealistic tales about longshot utopias -- except to the extent that many contestants don't even realize that alignment is a scary and difficult challenge, thus the stories are in that sense overly-optimistic by default.
Totally agree here that what's interesting is the ways in which things turn out well due to agency rather than luck. Of course if things turn out well, it's likely to be in part due to luck — but as you say that's less useful to focus on. We'll think about whether it's worth tweaking the rules a bit to emphasize this.
Thanks! I think explaining the problem to the contestants might go a long way. You could also just announce that realism (about unchangeable background variables, not about actions taken) is an important part of the judging criteria, and that submissions will be graded harshly if they seem to be "playing on easy mode." EDIT: Much more important than informing the contestants though is informing the people who are trying to learn from this experiment. If you are (for example) going to be inspired by some of these visions and work to achieve them in the real world... you'd better make sure the vision wasn't playing on easy mode!
I think, though, that the purpose of this exercise is understood more as characterizing a utopia, and not as trying to explain how to solve alignment in a world where a singularity is in the cards.
These goals are not good goals.
It is actively harmful for people to start thinking about the future in more positive terms, if those terms are misleading and unrealistic. The contest ground rules frame "positive terms" as being familiar, not just good in the abstract - they cannot be good but scary, as any true good outcome must be. See Eutopia is Scary:
It is actively harmful to take fictional evidence as inspiration for what projects are worth pursuing. This would be true even if the fiction was not constrained to be unrealistic and unattainable, but this contest is constrained in that way, which makes it much worse.
Again, a search which is specifically biased to have bad input data is going to be harmful, not helpful.
Your explicit goal here is to look for 'positive', meaning 'non-scary', futures to try to communicate. This is lying - no such future is plausible, and it's unclear any is even possible in theory. You say
but this is not true. Lots of effort goes into thinking about it. You just don't like the results, because they're either low-quality (failing in all the old ways utopias fail) or they are high-quality and therefore appropriately terrifying.
The best result I can picture emerging from this contest is for the people running the contest to realize the utter futility of the approach they were targeting and change tack entirely. I'm unsure whether I hope that comes with some resignations, because this was a really, spectacularly terrible idea, and that would tend to imply some drastic action in response, but on the other hand I'd hope FLI's team is capable of learning from its mistakes better than most.
The contest is only about describing 2045, not necessarily a radically alien far-future "Eutopia" end state of human civilization. If humans totally solve alignment, we'd probably ask our AGI to take us to Eutopia slowly, allowing us to savor the improvement and adjust to the changes along the way, rather than leaping all the way to the destination in one terrifying lurch. So I'm thinking there are probably some good ways to answer this prompt.
But let's engage with the harder question of describing a full Eutopia. If Eutopia is truly good, then surely there must be honest ways of describing it that express why it is good and desirable, even if Eutopia is also scary. Otherwise you'd be left with three options that all seem immoral:
It's impossible to imagine infinity, but if you're trying to explain how big infinity is, surely it's better to say "it's like the number of stars in the night sky", or "it's like the number of drops of water in the ocean", than to say "it's like the number of apples you can fit in a bucket". Similarly, the closest possible description of the indescribable Eutopia must be something that sounds basically good (even if it is clearly also a little unfamiliar), because the fundamental idea of Eutopia is that it's desirable. I don't think that's lying, any more than trying to describe other indescribable things as well as you can is lying.
Yudkowsky's own essay "Eutopia is Scary" was part of a larger "Fun Theory" sequence about attempting to describe utopias. He mostly described them in a positive light, with the "Eutopia is Scary" article serving as an important, but secondary, honesty-enhancing caveat: "these worlds will be a lot of fun, but keep in mind they'll also be a little strange".
I want to second what Czynski said about pure propaganda. Insofar as we believe that the constraints you are imposing are artificial and unrealistic, doesn't this contest fall into the "pure propaganda" category? I would be enthusiastically in favor of this contest if there weren't such unrealistic constraints. Or do you think the constraints are actually realistic after all?
I think it's fine if we have broad leeway to interpret the constraints as we see fit. E.g. "Technology is improving rapidly because, while the AGI already has mature technology, humans have requested that advanced technology be slowly doled out so as not to give us too much shock. So technology actually used by humans is improving rapidly, even though the cutting-edge stuff used by AGI has stagnated. Meanwhile, while the US, EU, and China have no real power (real power lies with the AGI) the AGI follow the wishes of humans and humans still want the US, EU, and China to be important somehow so lots of decisions are delegated to those entities. Also, humans are gradually starting to realize that if you are delegating decisions to old institutions you might as well do so to more institutions than the US, EU, and China, so increasingly decisions are being delegated to African and South American etc. governments rather than US, EU, and China. So in that sense a 'balance of power between US, EU, and China has been maintained' and 'Africa et al are on the rise.'" Would you accept interpretations such as this?
To clarify, I'm not affiliated with FLI, so I'm not the one imposing the constraints, they are. I'm just defending them, because the contest rules seem reasonable enough to me. Here are a couple of thoughts:
Anyways, on to the more important issue of this actual contest, the 2045 AGI story, and its oddly-specific political requirements:
You say: "[This scenario seems so unrealistic that I can only imagine it happening if we first align AGI and then request that it give us a slow ride even though it's capable of going faster.] ...Would you accept interpretations such as this?"
I'm not FLI so it's not my job to say which interpretations are acceptable, but I'd say you're already doing exactly the work FLI was looking for! I agree that this scenario is one of the most plausible ways that civilization might end up fulfilling the contest conditions. Here are some other possibilities:
See my response to Czynski for more assorted thoughts, although I've written so much now that perhaps I could have entered the contest myself by now if I had been writing stories instead of comments! :P
Edited to add that alas, I only just now saw your other comment about "So in order to describe a good future, people will fiddle with the knobs of those important variables so that they are on their conducive-to-good settings rather than their most probable settings." This strikes me as a fair criticism of the contest. (For one, it will bias people towards handwaving over the alignment problem by saying "it turned out to be surprisingly easy".) I don't think that's devastating for the contest, since I think there's a lot of value in just trying to envision what an agreeable good outcome for humanity looks like. But definitely a fair critique that lines up with the stuff I was saying above -- basically, there are both pros and cons to putting $100K of optimization pressure behind getting people to figure out the most plausible optimistic outcome under a set of constraints. (Maybe FLI should run another contest encouraging people to do more Yudkowsky-style brainstorming of how everything could go horribly wrong before we even realize what we were dealing with, just to even things out!)
Thanks for this thoughtful and detailed response. I think we are basically on the same page now. I agree with your point about Eutopia vs. 2045.
Even if you don't speak for FLI, I (at least somewhat) do, and agree with most of what you say here — thanks for taking the time and effort to say it!
I'll also add that — again — we envisage this contest as just step 1 in a bigger program, which will include other sets of constraints.
Directly conflicts with the geopolitical requirements. Also not compatible with the 'sector by sector' scope of economic impact - an AGI would be revolutionizing everything at once, and the only question would be whether it was merely flipping the figurative table or going directly to interpolating every figurative chemical bond in the table with figurative gas simultaneously and leaving it to crumble into figurative dust.
The 'Silent elitism' view is approximately correct, except in its assumption that there is a current elite who endorse the eutopia, which there is not. Even the most forward-thinking people of today, the Ben Franklins of the 2020s, would balk. The only way humans know how to transition toward a eutopia is slowly over generations. Since this has a substantial cost, speedrunning that transition is desirable, but how exactly that speedrun can be accomplished without leaving a lot of wreckage in its wake is a topic best left for superintelligences, or at the very least intelligences augmented somewhat beyond the best capabilities we currently have available.
What a coincidence! You have precisely described this contest. This is, explicitly, a "make up a nice-sounding future with no resemblance to our true destination" contest. And yes, it's at best completely immoral. At worst they get high on their own supply and use it to set priorities, in which case it's dangerous and aims us toward UFAI and impossibilities.
At least it's not the kind of believing absurdities which produces people willing to commit atrocities in service of those beliefs. Unfortunately, poor understanding of alignment creates a lot of atrocities from minimal provocation anyway.
This is not true. There is no law of the universe which states that there must be a way to translate the ways in which a state is good for its inhabitants (who are transhuman or posthuman, i.e. possessed of humanity and various other important mental qualities) into words, conveyable in present human language by text or speech, that sound appealing. That might be a nice property for a universe to have, but ours doesn't.
Some point along a continuum from here to there, a continuum we might slide up or down with effort, probably can be so described - a fixed-point theorem of some sort probably applies. However, that need not be an honest depiction of what life will be like if we slide in that direction, any more than showing a vision of the Paris Commune to a Parisian on the day Napoleon fell (stipulating that they approved of it) would be an honest view of Paris's future.
See my response to kokotajlod to maybe get a better picture of where I am coming from and how I am thinking about the contest.
"Directly conflicts with the geopolitical requirements." -- How would asking the AGI to take it slow conflict with the geopolitical requirements? Imagine that I invent a perfectly aligned superintelligence tomorrow in my spare time, and I say to it, "Okay AGI, I don't want things to feel too crazy, so for starters, how about you give humanity 15% GDP growth for the next 30 years? (Perhaps by leaking designs for new technologies discreetly online.) And make sure to use your super-persuasion to manipulate public sentiment a bit so that nobody gets into any big wars." That would be 5x the current rate of worldwide economic growth, which would probably feel like "transforming the economy sector by sector" to most normal people. I think that world would perfectly satisfy the contest rules. The only problems I can see are:
I agree with you that there's a spectrum of different things that can be meant by "honesty", sliding from "technically accurate statements which fail to convey the general impression" to "correctly conveying the general impression but giving vague or misleading statements", and that in some cases the thing we're trying to describe is so strange that no matter where we go along that continuum it'll feel like lying because the description will be misleading in one way or the other. That's a problem with full Eutopia but I don't think it's a problem with the 2045 story, where we're being challenged not to describe the indescribable but to find the most plausible path towards a goal (a familiar but peaceful and prosperous world) which, although very desirable to many people, doesn't seem very likely if AGI is involved.
I think the biggest risk of dishonesty for this contest, is if the TRUE most-plausible path to a peaceful & prosperous 2045 (even one that satisfies all the geopolitical conditions) still lies outside the Overton Window of what FLI is willing to publish, so instead people choose to write about less plausible paths that probably won't work. (See my "cabal of secret geniuses runs the show from behind the scenes" versus "USA/China/EU come together to govern AI for all mankind" example in my comment to kokotajlod -- if the cabal of secret geniuses path is what we should objectively be aiming for but FLI will only publish the latter story, that would be unfortunate.)
Maybe you think the FLI contest is immoral for exactly this reason -- because the TRUE most-plausible path to a good future doesn't/couldn't go through anything inside that Overton Window. Yudkowsky has said a few things to this effect, about how no truly Pivotal Action (something your aligned AGI could do to prevent future unaligned AGIs from destroying the world) fits inside the Overton Window, and he just uses "have the AGI create a nanosystem to melt all the world's GPUs" (which I guess he sees as being an incomplete solution) as a politically palatable illustrative example. I'm not sure about this question and I'm open to being won over.
The 'unambitious' thing you ask the AI to do would create worldwide political change. It is absurd to think that it wouldn't. Even ordinary technological change creates worldwide political change at that scale!
And an AGI having that little impact is also not plausible; if that's all you do, the second mover -- and possibly the third, fourth, fifth, if everyone moves slow -- spits out an AGI and flips the table, because you can't be that unambitious and still block other AGIs from performing pivotal acts, and even if you want to think small, the other actors won't. Even if they are approximately as unambitious, they will have different goals, and the interaction will immediately amp up the chaos.
There is just no way for an actual AGI scenario to meet these guidelines. Any attempt to draw a world which meets them has written the bottom line first and is torturing its logic trying to construct a vaguely plausible story that might lead to it.
I believe that you are too quick to label this story as absurd. Ordinary technology does not have the capacity to correct towards explicitly smaller changes that still satisfy the objective. If the AGI wants to prevent wars while minimally disturbing the worldwide politics, I find it plausible that it would succeed.
Similarly, just because an AGI has very little visible impact does not mean that it isn't effectively in control. For a true AGI, it should be trivial to interrupt the second mover without any great upheaval. It should be able to suppress other AGIs from coming into existence without causing too much of a stir.
I do somewhat agree with your reservations, but I find that your way of addressing them seems uncharitable (i.e. "at best completely immoral").
I hope you'll write a retrospective this summer and post it here; I'm curious what you'll think about the contest and the submissions you receive in retrospect.
I posted this to r/rational (subreddit for rational & rationalist fiction), if anyone would like to see the response there
This project will give people an unrealistically familiar and tame picture of the future. Eutopia is Scary, and the most unrealistic view of the future is not the dystopia, nor the utopia, but the one which looks normal.[1] The contest ground rules require, if not in so many words, that all submissions look normal. Anything which obeys these ground rules is wrong. Implausible, unattainable, dangerously misleading, bad overconfident reckless arrogant wrong bad.
This is harmful, not helpful; it is damaging, not improving, the risk messaging; endorsing any such view of the future is lying. At best it's merely lying to the public - it runs a risk of a much worse outcome, lying to yourselves.
The ground rules present a very narrow target. Geopolitical constraints state that the world can't substantially change in its form of governance or degree of state power. AI may not trigger any world-shaking social change. AGI must exist for 5+ years without rendering the world unrecognizable. These constraints are (intentionally, I believe) incompatible with a hard takeoff AGI, but they also rule out any weaker form of recursive self-improvement. This essentially mandates a Hansonian view of AI progress.
I would summarize that view as
This has multiple serious problems.
One, it's implausible in light of the nature of ML progress to date; the most significant achievements have all come from a single source, DeepMind, and propagated outward from there.
Two, it doesn't lead to a future dominated by AGI - as Hanson explicitly extrapolated previously, it leads to an Age of Em, where uploads, not AGI, are the pivotal digital minds.
Which means that a proper prediction along these lines will fail at the stated criteria, because
will not be true - AI will not be transformative here.
With all that in mind, I would encourage anyone making a submission to flout the ground rules and aim for a truly plausible world. This would necessarily break all three of
since those all require a geopolitical environment which is similar to the present day. It would probably also have to violate
If we want a possible vision of the future, it must not look like that.
I am quoting this from somewhere, probably the Sequences, but I cannot find the original source or wording.
There's obviously lots I disagree with here, but at bottom, I simply don't think it's the case that economically transformative AI necessarily entails singularity or catastrophe within 5 years in any plausible world: there are lots of imaginable scenarios compatible with the ground rules set for this exercise, and I think assigning accurate probabilities amongst them and relative to others is very, very difficult.
"Necessarily entails singularity or catastrophe", while definitely correct, is a substantially stronger statement than I made. To violate the stated terms of the contest, an AGI must only violate "transforming the world sector by sector". An AGI would not transform things gradually and limited to specific portions of the economy. It would be broad-spectrum and immediate. There would be narrow sectors which were rendered immediately unrecognizable and virtually every sector would be transformed drastically by five years in, and almost certainly by two years in.
An AGI which has any ability to self-improve will not wait that long. It will be months, not years, and probably weeks, not months. A 'soft' takeoff would still be faster than five years. These rules mandate not a soft takeoff, but no takeoff at all.
That something is very unlikely doesn't mean it's unimaginable. The goal of imagining and exploring such unlikely scenarios is that with a positive vision we can at least attempt to make it more likely. Without a positive vision there are only catastrophic scenarios left. That, I think, is the main motivation for FLI to organize this contest.
I agree, though, that the base assumptions stated in the contest make it hard to come up with a realistic image.
A positive vision which is false is a lie. No vision meeting the contest constraints is achievable, or even desirable as a post-AGI target. There might be some good fiction that comes out of this, but it will be as unrealistic as Vinge's Zones of Thought setting. Using it in messaging would be at best dishonest, and, worse, probably self-deceptive.
For the sake of coordination, I declare an intent to enter.
(It's beneficial to declare intent to enter, because if we see that the competition is too fierce to compete with, we can save ourselves some work and not make an entry, while if we see that the competition is too cute to compete with, we can negotiate and collaborate.)
I'll be working under pretty much Eliezer's model, where general agency emerges abruptly and is very difficult to align, inspect, or contain. I'm also very sympathetic to Eliezer's geopolitical pessimism, but I have a few tricks for fording it.
For the sake of balance in the comments section, I should mention that, contrary to many of the voices here, I don't really see anything wrong with the requirements. For instance, the riddle of AGI existing for as long as five years without a singularity really abruptly hitting: though I agree it's a source of... tension, it was kind of trivial for me to push through, and the solution was clean.
I developed illustration skills recently (acquired a tablet after practicing composition and anatomy in the background for most of my life and, wow, I can paint pretty good), can narrate pretty well, and I have plenty of musician friends, so although I have no idea what our audiovisual accompaniment is going to be, I'll be able to produce something. (maybe I'll just radio-play the ground-level stories)
And can I just say "director of communications" sounds about right, because you're really directing the hell out of this ;) you're very specific about what you want. And the specification forms the shape of a highly coherent vision. Sorry I just haven't really encountered that before, it's interesting.
The community and events link is broken.
Wow this sounds awesome!
Just want to say this sounds great!
This sounds spectacular! SwarthmoreEA is working on experimenting with competitions; this seems like an awesome item to advertise + a great model to follow.
Neat! Sadly I can't interact with the grants.futureoflife.org webpage yet because my "join the community" application is still sitting around.
Thanks for letting us know. We're looking into it!
I think this should be "rise"
Thank you!