This is a unique, interesting, and simple proposal that I have not seen presented in academic form yet. As the article develops, you'll of course need to change the framing of a few sections to introduce the idea, its viability, and the multi-purpose potential of the proposal.
Even though effective enforcement of the policy seems unlikely, it still seems like a valuable idea to publish, especially when combined with newer work on GPU monitoring firmware (Shavit, 2023) and your own proposals for required GPU server tracking.
To comment on kpurens' comment: carbon taxation was a non-political issue before it became contentious, and if the lobbying hadn't hit as hard, there would seemingly have been a larger chance of a global carbon tax. At the same time, compute governance seems more enforceable because of the centralization of data centers.
The CE incubatees are an absolutely amazing bunch and exactly the types of people I would want on these world-bettering projects. Charity Entrepreneurship is no doubt one of the EA projects I am most excited about due to its pure impact, research prowess, and future potential.
When I compare the CE program to YC (see also OWID@YC), it feels even better due to the great co-founder matching process, the success rate, and the excellence within the focus areas (people don't come with their own esoteric tech startups).
For the other commenters who mention the use of reach numbers instead of e.g. QALYs or WELLBYs: having seen some of their spreadsheets and programs, I am in deep awe of and respect for some of these superheroes who have saved hundreds of lives (with many of these being counterfactual, given the effectiveness and new vectors of impact), though I'll let them summarize their numbers.
I am very excited about where the projects will be in one and even five years and commend everyone involved.
A few people have been working on a project to make an overview in Obsidian that seems to be in this vein as well (see the comment, too): aisi.ai/?idea=75
Wonderful to see an analysis of all this even more wonderful data coming out of the programs! We're also happy to share our survey results (privately and anonymized) with anyone interested:
We also have a few other surveys, but the ones above are probably the most interesting in the context of this post.
This is wonderful to hear! Every time I've talked with the Norwegian EA community, there has been a strong sense of integrity, ethics, and a will to do good, along with tangible results. I'm consistently impressed, and it's wonderful to see that effort rewarded and that impact given a public voice.
Congratulations!
Regarding legalities, seeing that the grant is from April, I will also add that Molly mentions the general clawbacks are only relevant for the three months preceding the dissolution of the FTX organizations.
Yonatan can probably help you figure out what to use that skillset for and what your goals with it are, but the nuances of the web development process are covered by many other resources, e.g.:
The video above uses a tech stack with Svelte, Postgres, Vercel, and Gitpod, and represents a programming paradigm favored by many modern developers.
We had a hackathon a month ago with some pretty interesting projects that you can build out further: Results link. These were made during 44 hours (an average of 17 hours spent per participant), but some of the principles seem generalizable.
You're also very welcome to check out aisi.ai, the intro resources for the interpretability hackathon (resources) running this weekend, or those for the previous hackathon (resources).
FLI's focus on lethal autonomous weapons systems (LAWS) generally seems like a good and obvious framing for a concrete extinction scenario. A world war today would without a doubt use semi-autonomous drones, with the possibility of near-extinction risk from nuclear weapons.
A similar war in 2050 seems very likely to use fully autonomous weapons built under a development race, leading to bad deployment practices and developmental secrecy (absent international treaties). With these types of "slaughterbots", there is a chance of dysfunction (e.g. misalignment) leading to full eradication. Besides this, cyberwarfare between agentic AIs might lead to broad-scale structural damage and, for that matter, the risk of nuclear war brought about through simple orders given to artificial superintelligences.
The main risks from the other scenarios mentioned in the replies here stem from the fact that we are creating something extremely powerful. The problems arise for the same reason that one mishap with a nuke or a car can be extremely damaging: one mishap (e.g. goal misalignment) with an even more powerful technology can lead to even more unbounded damage (to humanity).
And then there are the differences between nuclear and AI technologies that make the probability of this happening significantly higher. See Yudkowsky's list.