Scriptwriter for RationalAnimations! Interested in lots of EA topics, but especially ideas for new institutions like prediction markets, charter cities, georgism, etc. Also a big fan of EA / rationalist fiction!
The Christians in this story who lived relatively normal lives ended up looking wiser than the ones who went all-in on the imminent-return-of-Christ idea. But of course, if Christianity had been true and Christ had in fact returned, maybe the crazy-seeming, all-in Christians would have had huge amounts of impact.
Here is my attempt at thinking up other historical examples of transformative change that went the other way:
Muhammad's early followers must have been a bit uncertain whether this guy was really the Final Prophet. Do you quit your day job in Mecca so that you can flee to Medina with a bunch of your fellow cultists? In this case, it probably would've been a good idea: seven years later you'd be helping lead an army of 100,000 holy warriors to capture the city of Mecca. And over the next thirty years, you'd help convert/conquer all the civilizations of the Middle East and North Africa.
Less dramatic versions of the above story could probably be told about joining many fast-growing charismatic social movements (like joining a political movement or revolution). Or, more relevantly to AI, about joining a fast-growing Bay Area startup whose technology might change the world (like early Microsoft, Google, Facebook, etc).
You're a physics professor in 1940s America. One day, a team of G-men knock on your door and ask you to join a top-secret project to design an impossible superweapon capable of ending the Nazi regime and stopping the war. Do you quit your day job and move to New Mexico?...
You're a "cypherpunk" hanging out on online forums in the mid-2000s. Despite the demoralizing collapse of the dot-com boom and the failure of many of the most promising projects, some of your forum buddies are still excited about the possibilities of creating an "anonymous, distributed electronic cash system", such as the proposal called B-money. Do you quit your day job to work on weird libertarian math problems?...
People who bet everything on transformative change will always look silly in retrospect if the change never comes. But the thing about transformative change is that it does sometimes occur.
(Also, fortunately our world today is quite wealthy -- AI safety researchers are pretty smart folks and will probably be able to earn a living for themselves to pay for retirement, even if all their predictions come up empty.)
The animal welfare side of things feels less truthseeking, more activist, than other parts of EA. Talk of "speciesism" that implies animals' and humans' lives are of ~equal value seems farfetched to me. People frequently do things like taking Rethink's moral weights project (which kinda skips over a lot of hard philosophical problems about measurement and what we can learn from animal behavior, and goes all-in on a simple perspective of total hedonic utilitarianism which I think is useful but not ultimately correct), and just treat the numbers as if they are unvarnished truth.
If I considered only the immediate, direct effects of $100m spent on animal welfare versus global health, I would probably side with animal welfare despite the concerns above. But I'm also worried about the relative lack of ripple / flow-through effects from animal welfare work versus global health interventions -- both positive longer-term effects on the future of civilization generally, and more near-term effects on the sustainability of the EA movement and social perceptions of EA. Going all-in on animal welfare at the expense of global development seems bad for the movement.
I’d especially welcome criticism from folks not interested in human longevity. If your priority as a human being isn’t to improve healthcare or to reduce catastrophic/existential risks, what is it? Why?
Personally, I am interested in longevity and I think governments (and other groups, although perhaps not EA grantmakers) should be funding more aging research. Nevertheless, some criticism!
Probably instead of one giant comprehensive mega-post addressing all possible objections, you should tackle each area in its own more bite-sized post -- to be fancy, maybe you could explicitly link these together in a structured way, like Holden Karnofsky's "Most Important Century" blog posts.
I don't really know anything about medicine or drug development, so I can't give a very detailed breakdown of potential tractability objections, and indeed I personally don't know how to feel about the tractability of anti-aging.
Of course, to the extent that your post is just arguing "governments should fund this area more, it seems obviously under-resourced", then that's a pretty low bar, and your graph of the NIH's painfully skewed funding priorities basically makes the entire argument for you. (Although I note that the graph seems incorrect?? Shouldn't $500M be much larger than one row of pixels?? Compare to the nearby "$7B" figures; the $500M bar should of course be 1/14th as tall...) For this purpose, it's fine IMO to argue "aging is objectively very important, it doesn't even matter how non-tractable it is, SURELY we ought to be spending more than $500M/year on this; at the very least we should be spending more than we do on Alzheimer's, which we also don't understand but which is an objectively smaller problem."
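To spell out the arithmetic behind that "1/14th as tall" claim, here's a quick sanity check. The $500M and $7B figures come from the graph discussed above; the 140-pixel bar height is just an assumed example value for illustration:

```python
# Dollar figures from the NIH funding graph discussed above
aging_funding = 500e6       # $500M for aging research
comparison_funding = 7e9    # $7B for a larger category

# The $7B bar should be 14x taller than the $500M bar
ratio = comparison_funding / aging_funding
print(ratio)  # 14.0

# Assumed example: if the $7B bars are drawn 140 pixels tall...
pixels_7b = 140
pixels_500m = pixels_7b * aging_funding / comparison_funding
print(pixels_500m)  # 10.0 -- about ten rows of pixels, not one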
But if you are trying to convince venture-capitalists to invest in anti-aging with the expectation of maybe actually turning a profit, or win over philanthropists who have other pressing funding priorities, then going into more detail on tractability is probably necessary.
You might be interested in some of the discussion that you can find at this tag: https://forum.effectivealtruism.org/topics/refuges
People have indeed imagined creating something like a partially-underground town, which people would already live in during daily life, precisely to address the kinds of problems you describe (working out various kinks, building governance institutions ahead of time, etc). But on the other hand, it sounds expensive to build a whole city (and would you or I really want to uproot our lives and move to a random tiny town in the middle of nowhere just to help be the backup plan in case of nuclear war?), and it's so comparatively cheap to just dig a deep hole somewhere and stuff a nuclear reactor + lots of food + whatever else inside, which after all will probably be helpful in a catastrophe.
In reality, if the planet were to be destroyed by nuclear holocaust, a rogue comet, or a lethal outbreak, none of these bunkers would provide the sanctuary that is promised or the capability to 'rebuild' society.
I think your essay does a pretty good job of pointing out flaws with the concept of bunkers in the Fallout TV + videogame universe. But I think that in real life, most actual bunkers (eg constructed by militaries, the occasional billionaire, cities like Seoul which live in fear of enemy attack or natural disasters, etc) aren't intended to operate indefinitely as self-contained societies that could eventually restart civilization, so naturally they would fail at that task. Instead, they are just supposed to keep people alive through an acute danger period of a few hours to weeks (ie, while a hurricane is happening, or while an artillery barrage is ongoing, or while the local government is experiencing a temporary period of anarchy / gang rule / rioting, or while radiation and fires from a nearby nuclear strike dissipate). Then, in 9 out of 10 cases, probably the danger passes and some kind of normal society resumes (FEMA shows up after the hurricane, or a new stable government eventually comes to power, etc -- even most nuclear wars probably wouldn't result in the comically barren and devastated world of the Fallout videogames). I don't think militaries or billionaires are necessarily wasting their money; they're just buying insurance against medium-scale catastrophes, and admitting that there's nothing they can do about the absolute worst-case largest-scale catastrophes.
Few people have thought of creating Fallout-style indefinite-civilizational-preservation bunkers in real life, and to my knowledge nobody has actually built one. But presumably if anyone did try this in real life (which would involve spending many millions of dollars, lots of detailed planning, etc), they would think a little harder and produce something that makes a bit more sense than the bunkers from the Fallout comedy videogames, and indeed do something like the partially-underground-city concept.
This is a great idea and seems pretty well thought-through; one of the more interesting interventions I've seen proposed on the Forum recently. I don't have any connection to medicine or public policy or etc, but it seems like maybe you'd want to talk to OpenPhil's "Global Health R&D" people, or maybe some of the FDA-reform people including Alex Tabarrok and Scott Alexander?
Of course both candidates would be considered far-right in a very left-wing place (like San Francisco?), and they'd be considered far-left in a right-wing place (like Iran?), neoliberal/libertarian in a protectionist/populist place (like Turkey or Peronist Argentina?), protectionist/populist in a neoliberal/libertarian place (like Singapore or Argentina under Milei?).
But I think the question is why neither party seems capable of offering up a more electable candidate, with fewer of the obvious flaws (old age and cognitive decline for Biden, sleaziness and transparent willingness to put self-interest over the national interest for Trump) and perhaps closer to the median American voter in terms of their positions (in fact, Biden and Trump are probably closer to the opinions of the median Democrat / Republican, respectively, than they are to the median overall US citizen).
Some thoughts:
As you mention, the scale seems small here relative to the huge political lift necessary to get something like MAID passed in the USA. I don't know much about MAID or how it was passed in Canada, but I'm picturing that in the USA this would become a significant culture-war issue at least 10% as big as the pro-life-vs-pro-choice wars over abortion rights. If EA decided to spearhead this movement, I fear it would risk permanently politicizing the entire EA movement, ruining a lot of great work that is getting done in other cause areas. (Maybe in some European countries this kind of law would be an easier sell?)
If I were a negative utilitarian, besides focusing on longtermist S-risks, I would probably be most attracted to campaigns like this one to try to cure the suffering of cluster-headache patients. This seems like a much more robustly-positive intervention (ie, regular utilitarians would like it too), much less politically dangerous, for a potentially similar-ish (???) reduction in suffering (idk how many people suffer cluster headaches versus how many people would use MAID who wouldn't otherwise kill themselves, and idk how to compare the suffering of cluster headaches to that of depression).
In terms of addressing depression specifically, I'd think that you could get more QALYs per dollar (even from a fully negative-utilitarian perspective) by doing stuff like:
Finally, I would have a lot of questions about the exact theory of impact here and the exact pros/cons of enacting a MAID-style law in more places. From afar (I don't know much about suicide methods), it seems like there are plenty of reasonably accessible ways that a determined person could end their life. So, for the most part, a MAID law wouldn't be enabling the option of suicide for people who previously couldn't possibly commit suicide in any way -- it's more like it would be doing some combination of 1. making suicide logistically easier / more convenient, and 2. making suicide more societally acceptable. This seems dicier to me, since I'd be worried about causing a lot of collateral damage / getting a lot of adverse selection -- who exactly are the kinds of people who would suicide if it was marginally more societally acceptable, but wouldn't suicide otherwise?
To answer with a sequence of increasingly "systemic" ideas (naturally the following will be tinged by my own political beliefs about what's tractable or desirable):
There are lots of object-level lobbying groups that have strong EA endorsement. This includes organizations advocating for better pandemic preparedness (Guarding Against Pandemics), better climate policy (like CATF and others recommended by Giving Green), or beneficial policies in third-world countries like salt iodization or lead paint elimination.
Some EAs are also sympathetic to the "progress studies" movement and to the modern neoliberal movement connected to the Progressive Policy Institute and the Niskanen Center (which are both tax-deductible nonprofit think-tanks). This often includes enthusiasm for denser ("yimby") housing construction, reforming how science funding and academia work in order to speed up scientific progress (such as advocated by New Science), increasing high-skill immigration, and having good monetary policy. All of those cause areas appear on Open Philanthropy's list of "U.S. Policy Focus Areas".
Naturally, there are many ways to advocate for the above causes -- some are more object-level (like fighting to get an individual city to improve its zoning policy), while others are more systemic (like exploring the feasibility of "Georgism", a totally different way of valuing and taxing land which might do a lot to promote efficient land use and encourage fairer, faster economic development).
One big point of hesitancy is that, while some EAs have a general affinity for these cause areas, in many areas I've never heard any particular standout charities being recommended as super-effective in the EA sense... for example, some EAs might feel that we should do monetary policy via "nominal GDP targeting" rather than inflation-rate targeting, but I've never heard anyone recommend that I donate to some specific NGDP-targeting advocacy organization.
I wish there were more places like Center for Election Science, living purely on the meta level and trying to experiment with different ways of organizing people and designing democratic institutions to produce better outcomes. Personally, I'm excited about Charter Cities Institute and the potential for new cities to experiment with new policies and institutions, ideally putting competitive pressure on existing countries to better serve their citizens. As far as I know, there aren't any big organizations devoted to advocating for adopting prediction markets in more places, or adopting quadratic public goods funding, but I think those are some of the most promising areas for really big systemic change.