UPDATE (12/13/24): Zeffy is now our primary method of receiving donations! You can access it through the “Donate” button on PauseAI-us.org or directly here.
UPDATE (12/12/24): PauseAI US has received its own 501(c)(3) status (!), so Manifund will no longer be our fiscal sponsor. I've closed our Manifund fundraiser to avoid platform fees we no longer need to pay. I'll post updates as we sign up for other donation platforms on our own, and you can always email donations at pauseai-us dot org to arrange a donation.
PauseAI US needs your donations! We were very fortunate not to have to do much dedicated fundraising up until this point, but I was caught off guard by receiving nothing in the SFF main round (after receiving multiple speculation grants), so we're in a crunch and only fully funded through the end of 2024.
If you're sold, you can donate right now via PauseAI's general support Manifund project, the text of which I'll share here below the dots.
If you're open but have questions, or you just thought of a great question you know other people are wondering about, ask in the comments below! I'll answer before or on 11/19/24.
Project summary
PauseAI US's funding fell short of expectations, and we are now only funded through the end of 2024! Money donated to this project will fund the operations of PauseAI US through mid-2025.
What are this project's goals? How will you achieve them?
PauseAI US advocates for an international treaty to pause frontier AI development. But we don't need to achieve that treaty to have positive impact-- most of our positive impact will likely come from moving the Overton window and making more moderate AI Safety measures more possible. Advocating straightforwardly for what we consider the best solution-- we don't know what we're doing building powerful AI, so we should wait until we do before proceeding-- is an excellent frame for educating the general public and elected officials on AI danger, compared to tortured and confusing discussions of other solutions like alignment, which offer no clear actions for those outside the technical field.
To fulfill our goal of moving the Overton window in the direction of simply not building AGI while it is dangerous to do so, PauseAI US has two major areas of programming: protesting and lobbying.
Protests (like this upcoming one) are the core of our irl volunteer organizing, local social community, and social media presence. Protests send the overarching message to Pause frontier AI training, in line with the PauseAI proposal. Sometimes protests take issue with the AI industry and take place at AGI company offices like Meta, OpenAI, or Anthropic (RSVP for 11/22!). Sometimes protests are in support of international cooperative efforts. Protests get media attention, which not only communicates that the protestors want to Pause AI but also shows, in a visceral and easily understood way, the stakes of this problem, filling the bizarre missing mood surrounding AI danger ("If AI companies are doing something so dangerous, how come there aren't people in the streets?"). Protests are a highly neglected angle in the AI Safety fight. Ultimately, the impact of protests is in moving the Overton window for the public, which in turn affects what elected officials think and do.
Organizing Director Felix De Simone is based in DC and does direct lobbying on the Hill as well as connecting constituents to their representatives for grassroots lobbying. Felix holds regular email- and letter-writing workshops for the general public on the PauseAI US Discord (please join!) aimed at specific events: for example, emailing and calling the California Assembly and Senate during the SB-1047 hearings and, more recently, coordinating supportive emails to attendees of the US AI Safety Conference expressing hope about the possibility of a global treaty to pause frontier AI development. We work with SAG-AFTRA representatives to coordinate with their initiatives and add an x-risk dimension to their primarily digital identity and provenance-related concerns. PauseAI US is part of a number of other, more speculative legal interventions to Pause AI, such as working with Gabriel Weil to develop a strict liability ballot initiative version of SB-1047 and locate funders to get it on the 2026 ballot. We are members of the Coalition for a Baruch Plan for AI, and Felix attended the UN Summit of the Future Activist Days. We hope to be able to serve as a plaintiff in lawsuits against AI companies that our attorney allies are developing, a role which very few others would be willing or able to fill. Lobbying is more of a nitty-gritty approach, but the goal of our lobbying is the same as our protesting: to show our elected officials that cooperation to simply not build AGI is possible, because the will and the ways are there.
How will this funding be used?
Salaries - $260k/year
Specific Events - ~$7.5-15k/year
Operating costs - ~$24k/year (this includes bookkeeping, software, insurance, payroll tax, etc., and may be an overestimate for next year because there were so many startup costs this year-- if it is, consider it slack)
Through 2025 Q2 -- $150k.
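(Rough arithmetic, assuming these costs accrue evenly over the year: half a year of the above is roughly ($260k salaries + ~$24k operating + ~$7.5-15k events) / 2 ≈ $146-150k, which is where the $150k figure comes from.)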
Our programming mainly draws on our own labor and the labor of our volunteers, so salaries are by far our largest cost.
Q1&Q2 programming:
- quarterly protest
- monthly flyering
- monthly local community social event
- 2+ lobbying events for public education
- PauseAI US Discord (please join!) for social times, AI Safety conversation, and help with running your own local PauseAI US community
- PauseAI US newsletter
- expansion of Felix's lobbying plan, improving his relationships with key offices
Org infrastructure work by Q2:
(This one is massive. We just hired Lee Green to run ops.)
- massively improved ops and legal compliance leading us to be able to scale up much more readily
- website with integrated event platform streamlining our volunteer discovery and training processes and allowing us to hold more frequent and larger protests
- Executive Director able to focus on strategy and fundraising and not admin
- improved options for donating and continuous fundraising
Incidental work likely to happen by Q2:
- strict liability ballot initiative will have progressed as far as it can
- responding to media requests for comment on major news events, possibly mustering small immediate demonstrations and/or orchestrating calls into key offices
- supporting other AI Safety organizations with our knowledge and connections, bringing an understanding of inside-outside game dynamics in AI Safety
- lots of behind the scenes things I unfortunately can't discuss but which are a valuable part of what our org does
Who is on your team? What's your track record on similar projects?
Executive Director - Holly Elmore
Founded this org; long history of EA organizing (2014-2020 at Harvard) and of scientific research, first as an evolutionary biologist and then as a wild animal welfare researcher at Rethink Priorities.
Director of Operations - Lee Green
20+ years of experience in strategy consulting, process engineering, and efficiency across many industries, specifically supporting 40+ nonprofit and impact-driven organizations
Organizing Director - Felix De Simone
Organized U Chicago EA and climate canvassing campaigns.
[crossposted from Manifund]
donated $90,000
It's more important than ever that PauseAI is funded. Pretty much the only way we're going to survive the next 5-10 years is for efforts like this to succeed in getting a global moratorium on further AGI/ASI development. There's no point being rich when the world ends. I encourage others with 7 figures or more of net worth to donate similar amounts. And I'm disappointed that all the big funders in the AI Safety space are still overwhelmingly focused on Alignment/Safety/Control when it seems pretty clear that those aren't going to save us in time (if ever), given the lack of even theoretical progress, let alone practical implementation.
(This was 1 Bitcoin btw. Austin helped me with the process of routing it to Manifund, allowing me to donate ~32% more, factoring in avoiding capital gains tax in the UK).
I've been impressed with both Holly and Pause AI US, and Joep and Pause AI Global, and intend to donate a similar amount to Pause AI Global.
I'm confident in PauseAI US's ability to run protests and I think the case for doing protests is pretty strong. You're also doing lobbying, headed by Felix De Simone. I'm less confident about that so I have some questions.
Happy to weigh in here with some additional information/thoughts.
Before I started my current role at PauseAI US, I worked on statewide environmental campaigns. While these were predominantly grassroots (think volunteer management, canvassing, coalition-building etc.) they did have a lobbying component, and I met with statewide and federal offices to advance our policy proposals. My two most noteworthy successes were statewide campaigns in Massachusetts and California, where I met with a total of ~60 state legislative offices and helped to persuade the legislatures of both states to pass our bills (clean energy legislation in MA; pollinator protection in CA) despite opposition from the fossil fuel and pesticide industries.
I have been in D.C. since August working on PauseAI US's lobbying efforts. So far, I have spoken to 16 Congressional offices — deliberately meeting with members of both parties, with a special focus on Congressmembers on relevant committees (e.g. the House Committee on Science, Space, and Technology; the Senate Committee on Commerce, Science, and Transportation; and the House Bipartisan AI Task Force).
I plan to speak with more than 50 additional offices over the next 6 months, as well as deepen relationships with offices I've already met with. I also intend to host a series of Congressional briefings on (1) AI existential risk, (2) pausing as a solution, and (3) the importance and feasibility of international coordination, inviting dozens of Congressional staff to each briefing.
I do coordinate with a few other individuals from aligned AI policy groups, to share insights and gain feedback on messaging strategies.
Here are a few takeaways from my lobbying efforts so far, explaining why I believe PauseAI US lobbying is important:
Framing and vocabulary matter a lot here — it’s important to find the best ways to make our arguments palatable to Congressional offices. This includes, for instance, framing a Pause as “pro-safe innovation” rather than generically “anti-innovation,” anticipating and addressing reasonable objections, making comparisons to how we regulate other technologies (e.g. aviation, nuclear power), and providing concrete risk scenarios that avoid excessive technical jargon.
As such, I spend a lot of time emphasizing loss-of-control scenarios, making the case that this technology should not be thought of as a “weapon” to be controlled by whichever country builds it first, but instead as a “doomsday device” that could end our world regardless of who builds it.
I also make the case for the feasibility of an international pause, by appealing to historical precedent (e.g. nuclear non-proliferation agreements) and sharing information about verification and enforcement mechanisms (e.g. chip tracking, detecting large-scale training runs, and on-chip reporting mechanisms).
The final reason for the importance of PauseAI US lobbying is a counterfactual one: If we don’t lobby Congress, we risk ceding ground to other groups who push the “arms race” narrative and convince the US to go full-speed ahead on AGI development. By being in the halls of Congress and making the most persuasive case for a Pause, we are at the very least helping prevent the pendulum from swinging in the opposite direction.
3 is more important than ever now, following the recommendation by the bipartisan U.S.-China Economic and Security Review Commission for a Manhattan Project on AGI.
1. Our lobbying is more “outside game” than the others in the space. Rather than getting our lobbying authority from prestige or expense, we get it from our grassroots support. Our message is simpler and clearer, pushing harder on the Overton window. (More on the radical flank effect here.) Our messages can complement more constrained lobbying from aligned inside gamers by making their asks seem more reasonable and safe, which is why our lobbying is not redundant with those other orgs but synergistic.
2. Felix has experience on climate campaigns and climate canvassing and was a leader in U Chicago EA. He's young, so he hasn't had many years of experience at anything, but he has the relevant kinds of experience that I wanted and is demonstrably excellent at educating, building bridges, and juggling a large network. He has the tact and sensitivity you want in a role like this while also being very earnest. I'm very excited to nurture his talent and have him serve as the foundation for our lobbying program going forward.
One other thing I forgot to mention re: value-add. Some of the groups you mentioned (Center for AI Policy & Center for AI Safety; not sure about Palisade) are focused mostly on domestic AI regulation. PauseAI US is focused more on the international side of things, making the case for global coordination and an AI Treaty. In this sense, one of our main value-adds might be convincing members of Congress that international coordination on AI is both feasible and necessary to prevent catastrophic risk. This also serves to counter the "arms race" narrative ("the US needs to develop AGI first in order to beat China!") which risks sabotaging AI policy in the coming years.
Adverse selection
I am thinking a bit about adverse selection in longtermist grantmaking and how there are pros and cons to having many possible funders. Someone else not funding you could be evidence I/others shouldn’t either, but conversely updating too much on what a small number of grantmakers think could lead to missing lots of great opportunities as a community.
Going to flag that a big chunk of the major funders and influencers in the EA/Longtermist community have personal investments in AGI companies, so this could be a factor in the lack of funding for work aimed at slowing down AGI development. I think that as a community, we should be divesting (and investing in PauseAI instead!)
Jaan Tallinn, who funds SFF, has invested in DeepMind and Anthropic. I don't know if this is relevant because AFAIK Tallinn does not make funding decisions for SFF (although presumably he has veto power).
If this is true, or even just likely to be, and someone has data on it, then making that data public, even in anonymized form, would be extremely high impact. I do recognize that such moves could come at great personal cost, but in case it is true I just wanted to put it out there that such disclosures could be a single action that might far outstrip even the lifetime impact of almost any other person working to reduce x-risk from AI. Also, my impression is that any evidence of this going on is absent from public information. I really hope the absence of such information is because nothing of this sort is actually going on, but it is worth being vigilant.
It's literally at the top of his Wikipedia page: https://en.m.wikipedia.org/wiki/Jaan_Tallinn
What do you mean by "if this is true"? What is "this"?
It’s well-known to be true that Tallinn is an investor in AGI companies, and this conflict of interest is why Tallinn appoints others to make the actual grant decisions. But those others may be more biased in favor of industry than they realize (as I happen to believe most of the traditional AI Safety community is).
(I don't think this is particularly true. I think the reasons why Jaan chooses to appoint others to make grant decisions are mostly unrelated to this.)
Doesn’t he abstain from voting on at least SFF grants himself because of this? I’ve heard that, but you’d know better.
He generally doesn't vote on any SFF grants (I don't know why, but would be surprised if it's because of trying to minimize conflicts of interest).
I don't know if this analogy holds, but that sounds a bit like how, in certain news organizations, "lower down" journalists self-censor - they do not need to be told what not to publish. Instead, they independently anticipate what they can and cannot say based on how their career might be affected by their superiors' reactions to their work. And I think if that is actually going on, it might not even be conscious.
I also saw some pretty strong downvotes on my comment above. Just to be clear, in case this is the reason for the downvotes: I am not insinuating anything - I really hope and want to believe there are no big conflicts of interest. I might have been scarred by working on climate change, where the polluters spent years, if not decades, of time and money slowing down action on cutting CO2 emissions. Hopefully these patterns are not repeated with AI. Also, I have much less knowledge about AI and have only heard a few times that Google etc. are sponsoring safety conferences.
In any case, I believe that in addition to technical and policy work, it would be really valuable to have someone funded to pay close attention and dig into the details of any conflicts of interest and skewed incentives - these set action on climate change back significantly, something we might not be able to afford with AI, as it might be more binary in terms of the onset of a catastrophe. Regarding funding week - if the big donors are not currently sponsoring anyone to do this, I think this is an excellent opportunity for smaller donors to put in place a crucially missing piece of the puzzle - I would be keen to support something like this myself.
There are massive conflicts of interest. We need a divestment movement within AI Safety / EA.
FYI, weirdly timely podcast episode out from FLI that includes discussion of CoIs in AI Safety.
Could you spell out why you think this information would be super valuable? I assume it's something like: you would worry about Jaan's COIs and think his philanthropy would be worse/less trustworthy?
Yeah, apologies for the vague wording - I guess I am just trying to say this is something I know very little about. Perhaps I am biased from my work on climate change, where there is a track record of those who would lose economically (or not profit) from action on climate change attempting to slow down progress on solving it. If mechanisms like this might be at play in AI safety (and that is a big if!), I feel (and this should be looked into more deeply) that there is value in directing even a minimal stream of funding toward someone who just pays attention to the chance that such mechanisms might be beginning to play out in AI safety as well. I would not say it makes the impact of people with COIs bad or untrustworthy, but it might point at gaps in what is not funded. I mean, this was all inspired by the OP saying that PauseAI seems to struggle to get funding. Maybe it is true that PauseAI is not the best use of marginal money. But at the same time, I think it could be true that such funding decisions are at least partially due to incentives playing out in subtle ways. I am really unsure about all this but think it is worth looking into funding someone with "no strings attached" to pay attention to this, especially given the stakes and how EA has previously suffered from too much trust, especially with the FTX scandal.
It's no secret that AI Safety / EA is heavily invested in AI. It is kind of crazy that this is the case though. As Scott Alexander said:
Since we passed the speculation round, we will receive feedback on the application, but haven't yet. I will share what I can here when I get it.
Politics
We are an avowedly bipartisan org and we stan the democratic process. Our messaging is strong because of its simplicity and its appeal to what the people actually think and feel. But our next actions remain the same no matter who is in office: protest to share our message and lobby for the PauseAI proposal. We will revise our lobbying strategy based on who has what weight, as we would with any change of the guard, and different topics and misconceptions than before will likely dominate the education side of our work.
This is why it's all the more important that we be there.
The EA instinct is to do things that are high leverage and to quickly give up causes that are hard or involve tugging the rope against an opponent in favor of something easier (higher leverage). There is no substitute for doing the hard work of grassroots growth and lobbying here. There will be a fight for hearts and minds, conflicts between moneyed industry interests and the population at large, and shortcuts in that kind of work are called "astroturfing". Messaging getting harder is not a reason to leave-- it's a crucial reason to stay.
If grassroots protesting and lobbying were impossible, we would do something else. But this is just what politics looks like, and AI Safety needs to be represented in politics.
I'm highly skeptical about the risk of AI extinction, and highly skeptical that there will be a singularity in our near-term future.
However, I am concerned about near-term harms from AI systems such as misinformation, plagiarism, enshittification, job loss, and climate costs.
How are you planning to appeal to people like me in your movement?
Yes, very much so. PauseAI US is a coalition of people who want to pause frontier AI training, for whatever reason they may have. This is the great strength of the Pause position— it’s simply the sensible next step when you don’t know what you’re doing playing with a powerful unknown, regardless of what your most salient feared outcome is. The problem is just how much could go wrong with AI (that we can and can’t predict), not only one particular set of risks, and Pause is one of the only general solutions.
Our community includes x-risk motivated people, artists who care about abuse of copyright and losing their jobs, SAG-AFTRA members whose primary issue is digital identity protection and digital provenance, diplomats whose chief concern is equality across the Global North and Global South, climate activists, anti-deepfake activists, and people who don’t want an AI Singularity to take away all meaningful human agency. My primary fear is x-risk, ditto most of the leadership across the PauseAIs, but I’m also very concerned about digital sentience and think that Pause is the only safe next step for their own good. Pause comfortably accommodates the gamut of AI risks.
And the Pause position accommodates this huge set of concerns without conflict. The silly feud between AI ethics and AI x-risk doesn’t make sense through the lens of Pause: both issues would be helped by not making even more powerful models before we know what we’re doing, so they aren’t competing. Similarly, with Pause, there’s no need to choose between near-term and long-term focus.
On Pauses
(As you note much of the value may come from your advocacy making more 'mainstream' policies more palatable, in which case the specifics of Pause itself matter less, but are still good to think about.)
I would also be interested in your thoughts on @taoburga's push back here. (Tao, I think I have a higher credence than you that Pause advocacy is net positive, but I agree it is messy and non-obvious.)
Holly, you’re a 10% Pledger; does that mean that some of the money we give you ends up with different charities?
I struggled with how to handle the 10% pledge when I first started seeking donations. I did find it a little hinky to donate to my own org, but also kind of wrong to ask people for donations that end up funding other stuff, even though it’s 100% the employee’s business what they do with the salary they receive, and that doesn’t change just because they do charitable work, etc.
But circumstances have made that decision for me as I’ve ended up donating a considerable amount of my salary to the org to get it through the early stages. Let’s just say I’m well ahead on my pledge!
Do you actually take the salary and donate it, or do you just claim a lower salary and call some hours 'pro-bono'? Obviously the latter is more tax-efficient.
It is actual salary. Since I'm an exempt, salaried employee, it's not clear that I could claim pro bono hours, and unless that was very clearly written into my hire letter, I feel that doing things that way wouldn't be sufficiently in line with the spirit of the pledge. It's possible we could get the tax benefits and deal with my qualms in the future.
I didn't receive salary I was owed before the org was officially formed (waiting for the appropriate structures to pay myself with a W2), all of which is still an account payable to me, and I've foregone additional salary when the org couldn't afford it, which is owed to me as backpay. In order to donate any of the money that's owed to me, we have to process it through payroll and pay payroll tax on it.
At this point, I have many years of 10% donations in backpay. Some of it I'm reserving the right to still claim one day. But I'm processing some as a donation for my year-end giving (when I do the bulk of my giving) this year.
The pledge, for me, is not just about donating the money but about the spiritual hygiene of parting with the money and affirming my priorities, so it's very important to me to actually give money I was in possession of. It could work for hours, but I'd need to have that same knowledge of making the sacrifice as it was happening. I'm not saying this is the correct or necessary way to view the pledge, and I approve of other people using the pledge in the way that best helps them stay in line with their altruistic values.
I'm trying to understand... what does "exempt" mean in the phrase "exempt, salaried employee"?
Do you mean that your salary is part of the expenses of a tax-exempt nonprofit, so people who donate to PauseAI (partly to pay your salary) can deduct this from their taxes if they itemize their returns? And I'm trying to understand the connection between this and the idea of claiming pro-bono hours? Thanks!
Oh sorry, “exempt employee” is a legal term, referring to being exempt from limits on hours, overtime, mandatory lunch breaks, etc. What I meant was I’m not an hourly employee.
https://www.indeed.com/hire/c/info/exempt-vs-non-exempt-employee
Donation mechanics
I know bigger orgs like recurring donations for a few reasons (people likely give more total money this way, and the fundraising case is often based on need, so it’s good not to be holding all your future funding at once), but I think we are currently too small to prefer uncertain future money over a lump sum. Also, because we are fiscally sponsored until we get our own 501(c)(3) status, setting up systems for recurring donations is a bit hairy, and we’ll just have to redo them in a few months. So, perhaps in the future we will prefer recurring; for now a lump sum is great. If it’s easier for you to do recurring, we could set up a recurring Zelle transfer now.
Manifund is also our fiscal sponsor, so we would owe them 5% of our income anyway. In our case, it makes no difference financially, and the platform is more convenient.
Fundraising scenarios
A comment, not a question (but feel free to respond): let's imagine Pause AI US doesn't get much funding and the org dies, but then in two years someone wants to start something similar - this would seem quite inefficient and bad. Or, conversely, Pause AI US gets lots of funding and hires more people, and then funding dries up in a year and they need to shrink. My guess is there is an asymmetry where the harm from an org shrinking for lack of funding is greater than the benefit of growing with extra funding, which I suppose leans towards growing slower with a larger runway, but I'm not sure about this.
The minimal PauseAI US is me making enough to live in the Bay Area. As long as I’m keeping the organization alive, much of the startup work will not be lost. Our 501(c)(3) and 501(c)(4) statuses would come in within the next year, I’d continue to use the systems set up by me and Lee, and I’d be able to keep a low level of programming going while fundraising. I intend to keep PauseAI US alive unless I absolutely can’t afford it or I become convinced I would never be able to effectively carry out our interventions.
I wonder why this has been downvoted. Is it breaking some norm?
Ooooo, I shared it on twitter facepalm
Wait, is that an explanation? Can new accounts downvote this soon?
Yes strange, maybe @Will Howard🔹 will know re new accounts?
Or maybe a few EAF users just don't like PauseAI and downvoted, probably the simplest explanation.
And while we are talking about non-object level things, I suggest adding Marginal Funding Week as a tag.
Yup, new accounts can downvote immediately, unlike on LessWrong where you need a small amount of karma to do so. I can't confirm whether this happened on this post.
Is PauseAI US a 501(c)(3)?
We are fiscally sponsored by Manifund and just waiting for the IRS to process our 501(c)(3) application (which could still take several more months). So, for the donor it's all the same-- we have 501(c)(3) status via Manifund, and in exchange we give 5% of our income to them. Sometimes these arrangements are meant to be indefinite, and the fiscal sponsor does a lot of administration and handles the taxes and bookkeeping. PauseAI US has its own bookkeeper and tax preparer and we will end the fiscal sponsor relationship as soon as the IRS grants us our own 501(c)(3) status.
Additionally, we've applied for 501(c)(4) status for the PauseAI US Action Fund, which will likely take even longer. Because Manifund (and PauseAI US, in our c3 application) have made a 501(h) election, we are able to do lobbying as a c3 as long as it doesn't exceed ~20% (the actual formula is more complicated) of our expenditures, so we probably will not need the c4 for lobbying money for a while, but the structure is being set up now so we can raise unrestricted lobbying money.
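For the curious, here is a rough sketch of how the 501(h) expenditure test works as I understand it -- an illustration of the standard IRS brackets, not tax or legal advice, and not anything specific to our application:

```python
# Rough sketch of the 501(h) "lobbying nontaxable amount" (my understanding of
# the IRS schedule; not tax advice): a sliding percentage of exempt-purpose
# expenditures, capped at $1M overall.

def lobbying_limit(exempt_purpose_expenditures: float) -> float:
    e = exempt_purpose_expenditures
    limit = 0.20 * min(e, 500_000)                        # 20% of the first $500k
    limit += 0.15 * max(min(e - 500_000, 500_000), 0)     # 15% of the next $500k
    limit += 0.10 * max(min(e - 1_000_000, 500_000), 0)   # 10% of the next $500k
    limit += 0.05 * max(e - 1_500_000, 0)                 # 5% of the remainder
    return min(limit, 1_000_000)                          # overall $1M cap

# At a budget of roughly $300k/year, only the first bracket applies:
print(lobbying_limit(300_000))  # 60000.0, i.e. ~$60k of lobbying spending
```

The point is just that at our current size the ~20% approximation above is accurate; the more complicated brackets only matter for much larger budgets.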
Thanks, Holly. If helpful, I am open to more deals like the one I did with Greg, although I suspect there are better options for you via loans.
Hmm, I wonder what we would bet on. There’s no official timeline or p(doom) of PauseAI, and our community is all over the map on that. Our case for you donating to pausing AI is not about exactly how imminently doom is upon us, but how much a grassroots movement would help in concentrating public sentiments and swaying crucial decisions by demanding safety and accountability.
My personal views on AI Doom (https://forum.effectivealtruism.org/posts/LcJ7zoQWv3zDDYFmD/cutting-ai-safety-down-to-size) are not as doomy as Greg’s. I just still think this is the most important issue in the world even at a lower chance of extinction or with longer timelines, and that the crucial time to act is as soon as possible. I don’t think the timeline prediction is really the crux.
You could make a bet about whether PauseAI will have any salient successes, or otherwise be able to point to why it did achieve a reduction in existential risk of, say, half a basis point, in the next five years, according to an external judge such as myself.
No offense to forecasting, which is good and worthwhile, but I think trying to come up with a bet in this case is a guaranteed time suck that will muddy the waters instead of clarifying them. There are, unfortunately, very few crisp, falsifiable hypotheses that get at the cruxes of whether it's better to donate to PauseAI or to animal welfare (given that that's not already clear to Vasco) and that I think would make good bets.
https://x.com/ilex_ulmus/status/1776724461636735244
That is a perspective you could inhabit, but it also seems to contradict the vibe in "Hmm, I wonder what we would bet on"
Well, if someone has a great suggestion, that's the objection it has to overcome
Thanks for following up, Holly.
What is the date by which you guess there is a 50% chance of the human population being 7 billion or lower (due to an AI catastrophe or otherwise)?