As someone who runs an organization that does a lot of biorisk work, it's incredibly expensive in staff time and logistics to receive small donations - but if you're giving more than, say, $5,000, you could just email the organizations to ask, and I'm sure they could figure it out.
But as I answered, CHS does have a donation page. (And NTI does allow donations, with a box to indicate where you'd like the money to go, but it's unclear to me if that actually lets you direct it only to bio.)
LLMs are not AGIs in the sense being discussed; they are at best proto-AGI. That means the logic fails at exactly the point where it matters.
When I ask a friend to give me a dollar when I'm short, they often do so. Is this evidence that I can borrow a billion dollars? Should I go on a spending spree on the basis that I'll be able to get the money to pay for it from those friends?
When I lift, catch, or throw a 10 pound weight, I usually manage it without hurting myself. Is this evidence that weight isn't an issue? Should I try to catch a 1,000 pound boulder?
No one is really suggesting that a unilateral "pause" would be effective, but there is growing support for some non-unilateral version as an important approach to be negotiated.
There was a quite serious discussion of the question, and of different views, on the forum late last year (which I participated in), summarized by Scott Alexander here: https://forum.effectivealtruism.org/posts/7WfMYzLfcTyDtD6Gn/pause-for-thought-the-ai-pause-debate
Confirmed; he does work in this area, there's independent reporting about his work on these topics, and he has a Substack about his very relevant legal work: https://www.nlrbedge.com/
I think there are useful analogies between specific aspects of bio, cyber, and AI risks, and it's certainly the case that when the biorisk is based on information security, it's very similar to cybersecurity, not least in that it requires cybersecurity! And the same is true for AI risk; to the extent that there is a risk of model weights leaking, this is in part a cybersecurity issue.
So yes, I certainly agree that many of the dissimilarities with AI are not present when analogizing to cyber. However, more generally, I'm not sure cybersecurity is a good analogy for biorisk, and I've heard that computer security people often dislike the comparison of computer viruses to biological viruses for that reason, though the two certainly share some features.
I think this also ignores the counterfactual world with less safety research: there, the equivalent advances, funded because of commercial incentives, come from less generalizable safety work, and we end up with similarly capable but less well prosaically aligned systems. (And I haven't really laid out this argument before, but I think it generalizes to the counterfactual world in which neither OpenAI nor DeepMind was inspired by AI safety concerns.)