An adversary who knows that his opponents’ troops have been inoculated against anthrax can switch his battle plans to smallpox or plague—or to an agent for which no vaccine exists. -Ken Alibek, Biohazard
A challenge for reducing bio risk is that many of the risks come from adversaries. Adversaries can react to our interventions, so developing countermeasures may be less effective than one might naively expect, due to 'substitution effects'.[1] There are several distinct substitution effects:
- ‘Switching’ - we find a countermeasure for X, adversary then switches from X to developing Y
- ‘Escalating’ - we find a countermeasure for X, adversary modifies X to X’ to overcome countermeasure[2]
- ‘Attention Hazard + Offense Bias’ - we investigate a countermeasure for X, but fail. The adversary was not previously developing X, but, seeing our interest in X, starts to develop it.
  - This can be combined with escalation: even if we successfully find a countermeasure for X, the adversary is now on the general X pathway and starts developing X’
  - It can also simply be a timeframe effect, if the adversary can produce X before we successfully get the countermeasure to X (although here it matters whether the adversary is more like a terrorist, who would deploy X as soon as it was created, or a state program that would keep X around for a while, imposing some ongoing accident or warfare risk until the countermeasure for X was found).
  - It can be a problem if we think the countermeasure is imperfect enough that the attention hazard outweighs the value of developing the countermeasure.
- ‘Exposing Conserved Vulnerabilities’ - imagine that there are 10 possible attacks. For 9 of them, we could quickly develop a countermeasure in an emergency, but for one, finding a countermeasure is impossible (and we don’t know which is which in advance). We research countermeasures for all 10 attacks, solve 9 of them, but in doing so reveal the attack that is impossible to counter. The adversary then picks that one, leaving us worse off than if we had remained ignorant and waited for an emergency (e.g. if we assume that, before the research, the adversary would have had only a 10% chance of picking that attack). By picking up the low-hanging fruit, we’ve ‘funneled’ our adversary towards the weak points (a rough arithmetic sketch follows below).
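To make the arithmetic in this example explicit (a rough sketch, assuming the adversary chooses uniformly among the 10 attacks before the research, and rationally exploits the revealed gap afterwards):

$$P_{\text{before}}(\text{uncounterable attack chosen}) = \tfrac{1}{10} = 0.1, \qquad P_{\text{after}}(\text{uncounterable attack chosen}) \approx 1$$

Under these assumptions, if the other nine attacks could have been countered quickly in an emergency anyway, the research has increased the probability of facing the unstoppable attack roughly tenfold.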
Substitution effects will have varying consequences for global catastrophic biological risks (GCBRs). In a worst-case scenario, finding countermeasures to more mundane threats will push adversaries towards GCBR territory (either by engineering those mundane agents more heavily, or by switching to entirely new kinds of attack). However, this is counterbalanced by the fact that bioweapons (‘BW’) in general might be less attractive when there are countermeasures for a lot of them.
- ‘Reduced BW appeal’ - An adversary has a program that is developing X and Y. We find a cure for X, which reduces the appeal of the entire program, causing the adversary to give up on both X and Y.
Better technology for attribution (e.g. tracing the origin of an attack or accident) is one concrete example that produces ‘reduced BW appeal.’ Better attribution is unlikely to dissuade development of bioweapons oriented towards mutually assured destruction (and we might expect most GCBRs to come from such weapons). But by reducing the strategic/tactical appeal of bioweapons for assassination, sabotage, or ambiguous attacks, better attribution lowers the overall appeal of a BW program, which could spill over into reducing the probability of more GCBR-style weapons.
One key question around substitution effects is the flexibility of an adversary. I get a vague impression from reading about the Soviet program that many scientists were extreme specialists, focusing on only one type of microbe. If this is the case, I would expect escalation risk to be greater than risks of switching or attention hazards (e.g. all the smallpox experts try to find ways around the vaccine, rather than switching to Ebola[3]). This is especially true if internal politics and budget battles are somewhat irrational and favor established incumbents (e.g. so that the smallpox scientist gets a big budget even if their project to bypass a countermeasure is unjustifiable).
Some implications:
- Be wary of narrow countermeasures
- Be hesitant to start an offense-defense race unless we think we can win
- Look for broad-spectrum countermeasures or responses, which are more likely to eliminate big chunks of the risk landscape and to reduce the overall appeal of bioweapons
Thank you to Chris Bakerlee, Anjali Gopal, Gregory Lewis, Jassi Pannu, Jonas Sandbrink, Carl Shulman, James Wagstaff, and Claire Zabel for helpful comments.
[1] Analogous to the 'fallacy of the last move' - H/T Greg Lewis ↩︎
[2] Forcing an adversary to escalate from X to X' may still reduce catastrophic risk by imposing additional design constraints on the attack ↩︎
[3] Although notably the Soviet program attempted to create a smallpox/Ebola chimera virus in order to bypass smallpox countermeasures ↩︎
I particularly agree with the last point on focussing on purely defensive (not net-defensive) pathogen-agnostic technologies, such as metagenomic sequencing and resilience measures like PPE, air filters and shelters.
If others in the longtermist biosecurity community share this biodefense model, I think it'd be important to point towards these countermeasures in introductory materials (80k website, reading lists, future podcast episodes).
The central point of this piece is that a bioattacker may use biodefense programs to inform their method of attack, and adapt their approach to defeat countermeasures. This is true. I think the point would be strengthened by clarifying that this adaptability would not be characteristic of all adversaries.
We also face the prospect of being attacked at unpredictable times by a large number of uncoordinated, unadaptable adversaries launching one-off attacks. They may evade countermeasures on their first attempt, but might not be able to adapt.
A well-meaning but misguided scientist could also accidentally cause a pandemic with an engineered pathogen that was not intended as a bioweapon, but as an object of scientific study or a model for biodefense. They might not think of themselves as an adversary, or be thought of that way, and yet the incentives they face in their line of research may lead them into behaviors similar to those of an adversary.
In general, it seems important to develop a better sense of the range of adversaries we might face. I'm agnostic about what type of adversary would be the most concerning.
To escalate a bioweapon, researchers would have to engage in technically difficult and complicated engineering or breeding/selection efforts. To switch to a different bioweapon, researchers could potentially just select a different, presently existing alternative: they'd be retraining scientists on known protocols and using already-existing equipment. Switching is almost certain to be much easier than escalating.
If you've already done the costly work of engineering two threats up to similarly enhanced levels of efficacy for a particular purpose, then this might be true. Otherwise, either option involves substantial work before you regain your previous level of efficacy.
Also, you seem to be coming at this from the perspective of the program, whereas ASB seems to be coming at it from the level of particular groups or institutes. Even if it would be rational for the program to switch rather than escalate, that doesn't mean this would actually happen, if internal politics favoured those who would prefer to escalate.
I’m not sure what you mean by “groups/institutes” vs. “programs.” It’s not clear to me which is a subset of which.
It wasn’t clear to me that the OP was referring to the goal of achieving the same bioweapon potency by switching or escalating. Instead, it seemed like they were referring to a situation in which threat A had been countered and was no longer viable. In that situation, to have any threat at all, they can either “escalate” A to A’ or switch to some alternative B, which might be less potent than A was prior to the countermeasure, but is still better than A now that it’s been countered.
In a case like this, the adversary will probably find it easier to switch to something else, unless the countermeasure is so broad that all presently-existing pathogens have been countered and they have no other options.
But I think this points not to a flaw in the OP’s argument, but to a general need for greater precision in discussing these matters. Defining the adversary, their goals, their constraints, and the same characteristics for their opponents would all make it much easier, though still difficult, to make any empirical guesses about what they’d do in response to a threat or countermeasure.
I was using "program" in the sense of the "Soviet bioweapons program" – a large government program spanning many different scientists at many different institutes. Hence the principal/agent problem gestured at by the OP.
I disagree, for the reasons already stated.
I agree different types of adversary will likely differ on this sort of thing, but this particular part of the OP was fairly explicitly talking about the Soviet program and potential future programs like it. You disagreed with that part and made a strong and general statement ("Switching is almost certain to be much easier than escalating."). I think this statement is both plausibly wrong for the scenario specified (Soviet-like programs) and too strong and general to apply across the space of other potential adversaries.
Thanks for clarifying what you mean by "program."
When we look at the Soviet BW program, they engaged in both "switching" (i.e. stockpiling lots of different agents) and "escalating" (developing agents that were heat-, cold-, and antibiotic-resistant).
If the Soviets discovered that Americans had developed a new antibiotic against one of the bacterial agents in their stockpile, I agree that it would have been simpler to acquire that antibiotic and use it to select for a resistant strain in a dish. Antibiotics were hard to develop then, and remain difficult today.
Let's say it's 2032 and a Soviet-style BW program is thinking about making a vaccine-resistant strain of Covid-19. This would have to be done with the fact in mind that their adversaries could rapidly develop, manufacture, and update existing mRNA vaccines, and also have a portfolio of other vaccines against it. Any testing of the novel strain's vaccine resistance and of its infectiousness/deadliness would probably have to be done on animals in secret, providing limited information about its effectiveness as an agent against humans.
Depending on the threat level they were trying to achieve, it might be simpler to switch to a different virus that didn't already have a robust vaccine-making infrastructure against it, or to a bacterial agent against which vaccines are far less prevalent and antibiotic development is still very slow.
So I agree that my original statement was too sweeping. But I also think that addressing this point at all requires quite a bit of specific context on the nature of the adversaries, and the level of defensive and offensive technologies and infrastructures.