Could the latent effects of Covid worsen AI alignment efforts and/or other x-risk responses?
This is very much an 'I suspect (and hope) I'm wrong' question, but I thought it was still worth checking the rationale for this not being seen as a major issue. Essentially, is it likely that the long-term and latent effects of Covid on cognitive performance could significantly damage global responses to x-risks?
With studies finding cognitive decline and brain shrinkage after even mild Covid infections (with IQ drops in some severe cases larger than those seen in stroke patients), and with Omicron variants, though less deadly, apparently still causing greater apoptosis of previously healthy brain cells than earlier variants, is it possible that mass infection is causing some level of general cognitive decline? And if this is happening, to some extent, to most people at once, would we even notice the extent of the decline?
If so, even if the decline is small or negligible in most cases, and if the raw ability to handle cognitive complexity is an important part of effective political decision-making, could small (and therefore largely unnoticed) but population-wide cognitive declines be enough to tip the balance against effective responses to existing x-risks?
Add in potential further declines from repeat infections and cumulative damage, and might key political decision-makers be responding to AI policy in unrecognised, biologically impaired ways during a crucial period for the field?
Equally, could this affect responses to other, previously more manageable risks? For example, for nuclear risk (using admittedly arbitrary numbers), suppose each year carried a 1% pre-Covid probability of nuclear war. If Covid-related cognitive decline pushed that to even something like 1.1% per year, such a small increase could still be significant given the severity of the outcome, as the sketch below illustrates.
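As a rough illustration only, here is a minimal sketch (using the post's admittedly arbitrary 1% and 1.1% figures, and assuming independent years) of how a small bump in annual risk compounds over decades:

```python
def cumulative_risk(annual_p: float, years: int) -> float:
    """Probability of at least one event across `years` independent years."""
    return 1 - (1 - annual_p) ** years

# Arbitrary illustrative figures from above: 1% pre-Covid vs. 1.1% post-Covid.
for p in (0.010, 0.011):
    print(f"annual {p:.1%} -> 50-year cumulative {cumulative_risk(p, 50):.1%}")
```

Over a 50-year horizon, the cumulative probability rises from roughly 39.5% to 42.5%: about three extra percentage points of exposure from a seemingly tiny annual shift.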
Counterpoints
As a potential counterpoint, perhaps Covid-related cognitive decline just isn't that serious. But with the hidden long-term consequences of many repeated infections perhaps not yet showing significantly, and with a reported 60% increased risk of developing a new mental illness after infection, perhaps raw intelligence decline combined with mental-health shifts is still worth considering?
Alternatively, perhaps population-level cognitive decline simply doesn't affect decision-making enough to be significant, or there are genuinely significant cognitive declines among key decision-makers that are being counterbalanced by other organisational, health and tech improvements?
Future considerations
Finally, if Covid-related decline is a serious possibility across repeated, even seemingly mild, infections, might it even be helpful, other things being equal, to draw key decision-makers and policy specialists disproportionately from those who have had fewer infections, or who appear genetically resistant even to first infection?
My intuition is that something feels wrong or missing from this line of reasoning, but with AI regulation and alignment perhaps already being poorly managed by governments, could our efforts to avert a larger, existential crisis still be hampered by the lingering effects of our last global crisis?