Physical AI Safety
Drawing on work done in the former Soviet Union to improve safety in its bioweapons and nuclear facilities (e.g., free consultations and installation of engineering safety measures, and at-cost upgrades of infrastructure such as ventilation and storage facilities), we could develop a standard set of physical and infrastructure technologies to monitor AI development labs and hardware, and to provide physical failsafes in the event of unexpectedly rapid takeoff (e.g., a FOOM scenario).

Although such scenarios are unlikely, standard guidelines adapting current best practices for data center security (e.g., restrictions on devices, physical air gaps between critical systems and the broader world, and extensive onsite power monitoring with backup generators) could be critical. They would reduce the anxiety over physical and digital security that can push AI development programs toward risk-taking behaviors, such as rushed builds, hidden locations, or inappropriate dual-use or shared facilities that weaken control over data flows.

In particular, low-tech physical hardware such as low-voltage switches has already provided demonstrable benefit in safeguarding high-tech, high-risk activity: in the Goldsboro B-52 crash, a single low-voltage switch prevented disaster after numerous more sophisticated controls failed in the chaotic environment of a bomber breaking apart in mid-air. These technologies carry low dual-use risk and low development and installation costs, but as physical hardware they are easily overlooked, whether through lack of interest, the perceived risk of adding friction and failure points to the main mission, or the belief that high-tech safeguards are more 'reliable' or 'sophisticated'.
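The fail-safe property of a low-voltage interlock chain can be sketched in a few lines of simulation. This is purely illustrative of the logic (a series circuit where any open switch cuts power, and the default state is "off"); the class and switch names below are hypothetical and do not describe any real facility's design:

```python
# Illustrative sketch: the fail-safe logic of a series interlock chain.
# Power reaches the protected system only while EVERY switch in the
# series circuit is closed; any single open switch breaks the circuit.

class InterlockChain:
    """Series circuit of normally-open switches guarding a power feed."""

    def __init__(self, switch_names):
        # Fail-safe default: all switches start open, so power is off.
        self._closed = {name: False for name in switch_names}

    def close_switch(self, name):
        self._closed[name] = True

    def open_switch(self, name):
        self._closed[name] = False

    def power_enabled(self):
        # The circuit conducts only if every switch is closed.
        return all(self._closed.values())


chain = InterlockChain(["operator_key", "door_sensor", "arming_switch"])
chain.close_switch("operator_key")
chain.close_switch("door_sensor")
assert not chain.power_enabled()   # one open switch still blocks power

chain.close_switch("arming_switch")
assert chain.power_enabled()

chain.open_switch("door_sensor")   # any fault reverts to the safe state
assert not chain.power_enabled()
```

The design choice worth noting is the default: the system must actively satisfy every condition to receive power, so a broken sensor, a cut wire, or operator inaction all fail toward "off" rather than "on" — the same property that made the Goldsboro switch effective when everything around it failed.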
Avenues for progress include establishing an international standard for physical security at AI facilities, sponsoring or subsidizing installation or retrofits in new and existing facilities, and advocating within AI organizations for attention to this and similar problems.