
Cullen O’Keefe, Jade Leung, Markus Anderljung[1] [2]

Summary

Standard-setting is often an important component of technology safety regulation. However, we suspect that existing standard-setting infrastructure won’t by default adequately address transformative AI (TAI) safety issues. We are therefore concerned that, on our default trajectory, good TAI safety best practices will be overlooked by policymakers, because efforts to identify, refine, recommend, and legitimate those practices will be absent or too marginal to matter in time for their incorporation into regulation.

Given this, we suspect the TAI safety and governance communities should invest in capacity to influence technical standard-setting for advanced AI systems. There is some urgency to these investments, as standard-setting moves on slow institutional timescales. Concrete suggestions include deepening engagement with relevant standard-setting organizations (SSOs) and with AI regulation, translating emerging TAI safety best practices into technical safety standards, and investigating what an ideal SSO for TAI safety would look like.

A plausible high-level plan for achieving TAI safety is to (a) identify state-of-the-art technical safety and security measures that reduce the probability of catastrophic AI failures, then (b) ensure (such as by legal mandate) that actors at the frontier of AI development and deployment adopt those measures.

This general structure of first identifying and then mandating safety measures is obviously not unique to AI. How do lawmakers choose which substantive safety measures to legally mandate for other technologies? Several options are possible and used in practice, including encoding such requirements directly into legislation, or delegating such decisions to regulatory agencies. One common strategy is to have the law incorporate by reference (i.e., “point” to) existing technical safety standards[3] previously developed by private SSOs. Another strategy, common in the EU, is first to pass generally phrased regulation, and later have it operationalized via standards developed by SSOs.[4]

Standardization accomplishes several important things. First, it provides a structured process for a consensus of technical safety experts to identify and recommend the best, well-tested technical safety ideas. As a result, policymakers have to spend less time developing governmental standards and exercise less non-expert judgment about which safety requirements should be adopted. Notably, standards can also be updated more rapidly than regulation, due to lower bureaucratic and legal overhead, making it easier to keep pace with technical developments. Second, standardization takes emerging safety practices that are under-specified or heterogeneous and restates them in a precise, consistent, and systematized form that is more readily adoptable by new actors and appropriately clear for a legal requirement. Third, supranational SSOs provide a routinized and reliable infrastructure for facilitating international harmonization and regulation via standards. Finally, well-structured SSOs operate on the basis of multistakeholder consensus, and therefore both aim to generate and provide evidence of politically viable standards.

In the US, the path from emerging best practice to internationally harmonized standard often roughly follows this pattern:

  1. Informal, loose networks of industry safety experts identify, develop, and converge on safety-promoting best practices.

  2. Private[5] SSOs elevate some of these best practices into standards, through a well-defined, multistakeholder, consensus-driven process with procedural safeguards (such as open and equitable participation, a balance of represented parties, and opportunities for appeal).[6]

  3. Assuming the government passes regulation for which some of these standards are appropriate, this then provides a route via which these standards are incorporated into domestic law.[7] [8]

  4. International bodies like the ISO attempt to harmonize standards across countries, as well as between SSOs; via these mechanisms, standards developed in e.g. the US could eventually have international impact.

To be clear, we do not necessarily think this is the best way to approach technology regulation. Our claim is primarily empirical: privately developed standards are one of the main (and in the US, legally preferred) sources of mandated safety measures, and they are likely to remain so. There are substantial downsides to this approach, such as:

  • Increased risk of industry capture, since industry employees are heavily represented in SSOs.
  • A built-in preference for uniformity over experimentation and competition in regulatory approaches.
  • Slow and bureaucratic processes for setting standards (though less slow and bureaucratic than many governmental processes).
  • Reduced democratic accountability and participation, since SSOs are private organizations.
  • Lack of access to the incorporated standards, since the standards often cost hundreds of dollars each to access.[9]

Importantly, we also think that standardization can be a useful lever for safety even if standards are never incorporated into hard law. Established safety standards can set a natural normative “floor” against which AI developers (especially those represented in the standard-setting process) can be evaluated. Special antitrust protections for bona fide standard-setting activities make standard-setting a less risky way for labs to jointly work on safety.[10] Standardization of informal and heterogeneous safety best practices can lower the cost of adopting such practices, leading to broader coverage.[11] Standards can also form the substantive primitives for private certification and auditing schemes.
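To make that last point concrete, here is a purely hypothetical sketch of how a standard’s clauses could become the primitives of an automated conformity check. The clause texts and the `conformity_report` helper are invented for illustration; no such machine-readable TAI safety standard exists today.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Requirement:
    """One clause of a hypothetical AI safety standard."""
    clause_id: str
    description: str

# Invented example clauses; not drawn from any real standard.
HYPOTHETICAL_STANDARD = [
    Requirement("4.1", "Pre-deployment red-teaming report is on file"),
    Requirement("4.2", "Incident-response process is documented and tested"),
]

def conformity_report(evidence: dict) -> dict:
    """Toy conformity assessment: mark each clause met or unmet,
    based on evidence an auditor has collected."""
    return {
        req.clause_id: bool(evidence.get(req.clause_id, False))
        for req in HYPOTHETICAL_STANDARD
    }

print(conformity_report({"4.1": True}))  # {'4.1': True, '4.2': False}
```

Real conformity assessment of course involves auditors and evidence far richer than a boolean checklist; the point is only that precise, systematized clauses are what make such certification and auditing schemes possible.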

Emergence of Consensus AI Safety Best Practices

Part of what excites us about standardization as a tractable approach to TAI governance is the increasing emergence of best practices in AI safety with increasingly broad buy-in. For example, a number of industry, academic, and civil society actors appear to endorse and/or are willing to discuss some fairly concrete measures to improve alignment, safety, and social impact throughout the AI lifecycle, including (but not limited to) measures already emerging at leading labs.[12]

We think these measures may be good candidates for formalization into standards in the near future. As AI safety and policy research matures, currently theoretical, vague, or nascent ideas may develop into consensus best practices, adding to the list of candidates for standardization. Of course, the goal of existential-risk-focused AI safety research is to eventually produce training and testing methods that can, when applied to an AI system, reliably improve that system’s alignment with human values. We hope that such methods will be (or could be made) sufficiently clear and universalizable to turn into legally appropriate standards.

AI Standardization Today

Standardization may be an appropriate next step for some (but by no means all)[13] consensus best practices.

A number of SSOs currently develop standards relevant to AI safety. For example, the International Organization for Standardization (“ISO”) and International Electrotechnical Commission (“IEC”) run a joint subcommittee on AI, which has promulgated standards on AI trustworthiness, robustness, bias, and governance. The Institute of Electrical and Electronics Engineers (“IEEE”) has also published a number of AI standards. The U.S. National Institute of Standards and Technology (“NIST”) is developing an AI Risk Management Framework.

Best practices, standardization, and the complementary process of conformity assessment are beginning to play an important role in the regulation of AI. The Federal Trade Commission has repeatedly implied that compliance with best practices and “independent standards” in ethical AI may be required by—or at least help evidence conformity with—various laws it enforces. In its Inaugural Joint Statement, the U.S.–EU Trade and Technology Council announced an intent to prioritize collaboration on AI standard-setting. Standardization and conformity assessments for certain high-risk AI systems play an important role in the proposed EU Artificial Intelligence Act. In short, governments appear poised to rely heavily on standardization for AI regulation.

Actionable Implications for the TAI Safety and Governance Communities

Our core thesis is that technical AI safety standards can and will be the building blocks for many forms of future AI regulation. We’ve laid out the case briefly above; additional analysis and refinement of this thesis could be valuable. If this thesis is true of the most existentially important forms of AI regulation, this has important and actionable implications for the TAI safety and governance communities, many of which were presciently identified by Cihon (2019). Thus, this post serves as a renewed call to take AI safety standardization seriously. Concretely, we have several ideas on how to do this in the near- and medium-term.

First, safety-conscious AI practitioners should consider advancing standardization of TAI-relevant safety best practices. Although we know of, and appreciate, several TAI-concerned individuals who have participated in AI safety standard-setting, we suspect that TAI-focused perspectives are still underrepresented in the processes of the various SSOs already developing AI safety, security, and governance standards. This might not be a problem today, but if those standards are increasingly relied upon by policymakers for substantive AI regulation, TAI perspectives and priorities might not be adequately represented or considered legitimate, and we will lack routes to promote TAI safety best practices once they are discovered. We therefore renew Cihon (2019)’s call for strategic engagement between the TAI safety communities and AI SSOs. For example, (more) AI safety researchers may consider joining the membership of such SSOs and serving on relevant committees.[14]

For similar reasons, TAI safety researchers and practitioners should consider engaging seriously with regulatory efforts in jurisdictions where regulation typically precedes standards. This notably includes forthcoming EU AI regulation and accompanying standard-setting processes, especially if we should expect such regulation to diffuse globally.

As the TAI safety community converges on best practices for frontier systems, we should proactively push for them to be refined into technical standards. An intermediate step here might look like creating fora where safety practitioners from across organizations can easily share and refine safety best practices and other lessons learned,[15] then sharing these publicly in concrete form.

We’d also encourage proper analysis of the adequacy of current AI-relevant SSOs. If it seems they might be inadequate at dealing with TAI safety issues, we should get to work investigating what new SSOs tailored to TAI safety issues might look like. Ideal features of such an SSO would likely include:

  • Disciplined focus on TAI safety issues, with supporting institutional rules (e.g., bylaws) and aligned leadership to maintain that focus.
  • Exceptionally high transparency and accessibility (e.g., technical standards are freely available, including in multiple relevant languages, for easy use, reference, and critique).[16]
  • Ability to very rapidly initiate or update standards.
  • Calibrated communication of the safety value of standards (i.e., communicating how much a given standard, when properly applied, reduces worst-case risks).
  • Design of multiple layers of standards, including organization- and process-level standards (e.g., organizations are required to make extensive good-faith efforts to identify and remedy safety issues, and are not permitted to infer that their systems are safe merely because they’ve checked off a list of object-level system requirements).
  • Low implementation costs for safety standards, such as through provision of how-to guides for implementation.
  • ANSI accreditation, for credibility and legitimacy reasons.

Like many things in governance, learning to influence and implement standardization well will require iteration and experience. We shouldn’t assume that we can simply “tack on” standardization after discovering AI safety solutions. We suspect that such solutions will be more consistently, quickly, and smoothly adopted and eventually legally codified if there is already a nimble, well-functioning, respected, legitimate, TAI-oriented standardization infrastructure to translate our best collective safety measures into standards. Creating such an infrastructure will take time, but seems tractable if we invest our efforts efficiently and strategically in this space.

Conclusion

To summarize:

  1. Technical standards form the fundamental building blocks of many technology regulation regimes, and could plausibly form the fundamental building blocks of TAI-relevant regulation.
  2. Given (1), the TAI safety and governance communities should ensure that there exist SSOs that can efficiently elevate AI safety best practices into technical standards that are, in substance and form, appropriate for legal and regulatory use.
  3. It’s not clear that long-term safety priorities are currently well represented in existing AI standard-setting efforts, or that the current structure and procedures of AI-relevant SSOs are appropriate to the challenges that TAI may pose.
  4. If existing SSOs are inadequate, we should have a plan for improving existing SSOs or creating new ones, particularly to ensure they are focused on, and nimble in responding to, the evolving challenges of advanced AI safety. This would take several years.
  5. We may therefore wish to start investing in answering (3)—and then possibly working on (4)—soon.

Concrete steps we propose include:

  1. TAI safety researchers and practitioners should consider joining and influencing existing TAI-relevant SSOs, both to improve AI safety standard-setting at the object level and to learn more about how it works.
  2. For similar reasons, TAI safety researchers and practitioners should consider engaging with regulatory efforts in jurisdictions where regulation typically precedes standards.
  3. TAI safety researchers should actively drive convergence on best safety practices for frontier systems, and refine those practices into technical standards suitable for integration into law.
  4. TAI safety and governance researchers and practitioners should analyze whether existing AI SSOs are adequate for the needs of TAI standard-setting, including analyzing which standardization processes are going to be most important to influence today.[17] If existing efforts seem likely to be inadequate, we should design and possibly build new standard-setting infrastructure. If you are interested in working on this and think that we could help, or have valuable insight regarding AI safety standard-setting, please reach out to us at tai-standards[at]googlegroups.com.

Notes


  1. Thanks to Jonas Schuett, Joslyn Barnhart, Miles Brundage, and Will Hunt for comments on earlier drafts of this post. All views and errors our own. ↩︎

  2. This post is written in our individual capacities, rather than in our capacities of employment or affiliation with particular organizations. ↩︎

  3. “Standardization” is defined as “[c]ommon and repeated use of rules, conditions, guidelines or characteristics for products or related processes and production methods, and related management systems practices.” Off. of Mgmt. & Budget, Exec. Off. of the President, OMB Circ. No. A-119, Federal Participation in the Development and Use of Voluntary Consensus Standards and in Conformity Assessment Activities § 3(a) (1998) (hereinafter “1998 Circular A-119”), https://perma.cc/Y32D-R2JQ. Standards can include “[t]he definition of terms; classification of components; delineation of procedures; specification of dimensions, materials, performance, designs, or operations; measurement of quality and quantity in describing materials, processes, products, systems, services, or practices; test methods and sampling procedures; or descriptions of fit and measurements of size or strength.” Id. We are here primarily focused on standards that attempt to improve safety. Other standards (perhaps most) are focused on promoting interoperability or reducing information costs. ↩︎

  4. In the EU, this responsibility typically falls on the “European Standards Organizations” (ESOs), some of which work on requests from the EU Commission, e.g. in preparation of forthcoming regulation such as the AI Act. The most important ones are the European Committee for Standardisation (CEN), the European Committee for Electrotechnical Standardisation (CENELEC), and the European Telecommunications Standards Institute (ETSI). The EU’s recent Strategy on Standardisation is a good place to get an overview of EU standard-setting and its approach to engagement with international SSOs. ↩︎

  5. In some other countries, governments take a much more active role in standard-setting. ↩︎

  6. Some might worry that these due process requirements pose a possible risk as a source of distraction, obfuscation, or delay in setting TAI safety standards. We share this concern, which is why we propose investigating the creation of a new SSO that could retain a strong focus on TAI, with corresponding adaptation of its due process requirements (e.g., faster turnaround times than most SSOs achieve). ↩︎

  7. See also Off. of Mgmt. & Budget, Exec. Off. of the President, OMB Circ. No. A-119, Federal Participation in the Development and Use of Voluntary Consensus Standards and in Conformity Assessment Activities § 2(e) (2016) (hereinafter “2016 Circular A-119”), https://perma.cc/KUV8-VWN8; National Technology Transfer And Advancement Act Of 1995, Pub. L. No. 104–113, 110 Stat. 775 (1996). ↩︎

  8. To be clear, the US federal government retains the option to develop its own standards outside of this framework. See 2016 Circular A-119 § 5(c). ↩︎

  9. SSOs defend the costs to access as necessary to recoup the costs of standards development and maintenance. Standards incorporated by reference into US regulations can be freely viewed. One important goal for the TAI safety and governance communities is ensuring that existentially important AI safety standards are freely available, unlike most safety standards. ↩︎

  10. See 15 U.S.C. § 4302. ↩︎

  11. For example, companies can reduce the amount of discovery and tinkering required to achieve some goal by referencing an appropriate standard. Consistent standards can also foster an ecosystem of actors specialized in the relevant standards, who can transfer those skills to other appropriate contexts. ↩︎

  12. E.g., at Anthropic, DeepMind, OpenAI, and elsewhere. ↩︎

  13. In particular, standard-setting is a time-consuming and expensive process. These costs may not always be worth the benefits of standardization. ↩︎

  14. To be clear, we do not consider such involvement to be an obvious unalloyed good. The time of AI safety researchers and engineers is very valuable, and they should not reallocate it lightly. ↩︎

  15. In so doing, they will have to take care not to run afoul of antitrust laws. ↩︎

  16. NB: This is not the case with most existing SSOs! ↩︎

  17. Relevant questions include: Which standards are likely to be most relevant for future frontier systems? In which order will standards from influential standard-setting bodies come out? Which standards are most likely to see global diffusion? For example, will the EU AI Act, and its accompanying standards, diffuse globally? Should we expect the NIST AI Risk Management Framework to affect relevant ISO, IEC, or ESO standards? ↩︎

Comments

Thanks for the article! I agree. Like it or not, standards are going to be created, and regulators (FTC, FDA, etc.) will likely rely on them.

One tangible area to work on: publicizing well-researched, best-practice 'safe/aligned' implementations of LLMs. Given the resource challenges that organizations like NIST have, they will likely put a lot of weight behind such research.

I'm working with NIST as part of my master's dissertation to 'operationalize the risk management framework'. If you'd like to discuss, please reach out to samyoon@hks.harvard.edu.

> The U.S. National Institute of Standards and Technology (“NIST”) is developing an AI Risk Management Framework.

Just a sidenote for anyone interested in this. There is an existing effort from some folks in the AI safety community to influence the development of this framework in a positive direction. See Actionable Guidance for High-Consequence AI Risk Management (Barrett et al. 2022).

It's great to see this renewed call for safety standardization! A few years after my initial report, I continue to view standardization of safety processes as an important way to spread beneficial practices and as a precursor to regulation, as you describe. A few reactions to forward the conversation:

1. It's worth underlining a key limitation of standards, in my view: it's difficult for them to influence the vanguard. Standards are most useful in disseminating best practices (from the vanguard, where they're developed, to everyone else) and thus raising the safety floor. This poses obvious challenges for standards' use in alignment. Though not insurmountable, effective use of standards here would be a deviation from the common path in standardization.

2. Following from 1, a dedicated SSO for AI safety that draws from actors concerned about alignment could well make sense. One possible vehicle could be the Joint Development Foundation.

3. I appreciate the list of best practices worth considering for standardization. These are promising directions, though it would be helpful to understand whether there is much buy-in from safety experts. A useful short-term intervention: create a (recurring) expert survey that measures the perceived maturity of candidate best practices and their priority for standardization.

4. I agree that AI safety expertise should be brought to existing standardization venues, and also with your footnote 14 caveat that the opportunity cost of researchers' time should not be treated lightly. In practice, leading AI labs would benefit from emulating large companies' approaches: dedicated staff (or even teams) to monitor developments at SSOs and to channel expertise (whether by inviting an expert researcher to one SSO meeting or by circulating SSO submissions internally for AI safety researcher feedback) in a way that does not overburden researchers. At the community level, individuals may be able to fill this role, as Tony Barrett has with NIST (as Evan Murphy linked, his submission is worth a close read).

5. I appreciate your identification of (lack of) transparency as a pitfall of many SSOs and a point to improve. Open availability of standards should be encouraged. I'd go further and encourage actors to be transparent about their engagement in standardization: publish blogs/specifications for wider scrutiny. Transparency can also increase data for researchers trying to measure the efficacy of standards engagement (itself a challenging question).

6. It's worth underlining the importance of standards to implementing the EU AI Act as currently envisioned. Even if the incentives are not such that we see a Brussels Effect for AI, the standards themselves may be expected to be used beyond the single market. This would mean prioritizing engagement in CEN-CENELEC to inform standards that will support conformity assessment.
 

Thank you for sharing! Great post and I’m glad there’s more attention going towards standard-setting activities. Some misc. ‘off the top of my head’ thoughts:  

  • You’re right to highlight that standards are not a panacea and can be difficult in practice:
    • Looking into when and why companies deviate from standards would be a useful area of study. What would the Volkswagen emissions scandal look like for TAI?
       
    • As you mention, it might be difficult to find consensus where other stakeholders are not necessarily aligned or do not have the same incentives. A potentially useful thought experiment could be “if we were negotiating standards with Yann LeCun, where/why would we disagree?”
       
    • Trade secrets and intellectual property considerations are also important in this process: this piece on 5G standards and Huawei is quite illustrative. This could be either a blocker or an opportunity depending on how you see it.
       
    • Geopolitical challenges (see this and this) might make things a bit more complicated in practice. 
       
  • Some SSOs work closely with others: e.g. CEN/CENELEC and ISO. I’m not familiar with their work but orgs like OCEANIS might be worth looking into as well. 
     
  • Companies tend to value harmonisation, so avoiding fragmentation should be a key aim too. 
     
  • In order to do standards well, a lot of work on measurement and assessment is needed first, though much of this work is ongoing.
     
  • From a US policy POV, this bill might be of interest. Worth thinking about the impact of subsidizing or incentivising the involvement of more small and medium-sized companies.

I think an interesting project might be developing three 'real' AI use cases and assessing throughout what best practice / desirable standards might look like. It's a complex area for AI systems in particular, so a demonstration would be very persuasive. Definitely an area where AI capabilities and AI safety people could work together. And perhaps policymakers and regulators (like the ICO in the UK) could facilitate this with sandboxes.

Just my 2c. Very supportive otherwise, as this is definitely an under-explored area: I haven't seen much on standards in the EA world since Peter Cihon's excellent paper. Thanks for sharing :)

Just to add to UK regulator stuff in the space: the DRCF has a stream on algorithm auditing. Here is a paper with a short section on standards. Obviously it's early days, and focused on current AI systems, but it's a start: https://www.gov.uk/government/publications/findings-from-the-drcf-algorithmic-processing-workstream-spring-2022/auditing-algorithms-the-existing-landscape-role-of-regulators-and-future-outlook

This is a bit of a hot-take, but I'm somewhat skeptical of the ability of standards to effectively regulate TAI. I suspect that in order to be safe, an actor will have to be willing to take measures beyond any standards, in which case implementing paragraph 23 subsection d will only be a distraction. On the other hand, standards could very easily slow the most responsible actors and cause one of the least responsible actors who doesn't care about them at all to win the AGI race.

I can respond to your message right now via a myriad of potential software because of the establishment of a technical standard, HTTP. Additionally, all major web browsers run and interpret Javascript, in large part due to SSOs like IETF and W3C. By contrast, on mobile we have two languages for the duopoly, and a myriad of issues I won't go into; suffice it to say there has been a failure of SSOs in the space to replicate what happened with web browsing and the early internet. It may be that TAI presents novel and harder challenges, but in some of the hardest such technical coordination challenges to date, SSOs have been very useful. I'm not as worried about defection as you, if we get something good going: the leaders will likely have significant resources, and therefore be under greater public scrutiny, and will want to show they are also leading on participating in standard-setting. I am hopeful that there will be significant innovation in this area in the next few years. [Disclaimer: I work in this area, so naturally biased.]
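To make the interoperability point concrete, here is a minimal sketch of my own (example.com is just a placeholder host): because HTTP/1.1 is an open, published standard (RFC 9112), even a hand-written request is understood by any conforming server, whatever software either side runs.

```python
import socket

# A hand-written HTTP/1.1 request, per the published standard (RFC 9112).
# Any conforming server (Apache, nginx, a hand-rolled one) can answer it.
with socket.create_connection(("example.com", 80), timeout=5) as sock:
    sock.sendall(
        b"GET / HTTP/1.1\r\n"
        b"Host: example.com\r\n"
        b"Connection: close\r\n"
        b"\r\n"
    )
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())  # e.g. "HTTP/1.1 200 OK"
```

That a dozen lines of standard-conformant bytes interoperate with decades of independently written server software is exactly the floor-raising effect standards can provide.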

I guess the success of those standards for the web doesn't feel very relevant to the problem of aligning AI. For a start, the design of the protocols has led to countless security flaws; that hardly seems robust?

In addition, the technology has often evolved by messing up and then being patched later.

AI doesn't exist in a vacuum, and TAI won't either. AI has messed up, is messing up, and will mess up bigger as it gets more advanced. Security will never be a 100% solved problem, and aiming for zero breaches of all AI systems is unrealistic. I think we're more likely to have better AI security with standards - do you disagree with that? I'm not a security expert, but here are some relevant considerations from one, applied to TAI. See in particular the section "Assurance Requires Formal Proofs, Which Are Provably Impossible". Given the probably impossible nature of having formal guarantees (not to say we shouldn't try to get as close as possible), it really does seem that leveraging whatever institutional and coordination mechanisms have worked in the past is a worthwhile idea. I consider SSOs to be one set of these, all things considered.

Here is a section from an article written by someone who has worked in SSOs and security for decades:
> Most modern encryption is based on standardised algorithms and protocols; the use of open, well-tested and thoroughly analysed encryption standards is generally recommended. WhatsApp, Facebook Messenger, Skype, and Google Messages now all use the same encryption standard (the Signal protocol) because it has proven to be secure and reliable. Even if weaknesses are found in such encryption standards, solutions are often quickly made available thanks to the sheer number of adopters.
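As a minimal sketch of what following that advice looks like in code: using a standardized, well-tested AEAD scheme (AES-GCM, specified in NIST SP 800-38D) via the maintained Python cryptography library, rather than rolling one's own crypto (key handling is simplified for illustration):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # standardized 256-bit key
aesgcm = AESGCM(key)
nonce = os.urandom(12)  # 96-bit nonce; must be unique per (key, message)

# Authenticated encryption: confidentiality plus integrity in one call.
ciphertext = aesgcm.encrypt(nonce, b"attack at dawn", b"header")
assert aesgcm.decrypt(nonce, ciphertext, b"header") == b"attack at dawn"
```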

Standards can help with security b/c that's more of a standard problem, but I suspect it'll be a distraction for aligning AGI.

Well I disagree but there's no need to agree - diverse approaches to a hard problem sounds good to me. 

I think that's a valid worry and I also don't expect the standards to end up specifying how to solve the alignment problem. :P I'd still be pretty happy about the proposed efforts on standard setting because I also expect standards to have massive effects that can be more or less useful for 
a) directing research in directions that reduce longterm risks (e.g. pushing for more mechanistic interpretability),  
b) limiting how quickly an agentic AI can escape our control (e.g. via regulating internet access, making manipulation harder), 
c) enabling strong(er) international agreements (e.g. shared standards could become basis for international monitoring efforts of AI development and deployment).

> Lack of access to the incorporated standards, since the standards often cost hundreds of dollars each to access.

Not only are many standards expensive, but they often include digital rights management that makes them cumbersome to access and open.

In Australia, access to standards is controlled by private companies that can charge whatever they like. There's currently a petition to the Australian parliament with 22,526 signatures requesting free or affordable access to Australian Standards, including standards mandated by legislation. Across the ditch, the New Zealand government has set a great example by funding free access to building standards.

It's important for AI safety standards to be open access from the start.

Great post! I agree that standard setting could be useful. I think it could be especially important to set standards on how AI systems interact with animals and the natural environment, in addition to humans.

Great initiative! A slight nuance to Chris Leong's earlier comment: though I'm not an expert, I would just caution standard-setting bodies against hastily standardizing a losing standard; see https://en.wikipedia.org/wiki/Protocol_Wars

Your encryption standards example feels like a great illustration of the way to go.
