The NYT just released a breaking news piece regarding an agreement on AI safeguards. It's hard to tell exactly how useful the proposed measures will be, but it seems like a promising step.
Seven leading A.I. companies in the United States have agreed to voluntary safeguards on the technology’s development, the White House announced on Friday, pledging to strive for safety, security and trust even as they compete over the potential of artificial intelligence.
The seven companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — will formally announce their commitment to the new standards at a meeting with President Biden at the White House on Friday afternoon.
The voluntary safeguards announced on Friday are only an early step as Washington and governments across the world put in place legal and regulatory frameworks for the development of artificial intelligence. White House officials said the administration was working on an executive order that would go further than Friday’s announcement and supported the development of bipartisan legislation.
As part of the agreement, the companies agreed to:
- Security testing of their A.I. products, in part by independent experts, and sharing information about their products with governments and others who are attempting to manage the risks of the technology.
- Ensuring that consumers are able to spot A.I.-generated material by implementing watermarks or other means of identifying generated content.
- Publicly reporting the capabilities and limitations of their systems on a regular basis, including security risks and evidence of bias.
- Deploying advanced artificial intelligence tools to tackle society’s biggest challenges, like curing cancer and combating climate change.
- Conducting research on the risks of bias, discrimination and invasion of privacy from the spread of A.I. tools.
Here's the White House fact sheet link. It's also very much worth reading the linked PDF, which goes into more detail than the fact sheet.
Pulling out highlights from the PDF of the voluntary commitments that the AI companies agreed to:
I'd be very curious whether there are historical case studies of how well private corporations stuck to voluntary commitments they made, and how long it took for more binding regulation to replace those commitments.