There are many flaws in the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB1047) that is headed to the California Governor's office for his signature. These flaws will no doubt hurt AI companies, both in lost productivity and unnecessary expense, and they will hand a competitive edge to Chinese AI companies that operate without such onerous controls.

However, the single biggest flaw is the one having to do with “advanced persistent threats and other sophisticated actors.”

Section 22603.

 (a) Before a developer initially trains a covered model, the developer shall do all of the following:

(1) Implement administrative, technical, and physical cybersecurity protections to prevent unauthorized access to, misuse of, or unsafe post-training modifications of, the covered model and all covered model derivatives controlled by the developer that are appropriate in light of the risks associated with the covered model, including from advanced persistent threats or other sophisticated actors.

Failure to comply with the above, or with any of the bill's many other requirements (and there are far too many), would be an "unlawful act" subject to the following penalties and remedies:

(1) A civil penalty for a violation that occurs on or after January 1, 2026, in an amount not exceeding 10 percent of the cost of the quantity of computing power used to train the covered model to be calculated using average market prices of cloud compute at the time of training for a first violation and in an amount not exceeding 30 percent of that value for any subsequent violation.

(2) (A) Injunctive or declaratory relief, including, but not limited to, orders to modify, implement a full shutdown, or delete the covered model and any covered model derivatives controlled by the developer.

(B) The court may only order relief under this paragraph for a covered model that has caused death or bodily harm to another human, harm to property, theft or misappropriation of property, or constitutes an imminent risk or threat to public safety.

(3) (A) Monetary damages.

(B) Punitive damages pursuant to subdivision (a) of Section 3294 of the Civil Code.

(4) Attorney’s fees and costs.

(5) Any other relief that the court deems appropriate.
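To make the cap in paragraph (1) concrete, here is a minimal sketch in Python of how the 10 percent and 30 percent penalty ceilings scale with the cost of the compute used to train a covered model. The training-cost figure is made up purely for illustration; the statute values that cost at average market prices for cloud compute at the time of training.

```python
# Hypothetical illustration of the SB1047 civil penalty caps in paragraph (1) above.
# The dollar figure below is invented for the example; the statute pegs the cost to
# "average market prices of cloud compute at the time of training."

def sb1047_civil_penalty_cap(training_compute_cost: float, first_violation: bool) -> float:
    """Return the maximum civil penalty for a covered-model violation.

    training_compute_cost: cost (in dollars) of the compute used to train the model,
    valued at average cloud-compute market prices at the time of training.
    """
    cap_rate = 0.10 if first_violation else 0.30
    return cap_rate * training_compute_cost

# Example: a model whose training compute would have cost $150M at market prices.
cost = 150_000_000
print(sb1047_civil_penalty_cap(cost, first_violation=True))   # 15000000.0 (10% cap)
print(sb1047_civil_penalty_cap(cost, first_violation=False))  # 45000000.0 (30% cap)
```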

What’s The Problem?

In spite of what the cybersecurity industry wants you to believe, there is no way to keep a patient and dedicated adversary out of a network, and there are two reasons why:

  1. software
  2. human beings

Both are vulnerable to being exploited, and for the most secure systems, it’s almost always a combination of both.

The Russia-Ukraine war has shown time and again that even the most heavily fortified networks can be cracked with nothing more than patience and ingenuity, and maybe a little money.

And it’s not just Russia.

It can and does happen to the best protected agencies, financial institutions, and defense organizations in the world. It's the price we pay for all of the benefits that software and cloud computing have brought us. We are more productive, and more vulnerable, than ever.

This bill puts the onus for keeping sophisticated bad actors out of an AI computing cluster on the AI company, without regard for the myriad vendors and suppliers that make up the AI company's supply chain.

For example, starting in 2028 the bill requires that the AI company hire a third-party auditor to perform an annual audit of its compliance with all of the provisions of SB1047. Once hired, the auditor becomes part of the company's supply chain.

The auditing firm would be a perfect candidate for an adversary to infiltrate as the first stage of an attack against the AI company. Who at the AI company isn't going to open an email attachment sent by its own auditor? One click and you've been compromised, and you're now subject to heavy penalties.

Even worse, while the AI company will be held responsible, the cybersecurity company whose product didn’t stop the intruders will continue to evade any responsibility, just as it always has, thanks to the EULA that the customer signs.

For example, this section comes from Crowdstrike’s Terms and Conditions:

“NEITHER PARTY SHALL BE LIABLE TO THE OTHER PARTY IN CONNECTION WITH THIS AGREEMENT OR THE SUBJECT MATTER HEREOF (UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STATUTE, TORT OR OTHERWISE) FOR ANY LOST PROFITS, REVENUE, OR SAVINGS, LOST BUSINESS OPPORTUNITIES, LOST DATA, OR SPECIAL, INCIDENTAL, CONSEQUENTIAL, OR PUNITIVE DAMAGES.”

And this one is from Microsoft’s Defender product:

“DISCLAIMER OF WARRANTY. THE SOFTWARE IS LICENSED “AS IS.” YOU BEAR THE RISK OF USING IT.”

Summary

This approach is “bass-ackwards.”

The first step in regulating AI is regulating software vendors. This is 40 years overdue and it must happen at the federal level.

Once that's in place, an AI bill need only address common abuse issues as well as low-probability, high-risk events.

 

Comments

This doesn't compute for me. The quoted language obliges the AI company to "[i]mplement administrative, technical, and physical cybersecurity protections . . . that are appropriate in light of the risks associated with the covered model . . . ." It does not establish strict liability for the acts of malicious actors. If the implemented protections were appropriate, I don't see the violation.

I also don't get the point about liability shifting between the cybersecurity vendors and the regulated AI companies. These are big corporations which are capable of negotiating bespoke contracts, obtaining cybersecurity insurance, and taking other actions to manage risk. If a given cybersecurity firm's work is not up to snuff, the insurer will require the insured to use something more effective as a condition of coverage or will hit the client with an appropriate surcharge. In fact, the cybersecurity firms would make awful de facto insurers, as the risks they would hold would be highly correlated with each other.

Yes, sometimes the liability clauses in contracts are negotiable if the customer is large enough. Often, they are not, as we've seen in the fallout from the recent Crowdstrike blunder that caused worldwide chaos, where Crowdstrike has been invoking its EULA provisions limiting its liability to twice the customer's annual bill.

Fair, but I'm not sure how much difference there is between "not negotiable" and "no rational large customer would ever choose to buy cyberinsurance from its security vendor by negotiating a liability shift in exchange for paying massively more." This would be like buying pandemic insurance from an insurer who only sold pandemic insurance (and wasn't backstopped by reinsurance or government support). If/when you needed to make a claim, everyone else would be in a similar position, and the claims would bankrupt the security vendor quite easily. That means everyone gets only a small fraction of their claim paid and holds the bag for the rest.

  • I think it is very likely that the top American AI labs are receiving substantial help from the NSA et al in implementing their "administrative, technical, and physical cybersecurity protections". No need to introduce Crowdstrike as a vulnerability.
  • The labs get fined if they don't implement such protections, not if they get compromised.

I didn't introduce Crowdstrike as a vulnerability.

The NSA doesn't provide support to U.S. corporations. That's outside of its mandate.

When a lab gets compromised, there will be an investigation, and the fault will almost certainly be placed with the lab unless the lab can prove negligence on the part of the cybersecurity company or companies it contracted with.

None of us knows what is in the classified portions of the US intelligence budgets. For example, I doubt there was a line item in the budget for bribing a major US security vendor to make an algorithm with an NSA trap door in it the default, but there's pretty good reason to believe that happened.
