
CAISID

AI Legislation
281 karma · Joined · Working (6-15 years)

Bio

Currently work in AI Law and fulfil a safety & legislation role at a major technology company

Comments (70)

Thank you for this post, Matthew; it is just as thoughtful and detailed as your last one. I am excited to see more posts from you in future!

I have some thoughts and comments as someone with experience in this area. Apologies in advance if this comment ends up being long - I prefer to mirror the effort of the original poster in my replies, and you have set a very high bar!

1. Risk Assessments

How should frontier AI organisations design their risk assessment procedures in order to sufficiently acknowledge – and prepare for – the breadth, severity and complexity of risks associated with developing frontier AI models?

This is a really great first area of focus, and if I may arrogantly share a self-plug, I recently posted something along this specific theme here. Clearly it has been field-changing, achieving a whopping 3 karma in the month since posting. I am truly a beacon of our field!

Jest aside, I agree this is an important area and one that is hugely neglected. A major issue is that academia is not good at understanding how this actually works in practice. Much more industry-academia partnership is needed, but that can be difficult to arrange where it really counts - which is something you successfully allude to in your post.



Senior leadership of firms operate with limited information. Members of senior management of large companies themselves cannot know of everything that goes on in the firm. Therefore, strong communication channels and systems of oversight are needed to effectively manage risks.



This is a fantastic point, and one that is frequently a problem. Not long ago I was having a chat with the head of a major government organisation who quite confidently stated that his department did not use a specific type of AI system. I had the uncomfortable moral duty of informing him that it did, because I had helped advise on risk mitigation for that very system only some weeks earlier. It's a fun story, but it illustrates that the higher up the chain you are in a large organisation, the harder it can be to know what is actually going on. Another good, recent example is Nottinghamshire Police publicly claiming, in response to an FOI request, that they do not use and do not plan to use AFR - seemingly unaware that their force had revealed a new AFR tool to the media earlier that week.



Although much can be learned from practices in other industries, there are a number of unique challenges in implementing good corporate governance in AI firms. One such challenge is the immaturity of the field and technology. This makes it difficult currently to define standardised risk frameworks for the development and deployment of systems. It also means that many of the firms conducting cutting edge research are still relatively small; even the largest still in many ways operate with “start-up” cultures. These are cultures that are fantastic for innovation, but terrible for safety and careful action. 


This is such a fantastic point, and to back it up: I reckon it's the source of about 75% of the risk scenarios I've advised on in the past year. I don't think 'AI firms' is a good focus term, because many major corporations are building AI as part of their wider operations but are not themselves "AI firms". Still, your point stands well in the face of the evidence, because a major problem right now is AI startups selling immature, untested, ungoverned tools to major organisations who don't know better and don't know how to question what they're buying. This isn't just a problem with corporations but with government, too. It's such a huge risk vector.

For Sections 2 and 3, engineering and energy are fantastic industries to draw from in terms of their processes for risk and incident reporting. They're certainly amongst the strictest I've had experience of working alongside.

 

Ethics committees take a key role in decision making that may have particularly large negative impacts on society. For frontier AI labs, such committees will have their work cut out for them. Work should be done to consider the full list of processes ethics committees should have input in, but it will likely include decisions around:

  • Model training, including
    • Appropriate data usage
    • The dangers of expected capabilities
  • Model deployments
  • Research approval

 

This is an area that's seen a lot of really good outcomes for AI in high-risk industries. I would advise reading this research, which covers a fantastic use-case in detail. There are also some really good examples currently going through the correct approvals, which I'm not entirely sure I can post here yet - but if you want to be kept updated, send me a message and I'll keep you informed.



The challenge for frontier AI firms by comparison is that many of the severe risks posed by AI are of a more esoteric nature, with much current uncertainty about how failure modes may present themselves. One potential area of study is the development of more general forms of risk awareness training, e.g. training for developing a “scout mindset” or to improve awareness of black swan events.



This is actually one of the few sections I disagree with you on. Of all the high-risk AI systems I've worked with in a governance capacity, exceptionally few have had esoteric risks. Much of the time, AI systems interact with the world via existing processes which are themselves fairly well risk-scoped. The exception is if you meant far-future AI systems, which obviously would be unpredictable at present. For contemporary and near-future AI systems, though, the risk landscape is quite well explored.
 


7 – Open Research Questions

These are fantastic questions, and I'm glad to see that some of them are covered by a recent grant application I made. Hopefully the grant decision-makers read these forums! I actually have something of a research group forming in this precise area, so feel free to drop me a message if there's likely to be any overlap - I'm happy to share research directions etc. :)

 

There are huge technical research questions that must be answered to avoid tragedy, including important advancements in technical AI safety, evaluations and regulation. It is the author’s opinion that corporate governance should sit alongside these fields, with a few questions requiring particular priority and focus:

 

One final point that may be valuable: in most of my experience of hiring for risk management / compliance / governance roles around high-risk AI systems, the best candidates in the long run seem to be people with an interdisciplinary STEM and social studies background. It is tremendously hard to find these people. There needs to be much, much more effort put towards sharing skills and knowledge between the socio-legal and STEM spheres - though a glance at my profile might show a bit of bias in this statement! Still, for these types of roles that kind of balance is important. I understand that many European universities now offer such interdisciplinary courses, but no degrees yet. Perhaps the winds will change.

Apologies if this comment was overly long! This is a very important area of AI governance, and it was worth taking the time to put together some thoughts on your fantastic post. Looking forward to seeing your future posts - particularly in this area!

I agree with this and will add a (potentially unpopular) caveat of my own - work a 'normal' job outside of your EA interest area altogether if possible. It gives you absolutely fantastic, applicable experience for a whole range of things.

I sometimes hire for AI-related roles, and one of the main things I look for when hiring for AI Safety roles is experience doing other work. Undergrad to postgrad to academic role is great for many, but experience working in a 'normal' work environment is super valuable and is something I look for. It also seems super neglected as a consideration in recruiting. For me it's a huge green flag.

Just understanding how large organisations work, how things like logistics and supply chains work - the 'soft knowledge' often missing from a pure research career - is insanely valuable in many sectors.

It's almost frowned upon, to the extent that people apologise for it. "I worked 2 years in a warehouse, but not because I don't care about AI Safety, it's just I needed the money" - like dude, actual logistics experience is why I picked you for interview!

Your mileage may vary, obviously - the AI Safety roles I hire for are 'frontline impact' roles, so less research and more stakeholder interaction, which makes those soft skills more useful - but too many people think stepping outside the "academic beeline" is some kind of failure.

It's also worth highlighting that I do super impactful AI Safety work now, leading a team that does some amazing frontline work, and that in the past I have been rejected from every single EA grant, EA fellowship, and EA job I've ever applied to :) That can be demoralising, but obviously wasn't related to my value! Perhaps just fit and luck :)

These are some interesting thoughts.

I think OSINT is a good method for varying types of enforcement, especially because the general public can aid in gathering evidence to send to regulators. This happens a lot in the animal welfare space AFAIK, though someone with experience here please feel free to correct me. I know Animal Rising recently used OSINT to gather evidence of 280 legal breaches in the livestock industry, which they handed to DEFRA - which is pretty cool. It's especially notable given that these were RSPCA-endorsed farms, so it showed that the stakeholder vetting (pun unintended) was failing. This only happened 3 days ago, so the link may expire, but here is an update.

For AI this is often a bit less effective, but it's still useful. A lot of the models in nuclear, policing, natsec, defence, or similar are likely to be protected in a way that makes OSINT difficult, but I've used it before for AI Governance impact. The issue is that even if you find something, a DSMA-Notice or similar can be used to stop publication. You said "Information on AI development gathered through OSINT could be misused by actors with their own agenda", which is almost word for word the reason that data is often protected, haha. So you're 100% right that in these sectors OSINT can be super useful for AI Governance but may fall at later hurdles.

However, commercial AI is much more prone to OSINT because there's no real lever to stop you publishing OSINT information. In my experience you can usually use the supply chain as a fantastic source of OSINT, depending on how dedicated you are. That's been a major AI Governance theme in the instances I've been involved in, on both sides of it.

Answer by CAISID

There's quite a bit of work in AI IP, but a lot of it is siloed. The legal field (particularly civil law) doesn't do a fantastic job of making its content readable for non-legal people, so the research and developments can be a bit hit and miss in terms of informing people.

The tight IP laws don't always help things (as you rightly mention). They can be good for helping keep models in-house, but to be honest that usually harms rather than helps risk mitigation. I do a lot of AI risk mitigation for clients, and I've been in the courtroom a few times where this has been a major obstacle to forcing companies to show or reduce their risk; it's a fairly big issue in both compliance and criminal law.

IP in particular is a hard tightrope to walk, AI-wise.

This is a good read. I've been thinking a lot about how monopsonies affect regulation, and this ties in with that, which is useful.

It's interesting someone voted 'Disagree' to this, and I would be interested in hearing why - even if that's via inbox. Always happy to hear dissenting ideas.

Answer by CAISID

I guess it depends on which area of AI Governance you research in. I'm almost entirely front-end, so a lot of my research is talking to the people it will impact and trialling how different governance mechanisms might actually work in practice.

I guess spitballing it would be:

  • 30% reading draft or upcoming governance changes
  • 50% discussing with end users, or using my own experience to highlight issues or required changes
  • 20% writing those responses up for either the legislators or the organisations impacted

Hm. The closest things I can think of would be things like inciting racial hatred or hate speech (i.e. not physical, no intent for crime, but illegal). In terms of research, most research isn't illegal, but it is usually tightly regulated by participating stakeholders, ethics panels, and industry regulations. A lot of it is stakeholder management too. I removed some information from my PhD thesis at the request of a government stakeholder, even though I didn't have to. It was a good idea to ensure future participation, though, and I could see the value in the reasoning. I'm not sure there was anything they could do legally if I had refused, as it wasn't illegal per se.

The closest thing I can think of to your example is perhaps weapons research. There's nothing specifically making weapons research illegal, but it would be an absolute quagmire in terms of not breaking the law. For example, sharing the research could well fall under anti-terrorism legislation, and creating a prototype would obviously be illegal without the right permits. So realistically you could come up with a fantastic new idea for a weapon, but you'd need to partner with a licensing authority very, very early on or risk doing all of your research by post at His Majesty's pleasure for the next few decades.

I have in the past worked in some quite heavily regulated areas with AI, but always with a stakeholder who had all the licences etc., so I'm not terribly sure how it all works behind the scenes.

 

Answer by CAISID

You have some interesting questions here. I am a computer scientist and a legal scholar, and I work a lot with organisations on AI policy as well as helping to create policy. I can sympathise with a lot of the struggles here from experience. I'll focus on some of the more concrete answers I can give, in the hope that they are the most useful. Note that this explanation isn't from your jurisdiction (which I assume from the FBI comment is the USA) but from England & Wales; as they're both Common Law systems, though, there's a lot of overlap and many key themes are the same.


For example, one problem is: How do you even define what "AGI" or "trying to write an AGI" is?

This is actually a really big problem. There have been a few times we've trialled new policies with a range of organisations and found that how those organisations interpret the term 'AI' makes a massive difference to how they interpret, understand, and adhere to the policy itself. This isn't even a case of bad faith - it's more that people try to attach meaning to a vague term and then do their best, but ultimately end up going in different directions. A real struggle is that when you try to get more specific, it can actually end up being less clear, because the further you zoom in, the more you accidentally exclude. It's a really difficult balancing act - so yes, you're right. That's a big problem.
 


I'm wondering how much this is actually a problem, though. As a layman, as far as I know there could be existing government policies that are somewhat comparably difficult to evaluate.


Oh, tons. In different industries, in a variety of forms. Law and policy can be famously hard to interpret. Words like 'autonomous', 'harm', and 'intend' are regular prickly customers.


Many judicial decisions related to crimes, as I vaguely understand it, depend on intentionality and belief——e.g. for a killing to be a murder, the killer must have intended to kill and must not have believed on reasonable grounds that zer life was imminently unjustifiedly threatened by the victim.
 


This is true to an extent. In law you often have the actus reus (what actually happened) and the mens rea (what the person intended to happen), and the law tends to weigh the mens rea quite heavily. Yes, intent is very important - but more so provable intent. Lots of murder cases get downgraded to manslaughter for a better chance at a conviction. To answer your question, though: yes, at a basic level criminal law often relates to intention and belief. Most of the time this is the objective belief of the average person, but there are some cases (such as self-defence in your example) where intent is measured against the subjective belief of that individual in those particular circumstances.

 

What are some crimes that are defined by mental states that are even more difficult to evaluate? Insider trading? (The problem is still very hairy, because e.g. you have to define "AGI" broadly enough that it includes "generalist scientist tool-AI", even though that phrase gives some plausible deniability like "we're trying to make a thing which is bad at agentic stuff, and only good at thinky stuff". Can you ban "unbounded algorithmic search"?)

 

Theft and assault of the everyday variety are actually some of the most difficult to evaluate, since both require intent to be criminal, and yet intent can be super difficult to prove. In the context of what you're asking, 'plausible deniability' is often a strategy chosen when accused of a crime (i.e. making the prosecution prove something non-provable, which is an uphill battle), but ultimately it would come down to a court to decide. You can ban whatever you want, but the actual interpretation could only really be tested in court. With broad language, the definitions of words are often a core point of contention in court cases, so it would likely be resolved there - but honestly, from experience, the overwhelming majority of issues never reach court. Neither side wants to take the risk, so usually the company or organisation backs off and negotiates a settlement. The only times things really go 'to the hilt' are for criminal breaches, which require a very severe stepping over the mark.

 

  • Bans on computer programs. E.g. bans on hacking private computer systems. How much do these bans work? Presumably fewer people hack their school's grades database than would without whatever laws there are; on the other hand, there's tons of piracy.

 

In the UK, the Computer Misuse Act 1990 is actually one of the oldest bits of computer-specific legislation and is still effective today after a few amendments. That's mostly due to the broadness of the law, and to the fact that evidence is fairly easy to come by and intent is fairly easy to prove. It's beginning to struggle in the new digital era, though, thanks to totally unforeseen technologies like generative AI and blockchain.

Some bits of legislation have been really good at maintaining bans, though. England and Wales have a few laws against CSAM which include the term 'pseudo-photography', which actually applies to generative AI, so someone who launched an AI for that purpose would still be guilty of an offence. It also depends what you mean by 'ban', as a ban in legislation can often function quite differently from a ban imposed by, for example, a regulator.
 


Bans on conspiracies with illegal long-term goals. E.g. hopefully-presumably you can't in real life create the Let's Build A Nuclear Bomb, Inc. company and hire a bunch of nuclear scientists and engineers with the express goal of blowing up a city. And hopefully-presumably your nuke company gets shut down well before you actually try to smuggle some uranium, even though "you were just doing theoretical math research on a whiteboard". How specifically is this regulated? Could the same mechanism apply to AGI research?

 

Nuclear regulation is made up of a whole load of different laws and policy types, too broad to really go into here, but essentially what you're describing is less about the technology and more about the goal. That's terrorism and conspiracy to commit murder just to start off with, no matter whether you use a nuke or an AGI or a spatula. If your question centres more on 'how do we dictate who is allowed access to dangerous knowledge and materials', that's usually a licensing issue. In theory you could have a licensing system around AGIs, but it would probably only work for a little while and would be really hard to implement without international buy-in.

If you're specifically interested in how this example is regulated, I can't help in terms of US law beyond this actually quite funny example of a guy who attempted a home-built nuclear reactor and narrowly escaped criminal charges. On the UK side, however, relevant laws include the Nuclear Installations Act 1965, along with much of the policy from the Office for Nuclear Regulation (ONR).

Hopefully some of this response is useful!

Yeah, that's fixed for me :)
