
MarkusAnderljung

979 karma

Comments (68)

Semafor reporting confirms your view. They say Musk promised $1bn and gave $100mn before pulling out. 

Was there a $1bn commitment attributed to Musk? The OpenAI Wikipedia article says: "The organization was founded in San Francisco in 2015 by Sam Altman, Reid Hoffman, Jessica Livingston, Elon Musk, Ilya Sutskever, Peter Thiel and others,[8][1][9] who collectively pledged US$1 billion."

I suspect that it wouldn't be that hard to train models at datacenters outside of CA (my guess is this is already done to a decent extent today: 1/12 of Google's US datacenters are in CA, according to Wikipedia), and that models are therefore a pretty elastic regulatory target. 

Data as a regulatory target is interesting, in particular if it transfers ownership or power over the data to data subjects in the relevant jurisdiction. That might e.g. make it possible for CA citizens to lodge complaints about potentially risky models being trained on data they've produced. I think the whole domain of data as a potential lever for AI governance is worthy of more attention. Would be keen to see someone delve into it. 

I like the thought that CA regulating AI might be seen as a particularly credible signal that AI regulation makes sense, and that it might therefore be more likely to produce a de jure effect. I don't know how seriously to take this mechanism, though. E.g. to what extent is it overshadowed by CA being heavily Democrat? To me, the most promising way to figure this out in more detail seems to be talking to other state legislators and looking at the extent to which previous CA AI-relevant regulation or policy narratives have seen any diffusion. Data privacy and facial recognition stand out as most promising to look into, but maybe there's also stuff wrt autonomous vehicles. 

Thanks!

That sounds like really interesting work. Would love to learn more about it. 

"but also because a disproportionate amount of cutting-edge AI work (Google, Meta, OpenAI, etc) is happening in California." Do you have a take on the mechanism by which this leads to CA regulation being more important? I ask because I expect most regulation in the next few years to focus on what AI systems can be used in what jurisdictions, rather than what kinds of systems can be produced. Is the idea that you could start putting in place regulation that applies to systems being produced in CA? Or that CA regulation is particularly likely to affect the norms of frontier AI companies because they're more likely to be aware of the regulation? 

We've already started to do more of this. Since May, we've responded to three RFIs and similar requests (you can find them here: https://www.governance.ai/research): the NIST AI Risk Management Framework; the US National AI Research Resource interim report; and the UK Compute Review. We're likely to respond to the AI regulation policy paper, though we've already provided input to this process via Jonas Schuett and me being on loan to the Brexit Opportunities Unit to think about these topics for a few months this spring. 

I think we'll struggle to build expertise in all of these areas, but we're likely to add more of it over time and to build networks that allow us to provide input in these other areas, should we find doing so promising. 

"I'd suggest being discerning with this list"

Definitely agree with this! 

One thing you can do is collect some demographic variables on non-respondents and see whether there is self-selection bias on those. You could then check whether the variables that show self-selection correlate with certain answers. Baobao Zhang and Noemi Dreksler did some of this work for the 2019 survey (found in D1/page 32 here: https://arxiv.org/pdf/2206.04132.pdf). 
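For concreteness, here's a minimal sketch of that kind of non-response check in Python. The file names (survey_frame.csv, responses.csv) and columns (id, region, subfield, hlmi_estimate) are illustrative assumptions, not the actual survey variables:

```python
import pandas as pd
from scipy import stats

# Hypothetical inputs: the full invitation list (with demographics known for
# everyone) and the answers from those who responded.
frame = pd.read_csv("survey_frame.csv")
responses = pd.read_csv("responses.csv")

frame["responded"] = frame["id"].isin(responses["id"])

# 1. Self-selection check: do respondents differ from non-respondents on the
#    demographic variables available for the whole sample?
for var in ["region", "subfield"]:
    table = pd.crosstab(frame[var], frame["responded"])
    chi2, p, dof, expected = stats.chi2_contingency(table)
    print(f"{var}: chi2={chi2:.1f}, p={p:.3f}")

# 2. For variables that do show self-selection, check whether they also
#    predict substantive answers among respondents.
merged = responses.merge(frame[["id", "region"]], on="id")
print(merged.groupby("region")["hlmi_estimate"].median())
```

The logic is that a variable only threatens the headline estimates if it both predicts who responds and predicts how they answer.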

Really excited to see this! 

I noticed the survey featured the MIRI logo fairly prominently. Is there a way to tell whether that caused some self-selection bias? 

In the post, you say "Zhang et al ran a followup survey in 2019 (published in 2022)1 however they reworded or altered many questions, including the definitions of HLMI, so much of their data is not directly comparable to that of the 2016 or 2022 surveys, especially in light of large potential for framing effects observed." Just to make sure you haven't missed this: we had the 2016 respondents who also responded to the 2019 survey receive the exact same questions they were asked in 2016, including re HLMI and the milestones. (I was part of the Zhang et al team.)
