Jamie B

Independent
906 karma · Working (0-5 years) · London, UK

Comments (27)

What I had in mind was "shows up to all 8 discussion groups for the taught part of the course". I also didn't check this figure, so that was from memory.

True, there are lots of ways to define it (e.g. finishing the readings, completing the project, etc.).

Thanks for engaging!

Two sessions. One to discuss the readings and another for people to bounce their takes off others in the cohort.

Sounds like a fun experiment! I've found that purely open discussion sometimes leads to less valuable conversation, so in both cases I'd focus on a few specific discussion prompts, or on trying to help people come to a conclusion on some question. I linked to something about learning activities in the main post, which I think helps with session design. As with anything, though, trying it out is the only way to know for sure, so feel free to ignore me.

without assuming knowledge of AGISF

I'd be keen to hear specifically what the prerequisite knowledge is - just to inform people whether they 'know enough' to take your course. Maybe it's weeks 1-3 of the alignment course? I agree with your assessment that further courses can be more specific, though.

I agree that one of the best ways to do this would be to just create a curriculum, send it around to people and then additionally collect feedback from people who have gone through the course

Sounds right! I'd encourage you to front-load some of the work before creating a curriculum, though. Without knowing how expert you are in agent foundations yourself, I'd suggest taking steps so that your first stab is close enough that giving feedback seems worthwhile to the people you ask, so it isn't a huge lift to get from first draft to final product, and so there are no nasty surprises from people who would have done it completely differently.

I.e. what if you asked 3-5 experts what they think the most important parts of agent foundations are, and maybe conducted 30-minute interviews with them to solicit the story they would tell in a curriculum? You could also ask for their top recommended resources, and why they recommend them. That would be a strong start, I think.

Do I need a technical background to work on AI governance? I think no, not really. (This is a quick take, so I don't justify many of my claims.)

Context: I have been a technical ML engineer and (briefly) a researcher, and I'm now trying to work on AI governance (and spending a lot of time speaking to people who already work on it).

Examples of things that are useful to understand to do AI governance:
1. Knowing about the train, test, deploy cycle at industrial AI companies.
2. Knowing the psyche of ML engineers at those orgs.
3. Knowing which media channels machine learning engineers & researchers use to stay on top of news, including Twitter & the ML companies themselves.

You don't get any of those insights from doing an ML Coursera course. It might be fun or gratifying for other reasons, but I don't think it will make you better at governance. It's better to have a few friends who are ML engineers and to get them to sketch out what it's like at a lab some day (or - more costly but more thorough - to take a role at a lab yourself, technical or nontechnical).

Where I do think you need to engage technically: don't be afraid to read below the surface of technical memes - though I think you don't need to go much below the surface.

Concrete example: watermarking.

It's enough for policymakers to be able to read a few watermarking papers and understand:
a) Watermarking is a way of tagging your model's outputs to prove they were produced by AI.
b) There are no tried-and-tested, reliable watermarking methods at the moment.

Where I see nontechnical folk fall down (less so in this community) is when they throw out the term 'watermarking' but couldn't tell you what methods can be used or how reliable those methods are. That can be learned by reading; you don't need direct experience of having tried to watermark something (I certainly haven't).
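To make "reading a bit below the surface" concrete: one family of proposed LLM watermarks works by nudging the model toward a pseudo-random "green" subset of the vocabulary at each step, then checking whether suspiciously many generated tokens landed in those subsets. Here's a minimal toy sketch of the detection side (the names and toy vocabulary are my own invention, not any real library or production method):

```python
import hashlib
import math
import random

# Toy vocabulary standing in for a real model's ~50k-token vocab.
VOCAB = [f"tok{i}" for i in range(1000)]

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Pseudo-randomly pick a 'green' subset of the vocab, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def watermark_z_score(tokens: list, fraction: float = 0.5) -> float:
    """z-score of how far the green-token count sits above chance.

    Text generated with sampling biased toward green tokens scores high;
    unwatermarked text hovers around zero.
    """
    n = len(tokens) - 1  # number of (previous, next) token pairs scored
    hits = sum(tokens[i] in green_list(tokens[i - 1], fraction)
               for i in range(1, len(tokens)))
    return (hits - n * fraction) / math.sqrt(n * fraction * (1 - fraction))
```

The point isn't the code itself, but that one level below the meme you immediately see the policy-relevant caveats: detection is statistical rather than certain, it needs enough tokens to work, and paraphrasing the text reshuffles the tokens and weakens the signal.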

I revisit this post from time to time, and had a new thought!

Did you consider talent needs in the civil service & US Congress at the time? If so, would you consider them differently now?

This might just be the same as "doing policy implementation", and would therefore be quite similar to Angelina's comment. My question is inspired by the rapid growth in interest in AI regulation in the UK & US governments since this post, which led me to consider potential talent needs on those teams.

Yes - the best thing to do is to sign up and work through the curriculum in your own time!

https://course.aisafetyfundamentals.com/governance

Thanks for the post!

There was consensus that it would be good if CEA replaced one of its (currently) three annual conferences with a conference that's explicitly framed as x-risk or AI-risk focused.

In response to a corresponding prompt (“ … at least one of the EAGs should get replaced by an x-risk or AI-risk focused conference …”)

I'm curious whether you felt the thrust was that CEA in particular should replace its third EAG with an AI safety conference, or simply that there should be an AI safety conference?

In general, when we talk about 'cause-area-specific field building', the purpose that makes most sense to me is building a community around those cause areas - one that people who don't buy the whole EA philosophy can join if they spot a legible cause they think is worth working on.

I'm a little hesitant to default to repurposing existing EA institutions, communities and events to house the proposed cause-area-specific field building. It seems to me that the main benefit of cause-area-specific field building is the chance to build something new, fresh and separate from the other cultural norms and beliefs that the EA community brings with it.

Perhaps the crux for me is: "Is this a conference for EAs interested in AI safety, or a conference for anyone interested in AI safety?" If the latter, that points away from an EA-affiliated conference (though I appreciate there are pragmatic questions around "who else would do it"). A fresh feel and new audience might still be achievable if CEA runs the conference ops, but I imagine it would be important to bear this in mind in the branding, outreach and execution choices for such a conference.

We'll aim to release a short post about this by the end of the week!

I also sometimes use NaturalReaders. Unfortunately I find it a bit... unnatural at times.

I've been really enjoying Type III Audio's reader on this forum, though!

Answer by Jamie B

I totally agree there's a gap here. At BlueDot Impact (/ AGI Safety Fundamentals), we're currently working on understanding the pipeline for ourselves.


We'll be launching another governance course in the next week, and in the longer term we will publish more info on governance careers on our website, as and when we establish the information for ourselves.

In the meantime, there's great advice on this account - mostly targeted at people in the US, but there might be some transferable lessons:

https://forum.effectivealtruism.org/users/us-policy-careers
