(click "see more")
Link to my coaching post.
(apparently I'm doing some of this?)
For transparency: I'd personally encourage 80k to be more opinionated here; I think you're well positioned, with relevant expertise, respect, and a critical mass of engineers and orgs. Or, at least as a fallback (if you're not confident being opinionated), I think you're well positioned to host a high-quality discussion about it, but that's a long story and maybe off topic.
TL;DR: "which lab" seems important, no?
You wrote:
Don’t work in certain positions unless you feel awesome about the lab being a force for good.
First of all, I agree; thumbs up from me! 🙌
But you also wrote:
Recommended organisations
We’re really not sure. It seems like OpenAI, Google DeepMind, and Anthropic are currently taking existential risk more seriously than other labs.
I assume you don't recommend people go work for whatever lab "currently [seems like they're] taking existential risk more seriously than other labs"?
Do you have further recommendations on how to pick a lab?
(Do you agree this is a really important part of an AI-Safety-Career plan, or does it seem sort-of-secondary to you?)
I'm asking in the context of an engineer considering working on capabilities (and if they're building skills, they might ask themselves "what am I going to use this skill for?", which I think is a good question). Also, I noticed you wrote "broadly advancing AI capabilities should be regarded overall as probably harmful", which I agree with, and which seems to make this question even more important.
What do you think about the effect of many people (EAs) joining top AI labs on the race dynamics between those labs?
It's hard for me to make up my mind here.
Adding [edit]:
This seems especially important since you're advising many people to consider entering the field, where one of the reasons to do so is "Moving faster could reduce the risk that AI projects that are less cautious than the existing ones can enter the field" (but you're sending people to many different orgs).
In other words: it seems potentially negative to encourage many people to enter a race, on many different competing "teams", if you want the entire field to move slowly, no?
When I talk to people, I sometimes explicitly say that this is a way of thinking that I hope most people WON'T use.
Hi! Thanks for your answer. TL;DR: I understand and don't have further questions on this point.
What I mean by "having a good understanding of how to do alignment" is "being opinionated about (and learning to notice) which directions make sense, as opposed to only applying one's engineering skills towards someone else's plan".
I think this is important if someone wants to affect the situation from inside, because the alternative is something like "trust authority".
But it sounds like you don't count on "the ability to push towards safe decisions" anyway.
Hey, there is a common plan I hear that maybe you'd like to respond to directly.
It goes something like this: "I'll go work at a top AI lab as an engineer, build technical skills, and I care about safety so I can push a bit towards safe decisions, or push a lot if it's important, overall it seems good to have people there who care about safety like me. I don't have a good understanding of how to do alignment but there are some people I trust"
If you're willing to reply to this, I'll probably refer people directly to your answer sometimes.
I have a crazy opinion that everyone's invited to disagree with: long comments on the EA Forum would often be better split into a few smaller comments, so that others could reply, agree/disagree, or (as you point out) emoji-react to each point separately.
This is a forum-culture thing; right now it would be weird to respond with many small comments, but it would be better to make it not-weird.
What do you think?