Yonatan Cale

@ Effective Developers
4308 karma · Joined · Working (6-15 years) · Seeking work · Tel Aviv-Yafo, Israel

Bio

Anonymous Feedback Form

I'm happy to help

  • People running EA-aligned software projects (about all the normal problems)
  • EA software engineers (about... all the normal problems)

Link to my coaching post.

I'd be happy to get help from

  • People who think about global EA priorities:
    • Rewriting arxiv.org: Is this a high impact job?
    • Does EA need a really good hiring agency?
  • Funding my work would be nice

My opinions about hiring

A better job board

  • draft 1: 75% of 80k's engineering jobs are unrelated to software development. This board is the other 25%.

Tech community building & outreach

(apparently I'm doing some of this?)

  • Some ideas I'm working on or strongly considering working on
  • Are you talking to someone about working on strange neglected problems? Here's how I'd frame it

My opinions about EA software careers

  • An alternative career guide
  • Improving CVs (beyond what I've seen any professional CV editor do)
  • Getting your first paid software job
  • [more coming]

My personal fit for jobs

  • Owning the tech of a pre-production project (and helping with things around it, like some product work)
  • I really enjoy coaching, user research, and explaining tech concepts and tradeoffs simply to non-tech people; it's unclear if this will fit into some future job

Fun

  • I'm currently reading Project Lawful and Worth the Candle [26-7-2022]
  • Big HPMOR fan
  • I like VR
  • My shirts have cats on them

Contact details

How others can help me

  • Connections to EA-aligned orgs that have software problems

How I can help others

  • Running software projects, specifically hiring
  • EA careers

Comments (850)

I have a crazy opinion that everyone's invited to disagree with: long comments on the EA Forum would often be better split into a few smaller comments, so that others could reply, agree/disagree, or (as you point out) emoji-react to each part separately.

This is a forum culture thing: right now it would be weird to respond with many small comments, but it would be better to make it not-weird.

What do you think?

For transparency: I'd personally encourage 80k to be more opinionated here. I think you're well positioned, with the relevant abilities, respect, and critical mass of engineers and orgs. Or at least, as a fallback (if you're not confident in being opinionated), I think you're well positioned to host a high-quality discussion about it, but that's a long story and maybe off topic.

TL;DR: "which lab" seems important, no?

You wrote:

Don’t work in certain positions unless you feel awesome about the lab being a force for good.

First of all, I agree. Thumbs up from me! 🙌

But you also wrote:

Recommended organisations

We’re really not sure. It seems like OpenAI, Google DeepMind, and Anthropic are currently taking existential risk more seriously than other labs.

I assume you don't recommend people go work for whatever lab "currently [seems like they're] taking existential risk more seriously than other labs"?

Do you have further recommendations on how to pick a lab?

(Do you agree this is a really important part of an AI safety career plan, or does it seem sort of secondary to you?)

I'm asking in the context of an engineer considering working on capabilities (if they're building skill, they might ask themselves "what am I going to use this skill for?", which I think is a good question). Also, I noticed you wrote that "broadly advancing AI capabilities should be regarded overall as probably harmful", which I agree with, and which seems to make this question even more important.

I'd expect clicking on my profile picture to take me to my profile (currently the click doesn't do anything, though it does have a pretty animation).

What do you think about the effect of many people (EAs) joining top AI labs on the race dynamics between those labs?

It's hard for me to make up my mind here.

Adding [edit]:

This seems especially important since you're advising many people to consider entering the field, where one of the stated reasons to do so is "Moving faster could reduce the risk that AI projects that are less cautious than the existing ones can enter the field" (but you're sending people to many different orgs).

In other words: it seems potentially negative to encourage many people to enter a race, on many different competing "teams", if you want the entire field to move slowly, no?

When I talk to people, I sometimes explicitly say that this is a way of thinking that I hope most people WON'T use.

Hi! Thanks for your answer. TL;DR: I understand and don't have further questions on this point.

What I mean by "having a good understanding of how to do alignment" is "being opinionated about (and learning to notice) which directions make sense, as opposed to only applying one's engineering skills towards someone else's plan".

I think this is important if someone wants to affect the situation from inside, because the alternative is something like "trust authority".

But it sounds like you don't count on "the ability to push towards safe decisions" anyway.

Hey, there is a common plan I hear that maybe you'd like to respond to directly.

It goes something like this: "I'll go work at a top AI lab as an engineer and build technical skills. Since I care about safety, I can push a bit towards safe decisions, or push a lot if it's important; overall it seems good to have people there who care about safety, like me. I don't have a good understanding of how to do alignment, but there are some people I trust."

If you're willing to reply to this, I'll probably refer people directly to your answer sometimes.

  1. I'm really excited about the emojis! I think this has the potential to push the forum further towards "a social network, but done well, optimizing for high-quality discussions". I thought the agree/disagree vote was a great idea, and I'm happy you're continuing to explore similar directions.
  2. Regarding the post itself: consider adding descriptions of the screenshots, especially for people listening to the Nonlinear Library (text-to-speech) version of the post (as I did originally, before coming here to see the specific emojis).

I mean on the EA Forum / LessWrong.
