
Summary:

  • I'm a software engineer interested in working on AI safety, but confused about its career prospects. I outlined all my concerns below.
  • In particular, I had trouble finding accounts of engineers working in the field, and the differences between organizations/companies working on AI safety are very unclear from the outside.
  • It's also not clear if frontend skills are seen as useful, or whether applicants should reside within the US.

Full text:

I'm an experienced full-stack software engineer and software/strategy consultant based in Japan. I've been loosely following EA since 2010, and have become increasingly concerned about AI x-risk since 2016. This has led me to regularly consider possible careers in AI safety, especially now that the demand for software engineers in the field has increased dramatically.

However, having spent ~15 hours reading about the current state of the field, organizations, and role of engineers, I find myself with more questions than I started with. In the hope of finding more clarity, and to help share what other engineers considering the career shift might be wondering, I decided to outline my main points of concern below:

  1. The only accounts of engineers working in AI safety I could find were two articles and a problem profile on 80,000 Hours[1][2][3]. Not even the AI Alignment Forum seemed to have any posts written by engineers sharing their experience. Despite this, most orgs have open positions for ML engineers, DevOps engineers, or generalist software developers. What are all of them doing?
    1. Many job descriptions listed very similar skills for engineers, even when the orgs seemed to have very different approaches to tackling AI safety problems. Is the set of required software skills really that uniform across organizations?
    2. Do software engineers in the field feel that their day-to-day work is meaningful? Are they regularly learning interesting and useful things? How do they see their career prospects?
    3. I'm also curious whether projects use a diverse set of technologies. Who is typically responsible for data transformations and cleanup? How much ML theory should an engineer coming into the field learn beforehand? (I'm excited to learn about ML, but got very mixed signals about the expectations.)
  2. Some orgs describe their agenda and goals. In many cases, these seemed very similar to me, as all of them are pragmatic and many even had shared or adjacent areas of research. Given the similarities, why are there so many different organizations? How is an outsider supposed to know what makes each of them unique?
    1. As an example, MIRI states that they want to "ensure that the creation of smarter-than-human machine intelligence has a positive impact"[4], Anthropic states they have "long-term goals of steerable, trustworthy AI"[5], Redwood Research states they want to "align -- future systems with human interests"[6], and the Center for AI Safety states they want to "reduce catastrophic and existential risks from AI"[7]. What makes these different from each other? They all sound like they'd lead to similar conclusions about what to work on.
    2. I was surprised to find that some orgs didn't really describe their work or what differentiates them. How are they supposed to find the best engineers if interested ones can't know what areas they are working on? I also found that it's sometimes very difficult to evaluate whether an org is active and/or trustworthy.
      1. Related to this, I was baffled to find that MIRI hasn't updated their agenda since 2015[8], and their latest publication is dated 2016[4]. However, their blog seems to have ~quarterly updates. Are they still relevant?
    3. Despite finding many orgs by reading articles and publications, I couldn't find a good overall list of ones that specifically work on AI safety. Having such a list might be valuable for people coming into the field, especially if it had brief overviews on what makes each org stand out. It may also be relevant for donors and community builders, as well as people looking for a particular niche.
    4. It's a bit unclear how the funding for AI safety is organized. Some groups get grants from CEA and longtermism funds, some are sponsored by universities, but many also seem like private companies? How does that work? (My impression is that AI safety is still very difficult to monetize.)
  3. Frontend skills are sometimes listed in AI safety orgs' job descriptions, but rarely mentioned in problem profiles or overviews of the engineering work. Are people looking for frontend skills or not?
    1. As someone whose core experience is in developing business-critical web apps, I'm particularly curious whether web/mobile apps are needed to complement other tools, and whether UI/UX design is of any consideration in AI safety work.
    2. I'd argue that frontend and design skills can be relevant, in particular for meta tools like collaboration platforms, or for making results more visual and interactive (like OpenAI often does). Long-term research projects may also benefit from custom UIs for system deployment, management, and usage. I wonder what fraction of AI safety researchers would agree.
    3. My own skills are pretty evenly distributed between frontend and backend, and I'm left wondering whether AI safety orgs would need someone with more specialization (as opposed to skills they currently may not have).
  4. It seems a vast majority of AI safety work is done in the US. However, the US timezone is sometimes tricky from Asia due to little overlap in working hours. How much of a problem is this seen as? Are there any AI safety groups based in Asia, Africa, or the EU that have a good track record?
    1. What would even be a reasonable heuristic for assessing "good track record" in this case? For research orgs one can look at recent publications, but not every org does research. The best I have right now is whether the org in question has been mentioned in at least two introductory posts across 80,000 Hours, EA Forum, and AI Alignment Forum. This could be another benefit of a curated list as mentioned above.

My counterfactual for not doing AI safety work would be becoming financially independent in ~3-5 years, after which I'd likely do independent work/research around AI policy and meta-EA matters anyway. I'm thinking that transitioning into AI safety now could be better, as the problems have become more practical and urgent, and working on them would allow gaining relevant skills/results sooner.

I decided to post this on the EA forum in order to get a broader view of opinions, including from people not currently engaged with the field. Any advice or insights would be much appreciated!

If you happen to be looking for someone with full-stack skills and are ok with flexible hours/location, feel free to drop me a private message as well!

  1. ^
  2. ^
  3. ^
  4. ^
  5. ^
  6. ^
  7. ^
  8. ^

7 Answers

(context: I've recently started as a research engineer (RE) on DeepMind's alignment team. All opinions are my own)

Hi, first off, it's really amazing that you are looking into changing your career to help reduce x-risks from AI.

I'll give my perspective on your questions.

1.

a. All of this is on a spectrum, but: there is front-end engineering, which to my knowledge is mostly used to build human-feedback interfaces or general dialogue chats like ChatGPT.

Then there's research engineering, which I'd roughly sort into two categories. One is more low-level machine learning engineering, like ensuring that you can train, query, and serve (large) models, making your ML code more efficient, or making certain types of analyses feasible in the first place. This one seems pretty crucial to a lot of orgs and is in especially high demand, afaict. The second is more research-y: analysing existing models, or training/fine-tuning models in order to test a hypothesis that's somehow related to safety. Again, those exist on a spectrum.

c. In my work, I need to use quite a bit of ML/DL knowledge regularly, and it was expected and tested for in the application process (I doubt that would be the case for front-end engineering roles, though). ML theory is basically never used in my work. I think this is similar across orgs that are working on "prosaic alignment", i.e. directly working with deep learning models, although I could be missing some cases here.

2.

... why are there so many different organizations?

Different beliefs about theories of change and threat models, but I'd say also more money and interested researchers than management capacity. Let's say there are 100 qualified people who would like to work on team X; chances are team X can only absorb a handful per year without imploding. What are those 100 people going to do? Either start their own org, work independently, or try to find some other team.

a. That's a super valid point. All these organizations state that they aim to reduce AI x-risk. As I see it, they mainly differ along the axes of "threat/development model" (how will dangerous AI be built, and how does it cause x-risk?) and "theory of change" (how can we reduce the x-risk?). Of course, there is still substantial overlap between orgs.

b. i. MIRI has chosen to follow a "not-publishing-by-default" policy, which explains some of the lack of publications. Afaict, MIRI still operates and can best be modeled as a bunch of semi-independent researchers (although I have especially little knowledge here).

c. For a slightly outdated overview, you could check out Larks' review from 2021.

d. This seems like an accurate description; I'm not sure what exactly you would like more clarity on. The field is in fact quite fragmented in that regard. Regarding the private companies (in particular OpenAI, DeepMind, Anthropic): they were founded with more or less focus on safety, but all of them did have the safety of AI systems as an important part of their founding DNA.

3. I don't feel qualified to say much here. My impression is that frontend comes into play mostly when gathering human feedback/ratings, which is important for some alignment schemes but not others.

4. I'm not aware of any groups in Asia. Regarding Europe, there is DeepMind (with some work on technical safety and some work on AI governance) and Conjecture, both based in London. I think there are academic groups working on alignment as well, most notably David Krueger's group at Cambridge and Jan Kulveit's group in Prague. I'm probably forgetting someone here.

Thank you for taking the time to provide your thoughts in detail, it was extremely helpful for understanding the field better. It also helped me pinpoint some options for the next steps. For now, I'm relearning ML/DL and decided to sign up for the Introduction to ML Safety course.

I had a few follow-up questions, if you don't mind:

What are those 100 people going to do? Either start their own org, work independently or try to find some other team.

That seems like a reasonable explanation. My impression was that the field was very talent-constrained, but you m... (read more)

Frederik
I don't have a great model of the constraints, but my guess is that we're mostly talent- and mentoring-constrained, in that we need to make more research progress but also don't have enough mentoring to upskill new researchers (programs like SERI MATS are trying to change this, though). We also need to be able to translate that progress into actual systems, so buy-in from the biggest players seems crucial.

I agree that most safety work isn't monetizable. Some things are, e.g. making a nicer chat bot, but it's questionable whether that actually reduces x-risk. Afaik the companies which focus the most on safety (in terms of employee hours) are Anthropic and Conjecture. I don't know how they aim to make money; for the time being most of their funding seems to come from philanthropic investors.

When I say that it's in the company's DNA, I mean that the founders value safety for its own sake and not primarily as a money-making scheme. This would explain why they haven't shut down their safety teams after they failed to provide immediate monetary value.

People, including engineers, can definitely spend all their time on safety at DM (can't speak for OpenAI). I obviously can't comment on my perception of DM leadership's priorities around safety and ethics beyond what is publicly available. In terms of the raw number of people working on it, I think it's accurate for both companies that most people are not working on safety.

Thanks a lot! Best of luck with your career development!

The reason everything is very confusing is that it is a fast-growing field, with lots of people doing lots of things. As someone already pointed out, different orgs often have the same end goal (i.e. reducing x-risk from AI) but different ideas for how to do this.

But this is not even the reason there are several orgs. Orgs are just legal entities to hire people to do the work. One reason to have several orgs is that there are researchers in more than one country and it's easier to organise this under different orgs. Another reason is that different orgs have different funding models, or leadership styles, etc. 

But also, most orgs don't grow very fast, probably for good reasons, though I don't know; this is just an empirical observation. This means there are lots of researchers wanting to help, and some of them get funding, and some of them decide to start new orgs.

So we end up with this mess of lots of orgs doing their own thing, and no one really knows everything that is going on. This has some cost, e.g. there are probably people doing almost the same research without knowing of each other. And as you say, it is confusing, especially when you are new. But I'd rather have this mess than a well-ordered, centrally controlled research ecosystem. Central coordination might seem good in theory, but in practice it's not worth it. A centralised system is slow and can't spot its own blind spots.

So what to do? How to navigate this mess?

There are some people who are creating information resources which might be helpful. 

There's AI Safety Support's Lots of Links page, which is both too long and too abbreviated, because making a good list is hard.
AI Safety Support - Lots of Links

Alignment Ecosystem are working on some better resources, but it's all still under construction.
Other resources · Alignment Ecosystem Development (coda.io)

Currently, I think the best way to get oriented is to talk to someone who is more acquainted with the AI safety career landscape than you. Either someone you know, or book a call with AI Safety Support.
AI Safety Support - Career Coaching

This is both very informative and very helpful, thank you for the advice! That does seem like a very reasonable way of thinking about the current situation, and I'm happy to see that there already exist resources that try to compile this information.

I was already referred to AISS in private, but your recommendation helped me take the step of actually applying for their coaching. Looking forward to seeing what comes of it, thanks again!

Context: I work as an alignment researcher. Mostly with language models.

I consider myself very risk-averse, but I also personally struggled (and still do) with the instability of alignment work. There are just so many things I feel like I'm sacrificing to work in the field, and being an independent researcher right now feels so shaky. That said, I've weighed the pros and cons and still feel like it's worth it for me. This was only something I truly felt in my bones a few months after taking the leap. It was in the back of my mind for ~5 years (and two 80k coaching calls) before I decided to try it out.

With respect to your engineering skills: I'm going to start working on tools that are explicitly designed for alignment researchers (https://www.lesswrong.com/posts/a2io2mcxTWS4mxodF/results-from-a-survey-on-tool-use-and-workflows-in-alignment), and having designers and programmers (web devs) involved would probably be highly beneficial. Unfortunately, I only have funding for myself for the time being, but it would be great to have some people who want to contribute. I'd consider doing AI Safety mentorship as a work trade.

I honestly feel like software devs could probably still keep their high-paying jobs and just donate a bit of time and expertise to help independent researchers if they want to start contributing to AI Safety.

Thank you for sharing your thoughts! It's helpful to know that others have been struggling with a similar situation.

Through investigating the need and potential for projects, it seems there are roughly two main areas for engineers:

  1. Research engineering, which seems to be essentially about helping researchers build prototypes and run models as smoothly as possible.
  2. Various meta-projects that grow the talent pool or enable knowledge to be gained and shared more efficiently. The ones in the post you linked fall under this.

It seems like getting (more useful) summaries of... (read more)

Great questions. 
On question 4, I don't personally know of any groups based in Asia, but feel free to check out this database of AI Safety relevant communities, and join any of them. 

Thank you for the link, I found several collections of links and more introductory information through it. This was very helpful for finding out about potentially relevant courses and opportunities.

After looking into other resources, it seems like this is the best overview of what everyone in the field is doing: 

https://www.alignmentforum.org/posts/QBAjndPuFbhEXKcCr/my-understanding-of-what-everyone-in-technical-alignment-is

Both OpenAI and DeepMind are hiring for software engineers right now, as per their website!

Yes, and they would have been my number one picks some years ago. However, I'm no longer convinced that they are progressing AI safety measures at the same speed they're pushing for new capabilities. Intuitively it feels unsustainable (and potentially very dangerous), which is why I'm being more cautious now.

That being said, I'm very glad that both companies are putting effort into growing their safety teams, and hope that they continue to do so.

Hello all,

There might currently be a security breach on the EA Forum. Please do not share your contact information or that of anyone you know until the situation is resolved, tomorrow at the earliest.

Forum dev here. The author of this comment has privately communicated to me that they believe someone is creating fraudulent Forum accounts. If someone DMs you, please consider that they might be a journalist looking to quote you uncharitably, a scammer trying to get personal information, etc.

We have no evidence that we have suffered a breach of our security in the traditional sense of the word (e.g. database being hacked).

1 Comment

Given the similarities, why are there so many different organizations? How is an outsider supposed to know what makes each of them unique? ... What makes these different from each other?

You are not the first person to ask questions like this, and I feel like I often hear about people trying to make databases of researchers and orgs, or give overviews of the field, etc. But the situation you describe just seems... normal, to me. Most industries and areas, even pretty niche ones, have many different organizations with slightly different outlooks and no central way to navigate them and learn about them. Most research fields in academia are like this too: there isn't a great way to get to know everyone in the field, where they work, what they work on, etc. It just takes years of talking to people, reading things, and meeting them at conferences, and you slowly build up the picture.

I don't think it's silly or bad to ask what you're asking; that's not what I'm saying, and I may well be wrong anyway. But to my mind, the situation wouldn't necessarily have a good 'explanation' that someone can just hand to you, and the usual way to find out the sorts of things you want to know would be to just continue immersing yourself in the field (from what I hear from most people who start figuring out this field, ~15 hours is not very long).
