It sounds like you’re a fairly senior software engineer, so my first thought is to look at engineering roles at AI safety orgs. There are a bunch of them! You’ve probably already seen this post, but just in case: AI Safety Needs Great Engineers.
It sounds to me like you’re concerned about a gap between the type of engineering work you’re good at and the type of engineering work that AI safety orgs need. This is something I’ve also been thinking about a lot recently. I’m a full stack developer for a consumer product, which means I spend a lot of time discussing plans with product managers, writing React code, and sometimes working on backend APIs. Meanwhile, it seems like AI safety orgs mostly need great backend engineers who are very comfortable setting up infrastructure and working with distributed systems, and/or machine learning engineers.
This suggests two options to me, if you want to stay focused on software engineering rather than research or something else:

1. Find an AI safety org that needs the kind of engineering you’re already good at (e.g. full stack or product work).
2. Ramp up on the skills these orgs seem to need most — backend infrastructure, distributed systems, and/or machine learning engineering.
I’m personally trying to decide between these options right now. The first thing to check is whether you feel excited at all about option 2. If ramping up in those new areas sounds super unpleasant, then I think you can rule that option out right away. But if you feel excited about both options and think you could be successful at either (which is the situation I’m in), then it’s a tougher question. I’m planning to talk to a bunch of AI safety folks at EAG in a few weeks to help figure out how to maximize my impact, and I hope to have more clarity on the matter then. I’ll update this comment afterwards if I have anything new to add.