Career choice
In-depth career profiles, specific job opportunities, and overall career guidance

Quick takes

186 karma · 2y · 5 comments
I'm going to be leaving 80,000 Hours and joining Charity Entrepreneurship's incubator programme this summer! The summer 2023 incubator round is focused on biosecurity and scalable global health charities, and I'm really excited to see what's the best fit for me and hopefully launch a new charity. The ideas the research team have written up look really exciting; I'm trepidatious about the challenge of being a founder but psyched to get started. Watch this space! <3

I've been at 80,000 Hours for the last 3 years. I'm very proud of the 800+ advising calls I did and feel very privileged that I got to talk to so many people and try to help them along in their careers! I've learned so much during my time at 80k, and the team there has been wonderful to work with: so thoughtful, committed to working out what the right thing to do is, kind, and fun. I'll for sure be sad to leave them.

There are a few main reasons why I'm leaving now:

1. New career challenge. I want to try out something that stretches my skills beyond what I've done before. I think I could be a good fit for being a founder and running something big, complicated, and valuable that wouldn't exist without me, and I'd like to give it a try sooner rather than later.

2. Stepping away a bit from EA community building after the recent EA crises. Events over the last few months in EA made me re-evaluate how valuable I think the EA community and EA community building are, as well as re-evaluate my personal relationship with EA. I haven't gone to the last few EAGs, and I switched my work away from advising calls for the last few months while processing all this. I have been somewhat sad that there hasn't been more discussion and change by now, though I have been glad to see more EA leaders share things recently (e.g. this from Ben Todd). I do still believe there are some really important ideas that EA prioritises, but I'm more circumspect about some of the things I think we're not doing as well as we could …
55 karma · 2y · 2 comments
Not all "EA" things are good: just saying what everyone knows out loud. (Copied over, with some edits, from a twitter thread.)

Maybe it's worth saying aloud the thing people probably know but that isn't always salient: orgs (and people) who describe themselves as "EA" vary a lot in effectiveness, competence, and values, and relying on the branding alone will probably lead you astray.

Especially for newer or less connected people, I think it's important to make salient that there are a lot of takes (positive and negative) on the quality of thought and output of different people and orgs, which from afar might blur into "they have the EA stamp of approval". Probably a lot of thoughtful people think whatever seems shiny in an "everyone supports this" kind of way is bad in a bunch of ways (though possibly net good!), and that granularity is valuable. I think you should feel very free to ask around to get these takes and see what you find; it's been a learning experience for me, for sure.

Lots of this is "common knowledge" to people who spend a lot of their time around professional EAs, so it doesn't even occur to them to say it, and it's sensitive to talk about publicly. But I think "some smart people in EA think this is totally wrongheaded" is a good prior for basically anything going on in EA.

Maybe at some point we should move to more explicit and legible conversations about each other's strengths and weaknesses (e.g. Oli Habryka talking about people with integrity here), but I haven't thought through all the costs there, and there are many. Curious for thoughts on whether this would be good!
87 karma · 4y · 4 comments
Reflection on my time as a Visiting Fellow at Rethink Priorities this summer.

I was a Visiting Fellow at Rethink Priorities this summer. They're hiring right now, and I have lots of thoughts on my time there, so I figured I'd share some. I had some misconceptions coming in, and I think I would have benefited from a post like this, so I'm guessing other people might, too. Unfortunately, I don't have time to write anything in depth for now, so a shortform will have to do.

Fair warning: this shortform is quite personal and one-sided. In particular, when I tried to think of downsides to highlight to make this post fair, few came to mind, so the post is very upsides-heavy. (Linch's recent post has a lot more on possible negatives about working at RP.) Another disclaimer: I changed in various ways during the summer, including in terms of my preferences and priorities. I think this is good, but there's also a good chance of some bias (I'm happy with how working at RP went because working at RP transformed me into the kind of person who's happy with that sort of work, etc.). (See additional disclaimer at the bottom.)

First, some vague background on me, in case it's relevant:

* I finished my BA this May with a double major in mathematics and comparative literature.
* I had done some undergraduate math research, had taught in a variety of contexts, and had worked at Canada/USA Mathcamp, but did not have a lot of proper non-academic work experience.
* I was introduced to EA in 2019.

Working at RP was not what I had expected (it seems likely that my expectations were skewed). One example of this was how my supervisor (Linch) held me accountable: accountability existed in a way that helped me focus on goals ("milestones") rather than making me feel guilty about falling behind. (Perhaps I had read too much about bad workplaces and poor incentive structures, but I was quite surprised and extremely happy about this.) This was a really helpful transition for me …
37 karma · 2y · 5 comments
Immigration is such a tight constraint for me. My next career steps after I'm done with my TCS Masters are primarily bottlenecked by "what allows me to remain in the UK" and only then by "what keeps me on track to contribute to technical AI safety research".

What I would like to do for the next 1-2 years ("independent research" / "further upskilling to get into a top ML PhD program") is not all that viable a path given my visa constraints. Above all, I want to avoid wasting N more years by taking a detour through software engineering again just to get visa sponsorship. [I'm not conscientious enough to pursue AI safety research / ML upskilling while managing a full-time job.]

I might just try to see if I can pursue a TCS PhD at my current university and do TCS research that I think would be valuable for theoretical AI safety research. The main drawback of that is I'd have to spend N more years in <city>, and I was really hoping to move down to London.

Advice very, very welcome. [Not sure who to tag.]
39 karma · 2y
I mostly haven't been thinking about what the ideal effective altruism community would look like, because it seems like most of the value of effective altruism might just come down to the impact it has on steering the world towards better AGI futures. But I think even in worlds where AI risk wasn't a problem, the effective altruism movement would seem lackluster in some ways.

I am thinking especially of the effect it often has on university students and younger people. My sense is that EA sometimes influences those people to be closed-minded, or at least doesn't contribute to making them as ambitious or as interested in exploring things outside "conventional EA" as I think would be ideal. Students who come across EA often become too attached to specific EA organisations or to paths to impact suggested by existing EA institutions.

In an EA community that was more ambitiously impactful, a higher proportion of folks would at least strongly consider doing things like:

* starting startups that could be really big;
* traveling to various parts of the world to form a view about how poverty affects welfare;
* keeping long google docs with their current best guesses for how to get rid of factory farming;
* looking at non-"EA" sources to figure out what more effective interventions GiveWell might be missing, perhaps because they're somewhat controversial;
* doing more effective science/medical research;
* writing something on better thinking and decision-making that could be as influential as Eliezer's sequences;
* expressing curiosity about whether charity is even the best way to improve human welfare;
* trying to fix science.

And a lower proportion of these folks would be applying to jobs on the 80,000 Hours job board, or choosing to spend more time within the EA community rather than interacting with the most ambitious, intelligent, and interesting people among their general peers.
20 karma · 2y · 2 comments
Put off that 80,000 Hours advises, "if you find you aren't interested in [The Precipice: Existential Risk], we probably aren't the best people for you to get advice from". I'd hoped there was more general advising, beyond just for those interested in existential risk.
13 karma · 2y · 1 comment
Things I'd say to people who are starting out with AI Safety.

Intro:

* I'm imagining someone "with a profession" (a mathematician / developer / product manager / researcher / something else) who's been following AI Safety through Scott Alexander or LW or so, and wants to do something more serious now.
* To be clear, I am absolutely unqualified to give any advice here, and everyone is invited to point out disagreements.
* I did this for ~3 months.
* This is not my normal "software career" advice (of which I'm much more confident).
* I'd rather be opinionated and wrong than add so many disclaimers here that my words mean nothing. You'll get my opinion.

So, some things I wish I'd known when I started all this, and other stuff that seems useful:

1. There is no clear "this is the way to solve AI Safety, just learn X and then do Y".
   * It's similar to how, maybe, there's no "this is the way to solve cancer, just learn X and then do Y", just much worse. With cancer we have, I guess, 1000 ideas or so, and many of them have already cured/detected/reduced cancer; at least with cancer there are clear things we want to learn (I think?). With AI Safety we have about 5-20 serious (whatever that means) ideas, and I can't personally say about any of them "omg, that would totally solve the problem if we could get it to work". Still, they each have some kind of upside (which I could make a cancer-research metaphor for), so for some definition of progress, they make progress.
2. Even worse, some solutions seem (to me and to many others) to cause more harm than good.
   * Historically, people who cared about AI Safety have pushed AI capabilities a LOT (which I think is bad).
3. Even worse, there is no consensus.
   * Really smart people are discussing this online but not coming to conclusions that are clear (to me), and they have (in my opinion) maybe the best online platform for healthy discourse in the world (LessWrong).
4. And so, looking at this s…
9 karma · 2y
I'm hiring for a new Director at Social Change Lab to lead our team! This is a hugely important role, so if anyone is at all interested, I encourage you to apply. If you have any questions, please feel free to reach out as well.

Social Change Lab is a nonprofit conducting and disseminating social movement research to help solve the world's most pressing problems. We're looking for a Director to lead our small team in delivering cutting-edge research on the outcomes and strategies of social movements, and ensuring widespread communication of this work to key stakeholders. You would play a significant role in shaping our long-term strategy and the programs we want to deliver. See more information below, the full job description here, and apply here.

* Application deadline: 2nd of June, 23:59 BST. Candidates will be considered on a rolling basis, so early applications are encouraged. Apply here.
* Contract: Permanent, working 37.5 hours/week.
* Location: London or UK preferred, although fully remote or overseas applications will also be considered.
* Salary: £48,000-£55,000/year, dependent on experience.

If anyone is interested or knows someone who might be a good fit, please share the job advert with them or let me know. You can also see some more context on the leadership change here.