Link to my coaching post.
(apparently I'm doing some of this?)
I also think that a lot of work branded as safety (for example, work developed in a team called the safety team or alignment team) could reasonably be considered to be advancing "capabilities" (as the field is often divided into safety and capabilities).
My main point is that I recommend checking the specific project you'd work on, and not only what it's branded as, if you think advancing AI capabilities could be dangerous (which I do think).
Zvi on the 80k podcast:
Zvi Mowshowitz: This is a place I feel very, very strongly that the 80,000 Hours guidelines are very wrong. So my advice, if you want to improve the situation on the chance that we all die for existential risk concerns, is that you absolutely can go to a lab that you have evaluated as doing legitimate safety work, that will not effectively end up as capabilities work, in a role of doing that work. That is a very reasonable thing to be doing.
I think that “I am going to take a job at specifically OpenAI or DeepMind for the purposes of building career capital or having a positive influence on their safety outlook, while directly building the exact thing that we very much do not want to be built, or we want to be built as slowly as possible because it is the thing causing the existential risk” is very clearly the thing to not do. There are all of the things in the world you could be doing. There is a very, very narrow — hundreds of people, maybe low thousands of people — who are directly working to advance the frontiers of AI capabilities in the ways that are actively dangerous. Do not be one of those people. Those people are doing a bad thing. I do not like that they are doing this thing.
And it doesn’t mean they’re bad people. They have different models of the world, presumably, and they have a reason to think this is a good thing. But if you share anything like my model of the importance of existential risk and the dangers that AI poses as an existential risk, and how bad it would be if this was developed relatively quickly, I think this position is just indefensible and insane, and that it reflects a systematic error that we need to snap out of. If you need to get experience working with AI, there are indeed plenty of places where you can work with AI in ways that are not pushing this frontier forward.
The transcript is from the 80k website, and the episode is also linked in the post. The conversation continues with Rob replying that the 80k view is "it's complicated", and Zvi responding to that.
Hey :)
Looking at some of the engineering projects (the area closest to my field):
- API Development: Create a RESTful API using Flask or FastAPI to serve the summarization models.
- Caching: Implement a caching mechanism to store and quickly retrieve summaries for previously seen papers.
- Asynchronous Processing: Use message queues (e.g., Celery) for handling long-running summarization tasks.
- Containerization: Dockerize the application for easy deployment and scaling.
- Monitoring and Logging: Implement proper logging and monitoring to track system performance and errors.
I'm guessing Claude 3.5 Sonnet could do these things, probably using 1 prompt for each (or perhaps even all at once).
Consider trying this, if you haven't yet. You might not need any humans for it. Or if you already have, then oops and never mind!
Thanks for saving the world!
If you ever run another of these, I recommend opening a prediction market first for what your results are going to be :)
I'm not sure how to answer this, so I'll give it a shot; tell me if I'm off:
Because they usually take more time, and are less effective at getting someone hired, than:
- Do an online course
- Write 2-3 good side projects
For example, in Israel pre-covid, having a CS degree (one that wasn't outstanding) was mostly not enough to get interviews, but 2-3 good side projects were, and the standard advice for people who finished degrees was to go do 2-3 good side projects. (This is based on an org that did a lot of this; hopefully I'm representing them correctly.)
There is more that I can say about this, but I'm not sure I'm even answering the question.
Also note that the main point of this post is to recommend people do side projects, as opposed to recommending they don't get a CS degree. Maybe another point is "don't try to learn all the topics you heard about before you apply to any job", which is also important.
My own intuition on what to do in this situation is to stop trying to change your reputation using disclaimers.
There's a lot of value in having a job board with high impact job recommendations. One of the challenging parts is getting a critical mass of people looking at your job board, and you already have that.
My frank opinion is that the solution to not advancing capabilities is keeping the results private, and especially not sharing them with frontier labs.
((
making sure I'm not missing our crux completely: Do you agree:
))