(click "see more")
Many people who want to build a side project want to build a website that does something, or an "app" that does something (and could just be a website that can be opened on a smartphone). So I want to add recommendations to save some searching:

Tech stack recommendations:

Backend recommendations:
I'm much less confident about this.

Things that automatically override this advice:

Link to my coaching post. (apparently I'm doing some of this?)
My frank opinion is that the way to avoid advancing capabilities is to keep the results private, and especially not to share them with frontier labs.
((
making sure I'm not missing our crux completely: Do you agree:
))
I also think that a lot of work that is branded as safety (for example, work developed by a team called the safety team or alignment team) could reasonably be considered to be advancing "capabilities" (as the field is usually divided).
My main point: if you think advancing AI capabilities could be dangerous (which I do), I recommend checking the specific project you'd work on, and not only what it's branded as.
Zvi on the 80k podcast:
Zvi Mowshowitz: This is a place I feel very, very strongly that the 80,000 Hours guidelines are very wrong. So my advice, if you want to improve the situation on the chance that we all die for existential risk concerns, is that you absolutely can go to a lab that you have evaluated as doing legitimate safety work, that will not effectively end up as capabilities work, in a role of doing that work. That is a very reasonable thing to be doing.
I think that “I am going to take a job at specifically OpenAI or DeepMind for the purposes of building career capital or having a positive influence on their safety outlook, while directly building the exact thing that we very much do not want to be built, or we want to be built as slowly as possible because it is the thing causing the existential risk” is very clearly the thing to not do. There are all of the things in the world you could be doing. There is a very, very narrow — hundreds of people, maybe low thousands of people — who are directly working to advance the frontiers of AI capabilities in the ways that are actively dangerous. Do not be one of those people. Those people are doing a bad thing. I do not like that they are doing this thing.
And it doesn’t mean they’re bad people. They have different models of the world, presumably, and they have a reason to think this is a good thing. But if you share anything like my model of the importance of existential risk and the dangers that AI poses as an existential risk, and how bad it would be if this was developed relatively quickly, I think this position is just indefensible and insane, and that it reflects a systematic error that we need to snap out of. If you need to get experience working with AI, there are indeed plenty of places where you can work with AI in ways that are not pushing this frontier forward.
The transcript is from the 80k website; the episode is also linked in the post. The conversation continues with Rob replying that the 80k view is "it's complicated", and Zvi responding to that.
Hey :)
Looking at some of the engineering projects (which is the area closest to my field; see the sketch after the list):
- API Development: Create a RESTful API using Flask or FastAPI to serve the summarization models.
- Caching: Implement a caching mechanism to store and quickly retrieve summaries for previously seen papers.
- Asynchronous Processing: Use message queues (e.g., Celery) for handling long-running summarization tasks.
- Containerization: Dockerize the application for easy deployment and scaling.
- Monitoring and Logging: Implement proper logging and monitoring to track system performance and errors.
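To give a sense of scale, here's a minimal sketch of what the first three items might look like with FastAPI and Celery. This is my own illustration, not code from the post; the endpoint name, broker URL, and `summarize_paper` placeholder are all assumptions.

```python
from functools import lru_cache

from celery import Celery
from fastapi import FastAPI
from pydantic import BaseModel

# Celery app for long-running summarization jobs (broker URL is an assumption).
celery_app = Celery("summaries", broker="redis://localhost:6379/0")
app = FastAPI()

class Paper(BaseModel):
    paper_id: str
    text: str

def summarize_paper(text: str) -> str:
    # Stand-in for whatever summarization model the project actually uses.
    return text[:200]

# Caching: reuse summaries for previously seen papers (keyed here by the raw text).
@lru_cache(maxsize=1024)
def cached_summary(text: str) -> str:
    return summarize_paper(text)

# Asynchronous processing: enqueue long jobs instead of blocking the request.
@celery_app.task
def summarize_async(paper_id: str, text: str) -> str:
    return cached_summary(text)

# API: a single endpoint that serves a summary (cached if the paper was seen before).
@app.post("/summarize")
def summarize(paper: Paper) -> dict:
    return {"paper_id": paper.paper_id, "summary": cached_summary(paper.text)}
```

Dockerizing and wiring up logging/monitoring around something this size is similarly mechanical.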
I'm guessing Claude 3.5 Sonnet could do these things, probably using 1 prompt for each (or perhaps even all at once).
Consider trying it, if you haven't yet. You might not need any humans for this. Or if you already did, then oops and never mind!
Thanks for saving the world!
If you ever run another of these, I recommend opening a prediction market first for what your results are going to be :)
Every time Zvi posts something, it covers everything (or almost everything) important I've seen up to that point.
https://thezvi.substack.com/
Also in audio:
https://open.spotify.com/show/4lG9lA11ycJqMWCD6QrRO9?si=a2a321e254b64ee9
I don't know your own bar for how much time/focus you want to spend on this, but Zvi covers everything above some bar.
The main thing I'm missing is a way to learn what the good AI coding tools are. For example, I enjoyed this post:
https://www.lesswrong.com/posts/CYYBW8QCMK722GDpz/how-much-i-m-paying-for-ai-productivity-software-and-the