I have battled with cause prioritization for years. I took a hard turn early in my career that set me back years. I remember it being emotionally difficult to be in the middle of a potentially large career change, and even more difficult to lock in the decision. Hopefully I can say something useful.
First of all, emotion drives us. It's a force multiplier on all the other factors that feed into the amount of impact you can have. Think of all the other parameters except your drive as a lever you can pull, and your drive as the force you put on that lever. What happens if your drive (the applied force) is small versus large?
There are many parameters in the equation of how much impact you can have in a field. To mention a few: your experience and track record in the field, your reputation and the strength of your network, your more general skills and knowledge, and, of course, your drive to get stuff done. I would think about which of these drives you the most.
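To make the lever metaphor a bit more concrete, here is a toy sketch of the multiplicative view (my own illustrative framing, not a precise model; the factor names, scores, and functional form are assumptions):

```python
def impact(drive, experience, reputation, network, skills):
    """Toy model: drive acts as a force multiplier on the 'lever' of other factors.

    All inputs are rough 0-1 scores; the functional form is purely illustrative.
    """
    lever = experience + reputation + network + skills  # the lever you can pull
    return drive * lever                                # drive is the force on it

# A long lever barely moves with little force applied ...
print(round(impact(drive=0.1, experience=0.9, reputation=0.9, network=0.9, skills=0.9), 2))  # 0.36
# ... while strong drive accomplishes a lot even with a shorter lever.
print(round(impact(drive=0.9, experience=0.5, reputation=0.5, network=0.5, skills=0.5), 2))  # 1.8
```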
Re-training is a bitter pill, for sure, but I wouldn't think too much about that at this (exploration?) phase. Mapping one field's system of thinking onto new domains is often very fruitful in serendipitous ways (I don't know the effect of this in your particular case, but it is generally so).
And I wouldn't think of this as irrational at all; emotion spurs action. How we are driven to make the world a better place shouldn't be discounted.
If you really are burning for a specific cause, then I would definitely take that as a strong signal that the question should be investigated further, even if it isn't seen as effective or altruistic, or doesn't score high on the ITN framework.
I would love for someone to research the global AIS upskilling pipeline / funnel, to gain a better understanding of the supply of and demand for seats.
This would enable field builders to better understand where the bottlenecks are (early vs. late stage). In doing so, we could hopefully create better ToCs on where and how to intervene in the system, toward the goal of creating more well-suited and cost-effective programs, or otherwise increasing the amount of talent going into the field.
An MVP version of the analysis could be done by reaching out to the current upskilling programs and asking them something along the lines of: how many of the applicants you couldn't admit are you somewhat confident could become strong AIS contributors?
In my mind, this would include programs such as MATS, PIBBS, ARENA, AISF, and others.
In a perfect world with many more resources, the analysis would also include how academia, industry, and governments position themselves in the global "pipeline", and how they enable people to become AIS contributors.
Edit: minor.
I also imagine that part of the reason it hasn't caught on is that people simply don't know it exists.
Have you considered cold-emailing people who could plausibly find this valuable, for example by finding potential people from a list such as this one?
Or sending cold-emails to orgs such as these, asking if you could give them a quick presentation (where there is potential EV) with a Q&A at the end?
My intuition tells me that this is an obviously valuable service for many, but, like many good SaaS products, it risks dying not because it isn't good, but because it doesn't reach a critical mass of users soon enough.
Great stuff! Strongly upvoted.
I just had an idea: it could be valuable to have a monthly or bi-monthly newsletter for people who want to stay up to date with new developments in the AIS ecosystem, but who don't find the time to scroll through EAF, LW, etc. on a regular basis to keep themselves updated.
Thanks for sharing the idea. Question: you've written that there wasn't sufficient interest, and I assume that includes OP, SFF, LTFF, (...). Is that correct? Wouldn't at least a weak form / pilot run of this be an attractive idea to them? I'd be surprised if there weren't any interest from these actors. If there wasn't any interest, what was the reason (if not confidential)?
Thanks.
Strong upvote.
GPT-1 was released in 2018. GPT-4 has shown sparks of AGI.
We have early evidence of self-improvement, or, more conservatively, positive feedback loops are evident.
OpenAI intends to build ~AGI to automate alignment research. Sam Altman is attempting to raise $7T to build more GPUs.
Anthropic's CEO estimates 2-3 years until AGI.
Meta has gone public about their goal of open-sourcing AGI.
Superalignment might even be impossible.
It seems difficult to defend the world against rogue AGIs, and it seems difficult for aligned AIs to defend us.
I see the option now; it was hidden behind the three dots. Thanks! :)