I'm Buck Shlegeris, CTO of Redwood Research, a nonprofit focused on applied alignment research. Read more about us here: https://www.redwoodresearch.org/
For what it’s worth, GPT-4 knows what "rat" means in this context: https://chat.openai.com/share/bc612fec-eeb8-455e-8893-aa91cc317f7d
I think this is a great question. My answers:
My attitude, and the attitude of many of the alignment researchers I know, is that this problem seems really important and neglected, but overall we don't want to stop working on alignment in order to work on it. If I spotted a research opportunity in this space that looked surprisingly good (e.g. if, for some reason, I thought I'd be 10x as productive as usual while working on it), I'd probably take it.
It's plausible that I should spend a weekend sometime trying to really seriously consider what research opportunities are available in this space.
My guess is that a lot of the skills involved in doing a good job of this research are the same as the skills involved in doing good alignment research.
I don't think Holden agrees with this as much as you might think. For example, he has spent a lot of his time over the last year or two writing a blog.
I found Ezra's grumpy complaints about EA amusing and useful. Maybe 80K should arrange to have more of their guests' children get sick the day before they tape the interviews.