Zach Stein-Perlman

Research @ AI Impacts
3234 karma · Joined · Working (0-5 years) · Berkeley, CA, USA

Bio

AI forecasting & strategy at AI Impacts. Blog: Not Optional.

Participation: 1

Comments (327)

Thanks for this update.

Open Philanthropy . . . has covered our entire operations budget.

Where will funding for operations come from now? (And how much does EA Funds spend on operations?)

Thanks!

I don't know what this means:

Career Stage: Our interest lies in assisting grantees who are at the beginning of their careers, are contemplating a career shift towards an area of higher impact, or have accumulated several years of experience in their respective fields.

That sounds like everyone.

Separately from the other thread: the little evidence I'm aware of (Bing Chat, Sparks of AGI) suggests that Microsoft is bad on safety. I'm surprised they were included.

Edit: and I weakly think their capabilities aren't near the frontier, except for their access to OpenAI's stuff.

More stuff that could be integrated:

  • https://github.com/georgetown-cset/CSET-AIID-harm-taxonomy/blob/main/CSET%20V1%20AI%20Annotation%20Guide%20(with%20Schema%20and%20Field%20Descriptions)%2025Jul2023.pdf

Good question. Yeah, Meta AI tends to share its research and model weights, while OpenAI, Google DeepMind, and Anthropic seem to be becoming more closed. But more generally, those three labs seem to be concerned about catastrophic risk from AI, while Meta does not. Those three labs have alignment plans (more or less), do alignment research, are working toward good red-teaming and model evals, tend to support strong regulation that might be able to prevent dangerous AI from being trained or deployed, have leadership that talks about catastrophic risks, and have a decent chunk of staff concerned about catastrophic risks.

Sorry, I don't have time to provide sources for all these claims.

(Briefly: I of course agree that Meta AI is currently bad at safety, but I think a more constructive and less adversarial approach to them is optimal. And it doesn't seem that they're "frozen out"; I hope they improve their safety practices and join the FMF in the future.)

I disagree; last I checked, most AI safety research orgs think they could make more good hires with more money and see themselves as funding-constrained, at least all four that I'm familiar with: RP, GovAI, FAR, and AI Impacts.

Edit: also see the recent "Alignment Grantmaking is Funding-Limited Right Now" (note that most alignment funding on the margin goes to paying and supporting researchers, in the general sense of the word).

Yeah, I hear you. [Edit: well, I think it was the least aggressive way of saying what I wanted to say.]

(I note that, in addition to hyping, the post is kinda making an ask for funding for the three projects it mentions ("Some of our favorite proposals which could use more funding"), and I'm pretty uncomfortable with one-sidedness in funding-asks.)
