Working at the Cooperative AI Foundation. I have a background in engineering and entrepreneurship, previously ran a small non-profit focused on preventing antibiotic resistance, and worked for EA Sweden. I received an EA Infrastructure grant for cause exploration in meta-science during 2021-22.
Thank you Shaun!
I found myself wondering where we would fit AI Law / AI Policy into that model.
I would think policy work might be spread out over the landscape? For example, if we think of policy work aiming to establish the use of certain evaluations of systems, such evaluations could target different kinds of risks/qualities that would map to different parts of the diagram?
Interesting perspective!
I personally believe that many, if not most, of the world's most pressing problems are political problems, at least in part.
I agree! But if this is true, doesn't it seem very problematic if a movement that means to do the most good does not have tools for assessing political problems? I think you may be right that we are not great at that at the moment, but it seems... unambitious to just accept that?
I also think that many people in EA do work on political questions, and my guess would be that some do it very well - but that most of those do it in a full-time capacity, which is something different from "citizen politics". Could it be that, rather than EA being poorly suited to assessing political issues, EA does not (yet) have great tools for assessing part-time activism, which would be a much narrower claim?
Thanks for commenting!
I think there are two different things to figure out: 1) should we engage with the situation at all? and 2) if we engage, what should we do/advocate for?
I might be wrong about this, but my perception so far is that many EAs, based on some ITN reasoning, answer the first question with a no, and then the second question becomes irrelevant. My main point here is that I think it is likely that the answer to the first question is yes?
For this specific case I personally believe that a ceasefire would be more constructive than the alternative, but even if you disagree with that, it would not automatically follow that the best thing is not to engage at all. Or do you think it does?
Thanks, I'm glad you found it useful!
- Having spent a couple of months working on this topic, do you still think AI science capabilities are especially important to explore, compared to AI in other contexts? I ask because I've been thinking and reading a lot about this recently, and I keep changing my mind about the answer.
Answering just for myself and not for the team: I don't have a confident answer to this. I have updated in the direction that capabilities for autonomous science work are more similar to general problem-solving capabilities than I thought previously. I think that means that these capabilities could be more likely to emerge from a powerful general model than from a narrow "science model".
Still, I think there is something specific about how the scientific process develops new knowledge and then builds on that, and how new findings can update the world-view in a way that might discredit a lot of the previous training data (or change how it's interpreted).
Thank you so much for this post! It is SO nice to read about this in a framing that is inspiring/positive - I think it's unavoidable, and not wrong, that we often focus on criticism and problem description in relation to diversity/equality issues, but that can also make it difficult and uninspiring to work on improvement. I love the framing you have here!
Thanks - yes, I agree, and the study of collusion is often included in the scope of cooperative AI (e.g. methods for detecting and preventing collusion between AI models are among the priority areas of our current grant call at the Cooperative AI Foundation).