What are we pushing for? E.g., which policies are both good to aim for and realistic? What should we push labs to do?
What is a general idea we think the public could get behind and which sufficiently tracks AI risk prevention?
Big questions! The first seems like a question for everyone working on this, not just communications people. I'd be interested to know what comms can offer before these questions are settled. What kind of priming or key understandings are most important to get across to people now, to make later persuasion easier?
Who is doing this kind of comms work right now and running into these bottlenecks? Through what orgs? Are they the same organizations doing the policy development work?