TL;DR: If you want to publicly debate AI risk with us, email us at hello@conjecture.dev with information about yourself, suggested topics, and a suggested platform.
Public debates strengthen society and public discourse. They spread truth by testing ideas and filtering out weaker arguments. Moreover, debating ideas publicly forces people to stay coherent over time, or to adjust their beliefs when faced with new evidence.
This is why we need more public debates on AI development, as AI will fundamentally transform our world, for better or worse.
Most of us at Conjecture expect that advanced AI will be catastrophic by default, and that the only path to a good future runs through solving some very hard technical and social challenges.
However, many others inside and outside of the AI field have very different expectations! Some think very powerful AI systems are coming soon, but that it will be easy to control them. Others think very powerful AI systems are still very far away, and that there's no reason to worry yet.
Open debate about AI should start now, to discuss these and many more issues.
At Conjecture, we have a standing offer to publicly debate AI risk and progress in good faith.
If you want to publicly debate AI risk with us, email us at hello@conjecture.dev with information about yourself, suggested topics, and a suggested platform. By default, we prefer the debate to be a live discussion streamed on YouTube or Twitch. Given our limited time, we won't be able to accept every request, but we'll explain our reasoning when we decline. As a rule of thumb, we will prioritize people with more reach and/or prominence.
Relevant topics include:
- What are the reasons for and against expecting that the default outcome of developing powerful AI systems is human extinction?
- Is open-source development of powerful AI systems a good or bad idea?
- How far are we from existentially dangerous AI systems?
- Should we stop development of more powerful AI, or continue development towards powerful general AI and superintelligence?
- Is a global moratorium on development of superintelligence feasible?
- How easy or hard will it be to control powerful AI systems?
Here's a recent debate between Connor Leahy (Conjecture CEO) and Joseph Jacks (open-source software investor) on whether AGI is an existential risk, and another between Robin Hanson (Professor of Economics at GMU) and Jaan Tallinn (Skype co-founder and AI investor) on whether we should pause AI research.
For some of our stances on these topics, see recent public appearances from Connor (CEO) here and here.
An overview of our main research agenda is available here and here.
We ran a debate initiative in the past, but it focused on quite technical discussions with people already deep in the field of AI alignment. As AI risk enters the mainstream, the conversation should become much broader.
Two discussions that we published from that initiative:
- Christiano (ARC) and GA (Conjecture) Discuss Alignment Cruxes
- Shah (DeepMind) and Leahy (Conjecture) Discuss Alignment Cruxes
If a linked page doesn't load in your browser, try Cmd + Shift + R on Mac or Ctrl + F5 on Windows to hard-reload the page.
I think this is very much not true, and I'm pretty disappointed with this sort of "debate me" communications policy. In my experience, public debates very rarely converge toward truth. Lots of things sound good in a debate but break down under careful analysis, and the incentive to say things that play well to a public audience works against actual truth-seeking.
I understand and agree with the importance of good communication here, but imo this is really not the way to do it. Some alternative possibilities:
I'm sure there are a bunch more; these are just some ideas off the top of my head. In general, I think there are many ways to do public communication on complex, controversial topics that don't involve public debates, and I'd strongly encourage going in one of those alternative directions instead.
Cross-posted from LessWrong.