The Australian Government is considering how to regulate AI in Australia. It has published a discussion paper ("Safe and Responsible AI") and invited feedback by 26 July 2023:

"We want your views on how the Australian Government can mitigate any potential risks of AI and support safe and responsible AI practices."

Good Ancestors Policy (goodancestors.org.au/policy), with the support of EA and AI Safety community organisers in Australia, has coordinated Australians' submissions to the feedback process.

Today, the website Australians for AI Safety launched with a co-signed letter (media release). The letter calls on the relevant Australian Federal Minister, Ed Husic, to take AI safety seriously by:

  1. recognising the catastrophic and existential risks
  2. addressing uncertain but catastrophic risks alongside other known risks
  3. working with the global community on international governance
  4. supporting research into AI safety

Good Ancestors Policy has also held community workshops across Australia (e.g., Brisbane, Perth) to support members of the EA and AI Safety community in understanding the feedback process and preparing submissions, including access to some of the best evidence and arguments for acknowledging and addressing risks from AI. Policy ideas are drawn from the Global Catastrophic Risk Policy database (https://www.gcrpolicy.com/ideas), the AI Policy Ideas database (aipolicyideas.com), and expert community input.

So far, about 50 members of the community have attended a workshop. Feedback indicates the workshops have been very helpful: the majority (~75% of attendees) say they are likely or very likely (>80% likelihood) to make a submission, and most (~70%) say they would have been unlikely or very unlikely (<20% likelihood) to make a submission without the workshop.

If you're an Australian living in Australia or overseas and you'd like to make a submission to this process, there is one more online community workshop on Saturday 22 July at 3pm AEST (UTC+10). Register here for the workshop!

Contact Greg Sadler (greg@goodancestors.org.au) or Alexander Saeri (alexander@goodancestors.org.au) if you'd like to stay involved.

Comments

Thanks so much for the summary, Zan. The letter has attracted a really good spread of AI expertise in Australia and has given us a vehicle to talk to other experts and government advisors less focused on safety issues. The letter is also attracting a reasonable amount of media attention this morning.

It's hard to overstate how backwards the Australian government's leadership is on AI safety concerns at this point in time. If things continue as they are, it's essentially certain that the Australian government will be a sceptical voice in any multilateral negotiations relating to global agreements, standard-setting, and the like. Given Australia's geopolitical position, it would meaningfully harm global efforts if Australia were pulling in the wrong direction.

I'm really hopeful that this effort will have a meaningful impact in helping Australia correct course. This is a great start, but it will require sustained effort.
