The government plans to set up an expert advisory group on AI policy development, develop a voluntary "AI Safety Standard" for businesses integrating AI technology, and consult with industry on new transparency measures. Mandatory safeguards are also being considered, including pre-deployment risk and harm-prevention testing of new products and accountability measures for software developers. The government is also examining the merits of a voluntary code requiring watermarks or labelling on AI-generated content. This comes in addition to the federal government's existing work to change online safety laws and require tech companies to stamp out harmful AI-created material.
Key takeaways:
- The Australian government is considering asking tech companies to watermark or label content generated by artificial intelligence (AI) as it grapples with the rapid evolution of high-risk AI products.
- Industry and Science Minister Ed Husic will release the government's response to a consultation on safe and responsible AI in Australia, which suggests that adopting AI and automation could grow Australia's GDP by up to $600bn a year.
- The government plans to set up an expert advisory group on AI policy development, develop a voluntary AI Safety Standard, and consult with industry on new transparency measures.
- There are also plans to begin work with industry on the merits of a voluntary code for watermarking or labelling AI-generated content, and to review laws relevant to AI, such as the use of AI to generate deepfakes and potential copyright infringement in training generative AI models.