
Australia may ask tech companies to label content generated by AI platforms such as ChatGPT

Jan 16, 2024 - theguardian.com
The Australian federal government is considering asking tech companies to watermark or label content generated by AI platforms such as ChatGPT, as it grapples with the rapid evolution of "high risk" AI products. The industry and science minister, Ed Husic, will release the government's response to a consultation process on safe and responsible AI in Australia, which suggests that adopting AI and automation could grow Australia's GDP by up to $600bn a year. However, the response also highlights public concern about the technology and the need for stricter regulation of some applications, such as self-driving cars and programs that assess job applications.

The government plans to set up an expert advisory group on AI policy development, develop a voluntary "AI Safety Standard" for businesses integrating AI, and consult with industry on new transparency measures. Mandatory safeguards are also being considered, including pre-deployment risk and harm testing of new products and accountability measures for software developers. The government is also examining the merits of a voluntary code for watermarking or labelling AI-generated content. This comes in addition to existing federal work to change online safety laws and require tech companies to stamp out harmful AI-created material.

Key takeaways:

  • The Australian government is considering asking tech companies to watermark or label content generated by artificial intelligence (AI) as it grapples with the rapid evolution of high-risk AI products.
  • Industry and Science Minister Ed Husic will release the government's response to a consultation on safe and responsible AI in Australia, which suggests that adopting AI and automation could grow Australia's GDP by up to $600bn a year.
  • The government plans to set up an expert advisory group on AI policy development, develop a voluntary AI Safety Standard, and consult with the industry on new transparency measures.
  • There are also plans to work with industry on the merits of a voluntary code for watermarking or labelling AI-generated content, and to review AI-related laws, such as those covering AI-generated deepfakes and potential copyright infringement in training generative AI models.