The government aims to differentiate between "high risk" and "low risk" artificial intelligence (AI) applications, with the former including the production of manipulated content such as "deepfakes," and the latter including the screening of spam emails. Safeguards under consideration include product testing requirements, transparency about model design and the data underpinning AI applications, training programs for AI system developers, and potential certification schemes. The government is also watching how other nations, including the US, Canada, and the EU, are addressing AI-related issues.
Key takeaways:
- Australia plans to establish its own AI advisory body and guidelines to mitigate AI risks, developed in consultation with industry bodies and experts.
- The government is considering enacting new legislation or amending existing laws to impose safeguards on the development and application of AI in high-risk settings.
- As immediate steps, the government is working with industry to develop a voluntary AI safety standard and to explore options for watermarking and labeling AI-generated content.
- Australia is closely monitoring how other jurisdictions, including the US, Canada, and the EU, are addressing the issues raised by AI, and is committed to collaborating internationally to shape global efforts in this domain.