Australia has proposed introducing "mandatory guardrails" for AI in high-risk settings.
A proposals paper outlines preventative measures that would require developers and deployers of high-risk AI to take specific steps.
The guardrails have been developed in consultation with the government's AI Expert Advisory Group, whose members bring expertise in AI technologies, law and governance.
The proposals, now open for comment, include:
- A definition of high-risk AI
- 10 regulatory guardrails to reduce the likelihood of harms occurring from the development and deployment of AI systems
- Regulatory options to mandate guardrails, building on current work to strengthen and clarify existing laws.
The federal government also released a Voluntary AI Safety Standard for businesses whose use of AI is high risk.
“Australians want stronger protections on AI, we’ve heard that, we’ve listened,” said science minister Ed Husic.
“Australians know AI can do great things, but people want to know there are protections in place if things go off the rails.
“From today, we’re starting to put those protections in place.
“Business has called for greater clarity around using AI safely and today we’re delivering.
“We need more people to use AI and to do that we need to build trust.”
The 10 proposed mandatory guardrails would require organisations developing or deploying high-risk AI systems to:
1. Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance
2. Establish and implement a risk management process to identify and mitigate risks
3. Protect AI systems and implement data governance measures to manage data quality and provenance
4. Test AI models and systems to evaluate model performance and monitor the system once deployed
5. Enable human control or intervention in an AI system to achieve meaningful human oversight
6. Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content
7. Establish processes for people impacted by AI systems to challenge use or outcomes
8. Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks
9. Keep and maintain records to allow third parties to assess compliance with guardrails
10. Undertake conformity assessments to demonstrate and certify compliance with the guardrails