The federal government is considering mandatory guardrails for AI development and deployment in high-risk settings.
Canberra today released its interim response to the Safe and Responsible AI in Australia discussion paper, published in June last year.
“Australians understand the value of artificial intelligence, but they want to see the risks identified and tackled,” says Ed Husic, the minister for industry and science.
“We have heard loud and clear that Australians want stronger guardrails to manage higher-risk AI.”
Publishers are also pushing to be compensated for the use of their premium content to train AI models, and the government has established a copyright and artificial intelligence reference group.
The government will now consult on possible mandatory guardrails for AI, including:
- testing of products to ensure safety before and after release;
- transparency regarding model design and the data underpinning AI applications;
- labelling of AI systems in use and/or watermarking of AI-generated content;
- training for developers and deployers of AI systems;
- possible forms of certification;
- clearer expectations of accountability for organisations developing, deploying and relying on AI systems.