Safe and responsible AI in Australia
The Australian Federal Government has released its interim response (25 pages) to the Safe and Responsible AI in Australia consultation.
The response identifies low public trust in AI systems as a handbrake on business adoption and public acceptance.
The government recognises that many applications of AI do not present risks requiring a regulatory response; however, it will consider ‘mandatory safeguards for those who develop or deploy AI systems in legitimate, high-risk settings. This will help ensure AI systems are safe when harms are difficult or impossible to reverse.’
‘While the government considers mandatory guardrails for AI development and use and next steps, it is also taking immediate action through:
working with industry to develop a voluntary AI Safety Standard, implementing risk-based guardrails for industry
working with industry to develop options for voluntary labelling and watermarking of AI-generated materials
establishing an expert advisory body to support the development of options for further AI guardrails.’
The principles guiding the interim response:
Risk-based approach. The Australian Government will use a risk-based framework to support the safe use of AI and prevent harms occurring from AI.
Balanced and proportionate. The Australian Government will avoid unnecessary or disproportionate burdens for businesses, the community and regulators.
Collaborative and transparent. The Australian Government will be open in its engagement and work with experts from across Australia in developing its approach to the safe and responsible use of AI.
A trusted international partner. Australia will act consistently with the Bletchley Declaration and leverage its strong foundations and domestic capabilities to support global action to address AI risks.
Community first. The Australian Government will place people and communities at the centre when developing and implementing its regulatory approaches.
According to Ed Husic, Minister for Industry and Science (media release):
“Australians understand the value of artificial intelligence, but they want to see the risks identified and tackled.
“We have heard loud and clear that Australians want stronger guardrails to manage higher-risk AI.
“The Albanese government moved quickly to consult with the public and industry on how to do this, so we start building the trust and transparency in AI that Australians expect.
“We want safe and responsible thinking baked in early as AI is designed, developed and deployed.”