19 January 2024

AI might need some mandatory guardrails, government says

By Chris Johnson

AI can be good, but there must be safeguards. The government is responding to concerns. Photo: File.

The Federal Government is considering mandatory guardrails for AI development to help ensure the technology is safe and used responsibly. The safeguards would also apply to the deployment of AI in high-risk settings.

Releasing the government’s interim response to the Safe and Responsible AI in Australia consultation, Industry and Science Minister Ed Husic said it was clear that while AI has immense potential to improve wellbeing and grow the economy, Australians want stronger protections to help manage the risks.

“Australians understand the value of artificial intelligence, but they want to see the risks identified and tackled,” Mr Husic said.

“We have heard loud and clear that Australians want stronger guardrails to manage higher-risk AI.

“We want safe and responsible thinking baked in early as AI is designed, developed and deployed.”

The government has sought views on mitigating any potential risks of AI and supporting safe and responsible AI practices.

Mandatory guardrails being considered relate to:

- testing of products before and after release to ensure safety;
- transparency regarding model design and the data underpinning AI applications;
- labelling of AI systems in use and/or watermarking of AI-generated content; and
- training for developers and deployers of AI systems, which could include possible forms of certification and clearer expectations.

Consultation on possible mandatory guardrails is ongoing, but Mr Husic said some immediate actions are already being taken.

These include working with industry to develop a voluntary AI safety standard and options for voluntary labelling and watermarking of AI-generated materials.

They also include establishing an expert advisory group to support the development of options for mandatory guardrails, including accountability measures for organisations developing, deploying and relying on AI systems.

Australia is not the only country looking at ways to mitigate any emerging risks of technologies such as AI. Some jurisdictions favour voluntary approaches, while others are pursuing more rigorous regulations.

Mr Husic said Australia was closely monitoring how other jurisdictions are responding to the challenges of AI, including initial efforts in the EU, the US and Canada.

He said that, building on its engagement at the UK AI Safety Summit in November, the government would continue to work with other countries to shape international efforts in this area.

The consultation discussion paper says AI is already improving many aspects of people’s lives, but the speed of innovation in AI could pose new risks. This creates uncertainty and gives rise to public concerns.

“Australia has strong foundations to be a leader in responsible AI,” the discussion paper states.

“We have world-leading AI research capabilities and are early movers in the trusted use of digital technologies.

“Australia established the world’s first eSafety Commissioner in 2015 to safeguard Australian citizens online and was one of the earliest countries to adopt a national set of AI Ethics Principles.

“This consultation will help ensure Australia continues to support responsible AI practices to increase community trust and confidence.”

The government’s interim response is targeted towards AI in high-risk settings where it believes harms could be difficult to reverse.

It wants all low-risk AI use to continue to flourish without impediments.

“In broad terms, what we’ve wanted to do is get the balance right with the work that we’ve undertaken so that we can get the benefits of AI while fencing up the risks and looking at realising that a lot of the way in which AI is used is low risk and beneficial,” Mr Husic told ABC Radio.

“What we want to do is get some of the best minds, through an expert advisory panel, to basically define those areas of high risk that will require a mandatory response, and that we also spell out what the consequences potentially are for not doing so.

“Our preference is to be able to work with industry and other people that are interested in this space, to be able to get a uniform, cooperative approach to this.

“And that’s why we’re staging it, developing a voluntary safety standard initially, and then scaling and laddering that up to mandatory guardrails longer term.”

Original Article published by Chris Johnson on Riotact.
