The Federal Government wants Australia to adopt rules similar to those used in Europe and America for the use of artificial intelligence, with bans for systems that pose a danger to society and mandatory guardrails for high-risk use of the technology.
Science Minister Ed Husic unveiled his ideas on the topic on Thursday (5 September), saying regulating AI was one of the most complex policy issues facing governments across the world.
“The Australian Government is determined that we put in place the measures that provide for the safe and responsible use of AI in this country,” the Minister said.
He released a discussion paper for the next round of consultations on guardrails to be implemented.
Mr Husic said high-risk uses of AI should only be introduced after rigorous testing to ensure they mitigated risks while still allowing human control and intervention.
“Australians want stronger protections on AI. We’ve heard that, we’ve listened,” he said.
“Australians know AI can do great things, but people want to know there are protections in place if things go off the rails.
“From today, we’re starting to put those protections in place. Business has called for greater clarity around using AI safely, and today, we’re delivering. We need more people to use AI, and to do that, we need to build trust.”
The Minister released a new Voluntary AI Safety Standard with immediate effect.
It aims to provide practical guidance for businesses whose use of AI is high risk, so they can start implementing best practice now.
It is also a measure to give businesses certainty ahead of the government implementing mandatory guardrails.
The voluntary standard includes 10 guardrails that range across a number of areas, including accountability, testing, risk mitigation, record-keeping and engagement with stakeholders.
Mr Husic said the move was in step with similar actions in other international jurisdictions – including the EU, Japan, Singapore, and the US – and that the standard would be updated over time to keep pace with changes in best practice.
“This new guidance will help domestic businesses grow, attract investment and ensure Australians enjoy the rewards of AI while managing the risks,” he said.
The Responsible AI Index 2024, commissioned by the National AI Centre, shows that Australian businesses consistently overestimate their capability to employ responsible AI practices.
It found that 78 per cent of Australian businesses believed they were implementing AI safely and responsibly, but only 29 per cent were actually doing so.
The index surveyed 413 executive decision-makers who are responsible for AI development across financial services, government, health, education, telecommunications, retail, hospitality, utilities and transport.
Businesses were assessed on 38 identified responsible AI practices across five dimensions: accountability and oversight; safety and resilience; fairness; transparency and explainability; and contestability.
The index found that, on average, Australian organisations are adopting only 12 out of 38 responsible AI practices.
“We know AI can be hugely helpful for Australian business, but it needs to be used safely and responsibly,” Mr Husic said.
“The Albanese government has worked with business to develop standards that help identify and manage risks.
“This is a practical process that they can put into use immediately, so protections will be in place.
“Artificial intelligence is expected to create up to 200,000 AI-related jobs in Australia by 2030 and contribute $170 billion to $600 billion to GDP, so it’s crucial that Australian businesses are equipped to properly develop and use the technology.”
Earlier this year, the government appointed an AI expert group to guide its next steps and the group’s work informed the government’s proposal paper for mandatory guardrails.
There are three key elements being discussed: introducing a definition of high-risk AI, 10 mandatory guardrails and three regulatory options to mandate the guardrails.
The three regulatory approaches being considered are: adopting the guardrails within existing regulatory frameworks as needed, introducing new framework legislation to adapt existing regulatory frameworks across the economy, and introducing a new cross-economy AI-specific law such as an Australian AI Act.
The Consultation on the Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings is open for four weeks, closing 5 pm AEST on Friday 4 October 2024.
Original Article published by Chris Johnson on Riotact.