27 September 2023

Australia takes first steps to regulate AI


Jake Evans* says artificial intelligence technologies could be classified by risk, as government consults on AI regulation


The federal government has outlined its intention to regulate artificial intelligence, saying there are gaps in existing law and new forms of AI technology will need “safeguards” to protect society.

While artificial intelligence technologies have existed in some sectors for years, recent “generative AI” tools including chatbot ChatGPT and image generator Midjourney have accelerated excitement — and anxiety — about their potential to reshape society.

Releasing advice from the National Science and Technology Council (NSTC) as well as a discussion paper on AI, Industry and Science Minister Ed Husic said the time had come to consider targeted regulation.

“While Australia already has some safeguards in place for AI and the responses to AI are at an early stage globally, it is not alone in weighing whether further regulatory and governance mechanisms are required to mitigate emerging risks,” the government’s discussion paper states.

“Our ability to take advantage of AI … will be impacted by the extent to which Australia’s responses are consistent with responses overseas.”

The NSTC said while the full risks and opportunities of AI were difficult to predict, in the near-term generative AI technologies “will likely impact everything from banking and finance to public services, education and creative industries”.

The minister has outlined two goals for the government: first, to ensure businesses can confidently and responsibly invest in AI technologies; and second, to ensure “appropriate safeguards” are in place, particularly for high-risk tools.

AI is already covered by existing privacy, consumer protection, copyright, criminal and other laws, and a review of the Privacy Act, which proposes to tighten data protection laws, is underway.

The paper says while adjustments to “general” regulations such as those can help to address developments in AI, “sector-specific” laws or standards may also be necessary.

World moving towards risk-based approach to classify AI

Australia was one of the first countries in the world to introduce an ethics framework for “responsible” AI in 2018.

Since then, several nations including the United States, the European Union, the United Kingdom and Canada have introduced legislation or made plans to regulate AI, while Australia’s AI framework has remained voluntary.

High-profile developers of AI, including Microsoft and OpenAI, the company behind ChatGPT, have urged governments to introduce laws to limit the damage the technologies could do.

The discussion paper canvasses possible options for regulating AI, in particular pointing to “risk-based” classification systems being drafted in Canada and the European Union, which would rate AI tools and impose tiered restrictions based on their potential for harm.

As one possible response, the federal government has proposed a three-tiered system for Australia that would classify AI tools as low, medium or high risk, with obligations increasing for higher risk classifications.

A low-risk tool such as an AI chess player might only require a “basic” self-assessment, user training and internal monitoring, while a high-risk tool such as an AI surgeon could require peer-reviewed impact assessments, public documentation, meaningful human interventions, recurring training and external auditing.
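
To make the proposed tiers concrete, the following is a minimal illustrative sketch in Python of how such a tiered obligations table might be represented. The tier names and obligations are assumptions paraphrased from the paper’s own examples above; the discussion paper itself does not prescribe any implementation, and it does not enumerate medium-risk obligations.

```python
from enum import Enum


class RiskTier(Enum):
    """Hypothetical tiers mirroring the three-tier model floated in the paper."""
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


# Illustrative obligations per tier, paraphrasing the paper's examples
# (an AI chess player vs an AI surgeon). Not an official schedule.
OBLIGATIONS = {
    RiskTier.LOW: [
        "basic self-assessment",
        "user training",
        "internal monitoring",
    ],
    RiskTier.MEDIUM: [
        # The paper does not list medium-tier obligations; something
        # between the two extremes is assumed here for illustration.
        "documented impact assessment",
        "internal monitoring and periodic review",
    ],
    RiskTier.HIGH: [
        "peer-reviewed impact assessment",
        "public documentation",
        "meaningful human intervention points",
        "recurring training",
        "external auditing",
    ],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative compliance obligations for a given risk tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value}: {', '.join(obligations_for(tier))}")
```

The point of a structure like this is simply that obligations scale with the assessed risk of the tool, which is the core idea of the Canadian and EU approaches the paper cites.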

There are currently no Australian laws requiring developers to undertake risk assessments for AI, except in NSW, where they apply to government suppliers.

The paper notes that other countries, such as Singapore, have so far taken a voluntary approach, promoting responsible self-governance by industry, and that most nations are developing voluntary standards alongside more stringent requirements.

The government has also left open the option of banning some technologies outright, including certain facial recognition and social scoring tools, such as those used in China.

The paper flagged that Australia would ultimately need to stay close to overseas developments on regulation.

“As a relatively small, open economy, international harmonisation of Australia’s governance framework will be important as it ultimately affects Australia’s ability to take advantage of AI-enabled systems supplied on a global scale and foster the growth of AI in Australia,” the paper states.

Risks, and a multi-trillion-dollar opportunity

The federal government is also concerned about ensuring that any regulation can help to develop the industry and not stifle it.

A major report by the Productivity Commission found AI could add between $1 trillion and $4 trillion to the economy over the next decade, against Australia’s current annual GDP of about $1.5 trillion.

The paper points to an agreement by transport ministers last year to establish a self-driving car framework that would enable the commercial deployment of automated vehicles as an example of regulation that could encourage AI development.

AI technologies are already regularly used in hospitals, legal services, engineering and several other sectors, and most people use AI regularly through voice or navigation tools on their phones.

But recent tools opened to the public have prompted concerns about their potential to spread misinformation, create deceptive or harmful deepfakes, encourage self-harm or racial discrimination, or help people cheat on tests, among many other fears about their impact as people find new uses for them.

With the commercial implications of newer AI technologies only just emerging, the government has also flagged that work across government on the issue will have to continue, with further consideration needed of the consequences for defence, national security, the labour market and intellectual property.

The Australian Information Industry Association, the peak body for the sector, said ahead of the paper’s release that government must involve industry in designing any laws for AI.

“Strict regulation applied in isolation from industry stifles innovation,” AIIA’s chief executive Simon Bush said.

“That’s why we are calling for meaningful participation from both government and industry to establish flexible guardrails as generative AI technologies evolve.”

*Jake Evans is a federal political reporter at Parliament House in Canberra

This article first appeared at abc.net.au
