AI guardrails are coming. Are you ready?

Dale Chapman
Partner
Robert Dooley
Special Counsel
Jason Zhang
Solicitor

AI technologies have the potential to transform our society and economy.  Their development has largely outpaced regulatory reform in recent years, although governments and regulatory bodies the world over are moving toward introducing various regulatory measures.  With AI technologies becoming increasingly accessible, their risks to Australian consumers, businesses and national security have also become more apparent. 

We have seen the risks of AI systems such as ChatGPT play out in the form of “hallucinations” or biases in the underlying data on which the AI model is trained.  More recently, concerns have arisen around deepfakes and fake news being spread intentionally using AI.  In response to these issues, the Australian Government recently published:

  • a proposal for a risk-based framework that features ten mandatory guardrails to regulate high-risk AI (Mandatory Guardrails); and
  • a Voluntary AI Safety Standard (Voluntary Standards) that closely aligns with the Mandatory Guardrails. The Voluntary Standards are in place now and can be applied by all Australian organisations, though, by their nature, compliance with them is voluntary.

Consultation on the Mandatory Guardrails proposal concluded recently and we are awaiting the Government’s response to submissions from the public.  The Mandatory Guardrails are not expected to become law until 2025 at the earliest. 

Mandatory Guardrails

The Mandatory Guardrails consist of the following ten principles, which would apply to organisations developing or deploying high-risk AI systems:

  1. Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance.
  2. Establish and implement a risk management process to identify and mitigate risks.
  3. Protect AI systems and implement data governance measures to manage data quality and provenance.
  4. Test AI models and systems to evaluate model performance and monitor the system once deployed.
  5. Enable human control or intervention in an AI system to achieve meaningful human oversight.
  6. Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content.
  7. Establish processes for people impacted by AI systems to challenge use or outcomes.
  8. Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks.
  9. Keep and maintain records to allow third parties to assess compliance with guardrails.
  10. Undertake conformity assessments to demonstrate and certify compliance with the guardrails.

These Mandatory Guardrails are designed to be flexible and adaptable to emerging AI technologies.  Rather than prescribing specific limits on AI technologies or prohibiting certain AI practices (as is the case in Europe), the Mandatory Guardrails take a principles-based approach that organisations can follow to ensure adequate oversight and human control over AI systems. 

According to the consultation paper, the Government is encouraging AI developers and deployers to actively engage with stakeholders across the AI supply chain and AI lifecycle to identify and address impacts of AI systems on stakeholder groups. 

The exact implementation of the Mandatory Guardrails is still being considered.  The Australian Government intends for the Mandatory Guardrails to complement existing laws, but it has not been decided whether the Mandatory Guardrails will be introduced as:

  • individual amendments to adapt existing laws;
  • framework legislation that defines the Mandatory Guardrails and their application, which is then integrated into existing laws; or
  • a new AI Act which would regulate high-risk AI.

While there are advantages to building on existing, familiar regulatory regimes under the first two options, there is something to be said for a new AI Act that addresses high-risk AI directly under a whole-of-economy approach.  We can expect further updates from the Australian Government on the outcome of the consultation process.

To whom would the Mandatory Guardrails apply?

As proposed, the Mandatory Guardrails would apply to both developers and deployers of AI systems.  It is currently unclear whether the Mandatory Guardrails would apply only to Australian developers and deployers, or whether they would have extraterritorial reach.  With many major AI developers based overseas, particularly the developers of general-purpose AI systems, it may be difficult for Australian deployers of AI technologies to enforce international developers’ compliance with the Mandatory Guardrails.  

End-users of AI systems or services are specifically excluded from the Mandatory Guardrails and will not need to comply with them.  However, end-users must still comply with any obligations under existing laws, such as intellectual property and data protection laws.

Which types of “high-risk AI” would be caught by the Mandatory Guardrails?

The Mandatory Guardrails would apply to “high-risk AI” systems.  In the proposal, the Australian Government contemplates that two categories of AI systems will be considered “high-risk”:

  • AI systems where their use has a high risk of adverse effects on individuals, groups of people, or the broader Australian economy, society, environment and legal system; and
  • general-purpose AI models, being AI models that can perform a wide range of tasks, such as OpenAI’s GPT family of large language models.

Under this approach, organisations would be required to evaluate each AI system individually to determine whether it is considered “high-risk” and, if so, assess how the Mandatory Guardrails would apply to it. 

In the case of general-purpose AI models, this proposed approach disregards how the technology is used in favour of blanket regulation.  It would place a heavy onus on Australian organisations to comply with the Mandatory Guardrails even if the general-purpose AI model were used in low-risk settings.

Prepare for the Mandatory Guardrails now

The Voluntary Standards can be applied today by all Australian organisations involved in the development or deployment of AI technologies.  They serve as a blueprint for organisations to improve their AI governance and encourage responsible AI innovation.

Of the ten Voluntary Standards, the first nine are identical to the first nine Mandatory Guardrails.  The tenth Voluntary Standard calls for stakeholder engagement instead of the compliance certification required under the tenth Mandatory Guardrail.  While voluntary in nature, this parity signals the Australian Government’s expectation that organisations put AI governance principles into practice now. 

If your organisation develops or deploys AI technologies, take time now to assess whether your policies and processes are consistent with the Voluntary Standards.  If you have already adopted AI governance principles, measure them against the Voluntary Standards to confirm that your framework remains fit for purpose.  If you have not, you should carefully consider implementing a set of principles based on the Voluntary Standards.  Early adoption of the Voluntary Standards will help prepare your organisation for the introduction of the Mandatory Guardrails.

With the consultation period now closed, the Mandatory Guardrails may be refined in response to the submissions received.  Organisations should stay abreast of any changes, as updates to the Mandatory Guardrails will likely translate into corresponding amendments to the Voluntary Standards to maintain parity between them. 
