What is Australia’s Voluntary AI Safety Standard?

A Guide to Implementation

Currently Voluntary

Consists Of 10 High-Level Guardrails

Aligned With Other Standards Like The NIST AI RMF & ISO 42001

FAQs About Australia’s Voluntary AI Safety Standard

Steps to Achieve Alignment

High-Level Overview

What are the core concepts of Australia’s Voluntary AI Safety Standard?

The Voluntary AI Safety Standard aims to help organizations build a foundation for responsible AI by focusing on risk management, safety testing, transparency, accountability, and security.

The Standard consists of 10 high-level guardrails that organizations can adopt to create a foundation for safe and responsible AI usage.

Each guardrail is broken into individual actionable steps, providing more concrete guidance to help organizations adopt the Standard.

Although the Standard is currently voluntary, organizations are encouraged to implement all ten guardrails.

The Australian government is considering making the framework mandatory for high-risk AI applications.

The guardrails are aligned with other responsible AI standards, such as the NIST AI RMF, ISO 42001, and the EU AI Act, so adopting the Standard now will help organizations meet future regulatory obligations.

Scope

Who does Australia’s Voluntary AI Safety Standard apply to?

The Standard is meant to be used by any organization that wishes to build a foundation for responsible and secure AI usage.

The Australian government is collecting feedback and will consider making the Standard mandatory for high-risk applications.

What high-risk AI applications are being considered for mandatory regulation?

The government’s proposed definition of “high-risk AI” includes:
– biometrics
– critical infrastructure
– education and training
– employment
– access to essential public services and products
– access to essential private services
– products affecting public health and safety
– law enforcement
– administration of justice

This list closely resembles other regulations’ definitions of high-risk AI and aligns especially with that of the EU AI Act.

Alignment Requirements

What are the requirements of Australia’s Voluntary AI Safety Standard?

The Standard consists of ten guardrails, each organized into multiple actionable controls that give more concrete guidance on how best to implement it.

Some requirements apply to specific AI systems, and some apply to the organization as a whole.

The ten guardrails include:

  1. Establish a governance program with appropriate accountability
  2. Establish an AI risk management program
  3. Apply security and data governance measures
  4. Test AI thoroughly before deployment and monitor it after
  5. Enable appropriate human oversight throughout the AI lifecycle
  6. Provide end users with transparency around the use of AI
  7. Provide end users with a way to challenge the usage or outcomes of AI
  8. Disclose information to other actors in the AI supply chain
  9. Maintain records that allow third parties to assess compliance with the Standard
  10. Solicit stakeholder feedback

The Standard also provides guidance on procuring AI, helping organizations manage the risks of acquired AI that they did not build themselves.

Non-Compliance Penalties

What are the non-compliance penalties associated with Australia’s Voluntary AI Safety Standard?

There are no non-compliance penalties, as the Standard is currently voluntary.

Status

When was Australia’s Voluntary AI Safety Standard released?

The Standard was released on September 4th, 2024.

It is currently voluntary, but the Australian government has signaled that the framework may become mandatory in high-risk cases.

What are the next steps for Australia’s Voluntary AI Safety Standard?

The Australian government is soliciting feedback until October 4th, 2024, on:

  1. the proposed guardrails
  2. the proposed definition of “high-risk AI”
  3. the regulatory options for mandating the adoption of the Standard

Steps To Alignment

How can organizations ensure alignment with Australia’s Voluntary AI Safety Standard?

Australia’s Voluntary AI Safety Standard defines ten guardrails; implementing each of them, as detailed below, is how organizations achieve alignment.

1. Establish Governance and Accountability: Develop a clear governance framework that outlines ownership of AI use, a strategy for AI deployment, and the organization’s internal capabilities. This must include the designation of roles and responsibilities, training initiatives, and an overall strategy for regulatory compliance.
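
To make this concrete, a minimal Python sketch of a governance registry that maps each AI system to its accountable roles might look like the following; the system name, role titles, and fields are illustrative assumptions, not terms defined by the Standard.

```python
# A minimal sketch of a governance registry mapping each AI system to its
# accountable roles; all names and fields below are illustrative assumptions.
governance_registry = {
    "resume-screener-v2": {  # hypothetical system
        "accountable_executive": "Chief Risk Officer",
        "system_owner": "HR Technology Lead",
        "oversight_body": "AI Governance Committee",
        "required_training": ["responsible AI fundamentals"],
        "compliance_strategy": "align with the Voluntary AI Safety Standard",
    },
}


def accountable_owner(system: str) -> str:
    """Fail loudly if a deployed system has no designated accountable owner."""
    return governance_registry[system]["accountable_executive"]


print(accountable_owner("resume-screener-v2"))
```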

2. Implement Risk Management: Implement a dynamic AI risk management process that regularly assesses the potential harms of AI systems. This should be based on a comprehensive risk and stakeholder impact assessment and must include mechanisms for ongoing monitoring to ensure that risk mitigation efforts are effective over time.
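
A simple risk register can anchor this process. The sketch below assumes a likelihood-times-impact scoring model; the fields, scales, and review threshold are illustrative, not prescribed by the Standard.

```python
# A minimal AI risk register sketch; the scoring model and thresholds are
# illustrative assumptions, not requirements of the Standard.
from dataclasses import dataclass
from datetime import date


@dataclass
class AIRiskEntry:
    system_name: str
    harm_description: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)
    mitigation: str
    next_review: date

    @property
    def risk_score(self) -> int:
        # Simple multiplicative scoring; organizations may use their own matrix.
        return self.likelihood * self.impact


register = [
    AIRiskEntry(
        system_name="resume-screener-v2",  # hypothetical system
        harm_description="Systematic under-ranking of a demographic group",
        likelihood=3,
        impact=4,
        mitigation="Quarterly bias audit; human review of all rejections",
        next_review=date(2025, 1, 15),
    ),
]

# Surface high-risk entries and overdue reviews for ongoing monitoring.
for entry in register:
    if entry.risk_score >= 12 or date.today() >= entry.next_review:
        print(f"{entry.system_name}: score={entry.risk_score}, review due {entry.next_review}")
```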

3. Data Governance and System Security: Safeguard AI systems with robust data governance and cybersecurity measures that focus on data quality, provenance, and the unique risks posed by AI technologies. This includes ensuring the integrity and security of AI data pipelines.
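
One concrete building block here is verifiable data provenance. The sketch below hashes dataset files and appends an integrity record to a manifest so later pipeline stages can verify what they received; the manifest format and fields are illustrative assumptions.

```python
# A sketch of dataset provenance recording; the manifest layout is an
# illustrative assumption.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large datasets don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def record_provenance(dataset: Path, source: str, manifest: Path) -> None:
    """Append an integrity record so downstream stages can verify the data."""
    entry = {
        "file": str(dataset),
        "sha256": sha256_of(dataset),
        "source": source,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    records = json.loads(manifest.read_text()) if manifest.exists() else []
    records.append(entry)
    manifest.write_text(json.dumps(records, indent=2))
```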

4. Model Testing and Performance Monitoring: Conduct rigorous pre-deployment testing of AI models, ensuring that systems meet established acceptance criteria based on risk and impact assessments. Continuous post-deployment monitoring should be implemented to track system performance and detect any unintended outcomes or behavior changes.
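
In practice this can be enforced with an acceptance gate before deployment and a drift check after. In the sketch below, the metric names and thresholds are illustrative; real criteria should come out of the risk and impact assessment.

```python
# A sketch of a pre-deployment acceptance gate and a post-deployment drift
# check; metric names and thresholds are illustrative assumptions.
def acceptance_gate(observed: dict[str, float],
                    criteria: dict[str, float]) -> bool:
    """Block deployment unless every metric meets its agreed minimum."""
    return all(observed.get(name, 0.0) >= floor
               for name, floor in criteria.items())


def drift_alert(baseline_rate: float, live_rate: float,
                tolerance: float = 0.05) -> bool:
    """Flag when the live positive-prediction rate strays from the baseline."""
    return abs(live_rate - baseline_rate) > tolerance


# Acceptance criteria agreed during the risk and impact assessment.
criteria = {"accuracy": 0.90, "recall": 0.85}
observed = {"accuracy": 0.93, "recall": 0.88}
assert acceptance_gate(observed, criteria)

if drift_alert(baseline_rate=0.30, live_rate=0.38):
    print("Drift detected: trigger re-testing and human review")
```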

5. Human Oversight: Implement meaningful human oversight mechanisms throughout the AI lifecycle. Human operators should be able to intervene in AI systems to address potential errors or unintended consequences, especially when AI interacts with critical decision-making processes.
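
A common pattern is a human-in-the-loop gate that escalates uncertain or high-stakes decisions instead of applying them automatically. The sketch below assumes the model exposes a confidence score; the 0.8 threshold and the review queue are illustrative.

```python
# A minimal human-in-the-loop gate; the confidence threshold is an
# illustrative assumption.
from queue import Queue

review_queue: Queue = Queue()


def decide(case_id: str, prediction: str, confidence: float,
           high_stakes: bool) -> str:
    """Auto-apply only confident, low-stakes decisions; escalate the rest."""
    if high_stakes or confidence < 0.8:
        review_queue.put((case_id, prediction, confidence))
        return "escalated_to_human"
    return prediction


print(decide("case-17", "approve", confidence=0.92, high_stakes=False))
print(decide("case-18", "decline", confidence=0.65, high_stakes=True))
```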

6. Transparency and User Disclosure: Ensure that users and impacted parties are informed about AI-enabled decisions, interactions, and the generation of AI-based content. Transparency is crucial for building trust and confidence in AI systems, and disclosure should be tailored to the use case and audience.
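
One lightweight implementation is a machine-readable disclosure attached to every AI-produced response. In the sketch below, the field names and the contest URL are hypothetical.

```python
# A sketch of attaching an AI disclosure to a decision payload; field names
# and the contest URL are hypothetical.
def with_disclosure(decision: dict, system_name: str) -> dict:
    """Tag the response so end users know an AI system produced it."""
    decision["ai_disclosure"] = {
        "ai_generated": True,
        "system": system_name,
        "human_reviewed": decision.get("human_reviewed", False),
        "how_to_contest": "https://example.com/ai-decisions/contest",  # hypothetical
    }
    return decision


response = with_disclosure({"outcome": "application_declined"}, "credit-scorer-v3")
print(response["ai_disclosure"])
```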

7. Challenges and Dispute Resolution: Establish mechanisms that allow individuals and stakeholders affected by AI systems to challenge or contest AI-driven decisions and interactions. This will promote fairness and allow for corrective actions when necessary.
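
A minimal dispute-intake record might look like the sketch below; the status workflow and field names are illustrative assumptions.

```python
# A minimal dispute record for contesting an AI-driven decision; the status
# workflow is an illustrative assumption.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Dispute:
    case_id: str
    decision_id: str
    reason: str
    status: str = "open"  # open -> under_review -> resolved
    opened_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    resolution: str = ""

    def resolve(self, outcome: str) -> None:
        """Record the corrective action taken, closing the loop with the user."""
        self.status = "resolved"
        self.resolution = outcome


ticket = Dispute("D-1042", "decision-778", "Applicant disputes automated rejection")
ticket.resolve("Decision overturned after human review")
print(ticket.status, ticket.resolution)
```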

8. Supply Chain Transparency: Share critical information with other organizations within the AI supply chain, ensuring they understand the components, data, models, and risks involved in AI systems. This transparency facilitates collective risk management across the supply chain.
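
In practice, a provider might share a structured disclosure record with deployers, loosely modelled on model-card practice; every field and value below is illustrative.

```python
# A sketch of a supply-chain disclosure record shared with downstream
# deployers; all fields and values are illustrative assumptions.
import json

supply_chain_record = {
    "model": "credit-scorer-v3",  # hypothetical system
    "version": "3.1.0",
    "training_data_sources": ["internal loan history 2015-2023"],
    "third_party_components": ["open-source gradient boosting library"],
    "known_limitations": ["not validated for applicants under 21"],
    "residual_risks": ["possible proxy discrimination via postcode features"],
    "contact": "ai-governance@provider.example",
}

print(json.dumps(supply_chain_record, indent=2))
```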

9. Record-Keeping for Compliance: Maintain comprehensive records of AI activities, including an AI inventory and documentation of system behavior. These records should demonstrate compliance with the Voluntary AI Safety Standard and facilitate audits or third-party evaluations.
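
An append-only audit log is one straightforward way to make AI activity reconstructible for auditors and third parties. The JSON-lines format and event names below are illustrative choices.

```python
# A sketch of an append-only audit log for an AI inventory; the JSON-lines
# format and event names are illustrative assumptions.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("ai_audit_log.jsonl")


def log_event(system: str, event: str, detail: str) -> None:
    """Append a timestamped record so system history can be reconstructed."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "event": event,
        "detail": detail,
    }
    with LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")


log_event("resume-screener-v2", "pre_deployment_test", "accuracy=0.93 recall=0.88")
log_event("resume-screener-v2", "deployed", "version 2.4.1 to production")
```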

10. Stakeholder Engagement: Engage stakeholders regularly to evaluate their needs, especially concerning safety, fairness, diversity, and inclusion. Organizations must continuously assess potential biases and ethical risks and strive to mitigate negative impacts on various demographic groups.
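
Bias assessment can start with a simple selection-rate comparison across groups. The sketch below implements a demographic parity check; the four-fifths (0.8) ratio is a common heuristic, not a requirement of the Standard.

```python
# A demographic parity sketch; the 0.8 ratio heuristic is an illustrative
# convention, not a threshold set by the Standard.
from collections import defaultdict


def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes is a list of (group, selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, picked in outcomes:
        totals[group] += 1
        selected[group] += int(picked)
    return {g: selected[g] / totals[g] for g in totals}


def parity_flag(rates: dict[str, float], min_ratio: float = 0.8) -> bool:
    """Flag if any group's rate falls below min_ratio of the highest rate."""
    top = max(rates.values())
    if top == 0:
        return False  # nothing selected anywhere; nothing to compare
    return any(rate / top < min_ratio for rate in rates.values())


rates = selection_rates([("A", True), ("A", True), ("A", False),
                         ("B", True), ("B", False), ("B", False)])
if parity_flag(rates):
    print(f"Potential disparate impact: {rates}")
```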

As always, staying informed and engaged will be key to achieving alignment.

AI Compliance Tools

How FairNow’s AI Governance Platform Helps With Australia’s Voluntary AI Safety Standard 

Developed by specialists in highly regulated industries and compliance, FairNow’s AI Governance Platform is tailored to tackle the unique challenges of AI risk management.

FairNow provides a complete toolkit to satisfy all ten of the Standard’s guardrails. The FairNow AI Governance Platform was built to help organizations create:

  • Streamlined compliance processes that reduce reporting times
  • Centralized AI inventory management with intelligent risk assessments
  • Clear accountability frameworks with integrated human oversight
  • Ongoing testing and monitoring tools for continuous performance evaluation
  • Efficient regulatory tracking and comprehensive compliance documentation

FairNow enables organizations to ensure transparency, reliability, and unbiased AI usage, all while simplifying their compliance journey.

Experience how our industry-informed platform can simplify AI governance.

Book a free demo here.

AI compliance doesn't have to be so complicated.

Use FairNow's AI governance platform to:

  • Effortlessly ensure your AI is in harmony with both current and upcoming regulations
  • Ensure that your AI is fair and reliable using our proprietary testing suite
  • Stay ahead of compliance requirements before fees and fines become commonplace

Request A Demo

Explore the leading AI governance platform.