What is AI Verify?

An Overview & Guide to Implementation

Currently Voluntary

Focuses on Generative AI Governance

Aligned With Other Standards Like the NIST AI RMF

FAQs About AI Verify

Steps to Achieve Alignment

High-Level Overview

What are the core concepts of the AI Verify Standard?

The AI Verify Standard, published by Singapore’s AI Verify Foundation, aims to foster a trusted ecosystem for the development and use of generative AI.

The framework promotes responsible AI development and use by addressing the unique challenges posed by generative AI technologies.

It takes a comprehensive view of AI governance, spanning nine dimensions from accountability to AI for the public good, and recognizes the importance of collaboration and expertise pooling across the AI ecosystem.

Scope

Who does AI Verify apply to?

AI Verify is voluntary and recommended for any organization seeking to enhance its governance and accountability practices around generative AI technologies.

Alignment Requirements

What are the requirements of AI Verify?

  • Accountability: Companies are expected to establish clear lines of responsibility and accountability throughout the AI development and deployment process. This includes identifying who is responsible for different aspects of AI systems, from design to implementation, and ensuring that these responsibilities are met ethically and legally.
  • Data: Companies should ensure the quality and integrity of the data used in AI systems. This involves using reliable and unbiased data sources, respecting data privacy laws, and ensuring that data handling practices are transparent and ethical.
  • Trusted Development and Deployment: Companies are expected to follow best practices in the development and deployment of AI technologies. This includes being transparent about how AI systems work, ensuring that they are reliable, and communicating clearly about the capabilities and limitations of these systems. The framework also recognizes the importance of moving towards a comprehensive and systematic approach to safety evaluations, which is needed to make safety results comparable across different AI systems.
  • Incident Reporting: Companies should have robust processes in place for reporting and managing AI-related incidents of all kinds. Users should be able to report errors to the developer and deployer of the AI system, and organizations should establish processes for monitoring AI systems for unexpected behavior, reporting incidents promptly, and taking corrective action when necessary (a minimal incident-record sketch follows this list).
  • Testing and Assurance: Companies are expected to engage in rigorous testing and assurance practices to verify the safety and effectiveness of AI systems. This might involve third-party testing and adherence to industry standards to ensure that AI systems perform as intended and do not pose undue risks.
  • Security: Companies must address the unique security challenges posed by AI, including protecting AI systems from malicious use and ensuring that AI does not introduce new vulnerabilities into existing systems. The framework recommends a “security-by-design” posture that makes the security risks a consideration in every stage of the AI development lifecycle.
  • Content Provenance: Companies should be transparent about the origin of AI-generated content and implement measures to prevent the spread of misinformation. This could involve using technologies such as digital watermarking to verify the authenticity of AI-generated media (a provenance-tagging sketch also follows this list).
  • Safety and Alignment R&D: Companies are encouraged to invest in research and development to improve the alignment of AI systems with human values and intentions, ensuring that AI behaves in ways that are beneficial and non-harmful.
  • AI for Public Good: Companies are encouraged to use AI in ways that benefit society, such as addressing social challenges, improving accessibility and promoting ethical AI use.
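
To make the incident-reporting expectation concrete, the sketch below shows one way a deployer might capture and triage user-reported issues. It is a minimal illustration, not part of AI Verify: the record fields, severity labels, and escalation rule are all assumptions.

```python
# Minimal, hypothetical incident record and intake helper (illustrative only).
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class AIIncident:
    system_name: str          # which AI system the report concerns
    reported_by: str          # end user, deployer staff, or automated monitor
    description: str          # the unexpected behavior that was observed
    severity: str             # assumed labels: "low", "medium", "high"
    reported_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    corrective_actions: List[str] = field(default_factory=list)

INCIDENT_LOG: List[AIIncident] = []

def log_incident(incident: AIIncident) -> None:
    """Record the incident and flag high-severity cases for escalation."""
    INCIDENT_LOG.append(incident)
    if incident.severity == "high":
        # In practice this would notify the accountable owner identified
        # under the framework's accountability dimension.
        print(f"ESCALATE: {incident.system_name} - {incident.description}")

if __name__ == "__main__":
    log_incident(AIIncident(
        system_name="support-chatbot",
        reported_by="end-user",
        description="Model stated an incorrect refund policy",
        severity="high",
    ))
```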
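
Likewise, the sketch below illustrates one simple way to tag AI-generated content with provenance metadata and verify it later. The manifest fields are hypothetical; a production system would more likely adopt an open standard such as C2PA or a vendor watermarking scheme.

```python
# Hypothetical provenance manifest for AI-generated text (illustrative only).
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_manifest(content: str, model_name: str) -> dict:
    """Return metadata that lets downstream consumers verify the content's origin."""
    return {
        "generator": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        "ai_generated": True,
    }

def verify_manifest(content: str, manifest: dict) -> bool:
    """Check that the content still matches the hash recorded at generation time."""
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
    return digest == manifest["content_sha256"]

if __name__ == "__main__":
    text = "Draft press release generated by a language model."
    manifest = build_provenance_manifest(text, model_name="example-llm-v1")
    print(json.dumps(manifest, indent=2))
    print("verified:", verify_manifest(text, manifest))
```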

Non-Compliance Penalties

What are the non-compliance penalties associated with AI Verify?

There are no non-compliance penalties, as AI Verify is currently a voluntary standard.

Status

When was AI Verify released?

AI Verify was launched in June 2022 by Singapore’s Infocomm Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC). The AI Verify Foundation, which now manages the standard, was established in 2023 to promote and govern the AI Verify framework. The foundation focuses on fostering trust in AI through the testing and validation of AI systems.

AI Verify is voluntary, but adoption of the framework is encouraged for companies seeking to enhance their AI governance practices.

The Standard reflects a proactive effort to address the evolving challenges posed by generative AI technologies.

Steps To Alignment

How can organizations ensure alignment with AI Verify?

The framework lays out expectations across the following pillars:

1. Establish Governance and Accountability: Develop a clear governance framework that outlines ownership of AI use, a strategy for AI deployment, and the organization’s internal capabilities. This must include the designation of roles and responsibilities, training initiatives, and an overall strategy for regulatory compliance.

2. Implement Risk Management: Implement a dynamic AI risk management process that regularly assesses the potential harms of AI systems. This should be based on a comprehensive risk and stakeholder impact assessment and must include mechanisms for ongoing monitoring to ensure that risk mitigation efforts are effective over time.

3. Data Governance and System Security: Safeguard AI systems with robust data governance and cybersecurity measures that focus on data quality, provenance, and the unique risks posed by AI technologies. This includes ensuring the integrity and security of AI data pipelines.

4. Model Testing and Performance Monitoring: Conduct rigorous pre-deployment testing of AI models, ensuring that systems meet established acceptance criteria based on risk and impact assessments. Continuous post-deployment monitoring should be implemented to track system performance and detect any unintended outcomes or behavior changes (a monitoring sketch follows this list).

5. Human Oversight: Implement meaningful human oversight mechanisms throughout the AI lifecycle. Human operators should be able to intervene in AI systems to address potential errors or unintended consequences, especially when AI interacts with critical decision-making processes.

6. Transparency and User Disclosure: Ensure that users and impacted parties are informed about AI-enabled decisions, interactions, and the generation of AI-based content. Transparency is crucial for building trust and confidence in AI systems, and disclosure should be tailored to the use case and audience.

7. Challenges and Dispute Resolution: Establish mechanisms that allow individuals and stakeholders affected by AI systems to challenge or contest AI-driven decisions and interactions. This will promote fairness and allow for corrective actions when necessary.

8. Supply Chain Transparency: Share critical information with other organizations within the AI supply chain, ensuring they understand the components, data, models, and risks involved in AI systems. This transparency facilitates collective risk management across the supply chain.

9. Record-Keeping for Compliance: Maintain comprehensive records of AI activities, including an AI inventory and documentation of system behavior. These records should demonstrate alignment with the Standard and facilitate audits or third-party evaluations (an illustrative inventory entry also follows this list).

10. Stakeholder Engagement: Engage stakeholders regularly to evaluate their needs, especially concerning safety, fairness, diversity, and inclusion. Organizations must continuously assess potential biases and ethical risks and strive to mitigate negative impacts on various demographic groups.
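
As a concrete illustration of step 4, the sketch below compares post-deployment metrics against acceptance criteria agreed during pre-deployment testing. The metric names and thresholds are assumptions for illustration, not values prescribed by the framework.

```python
# Hypothetical post-deployment health check (illustrative thresholds).
ACCEPTANCE_CRITERIA = {
    "accuracy": 0.90,       # minimum acceptable accuracy
    "toxicity_rate": 0.01,  # maximum acceptable rate of flagged outputs
}

def check_deployment_health(metrics: dict) -> list:
    """Return the breached criteria that should trigger a review."""
    breaches = []
    if metrics.get("accuracy", 1.0) < ACCEPTANCE_CRITERIA["accuracy"]:
        breaches.append("accuracy below acceptance threshold")
    if metrics.get("toxicity_rate", 0.0) > ACCEPTANCE_CRITERIA["toxicity_rate"]:
        breaches.append("toxicity rate above acceptance threshold")
    return breaches

if __name__ == "__main__":
    weekly_metrics = {"accuracy": 0.87, "toxicity_rate": 0.02}
    for breach in check_deployment_health(weekly_metrics):
        print("ALERT:", breach)
```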
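
For step 9, an AI inventory entry might look like the following. The field names and review rule are hypothetical and would be adapted to each organization's own documentation and audit needs.

```python
# Hypothetical AI inventory entry used for record-keeping (illustrative fields).
from dataclasses import dataclass, field
from typing import List

@dataclass
class AIInventoryEntry:
    system_name: str
    owner: str                      # accountable individual or team
    purpose: str                    # business use case
    risk_tier: str                  # e.g. "low", "medium", "high"
    data_sources: List[str] = field(default_factory=list)
    last_assessment_date: str = ""  # ISO date of the most recent review

AI_INVENTORY: List[AIInventoryEntry] = [
    AIInventoryEntry(
        system_name="resume-screening-model",
        owner="talent-analytics-team",
        purpose="Rank inbound applications for recruiter review",
        risk_tier="high",
        data_sources=["ATS exports", "job descriptions"],
        last_assessment_date="2024-11-01",
    ),
]

def overdue_for_review(entry: AIInventoryEntry, cutoff: str) -> bool:
    """Flag entries whose last documented assessment predates the cutoff date."""
    return entry.last_assessment_date < cutoff  # ISO dates compare lexicographically

if __name__ == "__main__":
    for entry in AI_INVENTORY:
        if overdue_for_review(entry, cutoff="2025-01-01"):
            print("Review needed:", entry.system_name)
```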

As always, staying informed and engaged will be key to achieving alignment.

AI Compliance Tools

How FairNow’s AI Governance Platform Helps With the AI Verify Standard

Developed by specialists in compliance and highly regulated industries, FairNow’s AI Governance Platform is tailored to tackle the unique challenges of AI risk management.

FairNow provides a complete toolkit to satisfy the Standard’s expectations. The FairNow AI Governance Platform was built to help organizations create:

  • Streamlined compliance processes that reduce reporting times
  • Centralized AI inventory management with intelligent risk assessments
  • Clear accountability frameworks with integrated human oversight
  • Ongoing testing and monitoring tools for continuous performance evaluation
  • Efficient regulatory tracking and comprehensive compliance documentation

FairNow enables organizations to ensure transparency, reliability, and unbiased AI usage, all while simplifying their compliance journey.

Experience how our industry-informed platform can simplify AI governance.

Book a free demo here.
