Canada’s Artificial Intelligence and Data Act


Section 1 - Description


Section 2 - Scope


Section 3 - Requirements


Section 4 - Penalties


Section 5 - Status

Section 1


The Artificial Intelligence and Data Act (AIDA) aims to set out clear requirements for the responsible development, deployment and use of AI systems by the private sector. Under the law, businesses developing or making use of “high-impact” AI will face the strictest obligations, and will notably be accountable for ensuring that employees implement mechanisms to mitigate the risks of such systems. The Act covers actors at every stage of the AI life cycle, from development through ongoing use, and each actor’s level of accountability will correspond to that actor’s overall influence on the AI system as a whole.

The AIDA aims to achieve two goals:

  1. Ensure that high-impact AI systems meet expectations with regard to safety and human rights 
  2. Prohibit reckless and malicious use of AI

This is the first legislation of its kind, as no such requirements are in place for users and developers of AI in Canada today. The Act is part of a larger bill, the Digital Charter Implementation Act (Bill C-27), which also contains the Consumer Privacy Protection Act and the Personal Information and Data Protection Tribunal Act.

While many details of the bill have not been finalized yet, its authors have stated a desire to align with international norms, so we should expect these details to follow other significant legislation, such as the European Union’s AI Act.

Section 2


The Act applies to businesses that develop, use, or manage the operations of “high-impact” AI systems. The term has not yet been formally defined, but a list of criteria has been laid out, including the severity of potential harms, the scale of use, the ease of opting out, and more.

While “high-impact” AI systems have not been defined yet, the Canadian government has listed several types of systems that are of interest: AI used in employment decisions, biometric systems used for identification and inference, systems that influence human behavior at scale, and systems critical to health and safety. These categories correspond to a subset of those noted by the EU in its proposed AI Act, so it’s likely that Canada and the EU will have similar perspectives on which categories of systems are in scope for regulation.

Any AI system that poses serious harm to Canadians or their interests will be prohibited, but the Act does not yet make clear exactly how such systems will be defined.

Section 3


The requirements for high-impact AI systems have not been defined yet, but they will focus on several key pillars:

    1. Human oversight & monitoring: AI systems should be designed to accommodate human oversight and provide interpretability into their operations.
    2. Transparency: the public should be informed of how high-impact AI systems are used.
    3. Fairness and equity: high-impact AI systems should be checked for discriminatory outcomes, and steps should be taken to remediate such outcomes if found.
    4. Safety: high-impact AI systems should be proactively assessed to identify potential harms, and such harms should be remediated.
    5. Accountability: organizations must enforce governance mechanisms to ensure all legal and compliance obligations are met.
    6. Validity and robustness: high-impact AI systems should perform their intended objectives well and should be reliable in a variety of circumstances.

The law is expected to take a few years to flesh out, so the exact scope of the regulations has not been defined yet. In the meantime, the Canadian government has released a voluntary Code of Practice for generative AI that aims to ensure the safety and trustworthiness of the technology by identifying and managing risks, mitigating bias through curation of training data and fine-tuning of models, providing transparency about how models were trained, and creating human oversight mechanisms.


Section 4


Enforcement of the Act will consist of three mechanisms:

  1. Monetary penalties in response to any violation of the Act
  2. Regulatory offences prosecuted for more serious violations of the Act
  3. Criminal charges in cases of knowing or deliberate behavior resulting in serious harm

Section 5


The bill has not been passed yet. Based on timelines given recently (covering consultations, drafting and refinement of the regulations), it’s unlikely that AIDA would enter enforcement before 2025, and it may not enter full enforcement until 2026 or 2027.
