- EU AI Act (February 2, 2025)
- California Consumer Privacy Act AB 1008 (January 1, 2025)
- Utah AI Policy Act (May 1, 2024)
- NYC Local Law 144 (July 2023)
- Connecticut SB 1103 (June 2023)
- Colorado SB 21-169 on Insurance (July 2021)
- Maryland HB 1202 on Facial Recognition (May 2020)
- Illinois Artificial Intelligence Video Interview Act (AIVIA) (January 2020)
Stay Informed: AI Regulation Guides
Get the latest on AI laws, regulations, and industry standards for responsible AI. Sign up today to receive timely alerts and expert insights on evolving AI requirements.
AI Regulatory Compliance FAQs
Why is AI regulation needed?
AI regulation is needed to ensure AI is developed and used responsibly. Without clear rules, AI systems can pose risks ranging from bias and discrimination to privacy violations, safety hazards, and a lack of accountability.
One analogy to draw from is cars: cars bring huge benefits, but we have rules requiring carmakers to build safe, reliable vehicles and drivers to operate them safely (traffic laws, seatbelt requirements). Similarly, AI regulation sets guardrails to protect individuals and society, requiring organizations (both developers and deployers) to manage risks, provide transparency, and keep humans in control of critical decisions. Well-designed rules also help build trust in AI and create a level playing field for businesses using it responsibly.
What is the impact of AI regulation on innovation?
Done right, AI regulation can actually boost innovation by building trust. To many, AI is a novel technology that’s challenging to understand. Proper AI governance measures can give organizations assurance that AI is built, used, and overseen in a way that keeps its risks understood and appropriately managed.
At the same time, overly strict or confusing rules can slow down experimentation, especially for smaller companies. The key is striking the right balance – effective AI regulation protects people and strengthens trust, while still allowing businesses to innovate and compete.
What are the common themes and trends that we are seeing in AI regulations globally?
The specifics can vary, but AI regulations are converging towards a set of common themes:
- Risk management: Organizations are expected to systematically identify, measure, and mitigate the risks posed by the AI systems they build and use.
- Transparency: Giving stakeholders sufficient insight into AI – what it does, what data it uses, and what rights affected users have.
- Human oversight: Keeping humans in the loop on critical decisions.
- Accountability and documentation: Organizations must assume, and be able to demonstrate, accountability for their AI.
- Vendor risk: Ensuring AI is used responsibly even when it is supplied by a third party.
- Data governance: Many laws and standards treat data quality, provenance, and governance as central to responsible AI.
Who do AI regulations typically apply to? (e.g., developers, deployers, or both?)
AI regulations increasingly apply to both developers (i.e., builders) and deployers (i.e., users) of AI, but they may face different requirements.
Developers are often tasked with:
- Conducting impact or risk assessments to identify and evaluate the effects a model could have
- Testing for bias, safety, robustness, and accuracy
- Documenting model training, data, and assumptions
- Providing users with documentation that covers proper usage
Deployers are often tasked with:
- Using AI in accordance with developer instructions
- Ensuring proper oversight over the system
- Assessing and mitigating discrimination and legal risks
- Monitoring system performance post-deployment (possibly including bias audits)
- Maintaining audit trails and records
What happens if my company violates these AI regulations?
Fines vary by regulation – but they can be large. Violations of the EU AI Act can bring fines of up to €35 million or 7% of a company’s worldwide annual turnover, whichever is higher.
Beyond fines, violations can result in lawsuits, reputational damage, and bans on system deployment.
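To make the penalty formula concrete, here’s a minimal sketch in Python. It assumes the EU AI Act’s top penalty tier (up to €35 million or 7% of worldwide annual turnover, whichever is higher); the function name and structure are ours, purely for illustration:

```python
# Illustrative sketch only, not legal advice. Models the EU AI Act's top
# penalty tier: up to EUR 35 million or 7% of worldwide annual turnover,
# whichever is higher. The helper name is ours.

def max_eu_ai_act_fine(annual_turnover_eur: float) -> float:
    """Upper bound of the fine for the most serious violations."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# Example: a company with EUR 2 billion in annual turnover
print(f"EUR {max_eu_ai_act_fine(2_000_000_000):,.0f}")  # EUR 140,000,000
```

For smaller companies the €35 million floor dominates; for large ones the turnover percentage does, which is why headline fines scale with company size.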
What constitutes "high-risk" AI systems under these regulations?
Existing and pending legislation often puts heavy scrutiny on AI use cases that can impact people’s rights, safety, or access to important services. While definitions vary slightly across jurisdictions, common examples include AI used in:
- Hiring and employment (e.g., resume screening, promotion decisions)
- Credit and lending (e.g., credit scoring, loan approvals)
- Healthcare (e.g., diagnostic tools, treatment recommendations)
- Essential government services (e.g., benefits eligibility, immigration decisions)
Regulations like the EU AI Act and the Colorado AI Act use risk-based approaches. They apply stricter requirements (like impact assessments, documentation, and oversight) to high-risk use cases. Even if a system doesn’t fully automate decisions, it may still be considered high-risk if it materially influences important outcomes.
Can chatbots fall under AI regulations?
Yes. In many cases, chatbots and simple automation tools can fall under AI regulations, depending on what they do and how they impact people.
If a chatbot is just answering FAQs, it may be considered low risk and subject to less stringent requirements. But if it does any of the following, it likely falls within regulatory scope:
- Influences decisions about people (e.g., screening job applicants, determining eligibility)
- Collects or processes personal or sensitive data, especially in health, finance, or employment
- Is designed to imitate humans in ways that may mislead users
Regulations like the EU AI Act and the Utah AI Policy Act include transparency requirements for conversational AI, meaning users must be told when they’re interacting with an AI system. So even “simple” tools may need to comply, especially if they shape outcomes or affect user rights.
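As a thought experiment, the scoping criteria above can be expressed as a simple checklist. The sketch below is a hypothetical Python triage helper – the field names and logic are ours, and a real scoping decision should involve legal counsel:

```python
# Hypothetical triage helper, illustrative only. Flags whether a chatbot is
# likely to fall within regulatory scope based on the criteria above.
from dataclasses import dataclass

@dataclass
class ChatbotProfile:
    influences_decisions_about_people: bool  # e.g., screens job applicants
    processes_sensitive_data: bool           # e.g., health, finance, employment
    imitates_humans: bool                    # could mislead users about being AI

def likely_in_scope(bot: ChatbotProfile) -> bool:
    """Any single criterion is enough to warrant a closer compliance review."""
    return (bot.influences_decisions_about_people
            or bot.processes_sensitive_data
            or bot.imitates_humans)

faq_bot = ChatbotProfile(False, False, False)
screening_bot = ChatbotProfile(True, True, False)
print(likely_in_scope(faq_bot))        # False: low risk, lighter requirements
print(likely_in_scope(screening_bot))  # True: likely subject to stricter rules
```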
What is the current state of AI regulation in the US?
The US does not yet have a single, comprehensive AI law like the EU AI Act. Instead, AI is governed through a mixture of federal, state, and local laws, sector-specific regulations, and voluntary frameworks.
At the federal level, agencies like the FTC, EEOC, and CFPB are applying existing laws to AI-related risks such as discrimination, unfair practices, and privacy violations. Several states, including California and Colorado, have introduced AI-specific rules, especially around consumer protections, transparency, and automated decision-making. In the US, the regulatory burden falls more on the deployers of AI, whereas the EU’s rules place it more heavily on developers.
What is the current state of AI regulation globally?
Many countries are moving quickly to regulate AI, but the specifics vary. The EU is leading the way with the EU AI Act, which imposes strict requirements on high-risk AI systems and bans certain uses altogether. Through the Brussels effect, other countries like Canada and Brazil are developing laws with similar scopes and requirements.
In the UK, AI oversight is guided by sector regulators and voluntary principles rather than a single law. China has already put binding rules in place for specific AI uses, like deepfakes and recommendation algorithms.
Globally, some common trends are emerging: developers and deployers alike are expected to focus on risk management, transparency, human oversight, and accountability, and AI uses that affect people’s rights or safety receive additional scrutiny.
What roles will standards like ISO 42001 and NIST AI RMF play?
Many organizations will want to adopt AI governance frameworks even when it’s not a legal requirement. They may find stakeholders and customers are asking about – and expecting – sound AI governance practices, and AI governance standards offer clear, widely accepted best practices for AI risk management, transparency, and oversight.
Adopting these standards can also help build trust with customers, partners, and regulators by showing that an organization is following recognized guidelines. In some cases, they may even become mandatory if adopted into contracts or required by future laws.
How can I stay informed about new AI regulations that might affect my business?
Subscribe to our AI Governance Newsletter on LinkedIn: https://www.linkedin.com/build-relation/newsletter-follow?entityUrn=7339297409803395073