What is DC’s “Stop Discrimination by Algorithms” Act? A Detailed Guide

Covers "Important Life Opportunities" (Employment, Housing & More)

Accountability for Decision-Making AI & Machine Learning

Mandatory Consumer Disclosures

Mandates Annual Bias Assessments

FAQs About DC’s “Stop Discrimination by Algorithms” Act

Steps to Achieve Compliance

Stop Discrimination By Algorithms Act

High-Level Summary

Washington DC’s “Stop Discrimination by Algorithms” Act would prohibit individuals and organizations from using biased algorithms and require them to conduct annual bias audits.

Scope

The law would apply to any individual, company, or group that meets at least one of the following criteria:

  • Process or control personal information for more than 25,000 DC residents.
  • Have generated at least $15 million in average annual revenue over the past three years.
  • Broker data, or generate at least 50% of their revenue from brokering data, that includes personal information of DC residents.
  • Act as vendors that perform algorithmic eligibility determinations or algorithmic information availability determinations on behalf of another business.

The act focuses on processes that use machine learning, AI, or similar techniques to replace or assist decision-making related to access to “important life opportunities.” The bill defines these to include employment, education, credit, insurance, housing, and access to places of public accommodation.

Compliance Requirements of The Washington DC Algorithms Law

Individuals and organizations in scope would be subject to the following requirements:

The bill prohibits the use of biased algorithms and focuses on the 21 protected characteristics listed in the DC Human Rights Act.

Covered entities must conduct annual bias audits. They are required to share details of those audits with the Office of the Attorney General, along with documentation of how their algorithms are built and used. The bill does not specify exactly what a bias audit should test or what data it should use.
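Because the bill leaves audit methodology open, organizations often reach for established disparate-impact metrics. As one illustrative sketch (not a requirement of the bill), the following compares selection rates across groups using the “four-fifths rule” familiar from employment-bias testing; the group names, decision data, and 0.8 threshold are all assumptions for the example.

```python
# Illustrative disparate-impact check: compare each group's selection rate
# to the highest-rate group and flag ratios below a chosen threshold.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> list of 0/1 decisions (1 = selected)."""
    return {g: sum(d) / len(d) for g, d in outcomes.items()}

def impact_ratios(outcomes, threshold=0.8):
    """Return each group's rate, its ratio to the best-performing group,
    and a flag when that ratio falls below the threshold."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {
        g: {"rate": r, "ratio": r / best, "flag": (r / best) < threshold}
        for g, r in rates.items()
    }

decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 = 0.75 selected
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2/8 = 0.25 selected
}
report = impact_ratios(decisions)
# group_b's ratio is 0.25 / 0.75 ≈ 0.33, below 0.8, so it is flagged
```

A real audit would go further (statistical significance, intersectional groups, outcome validity), but a ratio check like this is a common starting point.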

Companies must publish consumer disclosures explaining how they collect personal information and how their algorithms make decisions. If an algorithm denies a consumer access to a service, the company must also provide an explanation, including a method for the consumer to submit corrections.

Non-Compliance Penalties

Businesses are liable for a civil penalty of up to $10,000 per violation. The bill also allows civil lawsuits in which plaintiffs could receive between $100 and $10,000 per violation, or damages.

Status

The bill was reintroduced in February 2023; it was first introduced in 2021, when it failed to advance through the legislative process. To date, the reintroduced bill has not been voted on. If passed, it would take effect immediately.

How Can Companies Ensure Compliance with the Act?

Drawing from our work in AI governance and compliance, we’ve observed how organizations adapt to similar AI regulations such as NYC Local Law 144 and the Colorado AI Act (SB-205).

Here are seven practical steps to ensure compliance:

  1. Inventory Models and Assess Risk: Compile a detailed inventory of all AI technologies in use, assess associated risks, and establish accountability for AI operations.

  2. Invest in AI Governance Tools: Utilize tools that support compliance with the Act, managing AI effectively within regulatory requirements.

  3. Automate Audits and Compliance Checks: Implement automated systems to ensure consistent compliance with the Act, maintaining transparency and accountability.

  4. Consider the Use of Synthetic Data: Leverage synthetic data to fulfill objectives without compromising personal data privacy.

  5. Implement Mandatory Disclosures: Modify interfaces to clearly disclose when consumers are interacting with AI, and train employees on how to communicate about AI use.

  6. Engage in Continuous Learning and Adaptation: Stay informed about legislative changes and industry standards, and participate in voluntary programs during the Act’s implementation period.

  7. Voluntary Commitment to Standards: Actively participate in frameworks and voluntary commitments introduced during the Act’s implementation to demonstrate leadership in AI governance.
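Step 1 above (inventorying models and assessing risk) can be made concrete in code. The sketch below is a minimal, hypothetical model-inventory record with a simple in-scope check based on the “important life opportunities” categories the bill names; the field names and risk rule are illustrative assumptions, not requirements of the Act.

```python
# Minimal sketch of an AI model inventory with a simple scoping check.
from dataclasses import dataclass
from typing import Optional

# Categories of "important life opportunities" named in the bill.
IMPORTANT_LIFE_OPPORTUNITIES = {
    "employment", "education", "credit",
    "insurance", "housing", "public_accommodation",
}

@dataclass
class ModelRecord:
    name: str
    owner: str                      # accountable team or individual
    decision_domain: str            # what the model helps decide
    uses_personal_info: bool
    last_bias_audit: Optional[str]  # ISO date of most recent audit, if any

def in_scope(record: ModelRecord) -> bool:
    """Flag models that likely fall under the Act: they touch an
    important life opportunity and process personal information."""
    return (record.decision_domain in IMPORTANT_LIFE_OPPORTUNITIES
            and record.uses_personal_info)

inventory = [
    ModelRecord("resume_screener", "talent-team", "employment", True, None),
    ModelRecord("churn_predictor", "growth-team", "marketing", True, "2024-06-01"),
]

# Models that are in scope but have never been audited need attention first.
needs_audit = [m.name for m in inventory if in_scope(m) and m.last_bias_audit is None]
# needs_audit == ["resume_screener"]
```

Keeping the inventory in structured form like this makes the annual audit cycle and accountability assignments straightforward to automate (steps 2 and 3).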

How FairNow’s AI Governance Platform Helps

Built on deep industry expertise, FairNow’s AI Governance Platform addresses the unique challenges of AI risk management.

Our solution, designed by professionals with extensive experience in highly regulated sectors, such as those in scope for the DC AI Act, offers:

  1. Streamlined compliance processes and reduced reporting times
  2. Centralized AI inventory management with continuous risk assessment
  3. Clear accountability structures and human oversight implementation
  4. Robust policy enforcement backed by ongoing testing and monitoring
  5. Efficient regulation tracking and comprehensive compliance documentation

FairNow empowers organizations to ensure transparency, reliability, and unbiased AI usage while simplifying their compliance journey.

Experience how our industry-informed platform can transform your AI governance.

Book a free demo here.

AI compliance doesn't have to be so complicated.

Use FairNow's AI governance platform to:

  • Effortlessly ensure your AI is in harmony with both current and upcoming regulations.

  • Ensure that your AI is fair and reliable using our proprietary testing suite.

  • Stay ahead of compliance requirements before fees and fines become commonplace.

Request A Demo

Explore the leading AI governance platform.