What is DC’s “Stop Discrimination by Algorithms” Act? A Detailed Guide
High-Level Summary
Washington DC’s “Stop Discrimination by Algorithms” Act would prohibit individuals and organizations from using biased algorithms and require them to conduct annual bias audits.
Scope
The law would apply to any individual or organization, of any type, that meets at least one of the following criteria:
- Process or control personal information for more than 25,000 DC residents.
- Generate average annual revenue of at least $15 million over the past three years.
- Broker data or generate at least 50% of their revenue from brokering data that includes personal information of DC residents.
- Act as a vendor that performs algorithmic eligibility determinations or algorithmic information availability determinations on behalf of another business.
The act focuses on processes that use machine learning, AI, or similar techniques to replace or assist decision-making about access to “important life opportunities,” which the bill defines to include employment, education, credit, insurance, housing, and access to places of public accommodation.
Compliance Requirements of The Washington DC Algorithms Law
Individuals and organizations in scope would be subject to the following requirements:
The bill prohibits the use of biased algorithms and focuses on the 21 protected characteristics listed in the DC Human Rights Act.
They must conduct annual bias audits. Companies are required to share with the Office of the Attorney General details of bias audits and documentation about how the algorithms are built and used. The bill does not specify exactly what the bias audit should test or what data should be used.
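Since the bill leaves the audit methodology open, one common starting point (borrowed from the adverse-impact analysis used under NYC Local Law 144) is the impact ratio: each group's selection rate divided by the highest group's selection rate. The sketch below is illustrative only, with made-up data and hypothetical function names, and is not a test prescribed by the bill:

```python
# Hypothetical bias-audit sketch. The bill does not prescribe a specific test;
# this computes per-group selection rates and impact ratios as one plausible check.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 decisions (1 = approved)."""
    return {g: sum(d) / len(d) for g, d in outcomes.items() if d}

def impact_ratios(outcomes):
    """Each group's selection rate divided by the most-favored group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Made-up decision data for two groups:
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 = 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 = 37.5% approved
}
ratios = impact_ratios(decisions)  # group_a: 1.0, group_b: 0.5
# A ratio well below 1.0 (e.g. under the traditional "four-fifths" 0.8 line)
# flags a disparity worth documenting in the annual audit report.
```

In practice an audit would also document the data used, the groups compared, and how flagged disparities were investigated, since the report must be shared with the Office of the Attorney General.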
Companies must publish disclosures to consumers about how they collect personal information and how their algorithms make decisions. Companies must also share explanations with the consumer if the algorithm denies them access to a service, including a method for consumers to submit corrections.
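The bill requires an explanation of an adverse decision and a channel for submitting corrections, but does not prescribe a format. A minimal sketch of such a notice, with illustrative field names and a placeholder URL:

```python
# Hypothetical adverse-decision notice. All field names and wording are
# assumptions for illustration; the bill only requires that the consumer
# receive an explanation and a way to submit corrections.

def adverse_action_notice(applicant_id, decision, key_factors, corrections_url):
    return {
        "applicant_id": applicant_id,
        "decision": decision,               # e.g. "denied"
        "automated": True,                  # an algorithm was used in the decision
        "key_factors": key_factors,         # main inputs behind the outcome
        "how_to_correct": corrections_url,  # channel to dispute or correct data
    }

notice = adverse_action_notice(
    "app-123",
    "denied",
    ["reported income below threshold", "short credit history"],
    "https://example.com/corrections",      # placeholder URL
)
```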
Non-Compliance Penalties
Businesses are liable for a civil penalty of up to $10,000 per violation. The bill also allows private civil lawsuits, in which plaintiffs could recover between $100 and $10,000 per violation, or damages.
Status
The bill was reintroduced in February 2023; it was first introduced in 2021 but failed to advance through the legislative process. To date, the reintroduced bill has not been voted on. Once passed, it would take effect immediately.
How Can Companies Ensure Compliance with the Act?
Drawing from our work in AI governance and compliance, we’ve observed how organizations adapt to similar AI regulations such as NYC Local Law 144 and the Colorado AI Act (SB-205).
Here are seven practical steps to ensure compliance:
- Inventory Models and Assess Risk: Compile a detailed inventory of all AI technologies in use, assess associated risks, and establish accountability for AI operations.
- Invest in AI Governance Tools: Utilize tools that support compliance with the Act, managing AI effectively within regulatory requirements.
- Automate Audits and Compliance Checks: Implement automated systems to ensure consistent compliance with the Act, maintaining transparency and accountability.
- Consider the Use of Synthetic Data: Leverage synthetic data to fulfill objectives without compromising personal data privacy.
- Implement Mandatory Disclosures: Modify interfaces to clearly disclose when consumers are interacting with AI, and train employees on how to communicate about AI use.
- Engage in Continuous Learning and Adaptation: Stay informed about legislative changes and industry standards, and participate in voluntary programs during the Act’s implementation period.
- Commit to Voluntary Standards: Adopt frameworks and voluntary commitment programs introduced during the Act’s implementation to demonstrate leadership in AI governance.
How FairNow’s AI Governance Platform Helps
Built on deep industry expertise, FairNow’s AI Governance Platform addresses the unique challenges of AI risk management.
Our solution, designed by professionals with extensive experience in highly regulated sectors such as those in scope for the Act, offers:
- Streamlined compliance processes and reduced reporting times
- Centralized AI inventory management with continuous risk assessment
- Clear accountability structures and human oversight implementation
- Robust policy enforcement backed by ongoing testing and monitoring
- Efficient regulation tracking and comprehensive compliance documentation
FairNow empowers organizations to ensure transparency, reliability, and unbiased AI usage while simplifying their compliance journey.
Experience how our industry-informed platform can transform your AI governance.
AI compliance doesn't have to be so complicated. FairNow's AI governance platform can simplify it.