Over the last few decades, the digital economy has moved from the Internet era to the data/cloud era and now to the AI era. Throughout these transitions, one consistent thread has been the importance of building trust. In the 90s, many people were convinced that consumers would never feel comfortable submitting their credit card information online or buying products sight unseen from unknown sellers. Companies had to solve these trust problems before digital commerce could become a reality. Over the past decade, data privacy, data protection, and cybersecurity have emerged as dimensions of trust that companies must address. To compete in the emerging AI economy, companies now need to build trust around their AI. Fairness, transparency and explainability, effectiveness, accountability, and ethics are paramount to earning that trust. You can read more about our AI governance model here.
The first step in that journey begins today, July 5th, as NYC Local Law 144 enters enforcement. Enacted in 2021, this law, though imperfect, is the first of its kind to regulate AI-driven hiring in the United States. It requires annual, independent bias audits for employers that use automated employment decision tools (AEDTs) to evaluate candidates or employees for positions in New York City. An AEDT is defined as a tool that uses machine learning, statistical modeling, data analytics, or artificial intelligence to substantially assist or replace discretionary employment decisions. We, at FairNow, help companies with this audit, and you can learn more about the law here.
Other AI-focused regulations are in development, both globally and in the US. The European Parliament has adopted its position on the EU AI Act, and negotiations with member states will now settle the final details of the law. Connecticut recently passed a law requiring state agencies to follow guidelines for the safe use, development, and procurement of AI. Senator Chuck Schumer has been leading the push to develop federal AI legislation. A press release from his office shares that his framework would likely comprise four guardrails to ensure responsible AI development: (i) identifying algorithms' creators and target audiences, (ii) disclosing data sources, (iii) clarifying decision-making processes, and (iv) implementing explicit and robust ethical guidelines. At FairNow, we are tracking the development of AI legislation across the globe, and you can stay up to date at fairnow.com/guides!
Compliance with regulations is an important first step, but not the final one, in building trust in your company's AI. NIST and ISO standards are emerging that lay out specific policies and practices for AI development. We expect these standards to become table stakes for companies that build and use AI. Companies that can demonstrate that their algorithms are effective, explainable, and fair will differentiate themselves in the market.
We are entering a new era. AI has enormous positive potential, but that potential will go unrealized without trust. In this economy, AI won't be the differentiator; trustworthy AI will. At FairNow, we're helping companies govern their AI well and build trust in it. Learn more here.