Artificial Intelligence (AI)
Artificial Intelligence (AI) refers to the simulation of human intelligence in machines or computer systems.
AI Transparency
Transparency of AI systems involves informing stakeholders about the system’s objectives and processes. This could include details about how the model was trained, what data it uses, and how risks are mitigated.
AI Explainability
The process of interpreting how a model arrived at its decision in a way that humans can understand. Explainability is important for building trust in an AI system and for verifying that its behavior is logical, robust, and fair.
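As a minimal sketch of one common explainability technique, the example below uses permutation feature importance: shuffling one feature at a time and measuring how much held-out accuracy drops. The dataset, model, and feature names are illustrative assumptions, not part of any particular system.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real decisioning dataset (assumption).
X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffling a feature breaks its relationship to the label; the larger the
# drop in held-out accuracy, the more the model relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```

A report of these importances is one simple way to communicate to a stakeholder which inputs drive a model's decisions.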
AI Ethics
The study and development of principles and guidelines to ensure that AI is designed, developed, and deployed in a manner that is beneficial, fair, transparent, and respects human rights.
AI Compliance
AI compliance involves understanding the legal and regulatory obligations that apply to an organization’s AI systems and demonstrating that those obligations are met. For example, companies that use AI models for employment decisions in New York City are required to comply with Local Law 144.
Automated Employment Decision Tool (AEDT)
A program that uses machine learning, statistical inference, artificial intelligence, or data analytics to replace or assist decision-making.
Adverse Impact Analysis
This is typically a bias audit that evaluates an automated employment decision tool (AEDT) under the four-fifths rule, which the EEOC uses as a potential indicator of discrimination. The four-fifths rule is violated when the ratio of the selection rate (or pass rate) of the lowest-scoring group to that of the highest-scoring group (e.g., across race or gender categories) is less than 80%.
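The check itself is simple arithmetic. The sketch below uses hypothetical group names and selection counts purely to illustrate the calculation.

```python
# Hypothetical selection counts per group (assumptions for illustration).
selected = {"group_a": 48, "group_b": 30}      # candidates selected per group
evaluated = {"group_a": 100, "group_b": 100}   # candidates evaluated per group

# Selection rate = selected / evaluated for each group.
rates = {g: selected[g] / evaluated[g] for g in evaluated}

# Impact ratio = lowest selection rate divided by highest selection rate.
impact_ratio = min(rates.values()) / max(rates.values())

print(f"selection rates: {rates}")
print(f"impact ratio: {impact_ratio:.2f}")
print("passes four-fifths rule" if impact_ratio >= 0.8 else "potential adverse impact")
```

Here the impact ratio is 0.30 / 0.48 ≈ 0.63, which falls below the 80% threshold and would flag potential adverse impact.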
Machine Learning
A process for creating models that learn patterns and draw inferences from data.
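As a minimal sketch of this idea, the example below fits a model to a toy dataset and uses it to make a prediction on a new input; the data and variable names are illustrative assumptions.

```python
from sklearn.linear_model import LinearRegression

# Toy example data (assumption): hours studied and resulting test scores.
X = [[1], [2], [3], [4], [5]]
y = [52, 60, 68, 76, 84]

model = LinearRegression().fit(X, y)   # learn the pattern from the data
print(model.predict([[6]]))            # infer the output for an unseen input
```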