HR’s Simple Guide To AI Regulations & AI Governance Tools (Explained in Plain Language)

 

Over the last 12 months, the news has been flooded with new AI regulations and standards, released at a monthly pace. This brief highlights the most pressing and significant regulations for HR professionals looking to create a competitive advantage through trust.

Top 5 Key Insights You Need To Know:

    • There is rising concern globally about how AI, particularly in HR, can perpetuate bias and discrimination.
    • Within HR, most of the concern is around Automated Employment Decision Tools (AEDT), which are widely used to streamline the hiring process. These tools often include software for candidate aggregation, screening, chatbot interviews, and game-based assessments.
    • Many stakeholders must be involved and informed in crafting an organization’s AI governance framework.
    • Companies are beginning to use AI governance tools and software to streamline, monitor, and ensure compliance with these rapidly changing laws and regulations.
    • This software category is often referred to as AI TRiSM, AI Governance, Responsible AI, or Bias Auditing Software.

Q: As an HR Professional, What Regulations Should I Keep an Eye on in 2024?

A: 2024 Will Be Marked by Many Significant Regulatory Advances. Become Very Familiar With These:

Although an exhaustive or prioritized list may be impossible to create, these are the regulations garnering the most attention in 2024. These regulations are part of a global effort to hold companies accountable for their AI systems.

While we may look back on 2022-2023 as the “wild, wild west” of AI usage, 2024 will bring remarkable advances in AI regulation.

Regulations are needed to ensure that the use of AI in HR doesn’t compromise employee rights, well-being, and job satisfaction, particularly in aspects like performance evaluation, monitoring, and hiring.

There are several AI regulations and standards that HR professionals should keep a close eye on:

Major AI Regulations:
  • EU AI Act
      • This is a regulation proposed by the European Union to establish a legal framework for AI systems. The Act will apply two years after it enters into force, with some exceptions for specific provisions. The legislation takes a “risk-based” approach, classifying and regulating systems based on their risk levels.
      • Significant details of the Act are expected to be released in early 2024.
  • US AI legislation: In the US, AI entered the political conversation in 2023, culminating in President Biden’s Executive Order on AI. In 2024, many of the items detailed in the Executive Order will be implemented. Key themes of the EO include:
      • NIST will define new standards for generative AI security and safety
      • Guardrails for data privacy in AI technology
      • Addressing algorithmic discrimination
      • Protecting workers, consumers, patients, and students from harm
      • Ensuring responsible use of AI by government agencies
US State & City-Level Regulations:

Within the United States, many state and city-level regulations are evolving. Most notably:

  • NYC Local Law 144 is called the “Automated Employment Decision Tools (AEDT) law.”
    • It applies to employers and employment agencies using AEDTs to evaluate candidates for employment or employees for promotion within New York City.
    • This law aims to eliminate bias in the use of AEDTs, which typically use algorithms, artificial intelligence, or machine learning to help HR professionals sort through, prioritize, or decide steps in the employment process, from hiring to firing.
    • The law prohibits employers and employment agencies from using an automated employment decision tool unless the tool has been subject to a bias audit within one year of its use; a sketch of the impact-ratio arithmetic at the heart of such an audit appears after this list.
    • A summary of the results of the bias audit must be made publicly available.
    • The Department of Consumer and Worker Protection (DCWP) began enforcement of this law and rule on July 5, 2023.
  • Maryland Code Section 3-717 is a Maryland law that prohibits employers from using certain facial recognition services during an applicant’s interview.
    • The law allows an applicant to consent to the use of certain facial recognition technologies during an interview by signing a waiver.
    • The law became effective on October 1, 2020.
  • Illinois Artificial Intelligence Video Interview Act (AIVIA) is an Illinois law that regulates the use of artificial intelligence to analyze video interviews when filling a position.
    • Employers must obtain the applicant’s consent before using artificial intelligence to evaluate their video interview.
    • Upon request from the applicant, employers must delete the applicant’s video interviews within 30 days of receiving the request.
    • The act became effective on January 1, 2020.
  • NY A00567 is a bill introduced in the New York State Assembly.
    • The bill aims to regulate the use of automated employment decision tools. These tools include personality tests, cognitive ability tests, resume scoring systems, and any system governed by statistical theory or whose parameters are defined by such systems.
    • The bill establishes criteria for using automated employment decision tools and provides for enforcement for violations of such criteria.
  • California AB 331 is titled “Automated decision tools.”
    • The bill requires a deployer and a developer of an automated decision tool to perform an impact assessment for any automated decision tool they use. This includes a statement of the purpose of the automated decision tool and its intended benefits, uses, and deployment contexts. The impact assessment must be provided to the Civil Rights Department within 60 days of completion.
    • The bill covers companies that use automated decision tools. These companies must disclose their use of such tools and explain their purpose and how AI is used. When a consequential decision is based solely on an automated decision tool, individuals may be able to opt out and request an alternative selection process or accommodation.
  • DC “Stop Discrimination by Algorithms” Act.
    • The bill focuses on the 21 protected characteristics listed in the DC Human Rights Act.
    • Expected to pass in 2024.
    • Prohibits individuals and organizations from using biased algorithms and requires them to conduct annual bias audits.
  • MA Bill 1873
    • The bill targets employers with workplaces in Massachusetts that use automated employment decision tools (AEDTs).
    • The scope of employment decisions is comprehensive, covering hiring, promotion, termination, setting wages, performance evaluation, and more. The bill sets requirements for employers, but vendors who sell AEDTs to employers are also obligated to provide their customers with all the information required to comply.
    • Expected to pass in 2024.
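
To make the bias-audit requirement in NYC Local Law 144 more concrete, here is a minimal sketch, in Python, of the selection-rate and impact-ratio arithmetic that sits at the core of such an audit. The data and layout are hypothetical; an actual Local Law 144 audit must follow the DCWP’s published rules and be conducted by an independent auditor.

```python
# Minimal sketch of an impact-ratio calculation (hypothetical data and labels).
# This only illustrates the underlying arithmetic, not a compliant audit.
from collections import defaultdict

# Each record: (demographic category, 1 if selected/advanced, 0 otherwise)
applicants = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

totals = defaultdict(int)
selected = defaultdict(int)
for category, was_selected in applicants:
    totals[category] += 1
    selected[category] += was_selected

# Selection rate per category: selected / total applicants in that category.
selection_rates = {c: selected[c] / totals[c] for c in totals}

# Impact ratio: each category's selection rate divided by the highest rate.
highest_rate = max(selection_rates.values())
impact_ratios = {c: rate / highest_rate for c, rate in selection_rates.items()}

for category in sorted(selection_rates):
    print(f"{category}: selection rate {selection_rates[category]:.2f}, "
          f"impact ratio {impact_ratios[category]:.2f}")
```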
Global AI Regulations:

Due to the nature of their operations, HR leaders must also pay close attention to AI regulations in countries where their customers, job applicants, or vendors operate. This is essential because these regulations affect how AI tools can be used and how data can be handled.

  • Canada’s Artificial Intelligence and Data Act
    • The Artificial Intelligence and Data Act (AIDA) is a proposed legislation that aims to regulate AI systems’ design, development, and deployment.
    • It establishes common requirements across Canada for AI systems, consistent with national and international standards. It prohibits certain conduct concerning AI systems that may seriously harm individuals or their interests.
    • It was introduced as part of the Digital Charter Implementation Act, 2022 (Bill C-27) and is expected to pass in 2025 or 2026.
Global AI Standards:
  • US NIST AI Risk Management Framework (AI RMF): unlike the regulations above, the AI RMF is a standard.
    • The standard is a set of guidelines developed by the National Institute of Standards and Technology (NIST) to help organizations assess and manage the risks associated with implementing and using AI systems.
    • It is intended for voluntary use and aims to improve the ability to incorporate trustworthiness considerations into designing, developing, using, and evaluating AI products, services, and systems.
    • The framework was released on January 26, 2023.
  • ISO 42001 is a “process standard.”
    • The standard is currently voluntary and opt-in. Organizations that wish to demonstrate sound AI management practices can do so by following the standard and getting certified.
    • Governments like the EU may look to make ISO 42001 compliance a requirement in some instances, like procurement of AI by governments. If this happens, ISO 42001 compliance could become table stakes for selling AI, much like SOC2 and ISO 27001 are for information security.
    • It was published in December 2023.

With over 100 additional requirements expected to be added in 2024, companies will need to meticulously monitor regulatory developments to comply with existing rules across jurisdictions and prepare for new ones in the making. Alternatively, organizations may invest in AI governance software to track their requirements automatically.

Q: What Is the Difference Between an AI Regulation and an AI Standard?

A: AI Regulations Are Legally Enforceable, While AI Standards Offer Technical and Ethical Guidelines.

Both are crucial for the effective governance of AI, but they operate in different domains and have different implications for non-compliance.

For example, the EU AI Act is a regulation proposed by the European Union to establish a legal framework for AI systems. The Act categorizes AI applications into risk categories and imposes legally binding rules requiring tech companies to notify people when they are interacting with specific AI systems. Non-compliance with the EU AI Act can lead to legal consequences such as fines.

Alternatively, the NIST AI Risk Management Framework is an example of a standard developed by the National Institute of Standards and Technology (NIST) in the United States. It is intended for voluntary use and aims to improve the ability to incorporate trustworthiness considerations into designing, developing, using, and evaluating AI products, services, and systems. 

Non-compliance with the NIST Framework does not directly lead to legal consequences but can potentially lead to lawsuits if an AI system causes harm due to non-adherence. Standards help build trust in the market and act as a certification of trustworthiness for users and buyers.

Q: Should HR Leaders Wait for Laws to Be Passed Before They Take Action?

A: No! Anti-bias Laws Are Already in Effect in the Employment Space.

Title VII of the Civil Rights Act already prohibits employment discrimination, regardless of whether a human or an AI makes the decision. Companies take on significant risk by not monitoring and self-auditing their talent practices for bias.

Keith Sonderling, a commissioner of the EEOC, recommends that HR organizations continuously monitor and govern their talent practices. When using AI, he advises borrowing long-standing self-monitoring principles from the financial services industry (e.g., the Federal Reserve’s SR 11-7 guidance on model risk management).
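
As a rough illustration of what ongoing self-monitoring can look like, the sketch below applies the EEOC’s four-fifths rule of thumb to periodic hiring snapshots and flags any group whose selection rate falls below 80% of the highest group’s rate. The numbers, time periods, and function names are hypothetical, and genuine SR 11-7-style model risk management involves far more (documentation, independent validation, and governance) than this single check.

```python
# Hypothetical sketch of periodic adverse-impact monitoring using the
# four-fifths rule of thumb. Data, names, and thresholds are illustrative only.
FOUR_FIFTHS_THRESHOLD = 0.80

def flag_adverse_impact(selection_rates: dict) -> list:
    """Return groups whose selection rate is below 80% of the highest rate."""
    highest = max(selection_rates.values())
    return [g for g, r in selection_rates.items()
            if r / highest < FOUR_FIFTHS_THRESHOLD]

# Monthly snapshots of selection rates by group (hypothetical numbers).
monthly_snapshots = {
    "2024-01": {"group_a": 0.30, "group_b": 0.28},
    "2024-02": {"group_a": 0.32, "group_b": 0.21},
}

for month, rates in monthly_snapshots.items():
    flagged = flag_adverse_impact(rates)
    if flagged:
        print(f"{month}: review needed for {', '.join(flagged)}")
    else:
        print(f"{month}: no groups below the four-fifths threshold")
```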

Q: Why Is HR in the Spotlight When It Comes to AI Governance?

A: AI Systems Used in Employment Are Considered High-Risk.

Alongside areas such as electricity grid management, law enforcement, and health and safety, the EU Artificial Intelligence Act classifies AI-driven HR software as high-risk.

“AI systems used in employment, workers management, and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination, and for task allocation, monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact future career prospects and livelihoods of these persons.”
— European Commission, Proposal for a Regulation of the European Parliament and of the Council, COM(2021) 206 final, 2021/0106 (COD), Brussels, 21.4.2021, p. 27, Recital 36

Many other governing bodies have also indicated technology involved in employment will be highly scrutinized.

Q: What Do Organizational Leaders and Stakeholders Care About When It Comes To AI Governance?

A: It Depends on Who You Ask!

The feverish pace and wide reach of new regulations pose a significant challenge for leadership teams, who must understand and adapt to these new laws to avoid penalties. Here is how each stakeholder is directly impacted:

  • HR:
    • Ethical Recruitment: HR must ensure AI tools used for recruitment and management do not perpetuate bias or discrimination, aligning with EEOC guidelines and other labor laws.
    • Employee Training and Awareness: They must educate employees about AI use, addressing concerns about surveillance, data privacy, and job security.
    • Data Management: HR is responsible for managing employee data ethically and legally, especially when used in AI systems.
  • Legal:
    • Compliance: The legal department must ensure that the organization’s use of AI complies with all relevant regulations, which may involve interpreting complex and evolving legal standards.
    • Risk Management: They need to identify and mitigate legal risks associated with AI, such as potential liability from biased decision-making or privacy breaches.
    • Policy Development: Crafting policies and procedures that align AI practices with legal requirements.
  • Technology/IT:
    • Implementation and Oversight: They are responsible for the technical implementation of AI systems, ensuring they meet regulatory standards for accuracy, fairness, and security.
    • Data Integrity and Security: Maintaining the quality and security of data used in AI systems to prevent breaches and misuse.
  • C-Suite:
    • Strategic Decision-Making: Leadership must integrate AI responsibly, balancing innovation with regulatory compliance and ethical considerations.

With so many stakeholders involved, simply getting everyone in the same room proves challenging.

Many organizations are searching for platforms to help centralize and simplify the compliance process. Emerging technology is helping organizations stay lawful and up-to-date with current and pending regulations.

Q: What Solutions Are Available?

A: AI Governance Tools and Software

AI governance software is becoming increasingly vital in today’s rapidly evolving regulatory landscape, particularly in human resources. Well-built AI governance software balances automation with human oversight, empowering professionals to leverage AI’s efficiencies responsibly.

Organizations should prioritize systems that seamlessly integrate with their existing HR systems or offer integration-free solutions such as a synthetic fairness simulation.
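
As a rough sketch of the general idea behind a synthetic fairness simulation (and not a description of FairNow’s actual methodology), the example below generates matched synthetic candidate profiles that differ only in a group label, scores them with a stand-in model, and compares average outcomes. The score_candidate function and profile fields are assumptions for illustration.

```python
# Illustrative-only sketch of a "synthetic fairness" check: score matched
# synthetic profiles that differ only in a group label and compare outcomes.
# score_candidate() is a hypothetical stand-in for the AEDT being evaluated.
import random

random.seed(42)

def score_candidate(profile: dict) -> float:
    """Hypothetical scoring model; in practice this is the tool under test."""
    base = 0.5 + 0.05 * profile["years_experience"]
    return min(base + random.uniform(-0.05, 0.05), 1.0)

def make_profile(group: str, years_experience: int) -> dict:
    return {"group": group, "years_experience": years_experience}

# Matched pairs: identical qualifications, different group label.
pairs = [(make_profile("group_a", y), make_profile("group_b", y))
         for y in range(1, 6)]

# A large average gap would suggest the model treats otherwise-identical
# profiles differently depending on the group label.
gap = sum(score_candidate(a) - score_candidate(b) for a, b in pairs) / len(pairs)
print(f"Average score gap between matched profiles: {gap:+.3f}")
```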

Using FairNow to Become Compliant the Easy Way: 6 Steps

Step 1 (Start Here!): Assess Current Compliance Status

Audit Existing Processes: Review your current hiring practices and identify areas where AI is used or could be implemented.

Determine if these processes meet the existing legal standards and regulations regarding fairness and non-discrimination.

Not using any AI? Then you may be able to skip this step for now. However, you will still need to confirm that none of your third-party vendors are using AI. If any of them are, you’ll want to do a thorough analysis of their AI use as well.

Identify Gaps: Pinpoint any discrepancies between your current practices and the requirements set by relevant employment laws, such as the Equal Employment Opportunity (EEO) laws, and AI-specific regulations (above).

Step 2: Understand Regulatory Requirements
Research AI Regulations: Of course, you can always stay informed yourself about the latest regulations affecting AI in HR, including the local, national, and international laws that impact your hiring practices.

But if you’re not big on reading thousands of pages of emerging legislation, don’t worry, we are!

Consult the FairNow team to stay on top of AI regulations and standards globally.

Step 3: Implement FairNow AI Governance Software
Integrate with Existing Systems: Seamlessly integrate FairNow into your HR technology stack, ensuring it complements and enhances your current hiring processes.

Is your hiring data incomplete, inconsistent, or non-existent? Not to worry! That is very common. FairNow has developed an integration-free option that can assess bias without accessing your hiring data. Pretty cool, hey?

Customize According to Your Needs: Use FairNow’s customizable features to tailor the software to your company’s specific compliance needs and goals (we’ll help with that).

All of our features can also be adjusted to meet your team’s risk tolerance, easily increasing or decreasing controls as you see fit.

Step 4: Leverage FairNow’s Features for Compliance & Bias Evaluations

Create a Centralized AI Inventory: FairNow’s AI Inventory feature allows your team to create a centralized record of all of the AI your team is building, buying, and deploying across the organization. 
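
For a sense of what a centralized inventory record might capture, here is a hypothetical sketch. The fields (owner, vendor, use case, risk level, last bias audit, applicable rules) are common inventory attributes assumed for illustration, not FairNow’s actual schema.

```python
# Hypothetical example of what a single AI-inventory record might capture.
# Field names and the sample entry are illustrative, not FairNow's schema.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class AIInventoryRecord:
    name: str                          # e.g., "Resume screening model"
    owner: str                         # accountable team or person
    use_case: str                      # where in the employment process it is used
    risk_level: str                    # e.g., "high" for hiring tools under the EU AI Act
    vendor: Optional[str] = None       # None if built in-house
    last_bias_audit: Optional[date] = None
    applicable_rules: List[str] = field(default_factory=list)

inventory = [
    AIInventoryRecord(
        name="Resume screening model",
        owner="Talent Acquisition",
        use_case="Candidate screening",
        risk_level="high",
        vendor="ExampleVendor Inc.",   # hypothetical vendor
        last_bias_audit=date(2023, 11, 1),
        applicable_rules=["NYC Local Law 144", "Illinois AIVIA"],
    ),
]

# Surface tools whose annual bias audit is missing or more than a year old.
overdue = [r.name for r in inventory
           if r.last_bias_audit is None
           or (date.today() - r.last_bias_audit).days > 365]
print("Tools needing a bias audit:", overdue or "none")
```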

Enjoy Continuous Compliance: FairNow’s regulatory compliance toolkit ensures your hiring practices remain compliant with the latest AI regulations without manual intervention.

Enable Bias Detection Tools: Use FairNow’s tools to analyze and adjust your hiring algorithms, ensuring they are free from bias and promote fairness.

Step 5: Monitor, Report, and Improve
Generate Compliance Reports: Utilize FairNow to generate reports that demonstrate your commitment to fair and compliant hiring practices, which can be valuable for both internal review and regulatory audits.

Step 6: Educate Your Team
Promote Training and Awareness: Conduct training sessions for your HR team and relevant stakeholders about the importance of AI governance and how FairNow assists in achieving compliance.

Promote a Culture of Compliance: Encourage an organizational culture that values fairness, transparency, and compliance in every aspect of the hiring process.

By following these steps, HR professionals can leverage FairNow’s AI governance software to not only become compliant with current regulations but also maintain a proactive stance towards fairness and ethical considerations in AI-powered hiring processes.

They say, “Trust is built in drops and lost in buckets.”

Companies using FairNow build trust with their users, job applicants, and end-consumers.

Ready to become compliant the easy way? Request a speedy demo here!
