AI has the potential to transform many HR functions. In hiring, it is already changing how companies source, assess, and select talent. It will play a large role in personalizing learning and will help match internal talent to roles via skills data. It will also streamline workflows and make information easier to find, such as navigating the HR policy and compliance ecosystem. The potential is huge. But to realize these benefits, HR organizations will need to ensure that their use of AI is responsible and well-governed. What does being well-governed look like? We describe our governance model below:
- Strategy – You need to develop an AI strategy that answers questions such as:
- What are the highest-level use cases for AI in your organization?
- What risks exist for each use case?
- How does the value–risk trade-off net out for each use case?
- How should risks be identified, measured, and mitigated?
- How will humans and AI interact in making decisions, and who makes the final decision?
- Talent and culture
- Upskilling – To be well-governed, you will either need to hire for new skills or upskill existing talent. New skills include prompt engineering, data and model governance, explainability analysis, and fairness definitions.
- Accountability – It is important to have a person who is accountable for your tools. You can have a single accountable executive or several, but ultimately they are responsible for the performance, risk, and governance of the models.
- Policy and governance council – You will want to create a council, or a formal stakeholder group, that includes data scientists, technologists, risk/governance, legal, and DIB to inform, shape, and influence decisions and policy. For instance, this group should set an AI transparency policy: what degree of transparency appropriately weighs all the different factors? This group, together with the accountable executive, also helps build the right culture, in which teams understand and communicate AI-related risks and are empowered to raise issues when necessary.
- Model governance
- Inventory – You should register, track, and inventory all the models you are using in HR. Each inventory entry should include a description and purpose of the model, its metadata, and the name of an accountable executive.
- Monitoring – We recommend that you monitor your tools on four dimensions:
- Bias
- Explainability
- Effectiveness
- Validity/reliability
We will have a separate post on how to measure bias, explainability, effectiveness, and validity/reliability, and under what circumstances you can or cannot monitor each. As part of performance monitoring, you will also want to monitor your data on the dimensions of data privacy, data security, data quality, and data bias.
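To make the inventory idea concrete, the fields described above (description, purpose, metadata, accountable executive) can be sketched as a simple registry record. This is a minimal illustration in Python; the class, function, and field names are our own hypothetical choices, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """One entry in an HR AI model inventory (illustrative fields only)."""
    name: str
    description: str            # what the model does
    purpose: str                # the HR use case it serves
    accountable_executive: str  # person responsible for performance, risk, and governance
    metadata: dict = field(default_factory=dict)  # e.g. version, vendor, training-data notes

# A minimal in-memory inventory keyed by model name
inventory: dict[str, ModelRecord] = {}

def register(record: ModelRecord) -> None:
    """Add a model to the inventory, rejecting duplicate names."""
    if record.name in inventory:
        raise ValueError(f"Model '{record.name}' is already registered")
    inventory[record.name] = record

# Hypothetical example entry
register(ModelRecord(
    name="resume-screener-v2",
    description="Ranks applicants against job requirements",
    purpose="Hiring: candidate screening",
    accountable_executive="Jane Doe (VP, Talent Acquisition)",
    metadata={"vendor": "internal", "version": "2.1"},
))
```

In practice this record would live in a governed system of record rather than in memory, but the shape of the data (and the mandatory accountable-executive field) is the point.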
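As one concrete example of the bias dimension, a common first check in hiring contexts is to compare selection rates across groups, the calculation behind the "four-fifths rule" heuristic from U.S. EEOC guidance. A minimal sketch, with illustrative numbers rather than real data, and not a substitute for a complete fairness analysis:

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants who were selected."""
    return selected / applicants

def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the reference (highest-rate) group's."""
    return group_rate / reference_rate

# Illustrative numbers only
rate_a = selection_rate(50, 100)   # 0.50 (reference group)
rate_b = selection_rate(30, 100)   # 0.30
ratio = adverse_impact_ratio(rate_b, rate_a)

# The four-fifths heuristic flags ratios below 0.8 for further review
flagged = ratio < 0.8
print(f"impact ratio = {ratio:.2f}, flagged = {flagged}")  # impact ratio = 0.60, flagged = True
```

A flagged ratio is a prompt for deeper investigation, not a verdict; your monitoring program should pair checks like this with the explainability, effectiveness, and validity measures above.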
Our governance framework overlaps heavily with the NIST AI Risk Management Framework, customizing and extending it for the HR context.
Deploying AI without governance would be like driving without a dashboard or headlights: risky and potentially non-compliant. Governance requires effort and investment, but it will ultimately let you move faster on your AI journey and take advantage of this technology's tremendous potential.