Guru Sethupathy
David Green
One of the most impressive examples of how to scale people analytics successfully comes from Capital One. During his five-year tenure as Head of People Strategy and Analytics at Capital One from 2017 to 2022, Guru Sethupathy quadrupled the size of the function to around a hundred people and, together with his team, delivered significant value to the business as well as to Capital One’s Associates. Guru shared a case study of this transformation in Excellence in People Analytics as well as in an episode of the Digital HR Leaders podcast: How Capital One Delivers Value at Scale with People Analytics.
Guru and I caught up again recently to discuss generative AI and the growth of AI in HR in general, as well as the work Guru has been doing since the podcast episode was published two years ago, both at Capital One and now in his new company, FairNow. As Guru explains in the article, FairNow is a platform that supports firms in intelligently auditing and monitoring their AI algorithms and human processes to ensure that their methods are fair, effective, and explainable.
Guru, it’s been two years since you shared the inspiring story of how you scaled people analytics at Capital One on the Digital HR Leaders podcast. What have you been up to since then?
Hi David, good to chat with you again. Since we last chatted, I have been exploring both sides of AI in the HR space. Over the last few years, my team and I at Capital One were embedding AI in the learning context. We were building a simulator for learning: think of a flight simulator, but for knowledge workers. Our strong belief is that people ‘learn by doing’ much more than by sitting through a course or lecture. The technology we were building enabled a certain category of knowledge workers at Capital One to practice their skills in simulated environments and get real-time, AI-based feedback so they could learn faster and better. This could be a win not just for workers, who can upskill faster, but also for Capital One, which can overcome certain talent shortages. I recently left Capital One but continue to consult on this work and am excited to see the impact.
The reason I left Capital One after six wonderful years was to start a tech company called FairNow. To provide some context, a majority of companies are using some kind of automation and/or AI in HR, and yet a majority of those companies don’t know if those tools are fair, compliant, or effective. With scrutiny and regulation ramping up, we believe there is a need to square that circle, and we have built technology to help companies and people ensure that their AI and human processes in HR are trustworthy.
There’s been a lot of talk recently about ChatGPT, and its implications for the future of work and HR. What are your thoughts?
AI technology has been progressing for some time, but ChatGPT was the first time that the average person could interact with it in such a clear and obvious way. ChatGPT has caught the attention of the broader public regarding both the incredible potential of AI and its possible downsides. Every other comment I read is either “this thing is amazing, it’s going to transform everything” or “this thing is saying wrong or crazy things and could be dangerous if used poorly.” I think there is truth in both.
The best use cases for the current state of AI technology are where there is lots and lots of training data and where accuracy and personalization are helpful but are not a matter of life or death. The biggest use cases of AI to date have been in areas like marketing and advertising, and now, with ChatGPT, generative content. For instance, AI-based ad targeting is imperfect, but it is better than it used to be, and that is good enough to generate a big ROI. The same goes for ChatGPT writing a poem or a history paper: it matters less that it is imperfect; if it can make you more efficient, then it is valuable. That being said, I think we are still far away from AI replacing humans in many fields. Rather, those who know how to use AI will be able to differentiate themselves. I saw a line recently that I tend to agree with: “AI won’t replace you, but the person who knows how to use AI will.”
For instance, depending on how you specifically prompt ChatGPT, it will give very different responses to the same question. Figuring out how to interact with ChatGPT to get the most value from it will be a skill in the future. In fact, I foresee a world where AI creates a new category of skills and jobs where people become specialized in partnering with particular AI systems to be more efficient and better at solving problems. Today we have AWS and Salesforce specialists. In the future, I believe we will have ChatGPT and Bard specialists.
This also opens up a broader discussion about AI in general. What are the implications for HR when it comes to AI?
First, I would say that definitions of AI vary quite a bit depending on whom you ask. For simplicity, I am referring to any kind of tool or technology that uses predictive modeling as AI.
I would categorize use cases of AI in HR into predictive use cases and content generation use cases. In the predictive category, recruiting and hiring is a natural area where AI can be quite helpful. We already see many, many companies starting to use AI-related tools in the hiring space, and that is only going to grow (see graph below, source: SHRM – Automation and AI in HR report, 2022). AI can improve on human inefficiencies, inconsistencies, and biases. However, AI can and will scale biases as well, and that is something to be really careful about. I often hear that AI will eliminate all biases, and that is not necessarily true at all. In fact, if we are not careful, it can scale biases in frightening ways.
Other predictive use cases for AI in HR include internal job mobility and attrition. Content generation use cases include HR chatbots, writing job descriptions and talent reviews, summarizing employee conduct investigations, and applications in the L&D space. The technology is not there yet to do all the work for you, but if you know how to use it, AI can make you more efficient. I truly do believe AI is the future, and companies that don’t use AI will be at a severe competitive disadvantage. But there are also many, many ways it can go wrong, so being well-managed and having AI governance is important.
What about the risks for HR and organizations with AI? What should HR and people analytics leaders pay particular attention to?
AI is the hot buzzword right now, and there are a lot of vendors promising AI-based solutions for HR. I would encourage HR and people analytics leaders to pay attention to two dimensions. First, are your AI systems encoding bias? Regulators, employees, and other stakeholders are paying increasing attention to ensure that these solutions are not encoding bias in the HR space; see, for instance, the Workday lawsuit. After all, when it comes to HR, we are talking about people’s careers, pay, and livelihoods. The financial and reputational implications of bias are large: companies already spend billions of dollars a year on fines, settlements, payouts, consulting fees, and so on for employment discrimination lawsuits and audits, and we expect this will only grow in the AI age. Companies can no longer afford not to have a handle on this.
Second, AI solutions are not necessarily plug-and-play magic yet. Just buying an AI solution from a vendor is not going to magically lead to better hires. The performance of AI can “drift” over time and become worse. More importantly, as we discussed above, how well or poorly humans interact with and use AI can lead to better or worse decisions and outcomes. So I would encourage HR and people analytics leaders to continuously measure the effectiveness of their AI-human systems to make sure their investments keep delivering value.
According to various surveys, the majority of HR leaders do not know if their HR automation/AI technologies are encoding bias or even working well. Companies that invest in monitoring and governing their AI solutions are going to be able to take advantage of the benefits of AI while managing their risk. And in the future, I think AI monitoring and governance will be a specialized skill and function in companies, including in HR.
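To make the bias check Guru describes concrete, here is a minimal illustrative sketch (not FairNow’s methodology) of the kind of basic adverse-impact screen a people analytics team might run on an AI hiring tool’s outcomes, using the EEOC “four-fifths rule” as a rough heuristic. The data, column names, and threshold handling are hypothetical simplifications; a real audit would go considerably further.

```python
# Illustrative sketch only: a basic adverse-impact check on an AI screening
# tool's pass-through rates, using the EEOC "four-fifths rule" as a rough
# screening heuristic. Column names and data below are hypothetical.
import pandas as pd

def adverse_impact_ratios(df: pd.DataFrame) -> pd.DataFrame:
    """Selection rate per group, divided by the highest group's rate."""
    rates = df.groupby("group")["advanced"].mean()   # share advanced by the tool
    ratios = rates / rates.max()                     # impact ratio vs. top group
    return pd.DataFrame({
        "selection_rate": rates,
        "impact_ratio": ratios,
        "below_four_fifths": ratios < 0.8,           # flags groups that may warrant review
    })

# Hypothetical example: 200 candidates, two groups, different pass-through rates
candidates = pd.DataFrame({
    "group":    ["A"] * 100 + ["B"] * 100,
    "advanced": [1] * 60 + [0] * 40 + [1] * 40 + [0] * 60,
})
print(adverse_impact_ratios(candidates))
```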
Guru, as you mentioned earlier you have left Capital One to co-found FairNow. Please can you explain more about FairNow, where you and the team will be focusing, and how you intend to help firms address some of the risks of AI in HR and HR Technology?
According to various studies, anywhere from 50-70% of companies use automation and AI in various parts of HR, mostly in their hiring function. Yet nearly half of those CEOs and CHROs don’t know if their AI solutions are encoding bias and don’t have the resources and/or capabilities to check. This is not a sustainable situation with all the regulation and scrutiny coming down the pipe. We started FairNow to help automate compliance of AI solutions in HR and to help companies be proactive in monitoring their AI-based solutions so they can be well-managed and build trust. Our framework for being well-managed is fair, explainable, and effective. Our technology not only automates regulatory audit and compliance prep, but also provides insights so you can fix biases, understand the ROI and effectiveness of your solutions, and take corrective measures. And we can evaluate solutions whether they are internally built or provided by external vendors.
Finally, please could you summarize some key tips to help guide HR and people analytics leaders when it comes to deploying AI in their people technology and programs?
HR organizations that use AI well will have an advantage in the battle for talent. But to realize this potential, you will need to be well-managed and invest in governance. I would recommend investing in:
- Hiring or upskilling talent to be able to work with AI tools
- Collecting better data by working backwards from what you want to measure to define metrics, and then going out to collect the data you need; otherwise you will keep having mis-measurement
- Collecting data better by leveraging incentives and technology so that the data you capture is of higher quality and not littered with incorrect or missing data
- IO psychologists who can provide scientific validation of various HR processes
- Auditing your AI tools and human processes and providing insights on how to remedy and improve their fairness, explainability, and effectiveness (this is what we do at FairNow)
- Building an internal governance team within the people analytics or HR function
THANK YOU
ABOUT THE AUTHORS
Guru Sethupathy
Guru has been a leading technologist, economist, advisor, and builder in the Future of Work over the last 15 years. He has led research, advised Fortune 100 companies, and built technology solutions that have shaped talent markets and systems. Most recently, he built the People Strategy and Analytics function at Capital One, where the team shaped talent strategy and built innovative technologies to transform how Capital One hires talent and how people learn. Now, he is the cofounder and CEO of FairNow, a technology company that helps companies be compliant and well-managed in their use of AI.
David Green
David is a globally respected author, speaker, and executive consultant on people analytics, data-driven HR and the future of work. With lead responsibility for Insight222’s brand and market development, David helps chief people officers and people analytics leaders create value with people analytics. David is the co-author of Excellence in People Analytics, host of the Digital HR Leaders podcast, and regularly speaks at industry events such as UNLEASH and People Analytics World. Prior to co-founding Insight222, David worked in the human resources field in multiple major global companies including most recently with IBM.