AI is a tool like any other. So, when I hear about AI in hiring, I think less about the AI itself and what it is capable of, and more about how companies, teams and individuals may use it. Talent Acquisition (TA) and retention are people-centered processes: it would not be wise to overuse AI, or to replace TA with it.
Much of the positive press around AI in hiring rests on the hope that AI resume screening or AI-assisted interview processes could help reduce individual recruiter or hiring manager bias. On the surface, this sounds plausible: screening tools check a person’s resume for matches to the job description. The more connections between a candidate’s listed skills and the text of the role summary, the more highly the candidate is ranked as a potential fit, removing any assumptions a person might make about names, origins, educational background and more. However, this leaves no room for human difference and nuance.
Any Talent Acquisition partner can tell you that resumes vary wildly, but can an AI resume screening tool adjudicate between similar certifications, or does it recognize only the exact certifications named in the job description? Can it distinguish between differences in phrasing? Does it always look in the same part of the resume for a skills summary, or can it pull skill keywords mentioned anywhere in the document?
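To see why phrasing matters so much, consider a toy sketch of the kind of keyword-overlap scoring described above. Everything here is an illustrative assumption, not any real screening product: a hypothetical tool that simply counts how many job-description words also appear in a resume.

```python
# Hypothetical sketch of naive keyword-overlap resume scoring.
# This is NOT a real screening product; it only illustrates how
# exact-word matching penalizes candidates who phrase skills differently.

import re

def tokenize(text: str) -> set[str]:
    """Lowercase a document and split it into a set of word tokens."""
    return set(re.findall(r"[a-z+#]+", text.lower()))

def overlap_score(resume: str, job_description: str) -> int:
    """Count how many job-description tokens also appear in the resume."""
    return len(tokenize(resume) & tokenize(job_description))

job = "Seeking a data analyst with SQL and Python experience"

# Two candidates with comparable skills, phrased very differently:
exact_match = "Data analyst. Skills: SQL, Python."
paraphrased = "Built relational database queries and scripted reports."

print(overlap_score(exact_match, job))   # 4: 'data', 'analyst', 'sql', 'python'
print(overlap_score(paraphrased, job))   # 1: shares almost no tokens with the posting
```

A ranker built this way would surface the first candidate well ahead of the second, even though the second describes the same underlying skills. Real tools are more sophisticated than word counting, but the failure mode is the same: whatever the model has not been taught to recognize as equivalent, it scores as absent.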
It’s important to note that most screening tools do not automatically reject a candidate based on what they do or do not see; instead, they estimate a candidate’s fitness. In most cases, automatic rejections come from failing to meet minimum requirements in self-filled pre-screen questions, not from an AI-determined threshold. What AI does instead is put the candidates it ranks highest in front of the recruiter faster than manual screening would.
We must also consider that AI tools are built within our social and cultural system, and we train them within that same system. Many categories of discrimination, such as race and gender presentation, are rooted systemically: most often we are not dealing with discrimination based on a hiring partner’s active distaste for a group of people, but with societal expectations that inherently privilege certain educational backgrounds, work experience and socio-economic statuses. A tool like AI cannot see its own bias when it pulls from data that is inherently biased.
I think here of BlackGPT, which builds upon Meta’s large language models by incorporating African American cultural data and history. With this extra information in mind, the answers given to prompts change significantly. While BlackGPT is not an AI screening tool, it illustrates my point: an AI is only as smart, nuanced, culturally aware and effective as we make it.