As AI technology rapidly advances, companies are eagerly adopting it to streamline their hiring practices. Notable firms like Hilton and Unilever use AI-driven tools such as chatbots for initial screenings and one-way video interview platforms to improve efficiency and manage large volumes of applicants. These tools automate routine tasks, freeing talent acquisition professionals to focus on more strategic aspects of recruitment.
The swift progress and implementation of AI in hiring raise significant questions: Are we moving toward an era where human interviews become redundant? Will AI eventually surpass human judgment, rendering it irrelevant? These questions are pivotal as businesses increasingly integrate AI into their hiring processes, prompting a reevaluation of the role human interaction plays in recruitment.
The goal of this article is to delineate the relative strengths and weaknesses of both AI and human assessments to help talent acquisition (TA) professionals strike the optimal balance in their recruitment strategies.
Bias: Human interviewers inherently bring biases into the interview process, whether a preference for candidates who share similar backgrounds (“similar-to-me” bias) or undue weight given to a single strong skill (halo effect). These biases, and many others that creep into interviews, can skew assessments and undermine both the accuracy and fairness of hiring decisions.
Inconsistency: Human judgment can be highly inconsistent. Factors as trivial as an interviewer’s mood or physical state (like hunger) can influence decision-making. A well-cited study of parole board decisions found that judges handed down harsher rulings just before lunch, showing how physiological states can affect outcomes. Similarly, a 1960s study by Lewis Goldberg examined how consistently radiologists interpreted the same X-rays at different times and found significant inconsistencies in their diagnoses, suggesting that factors such as fatigue and time of day influence professional judgment. This research highlights the variability and subjectivity of judgment, even among highly trained experts.
Misalignment: Many interview processes lack clear definitions of what “good” looks like in terms of skills, values, and motivations. This ambiguity leaves substantial room for interpretation, allowing biases to creep in and creating misalignments between different interviewers. Such discrepancies often result in the need for multiple interview rounds.
Bias in, bias out: Many AI algorithms are trained on historical human decisions that contain inherent biases, so these systems perpetuate rather than correct them. A notable example is Amazon’s CV-screening algorithm, discontinued after it was found to systematically downgrade resumes from women. The bias arose because the algorithm was trained on resumes submitted to the company over a 10-year period, which came predominantly from men, reflecting the existing male dominance of the tech industry. This case illustrates how AI, if not carefully monitored and adjusted, can reinforce existing workplace inequalities rather than eliminate them.
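To make the mechanism concrete, here is a minimal, hypothetical sketch in Python. It is not Amazon’s system; the dataset, feature names, and numbers are all invented for illustration. A classifier is trained on synthetic “historical” hiring decisions that were biased against women, and it learns to penalize a CV feature that merely correlates with gender, even though gender itself is never a model input.

```python
# Hypothetical "bias in, bias out" sketch: a model trained on biased
# historical hiring decisions learns to penalize a gender proxy.
# All data, features, and numbers below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

skill = rng.normal(0, 1, n)                      # true qualification signal
is_woman = rng.integers(0, 2, n)                 # protected attribute (never given to the model)
womens_club = is_woman & (rng.random(n) < 0.7)   # proxy feature, e.g. a "women's club" mention on the CV

# Historical decisions: driven by skill, but systematically biased against women.
hired = (skill - 1.0 * is_woman + rng.normal(0, 0.5, n)) > 0

# The model only ever sees CV features -- yet the proxy leaks gender.
X = np.column_stack([skill, womens_club])
model = LogisticRegression().fit(X, hired)

print("coefficient on skill: %.2f" % model.coef_[0][0])
print("coefficient on 'women's club' proxy: %.2f" % model.coef_[0][1])
# The proxy coefficient comes out strongly negative: the algorithm has
# reproduced the historical bias without ever being told anyone's gender.
```

The point of the sketch is that removing the protected attribute from the inputs is not enough; as long as the training labels encode biased past decisions, any correlated feature lets the model reconstruct and perpetuate that bias.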