Given the rapid adoption of AI and the uncertainty surrounding it, it has become increasingly necessary for employers to take control of their tech stack and monitor their use of automated employment decision tools (AEDTs). For many, this will take the form of a recruitment technology audit. Undergoing an audit can be a daunting experience for any organization, especially for an HR team. But don't fear: when approached correctly, an audit can be an invaluable learning experience that identifies areas for improvement and ensures fairness in your organization's practices. In this blog post, we'll share some best practices for those considering where to start when conducting an AI audit.
Legislation such as NYC's Local Law 144 mandates that employers audit their AEDT recruitment technology at least once a year to ensure these tools are not introducing bias into hiring decisions. This is a new legal requirement for many employers, and there is little clear guidance on how to begin or which best practices to follow when attempting to comply. To effectively audit their recruitment technology vendors, employers need to be able to recognize what bias might look like within the technology and understand the right questions to ask.
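To make this concrete: Local Law 144's bias audits center on comparing selection rates across demographic categories via an impact ratio (each category's selection rate divided by the rate of the most-selected category). The sketch below is a minimal, illustrative calculation of that metric; the category names and counts are invented for the example, and the 0.8 threshold shown is the EEOC's "four-fifths" rule of thumb rather than a figure specified by the law itself.

```python
# Hypothetical sketch of an impact-ratio calculation in the style of a
# Local Law 144 bias audit. All names and numbers are illustrative.

def impact_ratios(selected, total):
    """Compute each category's impact ratio.

    selected: dict mapping category -> number of candidates advanced
    total:    dict mapping category -> number of candidates assessed
    Impact ratio = category selection rate / highest selection rate.
    """
    rates = {c: selected[c] / total[c] for c in total}
    best = max(rates.values())
    return {c: rates[c] / best for c in rates}

# Illustrative candidate counts (not real data)
applicants = {"group_a": 200, "group_b": 150}
advanced   = {"group_a": 60,  "group_b": 30}

ratios = impact_ratios(advanced, applicants)
for category, ratio in ratios.items():
    # 0.8 is the EEOC four-fifths rule of thumb, not an LL144 mandate
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{category}: impact ratio {ratio:.2f} ({flag})")
```

A real audit must be performed by an independent auditor and involves far more than this arithmetic, but understanding the metric helps an HR team interpret the results a vendor or auditor reports.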
We have identified best practices to help guide employers as they navigate this uncharted territory:
If a vendor cannot provide answers to questions like these, it could be a warning sign that their technology does not align with your organization's commitment to reducing bias and maintaining compliance.
Ultimately, fairness and transparency should always be the top priority. While efficiency is essential in the workplace, it should not come at the expense of fairness, and an AI audit can be an excellent opportunity for organizations to ensure that their practices are fair, transparent, and compliant.
With that in mind, PandoLogic has proactively audited our AI technology and reviewed our AI models' current and future applications to ensure, among other things, that we are helping to mitigate bias at the top of the recruitment funnel. We believe we have a responsibility to ensure that our use of AI is transparent, compliant, and consent-based. Download our explainability statement here.
Read the full report here.