New York City this week became the first city in the U.S. to pass legislation aimed at mitigating the risks of discrimination associated with the use of automated employment decision tools. The bill, I.1894, passed by the City Council in early November, would ban employers from using automated hiring tools unless a yearly bias audit shows they won't discriminate based on an applicant's race or gender. The bill is similar to legislation from Illinois and Maryland, which regulate the use of video interview tools, in particular facial recognition, but it is much broader in scope. I.1894 would also require makers of these AI tools to disclose more about their opaque workings and give candidates the option of an alternative process - such as a human reviewer - to evaluate their application.
But there's a fairly large loophole for vendors baked right into the bill.
The original version of I.1894 was introduced in February 2020. The Council's Committee on Technology held a hearing on the bill in November 2020, where a number of civil rights and public interest organizations raised concerns, particularly about the vagueness of the bill's audit requirements, the inadequacy of its notice provisions, and the absence of strong enforcement mechanisms. In the twelve months since, the bill's enforcement and reporting provisions have only been watered down further.
In fact, when it comes to enforcement - in what may be a cynical move on the part of the Council - it's up to the vendor to conduct and report the audits demonstrating its algorithms aren't biased. The vendor supplies its own bias audit to the prospective client, then must offer to perform ongoing audits.
This is the equivalent of Enron performing its own audits - which, come to think of it, is essentially what happened. And look how that ended.
Given the complexity and opacity of AI systems, it's impossible to know what requiring a "bias audit" would mean in practice. And as AI rapidly develops, it's not even clear whether audits would work for some types of software.
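To make that ambiguity concrete, here is a minimal sketch of one statistic an audit might compute: the adverse impact ratio behind the EEOC's "four-fifths rule." The numbers, group labels, and `selection_rate` helper are hypothetical illustrations, not anything I.1894 specifies, and auditing an opaque model would involve far more than this.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants the tool advanced to the next stage."""
    return selected / applicants

# Hypothetical screening outcomes for two demographic groups.
outcomes = {
    "group_a": {"applicants": 400, "selected": 120},  # rate 0.30
    "group_b": {"applicants": 300, "selected": 60},   # rate 0.20
}

rates = {
    group: selection_rate(o["selected"], o["applicants"])
    for group, o in outcomes.items()
}

# Compare each group's selection rate to the most-favored group's rate;
# a ratio below 0.8 is the classic "four-fifths" red flag.
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    status = "potential adverse impact" if ratio < 0.8 else "within four-fifths rule"
    print(f"{group}: rate={rate:.2f}, impact ratio={ratio:.2f} -> {status}")
```

Even this toy check surfaces the questions the bill leaves open: which groups to compare, at which stage of the hiring pipeline, and on whose data.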
Wired has raised similar doubts, as have others, including Julia Stoyanovich, director of the Center for Responsible AI at New York University.
The idea of an audit is excellent - but it can't be the foxes reassuring the farmers that all is well. Venture-backed HR/TA tech software firms are under tremendous investor pressure to make their numbers. Avoiding even the appearance of book-cooking is why independent financial audits are required of any publicly traded company. Without that level of reassurance, it will be hard to tell which AI is truly trending away from bias, and which is simply being wrapped to appear that way.
New York City legislators will need to consider adding third-party guard dogs to patrol the fences, and sniff out any bad behavior.