American democracy depends on everyone having equal access to work. But in reality, people of color, women, those with disabilities and other marginalized groups experience unemployment or underemployment at disproportionately high rates, especially amid the economic fallout of the Covid-19 pandemic. Now the use of artificial intelligence technology for hiring may exacerbate those problems and further bake bias into the hiring process.
At the moment, the New York City Council is debating a proposed new law that would regulate automated tools used to evaluate job candidates and employees. If done right, the law could make a real difference in the city and have wide influence nationally: In the absence of federal regulation, states and cities have used models from other localities to regulate emerging technologies.
Over the past few years, an increasing number of employers have started using artificial intelligence and other automated tools to speed up hiring, save money and screen job applicants without in-person interaction, features that have become all the more attractive during the pandemic. These technologies include screeners that scan résumés for key words, games that claim to assess attributes such as generosity and appetite for risk, and even emotion analyzers that claim to read facial and vocal cues to predict whether candidates will be engaged team players.
In most cases, vendors train these tools to analyze workers who are deemed successful by their employer and to measure whether job applicants have similar traits. This approach can worsen underrepresentation and social divides if, for example, Latino men or Black women are inadequately represented in the pool of employees. Similarly, a résumé-screening tool could identify Ivy League schools on successful employees’ résumés and then downgrade résumés from historically Black or women’s colleges.
In its current form, the council’s bill would require vendors that sell automated assessment tools to audit them for bias and discrimination, checking whether, for example, a tool selects male candidates at a higher rate than female candidates. It would also require vendors to tell job applicants the characteristics the test claims to measure. This approach could be helpful: It would shed light on how job applicants are screened and force vendors to think critically about potential discriminatory effects. But for the law to have teeth, we recommend several important additional protections.
The measure must require companies to publicly disclose what they find when they audit their tech for bias. Despite pressure to limit its scope, the City Council must ensure that the bill would address discrimination in all forms — on the basis of not only race or gender but also disability, sexual orientation and other protected characteristics.
These audits should consider the circumstances of people who are multiply marginalized — for example, Black women, who may be discriminated against because they are both Black and women. Bias audits conducted by companies typically don’t do this.
The bill should also require validity testing, to ensure that the tools actually measure what they claim to, and it must make certain that they measure characteristics that are relevant for the job. Such testing would interrogate whether, for example, candidates’ efforts to blow up a balloon in an online game really indicate their appetite for risk in the real world, and whether risk-taking is necessary for the job. Mandatory validity testing would also weed out bad actors whose hiring tools do arbitrary things like assess job applicants’ personalities differently based on subtle changes in the background of their video interviews.
In addition, the City Council must require vendors to tell candidates how they will be screened by an automated tool before the screening, so candidates know what to expect. People who are blind, for example, may not suspect that their video interview could score poorly if they fail to make eye contact with the camera. If they know what is being tested, they can engage with the employer to seek a fairer test. The proposed legislation currently before the City Council would require companies to alert candidates within 30 days if they have been evaluated using A.I., but only after they have taken the test.
Finally, the bill must cover not only the sale of automated hiring tools in New York City but also their use. Without that stipulation, hiring-tool vendors could escape the obligations of this bill by simply locating sales outside the city. The council should close this loophole.
With this bill, the city has the chance to combat new forms of employment discrimination and get closer to the ideal of what America stands for: making access to opportunity more equitable for all. Unemployed New Yorkers are watching.
Alexandra Reeve Givens is the chief executive of the Center for Democracy & Technology. Hilke Schellmann is a reporter investigating artificial intelligence and an assistant professor of journalism at New York University. Julia Stoyanovich is an assistant professor of computer science and engineering and of data science and is the director of the Center for Responsible AI at New York University.