Bias and discrimination have been part of the hiring process for as long as hiring processes have existed. Some employers have recently identified a potential solution: hiring done not by people, but by AI (artificial intelligence). It’s the perfect answer, right? Who could be more objective and unbiased than a computer? As the federal Equal Employment Opportunity Commission and the Justice Department recently warned employers, using AI in hiring is not a foolproof solution, and employers should proceed with caution lest they violate disability discrimination laws. Whether you were rejected by Mr. Smith, Ms. Jones, or Watson the Computer, if you think your disability played a role in that rejection, you need to get in touch with a knowledgeable New York City disability discrimination lawyer.
The guidance document, which came out earlier this month, was the federal response to employers who have begun using software that deploys algorithms and AI in parts of the employee selection process. Algorithms and AI might be employed, for example, in administering online tests required of applicants, scoring applicants’ resumes, and deciding whether a particular applicant has met the job’s required qualifications.
This all sounds pretty straightforward, so how could it be discriminatory? There actually are many different ways. The guidance document cited the example of an employer using “facial and voice analysis technologies” to evaluate applicants. While seemingly innocuous on the surface, this part of the process could have the effect of rejecting a person with a speech impairment, or a person with autism (whose eye contact and facial expressions might differ from those of non-disabled, neurotypical candidates), even though those applicants with disabilities actually were qualified for the job.
Another way these systems can run afoul of the law is when they fail to do precisely what they’re supposed to do. Any AI-driven element of the hiring process should screen for (and measure) only the skills and traits required for the job. When it doesn’t, problems — and legal violations — can arise. For example, an automated “personality assessment” might improperly cull a neurodivergent candidate who actually was qualified and should not have been eliminated. An interactive game that inappropriately rejects an applicant because of their manual dexterity, sensory issues, or speaking abilities (when those were not criteria the process was supposed to be assessing) could likewise violate the law against disability discrimination.
New York Already Has Strong Job Applicant Protection in This Area
Here in New York City, our leaders took action last year. Last November, the New York City Council voted overwhelmingly in favor of a bill that substantially cracks down on the use of AI in the hiring process. The new city law, which Bloomberg described as one of the toughest in the country, places obligations on all employers that use AI in hiring. It mandates that employers disclose to applicants the fact that they used AI, and also inform applicants how they employed the technology. The law also requires employers to offer applicants the option to opt out of the AI process and, instead, to have a person perform the part of the assessment that AI otherwise would have done.
Disability isn’t the only area where these systems can be discriminatory. Ultimately, AI and algorithms are only as skillful as the people who program them. Errors by programmers, however unintentional, can have incredibly nefarious and damaging consequences. Last decade, Amazon tried to automate parts of the hiring process. Programmers “trained” the models “to vet applicants by observing patterns in resumes submitted to the company over a 10-year period.”
Because the tech industry was heavily dominated by men during those years, the models developed a preference for men, downgrading resumes that contained the word “women” or “women’s” (as in “Women’s Soccer Team” or “Society of Women Engineers”). The system also penalized certain candidates who had graduated from women’s colleges.
Around the same time Amazon scrapped that project, researchers were publishing studies showing that some popular facial recognition programs (which certain employers used to assess applicants’ emotional responses) assigned more negative emotions to Black men than to white applicants.
The ways in which employers can violate the laws that prohibit employment discrimination are numerous and varied. Some are callous and calculated. Others are completely devoid of malicious intent, so much so that even a computer can be guilty of them. Intentional or inadvertent, both are against the law, and if you’ve been hurt by one, you need a strong legal team on your side. The knowledgeable New York disability discrimination attorneys at Phillips & Associates are here to help… and to get you the relief to which the law says you’re entitled. To find out more, contact us online or at (866) 578-0614 to set up a free and confidential consultation today.