Feds Warn Employers Against Discriminatory Hiring Algorithms

As AI invades the interview process, the DOJ and EEOC have provided guidance to protect people with disabilities from bias.

As companies increasingly involve AI in their hiring processes, advocates, lawyers, and researchers have continued to sound the alarm. Algorithms have been found to automatically assign job candidates different scores based on arbitrary criteria like whether they wear glasses or a headscarf or have a bookshelf in the background. Hiring algorithms can penalize applicants for having a Black-sounding name, mentioning a women’s college, and even submitting their résumé using certain file types. They can disadvantage people who stutter or have a physical disability that limits their ability to interact with a keyboard.

All of this has gone largely unchecked. But now, the US Department of Justice and the Equal Employment Opportunity Commission have offered guidance on what businesses and government agencies must do to ensure their use of AI in hiring complies with the Americans with Disabilities Act.

“We cannot let these tools become a high-tech pathway to discrimination,” said EEOC chair Charlotte Burrows in a briefing with reporters on Thursday. The EEOC instructs employers to disclose to applicants not only when algorithmic tools are being used to evaluate them but also what traits those algorithms assess.

“Today we are sounding an alarm regarding the dangers tied to blind reliance on AI and other technologies that we are seeing increasingly used by employers,” assistant attorney general for civil rights Kristen Clarke told reporters in the same press conference. “Today we are making clear that we must do more to eliminate the barriers faced by people with disabilities, and no doubt: The use of AI is compounding the long-standing discrimination that job seekers with disabilities face.”

The Federal Trade Commission gave broad guidance on how businesses can use algorithms in 2020 and again in 2021, and a White House agency is working on an AI Bill of Rights, but this new guidance signals how the two agencies will handle violations of federal civil rights law involving the use of algorithms. It also carries the credible threat of enforcement: The Department of Justice can bring lawsuits against businesses, and the EEOC receives discrimination complaints from job seekers and employees that can result in fines or lawsuits.

People with disabilities are unemployed at a rate double the national average, according to US Bureau of Labor Statistics data. People with mental-health-related disabilities also experience high levels of unemployment, and Burrows says employers must take steps to screen the software they use to ensure it doesn’t lock people with disabilities out of the job market.

A number of actions endorsed by the EEOC and DOJ on Thursday were previously suggested in a 2020 Center for Democracy & Technology report on the ways hiring software can discriminate against people with disabilities. They include ending the automatic screening out of people with disabilities and providing “reasonable accommodation” for people who may otherwise have difficulty engaging with the software or hardware involved in the hiring process. The CDT report also urges audits of hiring algorithms before and after they’re put into use, a step the EEOC did not include, and refers to bias against hiring people with disabilities online as an “invisible injustice.”

As a teenager, Lydia X.Z. Brown thought filling out the personality tests that came with job applications was a fun, if weird, game. They can’t prove it, but they now suspect they encountered discrimination when applying for jobs at the mall near where they grew up in Massachusetts. A coauthor of the 2020 CDT hiring discrimination report, Brown called Thursday’s guidance a big win after years of advocacy for people with disabilities like them.

“It was really exciting to see that come out, and I’m hoping that this will lead to robust enforcement action too,” they say. They add that they hope future guidance acknowledges the role intersectionality plays: people with disabilities from different class, gender, or race backgrounds can experience discrimination differently. Similar criticism was made of a New York City law that requires hiring algorithms to be screened for race and gender bias.

The greatest benefit of these documents, said Ben Winters of the Electronic Privacy Information Center, is that they tell companies the DOJ and EEOC are paying attention, and they articulate the sort of responsibility companies have, including liability for discrimination wrought by the software of a third-party vendor.

“It puts employers on notice that the agencies are expecting them to have a higher standard for the vendors they use,” Winters says.

The joint action by the Department of Justice and the EEOC is the first of its kind by the two agencies charged with protecting the public, and it could signal a wider willingness to prosecute instances of discrimination wrought by automation. Despite recent signs of increased regulatory activity, the US Congress has yet to pass a law requiring testing of, or restricting the use of, artificial intelligence systems that make critical decisions about people’s lives in areas like hiring, education, financial lending, and health care.