To save time and in an attempt to eliminate human bias, many companies now entrust at least part of their hiring processes to outside firms whose machine-learning algorithms weed out applicants. But with little known about how these algorithms work, they, too, may perpetuate bias. New research from a Cornell University Computing and Information Science team found that the companies behind this emerging technology prefer obscurity to transparency.
“Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices,” by Manish Raghavan, Solon Barocas, Jon Kleinberg and Karen Levy, found that tech companies have been able to define, and therefore address, algorithmic bias subjectively. For starters, terms like “bias” and “fairness,” as they relate to these algorithms, have not been universally defined. Therefore, tech companies can be vague about how they handle these issues.
As part of the study, the researchers examined 19 companies that create algorithmic pre-employment screenings, which typically include video interviews, questions and games. To find out how these algorithms work, they scoured the companies’ websites for webinars, pages and other documents that lay out the practices and logistics surrounding them.
They found that very few companies share any information about what they specifically do to prevent employment bias. Even those whose sites do mention “bias” and “fairness” fail to explain exactly how they mitigate the one or achieve the other.
Raghavan told the Cornell Chronicle that these claims are about as opaque as the term “free-range” on animal products marketed as ethically sourced. Such products need only meet minimum standards to be labeled “free-range,” which may not match the commonly pictured image of animals grazing happily on acres of grass.
“In the same way, calling an algorithm ‘fair’ appeals to our intuitive understanding of the term while only accomplishing a much narrower result than we might hope for,” he told the Chronicle.
The study also notes that under Title VII, employers bear legal responsibility for the outcomes of their hiring practices. An employer can therefore be held liable for the effects of an algorithm it uses, regardless of what the tech vendor claims the algorithm does.
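One standard Title VII heuristic for the kind of outcome an employer could be held liable for is the EEOC’s “four-fifths rule” for adverse impact. The rule itself is not named in this article, and the numbers below are invented for illustration, but a minimal sketch of the check looks like this:

```python
# Sketch of the EEOC "four-fifths rule" adverse-impact check: if one
# group's selection rate falls below 80% of the highest group's rate,
# the screening may have a legally relevant disparate impact.
# All figures here are hypothetical.

def selection_rate(selected, applicants):
    """Fraction of applicants who passed the screen."""
    return selected / applicants

def adverse_impact_ratio(rate_group, rate_reference):
    """Ratio of a group's selection rate to the highest group's rate."""
    return rate_group / rate_reference

# Hypothetical screening outcomes for two applicant groups.
rates = {
    "group_a": selection_rate(50, 100),  # 0.50
    "group_b": selection_rate(30, 100),  # 0.30
}

highest = max(rates.values())
for group, rate in rates.items():
    ratio = adverse_impact_ratio(rate, highest)
    # A ratio below 0.8 flags potential adverse impact that the
    # employer, not the vendor, may have to justify.
    print(group, round(ratio, 2), "flagged" if ratio < 0.8 else "ok")
```

The point of the sketch is where the liability sits: the check runs on the employer’s outcomes, so a vendor’s claim that its algorithm is “fair” does not settle the question.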
These algorithms rest on the research-backed premise that certain traits correlate with desirable outcomes. Machine learning discovers relationships between traits and outcomes, but exactly how those connections are made is sometimes opaque.
The study asks: “When the expert is unable to explain why, for example, the cadence of a candidate’s voice is indicative of higher job performance, or why reaction time predicts employee retention, should a vendor rely on these features?”
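The correlational logic the study questions can be sketched in a few lines: a model finds that a feature predicts an outcome, with no causal story attached. The data below is invented for illustration, using the study’s own example of reaction time:

```python
# Minimal sketch of trait-outcome correlation: a vendor's model can
# "learn" that a feature predicts job performance without anyone being
# able to explain why. All data here is hypothetical.

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

# Hypothetical features: reaction time (ms) vs. a performance score.
reaction_ms = [220, 250, 270, 300, 330, 360]
performance = [9.1, 8.7, 8.2, 7.6, 7.0, 6.5]

r = pearson_r(reaction_ms, performance)
# A correlation this strong makes the feature attractive to a model,
# which is exactly the situation the study asks vendors to justify.
print(round(r, 3))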
Vendors also often outsource facial-recognition analysis to third-party companies, and recent reporting on racial bias in that technology makes its use fraught. The study adds that AI-based emotion-recognition technology could also disadvantage people with disabilities.
The researchers told the Chronicle that they maintain algorithms still have the potential to prevent human bias. The question, they said, is whether, and how, they can be made perfect.
The Cornell researchers will present their findings in January at the Association for Computing Machinery Conference on Fairness, Accountability and Transparency in Barcelona.