Artificial intelligence (AI) has been mainstreamed and is here to stay. It will continue to be embedded into our everyday lives, presented as a convenience that makes daily life more accessible and efficient. Scan your newsfeed, turn on the news, or listen to your favorite podcast, and you are bound to hear something about this technology, which can be life-changing in both positive and negative ways. Think again if you believe you have not experienced some form of AI in your daily life. Your company’s spam filter, the voice-to-text feature on your cell phone, smart personal assistants such as Siri, Cortana, and Google Now, and chatbots on your favorite e-commerce sites would beg to differ! Have you ever run a Google search for a specific item and suddenly found pop-up ads for similar products all over your screen? You can thank AI!
Our technology often outpaces our ability to develop the ethics and guidance its use requires. The power of AI has tremendous potential for positive impact on society. Still, as we have seen in the past, in the wrong hands and without appropriate safeguards, the potential for harm is equally great. Think of the impact hackers have had on the world: holding computer systems hostage, threatening power grids, stealing identities, and causing a myriad of other disruptions to our daily lives.
In May 2022, the Equal Employment Opportunity Commission (EEOC) issued a technical assistance bulletin titled "The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees." The guidance reflects the EEOC’s concern that AI, in the form of resume scanners, chatbots, and video interviews, may introduce bias into the hiring process. For example, a video interview that analyzes an applicant’s speech patterns may score a person with a speech disability lower on the scale and automatically screen them out of the hiring process. Candidates who required time off for treatment related to a disability, or who took time off for a new baby or an adoption, may be rejected because of a gap in their resume. Another example offered by the EEOC cites a personality test that asks about an applicant’s level of optimism; this type of screening tool could inadvertently screen out qualified candidates with depression. The framing of the guidance makes clear that even if a screening tool’s "group" hiring statistics make an algorithm seem fair, discrimination against any individual with a disability violates the Americans with Disabilities Act.
Employers using algorithmic hiring tools, and the vendors who supply them, should confirm that those tools are free from biases that could result in discrimination. A thorough discussion of the data validation methods vendors use is necessary to ensure your hiring practices pass muster should the EEOC decide a review is in order. Developing a stringent process to guard against bias will likely become standard practice as we move forward in this brave new world. Establishing your practices early and continually improving them will serve you well into the future!
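To make the group-statistics point above concrete, here is a minimal sketch of one common aggregate check, the "four-fifths rule" comparison of selection rates. The function names and the numbers are illustrative, not drawn from any particular vendor's validation suite, and, as the EEOC guidance stresses, a passing ratio on a check like this does not by itself rule out discrimination against an individual with a disability.

```python
def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / total

def adverse_impact_ratio(protected: tuple, reference: tuple) -> float:
    """Ratio of the protected group's selection rate to the reference
    group's rate. A ratio below 0.8 (the "four-fifths rule") is a
    common red flag for disparate impact at the group level."""
    return selection_rate(*protected) / selection_rate(*reference)

# Hypothetical screening results as (selected, total applicants):
# 30 of 100 protected-group applicants passed the screen,
# versus 50 of 100 in the reference group.
ratio = adverse_impact_ratio(protected=(30, 100), reference=(50, 100))
print(f"{ratio:.2f}")  # 0.60 -- below the 0.8 threshold, worth investigating
```

A check like this is easy to fold into a periodic audit of screening-tool output, but it only surfaces group-level patterns; the guidance's individual-accommodation concerns still require human review of how the tool scores specific disabilities.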