Key Principles From EEOC’s Latest Guidance on Employers’ Use of AI Tools
Recently, the US Equal Employment Opportunity Commission (EEOC) made clear that it intends to make discrimination caused by artificial intelligence (AI) tools an enforcement priority over the next four years. This enforcement priority follows the EEOC’s 2021 announcement of its Artificial Intelligence and Algorithmic Fairness Initiative, which launched an agencywide effort to ensure AI tools and other emerging technologies used in making employment decisions comply with the federal civil rights laws that the EEOC enforces.
In keeping with the agency’s new focus on this issue, on May 18, 2023, the EEOC issued a “technical assistance document” that provides employers using AI in employment decisions with guidance on compliance considerations under Title VII. This recent guidance follows the agency’s technical assistance document on AI and the Americans with Disabilities Act issued last May. While neither guidance document has the force of law, they represent warnings from an agency that has focused on employers’ use of automated systems.
Although Title VII applies to all employment practices, the scope of the recent guidance is limited to employers’ use of “algorithmic decision-making tools” (i.e., types of software or applications that incorporate a set of instructions intended to accomplish a defined end goal) in selection procedures – including hiring, promotion and firing – and the potential for such use to have adverse or disparate impacts on the basis of the race, color, religion, sex or national origin of persons. As the guidance makes clear, employers and software vendors using AI to help develop or implement algorithmic decision-making tools need to analyze these tools to confirm their use is not adversely impacting any group protected under Title VII. Below are key principles for employers.
1. A wide variety of AI tools are subject to EEOC scrutiny
The agency identified various examples of algorithmic decision-making tools that may incorporate AI (i.e., AI tools) and result in disparate impacts that trigger Title VII violations. These AI tools – which implement algorithmic decision-making at different stages of the employment process – include:
- Résumé scanners that prioritize certain keywords.
- Employee-monitoring software that rates employees on the basis of keystrokes or other factors.
- Virtual assistants or chatbots.
- Video interviewing software.
- Testing software that provides “job fit” scores for applicants regarding their personalities, aptitudes, cognitive skills or perceived cultural fit.
2. Selection procedures using AI tools must be job-related and consistent with business necessity
While the focus of the guidance is on AI tools, the agency recognizes that the algorithmic decision-making tools discussed may not actually rely on AI to accomplish the defined goal. As such, the guidance is directed at all algorithmic decision-making tools – not just those that employ AI. The agency makes clear that employers’ use of algorithmic decision-making tools can constitute selection procedures subject to the EEOC’s 1978 Uniform Guidelines on Employee Selection Procedures (which provides guidance for employers in determining whether their tests and selection procedures are Title VII-compliant) when the tools are used to make or inform decisions about whether to hire, promote, terminate or take similar actions.
Thus, the guidance confirms the agency’s expectation that employers assess whether their selection procedures incorporating the use of such tools have an adverse impact on a particular group and, if the use is not job-related and consistent with business necessity, take appropriate remedial measures. Further, the guidance indicates that even if employers can demonstrate that a selection procedure is job-related and consistent with business necessity, they should still assess whether less discriminatory alternatives are available that would be “comparably as effective” but would not disproportionately exclude individuals of protected classes.
3. The ‘four-fifths rule’ is a general rule of thumb but is not dispositive in all circumstances
The EEOC also noted that the “four-fifths rule” – a rule that states a selection rate for one group is “substantially” different from the selection rate of another group if the ratio between the two rates is less than four-fifths (or 80%) – can continue to be used as a “general rule of thumb” to assess whether a selection process could have a disparate impact. However, the agency clarified that the rule may not be appropriate in all circumstances. For example, relying on it may be inappropriate where an employer’s actions have disproportionately discouraged individuals from applying on the basis of a protected characteristic, or where it is not a reasonable substitute for a test of statistical significance. To that end, the agency recommends that employers ask software vendors whether they relied on the four-fifths rule or on a standard such as statistical significance, where applicable.
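To illustrate the arithmetic behind the rule of thumb, the sketch below compares the selection rates of two applicant groups and flags a ratio below 80%. The function name and the applicant numbers are hypothetical, chosen only for illustration; this is a simplified check, not a substitute for the statistical-significance analysis the guidance also discusses.

```python
def four_fifths_check(selected_a, total_a, selected_b, total_b):
    """Compare the selection rates of two groups under the
    four-fifths (80%) rule of thumb. Returns the ratio of the
    lower rate to the higher rate, and whether it falls below 0.8."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    lower, higher = sorted([rate_a, rate_b])
    ratio = lower / higher
    return ratio, ratio < 0.8  # True means "substantially" different

# Hypothetical numbers: 48 of 80 applicants selected in one group (60%),
# 12 of 40 in another (30%). Ratio = 0.30 / 0.60 = 0.50, below 0.80.
ratio, flagged = four_fifths_check(48, 80, 12, 40)
print(round(ratio, 2), flagged)  # 0.5 True
```

As the guidance cautions, a ratio above 0.8 does not by itself establish compliance, and a ratio below it is not conclusive proof of disparate impact; the rule is a screening heuristic.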
4. Employers are responsible for use of algorithmic decision-making tools
The agency’s guidance makes clear that employers cannot rely on the representations of outside vendors or developers of AI tools regarding any disparate impact assessments, as employers can be held responsible for the actions of vendors who act on their behalf. The guidance notes, “if the vendor is incorrect about its own assessment and the tool does result in either disparate impact discrimination or disparate treatment discrimination, the employer could still be liable.” The agency recommends that employers ask vendors if steps have been taken to evaluate whether the tool’s use causes a substantially lower selection rate for individuals with a protected characteristic.
5. Employers should proactively conduct self-analyses of AI tools for discrimination issues
The guidance concludes with the recommendation that employers conduct ongoing self-analyses of their AI tools for potential discrimination issues. The EEOC recommends that employers who discover that a tool has an adverse impact take steps to reduce the impact, or select a different tool, to avoid potential Title VII violations.
The EEOC’s latest guidance follows the agency’s recent pledge with other federal agencies against discrimination and bias in automated systems, as noted in this May 2023 Cooley alert on New York City and automated employment decision tools. Along with this increasing federal focus, employers using AI in employment processes should be on the lookout for developments regulating the use of these tools at the state and local levels, including in New York City. Employers conducting or considering self-audits of their AI tools should consult with their Cooley employment attorney.
This content is provided for general informational purposes only, and your access or use of the content does not create an attorney-client relationship between you or your organization and Cooley LLP, Cooley (UK) LLP, or any other affiliated practice or entity (collectively referred to as “Cooley”). By accessing this content, you agree that the information provided does not constitute legal or other professional advice. This content is not a substitute for obtaining legal advice from a qualified attorney licensed in your jurisdiction and you should not act or refrain from acting based on this content. This content may be changed without notice. It is not guaranteed to be complete, correct or up to date, and it may not reflect the most current legal developments. Prior results do not guarantee a similar outcome. Do not send any confidential information to Cooley, as we do not have any duty to keep any information you provide to us confidential. This content may be considered Attorney Advertising and is subject to our legal notices.