AI in the Workplace: US Legal Developments
Recent federal and state developments, along with new litigation, involving the use of artificial intelligence (AI) in the workplace highlight the growing tension between fostering innovation and safeguarding against discrimination and other harms arising from AI tools at work. This alert outlines key developments from this year and offers compliance tips for employers.
Federal developments
Guidance withdrawn, moratorium falters
In a February 2025 Cooley alert, we reported that many federal agencies, including the Equal Employment Opportunity Commission (EEOC) and Department of Labor (DOL), removed key guidance documents regarding the use of AI in the workplace. Since then, Congress sought to further the current administration's "pro-innovation" stance by attempting to pass a moratorium on state and local AI laws and regulations. The moratorium, part of the budget bill, HR 1 (the One Big Beautiful Bill Act), would have withheld federal broadband funding from states that regulated AI. However, the provision was stripped before HR 1 became law, leaving the patchwork of state AI laws and regulations fully enforceable.
White House AI Action Plan
In July 2025, the White House released its AI Action Plan, a policy roadmap anchored in Executive Order 14179 that rescinds prior directives and establishes a new federal framework for AI. The plan shifts focus from “ethical deployment” toward advancing US AI leadership. Like the Biden administration’s AI Blueprint, its recommendations are nonbinding and will require further agency or congressional action.
Key provisions include:
- Workforce training: Expansion of AI literacy across education, apprenticeships and retraining, with the Department of the Treasury considering tax-free treatment of employer-sponsored AI training. The DOL may use discretionary funds for displaced workers and launch retraining pilots.
- Labor market monitoring: The Bureau of Labor Statistics, Census Bureau and Bureau of Economic Analysis must analyze AI’s impact on jobs and wages, supported by a new DOL AI Workforce Research Hub.
- Regulatory and funding levers: Federal agencies may weigh a state’s AI regulatory climate when awarding funds. The plan directs a review of Federal Trade Commission (FTC) enforcement to avoid hindering innovation and updates procurement rules requiring contracted AI systems to be free of ideological bias.
Given the breadth and potential impact of these proposals, employers should begin assessing how the AI Action Plan may affect workforce strategy, compliance obligations and vendor relationships.
State developments
California finalizes AI regulations
The California Civil Rights Council, the rulemaking body of the state's Civil Rights Department (CRD), finalized its employment regulations regarding automated-decision systems (ADS), effective October 1, 2025. The rules confirm that using an ADS – defined broadly to include any computational process that makes decisions or facilitates human decision-making using AI, machine learning, algorithms or other data-driven tools – can violate state anti-discrimination laws if it negatively impacts employment benefits for applicants or employees based on protected traits. The CRD warned in a press release that ADS may reinforce existing biases, citing examples like hiring tools that replicate male-dominated workforce patterns or job ad delivery systems that target recruiting efforts for roles based on race or gender stereotypes.
Key provisions of the regulations include:
- ADS-related discrimination prohibited. The regulations confirm that using an ADS or selection criteria that discriminate against an applicant or employee is prohibited, unless the criteria are job-related and consistent with business necessity, and there is no less discriminatory standard, test or other selection criterion that would serve the employer's goals as effectively.
- Proactive testing. Anti-bias testing – including the "quality, efficacy, recency, and scope of such effort, the results of such testing or other effort, and the response to the results" – and any other "proactive efforts" to avoid unlawful discrimination may be relevant to defending a claim of discrimination. (For a simple illustration of one common testing metric, see the sketch following this list.)
- Recordkeeping. Employers must preserve ADS-related records for at least four years from the date the record was created or the personnel action involving the ADS occurred. These records include selection criteria, ADS data, and other records created or received by the employer that deal with any employment practice and affect any employment benefit of any applicant or employee.
- Third-party liability. In a nod to the Mobley v. Workday case (which, as discussed in this July 2024 Cooley alert and below, found that an AI vendor could be liable for discrimination as an "agent" of an employer), the regulations explicitly extend liability to third-party software providers and vendors by defining an employer's "agent" as itself an "employer" for purposes of the regulations. Liability attaches if the agent, on behalf of an employer, exercises a function traditionally exercised by the employer, such as applicant recruitment, applicant screening, hiring, promotion or making decisions regarding pay, benefits or leave.
- Accommodations may be required. Using ADS, such as those that "analyze[] an applicant's tone of voice, facial expressions or other physical characteristics or behavior," may discriminate against individuals based on race, national origin, gender, disability or other characteristics. The regulations state that employers may be required to provide reasonable accommodations or alternative assessments in these cases.
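The regulations do not prescribe a testing methodology, but one widely used starting point is the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures (29 CFR 1607.4(D)), under which a group's selection rate below 80% of the highest group's rate is generally regarded as evidence of adverse impact. The minimal sketch below computes that ratio; the group labels and counts are hypothetical, and real-world testing typically layers statistical significance analysis on top of this metric.

```python
# Minimal sketch: selection rates and the adverse impact ratio under the
# EEOC "four-fifths rule" (29 CFR 1607.4(D)). Group labels and counts are
# hypothetical illustrations, not real data or a compliance tool.

def adverse_impact_ratios(applied: dict[str, int],
                          selected: dict[str, int]) -> dict[str, float]:
    """Return each group's selection rate divided by the highest group's rate."""
    rates = {group: selected[group] / applied[group] for group in applied}
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items()}

applied = {"group_a": 200, "group_b": 150}   # hypothetical applicant counts
selected = {"group_a": 60, "group_b": 27}    # hypothetical selections

for group, ratio in adverse_impact_ratios(applied, selected).items():
    status = "below 0.80 - potential adverse impact" if ratio < 0.80 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({status})")
```

In this hypothetical, group_b's selection rate is only 60% of group_a's, which would ordinarily prompt further investigation and, per the regulations, documentation of the response to those results.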
Texas enacts pared-down AI governance act
Texas Gov. Greg Abbott signed HB 149, the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), which takes effect on January 1, 2026. Unlike many other state AI laws and prior versions of TRAIGA, the enacted law imposes minimal requirements on private employers and focuses primarily on government agencies. TRAIGA broadly defines AI systems as machine-based systems that generate outputs – such as decisions or predictions – from input data.
Key provisions include:
- Intent-based discrimination standard: AI systems may not be used with the intent to unlawfully discriminate, but consistent with recent federal efforts to curb disparate impact theory, TRAIGA explicitly rejects disparate impact as a stand-alone basis for liability.
- Sandbox program: In a unique provision, developers can test innovative AI systems under a state-run sandbox with temporary legal protections, subject to approval by the Texas Department of Information Resources.
- Attorney general enforcement: The state attorney general has exclusive enforcement authority, with no private right of action. Companies receive notice and a cure period before penalties apply. Defenses include substantial compliance with National Institute of Standards and Technology (NIST) risk frameworks. Penalties range from $10,000 to $200,000 per violation, plus daily fines for ongoing issues.
Colorado’s AI law effective date delayed
Last year, Colorado passed SB 205, a comprehensive AI law originally set to take effect on February 1, 2026. As discussed in this June 2024 Cooley blog post, the law places substantial responsibilities on AI developers and deployers, including requirements for reasonable care to avoid algorithmic discrimination, a risk management policy and program, notice and disclosure obligations, and impact assessments. In a special legislative session convened in response to concerns raised by Gov. Jared Polis, the state recently enacted SB25B-004, which delayed the law's effective date to June 30, 2026. The delay gives the legislature more time to address business and technology concerns when it reconvenes in January 2026.
Virginia’s proposed AI law vetoed
Virginia Gov. Glenn Youngkin vetoed HB 2094, the High-Risk Artificial Intelligence Developer and Deployer Act, which would have enacted a new regulatory framework for employers that develop or use a “high-risk” AI system. In his veto message, Youngkin expressed concerns that the bill would risk stifling the AI industry, harm the creation of new jobs and place an “especially onerous [compliance] burden” on smaller companies.
Litigation developments
Mobley v. Workday age discrimination claim advances as collective action
In May 2025, the US District Court for the Northern District of California granted preliminary certification of a nationwide collective action under the Age Discrimination in Employment Act (ADEA) on behalf of applicants age 40 or older. The certified collective includes individuals who, since September 2020, applied for jobs through Workday's platform and were denied employment recommendations.
The court found that Mobley plausibly alleged a unified policy: Workday’s use of AI-based applicant screening tools to score, sort, rank or screen candidates. Central to the dispute is whether these tools produce a disparate impact on older applicants – a question the court deemed susceptible to common proof across the collective. Workday’s objections regarding the size of the collective and logistical hurdles in identifying members were rejected, with the court noting that widespread alleged discrimination is not a basis to deny notice.
As we previously reported in July 2024, this closely watched case was brought by Derek Mobley, a Black applicant over the age of 40. Along with other plaintiffs, Mobley alleged that Workday’s AI screening tools incorporated biased training data and reflected employer preferences, resulting in systemic rejection of older candidates. The court emphasized that although employers may enable or disable specific AI features, the certified collective includes only applicants whose submissions were processed by Workday’s AI recommendation system.
Workday argued that it does not “recommend” applicants, and that its tools merely reflect employer input. The court rejected this, citing Workday’s own marketing materials and discovery responses indicating that its systems generate AI-driven job recommendations. The court also dismissed Workday’s claim that individual differences among applicants (e.g., qualifications, rejection rates) preclude collective treatment, noting that such issues are better addressed at later stages of litigation.
Americans with Disabilities Act (ADA) allegations against Amazon
According to news reports, a group of disabled workers at Amazon recently accused the company of "systemic discrimination" for using AI systems to automatically or semi-automatically deny requests for disability accommodations. The allegations come as Amazon and many other companies implement return-to-office mandates, including for employees who worked in hybrid or remote capacities during and after the COVID-19 pandemic. The workers accused Amazon of improperly imposing its return-to-office policy on disabled workers who previously had been permitted to work from home based on medical recommendations and accommodation procedures.
Harper v. Sirius XM Radio AI discrimination class action
On August 4, 2025, plaintiff Arshon Harper filed a class action lawsuit in the US District Court for the Eastern District of Michigan against Sirius XM Radio, alleging systemic race discrimination under Title VII of the Civil Rights Act in hiring conducted through AI-powered applicant screening tools provided by a third-party vendor. Harper, a Black IT professional with more than a decade of experience, claims he was rejected for 149 of the 150 job applications he submitted to Sirius XM since November 2023, despite meeting or exceeding the qualifications for roles such as IT desktop support and software engineer.
The complaint asserts that Sirius XM’s use of algorithmic decision-making tools – including candidate-matching and shortlisting features – relied on data points that serve as proxies for race (e.g., zip codes, educational institutions), resulting in intentional discrimination (disparate treatment) and disproportionate exclusion of Black applicants (disparate impact). The lawsuit seeks certification of a class of Black applicants rejected via Sirius XM’s vendor applicant platform since January 27, 2024.
This case joins a growing wave of litigation challenging the use of AI in employment decisions, and underscores that the legal risks associated with vendor-provided screening technologies are shared by both developers and deployers.
Next steps: Aligning innovation with compliance
AI is reshaping the workplace, offering faster hiring, lower costs and improved productivity. Generative tools can streamline routine tasks, freeing employees for higher-value and more complex work, while advanced analytics can support more informed workforce decisions. To capture these benefits responsibly, employers and developers should consider approaches that balance innovation with compliance. While there is no single model that fits all, several practices are emerging as useful starting points – subject to adaptation as laws and guidance continue to evolve.
Emerging practices to explore
Evaluate compliance with new and evolving laws.
Employers, including those in jurisdictions like California and Texas, are increasingly mapping where AI tools are used across recruiting, hiring, accommodations and performance management, with an eye toward reviewing safeguards for detecting and addressing potential unintended bias. Developers can play a role by providing documentation, bias-testing results and configuration options to support compliance. While specific methods may differ, maintaining visibility into how AI is deployed is emerging as a practical baseline for responsible use.
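One practical way to maintain that visibility is a structured inventory of AI touchpoints. The sketch below is a minimal illustration of that idea, assuming hypothetical tool names, vendors and fields; it simply flags the entries a compliance review would likely prioritize.

```python
# Minimal sketch of an AI-use inventory for compliance mapping. All tool
# names, vendors and field choices are hypothetical assumptions.

from __future__ import annotations

from dataclasses import dataclass

@dataclass
class AIToolEntry:
    tool: str
    vendor: str
    use_cases: list[str]                # e.g., recruiting, hiring, accommodations
    decision_role: str                  # "advisory" or "determinative"
    last_bias_test: str | None = None   # date of most recent anti-bias testing
    human_review: bool = True           # whether outputs receive human sign-off

inventory = [
    AIToolEntry("resume-screener", "VendorA", ["recruiting", "hiring"],
                decision_role="advisory", last_bias_test="2025-06-01"),
    AIToolEntry("shift-allocator", "VendorB", ["performance management"],
                decision_role="determinative"),
]

# Flag the entries a compliance review would likely prioritize:
# determinative tools lacking recorded bias testing or human review.
for entry in inventory:
    if entry.decision_role == "determinative" and (
            entry.last_bias_test is None or not entry.human_review):
        print(f"review needed: {entry.tool} ({entry.vendor})")
```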
Preserve meaningful human involvement.
In light of recent claims, such as those involving Workday, Amazon and Sirius XM, many organizations are incorporating measures to ensure human review of high-stakes employment decisions influenced by AI tools, often supported by escalation protocols when automated outputs raise concerns. Developers can facilitate this by offering features such as explainability, audit logs, warning messages or override functions. While the scope of human involvement may vary by organization and tool, regulators increasingly signal that some degree of human oversight is expected.
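The right level of oversight is context-specific, but a recurring design is a routing gate that keeps adverse or low-confidence AI outputs from taking effect without a human decision. The sketch below illustrates the pattern; the confidence threshold, field names and routing labels are hypothetical assumptions, not drawn from any regulation or vendor product.

```python
# Minimal sketch of a human-in-the-loop gate: adverse or low-confidence AI
# outputs are escalated to a human reviewer instead of taking effect
# automatically. Thresholds, field names and labels are hypothetical.

from dataclasses import dataclass

@dataclass
class ScreeningResult:
    candidate_id: str
    recommendation: str   # e.g., "advance" or "reject"
    confidence: float     # model-reported score in [0, 1]

CONFIDENCE_FLOOR = 0.85   # hypothetical escalation threshold

def route(result: ScreeningResult) -> str:
    """Return a routing decision: escalate high-stakes or uncertain outputs."""
    if result.recommendation == "reject" or result.confidence < CONFIDENCE_FLOOR:
        return "escalate_to_human_review"   # adverse or uncertain: a person decides
    return "auto_apply_with_audit_log"      # still logged for later review

print(route(ScreeningResult("cand-001", "reject", 0.97)))   # escalate_to_human_review
print(route(ScreeningResult("cand-002", "advance", 0.62)))  # escalate_to_human_review
print(route(ScreeningResult("cand-003", "advance", 0.91)))  # auto_apply_with_audit_log
```

Routing every adverse recommendation to a human, as in this sketch, reflects the theme of the litigation discussed above: rejections are where legal exposure tends to concentrate.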
Keep records of AI-related decisions.
Employers are increasingly finding value in documenting how AI contributes to significant decisions, such as hiring or employment termination, both to preserve transparency and support defensibility if questions arise. Developers can assist by enabling exportable audit trails, version histories and related features. While the level of detail may vary by organization, maintaining some record of the decision-making process is emerging as a practical risk-management tool.
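No statute or regulation prescribes a format for these records, but an append-only log capturing the AI output, the human review and the final decision covers the elements most often sought in audits and litigation. The sketch below shows one minimal approach, assuming hypothetical field names and a JSON Lines file as the store.

```python
# Minimal sketch of an append-only log for AI-assisted employment decisions,
# capturing the AI output, the human review and the final decision. Field
# names and the JSON Lines format are illustrative, not legal requirements.

import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_decision_log.jsonl")  # hypothetical location

def log_ai_assisted_decision(candidate_id: str, tool: str, tool_version: str,
                             ai_output: str, human_reviewer: str,
                             final_decision: str, rationale: str) -> None:
    """Append one decision record as a single JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "tool": tool,
        "tool_version": tool_version,      # version info supports later audits
        "ai_output": ai_output,            # what the system recommended
        "human_reviewer": human_reviewer,  # who reviewed the recommendation
        "final_decision": final_decision,  # what the employer actually decided
        "rationale": rationale,            # the reviewer's stated reasons
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_assisted_decision("cand-001", "resume-screener", "2.4.1",
                         "rank 12 of 300 - advance", "j.doe",
                         "advance to interview",
                         "AI ranking consistent with manual resume review")
```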
Evaluate vendor practices.
Employers that rely on third-party AI tools are increasingly seeking information on system design, training and bias-testing to better understand how those tools operate. In response, many developers are providing transparency reports and compliance assurances. Some organizations also are using contractual provisions, such as audit rights, indemnities and cooperation obligations in the event of a regulatory inquiry, as a way to allocate responsibility.
Invest in training and awareness.
Organizations are increasingly training HR, legal and management teams on the responsible use of AI, with attention not only to anti-discrimination rules but also to other workplace laws. Developers can support these efforts by providing tailored resources and role-specific guidance. Recent developments highlight that workplace AI can implicate a wide range of statutes, including Title VII, the ADEA, the ADA, the Family and Medical Leave Act (FMLA) and the National Labor Relations Act (NLRA). For example, in a recent memo, National Labor Relations Board Acting General Counsel William B. Cowen cautioned that using AI tools to transcribe recordings, generate meeting notes, identify individuals by voice or secretly record collective bargaining sessions may constitute a per se violation of the NLRA. While training programs will vary by organization, building awareness and preparedness is becoming an essential part of responsible AI governance.
Bottom line
AI can accelerate hiring, improve workforce planning and enhance productivity. While there is no single compliance playbook, certain practices are emerging as effective ways to balance innovation with risk. Employers and developers that stay aligned, share responsibility and keep pace with evolving federal, state and local requirements will be best positioned to capture the benefits of AI while reducing legal and operational exposure.
Employers and developers with questions regarding the use of AI tools should contact their Cooley employment lawyer.