AI Chatbots at the Crossroads: Navigating New Laws and Compliance Risks

Cooley alert
October 21, 2025

Chatbots powered by artificial intelligence (AI chatbots) have become a key area of focus for innovative businesses, as well as for US lawmakers, regulators and private litigants at both the state and federal levels. Most recently, on October 13, California enacted the first state AI law to include a private right of action, creating serious litigation risk for companies that deploy AI chatbots in California.

Due to these developments, businesses deploying AI chatbots throughout the US market must navigate an increasingly complex and fast-evolving patchwork of compliance obligations and legal risks.

In the first part of this alert, we provide an overview of new state laws enacted in 2025 regulating the commercial use of AI chatbots, including their key provisions and scopes of applicability, along with a high-level summary of AI chatbot laws enacted before 2025. In the second part, we discuss changes recently made by state legislatures, including in Colorado and Utah, to preexisting AI legal frameworks. The final section addresses relevant developments concerning federal regulators and private plaintiffs, illustrating certain “high-risk” chatbot use cases that companies should be aware of.

New state laws regulating AI chatbots

The US regulatory framework for AI chatbots has primarily emerged at the state level. As of this writing, at least six states have enacted new AI chatbot laws in 2025, supplementing those that were already on the books.

Several state laws seek to address a general consumer deception risk (e.g., failing to inform users that they are not communicating with a live human), while others focus on sector-specific use cases considered to warrant more narrow tailoring (e.g., AI mental health chatbots, chatbots interacting with children and AI-powered companions).

All state chatbot laws currently in effect provide for civil monetary penalties on a per-violation basis and are enforced by state regulatory authorities. However, California’s latest “AI companion chatbot” law (discussed at the end of this section) is notable as the first AI chatbot law to expressly provide a private right of action – meaning that companies found to have violated the law and caused a consumer’s injury may be liable for actual or statutory damages ($1,000 per violation) and reasonable attorneys’ fees.

State AI chatbot laws enacted in 2025

Excluding California, the following US states have enacted AI chatbot laws so far this year:

  • New York – In May 2025, New York enacted the first state law requiring safeguards for AI companions, including implementing safety measures to detect and address users’ expression of suicidal ideation or self-harm. Upon detection, providers of AI companions must refer the user to specified crisis response resources. Providers must also disclose to users that they are not communicating with a human, including at specified intervals. New York’s law is set to become effective on November 5, 2025.
  • Maine – In June 2025, Maine enacted the Chatbot Disclosure Act, which requires businesses that use AI chatbots to communicate with consumers to notify those consumers that they are not interacting with a live human (in cases where a reasonable consumer could not tell the difference). Maine’s law became effective on September 24, 2025, and is enforceable under the Maine Unfair Trade Practices Act, which contains its own limited private right of action.
  • Utah – In March 2025, Utah Gov. Spencer Cox signed HB 452, establishing new rules and disclosure requirements for suppliers of AI “mental health chatbots.” HB 452 requires providers to make clear and conspicuous disclosures to users at several points in time, including before a user first accesses the chatbot, when a user returns to the chatbot after more than seven days of inactivity, and whenever the user asks. Mental health chatbots are also subject to certain advertising restrictions and are prohibited from selling individual health information without obtaining user consent. HB 452 also contains a safe harbor for companies that implement a specified and onerous internal compliance program. HB 452 became effective on May 7, 2025.
  • Nevada – In June 2025, Nevada Gov. Joe Lombardo signed AB 406, a law regulating the use of AI and AI chatbots in mental and behavioral healthcare contexts. Among other requirements, AB 406 prohibits offering interactive AI systems to Nevada residents that provide, or claim to provide, professional mental or behavioral healthcare services. AB 406 also prohibits companies from making representations to that effect and restricts how healthcare professionals can use AI systems in practice. AB 406 became effective on July 1, 2025.
  • Illinois – In August 2025, Illinois enacted the Wellness and Oversight for Psychological Resources Act (WOPRA). WOPRA imposes various restrictions on the use of autonomous AI systems in clinical practice, including by prohibiting the use of AI chatbot tools to engage in therapeutic communications or to detect emotions or mental states in patients. The law also exempts certain specified entities and use cases from its scope. WOPRA took effect immediately upon enactment on August 1, 2025.

A handful of preexisting US laws already require disclosure when consumers interact with an AI chatbot, albeit in more limited contexts. Laws in New Jersey and California, for example, prohibit using bots to knowingly deceive consumers in connection with online commercial transactions without disclosure. A 2024 California law also requires healthcare providers to include a disclaimer to patients indicating when communications were generated by AI, among other requirements. Several additional states are considering similar bills, which will likely further fragment the compliance environment in the future.

Even when state law does not specifically require disclosure, the use of undisclosed bots may still give rise to claims under state consumer protection statutes. Depending on the facts, state attorneys general may seek to characterize the failure to disclose as an unfair or deceptive trade practice, despite the questionable merits of such claims.

New potential civil litigation risk – California enacts first AI chatbot law with private right of action

Last week, California Gov. Gavin Newsom signed into law Senate Bill 243 (SB 243), which requires operators of AI-powered “companion chatbots” to comply with disclosure, notice and regulatory reporting requirements. SB 243 broadly defines the term “companion chatbot” as any AI system with a “natural language interface that provides adaptive, human-like responses to user inputs and is capable of meeting a user’s social needs,” subject to certain exceptions, such as chatbots used only for customer service. 

SB 243 is set to become effective on January 1, 2026. Among other things, the law will require companies to remind users that they are interacting with a chatbot and not a human (for minor users, at least as frequently as every three hours). Companies will also be required to maintain and implement a protocol – similar to the requirement under New York’s AI companion law – to prevent self-harm content and refer users to crisis service providers, and to publish details about how the protocol works on their websites (a unique compliance requirement). The law will also require companies to submit annual reports to the California Office of Suicide Prevention beginning in July 2027, disclosing, among other things, the number of crisis service referrals the chatbot has made. In addition, SB 243 will require companies to make a public disclosure indicating that “companion chatbots may not be suitable for some minors,” and it imposes additional obligations specific to chatbots made available to minor users.

Unlike other state AI laws enacted to date, SB 243 will allow consumers who have suffered an injury as a result of a company’s violation of the law to bring a civil action to recover:

  1. Injunctive relief.
  2. Damages equal to actual damages or $1,000 per violation.
  3. Attorneys’ fees and costs.

It remains to be seen whether SB 243’s mandatory disclosure and suicide detection protocol requirements will be challenged on First Amendment grounds, including under theories of compelled speech and content-based speech restrictions, respectively.

US states delay and amend existing comprehensive AI laws

In 2024, Colorado and Utah passed the first two state comprehensive consumer protection statutes governing AI: the Colorado Artificial Intelligence Act (CAIA) and the Utah Artificial Intelligence Policy Act (AIPA), respectively. As originally enacted, each law imposed obligations on entities using generative AI tools to communicate with consumers, including through the use of AI chatbots.

This year, however, both Colorado and Utah enacted legislation to amend various aspects of their respective state laws, with Colorado delaying enforcement of the CAIA, and Utah significantly narrowing the scope of the AIPA’s obligations. 

CAIA enforcement delayed by five months

On August 28, 2025, Colorado Gov. Jared Polis signed SB 25B-004, which postpones the CAIA’s implementation by an additional five months – from February 1, 2026, to June 30, 2026.

The amendment does not change the CAIA’s substantive requirements, and it remains unclear whether further legislative amendments in 2026 will do so. As a result, companies should still prepare to comply with the law’s applicable disclosure and other substantive obligations once implementation begins in June 2026. Notably, the CAIA’s disclosure obligations regarding interactions with a chatbot apply regardless of whether a given AI deployment is considered “high risk” as defined under the law.

AIPA obligations narrowed

Utah also enacted legislation this year to amend certain aspects of its state AI law. Through SB 332 and SB 226, the amendments extended the AIPA’s expiration date from May 2025 to July 2027 and narrowed the scope of certain disclosure requirements. For example, as amended, the law requires AI chatbots to notify users that they are AI only when asked directly by a consumer and during “high-risk” AI interactions. Before this amendment, the law required AI chatbots to make this notification at the outset of any interaction with users, regardless of the nature of the interaction.

FTC chatbot investigations and civil litigation developments

In September 2025, the US Federal Trade Commission (FTC) announced a “Section 6(b)” marketplace study into seven companies operating consumer-facing, generative AI “companion” chatbots, seeking to understand how the bots may impact children’s mental health. The FTC’s inquiry signals a significant increase in regulatory scrutiny of AI chatbots, which, in light of recent legislative activity in California and New York, appears to enjoy bipartisan support.

AI chatbots are also facing heightened legal scrutiny from private plaintiffs. The FTC’s announcement follows lawsuits filed by groups of parents across the nation against AI chatbot providers, seeking damages for harms allegedly caused to their children by the companies’ chatbots.

Finally, businesses should remember that the federal Telephone Consumer Protection Act (TCPA) generally prohibits making telephone calls that use an AI-generated voice to residential or wireless telephone lines without prior consent. Despite the US Supreme Court’s recent ruling in McLaughlin Chiropractic, the TCPA’s restrictions on artificial-voice calls likely remain supported by the statute’s text.

Key takeaways for businesses

As the prevalence of chatbots in consumer and other contexts continues to expand, so too will interest from legislators and regulators. Companies should assess whether existing AI chatbot deployments are covered by any newly emerging laws and, if so, determine next steps for meeting applicable compliance requirements. Companies should also closely monitor the near-term evolution of the statutory and regulatory landscape, particularly at the state level, to maximize lead time for compliance and to ensure that new offerings adhere to best practices without unintentionally triggering onerous compliance obligations.

This content is provided for general informational purposes only, and your access or use of the content does not create an attorney-client relationship between you or your organization and Cooley LLP, Cooley (UK) LLP, or any other affiliated practice or entity (collectively referred to as "Cooley"). By accessing this content, you agree that the information provided does not constitute legal or other professional advice. This content is not a substitute for obtaining legal advice from a qualified attorney licensed in your jurisdiction, and you should not act or refrain from acting based on this content. This content may be changed without notice. It is not guaranteed to be complete, correct or up to date, and it may not reflect the most current legal developments. Prior results do not guarantee a similar outcome. Do not send any confidential information to Cooley, as we do not have any duty to keep any information you provide to us confidential. When advising companies, our attorney-client relationship is with the company, not with any individual. This content may have been generated with the assistance of artificial intelligence (AI) in accordance with our AI Principles, may be considered Attorney Advertising and is subject to our legal notices.