US Regulatory Agencies Launching Reviews of AI

Cooley alert
November 2, 2023

The extent to which the federal government will regulate artificial intelligence (AI) is a hot topic in Washington, DC. In September 2023, the US Senate held a closed-door forum with AI tech leaders, followed by another forum in October. In addition, the Biden-Harris administration has announced multiple initiatives involving AI, the most recent being a sweeping executive order on October 30 that addresses a wide range of issues, such as AI’s role in national security, public health, privacy, consumer fraud, worker protections and civil rights. (More information about the executive order can be found in our October 31 client alert.)

Federal Communications Commission (FCC)

Regulators are hopeful that AI and machine learning will provide the technical breakthroughs needed to fully optimize smart wireless networks. As wireless spectrum becomes increasingly important in a 5G, always-connected world, tools such as AI-assisted spectrum sharing and real-time interference sensing may help maximize spectrum usage. By encouraging the use of AI to operate wireless networks, regulators such as the FCC could gain more flexibility to make important policy decisions on issues such as how to allocate spectrum between licensed and unlicensed uses, what interference criteria to adopt, and whether to institute receiver standards.

The FCC has not yet proposed rules that would mandate AI or machine learning in wireless networks, but it is exploring how AI can be used. The FCC began its inquiries with a joint workshop with the National Science Foundation – “The Opportunities and Challenges of Artificial Intelligence for Communications Networks and Consumers.” The workshop brought together academics, policymakers, and industry experts to discuss how the FCC and the telecom industry can use AI to optimize telecom networks and improve network resiliency. It also addressed how AI will make it more challenging for the FCC to protect consumers from harms such as illegal robocalls, robotexts, and digital discrimination.

At the workshop, the FCC announced a spectrum notice of inquiry that focuses on how AI and other technologies can promote effective spectrum management and identify new opportunities for innovation. The spectrum notice asks how to define spectrum usage and how technology can be harnessed to provide the FCC with the data it needs to improve spectrum management. The spectrum notice will not itself result in new rules, but the FCC will use the proceeding to inform its policy decisions going forward. Accordingly, parties with interests in wireless communications issues, including broadband infrastructure and Internet of Things (IoT) connectivity, are monitoring the FCC’s inquiry. Comments on the spectrum notice were due on October 3, and reply comments were due November 2; interested parties may file their views until the FCC takes action in the proceeding.

The FCC has also announced an inquiry examining how AI can be used to protect consumers from unwanted robocalls and texts. The FCC’s messaging notice of inquiry will look at the privacy and safety challenges AI poses, as well as how AI can be used to block unwanted calls and protect telecommunications networks. Interested parties will have an opportunity to comment on the FCC’s proposals – comments are due December 18, 2023, with reply comments due January 16, 2024. Parties active in the messaging industry and parties that depend on automated texts or calls to reach consumers should monitor the proceeding. The FCC’s announcement of its intention to review robocall protections coincides with an October 24 hearing in the US Senate, during which senators from both parties expressed frustration with current enforcement efforts against robocalls.

Comments filed in response to the FCC’s notices will inform the FCC’s next steps. In each case, the FCC will likely propose rules informed by the comments filed in response to the notices, and industry participants will have an opportunity to comment on those proposals.

We cannot predict the timing of further FCC action, but the October 30 executive order encouraged the FCC to work with other government agencies to develop rules around the use of AI in telecommunications networks. We believe these proceedings will be a priority for the FCC, and the FCC could adopt rules as early as the second half of 2024. Parties interested in the FCC’s proceedings should consider engaging with the regulators now, so their perspectives can be considered before new rules are in place.

Federal Election Commission (FEC)

News reports of candidates and issue groups using AI to create misleading political ads have raised concerns for the 2024 election cycle. In July, the advocacy group Public Citizen filed a petition with the FEC expressing concerns about the use of AI to create political ads. Public Citizen argued that AI allows political actors to create “deepfake” audio or video clips that can deceive voters and harm the affected candidate, in violation of the rule against fraudulent misrepresentation. The petition therefore asked the FEC to clarify its rule on “fraudulent misrepresentation” to confirm that it applies to the use of AI to create deceptive campaign ads. Mindful of the First Amendment issues its request raises, Public Citizen did not ask the FEC to ban the use of AI in campaign communications outright. Rather, it urged the FEC to ban the use of deepfakes or similar communications where the purpose and effect are to deceive voters.

Comments on the Public Citizen proposal were due October 16, and proponents hope to have new rules in place no later than early next year. While the FEC commissioners unanimously voted to seek public comment on the Public Citizen petition, that does not mean the FEC will necessarily adopt new rules at this time. Indeed, two commissioners, Republican Allen Dickerson and Democrat Dara Lindenbaum, have questioned whether the FEC has the statutory authority to regulate AI and have called on Congress to legislate on the topic. Notably, legislation pending in Congress, the REAL Political Advertisements Act, would give the FEC that authority. The path forward for this bill is uncertain, given that it does not have bipartisan support.

For more information about these developments, please reach out to your Cooley contact or any of the lawyers listed below.

This content is provided for general informational purposes only, and your access or use of the content does not create an attorney-client relationship between you or your organization and Cooley LLP, Cooley (UK) LLP, or any other affiliated practice or entity (collectively referred to as “Cooley”). By accessing this content, you agree that the information provided does not constitute legal or other professional advice. This content is not a substitute for obtaining legal advice from a qualified attorney licensed in your jurisdiction and you should not act or refrain from acting based on this content. This content may be changed without notice. It is not guaranteed to be complete, correct or up to date, and it may not reflect the most current legal developments. Prior results do not guarantee a similar outcome. Do not send any confidential information to Cooley, as we do not have any duty to keep any information you provide to us confidential. This content may be considered Attorney Advertising and is subject to our legal notices.