O’Melveny Worldwide

US AI Regulatory Policy Update: White House Secures More Voluntary Commitments; Congress Holds Hearings and Continues to Mull Legislative Framework; Agencies Provide Guidance on Competition, Financial Markets, and Intellectual Property Issues

August 1, 2023

 

The US Government is continuing down the AI policy path we described in our last AI regulatory update, recently securing voluntary AI safety commitments from industry leaders while working on longer-term plans to develop a regulatory regime to manage AI risks and promote responsible AI development.

While the core principles driving regulation remain unchanged—ensuring safe, secure, and trustworthy AI—these general precepts are beginning to crystallize into specific action points and proposals. For example, abstract discussions about “transparency” have given way to more concrete commitments and proposals to identify AI-generated content with watermarks or similar labels. Of course, politicians have only just begun the process of translating general principles into tangible rules and structures. But it is becoming increasingly clear that officials across all levels of government and across political parties recognize that AI is a transformative technology with both immense potential benefit and serious risk, want to engage with industry and thought leaders to understand the key issues, and have the political will to take action.

White House Announces New Voluntary Commitments from AI Industry Leaders

On July 21, the White House announced that seven leading AI companies—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI—agreed to the following voluntary commitments, all centered around the principles of safety, security, and trust:1

  • Pre-release Testing. Before new AI systems are released to the public, the companies will ensure that the new systems are thoroughly vetted, both internally and through testing by independent experts.
  • Information Sharing. The companies committed to share information on managing AI risks with industry participants, governmental bodies, and academic experts, and agreed to create or join a forum for developing shared safety standards and best practices. Anthropic, Google, Microsoft, and OpenAI subsequently announced the formation of the Frontier Model Forum, which will aim to “(i) advance AI safety research to promote responsible development of frontier models and minimize potential risks, (ii) identify safety best practices for frontier models, (iii) share knowledge with policymakers, academics, civil society and others to advance responsible AI development; and (iv) support efforts to leverage AI to address society’s biggest challenges.”2
  • Investment in Security. The companies agreed to invest in security measures to protect unreleased AI model weights.
  • Incentives for Reporting Issues and Vulnerabilities. The companies agreed to develop structures—such as bounty systems, contests, or prizes—to incentivize third parties to identify and report weaknesses or unsafe behaviors in publicly available AI systems.
  • AI-Content Labels. The companies committed to develop mechanisms, such as watermarks, to identify AI-created content and thereby help prevent fraud and the dissemination of false information (a toy sketch of one watermark-detection approach follows this list).
  • Public Disclosure of Limitations. The companies agreed to publicly report their AI systems’ capabilities and limitations, including the results of safety and bias evaluations.
  • Research on Societal Risks. The companies committed to research societal risks posed by AI, such as algorithmic discrimination and privacy threats, and to empower trust and safety teams to address these risks.
  • Advancing Human Welfare. The companies committed to support AI research into areas such as climate change mitigation, cancer detection, and cybersecurity. The companies also agreed to support AI education and training initiatives.
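To give a flavor of what the “AI-Content Labels” commitment could look like in practice, the short Python sketch below illustrates one family of text-watermarking techniques discussed in the academic literature (sometimes called a “green list” statistical watermark): a generator subtly biases its word choices toward a pseudorandomly selected subset of the vocabulary, and a detector flags text containing a statistically improbable share of those words. This is a toy illustration under stated assumptions, not any signatory company’s actual mechanism; every name and parameter in it is hypothetical.

    # Toy sketch of "green list" watermark detection for AI-generated text.
    # Purely illustrative; real schemes operate on model token IDs and are
    # tuned to preserve text quality. All names/parameters are hypothetical.
    import hashlib
    import math

    def is_green(prev_word: str, word: str, green_fraction: float = 0.5) -> bool:
        # Deterministically assign each (previous word, word) pair to the
        # "green list". A watermarking generator would bias sampling toward
        # green words; ordinary text hits the list about half the time.
        digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
        return digest[0] / 255.0 < green_fraction

    def watermark_z_score(words: list[str], green_fraction: float = 0.5) -> float:
        # z-score against the null hypothesis "this text is unwatermarked."
        # A large positive score suggests a generator biased toward green words.
        n = len(words) - 1
        if n <= 0:
            return 0.0
        greens = sum(is_green(prev, word) for prev, word in zip(words, words[1:]))
        expected = green_fraction * n
        std = math.sqrt(n * green_fraction * (1 - green_fraction))
        return (greens - expected) / std

    sample = "the quick brown fox jumps over the lazy dog".split()
    print(f"z-score: {watermark_z_score(sample):+.2f}")  # typically near 0 for human text

The design intuition is that the label lives in the statistics of the text itself, so it survives copy-and-paste, while unwatermarked text almost never triggers a confident detection.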

The latest White House initiative comes on the heels of prior efforts at regulation through non-binding guidance discussed in our previous update, and there is more to come from the Biden administration. The White House is developing an executive order on AI, working with allies on an international framework to govern AI development, and pursuing bipartisan AI legislation.3 The White House has made clear that, in its view, these voluntary commitments are a necessary but not sufficient step: they do not preclude the need for binding regulation or congressional action.

Still unresolved is how AI policy will be managed at the White House going forward: how much influence the National Economic Council will assert vis-à-vis the National Security Council, and whether a separate “AI czar” or “AI Policy Council” will be established. Also unresolved is whether the Biden administration intends to use its global muscle to make a US AI regulatory framework the global standard and push the EU and other Western authorities to accept the outcome of the US policy process.

Congress Holds AI Hearings and Continues to Explore AI Legislative Initiatives

Lawmakers continue to lay the groundwork for comprehensive AI legislation, holding hearings with leading industry and academic figures and building consensus on key principles for AI regulation.

Senate Hearings

Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) are spearheading the Senate’s efforts to understand how best to regulate AI, and they have held two hearings featuring testimony from some of the leading industry and academic voices in the AI field. A May 16 hearing featured OpenAI CEO Sam Altman, NYU professor emeritus and AI researcher Gary Marcus, and IBM Chief Privacy and Trust Officer Christina Montgomery. And a July 25 hearing featured Anthropic CEO Dario Amodei and two prominent computer scientists: Yoshua Bengio of the Université de Montréal and Stuart Russell of the University of California, Berkeley.

The hearings provide a revealing window into both the key AI risks motivating legislators to act and the proposed solutions attracting support.

Legislators’ most pressing concerns about AI include:

  • Potential threats to election integrity, including AI-created deepfakes and misinformation, as well as highly personalized, targeted misinformation designed to manipulate specific individuals.
  • AI models potentially being trained on copyrighted works or personal information.
  • The use of AI to perpetrate fraud and scams, e.g., through voice-cloning.
  • Geopolitical risks to the AI hardware supply chain, particularly given Taiwan’s critical role in chip production.
  • Potential for bad actors to take advantage of open-source AI by fine-tuning publicly available base models to help commit misdeeds.

Legislators are also motivated by two broader concerns: that Congress needs to react faster and more effectively to AI than it did to social media, and that if the US does not act, other jurisdictions will fill the void and set the standards for AI.

Legislators and industry experts discussed various proposals to mitigate these risks:

  • Creating a federal agency with the power to license companies developing AI models, set minimum safety standards for those models, and conduct audits to ensure compliance. Senator Blumenthal would like the agency to act not just as a “reactive body” but as a proactive developer of countermeasures to potential AI harms.
  • Prohibiting AI use altogether where risk of harm is too high—for example, in weapon systems capable of autonomously deciding whether to kill people.
  • Mandating scorecards and “nutrition labels” disclosing model safety and risks (see the illustrative sketch following this list), and requiring AI companies to report safety incidents.
  • Mandating disclosure when AI is being used, for example by watermarking AI-generated content.
  • Creating a system that compensates artists and other creators whose works are being used by AI models, and giving people the option to opt out of having their data used to train AI.
  • Ensuring a private right of action to sue for harm caused by AI.4 An important threshold question for private litigation is whether Section 230 of the Communications Decency Act, which shields “interactive computer services” from liability for third-party content, applies to generative AI. Blumenthal and Hawley believe the answer is “no,” and have introduced the “No Section 230 Immunity for AI Act.”5
  • Encouraging international cooperation in the development and implementation of basic rules and safety standards.
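For readers wondering what a “nutrition label” might actually contain, the brief Python sketch below assembles a hypothetical machine-readable disclosure along the lines discussed at the hearings (capabilities, limitations, safety evaluations, and bias). The field names and values are illustrative assumptions only; no disclosure standard of this form has been proposed or adopted.

    # Hypothetical machine-readable "nutrition label" for an AI model.
    # Field names and values are illustrative assumptions, not a standard.
    import json

    model_disclosure = {
        "model_name": "example-model-v1",  # hypothetical model
        "intended_uses": ["drafting assistance", "summarization"],
        "known_limitations": [
            "may produce plausible-sounding but false statements",
            "limited knowledge of events after its training cutoff",
        ],
        "safety_evaluations": {
            "red_team_tested": True,
            "independent_audit": "completed by an accredited third party",
        },
        "bias_assessment": "output disparity testing across demographic groups",
        "ai_content_labeling": "outputs carry a provenance watermark",
        "incident_reporting_contact": "safety@example.com",  # placeholder
    }

    print(json.dumps(model_disclosure, indent=2))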

While it is premature to say whether any of these measures will be implemented, all appeared to garner at least some support, and all are part of the ongoing policy discussion over the best path forward for regulating AI.

Other AI Legislative Initiatives

Blumenthal and Hawley are not the only members of the chamber seeking to play leading roles on AI. As reported in our last update, Senate Majority Leader Chuck Schumer (D-NY) is working on a comprehensive legislative framework. Schumer announced on June 21 that his SAFE Innovation framework will treat continued AI innovation as “the north star” while also trying to ensure that innovation is safe by focusing on the following principles:

  • Security: ensuring guardrails that prevent bad actors from using AI to cause harm, and protecting workers from potential job destruction.
  • Accountability: regulating how AI is developed, audited, and deployed to address potential harms such as algorithmic bias, exploitation of consumers or workers, and appropriation of intellectual property.
  • America’s Foundations: protecting election integrity from AI-created misinformation and ensuring that AI algorithms are aligned with American values of human liberty, civil rights, and justice.
  • Explainability: explaining how AI systems come up with answers in a simple way all users can understand—Schumer flagged this as the top priority because the other goals may not be achievable without it.6

Schumer also announced that he will convene a series of AI Insight Forums later this year, giving legislators the opportunity to debate key issues with AI experts representing a variety of viewpoints. Schumer has set September as a key month for action, so the weeks ahead will be a critical time for the policy debate to take shape.

Meanwhile, a bipartisan group of lawmakers led by Representatives Ted Lieu (D-CA) and Ken Buck (R-CO) is pursuing yet another AI regulation effort. The group has proposed the National AI Commission Act, which would establish a 20-member commission to explore AI regulation, including a review of the current regulatory approach, recommendations for new government structures, and development of a risk-based approach to regulation.7

While work continues on building a comprehensive legislative framework, many lawmakers have been introducing bills and amendments to tackle discrete issues relating to AI. For example:

  • As noted above, Blumenthal and Hawley have introduced the “No Section 230 Immunity for AI Act” to prevent Section 230 immunity from applying to generative AI.
  • Representative Ritchie Torres (D-NY) introduced the “AI Disclosure Act of 2023,” which would require AI-generated videos, photos, text, audio, and other material to carry a clear disclaimer.8
  • Senator John Thune (R-SD) is reportedly preparing to introduce the “Artificial Intelligence Innovation and Accountability Act,” which would propose different self-testing and self-certification requirements for different categories of AI.9

In addition, the content-creating industries (especially the music industry) are quite active on Capitol Hill, raising concerns with outspoken allies, such as Senator Marsha Blackburn (R-TN), about AI’s impact on and use of creative IP.

Beyond federal legislative developments, companies in the AI space also need to be aware of state data privacy requirements and their application to AI, which will be the subject of a future alert.

Agencies Continue to Identify AI Areas of Focus

As we discussed in our previous update, federal agencies’ AI policy has focused on highlighting existing laws and rules applicable to AI and identifying specific AI harms, such as algorithmic discrimination, privacy risks, and AI-facilitated fraud. The FTC and SEC recently provided additional guidance about specific AI issues within their respective domains.

FTC Highlights AI-Related Competition Issues

FTC staff outlined the agency’s antitrust concerns about generative AI in a June 29 blog post10 that identified the following key questions:

  • Whether companies controlling key inputs into AI (data, computing resources, talent) can foreclose competition. The concern over talent in particular reflects the antitrust agencies’ recent focus on labor-market restrictions, as the post states that “it is critical that talented individuals with innovative ideas be permitted to move freely, and, crucially, not be hindered by non-competes.”
  • Whether open-source approaches “help open up the playing field” by allowing companies to develop products using publicly available base models, or instead create potential for harm due to the risk of misuse (e.g., malicious users disabling built-in guardrails). The FTC is also concerned about the possibility that firms may follow an “open first, closed later” model, noting that “firms that initially use open-source to draw business, establish steady streams of data, and accrue scale advantages can later close off their ecosystem to lock-in customers and lock-out competition.”
  • Whether companies could use bundling and tying to distort competition—for example by linking “new generative AI applications with existing core products to reduce the value of competitors’ standalone generative AI offerings.”
  • Whether companies that offer an ecosystem of products and services might “engage in exclusive dealing or discriminatory behavior” to steer users toward their own AI products.
  • Whether mergers and acquisitions might “consolidate market power in the hands of a few players.” The FTC is concerned about a wide variety of transactions, including those in which firms “buy up critical applications and cut off rival access to core products,” “buy up complementary applications and bundle them together,” or “buy up nascent rivals instead of trying to out-compete them by offering better products or services.”
  • Whether firms lock in users and make it unnecessarily difficult for them to switch to other platforms.

The FTC initiatives appear to be independent of the White House-led policy process discussed above.

SEC Chair Previews AI Issues in Financial Markets

On July 17, SEC Chair Gary Gensler gave a speech to the National Press Club on the implications of AI for financial markets, financial service providers, and the SEC’s work. Key issues raised in his speech include:

  • Black-Box Models. Gensler noted that advisers and brokers incorporating AI should be cognizant of challenges posed by the fact that humans often cannot explain why or how AI models come up with their predictions, making it more difficult to detect bias, ensure fairness, and spot inaccuracies.
  • Client Conflicts. AI may enable financial service providers to “narrowcast” their offerings by narrowly targeting prospective clients with products and pricing tied to the clients’ “maximum willingness to pay a price or purchase a product.” Gensler expressed concern that these developments could cause potential conflicts of interest between service providers and their investor clients, and could benefit providers to the detriment of investors.
  • Fraud. AI could be used by bad actors to generate and broadcast false information to influence capital markets.
  • Competition. Gensler is concerned that feedback loops (leading models benefiting from data generated by their use and thus extending their lead over rivals) could create a “dominant base layer at the center of the capital markets,” and the SEC will need to watch this trend closely to “continue to promote competition, transparency, and fair access to markets.”
  • Market Stability and Fragility Risks. Gensler worries that AI may exacerbate “herding”—many investing platforms and organizations making “similar decisions because they are getting the same signal from a base model or data aggregator” (a toy simulation of this dynamic follows this list). Because of herding, AI could play a “central role” in a possible “future financial crisis.”11
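The toy Python simulation below, included purely for illustration, makes the herding concern concrete: when 100 hypothetical firms trade on a single shared signal (a common base model) rather than on independent signals, their aggregate buy/sell imbalance jumps by roughly an order of magnitude. Every parameter is an arbitrary assumption chosen only to show the effect.

    # Toy simulation of "herding": firms trading on one shared model signal
    # versus independent signals. All parameters are arbitrary assumptions.
    import random

    def trade_decisions(n_firms: int, shared: bool, rng: random.Random) -> list[int]:
        # Each firm buys (+1) or sells (-1) based on a noisy signal.
        common = rng.gauss(0, 1)  # the shared base-model signal
        decisions = []
        for _ in range(n_firms):
            signal = common if shared else rng.gauss(0, 1)
            noise = rng.gauss(0, 0.3)  # small firm-specific judgment
            decisions.append(1 if signal + noise > 0 else -1)
        return decisions

    def avg_imbalance(shared: bool, trials: int = 2000) -> float:
        # Average absolute buy/sell imbalance; 0 means a balanced market,
        # 1 means every firm trades the same way.
        rng = random.Random(0)
        total = 0.0
        for _ in range(trials):
            d = trade_decisions(100, shared, rng)
            total += abs(sum(d)) / len(d)
        return total / trials

    print(f"independent signals: imbalance ~{avg_imbalance(False):.2f}")  # roughly 0.08
    print(f"shared base model:   imbalance ~{avg_imbalance(True):.2f}")   # roughly 0.9

When most market participants read the same signal, their individual judgments stop canceling out, which is precisely the fragility Gensler is flagging.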

Agencies Grapple with Patentability and Copyrightability of AI-Generated Works

With AI gaining the ability to contribute to the development of inventions and creative works, agencies responsible for intellectual property rules are facing novel questions.

The US Patent and Trademark Office (USPTO) is working to develop guidance on AI inventorship in light of last year’s Thaler v. Vidal decision, in which the Federal Circuit held that an inventor on a patent must be a human being.12 The decision relied on the statutory language of the Patent Act, 35 U.S.C. § 100(f), which defines an inventor as an “individual” (i.e., a human).13 In response to Thaler, the USPTO requested public comment and held listening sessions on questions about patent inventorship and ownership in situations where AI contributed to a patented invention, as well as ways to advance AI-enabled innovation and potential statutory changes to recognize AI’s contribution to the invention process.14

The US Copyright Office (USCO) is dealing with similar issues relating to copyrightability of AI-generated works. On March 16, the USCO issued guidance explaining that, under current law, “human authorship” is required for copyright registration, and that “if a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it.”15 The key issue is where to draw the line in determining whether “a work containing AI-generated material . . . contain[s] sufficient human authorship to support a copyright claim”16—a question that will likely be the subject of much debate. The USCO hosted listening sessions and plans to solicit public comments to help shape its AI policy.17

***

This year is shaping up to be a formative one for AI regulation. There is strong interest across both parties and different branches of government to understand this transformative technology and shape its development to realize its immense potential while managing its profound risks. Consensus is crystallizing around core principles, and the hard work of translating general principles into concrete measures has begun. The Government is showing both urgency and humility—recognizing that it must act fast to keep up with the rapid pace of AI development and that it must learn from those actually working on and studying AI to get the policy right. The Government has welcomed input from the industry and other affected parties, whether through White House meetings with leading AI companies, congressional hearings featuring thought leaders from academia and the private sector, or agencies’ listening sessions and solicitation of comments. For companies that want to play a role in shaping the AI policy conversation, this is a critical time: Policy discourse has not yet gelled into specific rules and regulations, and policymakers are open to ideas on the best path forward.

O’Melveny’s AI team features Ron Klain, former White House Chief of Staff who helped lay the groundwork for the Biden administration’s AI policies before he left government in February 2023,18 and a cross-disciplinary group of attorneys closely tracking AI legal and regulatory developments in antitrust, consumer protection, privacy and data security, intellectual property, product liability, appellate law, securities law compliance, and other fields. Please contact the attorneys listed on this article or your O’Melveny counsel to help you navigate AI-related legal and strategic issues.


1The White House, FACT SHEET: Biden-⁠Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI (July 21, 2023), available at https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/; The White House, Ensuring Safe, Secure, and Trustworthy AI (July 21, 2023), available at https://www.whitehouse.gov/wp-content/uploads/2023/07/Ensuring-Safe-Secure-and-Trustworthy-AI.pdf. 

2OpenAI, Frontier Model Forum (July 26, 2023), available at https://openai.com/blog/frontier-model-forum.
3The White House, FACT SHEET: Biden-⁠Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI (July 21, 2023), available at https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/.
4In fact, the plaintiffs’ bar has started targeting companies producing large AI models. On July 7, comedian Sarah Silverman and two other authors filed a class action in the Northern District of California alleging copyright infringement and other claims against OpenAI and Meta. Silverman, et al. v. OpenAI, Inc., et al., 3:23-cv-3416, ECF No. 1 (Complaint) (N.D. Cal. July 7, 2023); Kadrey, et al. v. Meta Platforms, Inc., 3:23-cv-3417, ECF No. 1 (Complaint) (N.D. Cal. July 7, 2023). Anonymous plaintiffs have also filed class actions against OpenAI and Google alleging, inter alia, that defendants used personal information scraped from the internet to train their AI models. See K.S. v. OpenAI, Inc., 3:23-cv-3199, ECF No. 1 (Complaint) (N.D. Cal. June 28, 2023); J.L. v. Alphabet Inc., 3:23-cv-3440, ECF No. 1 (Complaint) (N.D. Cal. July 11, 2023).
5Josh Hawley, Hawley, Blumenthal Introduce Bipartisan Legislation to Protect Consumers and Deny AI Companies Section 230 Immunity (Jun. 14, 2023), available at https://www.hawley.senate.gov/hawley-blumenthal-introduce-bipartisan-legislation-protect-consumers-and-deny-ai-companies-section.
6Senate Democrats, Majority Leader Schumer Delivers Remarks To Launch SAFE Innovation Framework For Artificial Intelligence At CSIS (Jun. 21, 2023), available at https://www.democrats.senate.gov/news/press-releases/majority-leader-schumer-delivers-remarks-to-launch-safe-innovation-framework-for-artificial-intelligence-at-csis.
7National AI Commission Act, H.R. 4223, 118th Cong. (2023).
8Ritchie Torres, US Rep. Ritchie Torres Introduces Federal Legislation Requiring Mandatory Disclaimer for Material Generated by Artificial Intelligence (Jun. 5, 2023), available at https://ritchietorres.house.gov/posts/u-s-rep-ritchie-torres-introduces-federal-legislation-requiring-mandatory-disclaimer-for-material-generated-by-artificial-intelligence.
9Ashley Gold & Maria Curi, Scoop: Thune Readies AI Certification Bill, AXIOS (Jul. 18, 2023), available at https://www.axios.com/pro/tech-policy/2023/07/18/thune-readies-ai-certification-bill.
10Staff in the Bureau of Competition & Office of Technology, Generative AI Raises Competition Concerns (Jun. 29, 2023), available at https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/06/generative-ai-raises-competition-concerns.
11Gary Gensler, “Isaac Newton to AI” Remarks before the National Press Club (July 17, 2023), available at https://www.sec.gov/news/speech/gensler-isaac-newton-ai-remarks-07-17-2023.
1243 F.4th 1207, 1210 (Fed. Cir. 2022).
13Id.
14Request for Comments Regarding Artificial Intelligence and Inventorship, 88 Fed. Reg. 9,492 (Feb. 14, 2023); USPTO Alert, Reminder: Request for comments regarding artificial intelligence and inventorship closes May 15 (May 12, 2023), available at https://www.uspto.gov/subscription-center/2023/reminder-request-comments-regarding-artificial-intelligence-and.
15Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, 88 Fed. Reg. 16,190 (Mar. 16, 2023).
16Id.
17US Copyright Office, Copyright Office Launches New Artificial Intelligence Initiative, NewsNet Issue 1004 (Mar. 16, 2023), available at https://www.copyright.gov/newsnet/2023/1004.html.
18Mr. Klain is barred, under federal rules, from contacting the administration regarding policy matters, or from materially aiding others in making such contacts.

This memorandum is a summary for general information and discussion only and may be considered an advertisement for certain purposes. It is not a full analysis of the matters presented, may not be relied upon as legal advice, and does not purport to represent the views of our clients or the Firm. Ron Klain, an O'Melveny partner licensed to practice law in the District of Columbia, Mark Liang, an O'Melveny partner licensed to practice law in California, Sid Mody, an O'Melveny partner licensed to practice law in Texas, William K. Pao, an O'Melveny partner licensed to practice law in California, Nexus U. Sea, an O'Melveny partner licensed to practice law in New York and New Jersey, Sergei Zaslavsky, an O’Melveny partner licensed to practice law in the District of Columbia and Maryland, Scott W. Pink, an O'Melveny special counsel licensed to practice law in California and Illinois, Colleen Powers, an O'Melveny counsel licensed to practice law in New York and New Jersey, Amit Itai, an O'Melveny associate licensed to practice law in California and Israel, and Laura K. Kaufmann, an O’Melveny associate licensed to practice law in California, contributed to the content of this newsletter. The views expressed in this newsletter are the views of the authors except as otherwise noted.

© 2023 O’Melveny & Myers LLP. All Rights Reserved. Portions of this communication may contain attorney advertising. Prior results do not guarantee a similar outcome. Please direct all inquiries regarding New York’s Rules of Professional Conduct to O’Melveny & Myers LLP, Times Square Tower, 7 Times Square, New York, NY, 10036, T: +1 212 326 2000.