O’Melveny Worldwide

President Biden Issues Sweeping Executive Order on Artificial Intelligence

November 2, 2023

On October 30, 2023, President Biden announced an Executive Order (EO) establishing a broad range of requirements and goals designed to mitigate the risks of Artificial Intelligence (AI) while attempting to minimize obstacles to innovation.1 The EO is likely to be of great interest to industry observers because, as a threshold matter, its definitions of AI and AI Model cast a wide net that could capture many mundane statistical or computational tools that experts might not otherwise consider true applications of AI technology.2 And although the EO’s requirements initially fall primarily on government agencies, the EO clarifies the Biden Administration’s policy positions on AI, and several of its provisions could impose a burden on private actors, either directly or as a result of agency actions directed by the EO. The EO declares eight policy goals:

  1. Establishing “New Standards for AI Safety and Security”
  2. “Protecting Americans’ Privacy”
  3. “Advancing Equity and Civil Rights”
  4. “Standing Up for Consumers, Patients, and Students”
  5. “Supporting Workers”
  6. “Promoting Innovation and Competition”
  7. “Advancing American Leadership Abroad”
  8. “Ensuring Responsible and Effective Government Use of AI”

Establishing New Standards for AI Safety and Security

Critically, the EO invokes the Defense Production Act of 1950 to impose, within 90 days, notice and reporting obligations on private actors meeting either of two criteria (illustrated in the sketch following this list):3

  • Companies developing AI technology that meets the definition of a “dual-use foundation model.”4 Generally, this applies to companies training the largest AI models: models trained on broad data and adaptable to a wide range of applications. These companies will be required to notify the federal government when training such models and to provide the government with the results of all safety testing.
  • Companies that possess, develop, or acquire a “large-scale computing cluster.”5 They will be required to report the existence and location of these clusters and the amount of total computing power available in each cluster to the federal government.
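
To illustrate how these criteria operate, the following minimal Python sketch encodes the interim technical thresholds from footnotes 4 and 5. It is our own simplified illustration, not a compliance tool: the function names and parameters are hypothetical, the thresholds are interim and subject to redefinition by the Secretary of Commerce, and a real assessment would require legal analysis of the full definitions.

    # Illustrative screening check based on the EO's interim definitions
    # (see footnotes 4 and 5). Helper names are hypothetical; thresholds
    # are interim and subject to redefinition by the Secretary of Commerce.

    DUAL_USE_OPS = 1e26         # > 10^26 total training operations (general models)
    BIO_SEQUENCE_OPS = 1e23     # > 10^23 operations for models trained primarily
                                # on biological sequence data
    CLUSTER_NETWORK_GBPS = 100  # data-center networking of over 100 Gbit/s
    CLUSTER_OPS_PER_SEC = 1e20  # theoretical maximum of 10^20 operations/second

    def may_be_dual_use_model(training_ops: float, primarily_bio_data: bool) -> bool:
        """True if a model may meet the interim 'dual-use foundation model'
        definition and thus may trigger the EO's notice/reporting duties."""
        threshold = BIO_SEQUENCE_OPS if primarily_bio_data else DUAL_USE_OPS
        return training_ops > threshold

    def may_be_large_scale_cluster(co_located: bool, network_gbps: float,
                                   peak_ops_per_sec: float) -> bool:
        """True if a cluster may meet the interim 'large-scale computing
        cluster' definition (co-located, >100 Gbit/s networking, 10^20 op/s)."""
        return (co_located
                and network_gbps > CLUSTER_NETWORK_GBPS
                and peak_ops_per_sec >= CLUSTER_OPS_PER_SEC)

    # Example: a hypothetical model trained with 3 x 10^26 operations
    print(may_be_dual_use_model(3e26, primarily_bio_data=False))  # True

Because the reporting duties turn on these figures, organizations operating near the thresholds may wish to track total training compute and cluster specifications as routine engineering metrics.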

In turn, the National Institute of Standards and Technology is directed to establish standards for safety testing—to be applied by the Departments of Energy and Homeland Security to assess AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks.

Other federal agencies are also directed to establish standards for their areas of expertise:

  • Agencies that fund life-sciences projects are directed to develop new standards for biological synthesis screening to help protect against the risk of AI being used to synthesize dangerous biological materials.
  • Leading by example, the Department of Commerce will develop guidance for content authentication and watermarking to label AI-generated content, so that communications from the federal government can be recognized as authentic.
  • The Defense Advanced Research Projects Agency is directed to support the development of AI tools to find and fix vulnerabilities in critical software, such as the ongoing AI Cyber Challenge.
  • The National Security Council and White House Chief of Staff are directed to develop a memorandum identifying further actions on AI and security.

Protecting Americans’ Privacy

While the EO does not seek to impose new industry standards on data privacy, it calls for Congress to pass bipartisan data privacy legislation and allocates additional resources to develop techniques and tools that preserve individuals’ privacy when their data is used to train AI systems. These include (a) providing a “differential-privacy guarantee,” which allows information about a group to be shared while provably limiting the improper access, use, or disclosure of personal information about particular entities; and (b) developing and implementing “privacy-enhancing technology” to mitigate any risk of loss of privacy related to data processing, including enhancing predictability, manageability, disassociability, storage, security, and confidentiality. The EO also directs federal agencies to develop internal standards and procedures governing the “collection, processing, maintenance, use, sharing, dissemination, and disposition” of data containing personal information. These requirements reflect the EO’s “lead by example” approach: developing policies for the federal government that can later be adopted by—or imposed on—the private sector.
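
As background on the first of these techniques: a differential-privacy guarantee is conventionally achieved by adding calibrated random noise to aggregate query results, so that the presence or absence of any one individual’s record provably has only a bounded effect on the output. The sketch below is our own illustration (the EO itself prescribes no particular mechanism) of the classic Laplace mechanism applied to a simple count query:

    import numpy as np

    def dp_count(records, predicate, epsilon=1.0):
        """Differentially private count using the Laplace mechanism.

        A count query has sensitivity 1 (adding or removing one person's
        record changes the count by at most 1), so noise drawn from
        Laplace(scale = 1/epsilon) yields an epsilon-differential-privacy
        guarantee. Smaller epsilon means stronger privacy, noisier answers.
        """
        true_count = sum(1 for r in records if predicate(r))
        return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

    # Example: release how many records show age > 65 without exposing
    # whether any particular individual's record is in the data.
    ages = [34, 71, 68, 52, 80, 45]
    print(dp_count(ages, lambda age: age > 65, epsilon=0.5))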

Advancing Equity and Civil Rights

The EO addresses the perceived risks of discrimination and bias arising from AI in three ways. First, the EO directs agencies to give clear guidance to landlords, federal benefits programs, and federal contractors to avoid discrimination arising from the use of AI. Although the sections of the EO related to the provision of public benefits apply solely to the U.S. Department of Health and Human Services (HHS) and any state or local governments engaged in implementing public benefits and services administered by HHS, any standards established under this process may serve as a precursor to imposing similar standards on private entities that facilitate federal benefits programs as part of a public-private partnership, such as those under Medicare Part C.

Second, the EO directs the Department of Justice and federal civil rights offices to coordinate on developing best practices for investigating and prosecuting AI-related civil rights violations.  

Finally, the EO directs agencies to develop best practices on the use of AI in the criminal justice system.

Standing Up for Consumers, Patients, and Students

This part of the EO focuses specifically on healthcare and education. In healthcare, the EO directs HHS to establish a program for reporting and remedying “harms or unsafe healthcare practices involving AI.” Specifically, section 8(b)(iii) empowers the HHS Secretary to issue guidance or “tak[e] other action as appropriate” in response to any complaints regarding applications of AI technology in healthcare that violate Federal nondiscrimination and/or privacy laws. In education, the EO directs the creation of additional resources to support educators seeking to leverage AI-enabled educational tools.

Supporting Workers

The EO directs federal agencies to publish information on the impact of AI on the labor market and to identify options for strengthening federal support for workers affected by disruptions from AI. The EO also directs the Secretary of Labor to publish guidance for organizations to minimize the risk of worker harms caused by AI—this will include specific steps for employers to take when implementing AI in areas including compensation, the collection and use of data gathered from employees, and the evaluation of job applications.

Promoting Innovation and Competition

The EO outlines three directives to promote innovation. First, the EO will expand grants for AI research while also calling for the creation of a “National AI Research Resource” designed to provide resources and data to AI researchers. Second, the EO directs agencies to provide technical assistance and resources to small businesses developing AI technology while also encouraging the Federal Trade Commission to consider using its rulemaking authority “to ensure fair competition in the AI marketplace and to ensure that consumers and workers are protected from harms that may be enabled by the use of AI.” Finally, the EO will rely on existing authority under immigration laws to streamline the visa process for highly skilled immigrants and nonimmigrants with relevant expertise.

Advancing American Leadership Abroad

The EO promises that the Administration will continue collaborating with foreign governments on AI topics by expanding existing engagements, supporting the development of AI standards with international partners and organizations, and promoting the deployment of AI tools abroad to address global challenges related to sustainability and infrastructure.

Ensuring Responsible and Effective Government Use of AI

In another instance of “leading by example,” the EO directs agencies to enhance the federal government’s AI capabilities by issuing internal guidance governing the use of AI, streamlining the contracting process for AI products and services, accelerating the hiring of AI professionals, and training existing employees in AI.  

Implications for AI Innovators & Industry-Specific Provisions

Since most of the EO’s requirements are directed toward government agencies rather than private actors, the EO could be seen as encouraging federal legislators to develop laws and regulations governing the use of AI tools—which many industry participants view as necessary in light of the current patchwork of state regulations.6 Guidance issued under the EO may also serve as an interim measure while Congress continues to strive toward consensus on AI topics. More than 50 bills seeking to govern various aspects of AI technology have been introduced in Congress this year alone, including measures addressing algorithmic bias,7 data privacy,8 and AI-generated content.9

We will continue to monitor any developments as the various requirements of the EO are implemented over the next year. In the meantime, we have highlighted several aspects of the EO that could affect private actors in the short term, either directly or as a result of anticipated agency implementation of the EO:

  • Any organizations developing or acquiring AI technology or the means of developing AI technology will need to establish policies and/or procedures for assessing whether such technology falls under the definitions of “dual-use foundation model” or “large-scale computing cluster,” which would trigger the reporting and information-sharing requirements of the EO.
  • In anticipation of forthcoming guidance on discrimination in AI, any organization that acts as a landlord or federal contractor, or which participates in federal benefits programs, should develop internal processes for evaluating and/or mitigating potential bias in any business functions that rely on AI algorithms.
  • Although the contours of HHS’s program for reporting and “remedying” the harmful or unsafe use of AI in healthcare remain to be seen, any organization developing or utilizing AI tools in healthcare should be prepared for greater scrutiny regarding such tools.
  • Organizations contemplating the use of AI tools to support HR functions should pay close attention to the federal guidance being developed on these topics, and to the interplay between this guidance, state statutes and regulations, and the possibility of federal legislation.
  • Organizations struggling to hire qualified workers with AI expertise should keep abreast of any changes to immigration policy that can be leveraged to address such needs.
  • Organizations developing data privacy-preservation tools or engaging in AI research should consider whether they are eligible for federal funds made available in connection with the EO.
  • Organizations developing AI-enabled educational tools should consider collaborating with federal agencies tasked under the EO with creating resources for educators.
  • Life-sciences organizations that rely on federal funding must remain aware of—and potentially consider methods to influence the development of—future standards governing biological synthesis screening.

Many provisions of the EO also target specific industries. Some are discussed above, and a few additional examples appear below; we can provide more comprehensive and detailed information to readers interested in a particular subject area:

  • Section 4.2(c) of the EO imposes additional reporting obligations on Infrastructure as a Service (IaaS) providers that engage in transactions with foreign persons related to the training of AI models.
  • Section 4.3(a)(ii) directs the Treasury Secretary to issue a public report on best practices for financial institutions to manage AI-related cybersecurity risks.
  • Section 7.2(b)(i) directs the HHS Secretary to publish a plan addressing the use of AI systems in the implementation of federal benefits programs to improve access to benefits, detect and appeal unjust denials, retain appropriate levels of human discretion, and analyze whether algorithmic systems used to administer benefit programs achieve “equitable and just outcomes.”
  • Section 7.3(c) encourages the Housing and Urban Development Secretary and the Director of the Consumer Financial Protection Bureau to issue guidance on how to avoid discriminatory outcomes that may arise when using tenant screening systems and algorithm-based delivery of housing and credit-related advertising.

O’Melveny has long advised clients on the legal and industry standards related to AI and in developing internal policies and processes to anticipate and mitigate AI-related risks. Please feel free to contact any of the individuals listed on this alert if you have any questions or would like any additional information on any of the topics we have discussed.


1 The full text of the EO is available at: https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.

2 The EO defines “AI Model” as “a component of an information system that implements AI technology and uses computational, statistical, or machine-learning techniques to produce outputs from a given set of inputs.” The EO defines AI as “[A] machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.”

3 Although the Defense Production Act has rarely been invoked outside the procurement context, O’Melveny has previously issued an alert discussing President Trump’s invocation of the Act in conjunction with the nation’s pandemic response, and a partner in O’Melveny’s Washington, DC office advised the White House Coronavirus Task Force concerning the Defense Production Act in his role as Counsel to the Vice President and Deputy Assistant to the President.

4 Although the EO delegates responsibility for defining “dual-use foundation model” to the Secretary of Commerce, it also provides an interim definition for the term as covering “any model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 10^23 integer or floating-point operations.”

5 Similarly, the EO delegates responsibility for defining “large-scale computing cluster” to the Secretary of Commerce while providing an interim definition for the term as covering “any computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 10^20 integer or floating-point operations per second for training AI.”

6 For example, New York City and the State of Illinois have both enacted laws governing the use of AI in hiring. Illinois’ Artificial Intelligence Video Interview Act imposes notice, consent, and confidentiality requirements on any employers that rely on AI to analyze video interviews. New York City’s Rule on Automated Employment Decision Tools goes further by prohibiting the use of AI tools in hiring unless such tools have been subject to a public bias audit.

7 For example, S. 2419, the No Robot Bosses Act, would impose training, bias testing, oversight, and disclosure requirements on employers that use AI tools in employment-related decision making and would completely prohibit employers from relying solely on any AI tools in employment-related decision making.

8 For example, S. 631, the UPHOLD Privacy Act of 2023, would completely prohibit the use of individual health data collected from any source in commercial advertising, would require companies to limit employee use or access to health data unless “strictly necessary,” and would prohibit the sale of location data to or by data brokers. The bill would also create a private cause of action to seek damages or other relief for any violation of these requirements.

9 For example, H.R. 3831, the AI Disclosure Act of 2023, would impose a blanket requirement that any material generated using AI technology must be accompanied by this statement: “DISCLAIMER: this output has been generated by artificial intelligence.”


This memorandum is a summary for general information and discussion only and may be considered an advertisement for certain purposes. It is not a full analysis of the matters presented, may not be relied upon as legal advice, and does not purport to represent the views of our clients or the Firm. Randall W. Edwards, an O’Melveny partner licensed to practice law in California, Kevin Feder, an O’Melveny partner licensed to practice law in California and the District of Columbia, Sid Mody, an O’Melveny partner licensed to practice law in Texas, William K. Pao, an O’Melveny partner licensed to practice law in California, Jonathan P. Schneller, an O’Melveny partner licensed to practice law in California, Nexus U. Sea, an O’Melveny partner licensed to practice law in New York and New Jersey, Marc J. Pensabene, an O’Melveny partner licensed to practice law in New York, Megan K. Smith, an O’Melveny partner licensed to practice law in California, Massachusetts, and New York, Scott Sugino, an O’Melveny partner licensed to practice law in California, Stephen M. Sullivan, an O’Melveny partner licensed to practice law in California, Sergei Zaslavsky, an O’Melveny partner licensed to practice law in the District of Columbia and Maryland, Coke Morgan Stewart, an O’Melveny senior counsel licensed to practice law in the District of Columbia and Virginia, Scott W. Pink, an O’Melveny special counsel licensed to practice law in California and Illinois, and Jason Yan, an O’Melveny counsel licensed to practice law in the District of Columbia and Virginia contributed to the content of this newsletter. The views expressed in this newsletter are the views of the authors except as otherwise noted.

© 2023 O’Melveny & Myers LLP. All Rights Reserved. Portions of this communication may contain attorney advertising. Prior results do not guarantee a similar outcome. Please direct all inquiries regarding New York’s Rules of Professional Conduct to O’Melveny & Myers LLP, Times Square Tower, 7 Times Square, New York, NY, 10036, T: +1 212 326 2000.