Proposed California Privacy Regulations May Heighten Obligations for “High Risk” Activities
September 11, 2023
At its latest board meeting, the California Privacy Protection Agency (CPPA) board released and discussed draft regulations on risk assessments (“Risk Assessment Regulations”) and cybersecurity audits (“Audit Regulations”), offering an early indication of the agency’s position on artificial intelligence and other automated technologies. Although the board indicated that finalizing these regulations will take several more meetings, the members’ discussions suggest that the Audit Regulations may be finalized before the Risk Assessment Regulations.
The board’s proposed approach of requiring risk assessments for the use of such technologies appears to be inspired by the data protection impact assessments required for “high risk” processing activities, such as profiling or systematic monitoring, under Article 35 of the EU GDPR. While several points in the draft regulations are marked “TBD” or bracketed (including certain consumer-information and revenue thresholds), the Risk Assessment Regulations further suggest that the CPPA may be inclined to require businesses to provide thorough explanations for their use of such technologies and to implement appropriate privacy safeguards. Similarly, the Audit Regulations may require businesses to invest more time and resources in developing a more sophisticated data security program than the CCPA currently requires.1 Businesses engaging in activities defined as “high risk” should familiarize themselves with the obligations that may arise as these regulations are finalized.
Risk Assessments
The Risk Assessment Regulations’ definitions of Artificial Intelligence2 and Automated Decisionmaking Technology (which includes systems derived from Artificial Intelligence)3 encompass technologies that have been the focus of recent litigation, such as generative models and machine-learning systems. Examples of high-risk processing activities that would require a risk assessment under the regulations include a rideshare provider’s use of automated decisionmaking technology, the installation of video cameras inside a delivery service’s vehicles, and the processing of photographs to extract faceprints for training facial recognition technology. These definitions may be narrowed, however, given the board’s concern about their breadth and their potential overlap with forthcoming regulations focused on automated decisionmaking.
If a covered business is using Automated Decisionmaking Technology, the risk assessment must include:
- Why the business is using or seeks to use the Automated Decisionmaking Technology to achieve the purpose of the processing, including any benefits of using the technology over manual processing, the appropriate use(s) of the Automated Decisionmaking Technology, and any limitations on the appropriate use(s) of the Automated Decisionmaking Technology;
- The personal information processed by the Automated Decisionmaking Technology, including the personal information used to train the technology and the sources of the personal information;
- The output(s) secured from the Automated Decisionmaking Technology and how the business will use the output(s);
- The steps the business has taken or plans to take to maintain the quality of personal information processed by the Automated Decisionmaking Technology, including personal information used by the business to train the technology;
- The logic of the Automated Decisionmaking Technology, including any assumptions about the logic;
- How the business evaluates its use of the Automated Decisionmaking Technology for validity, reliability, and fairness;
- If the business has not consulted external parties in its preparation or review of the risk assessment, why the business did not do so and which safeguards it has implemented to address risks to consumers’ privacy that may arise from the lack of external party consultation; and
- Any safeguards the business plans to implement to address negative impacts on consumers’ privacy specific to its use of the Automated Decisionmaking Technology or to data sets produced by or derived from it.
Furthermore, risk assessments for businesses processing personal information to train Artificial Intelligence or Automated Decisionmaking Technology must include:
- The appropriate purposes for which people may use the Artificial Intelligence or Automated Decisionmaking Technology;
- How the business has provided or plans to provide the required information to those people, and any safeguards the business has implemented or will implement to ensure that the Artificial Intelligence or Automated Decisionmaking Technology is used for appropriate purposes by other people; and
- For any business providing Artificial Intelligence or Automated Decisionmaking Technology for other businesses, how it has provided or plans to provide the necessary facts for such other businesses to conduct their own risk assessments.
In addition to applying to the use of Automated Decisionmaking Technology in furtherance of significant decisions4 and to the processing of personal information to train Artificial Intelligence or Automated Decisionmaking Technology, the Risk Assessment Regulations would apply to new processing activities that constitute a “significant risk to consumers’ privacy.” These activities are defined to include (1) processing the personal information of consumers who the business has actual knowledge are under 16; (2) processing the personal information of consumers who are employees, independent contractors, job applicants, or students, using technology to monitor them (e.g., keystroke loggers, video or audio recording or live-streaming, and facial recognition); and (3) processing the personal information of consumers in publicly accessible places, using technology to monitor their behavior, location, movements, or actions (e.g., WiFi/Bluetooth tracking, drones, and geofencing).
Still up in the air is the degree to which businesses would have to disclose any such risk assessments. Bracketed language in the Risk Assessment Regulations contains one proposal: covered businesses would annually submit abridged assessments or certifications of compliance while making the assessments themselves available to the CPPA or the California Attorney General on request. Other options to be discussed include requiring assessments every three years, or updates only as necessary, except that processing involving Automated Decisionmaking Technology would require access and opt-out rights to be reviewed and updated at least annually, biannually, or once every three years.
Cybersecurity Audits
As the board acknowledged during the September 8 meeting, one of the key challenges in finalizing the Audit Regulations will be establishing the applicability threshold, especially since these regulations have no counterparts under similar state privacy laws (e.g., Colorado’s). The current draft presents a few options for which businesses would be considered to have processing that constitutes a “significant risk”: (1) the business meets the CCPA revenue threshold and processed in the prior calendar year (a) the personal information of one million or more consumers or households, (b) the sensitive personal information of 100,000 or more consumers, or (c) the personal information of 100,000 or more consumers who the business has actual knowledge were under 16; (2) the business’s gross revenue exceeds a to-be-determined amount; or (3) the business has more than a to-be-determined number of employees. Covered businesses would be required to conduct audits annually, without any gaps in coverage.5
Throughout the Audit Regulations, the board emphasizes the impartiality and objectivity of the audit and the auditor. The audit must identify the specific evidence examined to reach its decisions and assessments and cannot rely primarily on assertions by the business’s management. If the business uses an internal auditor, that auditor cannot develop, implement, or maintain the business’s cybersecurity program; prepare the business’s documents; or participate in business activities that the auditor may review in the current or subsequent cybersecurity audits. Furthermore, internal auditors must report issues directly to the business’s board of directors or governing body; if none exists, the auditor must report to the highest-ranking executive who does not have direct responsibility for the business’s cybersecurity program.
As for the substance of the audit itself, the Audit Regulations provide some helpful guidance. Overall, the audit must assess and document the policies, procedures, and practices that protect the security, confidentiality, integrity, and availability of personal information; identify any gaps or weaknesses in the business’s cybersecurity program; address the status of any gaps or weaknesses identified in any prior audit; and specifically identify any corrections or amendments to prior audits.
The Audit Regulations include two options for additional audit requirements. One would require the audit to assess and document how the business’s cybersecurity program considers and protects against (a) unauthorized access, destruction, use, modification, or disclosure of personal information, or unauthorized activity resulting in the loss of availability of personal information, and (b) impairment of consumers’ control over their personal information, as well as the economic, physical, psychological, and reputational harm associated with the activities described in (a). The other would require the audit to assess and document risks from cybersecurity threats, including those arising from a cybersecurity incident (each term defined in the draft regulations).
The audit would also need to include specific details on the components of a business’s cybersecurity program, or explanations of why certain components are not necessary, including multi-factor authentication, encryption, vulnerability scans, penetration tests, and oversight of service providers, among others.
To demonstrate compliance with the Audit Regulations, a business would need to submit to the CPPA either (1) a written certification that the business complied with the requirements during the 12 months the audit covers, or (2) a written acknowledgement that the business did not fully comply with the requirements during that period, along with (a) an identification of all sections and subsections with which the business did not comply and a description of the nature and extent of such noncompliance, and (b) a remediation timeline or confirmation that remediation has been completed.
Even businesses not directly subject to the CCPA may find themselves affected by these draft regulations should they become final. Both draft regulations impose obligations on third-party service providers to assist with a business’s efforts to conduct cybersecurity audits and risk assessments. Therefore, it will be important for all businesses with ties to California, especially those using artificial intelligence and other Automated Decisionmaking Technology, to continue monitoring these regulations as they evolve.
1 Under the current statute, a covered business must implement only “reasonable security procedures and practices” rather than certain specified procedures and practices.
2 “Artificial Intelligence” means an engineered or machine-based system, including a generative model, that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.
3 “Automated Decisionmaking Technology” means any system, software, or process (including one derived from machine learning, statistics, other data-processing techniques, or artificial intelligence) that processes personal information and uses computation, in whole or in part, to make or execute a decision or to facilitate human decisionmaking. The draft regulations specify that Automated Decisionmaking Technology includes automated processing used to evaluate certain personal aspects of a natural person and make predictions concerning that person (i.e., profiling).
4 The Risk Assessment Regulations specify such use is “in furtherance of a decision that results in the provision or denial of financial or lending services, housing, insurance, education enrollment or opportunity, criminal justice, employment or contracting opportunities or compensation, healthcare services, or access to essential goods, services, or opportunities.”
5 The first audit may be conducted within 24 months of the effective date of the regulations.
This memorandum is a summary for general information and discussion only and may be considered an advertisement for certain purposes. It is not a full analysis of the matters presented, may not be relied upon as legal advice, and does not purport to represent the views of our clients or the Firm. Randall W. Edwards, an O’Melveny partner licensed to practice law in California, Sid Mody, an O’Melveny partner licensed to practice law in Texas, Scott W. Pink, an O’Melveny special counsel licensed to practice law in California and Illinois, Emily Losi, an O’Melveny associate licensed to practice law in New York, and Kayla Tanaka, an O’Melveny associate licensed to practice law in California, contributed to the content of this newsletter. The views expressed in this newsletter are the views of the authors except as otherwise noted.
© 2023 O’Melveny & Myers LLP. All Rights Reserved. Portions of this communication may contain attorney advertising. Prior results do not guarantee a similar outcome. Please direct all inquiries regarding New York’s Rules of Professional Conduct to O’Melveny & Myers LLP, Times Square Tower, 7 Times Square, New York, NY, 10036, T: +1 212 326 2000.