California Privacy Protection Agency Board Conducts Pre-Rulemaking Sessions

On March 29 and March 30, 2022, the Board of the California Privacy Protection Agency (CPPA or “Agency”) held informational sessions on the California Privacy Rights Act (CPRA) and took public comment on its pre-rulemaking activities. Sessions on the first day covered personal information flows and the California Consumer Privacy Act (CCPA); sessions on the second day covered risk assessments and consumer rights with regard to automated decision-making. The informational sessions were led by academics, government officials, and other advisors.

The sessions highlighted a number of topic areas that could be priorities for the CPPA in both rulemaking and enforcement, including global opt-out signals, dark patterns, transparency, and data security. Based on earlier public comments, we anticipate that the CPPA will release regulations under the CPRA by this fall. These regulations will provide important guidance to businesses seeking to understand their compliance obligations under the law. We will continue to keep you posted on notable updates on this front.

Overview of the Sessions

On the first day of the informational sessions, six speakers provided an overview of the CCPA and of personal information flows. Selected takeaways from the sessions include:

  1. Global Opt-Out Preference Signals: Stacey Schesser (Supervising Deputy Attorney General, California Department of Justice) spoke about opt-out preference signals and the CCPA. She recommended that the CPPA offer consumers a global opt-out option to facilitate the exercise of the opt-out right. In the CCPA regulations it drafted (11 CCR § 999.315), the California Attorney General (OAG) required businesses that collect personal information from consumers online to treat user-enabled global privacy controls “that communicate or signal the consumer’s choice to opt-out of the sale of their personal information as a valid request.” Schesser noted that the OAG drafted this regulation based on its experience enforcing the California Online Privacy Protection Act’s (CalOPPA) requirement that companies disclose whether they respond to “Do Not Track” signals.
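For readers on the implementation side, the Global Privacy Control (GPC) signal referenced above is transmitted to websites as the `Sec-GPC: 1` HTTP request header (and exposed to scripts as `navigator.globalPrivacyControl`). The following is a minimal, illustrative sketch of honoring that signal server-side; the function and field names are our own, not from the regulation or the sessions.

```python
# Illustrative sketch: treating a Global Privacy Control (GPC) signal as a
# valid opt-out-of-sale request. Function and field names are hypothetical.

def gpc_opt_out_requested(headers: dict) -> bool:
    """Return True if the request carries a GPC opt-out preference signal."""
    # HTTP header names are case-insensitive, so normalize before lookup.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"

def handle_request(headers: dict, user_profile: dict) -> dict:
    """Record the opt-out when a GPC signal accompanies the request."""
    if gpc_opt_out_requested(headers):
        user_profile = {**user_profile, "sale_opt_out": True}
    return user_profile

profile = handle_request({"Sec-GPC": "1"}, {"id": "abc", "sale_opt_out": False})
print(profile["sale_opt_out"])  # True
```

The design point mirrors the regulation: the business does not wait for a separate webform submission; the signal itself is the valid request.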
  2. Personal Information Flows: Lisa Kim (Deputy Attorney General, California Department of Justice) spoke about how the CCPA interacts with personal information data flows. Kim discussed the consumer rights granted by both the CCPA and the CPRA, and whether consumers could exercise the rights under these laws with first party businesses, third party businesses, and service providers.
  3. Dark Patterns: Two speakers (Dr. Jennifer King, Stanford Institute for Human-Centered Artificial Intelligence, and Lior J. Strahilevitz, University of Chicago) spoke about how “dark patterns” manifest online and the potential dangers of these practices. Dr. King recommended that the Agency: (1) hire staff with expertise in “dark patterns” to assess and measure the impact of manipulative design; and (2) consider developing positive guidance and standards around decision points. Strahilevitz shared his research into dark patterns, including his finding that mild dark patterns are more insidious than aggressive ones because they are highly effective yet do not alienate consumers. Strahilevitz also recommended that businesses use symmetry principles to avoid dark patterns (e.g., opting out of a service should be as easy as opting in).
  4. Improving Consumer Understanding: Lorrie Faith Cranor (Carnegie Mellon University) spoke about how companies and governments can help streamline privacy information for users. She suggested adopting the use of (1) a privacy icon with accompanying words; (2) standardized interfaces and search engines for privacy information; (3) better interface design so that consumers can make better privacy choices; and (4) alternatives to long privacy notices.

On the second day of the informational sessions, five speakers briefed the Agency on the challenges and potential solutions it should consider when making rules concerning data processing and automated decision-making (“ADM”). This session addressed the Agency’s September 22, 2021 invitation for public comments, in which the Agency asked: (i) what activities constitute “automated decision making” or profiling; (ii) when consumers should be able to access information about businesses’ use of ADM; (iii) what information businesses must provide to consumers in response to access requests, and what businesses must do to provide “meaningful information about the logic” involved in the ADM process; and (iv) the scope of consumers’ opt-out rights with regard to ADM. Selected takeaways and suggestions from the second day of sessions include:

  1. Limit Racial Profiling. Historically, artificial intelligence (AI)-driven algorithms used in ADM were described as “just math,” alluding to their neutrality. Today, academics, researchers, and lawmakers understand the complexity and the potential risk of racial profiling and disparate impact when companies use ADM AI. The speakers encouraged the Agency to closely evaluate how ADM models, AI, and AI-driven algorithms can strive to be more neutral and less discriminatory. The speakers suggested that the Agency ensure that: (i) companies explain the inner workings of the ADM model and algorithm itself, including how it collects data, notwithstanding intellectual property concerns; and (ii) companies assess the impact of the model at a broad community level rather than at the individual level. In other words, the evaluation would need to consider whether there is any profiling impacting an entire community class, not just whether the model affects an individual.
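One established way to operationalize the group-level evaluation the speakers described is a disparate-impact ratio, such as the “four-fifths rule” drawn from US employment-selection guidance: compare approval rates across groups rather than inspecting individual decisions. The sketch below is our own illustration, not a method endorsed in the sessions, and all data is made up.

```python
# Illustrative sketch: evaluating an ADM model's impact at the group level
# using a disparate-impact ratio (the "four-fifths rule"). Data is synthetic.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Lowest group approval rate divided by highest; < 0.8 is a common red flag."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Group A: 8 of 10 approved (80%); Group B: 5 of 10 approved (50%).
sample = ([("A", True)] * 8 + [("A", False)] * 2 +
          [("B", True)] * 5 + [("B", False)] * 5)
print(round(disparate_impact_ratio(sample), 3))  # 0.625 -> below the 0.8 threshold
```

A per-individual audit of the same decisions could look unremarkable; only the aggregate comparison surfaces the community-level disparity the speakers asked the Agency to target.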
  2. Data Privacy Impact Assessment. Under the GDPR, activities likely to result in high risk, such as ADM, require companies to perform Data Privacy Impact Assessments (DPIAs). A DPIA is a process that helps a company identify the data protection risks of a project and mitigate them; in this way, the company can assess whether the risks are acceptable and introduce additional controls if necessary. The question for the Agency is whether ADM warrants requiring DPIAs. The speakers noted that it is unclear whether DPIAs can serve the EU and the US in the same way, because the underlying goals differ: the US notably lacks individual rights to explanation, contestation, and fair decisions. The speakers further noted challenges with DPIAs because they are quantitative, while the harms of ADM, namely the potential racial profiling described above, are not quantitative but rather involve contested concepts such as discrimination and fairness. It will be interesting to see whether the CPPA adopts DPIAs under the CPRA.
  3. Transparency and Explainability. Like Articles 13–15 and Article 22 of the GDPR, which provide rights to “meaningful information about the logic involved” in automated decisions, ADM will need to comply with principles of transparency and explainability. The speakers suggested several ways to promote these principles: (i) documentation should describe how the models work, not just what they collect; (ii) DPIAs should document decisions before they are made and surface future social impacts; and (iii) audits should ensure consistent monitoring of risk.
  4. Security Frameworks. Addressing the kind of security frameworks needed for ADM, the speakers noted that most of today’s existing security frameworks are congruent and that there is consensus around the need for security hygiene. Numerous security standards support security hygiene, such as the NIST Cybersecurity Framework, the ISO 27000 series, NIST SP 800-53, and PCI-DSS. Creating a brand-new security framework would likely duplicate that effort. Thus, the suggestion to the Agency was to consider relying on existing frameworks, such as the NIST Cybersecurity Framework or the International Organization for Standardization’s ISO 27000 series, when addressing ADM rulemaking.

In a month, the CPPA Board will hold additional review sessions, this time with stakeholders and interested parties. The dates of these sessions have not yet been announced, but we look forward to following them.
