Transparency in Frontier Artificial Intelligence Act (SB-53): California Requires New Standardized AI Safety Disclosures

WilmerHale Privacy and Cybersecurity Law Blog

On September 29, 2025, California Governor Gavin Newsom signed into law the Transparency in Frontier Artificial Intelligence Act (“TFAIA”), making California the first state to require public, standardized safety disclosures from developers of advanced artificial intelligence (AI) models.

In the absence of imminent comprehensive federal legislation addressing AI safety, California is among a growing list of states seeking to lead on AI safety issues. As we described in a prior post, Colorado was the first state to pass a broad AI law, the Colorado AI Act, which imposes substantive disclosure, risk management, and transparency requirements on developers and deployers of high-risk AI systems, though the law’s implementation has been delayed until June 2026. Relatedly, Texas passed the Texas Responsible AI Governance Act in June 2025, which places categorical limitations on the development and deployment of AI systems, though that law is much narrower than Colorado’s and most of its requirements apply only to government entities.

TFAIA’s passage marks the culmination of a long-running debate in California over the proper scope of AI safety regulation after similar legislation stalled last year amid industry concerns that it would stymie AI innovation. The law is significantly narrower than originally envisioned: its requirements apply only to large developers, not deployers, and address only the most significant harms. TFAIA is nonetheless poised to set benchmarks for transparency and accountability that will shape safety practices across the United States and abroad. This post summarizes the law’s most important features and highlights key provisions and areas to watch in the coming months across the AI legislative landscape.

Overview of TFAIA

TFAIA requires developers to disclose how they manage safety risks and introduces new mechanisms for transparency, accountability, and enforcement. Developers who are not in compliance with the law when it takes effect in January 2026 face civil penalties of up to $1,000,000 per violation, enforced by the California Attorney General.

Whom it Covers. TFAIA applies to developers of frontier AI models. The law defines a “frontier model” as a foundation model trained on a quantity of computing power greater than 10^26 integer or floating-point operations (“FLOPs”), including any computing power used in subsequent fine-tuning, reinforcement learning, or other material modifications. That threshold mirrors the 2023 AI Executive Order (“Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”) and exceeds the EU AI Act’s threshold of 10^25 FLOPs. To date, few companies have publicly disclosed that they have surpassed the 10^26 FLOP threshold (and thus would be covered by SB-53), but additional companies are anticipated to meet this threshold in the coming year. The law imposes additional transparency and accountability requirements on “large frontier developers,” those whose annual revenue, including that of affiliates, exceeded $500,000,000 in the preceding calendar year.
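To make the two coverage thresholds concrete, the minimal sketch below illustrates, in Python, how a developer might screen whether a model and its developer fall within TFAIA’s definitions. The field names, helper functions, and example figures are hypothetical illustrations rather than anything drawn from the statute, and the statute’s text (and any future regulatory guidance) controls how compute and revenue are actually measured.

```python
# Illustrative sketch only: a rough screen for TFAIA's definitional thresholds.
# The dataclass fields, function names, and example numbers are hypothetical.

from dataclasses import dataclass

FRONTIER_MODEL_FLOP_THRESHOLD = 1e26              # training compute, in FLOPs
LARGE_DEVELOPER_REVENUE_THRESHOLD = 500_000_000   # USD, preceding calendar year


@dataclass
class ModelTrainingRun:
    pretraining_flops: float
    fine_tuning_flops: float = 0.0          # compute from material modifications counts
    other_modification_flops: float = 0.0

    @property
    def total_flops(self) -> float:
        return (self.pretraining_flops
                + self.fine_tuning_flops
                + self.other_modification_flops)


def is_frontier_model(run: ModelTrainingRun) -> bool:
    """Trained on more than 10^26 FLOPs, including subsequent modifications."""
    return run.total_flops > FRONTIER_MODEL_FLOP_THRESHOLD


def is_large_frontier_developer(annual_revenue_incl_affiliates_usd: float) -> bool:
    """Annual revenue, including affiliates, exceeded $500M in the prior year."""
    return annual_revenue_incl_affiliates_usd > LARGE_DEVELOPER_REVENUE_THRESHOLD


# Hypothetical example: a 9e25-FLOP pretraining run plus 2e25 FLOPs of fine-tuning
# crosses the frontier-model threshold even though pretraining alone did not.
run = ModelTrainingRun(pretraining_flops=9e25, fine_tuning_flops=2e25)
print(is_frontier_model(run))                    # True
print(is_large_frontier_developer(750_000_000))  # True
```

Because compute used in fine-tuning and other material modifications counts toward the total, a model could, on this reading, cross the threshold only after its initial release.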

Requiring publication of a general frontier AI framework. Under TFAIA, large frontier developers must publish an accessible general safety framework. Among other things, the framework must show how the developer incorporates national standards, international standards, and industry-consensus best practices; explain how the developer assesses whether the frontier model has capabilities that could pose catastrophic risk; and describe how the developer mitigates such risks and measures the effectiveness of those mitigations, including through the use of third parties. The framework must also include cybersecurity practices to secure unreleased model weights from unauthorized modification or transfer, and companies must institute internal governance measures to ensure the implementation of the required cybersecurity processes.

While the law’s legislative findings acknowledge that many major AI developers voluntarily publish safety frameworks, the findings note that “not all developers” provide reporting sufficient to ensure necessary transparency and protect the public. TFAIA does not specify particular standards, and we anticipate that, at least for initial guidance, industry will look to NIST frameworks such as the Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (NIST AI 600-1) as well as international standards such as ISO 42001. Large frontier developers must review their frameworks annually and publish any material modifications within 30 days.

Bolstering transparency at deployment. Under TFAIA, when a frontier developer releases a new or substantially modified frontier model, it must publish a transparency report identifying the model’s release date, modalities (such as text, image, audio, or video inputs and outputs), intended uses, and any restrictions on deployment. Large frontier developers must also summarize their catastrophic-risk assessments of the model, disclose the results, and describe the role of any third-party evaluators in those assessments.

Mandating reporting of “critical safety incidents.” Under TFAIA, frontier developers must, within 15 days of discovery, notify the California Governor’s Office of Emergency Services (OES) of any “critical safety incident,” defined as model behavior that results in or materially risks death, serious injury, or loss of control over the system. If the incident presents an imminent risk of death or serious injury, disclosure must be made within 24 hours to the appropriate public safety authority. OES will establish a reporting portal to receive both public and confidential submissions of such incidents and, separately, will create a secure channel for large frontier developers to confidentially submit summaries of any assessments evaluating the potential for catastrophic risks posed by their frontier models. Beginning on January 1, 2027, OES will issue anonymized annual summaries of reported incidents.

Whistleblower protections. The law establishes strong whistleblower protections for employees of frontier developers, prohibiting retaliation, requiring large frontier developers to establish anonymous reporting channels, and authorizing the recovery of fees for successful claimants. The California Attorney General will publish anonymized annual reports on whistleblower activity beginning in 2027.

The “CalCompute” Consortium. TFAIA directs the establishment of a consortium to design a state-backed public cloud compute cluster—“CalCompute”—that would provide researchers, universities, and others with advanced computing capabilities to support safe, equitable, and sustainable AI innovation in the public interest. By January 1, 2027, the consortium is to deliver a report to the California Legislature detailing CalCompute’s proposed design, governance structure, and funding framework.

Ongoing updates to the law. The law recognizes that the AI frontier is constantly evolving and that the law, consequently, should evolve with it. It specifically directs California’s Department of Technology to review annually the definitions of “frontier model,” “frontier developer,” and “large frontier developer” and to submit recommendations to the California Legislature on whether and how to update those definitions so that they align with international and federal standards and are both verifiable and simple. The law also acknowledges that “foundation models developed by smaller companies or that are behind the frontier” may nonetheless pose “significant catastrophic risk” and may require additional legislation in the future. This echoes concerns raised in the California Report on Frontier AI Policy, an independent expert report commissioned by Governor Newsom in September 2024, which recommended that legislators incorporate mechanisms to update the scope of safety legislation as AI technology evolves and not rely exclusively on computing power in defining safety thresholds.

Meanwhile, several of the more controversial features of last year’s vetoed AI safety bill, SB-1047, were omitted, including mandatory third-party audits of frontier models, pre-launch testing and certification before deployment, and kill-switch mandates requiring the ability to shut down deployed models. Overall, the version that has now become law emphasizes transparency and accountability rather than pre-approval and direct control over model operations.

What’s Next on the Horizon for AI Safety Regulation. Other AI-specific statutes will take effect in California in 2025 and 2026, including transparency mandates (AB 2013 and SB 942) that will require developers to disclose training data and embed invisible watermarks in AI-generated content. Other AI bills are under active consideration in Sacramento. Among the pending proposals, AB 412 would require developers to document any copyrighted materials they knowingly use in training their AI models and to provide a mechanism for copyright holders to verify whether their works appear in training datasets. SB 243, which has passed both houses of the California Legislature and now awaits the governor’s signature, would regulate “companion chatbots.” The bill requires clear disclosures to users that they are interacting with AI, periodic reminders that they are not conversing with a human being, and restrictions designed to prevent minors from accessing sexual content.

Meanwhile, Congress continues to debate whether to impose a federal “moratorium” that would, for a specified period, preempt most state AI legislation. Earlier this summer, lawmakers rejected efforts to impose a 10-year or five-year ban on state AI laws, but the Trump Administration’s AI Action Plan has revived the issue by threatening to withhold AI-related federal funds from states whose regulations are deemed “burdensome” or “unduly restrictive.” Absent more direct federal engagement, the breakneck pace of state AI lawmaking is likely to accelerate (over 100 bills were enacted across the country in the previous legislative session alone), and we anticipate that other states will pass their own versions of California’s AI safety law.
