California’s SB 53 Forces AI Labs to Open Their Safety Playbooks: A New Model for AI Oversight

California enacted SB 53 on September 29, 2025, requiring the largest AI labs to publish safety protocols, risk assessments, and testing and governance documentation, and adding whistleblower protections. The law raises questions about AI transparency, compliance, and trade secrets, and could shape national AI regulation.

On September 29, 2025, California Governor Gavin Newsom signed SB 53, a first-in-the-nation law that requires the largest AI labs to increase transparency about their safety work and governance. The law applies to major developers including OpenAI, Anthropic, Meta, and Google DeepMind, and mandates public disclosure of safety protocols, risk assessments, and documentation about how advanced models are tested and governed. SB 53 also strengthens whistleblower protections so employees can raise safety concerns with reduced risk of retaliation.

Why SB 53 arrived now

Policymakers, researchers, and the public have increasingly emphasized AI safety and AI governance as models grow more capable. Frontier AI models are those that demonstrate broad capabilities near or beyond human performance and that require significant data and compute. Concentration of these models in a small number of firms has driven calls for greater AI transparency, accountability, and external oversight.

Key details and requirements

SB 53 focuses on boosting AI safety and enabling meaningful review by regulators and independent researchers. Core obligations include:

  • Applicability: The law targets the largest frontier AI labs, using training-compute and revenue thresholds to define which companies must comply.
  • Mandatory disclosures: Covered firms must publish safety protocols, risk assessments, and reports that explain testing regimes, mitigation measures, and governance practices so others can evaluate potential harms and controls.
  • Whistleblower protections: Employees at covered companies are afforded legal safeguards when they report internal safety concerns, encouraging internal reporting and safer development practices.
  • Rulemaking: Agencies will define reporting templates, timelines, and formats that aim to balance AI transparency with legitimate protection of trade secrets; a hypothetical machine-readable format is sketched below.
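
The reporting templates themselves will come out of rulemaking, so any concrete format today is speculative. As a minimal sketch, the Python below models what a machine-readable disclosure record might look like; every field name is a hypothetical illustration, not the statutory schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskAssessment:
    """One assessed risk category and the mitigations applied to it."""
    category: str                  # e.g. "misuse", "bias"
    severity: str                  # e.g. "low", "medium", "high"
    mitigations: list[str] = field(default_factory=list)

@dataclass
class SafetyDisclosure:
    """Hypothetical machine-readable disclosure for a frontier model."""
    developer: str
    model_name: str
    published: date
    safety_protocols: list[str]    # named safeguards applied pre-release
    testing_summary: str           # how the model was evaluated
    risk_assessments: list[RiskAssessment]

# Illustrative record; names and values are invented for the example.
disclosure = SafetyDisclosure(
    developer="ExampleLab",
    model_name="frontier-model-v1",
    published=date(2026, 1, 15),
    safety_protocols=["pre-deployment red teaming", "staged rollout"],
    testing_summary="Third-party evaluations for misuse and bias.",
    risk_assessments=[
        RiskAssessment("misuse", "high", ["rate limiting", "abuse monitoring"]),
    ],
)
```

A structured format along these lines, whatever its final shape, is what would let regulators and researchers compare disclosures across labs rather than parsing bespoke PDFs.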

What this means for industry and the public

SB 53 is likely to reshape AI compliance and operational processes at the largest labs. Expected effects include:

  • Greater scrutiny: Public safety reports will give researchers and regulators material for AI audits and oversight, improving detection of misuse, bias, and other risks.
  • Compliance work: Companies must invest in new documentation, risk management practices, and automated compliance monitoring to meet AI regulatory requirements; see the sketch after this list.
  • Trade secret tensions: Labs warn that detailed disclosures could expose proprietary techniques and data. Rulemakers will need to build protections that preserve commercial confidentiality while enabling accountability.
  • Precedent for wider policy: As the first state law with these provisions, SB 53 could influence federal AI policy and spur similar laws in other jurisdictions, increasing demand for standardized AI reporting formats and responsible AI frameworks.
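
To make the "automated compliance monitoring" point concrete, here is a minimal sketch of a pre-publication check that flags missing or empty sections in a draft disclosure. The required section names are assumptions for illustration, standing in until agencies publish real templates.

```python
# Hypothetical pre-publication gate: confirm a draft disclosure contains
# every section a (placeholder) reporting template would require.
REQUIRED_SECTIONS = {
    "safety_protocols",
    "risk_assessments",
    "testing_summary",
    "governance_practices",
}

def missing_sections(draft: dict) -> set[str]:
    """Return required sections that are absent or empty in the draft."""
    return {s for s in REQUIRED_SECTIONS if not draft.get(s)}

draft = {
    "safety_protocols": ["red teaming"],
    "risk_assessments": [],   # empty, so it gets flagged
    "testing_summary": "Summary of third-party evaluations.",
}

gaps = missing_sections(draft)
if gaps:
    print("Not ready to publish; missing:", sorted(gaps))
```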

Implications for partners and customers

Organizations that license or integrate models from covered labs should begin updating contracts and due diligence workflows to address AI risk assessment and AI accountability. Procurement, legal, and security teams will need to consider how safety disclosures affect liability, service level expectations, and data protection obligations.
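
As a starting point for such a workflow, the sketch below shows one way a procurement team might track vendor disclosure status. The fields and decision rules are assumptions for illustration, not requirements drawn from the bill text.

```python
from dataclasses import dataclass

# Hypothetical procurement record: does each model vendor meet SB 53's
# thresholds, and has it published the disclosures the law requires?
@dataclass
class VendorReview:
    vendor: str
    covered_by_sb53: bool         # assumed flag, set during due diligence
    disclosure_url: str | None    # link to the published safety report

    def action_needed(self) -> str:
        if not self.covered_by_sb53:
            return "out of scope: apply standard AI risk review"
        if self.disclosure_url is None:
            return "follow up: request the vendor's published safety report"
        return "review disclosure against contract and SLA terms"

review = VendorReview("ExampleLab", covered_by_sb53=True, disclosure_url=None)
print(review.action_needed())  # -> follow up: request the vendor's ...
```

In practice a record like this would feed an existing vendor risk system rather than stand alone.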

Balancing transparency and innovation

Transparency is not a cure-all. Disclosures are most useful when paired with clear regulatory standards, third-party review capacity, and technical methods for protecting sensitive information. The law pushes forward the broader conversation about how to achieve responsible AI with mechanisms such as AI audits, ethical AI frameworks, and independent oversight.

Next steps to watch

  • How implementing agencies define threshold criteria and reporting templates for AI safety and governance.
  • Whether rulemaking adopts protected channels for trade secrets while preserving AI transparency.
  • How federal lawmakers respond and whether SB 53 informs national AI regulation and compliance strategies.

Brief FAQ

Who does SB 53 cover? The largest AI labs, as determined by model capability and revenue or size thresholds.

What must companies publish? Safety protocols, risk assessments, and documentation about testing and governance to support external review and AI risk management.

Why does it matter? SB 53 is designed to increase public trust, enable AI oversight, and create stronger whistleblower protections so employees can report safety lapses without fear of retaliation.

SB 53 marks a meaningful shift in AI governance toward greater transparency, AI accountability, and stronger whistleblower protections. Businesses and researchers should prepare for new compliance requirements and engage in shaping reporting standards that protect both safety and innovation.
