What does California’s new AI law mean for your business?

Quick answer
California just passed a narrow AI safety law that targets the biggest model developers, and the key word here is “narrow.” Most SaaS companies and startups are not directly covered, but your procurement, sales, and security reviews will feel it. Expect customers to ask for your AI disclosures, your upstream model details, and your incident playbook. The smart move is to treat this as a standards nudge, get your house in order now, and turn compliance into proof your buyers can see.

Why this matters

  • Buyers will copy California’s questions even if they are outside California.

  • RFPs will ask which model you use, how you assess catastrophic risk, and how you would report an incident.

  • You can turn this into an advantage by publishing simple, credible AI facts and procedures.

Who is actually covered

  • The law targets “frontier” model developers with very large training runs and very large revenue.

  • If you use models from big providers, you are usually out of scope, but you may still inherit new diligence and contract language.

What to do in the next 30 days

  1. Publish a 1-page AI factsheet: State what models you use, your use cases, your high-level safeguards, and how customers can reach you for AI issues.

  2. Add an AI section to your incident plan: Define what counts as an AI-related incident. Add a 24-hour internal escalation path.

  3. Update your model supplier file: Track model name and version, provider contact, eval notes, and release dates (a sketch follows this list).

  4. Tune contracts and DPAs: Add light AI representations, logging language, and change-notification triggers.

  5. Train the frontline: Give sales and support a short answer set for “Which model do you use?”, “How do you prevent misuse?”, and “How would you notify us?”
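
If you keep the supplier file in code rather than a spreadsheet, a minimal sketch could look like the following. The `ModelRecord` class and every field name here are illustrative assumptions, not a format the law or any provider prescribes.

```python
from dataclasses import dataclass, field

# Illustrative model supplier file entry -- all names are assumptions.
@dataclass
class ModelRecord:
    model_name: str        # base model you license
    model_version: str     # exact version string from the provider
    provider_contact: str  # who you reach for AI incidents or questions
    release_date: str      # ISO date this version went live for you
    eval_notes: list[str] = field(default_factory=list)  # dated eval/red-team notes

# Example entry with placeholder values.
suppliers = [
    ModelRecord(
        model_name="example-base-model",
        model_version="v2.1",
        provider_contact="trust@provider.example",
        release_date="2025-01-15",
        eval_notes=["2025-01-10: reviewed provider red-team summary"],
    )
]
```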

Procurement prompts you can send upstream today

Copy, paste, and ask your model provider or any AI-heavy vendor:

  • Which base model and version are you using?

  • What evals or red-team tests were run, including dates?

  • How do you detect and handle misuse or emergent unsafe behavior?

  • What is your definition of a “critical” incident, and what is the reporting timeline?

  • How are model weights and secrets protected?

  • What changes would trigger a customer notification?

Signals buyers want to see on your site

  • Clear model disclosures that use simple words.

  • An AI use register or page that lists core use cases, not marketing.

  • A short incident and takedown path with a real inbox.

  • A record of version changes that affect outputs or risk (example below).
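
A lightweight way to keep that record is one dated log entry per model change. Here is a minimal sketch, assuming invented field names and a hypothetical ai-change-log.jsonl file; none of this is a required format.

```python
import json
from datetime import date

# Illustrative change-log entry -- structure and field names are assumptions.
entry = {
    "date": date.today().isoformat(),
    "model": "example-base-model",
    "change": "upgraded from v2.0 to v2.1",
    "output_impact": "longer answers; refusal behavior tightened",
    "risk_review": "internal evals re-run; no new findings",
    "customer_notice": False,  # flips to True when a contract trigger is met
}

# Append as newline-delimited JSON so the history stays auditable.
with open("ai-change-log.jsonl", "a") as f:
    f.write(json.dumps(entry) + "\n")
```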

What to expect next

  • Large providers will publish more safety frameworks.

  • Enterprise buyers will standardize question sets.

  • States and regulators will keep borrowing from each other, so alignment beats whack-a-mole.

Definitions, in plain English

  • Frontier model: a very large, general-purpose model trained with extreme amounts of compute.

  • Catastrophic risk: a foreseeable model failure that could seriously harm people or cause huge damage.

  • Critical incident: a safety event that meets a defined threshold and must be reported fast.
