
California has slammed the brakes on the most important AI regulation effort to date.
What happened: California Governor Gavin Newsom vetoed legislation that would have required businesses operating in the state (which includes nearly all the largest AI companies) to comply with sweeping new safety rules.
- The proposed law would have required companies deploying large-scale AI models to test for safety risks and include a human-operated “kill switch,” and it would have made developers legally liable if their tech were used to cause “critical harm.”
Why it matters: As the home of Silicon Valley, California is one of the most important regulators of AI — the laws it sets will shape how the technology develops worldwide.
Why it’s happening: Newsom and critics of the bill (including much of Silicon Valley and powerful venture capital investors) argued that regulating AI based on model size doesn’t make sense and that the law would chill innovation.
- Some large-scale models, they say, are used for low-risk tasks like customer service chatbots, while smaller models sometimes handle more critical functions that do require greater oversight.
What’s next: California will convene a group of experts to rewrite the bill, but until that’s done the most important AI developers will continue to operate outside the constraints of such safety laws. —TS