Among the hundreds of new laws taking effect in California this year are those regulating artificial intelligence.
SACRAMENTO, Calif. — California leaders say new state laws governing artificial intelligence and social media companies remain in full force, even if the Trump administration moves to block state-level oversight in favor of a single federal standard.
Among hundreds of new laws taking effect this month, several measures make California the first state in the nation to impose broad transparency and consumer protection requirements on large AI platforms.
Democratic state Sens. Scott Wiener of San Francisco and Steve Padilla of San Diego, authors of several of the new laws, say federal inaction has forced states to step in.
“Congress has not demonstrated its ability thus far to pass meaningful technology policy,” Wiener said.
“It would be a great argument if they did something,” Padilla said of the Trump administration’s executive order targeting state AI rules.
Padilla, who chairs the Senate Government Organization Committee, which oversees areas including AI, added: “It’s big politics, I guess, for some. It’s a set-up, as far as we’re concerned, it’s kind of a distraction.”
Wiener’s legislation – SB 53 – requires the largest AI companies to publicly disclose their safety and security protocols and report serious security incidents. Padilla’s law – SB 243 – targets what he describes as predatory “companion chatbots,” requiring platforms to clearly warn users when they are interacting with artificial intelligence rather than a human.
“These systems are basically acting like a human being when they’re not,” Padilla said, warning that they can lead users — especially young people or those in emotional or mental crisis — into dangerous situations.
The new laws come as President Donald Trump has signed an executive order establishing a task force aimed at challenging state AI regulations and potentially penalizing states by withholding federal funding. The White House did not respond to requests for comment on the status of these efforts.
Wiener said that if a national standard is ultimately adopted, it must be strict enough to effectively protect users.
“If we are going to have a national standard, it must be strong, meaningful and impactful,” he said.
California lawmakers say additional legislation is already in the works. Padilla – through SB 300 – is pursuing stricter age verification requirements and measures to ensure minors are not exposed to sexually explicit content through AI platforms.
Republican Sen. Roger Niello of Fair Oaks said he supports safeguards to protect children, which is why he backed both of Padilla’s bills, but he warned against regulations that could stifle innovation.
“I worry about how it’s used, not how it’s developed,” Niello said. “Let’s not mess with the way tech companies develop their technology. That would really discourage innovation.”
The debate has also moved beyond the Capitol and is now in the hands of voters. Earlier this month, OpenAI, the maker of ChatGPT, partnered with the child safety advocacy group Common Sense Media to advance a proposed ballot initiative known as the Parents and Kids Safe AI Act. If approved by voters, the measure would require age assurance, ban targeted advertising to minors and prohibit emotional manipulation or encouragement of harmful behavior.
Assemblymember Rebecca Bauer-Kahan, a Democrat from San Ramon, said voter-led action could become increasingly likely if state lawmakers or Congress don’t act.
“If we don’t act in California, voters can and they will,” Bauer-Kahan said. “There is tremendous consensus across party lines. I work with parents, grandparents, aunts, uncles, we all have children in our lives that we love. And we experience first-hand what happens to our children online.”
Last year, Bauer-Kahan also authored legislation that “would have made chatbots safe by design for our children,” as she described it. The bill passed both houses with bipartisan support but was vetoed by the governor. Bauer-Kahan said she plans to reintroduce it.
Padilla is introducing another bill that targets toys with AI chatbot capabilities. The bill seeks to impose a four-year moratorium on the sale and manufacture of these toys intended for children under the age of 18.
The goal, Padilla said, is to allow more time to develop safeguards and better understand the impacts of these toys.
“Today, in the 21st century, we see how quickly this technology is evolving, literally every day,” the senator said. “But at the same time, [AI is] not sophisticated enough to write into the programming that it can’t talk to a 12-year-old about sex.”
State leaders say California can strike a balance between protecting consumers and promoting innovation, even as tensions grow with the federal government over who should regulate the rapidly evolving AI industry.
“We can walk and chew gum at the same time in California,” Padilla noted.