Flicking the Kill Switch on California’s AI Leadership

California is on the verge of crippling its future as a leader in the development of artificial intelligence with Senate Bill 1047. This legislation is a fundamentally flawed attempt to regulate AI. It mandates “kill switches” for AI systems, holds developers, rather than the criminals who misuse their tools, liable for that misuse, and demands extensive safety protocols based on hypothetical risks.

SB 1047 tries to regulate a technology still in its infancy. AI safety science is evolving rapidly, and our understanding of risks and mitigation strategies is just beginning. By imposing stringent requirements based on hypothetical scenarios and underdeveloped standards, this bill puts the cart before the horse.

In practice, these ideas will create enormous barriers and costs for innovators. The “kill switch” requirement would mean that no one would develop open-source AI, since competitors could shut down their work. Holding developers accountable for bad actors’ misuse of their tools shifts responsibility away from the criminals themselves, who can exploit AI systems knowing the consequences will fall on someone else. And addressing “hypothetical risks” asks developers to gaze into a crystal ball and foresee decades into the future, an impossible task given that even relatively established tools like ChatGPT are only a few years old.

These measures will only stifle innovation and competition, discourage open-source AI development, and push AI research and investment out of California, all while failing to make AI tools “safer.” SB 1047’s vague and overly broad definition of “artificial intelligence” could inadvertently sweep in basic technologies like photo editing and text-to-speech, subjecting routine tech tools to unnecessary regulation.

Perhaps the most alarming aspect of the bill is its potential to stifle open-source AI development. SB 1047 holds developers responsible for how their AI is used, which scares companies away from sharing their open-source models. Open-source AI has already led to breakthroughs like detecting diseases such as cancer and heart conditions, but the more advanced and useful these models become, the higher the risk that their creators get sued. It’s akin to a tax on creation: raising liability makes it more expensive for entrepreneurs to build, and the end result is less innovation.

The requirement for “kill switches” in AI systems is also troubling because it creates enormous uncertainty for businesses building products on AI models. No entrepreneur wants to invest time and resources in a product that could be rendered useless if the underlying AI system is shut down at the whim of a regulator.

Additionally, SB 1047 could severely impact California’s economy. The tech industry drives the state’s prosperity and employs a huge share of its workforce: a 2024 CompTIA study found that California led the nation in net tech employment, with over 1.5 million workers in the industry. A restrictive regulatory environment like the one SB 1047 would impose risks undermining that vibrant tech workforce, locking out competition from new tech firms and pushing AI development to states or countries with more favorable conditions.

Creating an uncertain and restrictive environment for AI development undermines California’s position as a global leader in technological innovation. Safety is essential, but SB 1047’s flawed approach could have severe unintended consequences for California’s tech industry and for America’s global leadership in AI development.

Instead of rushing to pass new, AI-specific regulations, California policymakers should focus on enforcing and updating existing laws to address specific, real-world challenges. AI systems must already comply with a wide array of existing laws. For example, AI tools in healthcare must follow HIPAA and FDA rules, AI tools in finance must abide by the FCRA and ECOA, and AI tools in schools must follow FERPA.

The FTC may also oversee the AI industry using its existing authorities. Broadly applicable anti-discrimination statutes such as the Civil Rights Act, the Fair Housing Act, and the Americans with Disabilities Act already govern how AI tools may be used in employment, credit and housing to prevent disparate impacts. These existing rules apply to AI just as they do to other tools in their respective industries.

A recent deepfake robocall impersonating President Biden in New Hampshire offers a helpful example of how existing laws and collaboration between law enforcement and the tech industry can effectively combat bad actors misusing AI tools. The impersonator was arrested and prosecuted for violating telecommunications and election laws, with no new legislation required.

Rather than creating redundant rules that could impede innovation while failing to protect the public, lawmakers should focus on strategically filling gaps in existing legislation with targeted updates for specific harms not covered by current law, such as laws punishing non-consensual deepfake pornography and AI-manipulated child exploitation material.

As we navigate the complex landscape of AI development and deployment, striking the right balance between ensuring safety and fostering innovation is crucial. SB 1047 in its current form fails to strike that balance: it risks undermining California’s leadership in AI, stifling innovation and driving talent and investment out of the state, all without addressing the risks lawmakers aim to mitigate.

The future of AI is bright, and California could lead the way. But Golden State policymakers should not dim that future with rushed legislation. Instead, they should enforce existing laws, foster public-private collaboration on enforcement and make targeted updates when real problems emerge. This approach will allow real AI risks to be addressed while preserving the innovation ecosystem that has made California a global leader in tech development.