Scarlett Johansson’s recent allegations against OpenAI for misappropriating her voice have added fuel to the heated debate in Washington over AI regulation. This high-profile case, along with the recent incident involving fake AI-generated calls targeting President Joe Biden, demonstrates that existing laws are well-equipped to handle many of the challenges posed by AI innovations. Policymakers should take a closer look at how existing laws cover this area before creating a vast new set of AI-specific regulations that could have major implications for competition.
Just as Bernie Madoff was sent to prison for fraud, not Microsoft for creating the Excel spreadsheets he used to perpetrate his crimes, and Sam Bankman-Fried faced charges for financial misdeeds, not the cryptocurrency industry as a whole, policymakers must focus on enforcing existing laws to address AI misuse. If Johansson’s allegations against OpenAI are substantiated, the company would likely face significant consequences under existing intellectual property laws, such as copyright and the right of publicity. These laws have successfully protected individuals’ creative works and personal attributes for decades, and there’s no reason they can’t be applied just as effectively to the misuse of AI-generated content.
Johansson could bring a suit alleging that OpenAI misappropriated her likeness. Because the voice in question allegedly “sounds like” hers, she can invoke centuries-old tort law to seek financial compensation as well as an injunction preventing further use by OpenAI. No new laws, no new legislation: this can be done by enforcing and applying existing law.
The same is true for concerns about AI being used for fraudulent or defamatory purposes, which are already covered by robust anti-fraud and defamation statutes. The Federal Trade Commission Act’s prohibition on unfair or deceptive acts or practices provides an additional layer of protection against AI systems that mislead consumers or cause them harm.
While some targeted updates to existing laws may be necessary to address unique harms caused by the abuse of AI tools, such as non-consensual deepfake pornography, the vast majority of cases can be handled within our current legal framework. To fill the genuine gaps, policymakers should focus on narrowly targeted rules, like the Stop Non-Consensual Distribution of Intimate Deepfake Media Act.
AI is already transforming healthcare, with the FDA recently approving the first AI-driven system to predict sepsis, a life-threatening response to infection. This groundbreaking technology will save countless lives by enabling earlier detection and more targeted treatment of this deadly condition. The potential for AI to revolutionize patient care and improve health outcomes is truly staggering, and we’ve only just begun to scratch the surface.
Similarly, in the field of education, AI-powered tools are enabling personalized learning, providing real-time feedback to students and helping educators identify and support struggling learners. By harnessing the power of AI, we can create a more effective education system that prepares our children for the jobs of the future.
AI is also proving crucial in the development of energy solutions, from optimizing power distribution networks to accelerating the pace of fusion research. By leveraging AI’s capabilities, American companies are working to create a more resilient energy future.
Overregulation risks derailing this incredible progress and ceding our technological edge to competitors like China. Foreign nations such as China have long ignored U.S. laws like copyright and patent protections, and bad actors, by definition, disregard rules and regulations. While law-abiding businesses would be handcuffed in their development and innovation, bad actors would not be. Ultimately, this would mean America ceasing to be the leader in AI.
Instead of succumbing to fear, policymakers should focus on promoting transparency, accountability and security in AI systems while strategically updating laws to address specific gaps.
As we grapple with the challenges and opportunities presented by AI, it’s essential that we approach regulation with a clear-eyed, evidence-based mindset. We must resist the urge to react based on hyperbolic headlines or isolated incidents, and instead focus on developing a comprehensive, adaptive regulatory framework that encourages responsible innovation while protecting the rights and interests of individuals.
The Scarlett Johansson case and the Joe Biden deepfake incident serve as powerful reminders that our existing legal system is far more resilient and adaptable than many give it credit for. By enforcing our laws, fostering public-private collaboration, and judiciously updating regulations where necessary, we can mitigate the risks of AI misuse without compromising its incredible potential to drive progress and improve the human condition.
The message to bad actors must be unambiguous: you can’t hide behind computers. We will enforce our laws rigorously and hold you accountable for any misuse of AI technology. Simultaneously, we won’t let fear dictate our approach to AI governance.
By striking the right balance between protecting the public interest and promoting responsible innovation, America can maintain its leadership in this critical technological domain and unlock AI’s vast potential to drive progress, improve lives and solve some of the world’s most pressing challenges.