There have been growing concerns about “deepfakes” created by malicious actors abusing AI tools. But we need to look more closely at the story, and at how companies and law enforcement are working to combat this abuse.
In the unfolding saga of the Biden robocall deepfake, the most important development is what happened after the calls went out and the incident was identified.
As New Hampshire voters prepared to go to the polls in January, they received phone calls in which a voice impersonating President Biden urged them not to vote in the primary. It turned out to be an AI deepfake.
That, however, was only the beginning of the story, not the end. After the attack, voice-fraud detection companies like Pindrop identified the tools used to create the deepfake, putting investigators on the trail of the perpetrator. This case exemplifies a broader truth of the digital era: we need good AI to combat bad AI. And that is only feasible if we enable, rather than disable, AI development.
Pindrop’s analysis of the robocall pinpointed ElevenLabs’ AI tools as the source of the fake voice. By cleaning up the audio and comparing it against samples from more than 120 voice synthesis systems, Pindrop identified the source with over 99% certainty.
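To give a rough sense of how that kind of attribution can work mechanically, here is a minimal, purely illustrative sketch in Python. It ranks candidate voice-synthesis engines by how closely their “fingerprint” feature vectors match features extracted from a suspect recording. The engine names, vectors, and cosine-similarity scoring are assumptions made for demonstration only; Pindrop’s actual features and models are proprietary and far more sophisticated.

```python
# Illustrative sketch only: a toy version of "closest synthesizer" attribution.
# The engine names and feature vectors below are made up for demonstration;
# they are not Pindrop's actual method or data.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def attribute_engine(query: np.ndarray, fingerprints: dict) -> list:
    """Rank candidate voice-synthesis engines by similarity to the suspect audio's features."""
    scores = {name: cosine_similarity(query, fp) for name, fp in fingerprints.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical per-engine "fingerprint" vectors (stand-ins for learned audio embeddings).
    fingerprints = {f"engine_{i}": rng.normal(size=128) for i in range(120)}
    # Pretend the suspicious robocall audio came from engine_42, plus some noise.
    query = fingerprints["engine_42"] + 0.1 * rng.normal(size=128)
    ranked = attribute_engine(query, fingerprints)
    print("Most likely source:", ranked[0])
```

The point is conceptual: attributing a deepfake to a specific tool is a matching problem, and solving it at scale requires more AI development, not less.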
Once informed, ElevenLabs suspended the responsible user’s account, a significant step in combating AI-driven election interference. And we can expect swift next steps to locate and arrest the perpetrator. That is where the story stands so far.
This event underscores a point that is often overlooked: enforcing existing laws against AI misuse is critically important. Laws against using AI to disrupt elections are already on the books; what we lack is sufficient enforcement against these crimes. And for any law to work, legislators and researchers need to investigate why current laws aren’t being enforced.
Unfortunately, under President Biden, the U.S. Federal Trade Commission has been less than effective in preventing fraud, with incidents up by 300% and costs to Americans exceeding $8 billion annually. This lax enforcement has real consequences when existing and emerging tools are abused, as the Biden deepfake case shows.
This kind of deception in politics is not a novel problem. Recall the 2016 incident in which fake phone calls claimed Marco Rubio had dropped out of the South Carolina Republican primary. With or without AI, these tactics are not new, and our government should respond by enforcing existing laws and targeting the criminals responsible, not by clamoring for panic-driven bans.
Bad actors won’t adhere to bans – that’s what makes them bad actors. Instead, we need to empower responsible AI development. When used correctly, AI can be a force for good, as demonstrated by Pindrop’s role in this case.
Enforcement doesn’t just mean penalizing the misuse of AI; it also means the government working with technology companies like ElevenLabs and Pindrop to develop technologies that can detect and prevent such abuses. AI has immense potential for positive impact, from enhancing cybersecurity to improving accessibility. Stifling its development only leaves us vulnerable to those who will exploit technology without regard for ethics or legality.
To be clear, there is certainly a need for some new laws to address gaps in existing law with regard to new AI tools. NetChoice has identified two such gaps, involving child sexual abuse material (CSAM) and deepfakes. Today, most laws require an actual photograph to prosecute CSAM. But child abusers are using AI to manufacture horrific “fake photos” of a real child engaged in sexual acts in order to escape prosecution. That’s why we need the Stop Deepfake CSAM Act to fill this gap by making clear that CSAM is CSAM, whether it’s an actual photo or AI-generated.
Likewise, we need the Stop Non-Consensual Distribution of Intimate Deepfake Media Act, which makes clear that victims of AI-generated revenge pornography can bring civil actions against bad actors.
But the need to fill these gaps doesn’t mean we need a new, complex regulatory system. The Biden robocall deepfake incident is a wake-up call, not for more regulation, but for smarter enforcement, learning, and targeted gap-filling. We need to foster an environment where innovation can thrive responsibly while laws are enforced effectively against those who seek to abuse new tools.
In the fight against malicious actors abusing AI tools, our best defense is not prohibition, but the intelligent and ethical use of AI itself.