
New Paper Explains Security & Privacy Risks of Large Language Models in Generative AI

WASHINGTON—Today, privacy and cybersecurity policy expert James X. Dempsey released a paper, commissioned by NetChoice, titled “Generative AI: The Security and Privacy Risks of Large Language Models.”

In the paper, Dempsey explains why it is essential to the future development of artificial intelligence that industry adopt responsible AI standards, both to avoid undermining public trust and to avoid inviting regulations that would stifle innovation in large language models. He examines how Microsoft has integrated ChatGPT and what can be learned from the serious privacy and security missteps of that release.

“To realize the potential benefits of AI, we have to ensure that risks posed by this new technology are identified and mitigated through responsible development practices,” says Dempsey. “AI has amazing potential. A rush to deploy risks that potential. If developers follow responsible development practices, including the criteria laid out in this paper, the risks of AI can be appropriately mitigated, protecting society from harm.”

Dempsey sets forth the following five criteria for industry to use in evaluating the security and privacy risks of artificial intelligence, focusing on LLMs:

  • Design: User security and privacy should be prioritized throughout the entire design process of an AI model, including how the model is trained, what data it is trained on, and how it collects, processes, and stores user input.
  • Vulnerability Management: Developers should adequately identify and mitigate risks presented by their product before deployment and should issue frequent updates to address new vulnerabilities, so users can expect a reasonable degree of safety from attacks.
  • Deployment: AI developers should refrain from publicly releasing AI models that pose serious, unmitigated risks. Companies adding AI to their software supply chain should follow risk management standards and practices and ensure their customers understand how such integrations will affect the products they use.
  • Transparency: AI developers should provide clarity about the data on which models are trained and how that data moves across supply chains, especially around private data retention and the processing of sensitive user data.
  • Confidentiality: The LLM and its hosting platform should not retain, train on, or otherwise disclose information provided by the user without explicit user approval, especially sensitive documents, transcripts, emails, and code.

“This research from Dempsey is a road map for responsible AI development and best practices,” said NetChoice President & CEO Steve DelBianco. “Our industry has to be clear-eyed about the risks of premature AI integration and deployment, or we invite regulators to stifle American innovation. AI innovation has the potential to improve our lives more than we know, so let’s ensure the marketplace for AI technology remains open, competitive, and responsible.”

You can read Dempsey’s paper here. Please contact press@netchoice.org with inquiries.