
Employees Are Already Submitting Sensitive Data to ChatGPT, and We’re Not Even in the First Inning

March 23, 2023


Over 4 percent of employees have submitted sensitive business data and privacy-protected information to large language models (LLMs) such as ChatGPT, raising concerns that artificial intelligence (AI) services could incorporate the data into their models. Without proper data security in place, that data could be retrieved at a later date, resulting in massive leaks of confidential information. As more employees adopt ChatGPT and other AI-based services as productivity tools, the risk will only grow.

Some companies are taking action. JPMorgan has restricted workers’ use of ChatGPT, and Amazon, Microsoft, and Wal-Mart have all issued warnings to employees. Karla Grossenbacher, a partner at law firm Seyfarth Shaw, cautioned that employers should add prohibitions on referring to or entering confidential, proprietary, or trade secret information into AI chatbots or language models to their employee confidentiality agreements and policies. On the flip side, she wrote, “employees might receive and use information from the tool that is trademarked, copyrighted, or the intellectual property of another person or entity, creating legal risk for employers.”

