Companies using generative artificial intelligence tools like ChatGPT could be putting confidential customer information and trade secrets at risk, according to a report from Team8, an Israel-based venture firm.
The widespread adoption of new AI chatbots and writing tools could leave companies vulnerable to data leaks and lawsuits, said the report, which was provided to Bloomberg News prior to its release. The fear is that the chatbots could be exploited by hackers to access sensitive corporate information or perform actions against the company.
There are also concerns that confidential information fed into the chatbots now could be used by AI companies in the future.
Major technology companies including Microsoft Corp. and Alphabet Inc. are racing to add generative AI capabilities to improve chatbots and search engines, training their models on data scraped from the Internet to give users a one-stop shop for their queries.
If these tools are fed confidential or private data, it will be very difficult to erase the information, the report said.
“Enterprise use of GenAI may result in access and processing of sensitive information, intellectual property, source code, trade secrets, and other data, through direct user input or the API, including customer or private information and confidential information,” the report said, classifying the risk as “high.” It described the risks as “manageable” if proper safeguards are introduced.
The Team8 report stressed that chatbot queries are not being fed into large language models to train AI, contrary to recent reports that such prompts could potentially be seen by others.
“As of this writing, Large Language Models cannot update themselves in real-time and therefore cannot return one’s inputs to another’s response, effectively debunking this concern. However, this is not necessarily true for the training of future versions of these models,” it said.
The document flagged three other “high risk” issues in integrating generative AI tools and underlined the heightened threat of information increasingly being shared through third-party applications.
Microsoft has embedded some AI chatbot features in its Bing search engine and Microsoft 365 tools.
“On the user side, for example, third party applications leveraging a GenAI API, if compromised, could potentially provide access to email and the web browser, and allow an attacker to take actions on behalf of a user,” it said.
There is a “medium risk” that using generative AI could increase discrimination, harm a company’s reputation, or expose it to legal action over copyright issues, it said.
Ann Johnson, a corporate vice president at Microsoft, was involved in drafting the report. Microsoft has invested billions in OpenAI, the developer of ChatGPT.
“Microsoft encourages transparent discussion of evolving cyber risks in the security and AI communities,” a Microsoft spokesperson said.
Dozens of chief information security officers of US companies are also listed as contributors to the report. The Team8 report was also endorsed by Michael Rogers, the former head of the US National Security Agency and US Cyber Command.