Sharing sensitive business data with ChatGPT may be risky
As the ins and outs of the potential of AI chatbots continue to make headlines, the frenzy surrounding ChatGPT remains at a fever pitch. One question that has captured the attention of many in the security community is whether the technology's ingestion of sensitive business data poses risks to organizations. There was concern that if someone entered sensitive information — quarterly reports, internal presentation materials, sales numbers, etc. — and asked ChatGPT to write text around it, anyone could get the company's information simply by asking ChatGPT.
The impact can be far-reaching: Imagine working on an internal presentation that contains new company data that reveals something to be discussed at a board meeting corporate issues discussed above. Leaking this proprietary information out could damage stock prices, consumer attitudes and customer confidence. Worse, legal items on the leaked agenda could expose the company to real liability. But can any of these things really happen just by stuff put into a chatbot?
Research firm Cyberhaven explored this concept in February, focusing on how OpenAI used what people input into ChatGPT as training data to improve its technology, with output that closely resembled what was input. Cyberhaven claims that confidential data entered into ChatGPT could be leaked to third parties if the third party asks ChatGPT certain questions based on information provided by executives.
ChatGPT does not store user input data - does it?
The UK’s National Cyber Security Center (NCSC) shared further insight into the matter in March, stating that ChatGPT and other large language models (LLMs) currently do not automatically add information from queries to the model for Others inquire. That is, including the information in the query does not result in potentially private data being incorporated into the LLM. "However, queries will be visible to the organization providing the LLM (and in the case of ChatGPT, also to OpenAI)," it wrote.
"These queries have been stored and will almost certainly be used to develop an LLM service or model at some point. This may mean that the LLM provider (or its partners/contractors) is able to read the queries and They may be incorporated into future releases in some way," it added. Another risk, which increases as more organizations produce and use LLMs, is that queries stored online could be hacked, leaked or accidentally made public, the NCSC writes.
Ultimately, there are real reasons to be concerned about sensitive business data being entered and used by ChatGPT, although the risk may not be as widespread as some headlines make it out to be.
Possible Risks of Entering Sensitive Data into ChatGPT
LLM exhibits a type of emergent behavior called situated learning. During a session, when the model receives inputs, it can perform tasks based on the context contained in those inputs. "This is most likely the phenomenon people are referring to when they are concerned about information leakage. However, it is impossible for information from one user's session to be leaked to another user," Andy Patel, senior researcher at WithSecure, told CSO. "Another concern is that prompts entered into the ChatGPT interface will be collected and used for future training data."
While concerns about chatbots ingesting and then regurgitating sensitive information are valid, Patel said A new model needs to be trained to integrate this data. Training an LLM is an expensive and lengthy process, and he said he would be surprised if a model could be trained on the data collected by ChatGPT in the near future. "If a new model is eventually created that contains collected ChatGPT hints, our fear turns to membership inference attacks. Such attacks have the potential to expose credit card numbers or personal information in the training data. However, there are no targets for supporting ChatGPT and others like it. The system's LLM proves membership inference attacks." This means that future models are extremely unlikely to be vulnerable to membership inference attacks.
Third-party links to AI could expose data
Wicus Ross, senior security researcher at Orange Cyberdefense, said the issue is most likely caused by an outside provider that doesn’t clearly state its privacy policy , so using them with other security tools and platforms could put any private data at risk. “SaaS platforms such as Slack and Microsoft Teams have clear data and processing boundaries, and the risk of data exposure to third parties is low. However, if third-party plug-ins or bots are used to enhance the service, whether they are related to artificial intelligence or not, associated, these clear lines can quickly become blurred,” he said. "In the absence of an explicit statement from the third-party processor that the information will not be disclosed, you must assume that it is no longer private."
Neil Thacker, EMEA chief information security officer at Netskope, told CSO that in addition to the sensitive data shared by regular users, companies should also be aware of prompt injection attacks that could reveal previous instructions provided by developers when adjusting tools, or Causes it to ignore previously programmed instructions. "Recent examples include Twitter pranksters changing the bot's behavior and an issue with Bing Chat, where researchers found a way for ChatGPT to reveal instructions that were previously supposed to be hidden, possibly written by Microsoft."
Control data submitted to ChatGPT
According to Cyberhaven, sensitive data currently accounts for 11% of content posted by employees to ChatGPT, and the average company leaks sensitive data to ChatGPT hundreds of times a week. “ChatGPT is moving from hype to the real world, and organizations are trying to implement actual implementations in their operations to join other ML/AI-based tools, but caution needs to be exercised, especially when sharing confidential information,” Thacker said. “All aspects of data ownership should be considered, as well as what the potential impact would be if the organization hosting the data were breached. As a simple exercise, information security professionals should at least be able to identify the data that might be accessed if these services were breached category."
Ultimately, it is the business's responsibility to ensure that its users fully understand what information should and should not be disclosed to ChatGPT. The NCSC said organizations should be very careful about the data they choose to submit in prompts: "You should ensure that those who want to try LLM can, but do not put organizational data at risk."
WARNING EMPLOYEES The potential dangers of chatbots
However, Cyberhaven warns that identifying and controlling the data employees submit to ChatGPT is not without its challenges. "When employees enter company data into ChatGPT, they don't upload the file, but instead copy and paste the content into their web browser. Many security products are designed around protecting files (marked as confidential) from being uploaded , but once the content has been copied from the file, they cannot track it," it reads. Additionally, Cyberhaven said corporate data that goes into ChatGPT often does not contain identifiable patterns that security tools look for, such as credit card numbers or Social Security numbers. “Today’s security tools can’t differentiate between a person typing a cafeteria menu and a company’s merger and acquisition plan without understanding its context.” To improve visibility, Thacker said, organizations should add more features to their secure web gateways Implement policies on the (SWG) to identify the use of AI tools, and also apply data loss prevention (DLP) policies to identify what data is submitted to these tools.
Michael Covington, vice president of portfolio strategy at Jamf, said organizations should update their information protection policies to ensure the types of applications that are acceptable for handling confidential data are properly documented. “Controlling the flow of information starts with well-documented and informed policies,” he said. "Additionally, organizations should explore how they can leverage these new technologies to improve their businesses in thoughtful ways. Rather than shying away from these services out of fear and uncertainty, invest some people in exploring new tools that show potential so you can Understand the risks early and ensure adequate protection is in place when early end-user adopters want to start using these tools”
The above is the detailed content of Sharing sensitive business data with ChatGPT may be risky. For more information, please follow other related articles on the PHP Chinese website!

Hot AI Tools

Undresser.AI Undress
AI-powered app for creating realistic nude photos

AI Clothes Remover
Online AI tool for removing clothes from photos.

Undress AI Tool
Undress images for free

Clothoff.io
AI clothes remover

AI Hentai Generator
Generate AI Hentai for free.

Hot Article

Hot Tools

Notepad++7.3.1
Easy-to-use and free code editor

SublimeText3 Chinese version
Chinese version, very easy to use

Zend Studio 13.0.1
Powerful PHP integrated development environment

Dreamweaver CS6
Visual web development tools

SublimeText3 Mac version
God-level code editing software (SublimeText3)

Hot Topics

DALL-E 3 was officially introduced in September of 2023 as a vastly improved model than its predecessor. It is considered one of the best AI image generators to date, capable of creating images with intricate detail. However, at launch, it was exclus

Users can share the wallpapers they obtain with friends when using WallpaperEngine. Many users do not know how to share WallpaperEngine with friends. They can save their favorite wallpapers locally and then share them with friends through social software. How to share wallpaperengine with friends Answer: Save it locally and share it with friends. 1. It is recommended that you save your favorite wallpapers locally and then share them with friends through social software. 2. You can also upload it to the computer through a folder, and then click Share using the creative workshop function on the computer. 3. Use Wallpaperengine on the computer, open the options bar of the creative workshop and find

More and more enterprises choose to use exclusive enterprise WeChat, which not only facilitates communication between enterprises and customers and partners, but also greatly improves work efficiency. Enterprise WeChat has rich functions, among which the screen sharing function is very popular. During the meeting, by sharing the screen, participants can display content more intuitively and collaborate more efficiently. So how to share your screen efficiently in WeChat Enterprise? For users who don’t know yet, this tutorial guide will give you a detailed introduction. I hope it can help you! How to share screen on WeChat Enterprise? 1. In the blue area on the left side of the main interface of Enterprise WeChat, you can see a list of functions. We find the "Conference" icon. After clicking to enter, three conference modes will appear.

The perfect combination of ChatGPT and Python: Creating an Intelligent Customer Service Chatbot Introduction: In today’s information age, intelligent customer service systems have become an important communication tool between enterprises and customers. In order to provide a better customer service experience, many companies have begun to turn to chatbots to complete tasks such as customer consultation and question answering. In this article, we will introduce how to use OpenAI’s powerful model ChatGPT and Python language to create an intelligent customer service chatbot to improve

In daily life and work, we often need to share files and folders between different devices. Windows 11 system provides convenient built-in folder sharing functions, allowing us to easily and safely share the content we need with others within the same network while protecting the privacy of personal files. This feature makes file sharing simple and efficient without worrying about leaking private information. Through the folder sharing function of Windows 11 system, we can cooperate, communicate and collaborate more conveniently, improving work efficiency and life convenience. In order to successfully configure a shared folder, we first need to meet the following conditions: All devices (participating in sharing) are connected to the same network. Enable Network Discovery and configure sharing. Know the target device

Installation steps: 1. Download the ChatGTP software from the ChatGTP official website or mobile store; 2. After opening it, in the settings interface, select the language as Chinese; 3. In the game interface, select human-machine game and set the Chinese spectrum; 4 . After starting, enter commands in the chat window to interact with the software.

With the development of the digital era, shared printers have become an indispensable part of the modern office environment. However, sometimes we may encounter the problem that the shared printer cannot be connected to the printer, which will not only affect work efficiency, but also cause a series of troubles. This article aims to explore the reasons and solutions for why a shared printer cannot connect to the printer. There are many reasons why a shared printer cannot connect to the printer, the most common of which is network issues. If the network connection between the shared printer and the printer is unstable or interrupted, normal operation will not be possible.

chatgpt can be used in China, but cannot be registered, nor in Hong Kong and Macao. If users want to register, they can use a foreign mobile phone number to register. Note that during the registration process, the network environment must be switched to a foreign IP.
