
When Using ChatGPT at Work, We May Be Exposing Confidential Information: Here’s What You Can Do to Avoid It

  • OpenAI uses the data from user conversations to train its AI models.

  • Its employees may also review these conversations.

  • Here's how you can find out when this happens and what you can do about it.


Ever since ChatGPT emerged, many people have decided to integrate it into their workflow. The chatbot’s artificial intelligence has proven to be a valuable asset in the office, with its capability to assist in scheduling tasks, composing emails, creating summaries, analyzing data, and generating graphs.

However, the enthusiasm to use this OpenAI-developed tool to enhance productivity can lead to a major issue. Some users may be unintentionally exposing confidential data about themselves or the company they work for. The issue arises because ChatGPT, by design, isn’t equipped to keep secrets.

Using ChatGPT at Work: A Double-Edged Sword

Imagine this scenario: You work at a company and have a meeting in a couple of hours to discuss its business strategy for the coming year. To be well-prepared for the meeting, you decide to jot down the most crucial points from the annual planning document.

The document contains confidential information that, if disclosed, could have adverse effects on your company. It includes sections analyzing competitors and mentioning products that haven’t yet been launched. However, due to time constraints, you decide to summarize it using ChatGPT.

A table created by ChatGPT.

As such, you upload the PDF to the chatbot, retrieve the key points you need, and your presentation in the meeting is a success. Not only were you fully informed about your own area, but you also gained a global view of where the company is heading. Be cautious, though, because this information could end up in the wrong hands.

When using ChatGPT for the first time, a pop-up window advises you not to share confidential information. Sometimes, however, users tend to simply click the "Okay, let’s go" button without fully understanding what this implies. OpenAI employees could potentially access the content and even use it to improve the chatbot.

ChatGPT’s pop-up window with usage suggestions.

All this is outlined in OpenAI’s data usage policy. The company explicitly states that it may utilize user prompts, responses, images, and files to enhance the performance of its underlying models, such as GPT-3.5, GPT-4, GPT-4o, and future iterations.

Improving the models involves training them on additional data so that they can provide more precise answers when queried on specific topics. If certain precautions aren’t taken, confidential data may inadvertently end up contributing to that training.
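One basic precaution, before touching any settings, is to scrub obviously sensitive details out of a document before pasting it into any chatbot. The sketch below is a generic illustration of that idea, not an OpenAI tool: the regex patterns and the codename list are hypothetical placeholders you’d adapt to your own company.

```python
import re

# Hypothetical internal codenames you never want to leave the company.
CODENAMES = ["Project Falcon", "Atlas X2"]

def redact(text: str) -> str:
    """Mask email addresses, monetary figures, and known codenames."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)   # email addresses
    text = re.sub(r"[$€£]\s?\d[\d,.]*", "[AMOUNT]", text)        # monetary figures
    for name in CODENAMES:                                       # unreleased products
        text = text.replace(name, "[PRODUCT]")
    return text

print(redact("Ask ana@corp.com: Project Falcon needs $2,400,000 by Q3."))
# -> Ask [EMAIL]: [PRODUCT] needs [AMOUNT] by Q3.
```

A simple pass like this won’t catch everything, but it keeps the most recognizable identifiers from ever leaving your machine.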

The risk, however, extends beyond ChatGPT itself potentially repeating trade secrets or other sensitive information in its answers. Once data is uploaded to OpenAI’s servers, company employees and authorized trusted service providers may also access it for various purposes.

There are four scenarios where other people could end up viewing a user’s chatbot activity history:

  1. To investigate security incidents
  2. To provide support the user has requested
  3. To respond to legal issues
  4. To improve the model's performance

This possibility has led companies like Samsung to take steps to prevent sensitive data from being leaked. For instance, the company has limited the use of chatbots for certain tasks and implemented corporate versions that promise not to use chat data for training purposes.

How to Improve Your Data Security in ChatGPT

According to OpenAI, users and businesses have two options for protecting sensitive data: disabling the setting that lets conversations be used to improve the models, or using one of the enterprise versions of ChatGPT.

Let’s delve into how to utilize each option effectively.

If you’re using ChatGPT or ChatGPT Plus and wish to prevent your conversations from being utilized to train OpenAI models, you can follow the steps below:

The window to deactivate training.
  1. Open ChatGPT on your computer.
  2. Click on your profile picture and then click on Settings.
  3. Click on Data controls and then click on Improve the model for everyone.
  4. Make sure the Improve the model for everyone toggle is turned off.

If you work in a professional environment and use the paid ChatGPT Enterprise or ChatGPT Team plans, your data is never used to train OpenAI models. Additionally, the data is encrypted both at rest (AES-256) and in transit (TLS 1.2+).

Your Chats Can Still Be Seen

Even with the use of paid professional tools, there are still occasional situations where individuals outside your company may be able to access your conversations.

For instance, in the case of ChatGPT Enterprise, OpenAI employees may view conversations to resolve incidents, to recover a user’s conversations with their permission, or to respond to legal requests.

In the case of ChatGPT Team, OpenAI employees can access conversations to provide engineering support, investigate potential abuse, and ensure legal compliance. “Specialized third-party contractors” may also view conversations in cases of abuse or misuse.

The OpenAI employees and external contractors who can view users’ conversations are subject to confidentiality and security obligations designed to keep the data they access safe.

However, it’s important to remember that we’re dealing with humans, and in the business world confidentiality obligations aren’t always fully effective. One clear example is the product and service leaks that occasionally occur at tech companies.

Image | OpenAI | Screenshots

Related | We Thought ChatGPT Was Great for Programming. A New Study Finds That Half of Its Answers Are Wrong
