Table of Contents
ChatGPT: A scary open AI?
“Imperfect” models: Knowledge bases lacking intelligence
How ChatGPT uses session data
Government Warning
Generative AI Safety and Ethics Best Practices

The Risks of Using AI-Powered Chatbots in the Enterprise

Apr 25, 2023 09:01 PM
chatgpt Security Risk

Since its official launch in November 2022, ChatGPT has attracted millions of users. Thanks to its excellent human-like language generation, its facility with programming tasks, and its lightning-fast text analysis, ChatGPT has quickly become the tool of choice for developers, researchers, and everyday users.

As with any disruptive technology, generative AI systems like ChatGPT also have potential risks. In particular, major players in the technology industry, national intelligence agencies, and other government agencies have issued warnings about feeding sensitive information into artificial intelligence systems such as ChatGPT.

Concerns about ChatGPT's security risks stem from the possibility that information entered into the service could end up leaking into the public domain, whether through security vulnerabilities or because user-provided content is used to "train" the chatbot.

In response to these concerns, technology companies are taking action to mitigate the security risks associated with large language models (LLMs) and conversational AI (CAI). Some businesses have even chosen to disable ChatGPT entirely, while others are warning their employees of the dangers of entering confidential data into such models.

ChatGPT: A scary open AI?

Artificial intelligence-driven ChatGPT has become a popular tool for enterprises to optimize operations and streamline complex tasks. However, recent events have highlighted the potential dangers of sharing confidential information via the platform.

Disturbingly, three incidents of sensitive data leakage via ChatGPT were reported in less than a month. South Korean media reported that employees at smartphone maker Samsung's main semiconductor factory entered confidential information, including highly sensitive source code used to troubleshoot programming errors, into an artificial intelligence chatbot, sparking controversy.

Source code is one of the most closely guarded secrets of any technology company, as it is the fundamental building block of any software or operating system. And now, such valuable business secrets have accidentally fallen into the hands of OpenAI.

According to people familiar with the matter, Samsung has currently restricted its employees’ access to ChatGPT.

Other Fortune 500 companies, including Amazon, Walmart, and JPMorgan, have experienced similar situations in which employees accidentally entered sensitive data into chatbots.

There have been previous reports of Amazon employees using ChatGPT to obtain confidential customer information, prompting the tech giant to quickly restrict use of the tool and sternly warn employees not to enter any sensitive data into the tool.

“Imperfect” models: Knowledge bases lacking intelligence

Mathieu Fortier, director of machine learning at Coveo, an AI-driven digital experience platform, said that LLMs such as GPT-4 and LLaMA have inherent flaws. He warned that while they excel at language understanding, these models lack the ability to recognize accuracy, immutable laws, physical reality, and other non-linguistic aspects.

Although LLMs build an extensive internal knowledge base from their training data, they have no explicit concept of truth or factual accuracy. They are also vulnerable to security breaches and data-extraction attacks, and they tend to deviate from expected responses or produce fabricated content, technically known as "hallucinations."

Fortier highlighted the high stakes for businesses: the consequences can severely undermine customer trust, cause irreparable damage to a brand's reputation, and lead to significant legal and financial problems.

Following in the footsteps of other tech giants, Walmart Global Tech, the retail giant's technology arm, has taken steps to reduce the risk of data breaches. In an internal memo to employees, the company said it blocked ChatGPT after detecting suspicious activity that could compromise corporate data and security.

A Walmart spokesperson said that while the retailer is building its own chatbot on top of GPT-4's capabilities, it has implemented several measures to keep employee and customer data from being exposed to ChatGPT and other generative AI tools.

The spokesperson said, "Most new technologies bring new benefits, but also new risks. It is not uncommon for us to evaluate these new technologies and provide usage guidance for our employees in order to protect the data of our customers, members, and employees. Leveraging existing technology, such as OpenAI, and building a layer on top of it that communicates with retailers more effectively allows us to develop new customer experiences and improve existing capabilities."

In addition, other companies such as Verizon and Accenture have taken steps to limit the use of ChatGPT, with Verizon instructing its employees to restrict the chatbot to non-sensitive tasks and Accenture implementing stricter controls to ensure compliance with data privacy regulations.

How ChatGPT uses session data

Even more concerning, ChatGPT retains user input to further train the model, raising the possibility that sensitive information could be exposed through data breaches or other security incidents.

OpenAI, the company behind the popular generative artificial intelligence models ChatGPT and DALL-E, recently implemented a new policy to improve user data privacy and security.

Starting March 1, 2023, API users must explicitly opt in to sharing their data to train or improve OpenAI's models.

In contrast, for non-API services such as ChatGPT and DALL-E, users must opt out if they do not want OpenAI to use their data.

OpenAI said in a recently updated blog post, "When you use our non-API consumer services ChatGPT or DALL-E, we may use the data you provide to improve our models. Sharing your data with us not only helps our models become more accurate and better at solving your specific problem, it also helps improve their overall capabilities and safety... You can request to opt out of having your data used to improve our non-API services by submitting the organization ID and email address associated with the account owner."

The Italian government recently joined the fray, banning the use of ChatGPT nationwide, citing concerns about data privacy and security.

OpenAI says it removes any personally identifiable information from the data used to improve its artificial intelligence models and only uses a small sample of data from each customer for this purpose.

Government Warning

The UK Government Communications Headquarters (GCHQ) intelligence agency, through its National Cyber Security Centre (NCSC), has issued a warning about the limitations and risks of large language models (LLMs) such as ChatGPT. While these models have been praised for their impressive natural language processing capabilities, the NCSC warns that they are not infallible and may contain serious flaws.

According to the NCSC, LLMs can generate incorrect or "hallucinated" facts, as demonstrated during the first public demo of the Google Bard chatbot. They can also display bias and gullibility, especially when answering leading questions. In addition, these models require extensive computing resources and vast amounts of data to train from scratch, and they are vulnerable to injection attacks and toxic content generation.

Coveo's Fortier said, "LLMs generate responses to prompts based on the prompt's inherent similarity to their internal knowledge. However, given that they have no inherent internal 'hard rules' or reasoning capabilities, they are unlikely to be 100% successful at adhering to the constraint of not disclosing sensitive information. Despite efforts to reduce the generation of sensitive information, if an LLM was trained on such data, it can regenerate it. The only solution is not to train these models on sensitive material. Users should also avoid providing sensitive information in prompts, as most services currently save this information in their logs."
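One practical way to act on that advice is to scrub obviously sensitive strings from prompts before they ever leave the enterprise network. The following is a minimal, hypothetical sketch (the patterns and the redact_prompt helper are illustrative, not part of any vendor's product) that masks email addresses, card-like digit strings, and API-key-like tokens before a prompt is forwarded to an external chatbot:

```python
import re

# Illustrative patterns only; a production system would use a vetted DLP/PII-detection library.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace likely sensitive substrings with placeholder tokens before the prompt is sent out."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Email jane.doe@example.com, card 4111 1111 1111 1111, key sk-abcdef1234567890XYZ"
    print(redact_prompt(raw))
    # -> Email [REDACTED_EMAIL], card [REDACTED_CARD_NUMBER], key [REDACTED_API_KEY]
```

A production deployment would rely on a dedicated data-loss-prevention or PII-detection service rather than a handful of regular expressions, but the principle is the same: sensitive material never reaches the external model or its logs.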

Generative AI Safety and Ethics Best Practices

As businesses continue to adopt artificial intelligence and other emerging technologies, ensuring appropriate security measures to protect sensitive data and prevent accidental disclosure of confidential information will be critical.

The actions taken by these companies highlight the importance of remaining vigilant when using artificial intelligence language models such as ChatGPT. While these tools can greatly increase efficiency and productivity, they can pose significant risks if used incorrectly.

Peter Relan, chairman of conversational artificial intelligence startup Got It AI, suggested that "the best approach is to incorporate every new development in raw language models into an enterprise policy-driven architecture, which combines the language model with pre- and post-processors for guardrails, fine-tunes it on enterprise-specific data, and may even deploy it locally. Otherwise, raw language models are too powerful to handle safely in the enterprise and can sometimes be harmful."
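The sketch below illustrates the kind of pre- and post-processor wrapper Relan describes, under assumed names: guard_input, guard_output, and the call_llm stand-in are hypothetical placeholders for whatever policy checks and model endpoint an enterprise actually uses, not an API from Got It AI or OpenAI.

```python
from typing import Callable

# Example enterprise policy list; real deployments would use classifiers or DLP services.
BLOCKED_TERMS = ("internal source code", "customer ssn", "merger plan")

def guard_input(prompt: str) -> str:
    """Pre-processor: reject prompts that violate enterprise policy before they reach the model."""
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        raise ValueError("Prompt rejected by enterprise policy: restricted content detected.")
    return prompt

def guard_output(response: str) -> str:
    """Post-processor: withhold replies that appear to disclose restricted content."""
    lowered = response.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "[response withheld: possible disclosure of restricted content]"
    return response

def guarded_completion(prompt: str, call_llm: Callable[[str], str]) -> str:
    """Wrap any LLM callable with input and output guards."""
    return guard_output(call_llm(guard_input(prompt)))

if __name__ == "__main__":
    # Stand-in model; in practice this would be a locally hosted or fine-tuned enterprise model.
    echo_model = lambda p: f"(model reply to: {p})"
    print(guarded_completion("Summarize our public earnings call", echo_model))
```

The same wrapper works whether the underlying model is a hosted API or the locally deployed, fine-tuned model Relan recommends for the most sensitive workloads.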

Prasanna Arikala, chief technology officer of the Nvidia-backed conversational AI platform Kore.ai, said that going forward, companies will have to limit LLMs' access to sensitive and personal information in order to avoid violations.

Arikala noted that "implementing strict access controls, such as multi-factor authentication, and encrypting sensitive data can help mitigate these risks. In addition, regular security audits and vulnerability assessments are needed to identify and eliminate potential weaknesses. LLMs are valuable tools if used correctly, but it is critical for companies to take the necessary precautions to protect sensitive data and maintain the trust of customers and stakeholders."
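As a small illustration of the "encrypt sensitive data" point, the sketch below encrypts chat transcripts at rest using the third-party cryptography package's Fernet recipe; the helper names and the idea of storing transcripts this way are illustrative assumptions, not features of Kore.ai's platform.

```python
from cryptography.fernet import Fernet  # requires: pip install cryptography

# In practice the key comes from a secrets manager or KMS, never from source code.
key = Fernet.generate_key()
cipher = Fernet(key)

def store_transcript(transcript: str) -> bytes:
    """Encrypt a chat transcript before it is written to any log or database."""
    return cipher.encrypt(transcript.encode("utf-8"))

def load_transcript(token: bytes) -> str:
    """Decrypt a stored transcript for an authorized reader."""
    return cipher.decrypt(token).decode("utf-8")

if __name__ == "__main__":
    token = store_transcript("User asked about order status; no card number retained.")
    print(load_transcript(token))
```

Access controls such as multi-factor authentication then sit in front of whichever service holds the decryption key, so even a leaked log file exposes nothing readable.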

It remains to be seen how these regulations will evolve, but businesses must remain vigilant to stay ahead of the curve. While generative AI brings potential benefits, it also brings new responsibilities and challenges, and the technology industry needs to work with policymakers to ensure that this technology is developed and implemented in a responsible and ethical manner.
