Since the official launch of ChatGPT in November 2022, millions of users have poured in crazily. Due to its excellent human-like language generation capabilities, programming software talent, and lightning-fast text analysis capabilities, ChatGPT has quickly become the tool of choice for developers, researchers, and everyday users.
As with any disruptive technology, generative AI systems like ChatGPT also have potential risks. In particular, major players in the technology industry, national intelligence agencies, and other government agencies have issued warnings about feeding sensitive information into artificial intelligence systems such as ChatGPT.
Concerns about the security risks of ChatGPT stem from the possibility that information could end up leaking into the public domain through ChatGPT, whether through security vulnerabilities or the use of user-generated content to "train" the chatbot.
In response to these concerns, technology companies are taking action to mitigate the security risks associated with large language models (LLMs) and conversational AI (CAI). Some businesses have even chosen to disable ChatGPT entirely, while others are warning their employees of the dangers of entering confidential data into such models.
Artificial intelligence-driven ChatGPT has become a popular tool for enterprises to optimize operations and streamline complex tasks. However, recent events have highlighted the potential dangers of sharing confidential information via the platform.
Disturbingly, three incidents of sensitive data leakage via ChatGPT were reported in less than a month. South Korean media reported that employees at smartphone maker Samsung's main semiconductor factory entered confidential information, including highly sensitive source code used to troubleshoot programming errors, into an artificial intelligence chatbot, sparking controversy.
Source code is one of the most closely guarded secrets of any technology company, as it is the fundamental building block of any software or operating system. And now, such valuable business secrets have accidentally fallen into the hands of OpenAI.
According to people familiar with the matter, Samsung has currently restricted its employees’ access to ChatGPT.
Other Fortune 500 conglomerates, including Amazon, Walmart and JPMorgan, have experienced similar situations where employees accidentally entered sensitive data into chatbots .
There have been previous reports of Amazon employees using ChatGPT to obtain confidential customer information, prompting the tech giant to quickly restrict use of the tool and sternly warn employees not to enter any sensitive data into the tool.
Mathieu Fortier, director of machine learning at Coveo, an AI-driven digital experience platform, said that multiple LLMs such as GPT-4 and LLaMA exist Imperfect, and warns that while they excel at language understanding, these models lack the ability to recognize accuracy, immutable laws, physical reality, and other non-linguistic aspects.
Although LLMs build an extensive intrinsic knowledge base from training data, they have no explicit concept of truth or factual accuracy. Additionally, they are vulnerable to security breaches and data-extraction attacks, as well as tending to deviate from expected responses or exhibit "psychotic" characteristics—the technical name is "hallucinations."
Fortier highlighted the high risks faced by businesses. The consequences can severely undermine customer trust and cause irreparable damage to a brand's reputation, leading to significant legal and financial problems.
Following in the footsteps of other tech giants, the retail giant's tech arm, Walmart Global tech, has taken steps to reduce the risk of data breaches. In an internal memo to employees, the company instructed employees to block ChatGPT immediately after detecting suspicious activity that could compromise corporate data and security.
A Walmart spokesperson said that while the retailer is creating its own chatbot based on the capabilities of GPT-4, it has implemented several measures to protect employee and customer data from ChatGPT and others. Generative AI tool dissemination.
The spokesperson said, "Most new technologies bring new benefits, but also new risks. Therefore, we will evaluate these new technologies and provide usage guidance for our employees to Protecting the data of our customers, members and employees is not uncommon. Leveraging existing technology, such as Open AI, and building a layer on top of it to communicate with retailers more effectively allows us to develop new customer experiences , and improve existing capabilities."
In addition, other companies such as Verizon and Accenture have also taken steps to limit the use of ChatGPT, with Verizon instructing its employees to limit chatbots to non-sensitive tasks, and Accenture Stricter controls have been implemented to ensure compliance with data privacy regulations.
Even more concerning is that ChatGPT retains user input data to further train the model, which raises concerns that sensitive information may be exposed through data breaches or other security incidents The problem.
OpenAI, the company behind the popular generative artificial intelligence models ChatGPT and DALL-E, recently implemented a new policy to improve user data privacy and security.
Starting March 1 this year, API users must explicitly choose to share their data to train or improve OpenAI’s models.
In contrast, for non-API services such as ChatGPT and DALL-E, users must opt out if they do not want OpenAI to use their data.
OpenAI said in a recently updated blog, "When you use our non-API consumer services ChatGPT or DALL-E, we may use the data you provide to improve our models. With us Sharing your data not only helps our models become more accurate and better solve your specific problem, it also helps improve their overall capabilities and security... You can fill out your Organization ID and email address associated with the account owner to request to opt-out of the use of your data to improve our non-API services.” Information is released with caution. The Italian government recently joined the fray, banning the use of ChatGPT nationwide, citing concerns about data privacy and security.
OpenAI says it removes any personally identifiable information from the data used to improve its artificial intelligence models and only uses a small sample of data from each customer for this purpose.
Government Warning
According to the NCSC, LLM can generate incorrect or "illusory" facts, as demonstrated by the first demonstration of the Google Bard chatbot. They can also display bias and gullibility, especially when answering leading questions. Additionally, these models require extensive computing resources and large amounts of data to train from scratch, and they are vulnerable to injection attacks and toxic content creation.
Coveo’s Fortier said, “LLMs generate responses to prompts based on the prompt’s inherent similarity to internal knowledge. However, given that they have no inherent internal ‘hard rule’ or reasoning capabilities, they are unlikely to be 100% successful in adhering to the constraint of not disclosing sensitive information. Despite efforts to reduce the generation of sensitive information, LLM can regenerate this information if it uses this data for training. The only solution is not to use Sensitive material to train these models. Users should also avoid providing them with sensitive information in prompts, as most services currently save this information in their logs."
Generative AI Safety and Ethics Best Practices to Use
The actions taken by these companies highlight the importance of remaining vigilant when using artificial intelligence language models such as ChatGPT. While these tools can greatly increase efficiency and productivity, they can pose significant risks if used incorrectly.
Peter Relan, chairman of conversational artificial intelligence startup Got it AI, suggested that “the best approach is to incorporate every new development in the original improvement of language models into an enterprise strategy-driven architecture. The architecture combines a language model with pre- and post-processors for guarding, fine-tunes them for enterprise-specific data, and then even deploys them locally. Otherwise, the original language model is too powerful and sometimes Processing in the enterprise is harmful."
Prasanna Arikala, chief technology officer of Nvidia-backed conversational AI platform Kore.ai, said that in the future, companies will have to limit LLM access to sensitive and personal information to avoid violation.
Arikala noted that “implementing strict access controls, such as multi-factor authentication, and encrypting sensitive data, can help mitigate these risks. In addition, regular security audits and vulnerability assessments are required to identify and Eliminate potential vulnerabilities. LLM is a valuable tool if used correctly, but it is critical for companies to take the necessary precautions to protect sensitive data and maintain the trust of customers and stakeholders."
It remains to be seen how these regulations will evolve, but businesses must remain vigilant to stay ahead of the curve. While generative AI brings potential benefits, it also brings new responsibilities and challenges, and the technology industry needs to work with policymakers to ensure that this technology is developed and implemented in a responsible and ethical manner.
The above is the detailed content of The Risks of Using AI-Powered Chatbots in the Enterprise. For more information, please follow other related articles on the PHP Chinese website!