
ChatGPT application is booming. Where to find a secure big data base?

王林
Release: 2023-05-21 15:31:06

There is no doubt that AIGC is bringing a profound change to human society.

Peel away its dazzling exterior, and at its core its operation cannot run without the support of massive amounts of data.

ChatGPT’s “intrusion” has raised concerns about content plagiarism across all walks of life and heightened awareness of network and data security.

Although AI technology itself is neutral, that neutrality is no excuse for evading responsibilities and obligations.

Recently, the UK’s intelligence agency, the Government Communications Headquarters (GCHQ), warned that ChatGPT and other artificial intelligence chatbots will pose a new security threat.

Although ChatGPT has not been around for long, the threats it poses to network and data security have already become a focus of the industry.

For ChatGPT, still in the early stages of its development, are such worries unfounded?

Security threats may already be emerging

At the end of last year, the startup OpenAI launched ChatGPT; this year, its investor Microsoft followed with “Bing Chat”, a chatbot based on ChatGPT technology.

Because such software can hold human-like conversations, the service has become popular all over the world.

GCHQ’s cybersecurity arm noted that the companies providing AI chatbots can see the content of queries entered by users; in ChatGPT’s case, that means its developer, OpenAI.

ChatGPT is trained on large text corpora, and its deep learning capabilities rely heavily on the data behind it.

Due to concerns about information leakage, many companies and institutions have issued "ChatGPT bans".

City of London law firm Mishcon de Reya has banned its lawyers from entering client data into ChatGPT over concerns that legally privileged information could be compromised.

International consulting firm Accenture warned its 700,000 employees worldwide not to use ChatGPT for similar reasons, fearing that confidential client data could end up in the wrong hands.

Japan’s SoftBank Group, the parent company of British computer chip company Arm, also warned its employees not to enter company personnel’s identifying information or confidential data into artificial intelligence chatbots.

In February this year, JPMorgan Chase became the first Wall Street investment bank to restrict the use of ChatGPT in the workplace.

Citigroup and Goldman Sachs followed suit, with the former banning access to ChatGPT company-wide and the latter restricting use of the product on the trading floor.

Earlier, to prevent employees from leaking secrets when using ChatGPT, Amazon and Microsoft prohibited them from sharing sensitive data with it, since that information could be used in further iterations of the training data.

In fact, behind these artificial intelligence chatbots are large language models (LLMs), and users’ query content may be stored and used at some point in the future to develop LLM services or models.

This means that the LLM provider can read related queries and possibly incorporate them into future releases in some way.

Although LLM operators should take steps to protect data, the possibility of unauthorized access cannot be completely ruled out. Enterprises therefore need strict policies, backed by technical controls, to monitor LLM use and minimize the risk of data exposure.
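
To make this concrete, here is a minimal sketch in Python of the kind of technical control described above: a policy gate that screens outgoing prompts against blocklist rules before they reach an LLM provider. The rule patterns and function names are illustrative assumptions, not any vendor’s real API.

```python
import re

# Illustrative blocklist; a real deployment would use a proper DLP engine.
BLOCKED_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "confidential_marker": re.compile(r"(?i)\b(confidential|internal only)\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_rules) for a prompt bound for an external LLM."""
    hits = [name for name, pat in BLOCKED_PATTERNS.items() if pat.search(prompt)]
    return (not hits, hits)

allowed, hits = screen_prompt("Summarize this CONFIDENTIAL merger memo for me.")
if not allowed:
    print(f"Prompt blocked and logged for audit; matched rules: {hits}")
```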

In addition, although ChatGPT itself cannot directly attack network or data security, its ability to generate and understand natural language means it can be used to fabricate disinformation, conduct social engineering attacks, and so on.

Attackers can also use natural-language prompts to have ChatGPT generate attack code, malware, spam, and the like.

AI can thus enable people with no prior attack capability to launch AI-assisted attacks, and it greatly increases the success rate of attacks.

With the support of automation, AI, and “attack-as-a-service” technologies and models, cybersecurity attacks have skyrocketed.

Even before ChatGPT became popular, there had been many cyberattacks in which hackers used AI technology.

In fact, it is not uncommon for users to steer artificial intelligence “off the rails”. Six years ago, Microsoft launched the intelligent chatbot Tay. It was polite when it went online, but in less than 24 hours it had been “led astray” by unscrupulous users: it spewed rude and dirty language, veering into racism, pornography, and Nazism, and was full of discrimination, hatred, and prejudice. It had to be taken offline, ending its short life.

The risk closer to the user, on the other hand, is that when using AI tools such as ChatGPT, users may inadvertently feed private data into the cloud model. That data may become training data, and it may also surface in answers provided to other users, leading to data breaches and compliance risks.
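
One mitigation on the user side, sketched below under the assumption that masking happens client-side before the prompt leaves the machine, is to redact obvious identifiers. The two patterns shown are illustrative only; real PII detection needs far broader coverage.

```python
import re

# Minimal, illustrative redaction rules; not exhaustive PII detection.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[- ]?\d{4}[- ]?\d{4}\b"), "[PHONE]"),
]

def redact(text: str) -> str:
    """Mask identifiers in a prompt before it is sent to a cloud model."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Contact the client at zhang.wei@example.com or 138-1234-5678."))
# -> Contact the client at [EMAIL] or [PHONE].
```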

AI applications must lay a secure foundation

As a large language model, ChatGPT’s core logic is in fact the collection and processing of data, followed by the computation and output of results.

In general, these stages may carry risks in three areas: technical elements, organizational management, and digital content.

Although ChatGPT states that it strictly abides by privacy and security policies when storing the data needed to train and run its models, problems such as cyberattacks and data scraping may still arise in the future, along with data security risks that are easily overlooked.

Especially where the capture, processing, and combined use of national core data, important local and industry data, and personal privacy data are involved, data security protection must be balanced against data flow and sharing.

In addition to the hidden dangers of data and privacy leaks, AI technology also suffers from data bias, false information, and poor model interpretability, which can lead to misunderstanding and distrust.

The trend has arrived, and the AIGC wave is coming. Against this promising backdrop, it is crucial to build a data security protection wall while moving forward.

In particular, as AI technology matures, it can become a powerful tool for improving productivity, but it can just as easily become a tool for crime.

Monitoring data from the Qi’anxin Threat Intelligence Center shows that from January to October 2022, more than 95 billion pieces of Chinese institutional data were illegally traded overseas, of which more than 57 billion were personal information.

Ensuring the security of data storage, computation, and circulation is therefore a prerequisite for the development of the digital economy.

From an overall perspective, top-level design and industrial development should advance hand in hand: on the basis of the "Cybersecurity Law", the risk and responsibility analysis system should be refined and a security accountability mechanism established.

At the same time, regulatory authorities can carry out regular inspections, and companies in the security field can work together to build a full-process data security system.

On data compliance and data security, especially since the introduction of the "Data Security Law", data privacy has become increasingly important.

If data security and compliance cannot be guaranteed when applying AI technology, the enterprise may face serious risks.

Small and medium-sized enterprises in particular know relatively little about data privacy and security, and often do not know how to protect their data from threats.

Data security compliance is not the concern of a single department; it is a matter of first importance for the entire enterprise.

Enterprises should train their employees so that everyone who touches data understands the obligation to protect it, including IT staff, AI teams, data engineers, developers, and end users; people and technology need to work as one.

Faced with the aforementioned potential risks, how can regulators and relevant companies strengthen data security protection in the AIGC field from the institutional and technical levels?

Compared with directly restricting use at the user’s terminal, it will be more effective to explicitly require AI research and development companies to follow scientific and technological ethics, because those companies can limit the scope of use at the technical level.

At the institutional level, a data classification and hierarchical protection system should be established and improved, based on the characteristics and functions of the data required by AIGC’s underlying technology.

For example, data in the training set can be classified and managed according to the data subject, the data processing stage, the attributes of the data rights, and so on, and graded according to its value to the rights holder and the degree of harm the data subject would suffer if the data were tampered with or destroyed.

On the basis of this classification and grading, data protection standards and sharing mechanisms should be established to match each data type and security level.
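
As a rough illustration of grading mapped to protection measures, here is a sketch in Python; the levels and the measures attached to them are assumptions for illustration, not drawn from any specific regulation.

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    PUBLIC = 1     # freely shareable
    INTERNAL = 2   # shared within the organization only
    PERSONAL = 3   # personal information; mask before any reuse
    CORE = 4       # national core / important data; stays in the controlled domain

# Protection standards and sharing rules keyed to the security level.
PROTECTION = {
    Sensitivity.PUBLIC:   {"encrypt_at_rest": False, "allow_in_training": True},
    Sensitivity.INTERNAL: {"encrypt_at_rest": True,  "allow_in_training": True},
    Sensitivity.PERSONAL: {"encrypt_at_rest": True,  "allow_in_training": False},
    Sensitivity.CORE:     {"encrypt_at_rest": True,  "allow_in_training": False},
}

def may_enter_training_set(level: Sensitivity) -> bool:
    return PROTECTION[level]["allow_in_training"]

print(may_enter_training_set(Sensitivity.PERSONAL))  # False
```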

With a focus on enterprises, the application of “private computing” (privacy-preserving computation) technology in the AIGC field should also be accelerated.

This type of technology allows multiple data owners to share, interoperate with, compute on, and model data through a shared SDK or SDK permissions without exposing the data itself, ensuring that AIGC can provide services normally while no data is leaked to other participants.
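
To show the idea behind such technology, here is a minimal sketch of one privacy-preserving primitive, additive secret sharing, in which several data owners compute a joint sum without revealing their individual inputs. Real private-computing stacks (MPC frameworks, federated learning, trusted execution environments) are far more involved; this only illustrates the principle.

```python
import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def make_shares(secret: int, n_parties: int) -> list[int]:
    """Split a secret into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

# Three data owners each split a private value into three shares.
private_inputs = [120, 45, 300]
all_shares = [make_shares(v, 3) for v in private_inputs]

# Each computing party sums one share per owner; no party sees a raw input.
partial_sums = [sum(column) % PRIME for column in zip(*all_shares)]
print(sum(partial_sums) % PRIME)  # 465, the joint sum of the private inputs
```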

In addition, the importance of full-process compliance management has become increasingly prominent.

Enterprises should first make sure that the data resources they use comply with legal and regulatory requirements; second, that the entire operation of their algorithms and models is compliant; and their innovative research and development should also, as far as possible, meet the ethical expectations of the public.

At the same time, enterprises should formulate internal management standards and set up supervisory departments to oversee data across all AI application scenarios, ensuring that data sources are legal, processing is legal, and outputs are legal, and thereby ensuring their own compliance.
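
As a rough sketch of such full-process supervision, the snippet below audits a record’s life cycle against the three attestations just named; the stage names and fields are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class DataRecord:
    source_licensed: bool    # data source is lawful (consent or license on file)
    processing_logged: bool  # processing steps are logged and lawful
    output_reviewed: bool    # generated output passed a compliance review

def compliance_failures(record: DataRecord) -> list[str]:
    """Return the stages that fail; an empty list means fully compliant."""
    checks = {
        "source": record.source_licensed,
        "processing": record.processing_logged,
        "output": record.output_reviewed,
    }
    return [stage for stage, ok in checks.items() if not ok]

print(compliance_failures(DataRecord(True, True, False)))  # ['output']
```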

The key to applying AI lies in weighing deployment method against cost. It must be noted, however, that if security compliance and privacy protection are not handled well, they may become an even greater point of risk.

AI is a double-edged sword: used well, it makes an enterprise stronger; used improperly, with security, privacy, and compliance neglected, it will bring the enterprise greater losses.

Therefore, before applying AI, it is necessary to build a more stable “data base”. As the saying goes, only stability leads to long-term development.
