


ChatGPT Past and Future: The Evolution of Artificial Intelligence and Data Privacy in Digital Communications
Translator | Liu Tao
Reviewer | Chonglou
The development of artificial intelligence over the past few years has brought us opportunities as well as frustration. Some major breakthroughs have revolutionized the internet, many of them for the better.
However, before most people had time to prepare, OpenAI's ChatGPT had already swept the world. Its ability to converse naturally with humans and deliver insightful answers in seconds is unprecedented.
As public attention turned to what ChatGPT could do, forward-looking leaders everywhere realized that digital communication technology was about to undergo revolutionary change.
But innovation often comes with controversy, and in this case the supernova chatbot has had to contend with legitimate data-privacy concerns.
ChatGPT's development required extensive data collection, and doubts keep mounting: thought leaders and government privacy watchdogs have raised concerns about OpenAI's data practices because the company has not clearly explained how the chatbot works or how it processes and stores data.
The issue has not gone unnoticed by the public. According to a 2023 survey, 67% of global consumers believe they are losing control of their data to technology companies.
The same survey also showed that 72.6% of iOS apps track private user data, and free apps are 4 times more likely to track user data than paid apps.
If you are concerned about this, remember that most users of ChatGPT still use the free version.
In view of this, data-privacy companies need to seize the attention ChatGPT has generated, offer products that strengthen data privacy, and foster a culture of greater data transparency and accountability. This would make people aware of their data rights and how their data is used, while keeping these groundbreaking AI technologies from relying on the unethical monetization tactics common among big tech companies.
1. ChatGPT may already know you
ChatGPT is a large language model (LLM), which means it requires vast amounts of data to work properly, giving it the ability to predict and process information coherently.
In other words, if you have ever published writing on the internet, it is very likely that ChatGPT has scanned and processed that information.
Large language models like ChatGPT rely heavily on huge volumes of online text, such as e-books, articles, and social media posts, to train their algorithms. This is what lets them generate responses that can be nearly indistinguishable from human-written text.
In short, anything published on the web can be used to train ChatGPT or the competing LLMs that will inevitably follow its success.
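To make "predicting information from collected text" concrete, here is a deliberately tiny sketch in Python; it assumes nothing about OpenAI's actual architecture. It is a bigram counter that learns which word tends to follow another in its training text. Real LLMs use neural networks with billions of parameters, but the basic idea of learning continuations from harvested text is the same.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count word -> next-word frequencies from a toy corpus."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the continuation seen most often in training, or None."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = "data privacy matters and data privacy requires transparency"
model = train_bigram(corpus)
print(predict_next(model, "data"))  # "privacy" -- the only continuation seen
```

The point of the toy: whatever text the model was fed, including yours, shapes what it predicts.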
Concerns over data privacy are unsurprising: OpenAI recently admitted that a data leak was caused by a vulnerability in an open-source library, and a cybersecurity firm found that a recently added component was affected by an actively exploited flaw.
OpenAI's investigation found that the leaked data included the titles of active users' chat histories and the first message of newly created conversations.
The vulnerability also exposed the payment information of 1.2% of ChatGPT Plus users, including their first and last name, email address, payment address, payment card expiration date, and the last four digits of their payment card number.
To call this a data-protection disaster is an understatement. ChatGPT probably holds more information than any other product on the planet, and sensitive information was already leaking just months after its release.
2. What do ChatGPT users need to do?
The silver lining is this: public attention to the real privacy risks ChatGPT poses is an excellent opportunity for individuals to start understanding data protection and to dig deeper into the details. This is especially important given how rapidly ChatGPT's user base is expanding.
In addition to taking precautions and remaining vigilant, users should exercise their data subject rights (DSRs), which include the rights to access, correct, and delete their personal data.
In the digital age, every user must become an advocate for stronger data privacy regulations so that they can better control their personal information and ensure that it is used with the utmost responsibility.
ChatGPT appears to have responded: new sessions now warn people not to enter sensitive data or company secrets, since such information is not secure once it is inside the system.
As Samsung discovered, that is easier said than done, and more people need to pay attention and exercise caution about what they type into ChatGPT prompts.
Using a new ChatGPT plugin to shop may seem harmless, but do you really want an insecure digital record of everything you eat sitting on the internet?
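One practical habit worth building is scrubbing obvious sensitive tokens before a prompt ever leaves your machine. The sketch below is hypothetical and entirely my own, not an OpenAI feature; the two regex patterns only illustrate the idea, and real PII detection needs far broader coverage.

```python
import re

# Illustrative patterns only -- real PII detection needs far more coverage.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace obviously sensitive tokens before a prompt is sent anywhere."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact("Contact me at jane@example.com, card 4111 1111 1111 1111"))
```

Even a crude filter like this turns "don't paste secrets into chatbots" from advice into a default.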
Until these privacy concerns are resolved, we as a public need to slow down and not get too caught up in the frenzy over new AI technologies.
3. What do companies need to do?
It should go without saying: companies must take responsibility for improper data use and protection practices, or users will take their business elsewhere.
Companies large and small should therefore adopt transparent, easy-to-understand policies so that individuals clearly understand how their data is used, where it goes, and which third-party entities may have access to it.
In addition, business leaders should give users clear ways to exercise their data subject rights (DSRs) and train employees to follow ethical guidelines for data processing and storage.
We are still far from that goal: most default permissions sit in a regulatory gray area, since whether users must opt in or opt out depends on where the user and the company are located.
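The opt-in versus opt-out distinction can be stated precisely. This hypothetical Python sketch (the names are my own) shows how the two regimes differ only in what happens when a user never touches the setting:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ConsentPolicy:
    # "opt_in":  tracking is OFF unless the user explicitly enables it.
    # "opt_out": tracking is ON unless the user explicitly disables it.
    regime: str

def may_track(policy: ConsentPolicy, user_choice: Optional[bool]) -> bool:
    """user_choice is None when the user never touched the setting."""
    if user_choice is not None:
        return user_choice              # an explicit choice always wins
    return policy.regime == "opt_out"   # the default depends on the regime

print(may_track(ConsentPolicy("opt_in"), None))   # silence means no tracking
print(may_track(ConsentPolicy("opt_out"), None))  # silence means tracking
```

The gray area the article describes is exactly this: which default applies, and whether it is disclosed, currently varies by jurisdiction.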
Transparency, clarity and accountability should be at the forefront of every organization’s considerations regarding data privacy.
The rise of ChatGPT has ushered in a new era of data privacy vigilance, in which organizations and individuals need to be equally proactive in ensuring data is handled appropriately to avoid breaches and misuse.
ChatGPT is collecting more data, faster, than almost any product in history, and if its security fails, the impact on personal data privacy will be unparalleled.
If companies want to stay ahead of potential issues, they must start protecting data more strategically and building consumer trust in the internet. Otherwise, a better shared digital future is in grave danger.
Original link: https://hackernoon.com/the-evolution-of-ai-and-data-privacy-how-chatgpt-is-shaping-the-future-of-digital-communication
Translator’s introduction:
Liu Tao, 51CTO community editor, is in charge of online system monitoring and control at a large state-owned enterprise.