News on May 7th: Over the past week, OpenAI has successfully appeased Italian regulators, prompting them to lift the temporary ban on its chatbot ChatGPT. But the AI company's battle with European regulators is far from over, and more challenges are just beginning.
OpenAI’s popular but controversial chatbot ChatGPT hit a major legal hurdle in Italy earlier this year, with the Italian Data Protection Authority (GPDP) accusing OpenAI of violating EU data protection rules. In an attempt to resolve the issue, the company agreed to restrict use of the service in Italy.
On April 28, ChatGPT was relaunched in Italy, and OpenAI easily addressed the concerns of the Italian Data Protection Authority without making major changes to its service. This is a clear win for OpenAI.
While the Italian Data Protection Authority "welcomed" the changes ChatGPT made, the legal challenges facing OpenAI and other chatbot developers may have only just begun. Regulators in several countries are investigating how these AI tools collect data and generate information, citing concerns ranging from the unauthorized collection of training data to chatbots' tendency to produce incorrect information.
The European Union has begun enforcing the General Data Protection Regulation (GDPR), one of the world’s strongest privacy legal frameworks, with ramifications that could reach far beyond Europe. At the same time, EU lawmakers are working on a law specifically targeting artificial intelligence, which is also likely to usher in a new era of regulation of systems such as ChatGPT.
ChatGPT has become the target of much attention
ChatGPT is one of the most watched applications in generative artificial intelligence (AIGC), a category covering tools that generate text, images, video, and audio from user prompts. According to reports, ChatGPT reached 100 million monthly active users just two months after its launch in November 2022, making it one of the fastest-growing consumer applications in history.
With ChatGPT, people can translate text into different languages, write college papers, and even generate code. But some critics, including regulators, point to the unreliable information ChatGPT outputs, copyright issues and shortcomings in protecting data.
Italy was the first country to take action against ChatGPT. On March 31, the Italian Data Protection Authority accused OpenAI of violating the General Data Protection Regulation by allowing ChatGPT to provide inaccurate or misleading information, failing to notify users of its data collection practices, failing to comply with rules on personal data processing, and failing to adequately prevent children under 13 from using the service. The authority ordered OpenAI to immediately stop using personal information collected from Italian citizens in ChatGPT's training data.
At present, other countries have not taken similar major actions. But since March, at least three EU countries - Germany, France and Spain - have launched their own investigations into ChatGPT. Meanwhile, across the Atlantic, Canada is evaluating ChatGPT's privacy concerns under its Personal Information Protection and Electronic Documents Act (PIPEDA). The European Data Protection Board (EDPB) has even set up a dedicated working group to coordinate investigations. If these agencies require OpenAI to make changes, it could affect how the company serves users around the world.
Regulators have two major concerns
Regulators' concerns about ChatGPT fall into two main categories: where does the training data come from, and how does OpenAI deliver information to users?
ChatGPT is powered by OpenAI's GPT-3.5 and GPT-4 large language models (LLMs), which are trained on vast amounts of human-generated text. OpenAI remains cautious about exactly which training texts it uses, saying only that it draws on "a variety of authorized, publicly available data sources, which may include publicly available personal information."
This can cause huge problems under the General Data Protection Regulation. Enacted in 2018, the law covers all services that collect or process data on EU citizens, regardless of where the organization providing the service is based. The General Data Protection Regulation requires companies to obtain explicit consent from users before collecting personal data, have a legally valid reason to collect the data, and be transparent about how the data is used and stored.
European regulators argue that the secrecy around OpenAI's training data makes it impossible to confirm whether the personal information it used was collected with users' consent. Italy's Data Protection Authority contended that OpenAI had no "legal basis" to collect the information in the first place. So far, however, OpenAI and other companies have faced little scrutiny on this point.
Another issue is the General Data Protection Regulation's "right to be forgotten," which allows users to ask companies to correct their personal information or delete it entirely. OpenAI has preemptively updated its privacy policy to facilitate responding to such requests. But given how difficult it is to disentangle specific data once it has been fed into these large language models, whether such deletion is technically feasible remains debatable.
OpenAI also collects information directly from users. Like other internet platforms, it gathers a standard range of user data, such as names, contact details, and credit card information. More importantly, OpenAI records users' interactions with ChatGPT. As stated on its official website, OpenAI employees can view this data and use it to train the company's models. Considering the personal questions people put to ChatGPT, sometimes treating the bot as a therapist or doctor, this means the company is collecting all kinds of sensitive data.
This data may include information about minors. Although OpenAI's policy states that it "does not knowingly collect personal information from children under the age of 13," there is no strict age verification threshold. This is inconsistent with EU regulations, which prohibit the collection of data from minors under 13 and in some countries require parental consent to collect information from minors under 16. On the output side, the Italian Data Protection Authority claimed that ChatGPT's lack of an age filter allowed minors "to receive responses that are absolutely inappropriate in terms of their level of development and self-awareness."
OpenAI has wide latitude in using this data, which worries many regulators, and storing it poses security risks. Companies including Samsung and JPMorgan Chase have banned employees from using AIGC tools over concerns they would upload sensitive data. In fact, before Italy issued the ban, ChatGPT suffered a serious data leak, which resulted in the exposure of a large number of users' chat histories and email addresses.
In addition, ChatGPT’s tendency to provide false information may also cause problems. The General Data Protection Regulation stipulates that all personal data must be accurate, a point emphasized by the Italian Data Protection Authority in its announcement. This can cause problems for most AI text generators, as these tools are prone to "hallucinations", i.e. giving factually incorrect or irrelevant responses to queries. This has caused real-world problems elsewhere, such as when an Australian mayor threatened to sue OpenAI for defamation after ChatGPT falsely claimed he had been jailed for bribery.
Special regulatory rules are about to be introduced
ChatGPT is a particularly visible regulatory target because of its popularity and dominance of the artificial intelligence market. But there is no reason its competitors and partners, such as Google's Bard and Microsoft's OpenAI-powered Azure AI services, should escape scrutiny. Before ChatGPT, Italy had already banned the chatbot platform Replika over its collection of minors' information; that ban remains in place.
While the General Data Protection Regulation is a powerful set of laws, it was not created to address problems unique to artificial intelligence. Dedicated regulatory rules, however, may be forthcoming. In 2021, the European Union submitted the first draft of its Artificial Intelligence Act (AIA), which would operate alongside the General Data Protection Regulation. The AI Act would regulate AI tools according to their level of risk, from "minimal risk" (such as spam filters) to "high risk" (AI tools used in law enforcement or education) to "unacceptable risk" (such as social credit systems).
After the explosion of large language models like ChatGPT, lawmakers are now scrambling to add rules for "foundation models" and "general-purpose AI systems" (GPAI). Both terms refer to large-scale AI systems, including LLMs, which may be classified as "high-risk" services.
The provisions of the Artificial Intelligence Act go beyond data protection. A recently proposed amendment would force companies to disclose any copyrighted material used to develop AIGC tools. That could expose once-secret data sets and leave more companies vulnerable to infringement lawsuits, which have already impacted some services.
Specialized AI laws may be passed by the end of 2024
It may take some time for this bill to come into force. EU lawmakers reached a provisional agreement on the AI bill on April 27, but a committee still needs to vote on the draft on May 11, with the final proposal expected in mid-June. The European Council, the European Parliament, and the European Commission will then have to resolve any remaining disputes before the law takes effect. If all goes well, it could be passed in the second half of 2024.
For now, the spat between Italy and OpenAI gives us a first look at how regulators and AI companies might negotiate. Italy's Data Protection Authority said it would lift the ban if OpenAI met several proposed resolutions by April 30.
The resolutions include informing users of how ChatGPT stores and uses their data, obtaining users' explicit consent to use that data, facilitating the correction or deletion of false personal information generated by ChatGPT, and requiring Italian users to confirm that they are over 18 when registering an account. Although OpenAI did not fully meet all of these requirements, it satisfied Italian regulators and has restored access in Italy.
OpenAI still has to meet other conditions, including establishing a stricter age-verification threshold by September 30 that filters out minors under 13 and requires parental consent for older minors. If it fails, OpenAI could be banned again. Still, OpenAI appears to have set an example of what Europe considers acceptable behavior for an AI company, at least until new laws are introduced. (Xiao Xiao)