AI Contract Theory ⑤: Generative AI Races Ahead Under Full Sail, How Can Rules "Steer" It?
21st Century Business Herald reporters Cai Shuyue and Guo Meiting, interns Tan Yanwen and Mai Zihao, reporting from Shanghai and Guangzhou
Editor’s note:
In the first few months of 2023, major companies have rushed to develop large models, explored the commercialization of GPT, and bet heavily on computing infrastructure... Much as the Age of Discovery that opened in the 15th century brought explosive growth in human exchange, trade, and wealth, a new revolution is now sweeping the world. At the same time, change also challenges the existing order: data leakage, personal privacy risks, copyright infringement, disinformation... Beyond these, the post-humanist crisis brought by AI is already on the table. With what attitude should people face the confusion created by the blending of human and machine?
At this moment, seeking consensus on AI governance and reshaping a new order have become issues facing every country. The Nancai Compliance Technology Research Institute will launch a series of reports on AI contract theory, analyzing Chinese and foreign regulatory models, the allocation of subject responsibility, corpus data compliance, AI ethics, industry development, and other dimensions, with a view to offering ideas for AI governance and safeguarding responsible innovation.
The rise of generative AI technology has set off a "battle of a hundred models," and an initial map of the technology's industry chain has taken shape.
(AIGC industrial chain map. Drawing/Nancai Compliance Technology Research Institute, 21st Century Business Herald reporter)
Before generative AI becomes a common technology, every participant in the production chain must consider how to make it a "controllable" tool.
In late March this year, an open letter titled "Pause Giant AI Experiments," signed by Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and more than a thousand entrepreneurs and scholars, was released.
The letter noted that artificial intelligence laboratories around the world have been locked in an out-of-control race in recent months to develop and deploy ever more powerful digital minds, yet no one, not even the technology's developers, "can truly understand, predict or fully control this technology."
The Yuanshi Culture Laboratory of the School of Journalism and Communication at Tsinghua University also pointed out in its "AIGC Development Research" report that AIGC's deep involvement in the global industrial chain could broadly replace programmers, graphic designers, customer service staff, and similar roles; if artificial intelligence puts a ceiling on labor costs, the industrial chains of the third world will suffer a huge impact.
This means that AIGC, backed by massive computing power, may become a blade that severs the global industrial chains of multinational companies, and a dagger that punctures the illusion of the "global village."
Therefore, as AIGC develops rapidly, placing the generative AI technology behind it inside a regulatory cage and clarifying the responsibilities of all parties in the industry chain has become an urgent task for countries around the world.
Regulatory policy review: drawing a clear bottom line for industry R&D
At present, China is already on the road to regulating generative AI technology. In April this year, the Cyberspace Administration of China issued the Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comments) (hereinafter, the "Measures"), China's first regulatory document targeting generative AI technology.
Broadly speaking, the "Measures" build on the existing deep-synthesis regulatory framework, refining the Provisions on the Administration of Deep Synthesis of Internet Information Services, the Provisions on the Administration of Algorithmic Recommendation of Internet Information Services, the Provisions on the Administration of Network Audio and Video Information Services, and the Provisions on the Ecological Governance of Network Information Content. In addition to general personal information protection obligations, they also require artificial intelligence service providers to perform further obligations such as security assessment, algorithm filing, and content labeling.
Regarding the promulgation of these policy documents, Xiao Sa, senior partner at Beijing Dacheng Law Firm, told the 21st Century Business Herald that relevant companies should pay attention to aligning with the regulatory requirements already in place for algorithmic recommendation services, deep synthesis services, and other artificial intelligence services, strive for internal compliance, combine technological and legal capabilities to propose creative compliance solutions, and win more institutional space for the industry's development.
Most of the industry supports the successively introduced "Measures" and other rules regulating the development of artificial intelligence technology. In an interview with the 21st Century Business Herald, Wei Chaoqun, senior product director of Liangfengtai, said that with generative AI technology just getting started, the implementation of relevant administrative measures is crucial to the healthy development of the entire industry and will play a significant role in promoting it.
"On the one hand, the promulgation of the 'Measures' means the entire industry has clear operating specifications, which can guide a complete set of R&D processes for enterprises. On the other hand, it also sets an R&D bottom line for the entire industry, spelling out what can and cannot be done," Wei Chaoqun pointed out.
For example, Article 17 of the "Measures" requires artificial intelligence service providers to "provide necessary information that can affect users' trust and choices, including descriptions of the source, scale, type and quality of pre-training and optimized-training data, manual labeling rules, the scale and types of manually labeled data, and foundational algorithms and technical systems," so as to govern an AI technology built on massive data and fast-changing rules.
However, some people believe that the current domestic laws, regulations and policy documents related to artificial intelligence still need to be further improved.
Xiao Sa mentioned in the interview that although the "Measures" respond to the risks and impacts of generative artificial intelligence, a close reading shows that many of their provisions, on responsible subjects, scope of application, compliance obligations, and other aspects, remain relatively broad.
For example, Article 5 of the "Measures" stipulates that providers of services using generative artificial intelligence products (i.e., the responsible subjects) should bear the responsibilities of content producers.
The original text states that organizations and individuals using generative artificial intelligence products to provide services such as chat and text, image, or audio generation, including those that support others in generating text, images, or audio by providing programmable interfaces, bear the responsibility of producers for the content those products generate. However, the "Measures" have not yet elaborated the specific legal liabilities that service providers should bear.
Development difficulties: how to balance regulation and technology
How to improve the artificial intelligence supervision system while allowing technological innovation to develop, and how to strengthen its connection and coordination with data compliance and algorithm governance, are issues in urgent need of solutions.
Among them, clarifying the responsible entity at each link of the AIGC industry chain and creating "responsible" AI technology is one of the key points supervision must focus on.
Beyond the allocation of subject responsibility in Article 5 of the "Measures," the recently revised EU Artificial Intelligence Act also addresses the distribution of responsibilities along the AI value chain: any distributor, importer, deployer, or other third party may be regarded as a provider of a high-risk artificial intelligence system and must perform the corresponding obligations, for example indicating its name and contact information on the high-risk AI system, providing data specifications or dataset-related information, and keeping logs.
Pei Yi, assistant professor at Beijing Institute of Technology Law School, also pointed out to the 21st Century Business Herald that, as key entities providing AI services, enterprises need on the one hand to ensure transparent data collection and processing: clearly informing data subjects of the purposes of collection and processing, obtaining the necessary consent or authorization, and implementing appropriate data security and privacy protection measures to ensure the confidentiality and integrity of data. On the other hand, data sharing must also be compliant: when engaging in multi-party data sharing or data transactions, enterprises should ensure compliant data-use rights and authorization mechanisms and comply with applicable data protection laws and regulations.
Reporters from the 21st Century Business Herald observed that some artificial intelligence companies are already clarifying their obligations as responsible entities.
For example, OpenAI has opened a "Security Portal" for users. On this page, users can browse the company's compliance documents, including information on data backup, deletion, and encryption of data at rest under "Data Security," as well as code analysis, credential management, and more under "App Security."
(OpenAI's "Security Portal" page. Source/OpenAI official website)
The privacy policy published on the official website of the AI painting tool Midjourney likewise gives specific instructions on the sharing, retention, and transmission scenarios and uses of user data. It also lists in detail the 11 categories of personal information the application needs to collect in order to provide services, such as identifiers, commercial information, and biometric information.
It is worth mentioning that the person in charge of legal affairs at an emerging technology company in Shanghai told the 21st Century Business Herald that the company is currently drafting terms of service for its internal AI-related business, with some of its responsibility-allocation rules referring to OpenAI's approach.
On the other hand, as providers of generative AI services, enterprises also need to pay attention to internal compliance. Xiao Sa pointed out that the business of AIGC-related companies relies on massive data and complex algorithms, and their application scenarios are complex and diverse; such companies are prone to all kinds of risks, and relying entirely on external supervision is very difficult. Relevant companies must therefore strengthen AIGC internal compliance management.
On the one hand, regulatory agencies should take the opportunity to comprehensively implement corporate compliance reform, actively explore extending compliance reform to companies in the network and digital fields, implement third-party supervision and evaluation mechanisms, establish and improve institutional mechanisms for compliance management, and effectively prevent Internet crime. On the other hand, they should actively explore regulatory paths that promote ex-ante compliance building through ex-post compliance rectification, and encourage network regulators and Internet companies to jointly study and formulate data compliance guidelines to ensure the healthy development of the digital economy.
"The most important task of the regulatory authorities is to draw the bottom line. Among these, technology ethics and national security are two bottom lines that cannot be crossed. Within the bottom line, the industry can be given as much room for development as possible, so that technology does not become timid or constrained in its development for the sake of compliance," Pei Yi told the 21st Century Business Herald.
Coordinator: Wang Jun
Reporters: Guo Meiting, Cai Shuyue, Tan Yanwen, Mai Zihao
Drawing: Cai Shuyue