According to news on June 8, the "Artificial Intelligence Law" has been included in China's legislative plan, and a draft is scheduled to be submitted to the Standing Committee of the National People's Congress for review within the year. As early as April, the Cyberspace Administration of China had drafted the "Measures for the Management of Generative Artificial Intelligence Services (Draft for Comments)". Other countries have also introduced regulations on AIGC. Some people feel that the threat of artificial intelligence may exceed that of nuclear technology; others argue that even the most basic AI applications have not reached the penetration rate of the Internet, so regulation is premature and should lean more toward management than restriction. But without any management or restrictions, AI technology could well be used for illegal purposes.
China is not alone; other countries are also paying attention to legislation. Last month, German TV reported that the Internal Market Committee and the Civil Liberties Committee of the European Parliament overwhelmingly passed the negotiation mandate for the "AI Act" proposal on the 11th.
The European Commission proposed this bill two years ago, aiming to set global standards on AI. The final form of the law is still being negotiated with representatives of EU member states. A statement issued by the European Parliament said that once approved, it would become the world's first regulation on artificial intelligence. After the law takes effect, companies that violate it could be fined up to 40 million euros or 7% of their global annual turnover. In amendments to the proposal, lawmakers said they want to ensure that AI systems can be overseen by humans and are safe, transparent, non-discriminatory, and environmentally friendly.
This issue deserves attention.
Some people have already begun to use AI technology for illegal activities, such as extracting someone's voice from harassing phone-call recordings and then synthesizing that voice to deceive victims with a fake one. Another example is AI face-swapping: criminals analyze the various kinds of information people post publicly online, identify likely marks, use AI to screen target groups, and then swap faces during video calls to gain trust. This type of fraud has already occurred several times.
Regulation to the left, innovation to the right?
Supervision and innovation cannot stay apart for too long; the two should accompany each other, and whichever runs ahead needs to wait for the one behind so that they do not drift too far apart. In some cases, an AI system may not be able to make sound judgments about certain information, and without legislation to regulate it reasonably, such loopholes are likely to be exploited. Legislation can help AI technology develop in a reasonable and orderly environment. Some products have already emerged: the technology and the products exist, but without a corresponding regulatory system it is difficult to resolve problems when they occur. It is therefore still necessary to strengthen market supervision and discipline through legislation.
Moreover, although AI technology has not yet been applied on a truly large scale, it is already used in industries such as medical care, transportation, finance, education, and security. It has brought us a great deal of convenience and let us see more possibilities for AI technologies and products. To realize those possibilities, we need to understand the challenges and potential risks the technology brings, so that we can respond to and regulate them reasonably.
With a corresponding legislative system, relevant agencies can restrict and supervise AI technologies and products to reduce the possibility of abuse. Beyond that, rules give AI manufacturers a fair and standardized competitive environment. Standardized legislation and regulatory systems can prevent human error and misuse, building a trustworthy and safe online environment; in such a fair environment, manufacturers can focus on research and development, reduce unnecessary problems, and make products and technologies better serve people.
In fact, the resistance many people feel may stem from disputes over the method and content of legislation. For example, it must be determined which industries really need to be covered by the law and how illegality is to be defined. Safety must also be ensured without hampering the research, development, and promotion of the technology; the worry is that overly strict supervision will hold back manufacturers' development and deployment of technologies and products. What we ultimately have to face is how to strike a balance between promoting innovation, reaping the benefits of AI, responding to AI risks, and effectively protecting rights.
Therefore, the legislation may touch the interests of multiple parties, such as the manufacturers who develop the technology and the consumers who use it, so policymakers should take these different interests into account and adopt broad consultation and participation procedures to ensure the feasibility and fairness of the legislation. Manufacturers must abide by its provisions, building and iterating products within the requirements of the relevant laws, and they may also need to communicate the content and purpose of the legislation so that everyone knows their products and technologies comply with the relevant laws and regulations. The public who use these technologies and products may likewise need to understand the legislation, to avoid problems caused by ignorance of the relevant rules.
How to control?
AI legislation is not static; it requires long-term adjustment and improvement in step with the development and iteration of technologies and products, so that it keeps pace with their progress.
How to regulate is very important. One approach is to define risks hierarchically. In the artificial intelligence legal framework proposed by the European Commission in April 2021, AI application scenarios are divided into four risk levels: minimal, limited, high, and unacceptable. The higher the level of a scenario, the stricter the restrictions.
According to reports, high-risk scenarios are those that may have a serious impact on people's lives and livelihoods, such as transportation equipment, education and training, medical assistance, and credit scoring. The European Commission emphasized that all remote biometric identification systems are considered high risk, and law enforcement agencies are in principle prohibited from using this technology in public places, with only a few exceptions such as searching for missing children, preventing specific terrorist attacks, or screening for criminals and suspects; such uses must be authorized by a judicial department or an independent body. Systems that clearly threaten people's safety and lives or infringe on their rights are classified as unacceptable risk, including applications that manipulate human behavior and override users' free will, such as toys that use voice assistants to induce minors into dangerous behavior; such applications must be prohibited outright.
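To make the tiering concrete, here is a minimal sketch in Python of how such a risk taxonomy might be represented in a compliance-checking tool. The enum names, the example scenarios, and the obligation descriptions are illustrative assumptions based on the four levels described above, not text from the EU proposal itself.

```python
from enum import IntEnum


class RiskLevel(IntEnum):
    """Four tiers described in the EU proposal: higher value = stricter rules."""
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4


# Illustrative mapping of scenarios to tiers, loosely based on the examples above.
# A real classification would follow the legal text, not a hard-coded table.
SCENARIO_RISK = {
    "spam_filter": RiskLevel.MINIMAL,
    "customer_service_chatbot": RiskLevel.LIMITED,
    "credit_scoring": RiskLevel.HIGH,
    "remote_biometric_identification": RiskLevel.HIGH,
    "behavior_manipulating_toy": RiskLevel.UNACCEPTABLE,
}


def obligations(scenario: str) -> str:
    """Return a rough description of the obligations tied to a scenario's tier."""
    level = SCENARIO_RISK.get(scenario, RiskLevel.MINIMAL)
    if level is RiskLevel.UNACCEPTABLE:
        return "prohibited"
    if level is RiskLevel.HIGH:
        return "conformity assessment, human oversight, logging"
    if level is RiskLevel.LIMITED:
        return "transparency obligations (disclose that users are interacting with AI)"
    return "no specific obligations beyond existing law"


if __name__ == "__main__":
    for name, level in SCENARIO_RISK.items():
        print(f"{name}: {level.name} -> {obligations(name)}")
```

The point of such a structure is simply that the obligation attaches to the tier, not to the individual application, which is what makes a graded framework easier to extend as new use cases appear.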
Even though the EU tried to cover so many aspects, the bill still drew controversy. For example, some felt that the complex review requirements and procedures would increase companies' administrative costs, hitting small and medium-sized digital start-ups especially hard, curbing innovation and the adoption of the technology, and slowing the EU's digital transformation. The bill reflects the EU's inherent tension between promoting innovation and protecting rights.
Last year, the "Shanghai Regulations on Promoting the Development of the Artificial Intelligence Industry" officially came into effect. As China's first provincial-level local regulation in the field of artificial intelligence, it likewise adopts graded risk levels: high-risk AI products and services are subject to list-based management and to compliance review under the principles of necessity, legitimacy, and controllability, while medium- and low-risk products and services follow a governance model of ex-ante disclosure and ex-post control, to encourage pilot trials.
In fact, whether regulation in various countries takes the form of legislation or other means, such definitions are an unavoidable logical requirement for drafting legal provisions. The main direction is the same: legislators want artificial intelligence and its industries to keep improving and to serve us better in fields such as medical care, transportation, finance, education, and security, rather than being exploited by criminals.
Lu Changshun (Cairns), certificate number: A0150619070003. [The above content represents personal opinions only and does not constitute a basis for buying or selling. The stock market carries risk; invest with caution.]