The EU Artificial Intelligence Act (AIA) currently under discussion addresses the regulation of open source artificial intelligence. However, imposing strict restrictions on the use, sharing, and distribution of open source general-purpose AI (GPAI) would be a step backwards.
The only way humanity can advance technology at such a rapid pace is through open source culture. Until now, releasing source code to increase openness and verifiability has been standard practice among AI researchers; restricting this trend could reverse the cultural progress the scientific community has made.
Two objectives of the Act's proposed regulatory framework stand out in particular: ensuring that AI applications placed on the market are legal, safe and trustworthy, and building a single market for AI that resists fragmentation.
The GPAI provisions in the Act appear to contradict these aims. Open source GPAI encourages innovation and the exchange of information without fear of costly legal consequences. Instead of building a single market that resists fragmentation, heavy-handed regulatory restrictions could simultaneously hinder open source development and further concentrate AI development in the hands of the major tech giants.
This is more likely to reduce transparency in the market, making it even harder to determine whether an AI application is "legal, safe and trustworthy". None of this is good for GPAI. Instead, there is a growing and troubling fear that the disparity this imposition creates would hand even more power to large corporations.
It is also important to recognize that some may interpret this opposition as an attempt by businesses to get around the rules. There is no doubt that regulations such as the Artificial Intelligence Act are necessary to curb risky misconduct; without them, AI could fall into the wrong hands.
This is a legitimate concern, and of course rules are necessary. But rather than applying the law to all models uniformly, it would be better to regulate them case by case. Instead of restricting open source at the source and limiting innovation, each model should be assessed for the harm it could cause and governed accordingly.
Implementing the Act is a nuanced, complex and multi-dimensional task. Even those who broadly agree on its goals differ on the details. The public openness of GPAI, however, is a major sticking point. This open, collaborative approach is the main driver of progress, transparency and technological development, serving the collective and individual benefit of society rather than commercial gain.
Open source licenses like the MIT License are intended for sharing information and ideas, not for selling a polished, proven product, and open source releases should not be regulated as if they were commercial products. What is needed is the right regulatory mix: one that increases reliability and openness about how these AI models are developed, the types of data used to train them, and any known limitations, without sacrificing the free exchange of information.
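To make this concrete, here is a minimal, hypothetical sketch in Python of the kind of machine-readable disclosure such a regulatory mix might call for. The ModelDisclosure class and all field values are illustrative assumptions, not a format prescribed by the AI Act or any existing standard.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelDisclosure:
    """Hypothetical transparency record published alongside an open source model."""
    name: str
    license: str
    training_data_sources: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)
    intended_use: str = ""

# All values below are illustrative, not drawn from any real model release.
disclosure = ModelDisclosure(
    name="example-gpai-model",
    license="MIT",
    training_data_sources=["publicly available web text", "open research datasets"],
    known_limitations=["may reproduce biases present in the training data"],
    intended_use="research and non-commercial experimentation",
)

# Serialize the record so it can ship with the model weights.
print(json.dumps(asdict(disclosure), indent=2))
```

The point of such a record is that it documents provenance and limitations without restricting who may use or redistribute the code, which is exactly the balance the author argues for.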
The Artificial Intelligence Act should be structured to encourage those who deploy open source software to exercise greater caution, conducting their own research and testing before making a system available to a large audience. This would help catch bad actors who try to commercially exploit a creator's work without further investigation or quality control.
In fact, the responsibility and obligation to thoroughly vet a system before delivering it to consumers should fall on the final developer, since these are the people who ultimately profit financially from open source initiatives. The framework in its current state, however, does not explicitly pursue this. A core principle of open source is the free exchange of information and expertise for personal and non-commercial purposes.
Expanding the legal liability of open source GPAI developers and researchers will only stifle technological progress and innovation. It will deter developers from exchanging knowledge and ideas, making it harder for new businesses and aspiring individuals to access cutting-edge technology, build on the knowledge of others, or draw motivation from what others have achieved.